# The UMAP Journal

Vol. 16, No. 3

Publisher

COMAP, Inc.

Executive Publisher

Solomon A. Garfunkel

Editor

Paul J. Campbell
Campus Box 194
Beloit College
700 College St.
Beloit, WI 53511-5595
campbell@beloit.edu

On Jargon Editor

Yves Nievergelt
Department of Mathematics
Eastern Washington University
Cheney, WA 99004
ynievergelt@ewu.edu

Reviews Editor

James M. Cargal
Mathematics Dept.
Troy State University Montgomery
P.O. Drawer 4419
Montgomery, AL 36103
JMCargal@aol.com

Development Director

Laurie W. Aragón

Creative Director

Roger Slade

Production Manager

George W. Ward

Project Manager

Roland Cheyney

Copy Editors

Seth A. Maislin
Emily T. Sacca

Distribution Coordinator

Bill Whalen

Executive Assistant

Annette Moccia

Graphic Designer

Julie Olsen

# Associate Editors

Don Adolphson, Brigham Young University
Ron Barnes, University of Houston-Downtown
Arthur Benjamin, Harvey Mudd College
James M. Cargal, Troy State University in Montgomery
Murray K. Clayton, University of Wisconsin—Madison
Courtney S. Coleman, Harvey Mudd College
Linda L. Deneen, University of Minnesota, Duluth
Leah Edelstein-Keshet, University of British Columbia
James P. Fink, Gettysburg College
Solomon A. Garfunkel, COMAP, Inc.
William B. Gearhart, California State University, Fullerton
William C. Giauque, Brigham Young University
Richard Haberman, Southern Methodist University
Charles E. Lienert, Metropolitan State College
Peter A. Lindstrom, North Lake College
Walter Meyer, Adelphi University
Gary Musser, Oregon State University
Yves Nievergelt, Eastern Washington University
John S. Robertson, Georgia College
Garry H. Rodrigue, Lawrence Livermore Laboratory
Ned W. Schillow, Lehigh Carbon Community College
Philip D. Straffin, Beloit College
J.T. Sutcliffe, St. Mark's School, Dallas
Donna M. Szott, Comm. College of Allegheny County
Gerald D. Taylor, Colorado State University
Maynard Thompson, Indiana University
Ken Travers, University of Illinois and NSF
Robert E.D. ("Gene") Woolsey, Colorado School of Mines

# Vol. 16, No. 3, 1995

# Table of Contents

# Publisher's Editorial

Once More into the Breach Solomon A. Garfunkel 185

# Modeling Forum

Results of the 1995 Mathematical Contest in Modeling Frank Giordano 189

A Specialized Root-Finding Method for Rapidly Determining the Intersections of a Plane and a Helix Matthew Evans, Andrew Flint, and Noah Kubow 209

The Single Helix R. Robert Hentzel and Scott Williams 225

Planes and Helices Samar Lotia, Eric Musser, and Simeon Simeonov 237

Judge's Commentary: The Outstanding Helix Intersections Papers Daniel Zwillinger 251

Practitioner's Commentary: The Outstanding Helix Intersections Papers Pierre J. Malraison 255

Author's Commentary: The Outstanding Helix Intersections Papers Yves Nievergelt 257

Paying Professors What They're Worth Jay Rosenberger, Andrew M. Ross, and Dan Snyder 259

The World's Most Complicated Payroll Frank Thorne, W.
Garrett Mitchener, and Marci Gambrell 275

Long-term and Transient Pay Scale for College Faculty Christena Byerley, Christina Phillips, and Cliff Sodergren 287

How to Keep Your Job as Provost Liam Forbes, Marcus Martin, and Michael Schmahl 297

Judge's Commentary: The Outstanding Faculty Salaries Papers Donald E. Miller 313

# Publisher's Editorial

# Once More into the Breach

Solomon A. Garfunkel
Executive Director
COMAP, Inc.
57 Bedford St., Suite 210
Lexington, MA 02173
S.Garfunkel@mail.comap.com

This seems an extremely auspicious time to present an update on COMAP activities. The MCM issue of *The UMAP Journal* is always a cause for celebration at COMAP. It seems hard to believe that this coming February will mark the twelfth running of the contest. It also marks the publication of *MCM: The First Ten Years*, a review of the first ten years of the Mathematical Contest in Modeling. We hope that all of you who read the *Journal* and have followed this competition will receive a copy and appreciate, as much as I do, the work of Paul Campbell in putting this volume together.

What else is going on at COMAP? Well, as many of you are aware, we are in the middle of a major secondary-school curriculum development project: ARISE. This project is producing an NCTM-Standards-based comprehensive grades 9–11 mathematics curriculum. We have already produced and are field-testing materials for grades 9 and 10 and hope to have all of the materials available for commercial publication in 1998. Needless to say, this project has consumed a great deal of our energies (and not an insubstantial amount of NSF funds).

But I want to focus on our major new undergraduate project, which is just coming to fruition. For the past four years, COMAP has been working on a new entry-level course for mathematics and science majors, which we have entitled *Principles and Practice of Mathematics*.
This course, funded by the Division of Undergraduate Education of NSF, has resulted in a one-year mathematics text to be published by Springer-Verlag in early 1996. Our express purpose is to show students both the breadth and the depth of mathematics. Our goals are to attract and retain more students in serious mathematics courses of study and to provide students with a much broader early view of what our subject is about.

Perhaps the best way to describe this course is by listing the table of contents. This will also serve to recognize the contributions of a superb author team.

1. Change (Frank Giordano and Chris Arney, U.S. Military Academy, and Sheldon Gordon, Suffolk Community College, SUNY)
2. Position (Robert Bumcrot, Hofstra University)
3. Linear Algebra (Alan Tucker, SUNY at Stony Brook)
4. Combinatorics (Rochelle Wilson Meyer, Nassau Community College)
5. Graph Theory and Algorithms (Paul J. Campbell, Beloit College)
6. Analysis of Algorithms (Rochelle Wilson Meyer, Nassau Community College)
7. Logic and the Design of Intelligent Machines (Rochelle Wilson Meyer, Nassau Community College)
8. Chance (Michael Olinick, Middlebury College)
9. Modern Algebra (Joseph Gallian, University of Minnesota-Duluth)

The authors were encouraged and guided by editor Walter Meyer (Adelphi University), and Zaven Karian (Denison University) served as technology advisor. Moreover, we were fortunate from the beginning of this project to have the advice of a powerful and prestigious advisory committee, whose members included:

- Saul Gass, University of Maryland
- Andrew Gleason, Harvard University
- Zaven Karian, Denison University
- Joseph Malkevitch, York College, CUNY
- David Moore, Purdue University
- Henry Pollak, Bellcore (retired)
- Paul Sally, University of Chicago
- J. Laurie Snell, Dartmouth College
- Marcia Sward, Mathematical Association of America
- Alan Tucker, SUNY at Stony Brook
- Carol Wood, Wesleyan University
- Gail Young, Columbia Teachers College

The text has been field-tested in draft form at some 40 colleges across the country, and we are grateful to the brave faculty at these institutions for their efforts in making this a teachable and user-friendly book.

The publication of any new textbook always feels like the completion of a great deal of work, and *Principles and Practice* is no exception. But we are more than aware that in this case publication of the text is the beginning of a great deal of additional work. This is, after all, a revolutionary text. There are no courses with this title in college catalogs. Adoption of this text (in its entirety) implies a reconfiguration of the undergraduate math major. This sort of change doesn't happen overnight. However, we are convinced that on reading this work, faculty will see many opportunities for enhancing the first undergraduate mathematics experience of majors. We hope that the charm of the ideas and the modernity of the applications will foster a will to experiment. We believe that through this experimentation we will take a new look at our curriculum and that real change can occur. What better way to describe what COMAP has always stood for?

# About the Author

Sol Garfunkel received his Ph.D. in mathematical logic from the University of Wisconsin in 1967. He was at Cornell University and the University of Connecticut at Storrs for eleven years and has dedicated the last 20 years to research and development efforts in mathematics education. He has been the Executive Director of COMAP since its inception in 1980.
He has directed a wide variety of projects, including UMAP (Undergraduate Mathematics and Its Applications Project), which led to the founding of this *Journal*, and HiMAP (High School Mathematics and Its Applications Project), both funded by the NSF. For Annenberg/CPB, he has directed three telecourse projects: *For All Practical Purposes*, *Against All Odds: Inside Statistics*, and *In Simplest Terms: College Algebra*. He is currently co-director of the Applications Reform in Secondary Education (ARISE) project, a comprehensive curriculum development project for secondary-school mathematics.

# Modeling Forum

# Results of the 1995 Mathematical Contest in Modeling

Frank Giordano, MCM Director
Department of Mathematical Sciences
United States Military Academy
West Point, NY 10996-1786

# Introduction

A total of 320 teams of undergraduates, from 194 schools, spent the third weekend in February working on applied mathematics problems. They were part of the eleventh Mathematical Contest in Modeling (MCM). On Friday morning, the MCM faculty advisor opened a packet and presented each team of three students with a choice of one of two problems. After a weekend of hard work, typed solution papers were mailed to COMAP on Monday. Seven of the top papers appear in this issue of *The UMAP Journal*.

Results and winning papers from the first ten contests were published in special issues of *Mathematical Modeling* (1985–1987) and *The UMAP Journal* (1985–1994). The 1994 volume of *Tools for Teaching*, commemorating the tenth anniversary of the contest, contains all of the 20 problems used in the first ten years of the contest and a winning paper for each. Limited quantities of that volume and of the special MCM issues of the *Journal* for the last few years are available from COMAP.
# Problem A: The Single Helix

The problem consists of assisting a small biotechnological company in designing, proving, programming, and testing a mathematical algorithm to locate "in real time" all the intersections of a helix and a plane in general positions in space (see Figure 1).

Similar programs for Computer Aided Geometric Design (CAGD) enable engineers to view a plane section of the object that they design, for example, an aircraft jet engine, an automobile suspension, or a medical device. Moreover, engineers may also display on the plane section such quantities as air flow, stress, or temperature, coded by colors or level curves. Furthermore,

![](images/011a75fa9b83199a5a5a3bada1a18b6616b067bf54ef7078811cecf80c52a9f0.jpg)
Figure 1. Some intersections of a helix with a plane.

engineers may rapidly sweep such plane sections through the entire object to gain a three-dimensional visualization of the object and its reactions to motion, forces, or heat. To achieve such results, the computer programs must locate all the intersections of the viewed plane and every part of the designed object with sufficient speed and accuracy. General "equation solvers" may in principle compute such intersections; but for specific problems, special methods may prove faster and more accurate than general methods. In particular, general software for Computer Aided Geometric Design may prove too slow to complete computations in real time, or too large to fit in the finished medical devices being developed by the company. The considerations just explained have led the company to the following problem.

Problem: Design, justify, program, and test a method to compute all the intersections of a plane and a helix in general positions (at any locations and with any orientations) in space.

A segment of the helix may represent, for example, a helicoidal suspension spring or a piece of tubing in a chemical or medical apparatus.
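To see why special methods can beat general solvers here, note that the problem reduces to a one-variable equation. Writing the helix as p(t) = C + r cos(t) u + r sin(t) v + (h t / 2π) w, with (u, v, w) an orthonormal frame, r the radius, and h the pitch, and the plane as n · x = d, the intersections are the roots of f(t) = a cos t + b sin t + c t + e, where a = r n·u, b = r n·v, c = h n·w / 2π, and e = n·C − d. The sketch below is illustrative only, not taken from the winning papers; the function name and the grid-plus-bisection strategy are assumptions of this example.

```python
import math

def helix_plane_intersections(center, u, v, w, radius, pitch, n, d,
                              t_min, t_max, samples=2000, tol=1e-10):
    """Parameters t in [t_min, t_max] where the helix
       p(t) = center + radius*cos(t)*u + radius*sin(t)*v + (pitch*t/(2*pi))*w
    meets the plane n . x = d.  Reduces to the scalar equation
       f(t) = a*cos(t) + b*sin(t) + c*t + e = 0,
    then brackets sign changes on a uniform grid and bisects each bracket."""
    dot = lambda p, q: sum(x * y for x, y in zip(p, q))
    a = radius * dot(n, u)
    b = radius * dot(n, v)
    c = pitch * dot(n, w) / (2 * math.pi)
    e = dot(n, center) - d
    f = lambda t: a * math.cos(t) + b * math.sin(t) + c * t + e

    roots = []
    ts = [t_min + i * (t_max - t_min) / samples for i in range(samples + 1)]
    for lo, hi in zip(ts, ts[1:]):
        flo, fhi = f(lo), f(hi)
        if flo == 0.0:                  # exact root at a grid point
            roots.append(lo)
            continue
        if flo * fhi < 0:               # sign change: bisect to tolerance
            while hi - lo > tol:
                mid = 0.5 * (lo + hi)
                if f(lo) * f(mid) <= 0:
                    hi = mid
                else:
                    lo = mid
            roots.append(0.5 * (lo + hi))
    return roots
```

Since |a cos t + b sin t| ≤ √(a² + b²), whenever c ≠ 0 all roots lie where |c t + e| ≤ √(a² + b²), so the search interval can be bounded a priori; the grid spacing must simply be fine enough to separate adjacent roots.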
The need for some theoretical justification of the proposed algorithm arises from the necessity of verifying the solution from several points of view. This can be done through mathematical proofs of parts of the algorithm, and through tests of the final program with known examples. Such documentation and tests will be required by government agencies for medical use.

# Problem B: Aluacha Balaclava College

Aluacha Balaclava College has just hired a new Provost. Problems with faculty compensation at the college forced the former Provost to resign, so the new Provost needs to make the institution of a fair and reasonable compensation system her first priority. As a first step in this process, she has hired your team as consultants to design a compensation system that reflects the following circumstances and principles.

# Circumstances

There are four faculty ranks: Instructor, Assistant Professor, Associate Professor, and Professor, in ascending order. Faculty with Ph.D. degrees are hired at the rank of Assistant Professor. Faculty who are working on a Ph.D. are hired at the rank of Instructor and promoted automatically to Assistant Professor upon completion of their degrees. Faculty may apply for promotion from Associate Professor to Professor after serving at the rank of Associate for seven or more years. The promotion decisions are made by the Provost with recommendations from a faculty committee and are not your concern.

Faculty salaries are for the ten-month period September through June. Raises are always effective beginning in September. The total amount of money available for raises varies from year to year and generally is not known until March for the following year.

The starting salary this year for an Instructor with no prior teaching experience was $27,000, and for an Assistant Professor it was $32,000. Faculty can receive credit, upon hire, for as much as seven years of teaching experience at other institutions.
# Principles

- All faculty should get a raise any year that money is available.
- Faculty should get a substantial benefit from promotion. If one is promoted in the minimum possible time, the benefit should be roughly equal to seven years of normal (non-promotion) raises.
- Faculty who get promoted on time (after seven or eight years in rank) and have careers of 25 or more years should make roughly twice as much at retirement as a new Ph.D. starting off.
- Faculty in the same rank with more experience should be paid more than others with less experience. But the effect of an additional year of experience should diminish over time. In other words, if two faculty stay in the same rank, their salaries should tend to get closer over time.

# The Project

First, design a new pay system without cost-of-living increases. Then incorporate cost-of-living increases. The final piece of this project is to design a transition process for existing faculty that will move all salaries towards your system without cutting anyone's salary. The existing faculty salaries, ranks, and years of service are in Table 1. Discuss any refinements that you think would improve your system.

The Provost has asked for a detailed pay system plan that she can use for implementation, as well as a short executive summary in clear language, which she can present to the Board and to the faculty. The summary should outline the model, its assumptions, its strengths and weaknesses, and the expected results.

# The Results

The solution papers were coded at COMAP headquarters so that names and affiliations of the authors would be unknown to the judges. Each paper was then read preliminarily by two "triage" judges at Salisbury State University, Maryland. At the triage stage, the summary and overall organization are the basis for judging a paper. If the judges' scores diverged for a paper, the judges conferred; if they still did not agree on a score, a third judge evaluated the paper.
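The last principle of Problem B, raises whose year-over-year effect diminishes so that salaries within a rank converge, can be illustrated by a toy raise rule in which each year closes a fixed fraction of the gap to a rank ceiling. The cap of twice the starting salary and the fraction 0.12 below are illustrative assumptions of this sketch, not part of the problem statement or of any contestant's model.

```python
def next_salary(salary, cap, alpha=0.12):
    """One annual raise under a toy rule: close a fraction alpha of the
    gap to a hypothetical rank cap.  The raise is always positive while
    salary < cap, and each successive year's raise is smaller, since the
    remaining gap shrinks by a factor (1 - alpha) every year."""
    return salary + alpha * (cap - salary)

def career_salary(start, cap, years, alpha=0.12):
    """Salary after the given number of annual raises."""
    s = start
    for _ in range(years):
        s = next_salary(s, cap, alpha)
    return s

# Two faculty in the same rank hired at different salaries: their gap is
# multiplied by (1 - alpha) each year, so their pay converges over time.
```

With cap set near 2 × the starting salary, 25 years of such raises bring pay within a few percent of double the start, matching the career-doubling principle; a real design would still need promotion bumps, cost-of-living adjustments, and a transition plan on top.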
Final judging took place at Harvey Mudd College, Claremont, California. The judges classified the papers as follows:

| | Outstanding | Meritorious | Honorable Mention | Successful Participation | Total |
|---|---|---|---|---|---|
| Helix Intersection | 3 | 18 | 43 | 82 | 146 |
| College Salaries | 4 | 26 | 41 | 103 | 174 |
| Total | 7 | 44 | 84 | 185 | 320 |
The judges designated seven papers as Outstanding. They appear in this special issue of *The UMAP Journal*. We list those teams and the Meritorious teams (and advisors) below; the list of all participating schools, advisors, and results is in the Appendix.

Table 1. Salary data for Problem B.
| Case | Years | Rank | Salary | Case | Years | Rank | Salary | Case | Years | Rank | Salary |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 4 | ASSO | 54,000 | 2 | 19 | ASST | 43,508 | 3 | 20 | ASST | 39,072 |
| 4 | 11 | PROF | 53,900 | 5 | 15 | PROF | 44,206 | 6 | 17 | ASST | 37,538 |
| 7 | 23 | PROF | 48,844 | 8 | 10 | ASST | 32,841 | 9 | 7 | ASSO | 49,981 |
| 10 | 20 | ASSO | 42,549 | 11 | 18 | ASSO | 42,649 | 12 | 19 | PROF | 60,087 |
| 13 | 15 | ASSO | 38,002 | 14 | 4 | ASST | 30,000 | 15 | 34 | PROF | 60,576 |
| 16 | 28 | ASST | 44,562 | 17 | 9 | ASST | 30,893 | 18 | 22 | ASSO | 46,351 |
| 19 | 21 | ASSO | 50,979 | 20 | 20 | ASST | 48,000 | 21 | 4 | ASST | 32,500 |
| 22 | 14 | ASSO | 38,462 | 23 | 23 | PROF | 53,500 | 24 | 21 | ASSO | 42,488 |
| 25 | 20 | ASSO | 43,892 | 26 | 5 | ASST | 35,330 | 27 | 19 | ASSO | 41,147 |
| 28 | 15 | ASST | 34,040 | 29 | 18 | PROF | 48,944 | 30 | 7 | ASST | 30,128 |
| 31 | 5 | ASST | 35,330 | 32 | 6 | ASSO | 35,942 | 33 | 8 | PROF | 57,295 |
| 34 | 10 | ASST | 36,991 | 35 | 23 | PROF | 60,576 | 36 | 20 | ASSO | 48,926 |
| 37 | 9 | PROF | 57,956 | 38 | 32 | ASSO | 52,214 | 39 | 15 | ASST | 39,259 |
| 40 | 22 | ASSO | 43,672 | 41 | 6 | INST | 45,500 | 42 | 5 | ASSO | 52,262 |
| 43 | 5 | ASSO | 57,170 | 44 | 16 | ASST | 36,958 | 45 | 23 | ASST | 37,538 |
| 46 | 9 | PROF | 58,974 | 47 | 8 | PROF | 49,971 | 48 | 23 | PROF | 62,742 |
| 49 | 39 | ASSO | 52,058 | 50 | 4 | INST | 26,500 | 51 | 5 | ASST | 33,130 |
| 52 | 46 | PROF | 59,749 | 53 | 4 | ASSO | 37,954 | 54 | 19 | PROF | 45,833 |
| 55 | 6 | ASSO | 35,270 | 56 | 6 | ASSO | 43,037 | 57 | 20 | PROF | 59,755 |
| 58 | 21 | PROF | 57,797 | 59 | 4 | ASSO | 53,500 | 60 | 6 | ASST | 32,319 |
| 61 | 17 | ASST | 35,668 | 62 | 20 | PROF | 59,333 | 63 | 4 | ASST | 30,500 |
| 64 | 16 | ASSO | 41,352 | 65 | 15 | PROF | 43,264 | 66 | 20 | PROF | 50,935 |
| 67 | 6 | ASST | 45,365 | 68 | 6 | ASSO | 35,941 | 69 | 6 | ASST | 49,134 |
| 70 | 4 | ASST | 29,500 | 71 | 4 | ASST | 30,186 | 72 | 7 | ASST | 32,400 |
| 73 | 12 | ASSO | 44,501 | 74 | 2 | ASST | 31,900 | 75 | 1 | ASSO | 62,500 |
| 76 | 1 | ASST | 34,500 | 77 | 16 | ASSO | 40,637 | 78 | 4 | ASSO | 35,500 |
| 79 | 21 | PROF | 50,521 | 80 | 12 | ASST | 35,158 | 81 | 4 | INST | 28,500 |
| 82 | 16 | PROF | 46,930 | 83 | 24 | PROF | 55,811 | 84 | 6 | ASST | 30,128 |
| 85 | 16 | PROF | 46,090 | 86 | 5 | ASST | 28,570 | 87 | 19 | PROF | 44,612 |
| 88 | 17 | ASST | 36,313 | 89 | 6 | ASST | 33,479 | 90 | 14 | ASSO | 38,624 |
| 91 | 5 | ASST | 32,210 | 92 | 9 | ASSO | 48,500 | 93 | 4 | ASST | 35,150 |
| 94 | 25 | PROF | 50,583 | 95 | 23 | PROF | 60,800 | 96 | 17 | ASST | 38,464 |
| 97 | 4 | ASST | 39,500 | 98 | 3 | ASST | 52,000 | 99 | 24 | PROF | 56,922 |
| 100 | 2 | PROF | 78,500 | 101 | 20 | PROF | 52,345 | 102 | 9 | ASST | 35,798 |
| 103 | 24 | ASST | 43,925 | 104 | 6 | ASSO | 35,270 | 105 | 14 | PROF | 49,472 |
| 106 | 19 | ASSO | 42,215 | 107 | 12 | ASST | 40,427 | 108 | 10 | ASST | 37,021 |
| 109 | 18 | ASSO | 44,166 | 110 | 21 | ASSO | 46,157 | 111 | 8 | ASST | 32,500 |
| 112 | 19 | ASSO | 40,785 | 113 | 10 | ASSO | 38,698 | 114 | 5 | ASST | 31,170 |
| 115 | 1 | INST | 26,161 | 116 | 22 | PROF | 47,974 | 117 | 10 | ASSO | 37,793 |
| 118 | 7 | ASST | 38,117 | 119 | 26 | PROF | 62,370 | 120 | 20 | ASSO | 51,991 |
| 121 | 1 | ASST | 31,500 | 122 | 8 | ASSO | 35,941 | 123 | 14 | ASSO | 39,294 |
| 124 | 23 | ASSO | 51,991 | 125 | 1 | ASST | 30,000 | 126 | 15 | ASST | 34,638 |
| 127 | 20 | ASSO | 56,836 | 128 | 6 | INST | 35,451 | 129 | 10 | ASST | 32,756 |
| 130 | 14 | ASST | 32,922 | 131 | 12 | ASSO | 36,451 | 132 | 1 | ASST | 30,000 |
| 133 | 17 | PROF | 48,134 | 134 | 6 | ASST | 40,436 | 135 | 2 | ASSO | 54,500 |
| 136 | 4 | ASSO | 55,000 | 137 | 5 | ASST | 32,210 | 138 | 21 | ASSO | 43,160 |
| 139 | 2 | ASST | 32,000 | 140 | 7 | ASST | 36,300 | 141 | 9 | ASSO | 38,624 |
| 142 | 21 | PROF | 49,687 | 143 | 22 | PROF | 49,972 | 144 | 7 | ASSO | 46,155 |
| 145 | 12 | ASST | 37,159 | 146 | 9 | ASST | 32,500 | 147 | 3 | ASST | 31,500 |
| 148 | 13 | INST | 31,276 | 149 | 6 | ASST | 33,378 | 150 | 19 | PROF | 45,780 |
| 151 | 4 | PROF | 70,500 | 152 | 27 | PROF | 59,327 | 153 | 9 | ASSO | 37,954 |
| 154 | 5 | ASSO | 36,612 | 155 | 2 | ASST | 29,500 | 156 | 3 | PROF | 66,500 |
| 157 | 17 | ASST | 36,378 | 158 | 5 | ASSO | 46,770 | 159 | 22 | ASST | 42,772 |
| 160 | 6 | ASST | 31,160 | 161 | 17 | ASST | 39,072 | 162 | 20 | ASST | 42,970 |
| 163 | 2 | PROF | 85,500 | 164 | 20 | ASST | 49,302 | 165 | 21 | ASSO | 43,054 |
| 166 | 21 | PROF | 49,948 | 167 | 5 | PROF | 50,810 | 168 | 19 | ASSO | 51,378 |
| 169 | 18 | ASSO | 41,267 | 170 | 18 | ASST | 42,176 | 171 | 23 | PROF | 51,571 |
| 172 | 12 | PROF | 46,500 | 173 | 6 | ASST | 35,798 | 174 | 7 | ASST | 42,256 |
| 175 | 23 | ASSO | 46,351 | 176 | 22 | PROF | 48,280 | 177 | 3 | ASST | 55,500 |
| 178 | 15 | ASSO | 39,265 | 179 | 4 | ASST | 29,500 | 180 | 21 | ASSO | 48,359 |
| 181 | 23 | PROF | 48,844 | 182 | 1 | ASST | 31,000 | 183 | 6 | ASST | 32,923 |
| 184 | 2 | INST | 27,700 | 185 | 16 | PROF | 40,748 | 186 | 24 | ASSO | 44,715 |
| 187 | 9 | ASSO | 37,389 | 188 | 28 | PROF | 51,064 | 189 | 19 | INST | 34,265 |
| 190 | 22 | PROF | 49,756 | 191 | 19 | ASST | 36,958 | 192 | 16 | ASST | 34,550 |
| 193 | 22 | PROF | 50,576 | 194 | 5 | ASST | 32,210 | 195 | 2 | ASST | 28,500 |
| 196 | 12 | ASSO | 41,178 | 197 | 22 | PROF | 53,836 | 198 | 19 | ASSO | 43,519 |
| 199 | 4 | ASST | 32,000 | 200 | 18 | ASSO | 40,089 | 201 | 23 | PROF | 52,403 |
| 202 | 21 | PROF | 59,234 | 203 | 22 | PROF | 51,898 | 204 | 26 | ASSO | |
# Outstanding Teams
Helix Intersections Papers

"A Specialized Root-Finding Method for Rapidly Determining the Intersections of a Plane and a Helix"
Harvey Mudd College, Claremont, CA
Advisor: David L. Bosley
Team: Matthew Evans, Andrew Flint, Noah Kubow

"The Single Helix"
Iowa State University, Ames, IA
Advisor: Stephen J. Willson
Team: R. Robert Hentzel, Scott Williams

"Planes and Helices"
Macalester College, St. Paul, MN
Advisor: A. Wayne Roberts
Team: Samar Lotia, Eric Musser, Simeon Simeonov

Faculty Salaries Papers

"Paying Professors What They're Worth"
Harvey Mudd College, Claremont, CA
Advisor: David L. Bosley
Team: Jay Rosenberger, Andrew M. Ross, Dan Snyder

"The World's Most Complicated Payroll"
North Carolina School of Science & Mathematics, Durham, NC
Advisor: Dot Doyle
Team: Frank Thorne, W. Garrett Mitchener, Marci Gambrell

"Long-Term and Transient Pay Scale for College Faculty"
Southeast Missouri State University, Cape Girardeau, MO
Advisor: Robert W. Sheets
Team: Christena Byerley, Christina Phillips, Cliff Sodergren

"How to Keep Your Job as Provost"
University of Alaska Fairbanks, Fairbanks, AK
Advisor: John P. Lambert
Team: Liam Forbes, Marcus Martin, Michael Schmahl

# Meritorious Teams

# Helix Intersections Papers (18 teams)

Baylor University, Waco, TX (Frank H. Mathis)
Beijing Institute of Tech., Beijing, China (Yan-ping Zhao)
California Polytechnic State Univ., San Luis Obispo, CA (Thomas O'Neil) (two teams)
Duke University, Durham, NC (David P. Kraines)
Harvard University, Cambridge, MA (Harry R. Lewis)
Lewis & Clark College, Portland, OR (Robert W. Owens)
Natl. Univ. of Defense Tech., Changsha, Hunan, China (Wang XiaoXing)
New Mexico Inst. of Mining and Tech., Socorro, NM (Brian T. Borchers)
South China University of Tech., Guangzhou, Canton, China (Lejun Xie)
Southeast University, Nanjing, Jiangsu, China (Huangjun Sunzhizhong)
Trinity College Dublin, Dublin, Ireland (Timothy G. Murphy)
University College Galway, Galway, Ireland (M. Tuite)
University of Alaska Anchorage, Anchorage, AK (Ted L. Gifford)
University of Utah, Salt Lake City, UT (Don H.
Tucker)
Worcester Polytechnic Institute, Worcester, MA (Arthur C. Heinricher)
Xidian University, Xian, Shaanxi, China (Wang Yu Ping)
Xidian University, Xian, Shaanxi, China (Ma Yu Xiang)

# Faculty Salaries Papers (26 teams)

College of William & Mary, Williamsburg, VA (Hugo J. Woerdeman)
Fudan University, Shanghai, China (Cao Yuan)
Fudan University, Shanghai, China (Tan Yongji)
Harbin Institute of Technology, Harbin, China (Shi Peilin)
Hiram College, Hiram, OH (James R. Case)
JiLin University, Changchun, Jilin, China (Lu Xian Yui)
Kenyon College, Gambier, OH (Dana N. MacKenzie)
Luther College, Decorah, IA (Reginald D. Laursen)
Mt. St. Mary's College, Emmitsburg, MD (Fred J. Portier)
Muhlenberg College, Allentown, PA (David A. Nelson)
Natl. Univ. of Defense Tech., Changsha, Hunan, China (Wu MengDa)
Shanghai Jiaotong University, Shanghai, China (Longwan Xiang)
Southwestern University, Georgetown, TX (Therese Shelton)
Texas A & M Univ., College Station, TX (Denise E. Kirschner)
Trinity College Dublin, Dublin, Ireland (James C. Sexton)
U.S. Air Force Academy, USAF Academy, CO (Jeffrey S. Stonebraker)
Univ. of Alaska Fairbanks, Fairbanks, AK (Patricia A. Andresen)
Univ. of Colorado at Denver, Denver, CO (David C. Fisher)
Univ. of Missouri-Rolla, Rolla, MO (Roger H. Hering)
University of Dallas, Irving, TX (Charles A. Coppin)
University of Dayton, Dayton, OH (Ralph C. Steinlage)
Vilnius University, Vilnius, Lithuania (Ricardas Kudzma)
Wake Forest University, Winston-Salem, NC (Stephen B. Robinson)
Washington University, St. Louis, MO (Hiro Mukai)
Wheaton College, Wheaton, IL (Paul Isihara)
Xidian University, Xian, Shaanxi, China (Mao Yong Cai)

# Awards and Contributions

Each participating MCM advisor and team member received a certificate signed by the Contest Director and the appropriate Head Judge.
The Institute for Operations Research and the Management Sciences (INFORMS) awarded to each member of two Outstanding teams a cash award and a three-year membership. The teams were from Macalester College (Helix Intersections Problem) and Harvey Mudd College (Faculty Salaries Problem). The teams made presentations at a special MCM session and were given cash awards. Moreover, INFORMS gave free one-year memberships to all members of Meritorious and Honorable Mention teams.

The Society for Industrial and Applied Mathematics (SIAM) designated one Outstanding team from each problem as a SIAM Winner. Each team member received a cash prize, and each team received a subsidized trip to the July 1995 SIAM Annual Meeting in San Diego, CA. The teams were from Iowa State University (Helix Intersections Problem) and from the University of Alaska Fairbanks (Faculty Salaries Problem). These teams made presentations at a special modeling minisymposium.

# Judging

Director
Frank R. Giordano, Dept. of Mathematical Sciences, U.S. Military Academy, West Point, NY

Associate Directors
Chris Arney, Dept. of Mathematical Sciences, U.S. Military Academy, West Point, NY
Robert L. Borrelli, Mathematics Dept., Harvey Mudd College, Claremont, CA

# Helix Intersections Problem

Head Judge
Marvin S. Keener, Mathematics Dept., Oklahoma State University, Stillwater, OK

Associate Judges
Ben A. Fusaro (Triage), Dept.
of Mathematical Sciences, Salisbury State University, Salisbury, MD
Patrick Driscoll, Virginia Polytechnic Institute and State University, Blacksburg, VA
Mario Juncosa, RAND Corporation, Santa Monica, CA
Veena Mendiratta, AT&T Bell Labs, Naperville, IL
Keith Miller, National Security Agency, Fort Meade, MD
Mike Moody, Harvey Mudd College, Claremont, CA
Lee Seitelman, Glastonbury, CT
Matthew Witten, University of Texas, Austin, TX
Daniel Zwillinger, Zwillinger & Associates, Arlington, MA

# Faculty Salaries Problem

Head Judge
Maynard Thompson, Mathematics Dept., Indiana University, Bloomington, IN

Associate Judges
Robert M. Tardiff (Triage), Dept. of Mathematical Sciences, Salisbury State University, Salisbury, MD
Karen Bolinger, Mathematics Dept., Arkansas State University, State University, AR
James Case, Baltimore, MD
William Fox, Dept. of Mathematical Sciences, U.S. Military Academy, West Point, NY
Jerry Griggs, University of South Carolina, Columbia, SC
Don Miller, Dept. of Mathematics, St. Mary's College, Notre Dame, IN
Peter Olsen, National Security Agency, Fort George G. Meade, MD
Judith Pastor, Haverly Systems, Inc., Houston, TX
Catherine Roberts, Mathematics Dept., University of Rhode Island, Kingston, RI
Theresa Sandifer, Mathematics Dept., Southern Connecticut State Univ., New Haven, CT
Michael Tortorella, Middletown, NJ

# Triage Session

Director
Ben A. Fusaro

Head Judge, Helix Intersections Problem
Ben A. Fusaro

Head Judge, Faculty Salaries Problem
Robert M. Tardiff

Associate Judges
Homer W. Austin
Alfred S. Beebe, University of Maryland, Eastern Shore, Princess Anne, MD
E. Boyd, University of Maryland, Eastern Shore, Princess Anne, MD
Donald C. Cathcart
S.M. Hetzler
T.O. Horseman
Peter Olsen, National Security Agency, Fort George G. Meade, MD
Fatollah Salimian
Kathleen M. Shannon
Barbara A. Wainwright
M.E.
Williams
W.J. Yurek, Worcester-Wicomico Community College, Salisbury, MD

Except as noted, the triage judges were from Salisbury State University, Salisbury, MD.

# Sources of the Problems

The Helix Intersections Problem was contributed by Yves Nievergelt (Eastern Washington University, Cheney, WA), who describes its origin in his Author's Commentary in this issue. The Faculty Salaries Problem was contributed by Kathleen M. Shannon (Salisbury State University, Salisbury, MD); the data are public information from Salisbury State University.

# Acknowledgments

MCM was funded this year by the National Security Agency, whose support we deeply appreciate. We thank Dr. Gene Berg of NSA for his coordinating efforts. The MCM is also indebted to INFORMS and SIAM, which provided judges, prizes, and forums for presentations of student papers.

I thank the MCM judges and MCM Board members for their valuable and unflagging efforts. Harvey Mudd College, its Mathematics Dept. staff, and Prof. Borrelli were gracious hosts to the judges.

# Cautions

To the reader of research journals:

Usually, a published paper has been presented to an audience, shown to colleagues, rewritten, checked by referees, revised, and edited by a journal editor. Each of the student papers here is the result of undergraduates working on a problem over a weekend; allowing substantial revision by the authors could give a false impression of accomplishment. So these papers are essentially au naturel. Light editing has taken place: minor errors have been corrected, wording has been altered for clarity or economy, and style has been adjusted to that of *The UMAP Journal*. Please peruse these student efforts in that context.

To the potential MCM Advisor:

It might be overpowering to encounter such output from a weekend of work by a small team of undergraduates, but these solution papers are highly atypical. A team that prepares and participates will have an enriching learning experience, independent of what any other team does.

# Appendix: Successful Participants

KEY:

P = Successful Participation
H = Honorable Mention
M = Meritorious
O = Outstanding (published in this special issue)
A = Helix Intersections Problem
B = Faculty Salaries Problem
INSTITUTION | CITY | ADVISOR | A | B
ALABAMA
Univ. of AlabamaHuntsvilleClaudio H. MoralesP
ALASKA
University of AlaskaAnchorageTed L. GiffordM
FairbanksJohn P. LambertHO
Patricia A. AndresenM
ARIZONA
Northern Arizona U.FlagstaffTerence R. BlowsPP
ARKANSAS
Hendrix CollegeConwayZe'ev BarelP
Williams Baptist Coll.Walnut RidgeLana S. RhoadsP
Joy HollowayP
CALIFORNIA
Calif. Inst. of Tech.PasadenaAlexander S. KechrisH
Calif. Poly. State Univ.S. Luis ObispoThomas O'NeilM,M
Ernest BlattnerP
Calif. State Poly. Univ.PomonaJames R. McKinneyP
Calif. State UniversityNorthridgeGholam Ali ZakeriP
Harvey Mudd CollegeClaremontDavid L. BosleyOO
Humboldt State Univ.ArcataJeffrey B. HaagP
Kathleen M. CroweH
Loyola Marymount U.Los AngelesThomas M. ZachariahP
Pomona CollegeClaremontAmy RadunskayaH
Sonoma State Univ.Rohnert ParkClement E. FalboH
Univ. of CaliforniaBerkeleyAllen M. ChenPH
Univ. of RedlandsRedlandsAlexander E. KoonceP
COLORADO
Metro. State CollegeDenverThomas E. KelleyP
Regis UniversityDenverDiane M. WagnerP
U.S. Air Force Acad.USAF Acad.Jeffrey S. StonebrakerH,HM
University of ColoradoDenverDavid C. FisherM,P
U. of Northern ColoradoGreeleyWilliam W. BoschP
U. of Southern ColoradoPuebloPaul R. ChaconP
CONNECTICUT
Southern Conn. St. Univ.New HavenEdward F. AboufadelP
University of BridgeportBridgeportNinygi WangH
Natalia B. RomalisP
University of HartfordW. HartfordDiego M. BenardeteP
Western Conn. St. Univ.DanburyEdward SandiferH
Judith A. GrandahlH
DISTRICT OF COLUMBIA
Georgetown UniversityWashingtonAndrew VogtPP
FLORIDA
Florida Inst. of Tech.MelbourneLaurene V. FausettP
Jacksonville UniversityJacksonvilleRobert A. HollisterP
Stetson UniversityDelandLisa O. CoulterP
University of S. FloridaFort MyersCharles E. LindseyP,P
GEORGIA
Wesleyan CollegeMaconJoseph A. IskraP
IDAHO
Lewis-Clark State Coll.LewistonBrent BradberryP
ILLINOIS
Illinois CollegeJacksonvilleDarrell E. AllgaierH
Illinois Wesleyan Univ.BloomingtonLawrence N. StoutP
Wheaton CollegeWheatonPaul IsiharaM
INDIANA
Rose-Hulman Inst. of Tech.Terre HauteAaron D. KlebanoffP
Saint Mary's CollegeNotre DamePeter D. SmithPP
Valparaiso UniversityValparaisoRick GillmanP
IOWA
Clarke CollegeDubuqueCarol A. SpiegelP,P
Grinnell CollegeGrinnellAnita E. SolowP,P
Iowa State UniversityAmesStephen J. WillsonO
Luther CollegeDecorahReginald D. LaursenM
Maharishi Int'l Univ.FairfieldCathy GoriniPP
Teikyo Marycrest Univ.DavenportSusan T. YoungbergP
Univ. of Northern IowaCedar FallsTimothy L. HardyP
Gregory M. DotsethH
KANSAS
Benedictine CollegeAtchisonJo Ann Fellin, O.S.B.P
KENTUCKY
Asbury CollegeWilmoreKenneth P. RietzHH
Bellarmine CollegeLouisvilleJohn A. OppeltH
Western Kentucky U.Bowling GreenDouglas D. MooneyP
LOUISIANA
McNeese State Univ.Lake CharlesSid L. BradleyP
George F. MeadP
MAINE
Bowdoin CollegeBrunswickAdam B. LevyP
Colby CollegeWatervilleAmy H. BoydP
University of MaineOronoGrattan P. MurphyP
MARYLAND
Hood CollegeFrederickJohn BoonP
Loyola CollegeBaltimoreDipa ChoudhuryH,H
Mt. St. Mary's Coll.EmmitsburgFred J. PortierM
Theresa A. FrancisP
Salisbury State U.SalisburyKathleen M. ShannonP
Steve M. HetzlerP
Univ. of MarylandCollege ParkMichael C. FuP
MASSACHUSETTS
Harvard UniversityCambridgeHarry R. LewisM,P
Roger W. BrockettH
Smith CollegeNorthamptonRuth HaasP
U. of MassachusettsAmherstEdward A. ConnorsHH
Worcester Poly. Inst.WorcesterArthur C. HeinricherM
Bogdan VernescuH
MICHIGAN
Calvin CollegeGrand RapidsSteven P. DirkseP
Eastern Michigan U.YpsilantiChristopher E. HeeH
Lawrence Tech. Univ.SouthfieldRuth G. FavroP
Howard WhitstonP
Southwest. Mich. C.DowagiacRonald SawatzkyP
MINNESOTA
Macalester CollegeSt. PaulWayne A. RobertsOH
Moorhead State Univ.MoorheadRonald M. JeppsonP
MISSISSIPPI
Jackson State Univ.JacksonDavid C. BramlettH
Carl DrakeP
MISSOURI
Crowder CollegeNeoshoCheryl IngramP
Missouri Southern St. Coll.JoplinPatrick CassensP,P
Northeast Missouri St. U.KirksvilleSteven J. SmithH
Northwest Missouri St. U.MaryvilleRussell N. EulerP
Southeast Missouri St. U.Cape GirardeauRobert W. SheetsO
University of MissouriRollaRoger H. HeringM
Washington UniversitySt. LouisHiro MukaiHM
Wentworth Mil. AcademyLexingtonJ.O. MaxwellP
NEBRASKA
Hastings CollegeHastingsDavid B. CookeH
Nebraska Wesleyan Univ.LincolnP. Gavin LaRoseP
NEVADA
Sierra Nevada CollegeIncline VillageSue WelschP
NEW JERSEY
New Jersey Inst. of Tech.NewarkBruce G. BukietP
NEW MEXICO
New Mexico State Univ.Las CrucesMarcus S. CohenH
New Mexico Tech.SocorroBrian T. BorchersM
NEW YORK
Hofstra UniversityHempsteadR.N. GreenwellPP
Ithaca CollegeIthacaJohn C. MaceliH
James E. ConklinH
Le Moyne CollegeSyracuseWilliam C. RinamanP
Nazareth C. of RochesterRochesterRonald W. JorgensenH
Pace UniversityPleasantvilleRobert A. CiceniaP
Rensselaer Poly. InstituteTroyMark LeviP
Siena CollegeLoudonvilleT.H. RousseauP
Westchester Comm. Coll.ValhallaRowan LindleyP
NORTH CAROLINA
Appalachian State Univ.BooneJaimie HebertP
Holly HirstH
Duke UniversityDurhamDavid P. KrainesM
Richard A. ScovilleH
N.C. Schl. of Sci. & MathDurhamDot DoyleHO
Univ. of North CarolinaWilmingtonRussell L. HermanH
Chapel HillJon W. TolleH
Wake Forest UniversityWinston-SalemStephen B. RobinsonM
Western Carolina Univ.CullowheeJeff A. GrahamP
Joseph S. SportsmanH
NORTH DAKOTA
Univ. of North DakotaWillistonWanda M. MeyerP
Grand ForksDavid J. UherkaP
OHIO
College of WoosterWoosterMatthew BrahmP
Hiram CollegeHiramMichael A. GrajekP
James R. CaseM
Kenyon CollegeGambierDana N. MacKenziePM
University of DaytonDaytonRalph C. SteinlageM
Xavier UniversityCincinnatiRichard J. PulskampH
OKLAHOMA
Oklahoma State Univ.StillwaterJohn E. WolfeH
Southeastern Okla. St. U.DurantBrett M. ElliottP
OREGON
Lewis & Clark CollegePortlandRobert W. OwensM
Southern Oregon St. C.AshlandKemble R. YatesP
PENNSYLVANIA
Bloomsburg UniversityBloomsburgScott E. InchP
Gannon UniversityErieRafal F. AblamowiczP,P
Gettysburg CollegeGettysburgJames P. FinkP
Messiah CollegeGranthamD.C. PhillippyH
Muhlenberg CollegeAllentownDavid A. NelsonM,P
Univ. of PittsburghJohnstownStephen J. CurranP
Westminster CollegeN. WilmingtonCarolyn CuffP
RHODE ISLAND
Rhode Island CollegeProvidenceD.L. AbrahamsonP
SOUTH CAROLINA
Central Carolina Tech. C.SumterKaren G. McLaurinP,P
The CitadelCharlestonKanat DurgunP
Coastal Carolina Univ.ConwayPrashant S. SansgiryP
Columbia CollegeColumbiaScott A. SmithP
Francis Marion Univ.FlorenceCatherine A. AbbottP
Midlands Tech. CollegeColumbiaJohn R. LongP
Rick BaileyP
SOUTH DAKOTA
Northern State Univ.AberdeenA. S. ElkhaderH
TENNESSEE
David Lipscomb Univ.NashvilleMark A. MillerPP
TEXAS
Baylor UniversityWacoFrank H. MathisMH
Southwestern UniversityGeorgetownTherese SheltonM
Texas A & M UniversityColl. StationDenise E. KirschnerM
University of DallasIrvingCharles A. CoppinM
U. Texas-Pan AmericanEdinburgRoger A. KnobelPP
UTAH
University of UtahSalt Lake CityDon H. TuckerM,P
Utah State UniversityLoganMichael C. MinnotteP
Weber State UniversityOgdenAfshin GhoreishiP
VERMONT
Johnson State CollegeJohnsonGlenn D. SproulP,P
VIRGINIA
Coll. of William & MaryWilliamsburgLawrence M. LeemisP
Hugo J. WoerdemanM
James Madison Univ.HarrisonburgJames S. SochackiP
Roanoke CollegeSalemJeffrey L. SpielmanP
Thomas Jefferson H. S. for Sci. & Tech.AlexandriaGeraldine F. OlivetoP
Peter J. BraxtonH
University of RichmondRichmondKathy W. HokeH
Virginia Pol. Inst. & St. U.BlacksburgJoel A. NachlasH
Sheldon H. JacobsonP
WASHINGTON
Eastern Wash. Univ.CheneyDavid JabonP
Univ. of Puget SoundTacomaRobert A. BeezerP
Martin JacksonH
Western Wash. Univ.BellinghamTjalling J. YpmaHP
WISCONSIN
Alverno CollegeMilwaukeeSusan F. PustejovskyP,P
Beloit CollegeBeloitPhilip D. StraffinPP
Northcentral Tech. Coll.WausauFrank J. FernandesP
Robert J. HenningP
Carmen OlsonP
Ripon CollegeRiponRobert J. FragaH,P
St. Norbert CollegeDe PereJohn A. FrohligerP
University of WisconsinMadisonHoward E. ConnerH
OshkoshAndrew E. LongP
K. L. D. GunawardenaP
University of WisconsinPlattevilleSherrie NicolH
Clement T. JeskeH
Stevens PointNorman CuretH
BULGARIA
Bulgarian Acad. of Sci.SofiaJordan B. TabovP
Petar S. KenderovH
CANADA
Scarborough College, University of TorontoToronto, Ont.Paul S. SelickP,P
University of CalgaryCalgary, Alb.David R. WestbrookP
University of TorontoToronto, Ont.Luis A. SecoH,H
CHINA
Anhui UniversityHefeiTeng YaoqingP
Wang HuiminP
Automation Eng'ng Coll. of Beijing Union Univ.BeijingRen Kai-longP
Wang Xin-fengH
Beijing Institute of Tech.BeijingZhao Yan-pingM
Qin HongxunH
Beijing Normal UniversityBeijingLiu LaifuH
Zeng WenyiP
Di ZhengruP
Beijing U. of Post & Tel.BeijingDing JinkouP
Luo Shou ShanP
Beijing U. of Sci. & Tech.BeijingWang BingtuanP
Chen MingwenP
Yang XiaomingPP
China Pharmaceutical U.NanjingYang Jing HuaP
Qiu Jia XueH
Chongqing UniversityChongqingChu GongH
Ren ShanqiangH
Qu GongP
Liu QiongxunP
Dalian University of Tech.DalianYu Hong QuanH
He Ming FengP
E. China U. of Sci. & Tech.ShanghaiXu SanbaoH,H
Yuanhong LuP,P
Fudan UniversityShanghaiCao YuanM
Tan YongjiM
Ye Yao-huaH
He YeP
Harbin Engineering Univ.HarbinZhang XiaoweiP
Shen JihongH
Gao ZhenbinP
Harbin Institute of Tech.HarbinShi PeilinPM,P,P
Hefei University of Tech.HefeiDu XueqiaoH
Huazhong U. Sci. & Tech.WuhanQi HuanP
JiLin UniversityChangchunLin ZhenghuaH
Lu Xian YuiM
Zhao ShuvenP
Jinan UniversityGuangzhouZeng YijunP
Fan SuohaiP
Nanjing U. Aero. & Astro.NanjingguZhang YiH
Gu YudiH
Zhou YiqianH
Nanjing U. of Sci. & Tech.NanjingQian Xiong PingP
Zhao Chong GaoP
Natl. U. of Defense Tech.ChangshaWu MengDaM
Wang XiaoXingM
Wu YiH
Northwestern Poly. Univ.XianSu ChaoweiP
Xu WeiP
Yang JianP
Rong HaiwuP
Peking UniversityBeijingLei GongyanH
Wu Wei MinP,P
Shanghai Jiatong Univ.ShanghaiXiang LongwanHM
Chen ZhiheHH
Shanghai Teachers' Univ.ShanghaiDan ShaH
South China U. of Tech.GuangzhouXiao RenyueH
Xie LejunM
Zhu FengfengP
Southeast UniversityNanjingDeng Jian-mingH
Huangjun SunzhizhongM
Tsinghua UniversityBeijingGao CeliP
Song BinhengP
U. of Sci. & Tech. of ChinaHefeiWang ShuheH
Xu JunmingH
Xian Jiaotong UniversityXianHe XiaoliangP
Dong TianxinP
Zhou YicangP
Dai YonghongP
Xidian UniversityXianCai Mao YongM
Wang Yu PingM
Ma Yu XiangM
Zhongshan UniversityGuangzhouZhang LeiPP
He YuanjiangP
HONG KONG
Hong Kong Baptist UniversityKowloonWai Chee ShiuP
Li Zhi LiaoH
IRELAND
Trinity College DublinDublinJames C. SextonM
Timothy G. MurphyM
University College GalwayGalwayPatrick M. O’LearyH
M. TuiteM
LATVIA
University of LatviaRigaJanis P. VucansP,P
LITHUANIA
Vilnius UniversityVilniusAlgirdas ZabulionisP
Ricardas KudzmaM
ZIMBABWE
University of ZimbabweHarareJames PreenP
# A Specialized Root-Finding Method for Rapidly Determining the Intersections of a Plane and a Helix

Matthew Evans

Andrew Flint

Noah Kubow

Harvey Mudd College

Claremont, CA 91711

{mevans, aflint, nkubow}@hmc.edu

Advisor: David L. Bosley

# Introduction

Our problem is to locate all of the intersections between a helix and a plane in general position.

The problem statement leaves several potentially significant parameters unspecified. In the most general case, solutions may be entirely intractable. Certainly, such cases would be computationally difficult and therefore inappropriate for real-time simulations. At the outset of our investigation, we made the following assumptions:

- Software application. The problem is motivated by a desire to predict intersections using computer software, and therefore any solution should be proposed as if it were the computational engine for a larger package. We further assume that all relevant information about the plane and helix is passed into this engine from the user interface.
- Nonzero tolerance. The engine should be expected to locate approximate points of intersection within a certain numerical tolerance. Exact solutions are not required for graphical applications. Particularly when a rapid sequence of solutions is desired, the tolerance should increase, to minimize computation time.
- Frame-by-frame animation. We approach the problem of real-time simulation as a finite sequence of discrete static instances of the general problem. For example, a $90^{\circ}$ rotation of the plane is simulated by a handful of fixed relative orientations, which the engine solves sequentially. Each solution set is then used to construct a single frame in the sequence, which is animated for the user in real time.
- Nondegenerate, regular finite helix. We assume that the helix is circular, of finite height, and of uniform pitch and nonzero radius at any fixed time.
- Effectively infinite plane.
We consider only an infinite plane, since the possibility of a helix sneaking around the edge of a plane segment adds considerable difficulty to the problem, and we do not believe that is in the spirit of the problem as stated. +- Relative coordinate system. The positions of any intersections are to be generated by the engine in its own coordinate system and are then passed out to the calling function. If necessary, the calling function then rescales and translates these points to a coordinate system appropriate for the user interface, including projecting these points in three-space onto a two-dimensional screen. + +# Constructing the Model + +On-screen, a helix and plane may be oriented in virtually any manner. In constructing a model, however, we are concerned only with the relative orientations and positions of the two objects. In light of this, we are free to choose an arrangement that is easiest to treat mathematically. + +# Assigning a Coordinate System + +We begin by assigning a coordinate system to the model. First, we choose one end of the helix to be the initial point and the other end to be the terminal point. The $z$ -axis is the axis of the helix, and the $x$ -axis contains the initial point. The origin of the coordinate system is thereby fixed at the bottom of the helical axis. The orientation of the helix in this coordinate system is shown in Figure 1. For the purposes of discussion, we consider only right-handed helices, but the model can easily be extended to left-handed ones. + +To locate the plane in this coordinate system, we take $(0, 0, z_0)$ to be the point of intersection between the plane and the $z$ -axis. This intersection is guaranteed to occur provided that the vector normal to the cutting plane is not perpendicular to the $z$ -axis (a special case, which we address later). + +Since we have located the point $(0,0,z_0)$ , the plane is completely specified by the angles $\theta_0$ and $\phi_0$ . 
![](images/4750d53c62eb569a12d8fa02043629bbe834e4f5302d3786725609db100069c8.jpg)
Figure 1. The helix-defined coordinate system.

In our representation, $\theta_0$ is the angle formed by the line of intersection of the cutting plane with the plane $z = z_0$, as measured counterclockwise from the $x$-axis. The angle $\phi_0$ is the angle of declination of the cutting plane from the $z$-axis. One could think of this as creating the plane specified by $z_0$, $\theta_0$, and $\phi_0$ by starting with the vertical $xz$-plane, then rotating the plane about the $z$-axis counterclockwise through $\theta_0$ radians, rotating it back from the $z$-axis by $\phi_0$ radians, and finally translating it directly upward to $z_0$. The orientation of the plane in the coordinate system is shown in Figure 2.

![](images/f3de1e629a47cd8a103f683c913346b6ce3fca510154db55b24db305f18e69e2.jpg)
Figure 2. Fixing the plane in the coordinate system.

# Parametrizing the Problem

By defining our coordinate system in this way, we can preserve any relative orientation of a helix and a cutting plane while fixing the helix in a vertical position. In these coordinates, we can easily describe these two objects with explicit equations. If the helix has radius $R$, pitch $p$, and length $L$, then it is given in rectangular coordinates by the parametric equations

$$
x = R \cos 2 \pi t, \quad y = R \sin 2 \pi t, \quad z = p t, \quad 0 \leq t \leq L / p. \tag {1}
$$

For $R > 0$, this generates a right-handed helix.

Similarly, the plane can be described directly in rectangular coordinates by

$$
z = - \sin \theta_ {0} \tan \left(\frac {\pi}{2} - \phi_ {0}\right) x + \cos \theta_ {0} \tan \left(\frac {\pi}{2} - \phi_ {0}\right) y + z _ {0}. \tag {2}
$$

For a derivation of this result, see Appendix A.

Now consider a right circular cylinder of radius $R$ and height $L$, centered on the $z$-axis and resting on the $xy$-plane.
This cylinder is given by

$$
x = R \cos 2 \pi t, \quad y = R \sin 2 \pi t, \quad z = s, \quad 0 \leq s \leq L. \tag {3}
$$

This cylinder contains the helix, so the intersection of the cylinder and the cutting plane must contain all of the intersections between the helix and the plane. That intersection of the cylinder and the plane is the curve

$$
z = R \tan \left(\frac {\pi}{2} - \phi_ {0}\right) \sin (2 \pi t - \theta_ {0}) + z _ {0},
$$

as determined by the simultaneous solution of the equations for $z$ in (2) and (3). See Appendix A for details.

# A Visual Interpretation

Further insight into the problem of finding intersections between the helix and the cutting plane can be gained from the following illustration. Suppose that we place a bead on the top of the helix and allow it to slide downward along the helix at a constant rate. If there is a bright light directly above the bead, we will see the bead's shadow repeatedly tracing out an elliptical curve on the cutting plane. We further endow the bead with the magical ability to slide directly through the plane without stopping. Each time the bead comes into contact with its shadow, the bead must be passing through the plane.

In mathematical terms, the motion of the bead is described by (1)—if you invert time, or reverse the influence of gravity, anyway—and the motion of its shadow is given by the elliptical curve above, with $t$ running on as in (1). Clearly, then, since the $x$- and $y$-components of the bead and its shadow are equal for all $t$, we need only search for those instants when they share the same $z$-component.

Thus, we would like to find all solutions to the equation

$$
p t = R \tan \left(\frac {\pi}{2} - \phi_ {0}\right) \sin \left(2 \pi t - \theta_ {0}\right) + z _ {0}, \quad 0 \leq t \leq L / p. 
\tag {4}
$$

Unfortunately, this is a transcendental equation whose solutions cannot be found analytically for $\phi_0 \neq 0$. When $\phi_0 = 0$, the equation is not a valid description of the helical intersections with the cutting plane; we have found analytical solutions to the problem in this case (see Appendix B).

Although (4) has no analytical solutions when $\phi_0 \neq 0$, we have developed a solution algorithm that takes advantage of the unusual characteristics of the equation. A presentation and discussion of our algorithm is given in the following section. Four views of the intersection of a sample helix and plane are shown in Figure 3.

![](images/7040d0a60882b47607e35bd2dc5b27d78f308e31343bcf4e7058ee0484abaa22.jpg)

![](images/ace8592a164b3f791f78e1bf1dc8af29b027341b6866777bb13d5b365d5baa8c.jpg)

![](images/de510baa5309dd820a2a4d705ccbd9cf54b332613103fe4692f6bfe308fd1f47.jpg)
Figure 3. Four views of a typical plane-helix intersection.

![](images/ae726463f2557d0df2ab27fb582ef2dece671551326d8052bd7452e2107990b9.jpg)

# Nondimensionalizing the Problem

Eq. (4) can be somewhat difficult to deal with. To allow for easier analysis, we nondimensionalize the equation using the definitions

$$
\tau = 2 \pi t, \qquad \beta = \frac {2 \pi z _ {0}}{p}, \qquad \sigma = \frac {2 \pi R \tan \left(\frac {\pi}{2} - \phi_ {0}\right)}{p}.
$$

This nondimensionalization allows us to consider the equation

$$
\tau = \sigma \sin (\tau - \theta_ {0}) + \beta .
$$

We then define a function

$$
f (\tau) = \sigma \sin \left(\tau - \theta_ {0}\right) + \beta - \tau \tag {5}
$$

and attempt to solve $f(\tau) = 0$. By finding the roots of $f$, we find the points of intersection between the helix and the plane. Having found a root $\tau^{*}$ of $f$, we simply divide this value by $2\pi$ and substitute into the parametric equations in (1) to locate the point of intersection in Cartesian coordinates.
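The nondimensional function $f$ and the recovery of a Cartesian point from a root $\tau^{*}$ can be sketched in C++ as follows. This is only an illustration under the definitions above; the structure and all names are ours, not the authors' (their code, Appendix D, was omitted):

```cpp
#include <cmath>

const double PI = std::acos(-1.0);

// Nondimensional parameters of f (hypothetical names, not the authors'):
struct Params {
    double sigma;   // sigma = 2*pi*R*tan(pi/2 - phi0)/p
    double beta;    // beta  = 2*pi*z0/p
    double theta0;  // rotation of the cutting plane about the z-axis
};

// f(tau) = sigma*sin(tau - theta0) + beta - tau; its roots correspond to
// intersections of the helix and the cutting plane.
double f(const Params& q, double tau) {
    return q.sigma * std::sin(tau - q.theta0) + q.beta - tau;
}

struct Point3 { double x, y, z; };

// Map a root tau* of f back to Cartesian coordinates through t = tau*/(2*pi)
// and the parametric helix equations (1).
Point3 intersectionPoint(double R, double p, double tauStar) {
    double t = tauStar / (2.0 * PI);
    return { R * std::cos(2.0 * PI * t), R * std::sin(2.0 * PI * t), p * t };
}
```

With $\sigma = 0$ (a horizontal cutting plane), $f$ reduces to $\beta - \tau$ and the single root sits at $\tau = \beta$, which is a quick sanity check on the definitions.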
# Analysis of the Model

In proceeding to analyze this model, we first produced a plot of $f$, as shown in Figure 4. The curve represents the vertical separation between the bead and its shadow for any $\tau$.

![](images/d286ecefdb35320b076e4b02b7d951c780706bda50afaaaa9a48e781d0705b8a.jpg)
Figure 4. A plot of the function $f$, the vertical separation between the bead and its shadow, as a function of $\tau$.

First note that the function $f$ is bounded above by $\sigma \cdot 1 + \beta - \tau$ and below by $\sigma \cdot (-1) + \beta - \tau$, the dashed lines in Figure 4. These lines cross the $\tau$-axis at $\beta \pm \sigma$, so we can limit our search for roots of $f$ to $\tau$ in the interval $(\beta - \sigma, \beta + \sigma)$. Of course, for real-time applications, even limiting the search for intersections to this interval may not let us achieve sufficiently fast solutions. Furthermore, the size of this interval depends on variables controlled by the user and cannot be guaranteed to be small. Moreover, because of the rapid oscillations of the function, most standard root-finding algorithms, such as Newton's method or the bisection method, will not perform adequately on this function.

# A Fast Approximation

As a first approximation, we devised a root-finding technique that constructs a linear sketch of $f$. The local maxima and minima of $f$ can be found analytically by locating the roots of its first derivative. The solutions of

$$
\frac {d f}{d \tau} = \sigma \cos (\tau - \theta_ {0}) - 1 = 0
$$

are given by

$$
\tau = \theta_ {0} \pm \arccos \frac {1}{\sigma} + 2 \pi n, \qquad \text {for } \sigma > 1.
$$

(The special case when $|\sigma| \leq 1$ is addressed in Appendix B.) We use this family of points to create a jagged linear approximation $g$ of $f$, by connecting each maximum to its immediately surrounding minima, and vice versa. Figure 5 illustrates $g$.
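The critical points that hinge the linear sketch $g$ can be computed directly from the formula above. A small sketch for $\sigma > 1$ (helper names are ours):

```cpp
#include <cmath>
#include <utility>

const double PI = std::acos(-1.0);

// The n-th pair of critical points of f(tau) = sigma*sin(tau-theta0)+beta-tau,
// valid for sigma > 1.  The point theta0 - arccos(1/sigma) is a local minimum
// of f and theta0 + arccos(1/sigma) the following local maximum.
// (Hypothetical helper names; the authors' code was omitted.)
std::pair<double, double> criticalPair(double sigma, double theta0, int n) {
    double a = std::acos(1.0 / sigma);
    double tauMin = theta0 - a + 2.0 * PI * n;  // local minimum of f
    double tauMax = theta0 + a + 2.0 * PI * n;  // local maximum of f
    return { tauMin, tauMax };
}

// df/dtau, used here only to confirm the critical points.
double fprime(double sigma, double theta0, double tau) {
    return sigma * std::cos(tau - theta0) - 1.0;
}
```

Note that $\beta$ plays no role here: shifting $f$ vertically moves its values but not the locations of its extrema.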
From this we can easily (and quickly) estimate the roots of $f$ by finding the roots of $g$. It is especially important to note that because $g$ is hinged on the local maxima and minima of $f$, there will be precisely the same number of roots of $g$ as of $f$. In other words, this method of root approximation can never miss a real intersection, or introduce an artificial intersection, of the helix and the plane.

![](images/798ddbe4f35f64859735d21ff3c9da2c93ce80f30db4ec0509b05633bf4f0f57.jpg)
Figure 5. The function $g$, which approximates $f$.

From Figure 4, we see that the roots of $f$ occur, in increasing $\tau$, either between a maximum and the following minimum, or between a minimum and the following maximum. We refer to the former as a "descending" root and the latter as an "ascending" root. According to the derivation in Appendix C, all of the ascending roots are separated by a constant interval in $\tau$. The constant interval is $2\pi (1 + 1 / s)$, where $s$ is the slope of the line connecting (in the case of ascending roots) a minimum to the following maximum. Descending roots are separated by a different constant interval in $\tau$, given by the same expression but with $s$ the slope of the line connecting a maximum to the following minimum.

For example, having located the first ascending root of $g$ at $\tau_{1}$, we know that the $(n + 1)^{\mathrm{st}}$ ascending root occurs at $\tau_{1} + 2\pi n(1 + 1 / s)$. In this way, we can generate the complete collection of $g$'s roots almost immediately. Only those roots corresponding to points on the finite helix need to be considered, which limits $n$; the limits are derived in Appendix C.

This calculation of the roots of $g$ constitutes a rough but very fast estimate of the roots of $f$ and in some cases may actually suffice for real-time graphical applications.
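The constant spacing $2\pi(1 + 1/s)$ makes generating a whole family of roots of $g$ immediate once the first one is known. A sketch under the stated assumptions (function name is ours):

```cpp
#include <cmath>
#include <vector>

const double PI = std::acos(-1.0);

// Generate the ascending (or descending) roots of the linear sketch g that
// fall in [tauLo, tauHi], starting from the first such root tau1.  Successive
// roots of one type recur at the constant spacing 2*pi*(1 + 1/s), where s is
// the slope of the corresponding segments of g.  For sigma > 1 the ascending
// slope satisfies s > 0 and the descending slope s < -1, so the spacing is
// positive in both cases.  (Sketch; names ours.)
std::vector<double> rootFamily(double tau1, double s,
                               double tauLo, double tauHi) {
    std::vector<double> roots;
    double step = 2.0 * PI * (1.0 + 1.0 / s);
    for (double tau = tau1; tau <= tauHi; tau += step)
        if (tau >= tauLo) roots.push_back(tau);
    return roots;
}
```

The interval $[\tau_{\mathrm{Lo}}, \tau_{\mathrm{Hi}}]$ here stands for the range of $\tau$ covered by the finite helix, which is what bounds $n$ in the text.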
If the calling program allows a generous enough tolerance in the coordinates of the helical intersections, this initial collection of approximate roots will be adequate, and the problem can be considered solved for the current frame in the animation sequence.

It should further be noted that as more intersections occur, the linear approximation generates increasingly accurate estimates. In terms of the model, the user may increase the number of intersections by increasing $R$ (the radius of the helix), by decreasing $p$ (the pitch of the helix), or by decreasing $\phi_0$ (the inclination of the cutting plane to the helical axis). All of these changes increase the nondimensional parameter $\sigma$. Therefore, the vertical distance between adjacent maxima and minima increases, and the linear approximation $g$ becomes more and more accurate as an indicator of the roots of $f$. Thus, as the apparent complexity of the problem increases, our algorithm experiences only a nominal increase in runtime and achieves more accurate first approximations.

# A Rapid Root Search

In many cases, however, the roots of the linear approximation to $f$ may not satisfy the accuracy requirements for the current frame. In this event, the algorithm engages a more precise method for finding the roots of $f$.

Taking as seed values the roots of $g$, we use a modified Newton's method to zero in on the roots of $f$. The method is modified by approximating the derivative of $f$ at the seed point. Essentially, we replace the Newton correction $-f(\tau)/f'(\tau)$ by $f(\tau)$ times a constant: $\pi / 2\sigma$ at descending roots (where the slope of $f$ is negative) and $-\pi / 2\sigma$ at ascending roots (where it is positive). The determination of these approximations is given in Appendix C.

We use this alternative to a true Newton's method for two reasons.
- Each computation of the derivative entails computing a cosine function, which is orders of magnitude more time-consuming than a simple variable lookup. By using two constant values, we significantly reduce the computation time.
- Newton's method often has trouble locating the roots of functions with periodic derivatives, such as $f$.

As in Newton's method, we evaluate the function $f$ at a seed point $\tau_{s}$ to determine the direction and magnitude of the error there. The next approximation is $\tau_{s}$ shifted by $f(\tau_{s}) \times \pi / 2\sigma$ (with the sign appropriate to the type of root), plus a small perturbation. This process is repeated until the approximation is sufficiently close to the root, yielding an estimated intersection within the desired tolerance. The perturbation has period three, so our linear method is unlikely to fall into an indefinite oscillation about a root. Additionally, the multiplicative factor $\pi / 2\sigma$ is well below $\pi / \sigma$, the limit on jump size that prevents the algorithm from skipping too far from the approximate root and missing the actual root altogether. A more detailed discussion of this technique and the parameters chosen is presented in Appendix C.

# Testing the Model

We coded our algorithm in $\mathrm{C}++$ and ran several test cases to confirm its root-finding capabilities for this particular problem. Our trials suggest not only that the engine is very rapid in its approximations of the roots of $f$, but also that it can attain a great level of accuracy with a nominal time penalty. The code itself is presented in Appendix D. [EDITOR'S NOTE: Omitted.]

# Runtime

We programmed a series of ten frames, with each frame representing a $10^{\circ}$ rotation of the cutting plane about the $z$-axis. The angle of declination $\phi_0$ was fixed at $45^{\circ}$, and all parameters describing the helix were held constant. We feel this might represent a typical course of duties demanded by the user.
On average, the algorithm calculated about ten points of intersection between the plane and helix in each frame and was able to generate all ten frames in 0.4 seconds. This indicates an average speed of about 25 frames per second.

To put the algorithm to a more demanding test, we then programmed a series of 100 frames. This time, all parameters were permitted to vary randomly (within appropriate bounds) between frames. We found essentially the same runtime estimate: 20-25 frames per second.

# Accuracy

In each of our trial runs, we specified a tolerance level that corresponds to an allowable error in the distance between an estimated intersection and an actual intersection in three-space. Within the algorithm, this tolerance is transformed into an upper bound on the allowable error in the root approximation. In every case tested, our algorithm was able to satisfy this accuracy test. This is because the algorithm always succeeds in finding an actual root to within the precision of the machine or that requested by the calling program: the period-three perturbation in the root-finding iterations keeps the algorithm from bouncing indefinitely around the root.

Furthermore, our root-finding method outperformed the root-finding routines of Mathematica (which uses Newton's method), whose approximations were always farther from the true root than our algorithm's.

# A Graphical Test

The graphical results of a simulation are presented in Figure 6.

![](images/3c505684cd9aa52fd832961a6846264adaa69d761a21d14a3f8c92c942c71d99.jpg)
Figure 6. Intersection points as generated by our algorithm.

![](images/e6d1a6c23cc4bf8374ca7444a4ec4e29254dec70abe1350353a3adfffe02e31c.jpg)

# Critique of the Model

# Strengths of the Model and Algorithm

By fixing the coordinate system about the helix, we can easily construct parametric equations to describe the helix in Cartesian coordinates.
This representation in turn allows us to construct a relatively simple equation, the roots of which correspond to actual intersections between the helix and the cutting plane. These roots can easily be translated to give the locations of the intersections in Cartesian coordinates.

Our algorithm for finding the roots of (5) also has several notable features. First, by using a linearized approximation to $f$, we are guaranteed never to miss a root. That is, the number of approximate roots found from the linearization $g$ will always equal the true number of roots of $f$. Furthermore, our root-searching technique can always improve on these estimates of the roots of $f$ to within the desired accuracy. That is, for computational purposes, the algorithm always finds the correct roots.

In addition, since we compute all the parameters of (5) for each new frame, any parameter describing the system can vary arbitrarily between consecutive frames. For example, although the radius of the helix is assumed to be uniform along its entire length in one frame, the radius may increase or decrease in the next frame with no performance penalty. Similarly, the length and pitch of the helix, as well as the relative orientation of the plane to the helix, can vary between frames without disturbing the root-finding capabilities of the engine.

Our algorithm can determine roots very quickly. In fact, as the number of intersections (and thus the number of roots) increases, the speed with which these roots are determined increases, because the first approximation of the roots is more likely to satisfy the accuracy criterion.

Finally, the algorithm exhibits very little sensitivity to the input parameters. That is, there are no pathological cases for which the algorithm fails. The model represented by (5) fails to describe the desired situation only when the cutting plane is parallel to the helical axis, a case for which we give analytical results.
# Weaknesses of the Model

The weaknesses of the model lie in its assumptions. The model assumes a helix of only finite length, with a uniform radius and pitch along its entire length. Provided the plane is not parallel to the helical axis, intersections can occur over only a finite stretch of the helix. More importantly, the model cannot represent a helix that varies in radius or pitch, though such a case may not even qualify as a true helix to some.

Another drawback is that the model considers only a static relationship between the helix and the plane. No attempt has been made to incorporate, for example, a rate of change of the declination of the plane from the helical axis. Instead, our model assumes that relative motion between the two objects occurs in consecutive discrete steps.

The algorithm to find the roots of equation (5) also has some drawbacks. As with any computational routine, runtime is bound to increase when greater accuracy is desired. For primarily graphical applications, however, extreme accuracy is seldom required. Furthermore, the runtime limitations are dramatic only as $\sigma$ approaches 1 from above.

In addition, computational techniques inherently introduce numerical errors in solutions; in some cases, this may cause misleading results. For example, consider a situation where the cutting plane does not cut directly through the helix but instead contains only a single point where the helix is tangent to the plane. This would correspond to a point in (5) where $f = 0$ and $df / d\tau = 0$. In this case, our algorithm would find two roots, one corresponding to an ascending root and the other to a descending root. However, the roots would be identical within machine error (not the user-requested error) and thus would yield two identical points of intersection in three-space.
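The refinement loop of the "Rapid Root Search" section can be reconstructed as follows. This is our own sketch (the authors' Appendix D listing was omitted): the constant correction factor $\pi/2\sigma$, the sign convention, and the period-three perturbation follow the text, while the perturbation size, stopping rule, and iteration cap are our choices:

```cpp
#include <cmath>

const double PI = std::acos(-1.0);

double f(double sigma, double theta0, double beta, double tau) {
    return sigma * std::sin(tau - theta0) + beta - tau;
}

// Constant-slope Newton refinement seeded at a root of the linear sketch g.
// The correction f(tau)*pi/(2*sigma) is subtracted at an ascending root and
// added at a descending one; a small period-three perturbation keeps the
// iteration from oscillating indefinitely about the root.
double refineRoot(double sigma, double theta0, double beta,
                  double seed, bool ascending, double tol) {
    double mult = PI / (2.0 * sigma);
    double perturb[3] = { 0.0, 0.25 * tol, -0.25 * tol };  // our choice
    double tau = seed;
    for (int k = 0; k < 200; ++k) {                        // safety cap, ours
        double err = f(sigma, theta0, beta, tau);
        if (std::fabs(err) * mult <= tol) break;           // close enough in tau
        tau += (ascending ? -err : err) * mult + perturb[k % 3];
    }
    return tau;
}
```

For $\sigma = 4$, $\theta_0 = \beta = 0$, the ascending root at $\tau = 0$ and the descending root near $\tau \approx 2.47$ are both recovered from nearby seeds, which matches the contraction argument: near a root the error shrinks by a constant factor each pass.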
# Appendix A: Derivations of Equations

# Derivation of the Plane Equation

A plane is determined by a normal vector $(a, b, c)$ and a point $(x_0, y_0, z_0)$ in the plane. The $z$-coordinate of the plane can be written as a function of $x$ and $y$ as

$$
z = \frac{a}{c} (x_0 - x) + \frac{b}{c} (y_0 - y) + z_0.
$$

The cutting plane is defined so that it contains $(0,0,z_0)$. Furthermore, a vector normal to the cutting plane is given by the cross product

$$
\vec{n} = (\cos \theta_0, \sin \theta_0, 0) \times \left(- \cos \left(\frac{\pi}{2} - \phi_0\right) \sin \theta_0, \cos \left(\frac{\pi}{2} - \phi_0\right) \cos \theta_0, \sin \left(\frac{\pi}{2} - \phi_0\right)\right).
$$

This product reduces to

$$
\vec{n} = \left(\sin \theta_0 \sin \left(\frac{\pi}{2} - \phi_0\right), - \cos \theta_0 \sin \left(\frac{\pi}{2} - \phi_0\right), \cos \left(\frac{\pi}{2} - \phi_0\right)\right).
$$

Thus, the $z$-coordinate of the cutting plane as a function of $x$ and $y$ is

$$
z = - \sin \theta_0 \tan \left(\frac{\pi}{2} - \phi_0\right) x + \cos \theta_0 \tan \left(\frac{\pi}{2} - \phi_0\right) y + z_0.
$$

# The Equation of the Elliptical Intersection

Combining the equations for $x$ and $y$ given in (1) and the equation for $z$ in (2) yields the intersection of the cutting plane and the cylinder enclosing the helix:

$$
\begin{aligned}
z_e &= - \sin \theta_0 \tan \left(\frac{\pi}{2} - \phi_0\right) R \cos 2 \pi t + \cos \theta_0 \tan \left(\frac{\pi}{2} - \phi_0\right) R \sin 2 \pi t + z_0 \\
&= R \tan \left(\frac{\pi}{2} - \phi_0\right) \left(\cos \theta_0 \sin 2 \pi t - \sin \theta_0 \cos 2 \pi t\right) + z_0 \\
&= R \tan \left(\frac{\pi}{2} - \phi_0\right) \sin (2 \pi t - \theta_0) + z_0.
\end{aligned}
$$

# Appendix B: Special Cases

# The Case of a Vertical Cutting Plane

When $\phi_0 = 0$, the cutting plane is parallel to the $z$-axis, and the parameter $z_0$ has no interpretation. To describe the plane, let the perpendicular distance from the plane to the $z$-axis be $r_0$. The equation of the plane becomes

$$
r = \frac{r_0}{\sin (2 \pi t - \theta_0)}.
$$

Note that if $r_0 > R$, there are no intersections between the plane and the helix. Otherwise, finding the intersections of a vertical cutting plane with the helix is equivalent to finding all $t$ that satisfy either of the following equations:

$$
R = \left\{ \begin{array}{l} \frac{r_0}{\sin (2 \pi t - \theta_0 - 2 \pi n)} \\ \frac{r_0}{\sin (\pi - 2 \pi t + \theta_0 - 2 \pi m)}, \end{array} \right.
$$

where $m, n \in \mathbb{Z}$. Solving for $t$ yields infinite families of solutions

$$
t = \left\{ \begin{array}{l} \frac{\theta_0 + \arcsin \frac{r_0}{R}}{2 \pi} + n \\ \frac{\theta_0 - \arcsin \frac{r_0}{R}}{2 \pi} + m + \frac{1}{2}. \end{array} \right.
$$

Having found these values of $t$, we need only substitute them back into the equations that define the helix to determine all of the intersection points exactly. Because the helix is finite, we expect that this will hold for only finitely many values of $m$ and $n$, the bounds on which are given in Appendix C.

# The Case $\sigma \leq 1$

For $\sigma \leq 1$, the derivative $df / d\tau = \sigma \cos (\tau - \theta_0) - 1$ is nonpositive for all $\tau$. Thus, $f$ is monotonically nonincreasing and crosses the axis exactly once. Since this root must occur between $\beta + \sigma$ and $\beta - \sigma$, a simple bisection search can be used. Our algorithm starts the bisection search at $\beta$ and has an initial step size of $\sigma / 2$.
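This monotone case is simple enough to sketch directly. Taking $f(\tau) = \beta - \tau + \sigma \sin(\tau - \theta_0)$, the form consistent with the derivative $df/d\tau = \sigma\cos(\tau - \theta_0) - 1$ and with the root bracket $[\beta - \sigma, \beta + \sigma]$ quoted above, a plain interval bisection (a slight simplification of the stepped search the text describes; the function names are ours) finds the single root:

```python
import math

def f(tau, beta, sigma, theta0):
    # Difference function consistent with df/dtau = sigma*cos(tau - theta0) - 1;
    # its unique root (for sigma <= 1) lies in [beta - sigma, beta + sigma].
    return beta - tau + sigma * math.sin(tau - theta0)

def root_sigma_le_1(beta, sigma, theta0, eps=1e-12):
    # For sigma <= 1 the derivative is nonpositive, so f is monotonically
    # nonincreasing and crosses zero exactly once inside the bracket.
    lo, hi = beta - sigma, beta + sigma
    while hi - lo > eps:
        mid = 0.5 * (lo + hi)
        if f(mid, beta, sigma, theta0) > 0.0:
            lo = mid  # f is positive to the left of the root
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For example, with $\beta = 2$, $\sigma = 0.8$, $\theta_0 = 0.3$, the routine returns the unique $\tau$ satisfying $\tau = \beta + \sigma \sin(\tau - \theta_0)$.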
# Appendix C

# Constructing a Linear Approximation of $f$

# The Slope of $g$

Finding the slopes of the line segments from which $g$ is constructed can be broken down into two parts: finding the slope of the ascending segments and finding the slope of the descending segments.

First we find the slope of the ascending segments. Since all of these segments are parallel, we may choose any one of them as the basis for our problem. Let the endpoints of the segment be denoted by $\tau_{\mathrm{m}}$ (a minimum) and $\tau_{\mathrm{M}}$ (the maximum that follows it). From the roots of $df / d\tau$ we find

$$
\tau_{\mathrm{m}} = \theta_0 - \arccos \frac{1}{\sigma}, \qquad \tau_{\mathrm{M}} = \theta_0 + \arccos \frac{1}{\sigma}.
$$

The slope of the line connecting these two points is given by

$$
s_a = \frac{f(\tau_{\mathrm{M}}) - f(\tau_{\mathrm{m}})}{\tau_{\mathrm{M}} - \tau_{\mathrm{m}}},
$$

which, upon application of the definition of $f$ (and a little algebra), reduces to

$$
s_a = \frac{\sqrt{\sigma^2 - 1}}{\arccos \frac{1}{\sigma}} - 1.
$$

A similar approach finds the slope of the descending segments, using the same $\tau_{\mathrm{M}}$ but with $\tau_{\mathrm{m}}$ being the minimum that follows it rather than precedes it. Again, from the roots of $df / d\tau$, we have

$$
\tau_{\mathrm{M}} = \theta_0 + \arccos \frac{1}{\sigma}, \qquad \tau_{\mathrm{m}} = \theta_0 - \arccos \frac{1}{\sigma} + 2 \pi,
$$

which, using the definitions of slope and of the function $f$, yields

$$
s_d = \frac{- \sqrt{\sigma^2 - 1}}{\arccos \frac{-1}{\sigma}} - 1.
$$

# The Roots of $g$

Once the slopes for $g$ have been found, finding the equations of the lines from which $g$ is made is a simple matter of using a point that is known to be on one of the lines in combination with the slope.
Evaluating $f$ at $\tau = \theta_0$ yields the point $f(\theta_0) = \beta - \theta_0$ on the ascending line segment used in finding the slope. Thus, this line has a root at $\tau_a = \theta_0 - (\beta - \theta_0) / s_a$. In the same vein, we have $f(\theta_0 + \pi) = \beta - \theta_0 - \pi$, this time on the descending line segment used in finding the slope. Thus, this line has a root at $\tau_d = \theta_0 + \pi - (\beta - \theta_0 - \pi) / s_d$. Finally, the equations of the lines are

$$
g_{d_0}(\tau) = s_d (\tau - \tau_d), \qquad g_{a_0}(\tau) = s_a (\tau - \tau_a).
$$

Since $f(\tau + 2\pi) = f(\tau) - 2\pi$, all of the lines used in the construction of $g$ are given by

$$
g_{d_n}(\tau) + 2 \pi n = s_d (\tau - \tau_d - 2 \pi n), \qquad g_{a_m}(\tau) + 2 \pi m = s_a (\tau - \tau_a - 2 \pi m),
$$

where $m, n \in \mathbb{Z}$, which can be reduced to

$$
g_{d_n}(\tau) = s_d \left[ \tau - \tau_d - 2 \pi n \left(1 + \frac{1}{s_d}\right) \right],
$$

$$
g_{a_m}(\tau) = s_a \left[ \tau - \tau_a - 2 \pi m \left(1 + \frac{1}{s_a}\right) \right].
$$

This formulation shows that the roots of the lines from which $g$ is constructed simply repeat in $\tau$ every $2\pi (1 + 1/s)$. Thus, the set of all of these roots is given by

$$
\tau_{d_n} = \tau_d + 2 \pi n \left(1 + \frac{1}{s_d}\right), \tag{6}
$$

$$
\tau_{a_m} = \tau_a + 2 \pi m \left(1 + \frac{1}{s_a}\right).
$$

Since $f$ can have roots only between $\beta \pm \sigma$, we need concern ourselves only with the roots of $g$ that lie between these bounds. Further limits can be placed on the range of possible roots by recalling that the helix is of finite length and thus intersections can occur only in $0 \leq \tau \leq 2\pi L / p$.
Calling the upper bound $\tau_h$ (the lesser of $2\pi L / p$ and $\beta + \sigma$) and the lower bound $\tau_\ell$ (the greater of 0 and $\beta - \sigma$), the limits on the integers $n$ and $m$ are given by

$$
\tau_\ell \leq \tau_d + 2 \pi n \left(1 + \frac{1}{s_d}\right) \leq \tau_h,
$$

$$
\tau_\ell \leq \tau_a + 2 \pi m \left(1 + \frac{1}{s_a}\right) \leq \tau_h.
$$

Solving these inequalities for $n$ and $m$ yields

$$
\frac{\tau_\ell - \tau_d}{2 \pi \left(1 + \frac{1}{s_d}\right)} \leq n \leq \frac{\tau_h - \tau_d}{2 \pi \left(1 + \frac{1}{s_d}\right)},
$$

$$
\frac{\tau_\ell - \tau_a}{2 \pi \left(1 + \frac{1}{s_a}\right)} \leq m \leq \frac{\tau_h - \tau_a}{2 \pi \left(1 + \frac{1}{s_a}\right)}.
$$

(Since $s_d < -1$ and $s_a > 0$, the factor $1 + 1/s$ is positive in both cases, so the directions of the inequalities are preserved.) This range in $n$ and $m$, when substituted back into (6), produces all of the roots of $g$, and thus we have a set of approximations to all of the roots of $f$.

# Quick and Safe Approximation of the Slope of $f$

Since the period of $df / d\tau$ is $2\pi$, a change in $\tau$ that is greater than $\pi$ could move us from the search for one ascending (or descending) root to another. To prevent this, we use $\pi / 2$ as the limit for possible changes in our approximation of $\tau$ (for any single step). Since $f$ is bounded by $g \pm \sigma$, the magnitude of $f$ near a given root of $g$ is at most $\sigma$. Thus, a search routine that produces successive approximations by $\tau_n = \tau_{n-1} \mp (\pi / \sigma) f(\tau_{n-1})$ (taking the minus sign when searching on an ascending segment of $g$, and the plus sign when on a descending segment) would never move away from its intended target root.
However, machine errors could cause this search method to converge to the wrong root, so we use one-half of that quantity.

Furthermore, we add a small (less than $10\%$) period-three oscillation to the constant, to prevent the search algorithm from settling into an oscillation about a root whose period is not an integer multiple of three.

# Reference

Edwards, C.H., and David E. Penney. 1990. *Calculus and Analytic Geometry*. Englewood Cliffs, NJ: Prentice-Hall.

# The Single Helix

R. Robert Hentzel

Scott Williams

Iowa State University

Ames, IA 50011

topquark@iastate.edu

williams@ttl.teradyne.com

Advisor: Stephen J. Willson

# Introduction

We present an iterative algorithm that finds all points of intersection between a finite, infinitely thin helix and an infinite plane, ordered along the helix. The algorithm has constant space complexity and time complexity proportional to the number of intersections and the logarithm of the desired precision.

# Assumptions

- The helix is of finite length.
- The helix is infinitely thin.
- The plane is infinite in extent.
- The helix has constant radius.
- No more than one helix needs to be considered simultaneously; if multiple helices must be modeled, the routine may be run sequentially on each.

# Input and Output

Input to the program consists of:

- the endpoints of the central axis of the helix,
- one point that is on the helix,
- the winding number of the helix,
- the handedness of the helix,
- the specification of a plane, and
- the desired precision of the solutions.

The winding number of a helix is the number of times that a point traveling on the helix makes a complete circle while its projection on the helix axis advances by one unit. In other words, it is the number of coils of the helix contained in one distance unit parallel to its axis. Figure 1 illustrates a helix with a winding number of 2.0.
![](images/f8432492f94bef3527ecf39e391bc84db435d511a982819782f556d55a9ca816.jpg)
Figure 1. A helix with winding number of 2.0 (note the axes!).

Output consists of an ordered list of intersections, giving for each the distance along the axis of the helix, the coordinates of the intersection point, and the distance from the helix point to the plane (which will be less than the prescribed precision).

# Helicoidal Normal Form

We will do all of our computations with geometric figures (helices and planes) which have been put into helicoidal normal form (HNF), described below. Doing so takes advantage of the symmetries of helices and of the infinite extent of planes to simplify the resulting calculations and increase the speed of calculating intersection points. The transformation is invertible, so the coordinates of intersection points obtained may be transformed back into the original system.

# Definition of Helicoidal Normal Form

In HNF, helices and planes have the following properties:

- The axis extends from $(0, 0, 0)$ to $(0, 0, 2\pi \eta L)$, where $L$ is the length of the original helix and $\eta$ is the original winding number of the helix.
- The point $(1,0,0)$ is on the helix; that is, the coordinate system is oriented so that the lowest point on the helix is on the $x$-axis and the radius is unity.
- The winding number of the helix is $1/(2\pi)$.
- The plane's normal vector is normalized to unit length.
- The helix is right-handed.
- The $z$-component of the plane's normal vector is nonnegative.

Thus, a normalized helix may be parameterized as

$$
r(t) = 1, \qquad \theta(t) = t, \qquad z(t) = t
$$

in cylindrical coordinates or as

$$
x(t) = \cos t, \qquad y(t) = \sin t, \qquad z(t) = t
$$

in Cartesian coordinates.
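As a quick sanity check, the HNF parameterization can be exercised directly; a minimal sketch (the function name is ours, not the authors'):

```python
import math

def hnf_helix_point(t):
    """Point on a helix in helicoidal normal form: radius 1, axis on z,
    winding number 1/(2*pi), i.e., one turn per 2*pi of axis advance."""
    return (math.cos(t), math.sin(t), t)

# the radius is unity for every parameter value
for t in (0.0, 1.3, 7.9):
    x, y, _ = hnf_helix_point(t)
    assert abs(math.hypot(x, y) - 1.0) < 1e-12

# one full turn returns to the same (x, y) while z advances by 2*pi,
# so the winding number is 1/(2*pi) turns per unit of axis length
x0, y0, z0 = hnf_helix_point(0.0)
x1, y1, z1 = hnf_helix_point(2 * math.pi)
assert abs(x1 - x0) < 1e-12 and abs(y1 - y0) < 1e-12
assert abs(z1 - z0 - 2 * math.pi) < 1e-12
```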
The equation of a plane is given by

$$
a x + b y + c z + d = 0,
$$

with the triple $(a,b,c)$ representing a unit normal vector, so that

$$
a^2 + b^2 + c^2 = 1 \qquad \text{and} \qquad c \geq 0,
$$

as per the definition of the helicoidal normal form.

# Transformation to Helicoidal Normal Form

The helix and plane are specified by the user with the following data:

- the endpoints of the symmetry axis of the helix:

$$
\vec{x}_0 = (x_0, y_0, z_0), \qquad \vec{x}_1 = (x_1, y_1, z_1);
$$

- any point on the helix, $\vec{p} = (x_p, y_p, z_p)$;
- the winding number of the helix, $\eta > 0$;
- any $(a, b, c, d)$ quadruplet representing the plane, so that the normal vector $\vec{n}$ is $(a, b, c)$; and
- the handedness of the helix.

The transformation to HNF consists of seven steps:

1. translation of one axis endpoint to the origin,
2. rotation of coordinates to bring the other endpoint to the $z$-axis,
3. rotation of coordinates to eliminate any initial phase of the helix,
4. space inversion to ensure right-handedness of helix,
5. scaling of coordinates to normalize the helix radius,
6. scaling of coordinates to normalize the helix winding number, and
7. normalizing the normal vector of the plane.

The details of each step follow.

# Translation of one axis endpoint to the origin

We translate the coordinate system to bring $\vec{x}_0$ (one end of the helix's symmetry axis) to $(0, 0, 0)$:

$$
\vec{x}_0 \gets \vec{x}_0 - \vec{x}_0 = \vec{0}, \qquad \vec{x}_1 \gets \vec{x}_1 - \vec{x}_0, \qquad \vec{p} \gets \vec{p} - \vec{x}_0.
$$

We can find the effect on the representation of the plane as follows.
If $\vec{x}$ is originally on the plane, then

$$
\vec{n} \cdot \vec{x} + d = 0,
$$

so the translated point $\vec{x} - \vec{x}_0$ satisfies the condition

$$
\vec{n} \cdot (\vec{x} - \vec{x}_0) + (d + \vec{n} \cdot \vec{x}_0) = 0.
$$

Accordingly, we transform

$$
d \leftarrow d + \vec{n} \cdot \vec{x}_0.
$$

# Rotation of coordinates to bring the other endpoint to the $z$-axis

Taking

$$
\theta = \arctan (x_1 / y_1), \qquad \phi = \arctan \left(\frac{\sqrt{x_1^2 + y_1^2}}{z_1}\right),
$$

we can construct the rotation matrix

$$
R = R_{yz} R_{xy} = \left[ \begin{array}{ccc} 1 & 0 & 0 \\ 0 & \cos \phi & - \sin \phi \\ 0 & \sin \phi & \cos \phi \end{array} \right] \left[ \begin{array}{ccc} \cos \theta & - \sin \theta & 0 \\ \sin \theta & \cos \theta & 0 \\ 0 & 0 & 1 \end{array} \right],
$$

where $R_{xy}$ rotates in the $xy$-plane (bringing $\vec{x}_1$ into the $yz$-plane) and $R_{yz}$ rotates in the $yz$-plane. This matrix can be applied to bring $\vec{x}_1$ to coincide with $(0,0,L)$, where $L$ is the original length of the helix, given by

$$
L = \sqrt{\vec{x}_1 \cdot \vec{x}_1}.
$$

That is, we transform

$$
\vec{x}_0 \gets R \vec{x}_0 = \vec{0}, \qquad \vec{x}_1 \gets R \vec{x}_1 = (0, 0, L), \qquad \vec{p} \gets R \vec{p}.
$$

We can find the effect on the representation of the plane as follows. If $\vec{x}$ were originally on the plane, then

$$
\vec{n} \cdot \vec{x} + d = 0.
$$

Now the rotated point $R\vec{x}$ satisfies the condition

$$
R \vec{n} \cdot R \vec{x} + d = 0,
$$

so we make the transformation

$$
\vec{n} \leftarrow R \vec{n}.
$$

# Rotation of coordinates to eliminate any initial phase

We now look at the point $\vec{p}$ on the helix and note its parametric representation as

$$
r_p = \sqrt{x_p^2 + y_p^2}, \qquad \theta_p = \arctan (y_p / x_p), \qquad z_p = z_p.
$$

Thus, in going downward along the helix's axis from $z = z_p$ to $z = 0$ (the bottom end), we will go through $\eta z_p$ rotations, bringing the initial angular position of the helix $(\theta_0)$ to

$$
\theta_0 = \theta_p - 2 \pi \eta z_p.
$$

We wish this to coincide with the $x$-axis, so we apply an additional counterclockwise rotation of $-\theta_0$ about the $z$-axis to $\vec{x}_0$, $\vec{x}_1$, and $\vec{p}$ and to the representation of the plane, using the same techniques as in the previous section.

# Scaling of coordinates to normalize helix radius

We can now parameterize our helix as

$$
x(t) = r \cos t, \qquad y(t) = r \sin t, \qquad z(t) = t,
$$

with $r = r_p = \sqrt{x_p^2 + y_p^2}$. We note that if any point $\vec{x}(t)$ coincides with the plane, then

$$
a \cdot r \cos t + b \cdot r \sin t + c t + d = 0;
$$

so the radius-normalized point $(\cos t, \sin t, t)$ satisfies

$$
(a r) \cos t + (b r) \sin t + c t + d = 0,
$$

and we can make the transformation

$$
a \leftarrow a r, \qquad b \leftarrow b r, \qquad r \leftarrow 1.
$$

# Space inversion to ensure right-handedness of helix

If the helix is originally left-handed, it can be made right-handed by effecting a spatial inversion of the coordinate system about the $xz$-plane. This is accomplished by negating the $y$-coordinate of the plane's normal vector:

$$
b \leftarrow - b, \qquad \text{handedness} \leftarrow \text{right}.
$$

# Scaling of coordinates to normalize helix winding number

The final parameter to normalize is the helix winding number. We do this by forcing the helix to advance one rotation per $2\pi$ advance along its axis. To compensate, we scale our helix.

If the helix originally advanced $\eta$ turns per unit axis advance and had a length of $L$, then the same number of turns will be made in a length of $2\pi \eta L$ with a winding number of $1/(2\pi)$.
Thus:

$$
L \gets 2 \pi \eta L, \qquad \eta \gets \frac{1}{2 \pi}.
$$

We can find the effect on the representation of the plane as follows. If a point $\vec{x} = (x, y, z)$ is originally on the plane, then

$$
a x + b y + c z + d = 0.
$$

Then, since the transformed point $(x, y, 2 \pi \eta z)$ fulfills the condition

$$
a x + b y + \frac{c}{2 \pi \eta} (2 \pi \eta z) + d = 0,
$$

we can make the transformation:

$$
c \leftarrow \frac{c}{2 \pi \eta}.
$$

# Normalizing the normal vector of the plane

We make the transformations

$$
a \gets \frac{a}{\sqrt{a^2 + b^2 + c^2}}, \qquad b \gets \frac{b}{\sqrt{a^2 + b^2 + c^2}},
$$

$$
c \gets \frac{c}{\sqrt{a^2 + b^2 + c^2}}, \qquad d \gets \frac{d}{\sqrt{a^2 + b^2 + c^2}}.
$$

If $c < 0$, we negate all four of $a$, $b$, $c$, and $d$, which leaves the plane unchanged while making the $z$-component of its normal vector nonnegative.

# Locating Intersections

# Transcendental Lemma

We need solutions of the transcendental equation

$$
f(t) = a \sin t + b \cos t + c = 0,
$$

subject to the restriction that $a^2 + b^2 + c^2 = 1$. We observe that

$$
\begin{aligned}
\sqrt{a^2 + b^2} \sin \left(t + \arctan \frac{b}{a}\right) &= \sqrt{a^2 + b^2} \left[ (\sin t) \left(\cos \arctan \frac{b}{a}\right) + (\cos t) \left(\sin \arctan \frac{b}{a}\right) \right] \\
&= \sqrt{a^2 + b^2} (\sin t) \left(\frac{a}{\sqrt{a^2 + b^2}}\right) + \sqrt{a^2 + b^2} (\cos t) \left(\frac{b}{\sqrt{a^2 + b^2}}\right) \\
&= a \sin t + b \cos t.
\end{aligned}
$$

So, we attack the original problem as:

$$
\begin{aligned}
\sqrt{a^2 + b^2} \sin \left(t + \arctan \frac{b}{a}\right) &= - c, \\
\sin \left(t + \arctan \frac{b}{a}\right) &= \frac{- c}{\sqrt{a^2 + b^2}}, \\
t &= \arcsin \left(\frac{- c}{\sqrt{1 - c^2}}\right) - \arctan \left(\frac{b}{a}\right).
\end{aligned}
$$

We note:

- If $c > \sqrt{2}/2$, then the arcsin has an argument of magnitude greater than unity, so no solution exists.
- The sine equation has two distinct solutions in $[0, 2\pi)$, both of which must be checked.
- If $t$ is a solution of $f(t) = 0$, then $t + 2\pi n$ is also a solution for every integer $n$.

# Difference Function

The phrase "the (helix) point $t$" will be taken to mean the helix point parameterized by $t$, namely $(\cos t, \sin t, t)$ for $t \in [0, L]$. Since all points $\vec{x}$ on the plane satisfy

$$
a x + b y + c z + d = 0,
$$

we see that finding the points of intersection between the helix and the plane amounts to finding $t$ such that

$$
f(t) = a \cos t + b \sin t + c t + d = 0
$$

for $t \in [0, L]$.

We name $f$ our difference function since it provides a measure of the distance between the helix and the plane for a given $t$. In fact, $f(t)$ is the signed perpendicular distance from the helix point $t$ to the plane. We will eventually seek intersections by attempting to minimize the absolute value of $f(t)$.

# Slope of the Difference Function

The slope of the difference function at the helix point $t$ is given by

$$
f'(t) = \left. \frac{d f(x)}{d x} \right|_{x \leftarrow t} = - a \sin t + b \cos t + c.
$$

We note:

- For $c > \sqrt{a^2 + b^2}$ (which is equivalent to $c > \sqrt{2}/2$), the slope is everywhere positive, so the difference function is monotonically increasing.
- The form of the slope function is that studied in the Transcendental Lemma above, so we know that we can find zeros of the slope function, and hence local extrema of the difference function, analytically. Here the two solutions of the sine equation correspond to a local maximum and a local minimum of the difference function.

# Upper and Lower Bounds

It is easy to find bounds on $t$ that contain all of the possible intersection points.
At an intersection point, we have

$$
a \cos t + b \sin t + c t + d = 0,
$$

so

$$
t = \frac{- a \cos t - b \sin t - d}{c}.
$$

Writing $a \cos t + b \sin t$ in amplitude-phase form as $\sqrt{a^2 + b^2}\, \sin (t + \varphi)$ and using $|\sin (t + \varphi)| \leq 1$ shows that

$$
- \sqrt{a^2 + b^2} \leq - a \cos t - b \sin t \leq \sqrt{a^2 + b^2},
$$

which provides bounds on $t$:

$$
\ell_b \equiv \frac{- \sqrt{a^2 + b^2} - d}{c} \leq t \leq \frac{\sqrt{a^2 + b^2} - d}{c} \equiv u_b.
$$

All intersection points $t$ must lie within these bounds.

If $c = 0$ (the discontinuity in the above formulae), the plane runs exactly parallel to the axis of the helix and would intersect an infinite helix either infinitely often (inside the radius) or never (outside the radius). In this case, $\ell_b$ may be taken to be $-\infty$ and $u_b$ to be $+\infty$, since both values will be truncated by the finiteness of the helix, as mentioned in the next subsection.

# Intervals and Subintervals

We define the search interval lower bound $\ell_B$ to be the larger of 0 (one end of the helix) and $\ell_b$. We define the search interval upper bound $u_B$ to be the smaller of $L$ (the other end) and $u_b$. All intersections must lie in the search interval $[\ell_B, u_B]$.

We divide the interval $[\ell_B, u_B]$ into subintervals, broken by the local extrema, both maxima and minima. Each subinterval, except the leftmost and rightmost, is bounded by adjacent local extrema. The leftmost and rightmost are bounded on their "inner" side by a local extremum and on their "outer" side by either $\ell_B$ or $u_B$. We can do this because we can apply the Transcendental Lemma to the slope function to calculate two local extrema, one minimum and one maximum.
Since the spacings of minima and maxima are both $2\pi$, we can now locate all local extrema within the search interval by repeated addition/subtraction of $2\pi$ from the original maximum and minimum (see Figure 2).

![](images/d8e339426aaf9fd2741d8be54c7a30f8dc8a048397af1b90de34a6119019b424.jpg)
Figure 2. Difference function $f(t)$ with search interval limits shown as the outermost dotted vertical lines, zero shown as a dashed horizontal line, and subinterval boundaries shown as dotted vertical lines. Note the small(er) subintervals at the ends.

It may be the case, if $c > \sqrt{2} / 2$, that there are no local extrema. Then there is only one subinterval, $[\ell_B, u_B]$, which contains the only intersection. (There can be only one intersection, since $f$ is monotonically increasing in this case.)

# Searching the Subintervals

We now consider each subinterval separately. We let $\ell_t$ and $u_t$ represent the initial left and right endpoints of the subinterval. We evaluate $f(\ell_t)$ and $f(u_t)$ and compare the signs. If the signs agree, then there cannot be an intersection in the subinterval; if there were, there would have to be a local extremum within the subinterval, but all local extrema fall on subinterval boundaries by construction. However, if the signs are different, there must be an intersection point, by application of the Intermediate Value Theorem to the continuous function $f$, which is positive at one endpoint and negative at the other.

If endpoints show that no intersection can exist (i.e., both have the same sign), then the subinterval is immediately discarded (this can occur only with the leftmost and rightmost subintervals). Otherwise, we begin a binary search for the intersection point, provided that the endpoint itself is not a point of intersection within the desired precision.

We evaluate $f$ at the midpoint of the subinterval $(\ell_t + u_t) / 2$ and compare its sign with the endpoints.
By the Intermediate Value Theorem, the intersection must lie between the midpoint and whichever endpoint differs in sign from it (they cannot both differ, since they have different signs). We can therefore define a new subinterval equal to either the left or right half of $[\ell_t, u_t]$ as appropriate and repeat this process. Eventually, evaluation of $f$ at the midpoint must yield a number within the prescribed accuracy from zero. At this point, the binary search terminates, having located the intersection with sufficient precision.

We note that we apply the binary search only in subintervals that are guaranteed to contain an intersection by virtue of the Intermediate Value Theorem.

When intersections are found, the proper inverse transformations are performed to return the point to its original coordinate system, and it is displayed.

After completing a subinterval, processing proceeds with the next and terminates following the rightmost. Because of the left-to-right processing of subintervals, the intersections are generated in order of increasing $t$ along the helix axis.

# Space and Time Complexity

# Space Complexity

In our implementation, each subinterval is calculated and searched in turn, obviating the need to store large arrays of intersection points or of subinterval edges. Thus, the storage space needed is independent of the input. For our implementation, it is approximately 300 bytes.

# Time Complexity

The time to put the helix and plane into HNF is independent of the input and, in the current implementation, involves approximately 118 floating-point multiplies/divides and 36 floating-point trigonometric functions and square roots. Floating-point adds, subtracts, and comparisons take negligible time relative to multiplication, division, square roots, and trigonometric function calls.
The time per intersection is 40 floating-point multiplications/divisions and 8 complicated functions, plus 4 multiplications and 3 complicated functions per iteration of the binary search. The largest that a subinterval may be is $2\pi$, so to divide this by halving into a slice as small as the desired accuracy $\epsilon$ requires $\log_2(2\pi/\epsilon)$ steps. Thus, for each intersection, $40 + 4\log_2(2\pi/\epsilon)$ multiplications and $8 + 3\log_2(2\pi/\epsilon)$ complicated functions are required. For an accuracy of one part in $10^6$, this amounts to 131 multiplications and 76 complicated functions per intersection. These are absolute bounds.

There are sufficiently few operations that on a DECstation 5000/240, the processing can be carried out in a tiny fraction of a second for as many as 200 intersection points (we did not test larger numbers).

We believe that this is sufficiently fast to be used in modeling large numbers of helices or as part of a real-time rendering engine.

# Algorithm Analysis

# Strengths

- The algorithm requires little memory.
- The algorithm is guaranteed to find all intersection points within the desired precision, unless this is finer than the machine's floating-point precision.
- Intersections are generated quickly enough for real-time applications.
- The algorithm requires time linear in the number of intersection points and logarithmic in the desired precision. Hence, an order-of-magnitude increase in accuracy costs only 12 multiplications and 9 complicated functions per intersection.

# Weaknesses

- The algorithm does not take previous intersections into account when searching a subinterval; there is reason to think that doing so could provide a moderate increase in speed, perhaps $15\%$.
- The number of multiplications necessary per intersection can be cut by 18, by combining the three separate rotation matrices into a single one.
+- The algorithm currently weights each endpoint equally when choosing a new subinterval during the binary search. Using a linear interpolation that weighted each endpoint by the value of $f$ evaluated at that endpoint could increase speed significantly, because of the smoothness and near-linearity of $f$ . + +# Planes and Helices + +Samar Lotia + +Eric Musser + +Simeon Simeonov + +Macalester College + +St. Paul, MN 55105 + +{slotia, ssimeonov}@macalstr.edu + +emusser@prodea.com + +Advisor: A. Wayne Roberts + +# Introduction + +We are asked to design, implement, and test a mathematical algorithm that locates in real time all of the intersections of a helix and a plane located in general positions in space. In addition, we must prove that our algorithm is mathematically and computationally correct. + +# Assumptions + +- "Real time." We assume that "real time" means that the time to solve a reasonably "difficult" problem must be very small. For example, an algorithm that is used for the back end of a Computer Aided Geometric Design program must not impose unacceptable delays on the user. When the user repositions the helix or plane, the user expects an immediate refresh of the screen—usually in only a fraction of a second. For other applications, our algorithm should take little time compared to the calling program. To meet these requirements, the algorithm must have a time complexity linear in the number of intersections; our algorithm is indeed linear. +- Helix/Plane. We assume the strict mathematical definition of a single helix: that it is infinite and nonelliptical, and that it "wraps around" a cylinder. We believe that our algorithm will work for other "helices," such as those having elliptical bases; but we have not tested these cases to any extent. We also assume that the plane extends infinitely. +- Correctness. We assume that twelve digits of precision is acceptable for all calculations. This is sufficient for most biotechnological applications. 
Twelve digits means that we have error less than the radius of some atoms [Chang 1994]. For some applications, engineers often expect fewer digits of accuracy; so our program allows the user to change the desired precision.

# Summary of Approach

Our approach to the problem involves

- definition of design requirements,
- development of a mathematical model for the problem,
- design and implementation of an algorithm,
- debugging and testing, and
- evaluation.

# Definition of Design Requirements

The algorithm will be used as a support routine for mission-critical processes, where its failure to produce correct results, or failure to produce results on time, could have dire consequences. This fact leads us to the following design requirements, in decreasing order of importance:

- Correctness. The algorithm must produce correct results.
- Robustness. The algorithm must handle exceptions well and not terminate abnormally.
- Performance. The algorithm must execute in real time.
- Efficiency. The algorithm must spare system resources, as long as the above three requirements are not adversely affected.
- Flexibility. The algorithm must allow users to formulate the problem in different ways; e.g., it must allow more than one way of defining a plane or a helix. Also, users must be able to fine-tune the algorithm to improve performance.
- Portability. The algorithm must be machine-independent, written in a common programming language, and easy to transfer to a different programming language of choice (e.g., to include it in an embedded system).

Because algorithms execute on physical entities (computers) that have some finite working precision, it is possible that a computationally correct algorithm may produce incorrect results due to roundoff and compound errors. In our case, two types of errors are possible: skipping an existing
It is difficult to claim that one type of error should be preferred to another; we choose, if possible, to minimize the second kind of error given the possibility of some error of the first type. + +# Development of a Mathematical Model + +# Initial Development + +# The General Case + +Having examined the problem in general Cartesian coordinates, we found that a graphical or vector analysis-related approach would fail us, in the sense that it would be incredibly hard to program. Hence, we attempted to reduce the problem to an algebraic problem, since algebraic techniques are generally much more suited to programming. In effect, we "projected" the helix onto the plane in the following manner. + +Consider the general parametric equations of a helix in space (for derivation, see the Appendix): + +$$ +\begin{array}{l} {x} {=} {a _ {1 1} \cos \alpha t + a _ {1 2} \sin \alpha t + a _ {1 3} t + a _ {1 4}} \\ { y } { = } { a _ { 2 1 } \cos \alpha t + a _ { 2 2 } \sin \alpha t + a _ { 2 3 } t + a _ { 2 4 } } \\ { z } { = } { a _ { 3 1 } \cos \alpha t + a _ { 3 2 } \sin \alpha t + a _ { 3 3 } t + a _ { 3 4 } . } \\ \end{array} +$$ + +The equation of a plane in space may be written as + +$$ +a x + b y + c z - d = 0. +$$ + +Our transformation replaces $x, y$ , and $z$ on the left-hand side in this equation with the corresponding parametric forms of the helix, thus returning an expression in $t$ , which we call $f(t)$ : + +$$ +f (t) = A \cos t + B \sin t + C t - D, +$$ + +where $A, B,$ and $C$ are appropriately transformed coefficients. The $\alpha$ in the cos and sin terms has been incorporated into $C$ via a change of parameter from $t$ to $t / \alpha$ . + +The task now is to solve the equation $f(t) = 0$ . After perusing relevant literature, we concluded that this equation must be solved numerically [Plybon 1992]. Hence, we developed a numerical technique that, given the parameters $A, B, C,$ and $D$ , attempts to locate all the roots of the equation. 
Many well-documented algorithms guarantee convergence to roots—given certain bounds on the problem—and give an easily computable bound on the error in the result. The numerical technique that we employ is heavily influenced by information that we have about the equation $f(t) = 0$. For instance, we know that the extrema of the function (if any) occur periodically, and we use this fact at several stages of our method, hence ensuring an efficient algorithm. We essentially built a robust, strong equation solver (one that uses problem-specific information to maximize the efficiency of the algorithm).

The first step in our approach is to examine the general form of the function, as in Figure 1. We note that all roots must be located between two extrema that are on opposite sides of the $t$-axis (assuming a continuous function). The only other case, which we handle separately, is when a root and an extremum or inflection point occur simultaneously.

![](images/78b3edccfdf61b1926d714bb38fbb9cacca751ca3dedd325052d72401f0d2c26.jpg)
Figure 1. General functional form.

We begin by locating the minima and maxima of the function $f(t)$, which are periodic with period $2\pi$. These are found by solving the equation

$$
f'(t) = -A \sin t + B \cos t + C = 0.
$$

From

$$
\cos t = \frac{A \sin t - C}{B},
$$

by using $\cos^2 t + \sin^2 t = 1$ we find

$$
\sin t = \frac{A C \pm B \sqrt{A^2 + B^2 - C^2}}{A^2 + B^2}.
$$

However, this method returns some extraneous roots because of the squaring (just as squaring $t = 1$ to obtain $t^2 = 1$ introduces the extraneous solution $t = -1$). These are discarded via a simple test, namely, substituting the values back into $f'(t)$ and checking whether or not the derivative is indeed zero.

We then interpolate the root by connecting the two extrema via a line segment (as in Figure 2) and pass the interpolated value to our root-finding algorithm.
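The closed-form extremum formula above takes only a few lines of code. In this sketch (our own illustration, not the team's program), each candidate value of $\sin t$ is paired with the value of $\cos t$ implied by $B \cos t = A \sin t - C$, and each candidate is substituted back into $f'$ exactly as described, discarding any extraneous branch:

```cpp
#include <cmath>
#include <vector>

// Candidate extrema of f(t) = A cos t + B sin t + C t - D in one base period,
// from the closed-form solutions for sin t derived in the text.  Each sin
// value is paired with the cos value implied by B cos t = A sin t - C, and
// the pair is substituted back into f' to discard any extraneous branch.
// (Illustrative sketch, not the team's code; the cases B == 0 and
// A^2 + B^2 - C^2 < 0 are assumed to be handled elsewhere.)
std::vector<double> extrema_in_base_period(double A, double B, double C) {
    std::vector<double> ts;
    double disc = A*A + B*B - C*C;
    if (disc < 0.0 || B == 0.0) return ts;      // monotone f, or degenerate B
    const double sgns[2] = {1.0, -1.0};
    for (double sgn : sgns) {
        double s = (A*C + sgn * B * std::sqrt(disc)) / (A*A + B*B);
        double c = (A*s - C) / B;               // cos t from the linear relation
        double t = std::atan2(s, c);            // principal value in (-pi, pi]
        if (std::fabs(-A*std::sin(t) + B*std::cos(t) + C) < 1e-9)
            ts.push_back(t);                    // keep only genuine roots of f'
    }
    return ts;
}
```

For $A = B = 1$, $C = 0$, the extrema of $\cos t + \sin t$ in the base period fall at $t = \pi/4$ and $t = -3\pi/4$, and both candidates survive the back-substitution test.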
![](images/3398ebae0eff20d46dafe0689e509f6e6f7965d48de7ce51df9f38074cb3dfcd.jpg)
Figure 2. The interpolation method.

We must now judiciously choose a value of $t$ that guarantees roots in its immediate neighborhood. We choose the value $t_0 = D / C$, as we are ensured that—if roots exist—there is one within $2\pi$ of this value of $t$ (see the Appendix for a detailed argument).

# Certain Special Cases

If the coefficient $C$ is 0, then the function $f(t)$ is periodic and oscillates to within $\sqrt{A^2 + B^2}$ (the maximum possible value of $A \cos t + B \sin t$) about the line $g(t) = D$. Thus, if $|D| > \sqrt{A^2 + B^2}$, then the function never intersects the $t$-axis and we have no roots; the plane is parallel to the helix and outside the "reach" of its radius. In such a case, our program returns the message "No Roots." If, on the other hand, $|D| \leq \sqrt{A^2 + B^2}$, then we have infinitely many roots; this case is handled appropriately.

Another important case is that of only one root (see Figure 3). This occurs when $A^2 + B^2 - C^2 \leq 0$: then $f$ is monotonic and the equation has exactly one real solution, which may fall at an inflection point where $f'(t) = 0$. This condition is recognized by our algorithm and appropriately dealt with by calling the bisection method instead of the usual Newton-Raphson (which exhibits very slow convergence when roots and extrema/inflection points occur simultaneously). The bisection method guarantees convergence as long as we bracket the root properly. We are confident that we do so, as we give bisection an interval of radius $2\pi$ about the point $t_0 = D / C$. The bisection method achieves our prescribed goal of 12-digit accuracy in no more than 44 iterations.

![](images/3134fa4d81cb423c893a0824c251692fb6f59fb3a8a43e42b1617187467374f6.jpg)
Figure 3. The single-root case.
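The 44-iteration bound follows from halving: an interval of length $4\pi$ must be halved $\lceil \log_2(4\pi / 10^{-12}) \rceil = 44$ times before it shrinks below $10^{-12}$. A minimal bisection sketch (our own illustration, not the team's code) confirms the bound on a sample single-root function:

```cpp
#include <cmath>

// Bisection over the bracket [t0 - 2*pi, t0 + 2*pi] around t0 = D/C.
// Halving an interval of length 4*pi until it is shorter than 1e-12 takes
// ceil(log2(4*pi / 1e-12)) = 44 steps, matching the bound quoted in the text.
// (Illustrative sketch, not the team's code.)
template <typename F>
double bisect(F f, double lo, double hi, double tol, int& iters) {
    iters = 0;
    while (hi - lo > tol) {
        double mid = 0.5 * (lo + hi);
        if (f(lo) * f(mid) <= 0.0) hi = mid;  // sign change in [lo, mid]
        else                       lo = mid;  // sign change in [mid, hi]
        ++iters;
    }
    return 0.5 * (lo + hi);
}
```

With $A = 1$, $B = 0$, $C = 2$, $D = 0$ (so $f(t) = \cos t + 2t$, a single-root case with $t_0 = 0$), bisection over $[-2\pi, 2\pi]$ reaches the prescribed tolerance within 44 iterations.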
# Algorithm Description

To facilitate understanding, we provide four levels of abstraction in the description of our algorithm. At the top level, we use a linearized flowchart that shows the main subproblems that need to be solved. [EDITOR'S NOTE: Because of space considerations, we do not reproduce the flowchart.] Parallel to the flowchart, at the second level of abstraction, we offer comments that provide more detail of the workings of the algorithm. They refer to the third level of abstraction, mathematical proofs and detailed explanations. Finally, at the lowest level of abstraction is the C++ code of our program, which includes many comments with Mathematica code and references to relevant literature.

Before we go into the details of the algorithm implementation, we mention some general conventions.

# Input

- Planes can be defined by the user in three ways: by a general Cartesian equation of the form $ax + by + cz = d$, by two vectors and a point, or by three points.
- Helices can be defined by the user in two ways: by general parametric equations, or by the three Eulerian angles and the translation vector that map the $z$-axis to the central axis of the helix.

# Output

- If the helix does not intersect the plane, no roots are returned.
- If there are infinitely many solutions (a case in which the plane is parallel to the helix axis), sufficient information is provided so that the user can generate all of the intersection points.
- Otherwise, a structure containing the $x, y$, and $z$ coordinates of the points of intersection is produced.

# Accuracy of Estimation

The default working precision of calculation in our C++ program is twelve digits, but it can be changed by modifying a single variable in the code. The maximum working precision is limited by the floating-point precision of the computer.

# Portability
+ +- Our algorithm is implemented in ANSI C++, ensuring portability across most computing platforms. The code does not use machine-dependent features, and it could be translated to any procedural language. + +# Testing and Quality Control + +We devoted more than half of our algorithm design and implementation efforts to testing. We checked the correctness of ideas and implementations at four different levels: + +- Math Model. All transformations and function forms that we used were generated symbolically using Mathematica's standard features and the Vector Analysis and Rotations packages. All symbolic solutions to equations were checked using Mathematica. Whenever possible, we simplified expressions, sometimes by applying trigonometric substitution rules manually. +- Algorithm Design. Our root-finding procedure was carefully chosen always to find a bracketed root (if necessary, by invoking bisection). +- Implementation. We applied the function CForm to convert Mathematica expressions to C code when transferring expressions to our implementation. That minimized the chances of erroneous expression entry. The root-finding procedure that we use was taken from Plybon [1992]. The procedure was independently tested against the built-in Mathematica + +routine FindRoot, which implements a combination of Newton-Raphson and the secant methods [Wolfram 1991]. Our procedure never failed to find a root and never reported a root where there was none. In several cases where FindRoot failed, our procedure correctly managed to find a root. + +- Runtime. We performed three different types of checks on the output of our program: + +- We generated more than 50 functions of the form $f(t) = A\cos t + B\sin t + Ct - D$ and checked whether our program correctly found their roots. We implemented a set of Mathematica routines that finds the roots of the equation $f(t) = 0$ by the same algorithm as our program but with higher accuracy. 
In all test cases, the output of our program agreed with Mathematica's output, suggesting that round-off error is not a major problem in our implementation. In most cases, we visually inspected the graph of $f(t)$ to ensure that no roots were missed and that no false roots were introduced. The test cases included functions with no roots, one root, many roots, and infinitely many roots. We considered potentially problematic cases, such as roots at tangency points and roots at inflection points. We explored and tested all control paths of the algorithm. Debugging output was generated and investigated carefully.
- We entered more than 50 helices and planes in various input formats and used our algorithm to find the coordinates of the intersection points between them. Then for each test run we used Mathematica to do a 3-D plot of the plane, the helix, and the intersection points (see Figure 4). We inspected the 3-D plots from various viewpoints to ensure that no intersection points were missed and that no extraneous points were plotted. Our program passed all tests.
- Often in testing, we were uncertain about the actual location of the intersection points. So we designed an additional battery of tests to check the results of our program against a known (sub)set of the intersection points. We obtained the known intersection points by starting with an arbitrary helix and defining an intersecting plane either by choosing three points on the helix or by choosing a point on the helix and an arbitrary vector. In the first case, we knew the coordinates of at least three intersection points. In the second case, we could position the plane so as to experiment with different patterns of intersection. We then checked the results of our program against a subset of known roots. In all of the more than 50 test cases, our algorithm performed correctly.

In all test cases, our program performed successfully.
This result—added to the fact that our algorithm is closely based on a rigorously proven + +mathematical model—gives us a high degree of confidence that our program is indeed computationally correct. Also, we have seen no evidence that roundoff or compound errors are a significant source of error. + +![](images/8e234ded018cf6544ba8d29d80bc248756a67171cb249ab60438afe8a9fe4a44.jpg) +Figure 4. Our solutions plotted on the given helix and plane. + +# Evaluation + +# Correctness + +Our tests have shown that mathematically and computationally our algorithm is correct. Numerically, however, problems can arise. The possibility of compounding error and loss of precision is inevitable in some cases (though rare). + +Compounding error is introduced in routines that map our original helix onto a helix located about the $z$ -axis (i.e., computing the coefficients $A, B, C,$ and $D$ , for $f(t)$ ). We are uncertain of the error bounds on these calculations, but comparisons with high-precision Mathematica runs show that the error is small (less than $10^{-12}$ ). + +Fortunately, given correct values of $A$ , $B$ , $C$ , and $D$ , we can guarantee that the roots found are accurate to the desired working precision. This is because we use Newton-Raphson and the bisection method as our root-finding techniques. With Newton's method, it is possible to get some idea of absolute error of a root $x_0$ simply by looking at the value of $f(x_0)$ (it should be 0 if we have a root). Also, error is not compounded with Newton's method; each iteration actually decreases the error. Newton-Raphson converges quadratically [Plybon 1992], and near a root the number of significant digits approximately doubles with each step. + +Detrimental error can also enter the problem when calculating the brackets for each root. If error in numerical computation places a bracket on the wrong side of a root, that root may be lost. 
We have never encountered such a case but expect that cooked-up data could produce such an error. A possible remedy (not guaranteed to work in all cases) is first to approximate the bracket using our exact formulas and then to run a root-finding method on them to reduce the error. We did not implement this strategy, because we feel that the chance of this error occurring does not warrant the loss in speed that will occur. + +Further, in searching for solutions of $f(t) = 0$ , if two roots are extremely close, they can unfortunately be mistaken as one. However, if two extremely close roots are found, it may not even make sense to consider them as distinct, since they may be the product of numerical error. + +# Robustness + +Our algorithm checks extensively for exception cases and will not terminate abnormally due to a computation error or lack of system resources. We implemented checks for special cases that in effect trap errors. One example: We check for an infinite number of roots before we search for roots. We handle "special" cases such as tangencies, inflection points, double roots, etc. + +Our algorithm ensures that all roots are bracketed. This prevents Newton's method from accidentally going off and finding another root. Our implementation of Newton's method incorporates the bisection method in cases where Newton's does not converge fast enough or Newton's method departs the bracketed interval. The bisection method is guaranteed to find a root within a bracketed interval [Plybon 1992]. + +In our algorithm, there is an inherent limit on the number of possible roots; but this limit can be easily increased or even eliminated (allowing memory of the computer to be the only limitation). + +# Performance/Efficiency + +The performance of our algorithm is linear in the number of intersections. This is because we do a single root-find for each intersection (including bracketing). 
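The hybrid described above, a Newton-Raphson iteration that falls back to a bisection step whenever an iterate would leave the bracket, can be sketched as follows (an illustrative reconstruction under our own naming, not the team's code):

```cpp
#include <cmath>

// Newton-Raphson safeguarded by bisection: the bracket [lo, hi] is maintained
// around a sign change of f, and a Newton step is accepted only if it stays
// strictly inside the bracket; otherwise a bisection step is taken instead,
// so the iteration can never wander off to a different root.
// (Illustrative reconstruction, not the team's code.)
template <typename F, typename DF>
double safeguarded_newton(F f, DF df, double lo, double hi, double tol) {
    double t = 0.5 * (lo + hi);
    for (int i = 0; i < 200; ++i) {
        double ft = f(t);
        if (ft == 0.0) return t;
        if (f(lo) * ft <= 0.0) hi = t; else lo = t;   // shrink the bracket
        double d = df(t);
        double next = (d != 0.0) ? t - ft / d : 0.5 * (lo + hi);
        if (next <= lo || next >= hi)                 // Newton left the bracket
            next = 0.5 * (lo + hi);                   // take a bisection step
        if (std::fabs(next - t) < tol) return next;
        t = next;
    }
    return t;
}
```

On the sample single-root function $f(t) = \cos t + 2t$ with the bracket $[-2\pi, 2\pi]$, the Newton steps stay inside the bracket and the iteration converges in a handful of steps.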
The complexity of a root-finding method depends on how the function is shaped and on the number of digits desired. Typically, when Newton's method is used, 5 or 6 iterations are required to find each root to 12 digits of precision, while the bisection method can be shown to take at most 44 iterations.

For mapping the helix, finding the roots of the first derivative, etc., the time required is relatively constant. Therefore, the time complexity of the algorithm is dominated by the number of intersections.

In addition, our code is efficient in use of space, since the space complexity is also linear in the number of intersections.

# Suggestions for Improvement

As the general organization of our algorithm is closely based on a mathematical model, we believe that it is not possible to improve it without a thorough revision of the underlying methodology. However, the implementation of the algorithm can be improved in several ways:

- One can attempt to modify the evaluation of expressions so as to reduce the compound and roundoff errors. This often necessitates understanding and use of architecture-specific features of the processor on which the algorithm is executing, thus limiting portability.
- One can attempt to find bounds for the error introduced during the calculation of the $A, B, C$, and $D$ coefficients for $f(t)$. An estimate of the relationship between this error and the error generated by the root-finding algorithm would also be helpful.
- A useful yet hard-to-implement feature would be the inclusion of internal validation routines that improve correctness and robustness by monitoring for unacceptable computational errors while the algorithm is executing. Such routines would adversely affect performance.
- The input procedures could potentially be extended to include alternative definitions of a helix. However, our explorations of the relevant literature yielded no definitional forms different from the ones that we use.
- The Mathematica routines used in the testing process could be extended to infinite-precision calculation, running in batch mode to check at random the correctness of solutions provided by the real-time algorithm.
- The algorithm could be extended to handle other types of helices: alpha helices, double helices, etc. The underlying methodology of the algorithm should remain unchanged.

# Appendix

# General Parametric Equations for a Helix

Consider a general $3 \times 4$ rotation-translation matrix:

$$
\left[ \begin{array}{llll} a_{11} & a_{12} & a_{13} & a_{14} \\ a_{21} & a_{22} & a_{23} & a_{24} \\ a_{31} & a_{32} & a_{33} & a_{34} \end{array} \right]
$$

and the "unit" helix (in vector form)

$$
\left(\cos(\alpha t - t_0), \sin(\alpha t - t_0), t\right).
$$

Applying the matrix produces any general helix:

$$
\left[ \begin{array}{cccc} a_{11} & a_{12} & a_{13} & a_{14} \\ a_{21} & a_{22} & a_{23} & a_{24} \\ a_{31} & a_{32} & a_{33} & a_{34} \end{array} \right] \left[ \begin{array}{c} \cos(\alpha t - t_0) \\ \sin(\alpha t - t_0) \\ t \\ 1 \end{array} \right],
$$

giving the general parametric equations of a helix in space:

$$
x = a_{11} \cos(\alpha t - t_0) + a_{12} \sin(\alpha t - t_0) + a_{13} t + a_{14}
$$

$$
y = a_{21} \cos(\alpha t - t_0) + a_{22} \sin(\alpha t - t_0) + a_{23} t + a_{24}
$$

$$
z = a_{31} \cos(\alpha t - t_0) + a_{32} \sin(\alpha t - t_0) + a_{33} t + a_{34}.
$$

Upon expanding the sin and cos terms, we obtain the same format as presented in the text of the paper.

# Why $t_0 = D/C$?

The function

$$
f(t) = A \cos t + B \sin t + C t - D
$$

can be thought of as arising from the intersection of a helix with parametric equations

$$
x = \cos t, \qquad y = \sin t, \qquad z = t
$$

with a plane with equation

$$
A x + B y + C z = D.
$$

As far as the root-finder is concerned, we have a "vertical" helix with radius 1 and the plane as above. Thus, we say that the point at which the helix's central axis (also the $z$-axis for the root-finder) meets the plane $Ax + By + Cz = D$ is the point around which the roots are distributed almost symmetrically. On the axis, $x = y = 0$ and $z = t$, so this meeting point satisfies $Ct = D$; that is, $t_0 = D/C$.

The justification for this claim is as follows. The intersection of a plane and a cylinder in space is an ellipse; as the helix lies on a cylinder, its intersections with a plane must lie on an ellipse. The center of the ellipse is the point of intersection of the plane and the helix's central axis. In Figure 5, we show the cylinder that the helix sits on, the ellipse of intersection with the plane, and the helix itself. The curve represents the ellipse of intersection and the lines are the helix. The whole picture shows the cylinder "unfolded."

It can be shown that if any roots exist, they must do so within one complete rotation of the center of the ellipse. Hence, we use the center as the starting point for our root-finder.

![](images/2b861e776a82fdb3089526d422a024c6b50cfc80827497fc8ab09f2d20db4ded.jpg)
Figure 5. Representation of the intersection of the helix and the plane.

# References

Canale, Raymond P., and Steven C. Chapra. 1985. Numerical Methods for Engineers. Reading, MA: Addison-Wesley.
Chang, Raymond. 1994. Chemistry. 5th ed. New York: McGraw-Hill.
Englefield, M.J. 1987. Mathematical Methods for Engineering and Science Students. Edward Arnold.
Plybon, Richard F. 1992. Applied Numerical Analysis. Boston: PWS-Kent.
Press, William H., et al. 1990. Numerical Recipes in Pascal: The Art of Scientific Computing. New York: Cambridge University Press.
Wolfram, Stephen.
1991. Mathematica: A System for Doing Mathematics by Computer. 2nd ed. Reading, MA: Addison-Wesley. + +# Judge's Commentary: The Outstanding Helix Intersections Papers + +Daniel Zwillinger +Zwillinger & Associates +Newton, MA 02165 +zwilling@world.std.com + +# Introduction + +Typical industrial tasks for applied mathematicians are varied, and many require a computational approach to solve a relatively simple problem. The Helix Intersections Problem was representative: The problem statement, solution techniques to be used, interpretation of the result, and techniques for checking the answer were all straightforward. Most of the submissions did, in fact, perform nearly all of the above steps. The judging criteria focused on how well each step was carried out as well as on the overall organization and clarity. + +# Thorough Analysis of All Cases + +A plane and a helix can have no intersections, any finite number of intersections, or an infinite number of intersections (in the case of an infinite helix or a degenerate helix with zero pitch). A computer program asked to find all the intersection points must respond appropriately to each of these cases. + +# Simplicity of the Resulting Numerical Problem + +Given a helix and a plane, it is straightforward to write down the parametric equations for the helix, depending on one variable, and the equation of the plane. Substituting the parametric equations into the equation for the plane results in an equation with a single variable. Finding the roots of this single variable equation is far simpler than finding the roots of a multiple-variable equation (as some teams proposed). + +# Numerical Solution of the Problem + +A bisection method is guaranteed to find a zero if appropriate endpoints are given, but the method is slow. Newton's method is the method of choice for nonlinear problems, since it is so much faster. 
However, several teams apparently did not know that Newton's method does not always converge (for example, near multiple roots). + +Since appropriate numerical bounds can be found, and bisection can be made to work, several teams used it. The judges preferred a bisection technique, with provable bounds on the results, to a Newton iteration with no mention of possible convergence problems. The team from Macalester College used both a Newton iteration and a bisection method (when the Newton method failed). + +# Testing the Results + +There are many ways in which the results of the computer program created to solve this task could be tested. Some teams used graphical methods, while others used the result of a more general-purpose equation solver (such as Mathematica). There often seemed to be confusion about the reliability of the results of programs such as Mathematica. + +# Responsiveness to the Question + +The original problem statement was concerned about the computational speed of locating the intersection points. The judges looked for statements about the computational requirements of the algorithms presented. There are many ways in which this issue could be addressed: the computer time per intersection point, the computer time saved when compared to a more general mathematical solver (such as Mathematica), or the computational complexity of the algorithm. + +# Conclusion + +Of course, really outstanding papers not only solve the problem but also consider possible extensions and possible limitations. Do these make the problem easier or harder? Do they make the problem applicable to another field? Restricting the problem to a finite-length helix (which is more physically reasonable) was considered by several teams, including the teams from Harvey Mudd College and Iowa State University. Using a finite-area sweeping plane was considered by the team from Harvey Mudd + +College. Additionally, one team considered more general helices, such as a spiral drawn on a cone. 
+ +# About the Author + +Daniel Zwillinger received an undergraduate degree in mathematics from MIT and a Ph.D. in applied mathematics from Caltech. His Ph.D. research dealt with the focusing of waves as they travel through random media. He taught at Rensselaer Polytechnic Institute for four years, was in industry for several years, and has been managing a consulting group for the last few years. His work areas have included many industrial mathematics needs: radar, sonar, communications, visualization, statistics, and computer-aided design. He is the author of several mathematical reference books. + +# Practitioner's Commentary: The Outstanding Helix Intersections Papers + +Pierre J. Malraison +Manager, Design Constraints +Autodesk, Inc. +111 McInnis Parkway +San Rafael, CA 94903 +pierre.malraison@autodesk.com + +The problem statement is straightforward and clear, except for the description of the helix. All of the Outstanding papers assumed a cylindrical helix, which is probably the intent of the problem, but an elliptical helix could be used. + +The mathematical reduction used by the Macalester College team is the usual way to do surface-curve intersections in general: Implicitize one surface (in this case the plane), and parametrize the curve (the helix). The intersection points can then be expressed by solving the equation generated when the parametrized form satisfies the implicit equation. + +An alternative to the numerical solution of the resulting equation is to use a rational quadratic parametrization (instead of trigonometric) for the helix and end up with a polynomial function to solve. + +The team from Iowa State University used a similar strategy but with a different root-finding approach. + +The team from Harvey Mudd College used an approach that gets closer to (but didn't quite find) a different alternative: Intersect the cylinder the helix lies on with the plane, and then intersect the helix and that ellipse. 
The approach that the team took is correct but is limited to finite pieces of helix.

In summary, all three teams provide correct solutions, with slightly different limiting assumptions. The main differences in the solutions are in root-finding strategies.

# About the Author

Dr. Malraison started out in (professional) life as a category theorist but saw the error of his ways and has been working in geometric modeling and CAD for the last 18 years. His current interests are generative languages and geometric constraints.

# Author's Commentary: The Outstanding Helix Intersections Papers

Yves Nievergelt

Dept. of Mathematics

Eastern Washington University

Cheney, Washington 99004

ynievergelt@ewu.edu

The problem of computing all the intersections of a plane and a helix in general positions in space arose at a small company in the western U.S. that designs medical technology. The problem came in the design of a helicoidal part of a device that doctors and technicians together will have to manufacture to fit the particular measurements of each patient. With x-ray data from the patient loaded in a computer with numerical and three-dimensional graphics capabilities, and with a program to compute the requested intersections, doctors and technicians can quickly vary the parameters of the helix, view the helicoidal part superimposed in space with a model of the patient, and examine critical locations by sweeping a plane section through them.

The mathematically accurate yet medically vague description given in the problem statement typifies a common situation in real applications of mathematics: The small start-up company does not want anyone else to know the object of its current research and development. Even the company's name must remain secret, lest anyone else conduct a computer search of the publications of the company's staff and thence piece together a good guess of the objective.
Such a situation explains, in part, the dearth of real applications of mathematics in textbooks. + +Nevertheless, because the mathematical problem fits in most undergraduate curricula in the mathematical sciences, one solution is scheduled to appear in 1995 in SIAM Review, published by the Society for Industrial and Applied Mathematics. The solution was developed in part with support from the National Science Foundation's grant DUE-9255539. + +Instructors interested in designing similar material for their own classes are encouraged to contact the author to participate in either of two workshops: 17-21 June 1996 in Spokane, WA, or 26-30 August 1996 in Seattle, WA. Through grant DUE-9455061, the National Science Foundation will pay for participants' room, board, and academic credit, and some summer stipends will be available for participants who would like to submit their material for publication. + +# About the Author + +Yves Nievergelt graduated in mathematics from the École Polytechnique Fédérale de Lausanne (Switzerland) in 1976, with concentrations in functional and numerical analysis of PDEs. He obtained a Ph.D. from the University of Washington in 1984, with a dissertation in several complex variables under the guidance of James R. King. He now teaches complex and numerical analysis at Eastern Washington University. + +Prof. Nievergelt is an associate editor of The UMAP Journal. He is the author of several UMAP Modules, a bibliography of case studies of applications of lower-division mathematics (The UMAP Journal 6 (2) (1985): 37-56) (in which the Brain-Drug Problem was discussed explicitly), and Mathematics in Business Administration (Irwin, 1989). + +Prof. Nievergelt was also the author of previous MCM problems: the Water Tank Problem (1989), the Brain Drug Problem (1990), and the Optimal Composting Problem (1993). + +# Paying Professors What They're Worth + +Jay Rosenberger + +Andrew M. 
Ross

Dan Snyder

Harvey Mudd College

Claremont, CA 91711

{Jay_Rosenberger, Andrew_Ross, Daniel_Snyder}@hmc.edu

Advisor: David L. Bosley

# Introduction

We develop a model with two variations: One encourages early retirement, the other does not.

Our model is generous to those who are promoted or retire later than is typical. It does not allow the college to gracefully change real starting salaries, although nominal starting salaries are adjusted each year for inflation. The model also takes into account those whom it considers to be overpaid: Instead of giving them no raises at all, it gives them cost-of-living raises.

When the model is applied to the existing faculty at Aluacha Balaclava College with simulated hirings, promotions, and retirements, the faculty is separated into clearly different salary bands by the year 2010 (with the exception of two overpaid full professors), replacing the present muddle.

# Constraints

For convenience in reference, we note here the constraints:

1. If there is enough money for raises, then everyone gets a raise.
2. Instructors who are promoted according to the usual schedule of seven years as an assistant professor and seven years as an associate professor and who work 25 years or more should receive at retirement twice as much as a new assistant professor's salary.
3. Although there should be a reward for years of experience, the salaries of two faculty members with equal rank should approach each other as they gain experience.
4. The salary for a newly promoted faculty member should be about what it would have been in seven years, without the promotion.

# Assumptions

Although payments are actually made throughout the school year, and salary decisions are made in March, we assume that the decisions are made between discrete yearly salary payments.
+ +We also assume that when the decisions are made, the Provost has the budget for next year and an estimate of the cost-of-living increase. However, no information is available for years beyond the one for which salaries are being decided. + +Since we are prohibited from decreasing anyone's salary when moving the current faculty to the new scheme, we assume that there is always enough money to pay everyone's salaries from the previous year. That is, there might be no money for raises, but we can at least pay the faculty at last year's nominal level. + +We must give everyone a raise if anyone gets a raise, but we assume that we can give unequal raises. Otherwise, we would just split the money evenly among the faculty, and the current salary system would not change very much at all. + +Although Constraint 2 mentions 25 years and retirement, instructors are not forced to retire either at 25 years of experience or at 65 years of age. However, we assume an upper limit of 60 on the number of years of experience. + +Constraints 2 and 4 refer to real dollar values. Otherwise, it would be extremely difficult to guarantee promising new Ph.D.s that they will retire at twice their current salary. Besides, doubling the nominal salary in 25 years won't even keep up with $3\%$ inflation. + +Constraints 1 and 3 refer to nominal dollar values. The college always deals in nominal amounts when calculating budgets. There might not be enough to give the full cost-of-living increase; this would amount to a decrease in real salary. + +Faculty members who take longer to be promoted than usual may receive more than a seven-year raise if and when they finally do get promoted. That is, there is no penalty for being promoted late other than the salary lost while waiting. + +New faculty are hired only if there is enough other money to pay their starting salaries, which take into account any previous experience and are not taken from the amount available for raises. 
The salaries of retiring faculty do not get thrown into the pool for raises but can be applied to hiring new faculty.

Since no information was given about the transition from assistant professor to associate professor, we will assume that one must have seven years of experience total (not necessarily with this college, or as an assistant professor) in order to be promoted.

# Analysis of the Problem

It is currently late in the winter of 1995, too late for next year's salaries to be decided by the model proposed in this paper. The first year that will be on the new salary model is 1996-97. Even then, salaries will move only gradually toward the target curves, since some faculty members are overpaid and cannot have their salaries reduced.

One solution is to set target salaries so high that everyone needs a raise to attain the target. This is clearly not a solution that the college would favor, although it would be rather popular with the faculty.

It is difficult to conceive of a salary scheme that has no view of the future yet manages to satisfy reliably the constraints regarding promotions and retirement. Therefore, the model should contain an overall view of how much a faculty member at a certain rank with a certain number of years of experience should be paid.

This feature is certain to cause some friction between the college and the faculty: How much is a full professor with ten years of experience worth? Moreover, at what rate should salaries increase, within the framework of the constraints? Should the model be set up to encourage or discourage retirement? Can someone be hired as a full professor with no experience? These are political questions for the Provost and faculty to negotiate; the model must handle whatever answers that negotiation produces. Faculty members must become accustomed to the Provost placing a certain value on them.
# Design of the Model

Our model presents an overall goal for a salary system and then adapts it to the real world. We offer a pair of core systems (logarithmic and linear) for the administration and faculty to decide between. These cores are what the college will pay the faculty if it has enough money to do so. The cores are in real dollars, not adjusted for inflation. They are then adjusted for inflation (both historical and predicted) and adjusted to meet a finite budget. There are two options to take care of faculty members who, according to the new system, are being overpaid. Finally, we deal with the unlikely event of a budget excess.

# Variables

Let $t$ be the current year; $t + 1$ is the year for which salaries are being computed. Let $t_i$ be the year that faculty member $i$ started at the college, adjusted by the number of years of experience credited upon entrance into the plan. Thus, if faculty member $i$ joined the college in 1994 with four years of experience, then $t_i = 1990$.

Let $T(i, t)$ denote the amount in real dollars that faculty member $i$ should get paid in year $t$, the Target for $i$ for that year. This target will depend on the rank and number of years of experience of $i$.

In the cores that follow, $a_0$, $b_0$, $c_0$, and $d_0$ denote the initial salaries of a full professor, associate professor, assistant professor, and instructor, respectively. This is the amount paid to a faculty member with no experience on entering the plan. According to the problem statement, no one can be hired at a rank higher than assistant professor; however, the model could easily handle such an event. Indeed, one needs to estimate the "initial" salaries of associate professors and full professors to start the salary system.

The initial salaries for no experience, $d_0$ = $27,000 for an instructor and $c_0$ = $32,000 for an assistant professor, are of some concern.
We must have $a_0 < 2c_0$, or else full professors would have a decreasing salary in order to hit $2c_0$ at 25 years. Convention forces $d_0 < c_0 < b_0 < a_0$. It will turn out (see the Appendix) that $a_0$ = $40,000 and $b_0$ = $36,000 are good estimates.

# The Two Cores

According to Constraint 3, faculty members of equal rank but different experience should have their salaries approach each other as time goes on. This leaves two possibilities: Either the absolute difference goes to zero (logarithmic core), or the ratio goes to one (linear core).

One might expect a model to have a core that has a horizontal asymptote, so that a faculty member's salary has a clear upper bound. However, as careers are limited to sixty years (Prof. Methuselah does not work at Aluacha Balaclava College), salaries are bounded. As long as the proper rate constants are chosen, no faculty member's salary will get too large for the college to handle.

The first core, the logarithmic, increases more rapidly at the beginning of someone's career than at the end. Larger raises occur early, but by the time one has gained twenty-five years of experience, the salary curve has really flattened out. Here is the logarithmic core:

$$
T(i, t) = \begin{cases}
d_0 \log_{10}\big(d(t - t_i) + 10\big), & \text{for $i$ an instructor;} \\
c_0 \log_{10}\big(c(t - t_i) + 10\big), & \text{for $i$ an assistant professor;} \\
b_0 \log_{10}\big(b(t - t_i) + 10\big), & \text{for $i$ an associate professor;} \\
a_0 \log_{10}\big(a(t - t_i) + 10\big), & \text{for $i$ a full professor.}
\end{cases}
$$

The $+10$ term inside the logarithm allows the instructor's starting salary to be the coefficient of the logarithm expression.
Indeed, the equation becomes (taking $c$ as an example) $c_{0} \log_{10}(0 + 10) = c_{0} \cdot 1$, so the factors out front are the initial salaries, with no scaling necessary.

For the linear core, we have:

$$
T(i, t) = \begin{cases}
c_0 + c(t - t_i - 7), & \text{for $i$ an instructor;} \\
c_0 + c(t - t_i), & \text{for $i$ an assistant professor;} \\
b_0 + b(t - t_i), & \text{for $i$ an associate professor;} \\
a_0 + a(t - t_i), & \text{for $i$ a full professor.}
\end{cases}
$$

The variables $a, b, c,$ and $d$ are determined by Constraints 2 and 4. This core guarantees that a full professor retiring at twenty-five years after the usual promotions will make twice as much (in real dollars) as a new Ph.D. entering as an assistant professor. It also guarantees the equivalent of a seven-year raise for someone who gets promoted. The calculation of these coefficients is a matter of applying Constraints 2 and 4; they depend only on the initial salaries. (See the Appendix for a brief derivation.)

Note that in the linear core, the instructor salary is a seven-year time shift of the assistant professor salary. Since there is uncertainty about when instructors will receive their Ph.D.s and become assistant professors, we shift the assistant professor salary curve to use it as an instructor salary curve. A seven-year shift makes the initial instructor salary $27,000, as it should be.

In Figures 1 and 2, we graph the ideal salaries for all ranks of faculty. Note how the logarithmic core tapers off after twenty-five years, giving more experienced instructors less and less of a raise each year, while the linear core keeps giving them the same raise. It is in this way that the logarithmic core encourages retirement.
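As a sanity check on the two promotion and retirement constraints, the linear core can be evaluated directly. The following sketch (ours, in Python) uses the starting-salary estimates above and reads "the 25th year of service" as $t - t_i = 24$, the indexing implied by the Appendix's rate constants:

```python
import math

# Starting salaries in real dollars: instructor, assistant, associate, professor.
# d0 and c0 are given in the problem; a0 and b0 are the Appendix's estimates.
d0, c0, b0, a0 = 27_000, 32_000, 36_000, 40_000

# Linear-core rate constants from the Appendix.
a = (2 * c0 - a0) / 24
b = (14 * a + a0 - b0) / 21
c = (7 * b + b0 - c0) / 14

def linear_target(rank, years):
    """Real-dollar target T(i, t) under the linear core; years = t - t_i."""
    if rank == "instructor":               # seven-year shift of the assistant curve
        return c0 + c * (years - 7)
    return {"assistant": c0 + c * years,
            "associate": b0 + b * years,
            "professor": a0 + a * years}[rank]

# Constraint 2: a professor's 25th-year salary is twice a new assistant's.
assert math.isclose(linear_target("professor", 24), 2 * c0)
# Constraint 4: each on-time promotion is worth a seven-year raise.
assert math.isclose(linear_target("associate", 7), linear_target("assistant", 14))
assert math.isclose(linear_target("professor", 14), linear_target("associate", 21))
```

With these numbers the instructor curve starts at $c_0 - 7c$ = $27,000, matching the problem statement.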
# The Real World

# Inflation

Let $\gamma$ be the cost-of-living function from one year to the next, so that $\gamma(t + 1)$ is the cost-of-living factor from year $t$ to year $t + 1$; a typical value would be 1.03 for $3\%$ inflation. Each person's real target salary for year $t + 1$ is multiplied by the accumulated cost-of-living increases to produce the nominal target salary $N(i, t + 1)$. If the new plan started in year $t^0$, then the nominal target salary is

$$
N(i, t + 1) = T(i, t + 1)\, \hat{\gamma}(t + 1) \prod_{j = t^0}^{t} \gamma(j),
$$

where $\hat{\gamma}(t + 1)$ is an estimate of the cost-of-living factor from the current year to the next.

![](images/a0bd2603dc19dccc34d13bed129f74f02f14769501d16c8f5bfd4637fa4ecfd0.jpg)
Figure 1. The ideal salaries using the linear core.

![](images/d8d3482ecddd98562f43e0c61f6b5361ae58f347dab6805736fb5ae364b229eb.jpg)
Figure 2. The ideal salaries using the logarithmic core, with linear core salaries shown for comparison.

# Finite Budgets

Of course, the college will not always have enough money to give each faculty member the full raise each year. This calls for some way to portion out the raises in accordance with who deserves them the most. If two faculty members each make $40,000 in 1996, and (according to one of the salary plans) the first has a target of $41,000 in 1997 while the second's target is only $40,100, then the first should get a larger raise.

Let $n(i, t)$ be the amount in nominal dollars that faculty member $i$ gets paid in year $t$. Since budgets are limited, this will be at most $N(i, t)$ if nobody is overpaid. We have only some small amount of money, $M_r(t + 1)$, for raises next year. This number is given to us by the outside world (the college's treasury). Let $M_n(t + 1)$ be the amount needed next year for raises if all targets are to be met.
That is,

$$
M_n(t + 1) = \sum_i \left[ N(i, t + 1) - n(i, t) \right],
$$

where all $M$'s are measured in nominal dollars. Usually, $M_n > M_r$: There isn't enough to give the faculty the raises that they deserve.

A fair way to give raises is to give each person a raise in proportion to how much of a raise is needed, where the proportion is the amount available for raises over the amount needed for raises. Thus, next year's salary for faculty member $i$ is

$$
n(i, t + 1) = n(i, t) + \mathrm{raise}(i, t),
$$

where $i$'s raise from year $t$ to year $t + 1$ is

$$
\mathrm{raise}(i, t) = \frac{M_r(t + 1)}{M_n(t + 1)} \left[ N(i, t + 1) - n(i, t) \right].
$$

In this manner, faculty members who are getting paid far less than their target will get a larger portion of the raises, bringing them closer to their target.

# Bruised Egos

The former salary plan has given some faculty members more money than they deserve under the new system. Thus, some salaries need to be reduced. We can't actually cut salaries; indeed, if there is money available, everyone has to get a raise. However, raises can be unequal. To make things simple, we could give an overpaid faculty member an $\epsilon$-dollar raise each year until the target salary catches up with the actual salary. However, this is likely to bruise a few (overpaid) egos.

A good way to placate the overpaid would be to give them a new nominal target $O(i, t)$ that corresponds only to the projected cost-of-living increase:

$$
O(i, t + 1) = \hat{\gamma}(t + 1)\, n(i, t).
$$

We then treat the overpaid who are underneath their new target $O$ just like those who are underneath their original target $N$. If there is no positive inflation that year, we just give the overpaid faculty members some small amount each, to make sure they get a raise.
Now, recompute the amount of money needed for raises of both types, and portion out the money that we have according to who needs it the most:

$$
M_n(t + 1) = \sum_i \begin{cases}
N(i, t + 1) - n(i, t), & \text{if $i$ would be underpaid;} \\
O(i, t + 1) - n(i, t), & \text{if $i$ would be overpaid.}
\end{cases}
$$

The new salaries are then computed as

$$
n(i, t + 1) = n(i, t) + \mathrm{raise}(i, t),
$$

where the raise from year $t$ to year $t + 1$ is

$$
\mathrm{raise}(i, t) = \frac{M_r(t + 1)}{M_n(t + 1)} \times \begin{cases}
N(i, t + 1) - n(i, t), & \text{if $i$ would be underpaid;} \\
O(i, t + 1) - n(i, t), & \text{if $i$ would be overpaid.}
\end{cases}
$$

# Excess Funds

Perhaps this belongs under an "Unreal World" section, but on the off-chance that there is more money available than is needed to put everyone on target, there are several options:

- Raise everyone's salary. This has a negative consequence: It could put people over their targets for the next year if the excess is very large. It could put the college into dire financial straits in the future, when the faculty members' salaries cannot be cut. However, if the excess is not very large, faculty members will still be below target for the year after the excess, and not much harm is done.
- Give everyone bonuses. This would take care of the excess without raising the faculty's expectations for years to come. This is a better option from the college's point of view than raising salaries, and it is a common practice in industrial settings.
- Give it to the General Fund, perhaps for caffeine grants for sleep-starved students.

# Model Verification

We have projected the performance of the proposed models over the next fifty years.
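The proportional allocation just described can be sketched as follows. This is our own simplified sketch: the function name and the sample numbers are illustrative, and the small guaranteed raise for the overpaid in zero-inflation years is omitted.

```python
# Proportional raise allocation. `current` holds n(i, t); `targets` holds the
# nominal targets N(i, t+1); anyone already at or above target is treated as
# overpaid and aimed at O(i, t+1) = gamma_hat * n(i, t) instead.
def allocate_raises(current, targets, money_for_raises, gamma_hat):
    goals = [N if n < N else gamma_hat * n
             for n, N in zip(current, targets)]
    needed = sum(g - n for g, n in zip(goals, current))         # M_n(t+1)
    frac = min(1.0, money_for_raises / needed) if needed > 0 else 0.0
    return [n + frac * (g - n) for n, g in zip(current, goals)]

# One underpaid and one overpaid member, with 70% of the needed money:
salaries = allocate_raises([40_000, 50_000], [41_000, 45_000],
                           money_for_raises=1_750.0, gamma_hat=1.03)
# salaries is approximately [40_700, 51_050]
```

The overpaid member's gap toward the cost-of-living target ($1,500) and the underpaid member's gap ($1,000) are both filled at the same fraction, 70%, of what each needs.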
We analyzed the model both with and without such influences as limited budgets, inflation, hiring of new faculty, promotion of faculty members, and retirement. We have also analyzed the effect on this performance of changes in the chosen constants $a_0$ and $b_0$.

Figures 3 and 4 show the long-term effects of our model on existing faculty, using the linear and logarithmic cores. These figures assume no hiring, promotion, or retirement, and they ignore cost-of-living increases and possible monetary constraints.

From these two figures, we can see how our model will move the faculty toward a uniform salary system over time. Faculty members with current salaries below the model's target are given raises to bring them up to target. Faculty members with current salaries above target are held to a constant salary (since there is no inflation) until the target catches up to them.

Figures 5 and 6 show how our model, using the linear and logarithmic cores, behaves in the presence of $3\%$ annual inflation. In these graphs, the faculty retire according to the schedule described later, which explains why the graphs become more sparse at the left side. The graphs assume that the college has unlimited funds. Note that these two graphs are essentially identical for large $t$, when the inflation terms dominate the model.

Figures 7 and 8 show the behavior of the model, with the linear and logarithmic cores, when faculty are promoted, eventually retire, and are replaced by new hires who also are promoted and eventually retire. As before, the model brings faculty into a coherent salary structure over time.

Figures 9 and 10 show the effect of budgetary constraints on the salary of an individual faculty member over time, for each of the cores. These graphs do not include promotions.
In Figure 9, under the linear core, an early difference between the actual and target salaries gets magnified, because the yearly raise never decreases under the linear model but there is never enough money to give a full raise. In Figure 10, under the logarithmic core, an early difference between the actual and target salaries is eliminated as the yearly raises get smaller. These graphs demonstrate the model's ability to cope with limited-money situations.

# Sensitivity Analysis

To analyze the sensitivity of our model to changes in $a_0$ and $b_0$, we varied these constants and examined the effect on the salaries of faculty members at each rank with fifty years of service. We held $c_0$ and $d_0$ constant because they were provided in the problem statement. Tables 1 and 2 show our results.

Table 1 shows that the fluctuation in salary is higher with the linear core, as is to be expected. It is still only $8\%$, though. Table 2 shows that in spite of $10\%$ variation in $a_0$ and $b_0$, the fifty-year salaries of all four levels of faculty fluctuate by at most $2\%$ under the logarithmic core. We conclude that our model, especially with the logarithmic core, is relatively insensitive to its initial parameters.

![](images/755b1962b594054f56925de0a32d010d105bf17a1a950d66b24c12b896f68511.jpg)
Figure 3. Long-term transition with linear core.

![](images/51440c29c2001d5c6c154b2780a8c7db67a48baf06da0aaa593c06d3bea88a24.jpg)
Figure 4. Long-term transition with logarithmic core.

Table 1. Effect of variations in initial conditions on salary after fifty years of service, using the linear core.

| $a_0$ | $b_0$ | Professor | Associate Prof. | Assistant Prof. | Instructor |
| --- | --- | --- | --- | --- | --- |
| 40,000 | 36,000 | 89,000 | 78,000 | 67,000 | 62,000 |
| 42,000 | 36,000 | 86,917 | 79,944 | 67,972 | 62,833 |
| 40,000 | 38,000 | 89,000 | 75,333 | 71,667 | 66,000 |
| 42,000 | 38,000 | 86,917 | 77,278 | 72,639 | 66,833 |
| 38,000 | 34,000 | 91,083 | 78,722 | 61,361 | 57,166 |
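The Table 1 entries can be reproduced from the Appendix's linear-core rate constants, under the reading (ours) that fifty years of service corresponds to $t - t_i = 49$ and that the instructor is on the seven-year-shifted assistant curve:

```python
# Recompute one Table 1 row: salaries after fifty years of service under the
# linear core. c0 is fixed at the problem's assistant starting salary.
def table1_row(a0, b0, c0=32_000):
    a = (2 * c0 - a0) / 24
    b = (14 * a + a0 - b0) / 21
    c = (7 * b + b0 - c0) / 14
    years = 49                                  # fiftieth year of service
    return [round(a0 + a * years), round(b0 + b * years),
            round(c0 + c * years), round(c0 + c * (years - 7))]

print(table1_row(40_000, 36_000))   # [89000, 78000, 67000, 62000]
print(table1_row(42_000, 36_000))   # [86917, 79944, 67972, 62833]
```

Both printed rows agree with the first two rows of Table 1.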
![](images/a532679ad923d6fc8bb42d65b68629588bad2005783fa56b24c6233e978e15d5.jpg)
Figure 5. Linear core with retirement and inflation.

![](images/05a372fa9c322eee8f639b4ce6a9cd6aa7e0d5b39ae101f31e6457b7bf74525a.jpg)
Figure 6. Logarithmic core with retirement and inflation.

Table 2. Effect of variations in initial conditions on salary after fifty years of service, using the logarithmic core.
| $a_0$ | $b_0$ | Professor | Associate Prof. | Assistant Prof. | Instructor |
| --- | --- | --- | --- | --- | --- |
| 40,000 | 36,000 | 74,017 | 68,312 | 60,180 | 47,556 |
| 42,000 | 36,000 | 73,996 | 68,545 | 60,356 | 47,601 |
| 40,000 | 38,000 | 74,017 | 68,306 | 60,898 | 47,744 |
| 42,000 | 38,000 | 73,996 | 68,548 | 61,061 | 47,788 |
| 38,000 | 34,000 | 73,938 | 68,048 | 59,530 | 47,391 |
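The same check works for the logarithmic core: recomputing the last row of Table 2 from the Appendix's rate constants (again reading fifty years of service as $t - t_i = 49$, our convention) matches the published salaries to within a dollar of rounding:

```python
import math

# Recompute one Table 2 row: salaries after fifty years of service under the
# logarithmic core, using the Appendix's rate constants.
def table2_row(a0, b0, c0=32_000, d0=27_000):
    a = (10 ** (2 * c0 / a0) - 10) / 24
    b = ((14 * a + 10) ** (a0 / b0) - 10) / 21
    c = ((7 * b + 10) ** (b0 / c0) - 10) / 14
    d = ((c + 10) ** (c0 / d0) - 10) / 8
    years = 49
    return [a0 * math.log10(a * years + 10), b0 * math.log10(b * years + 10),
            c0 * math.log10(c * years + 10), d0 * math.log10(d * years + 10)]

row = table2_row(38_000, 34_000)    # roughly [73938, 68048, 59530, 47391]
```

Note that the professor column depends only on $a_0$ and $c_0$, which is why rows of Table 2 sharing the same $a_0$ share the same professor salary.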
![](images/a7fdcfd622aa71d1af19a403ecc7d0c3c5a961ecb2cca6c95b7226d3d5536dc7.jpg)
Figure 7. Hiring, promotion, and retirement, under the linear core.

![](images/363b7cc59e2fe207beaa05324cc224e579c23f732d7a895b05c73c4ac21adb18.jpg)
Figure 8. Hiring, promotion, and retirement, under the logarithmic core.

![](images/fdd0409aa1914701c9b0ad0a9100639b17786f30095980ed5ceff5ca407d1040.jpg)
Figure 9. Effect of monetary pressure on an individual faculty member, under the linear core.

![](images/3302d925fdccfad6a7cc0acedb8ac600b6ed672adca3b258b49cbf9cf61ae045.jpg)
Figure 10. Effect of monetary pressure on an individual faculty member, under the logarithmic core.

# Long-Term Performance

In making long-term predictions, many real-world influences on our model had to be simulated. Specifically, we had to decide how many faculty members would change status each year, including new hires, promotions, and retirements. Also, we had to simulate the money available for raises and arrive at a reasonable cost-of-living factor for each year.

By examining the initial data, we concluded that the college hires a mean of 9 faculty per year, with a standard deviation of 5 on a discretized normal distribution. We decided more or less arbitrarily that faculty will retire after working for 40 years, with a standard deviation of 2 years, again on a discretized normal distribution. For promotions, we decided that $50\%$ of assistant professors would become associate professors in 7 years, $25\%$ would become associate professors in 8 years, and so on with continued halving. We used the same probability distribution for associate professors being promoted to full professor.

We used a constant $3\%$ inflation per year, or $\gamma_{i} = 1.03$ for all $i$. We also assumed that the college would always be able to predict this value accurately for the next year.
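The yearly status-change sampling described above can be sketched as follows (our sketch, using Python's `random` module; the distribution parameters are those quoted in the text, and the truncation at zero hires is our own choice):

```python
import random

rng = random.Random(0)   # seeded for reproducibility

def hires_this_year():
    """Discretized normal: mean 9 new hires, s.d. 5, truncated at zero."""
    return max(0, round(rng.gauss(9, 5)))

def career_length():
    """Discretized normal: retirement after 40 years of work, s.d. 2."""
    return round(rng.gauss(40, 2))

def years_to_promotion():
    """7 years with probability 1/2, 8 with 1/4, 9 with 1/8, and so on."""
    years = 7
    while rng.random() < 0.5:    # each extra year halves the probability
        years += 1
    return years
```

Under this halving rule the expected wait for promotion is 8 years, one year longer than the on-time schedule.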
Small fluctuations in any $\gamma_{i}$ will have negligible effects on the model's performance, as will small inaccuracies in the college's yearly predictions. Finally, we analyzed the model in the presence of both limited and unlimited money supplies. For limited money, we made a constant amount of money available for raises and chose this constant such that it would be inadequate soon after the initial year of the simulation.

# Strengths and Weaknesses

The logarithmic option encourages instructors to retire earlier than the linear model does, since the logarithm curve flattens out at higher values. This means that the raise for a professor with forty years of experience is small relative to the raise for someone with twenty-six years of experience. This gives the Provost a way to encourage or discourage retirement, based on the needs of the college.

If the college wishes to adjust the real values of the starting salaries, everyone's salary will change, not just the salaries of those hired after the change is made.

The faculty must be willing to settle for a higher potential salary, instead of a guaranteed higher salary, for promotions and retirement. As long as the college cannot guarantee enough money for everyone's raises, this tradeoff will remain.

Salary increases were calculated according to a faculty member on a track with the minimum years between promotions. As a result, faculty members who are promoted late receive the equivalent of an eight- or nine-year raise at their current rank. For example, an instructor promoted in eight years will receive a raise greater than the raise awarded for a promotion in seven years.

Furthermore, all future promotions (if any) will bring larger raises. Likewise, a professor retiring at twenty-six years of experience, instead of twenty-five, will receive a salary greater than twice the salary of a beginning assistant professor (see the Appendix).
# Appendix

The two cores allow the initial salaries to be set by the outside world; however, the rate at which salary increases depends on these initial salaries. The constraints apply in this manner:

Constraint 4: Consider a new Ph.D. with no prior experience. If promotions occur on time, then years 0 through 6 are spent as assistant professor, years 7 through 13 as associate professor, and years 14 and on as full professor. Thus, the Ph.D.'s first year as associate professor is year 7; it should correspond in salary to year 14 as assistant professor:

$$
T(\text{associate}, 7) = T(\text{assistant}, 14).
$$

Similarly, the Ph.D. becomes a full professor at year 14; that should correspond to year 21 as associate professor:

$$
T(\text{professor}, 14) = T(\text{associate}, 21).
$$

We don't have to track inflation and budget constraints through the actual year because we are dealing in real dollars, and the salary curves don't change: Ideally, Associate Professor X with 15 years of experience in 1997 makes the same real amount as Associate Professor Y with 15 years of experience in 2010.

Constraint 2: Similar to the previous constraint:

$$
2\,T(\text{assistant}, 0) = T(\text{professor}, 25).
$$

Since one's salary increases with time, those who retire after year 25 receive more than twice $T(\text{assistant}, 0)$, which fits the constraint.

Earlier, we discussed the reasoning behind choosing the instructor salary curve in the linear core as a time shift: There is no explicit method of solving for a linear rate constant for the instructor salary. We run into the same problem for the logarithmic model, except that a time shift would not produce a suitable result for the given starting salaries of an assistant professor and an instructor. Thus, we need to choose an average year in which to promote instructors to assistant professors.
An instructor may be promoted after exactly one year; if we base the salary curve on a longer period, instructors who take longer than expected to earn their Ph.D.s would receive less than a seven-year raise upon promotion. Therefore, we promote instructors after one year.

Substituting and simplifying produce the following rate constants:

Logarithmic Core:

$$
\begin{aligned}
a &= \left[ 10^{2c_0 / a_0} - 10 \right] / 24 \\
b &= \left[ (14a + 10)^{a_0 / b_0} - 10 \right] / 21 \\
c &= \left[ (7b + 10)^{b_0 / c_0} - 10 \right] / 14 \\
d &= \left[ (c + 10)^{c_0 / d_0} - 10 \right] / 8
\end{aligned}
$$

Linear Core:

$$
\begin{aligned}
a &= (2c_0 - a_0) / 24 \\
b &= (14a + a_0 - b_0) / 21 \\
c &= (7b + b_0 - c_0) / 14
\end{aligned}
$$

How do the starting salaries $a_0$ = $40,000 and $b_0$ = $36,000 work out so well? For the linear core, look at the initial salary for instructors:

$$
c(0 - 7) + c_0 = -7c + c_0.
$$

The quantity $c$ is determined by $b$ and $b_0$, while $b$ is determined by $a$ and $a_0$. So, $a_0$ and $b_0$ affect the instructor's initial salary. With $a_0$ and $b_0$ chosen as they were, the starting instructor's salary comes out to $27,000, exactly as specified in the problem statement.

# The World's Most Complicated Payroll

Frank Thorne

W. Garrett Mitchener

Marci Gambrell

North Carolina School of Science and Mathematics

Durham, NC 27705

Advisor: Dot Doyle

# Introduction

We present a model for paying the faculty that we believe is fair and consistent with the requirements, as well as ways of dealing with cost-of-living increases, budget shortages, and the transition process.

We use a two-track salary model: All of the instructors are on one track, while all of the Ph.D.s are on the other.
To accommodate both seniority and rank in our model, we make salary a function of "quality points," an index that incorporates both seniority and rank for Ph.D.s but just seniority for instructors. We use part of a root function as our salary function for both tracks and make the slope of the instructor function half that of the Ph.D. function.

Furthermore, we designed a transition process for instructors promoted to assistant professor. To incorporate cost-of-living raises, our salary functions work in constant dollars. To deal with budget deficits, we let promotions happen as they normally would but also give out fractional quality points and make up the difference later. Our model does not provide for drastic deficits; these would require major changes, such as cuts or layoffs, which we leave as the business of the administration.

To create a transition process, we give all faculty making more money than they should (as determined by the model) a minimum $100 raise each year (in real, not constant, dollars) until they are on the track they should be on, and divert the remaining money into extra raises for the financially challenged. Estimating some data that were not provided, we determined that $95\%$ of the overpaid faculty would be on the proper payroll track within 4.6 years, meaning that with the exception of a few grossly overpaid faculty, the problem of unfairness would be solved.

Our model is highly flexible and adaptable; many of the parameters can be changed to fit the wishes of the administration. Unfortunately, many of the estimates that we produced are based on simulations of data that were not given. We do, however, include the templates we used, so better estimates can be made using data that should be in the college's files.
What follows is our recommendation to the Provost for the new faculty compensation system, including the cost-of-living increases, the policy when there is not enough money to support the model fully, and the transition from the current system to the new, improved system.

# Major Assumptions

- All other colleges from which faculty might transfer have a faculty ranking system similar to ABC's. Thus, when a faculty member transfers from another school, we have a good estimate both of her status in the system and of her experience.
- An assistant professor or associate professor may not be promoted until she has spent seven years at her current rank. The circumstances stipulate that an associate professor must work seven years before promotion but do not say anything about an assistant professor. Furthermore, we consider promotion after exactly seven years in a rank to be "on time."
- We have enough money to implement our model, and there is no inflation or deflation. ("The world is perfect.") Later we will consider the effect of limited funds and inflation.
- Every year that a faculty member receives a promotion benefit, she also receives a normal year's raise (the raise first and then the benefit). We assume that all promotions take place between academic years.
- With no inflation, the starting salaries are $27,000 for instructors and $32,000 for assistant professors. In this model, a full professor who has worked at ABC for twenty-five years starting as an assistant professor, with promotions after seven and fourteen years, will receive a salary that is twice the starting salary of an assistant professor (from Principle 3), or $64,000.
- The number of faculty members in each rank does not change significantly from one year to the next.

# Problem Analysis and Development of the Model

We graph the current salary data for each of the faculty ranks in Figure 1.
The most obvious problem with the data is that the number of years on the chart is only the number of years at ABC. Many faculty members may

![](images/1d0efe3ddbd2d7b46a650888a29ed349d71f11f9f010839f35fd6021a5a7607e.jpg)
Figure 1. Scatterplot of salary vs. years at ABC.

have transferred in from some other institution with credit for up to seven years of experience, but we do not know how much credit they have been given. We also do not know the number of years spent in each rank by each faculty member moving through the system. We would expect that a faculty member with a 24-year career, for example, who spent 7 years as an assistant professor, 7 years as an associate professor, and 10 years as a full professor, would be making more than another faculty member who had spent 15, 8, and 1 years, respectively. In the data, however, both are listed just as 24-year full professors.

Since the data are incomplete, we designed a model independent of the data. The first model we tried involved using several salary tracks, one for each rank, each with the same slope. On promotion, an instructor would move an extra seven years' worth of salary along her own track and then move up to the next one. This complicated system resulted in several problems; for example, faculty who were promoted earlier would earn less money after a while than those promoted later. We wanted faculty promoted earlier to earn more money. Furthermore, we wanted a model that would be easy to present, easy to implement, and reasonably flexible.

We opted for an alternative. Since the salary system for assistant/associate/full professors is a single track with consistent rules, we deal with the instructor track separately, thus giving a two-track system. We tackle inflation by simply stretching the curves vertically by the next year's projected cost-of-living increase.

The next problem was what to do if there is not enough money for each faculty member's expected raise.
We wanted to see the effects of this change on the actual faculty at ABC, but the lack of a prior work history for each faculty member prevents us from calculating quality points and the corresponding salaries. Instead, we generated a possible set of work histories for the faculty. We used a MathCAD template to generate at random 204 (the number of current faculty members) sets of four points (the numbers of years spent in each rank) and then ran these through our salary function to find the salaries. We calculated the amount needed to pay everyone, the expected raises, and the amounts for each faculty member when there is not enough to give everyone a full raise. For application at ABC, actual histories could be entered into the template. We also created a special plan for the transition between the current system and the new one, involving giving a minimum raise to those making more than they should and dividing the extra among the others.

# The Ph.D. Track

For the ranks of faculty with a Ph.D., how the salary increases is much clearer than in the instructor phase. We are explicitly told that a promotion is "on time" for a Ph.D. if it happens after seven years and that the benefit should be the same as seven years of raises. For an instructor, there is no clear definition of "on time" or minimum time to promotion. Someone could stay an instructor for her entire teaching career by never getting a doctorate. Furthermore, being an assistant professor is a prerequisite to reaching the higher levels, while being an instructor is not.

We solve the problem of the difference by having the two salary schemes lie on different lines. Since we put all Ph.D.s on one salary track, we must find a way of dealing with promotion benefits and years of experience as two parts of the same variable, since salary can be a function of only these two factors.
We introduce "quality points," which take into account the following rules:

- Faculty receive 1 point for every year worked at ABC.
- When a faculty member is promoted, she receives 7 points and the correspondingly higher salary if she is promoted on time (after seven years). If she is promoted later than the minimum amount of time, the reward should be correspondingly less, to satisfy the constraints. We increase the number of points by $49/t$, where $t$ is the number of years spent in the rank from which she is promoted; for an on-time promotion ($t = 7$), this gives the full 7 points.
- Employees hired from other colleges receive 14 points if they have reached the rank of associate professor and an additional 14 points if they have reached the rank of full professor when they transfer in. These are the minimum numbers of points for these ranks. Furthermore, in whatever rank the employee enters, we give her up to seven years (quality points) of credit in that rank. For example, if an employee comes in with 30 years of assistant professor experience and 2 years of associate professor experience, we make her an associate professor with $14 + 2 = 16$ points; if another employee comes in with 30 years as an assistant professor and 9 years as an associate professor, we make her an associate professor with $14 + 7 = 21$ points.

Given these conditions, we observe that a Ph.D. who has worked 25 years and has made all of her promotions on time has earned $25 + 7 + 7 = 39$ points and thereby should earn $64,000. We should therefore fit our function $f(x)$, where $x$ is the number of quality points and $f(x)$ is dollars in thousands, to the following constraints:

- The function must always be increasing, to satisfy the constraint that more experience in a rank results in more money: $f'(x) > 0$.
- The function should always be concave down, so that its slope decreases. Therefore, the effect of a difference in experience will narrow over time, as prescribed: $f''(x) < 0$.
- $f(0) = 32$ and $f(39) = 64$.
- The slope should be reasonably large (but not too large) at the beginning, so new employees get reasonably large (but not huge) raises.

We use part of a power function to determine our function. In general form, the equation is part of a $p^{\mathrm{th}}$ root equation, where $p$ is some (not necessarily integral) number greater than 1. Performing an affine transformation of sorts, we map (0,32) to (1,1), and (39,64) to $(2,2^{1/p})$. This was accomplished via the function (see Figure 2)

$$
f(x) = 32\left[1 + (2^{p} - 1)\frac{x}{39}\right]^{\frac{1}{p}}.
$$

![](images/3a0ee20ef0fb306cefb640d1add29d9d7f4a12081086eb8e3c64fa5b9b636dfc.jpg)
Figure 2. Salary vs. quality points: Ph.D. track ($f(x)$, solid line) and instructor track ($g(x)$, dotted line), with $p = 2$.

This family of functions has the advantage that we can adjust $p$ to produce a salary function that is very steep initially and very flat later, or one that is close to a line.

# The Instructor Track

The instructor track cannot be defined as easily as the Ph.D. track, because we do not have any information about the expected value of the yearly raises. Arbitrarily, we set the raise for an instructor to be half of that of a Ph.D. with the same number of quality points. So the instructor salary $g(x)$ for $x$ quality points is

$$
g(x) = \frac{1}{2}f(x) + 11 = 16\left[1 + (2^{p} - 1)\frac{x}{39}\right]^{\frac{1}{p}} + 11
$$

(see Figure 2). We add the 11 to force the function to go through (0, 27). The multiplier of one-half can be changed easily if necessary. The number of instructor quality points equals the number of years as an instructor, and an instructor moving to ABC from another institution can receive as many as 7 quality points. Note that between the initial salaries for instructor and for assistant professor, we initially have a $5,000 gap, which grows with quality points.
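As a check, the two salary curves can be coded directly. A minimal sketch in Python; the parameter value $p = 2$ and the units (thousands of dollars) follow the text:

```python
P = 2  # curvature parameter from the text; p = 2 is the value the authors favor

def f(x, p=P):
    """Ph.D.-track salary (thousands of dollars) for x quality points."""
    return 32 * (1 + (2**p - 1) * x / 39) ** (1 / p)

def g(x, p=P):
    """Instructor-track salary: half the Ph.D. raise pattern, shifted to start at 27."""
    return 0.5 * f(x, p) + 11

# Endpoint constraints from the text: f(0) = 32, f(39) = 64, g(0) = 27.
print(f(0), f(39), g(0))  # → 32.0 64.0 27.0
```

The initial $5,000 gap between the tracks is just $f(0) = 32$ versus $g(0) = 27$; because $g$ rises at half the rate of $f$, the gap grows with quality points.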
We assume that if an instructor becomes an assistant professor very quickly, then her salary should be very close to that of someone who has had the same length of career and has always been an assistant professor, whereas an instructor who takes very long to get her Ph.D. should not be very close in salary to an assistant professor who has worked for the same number of years. Therefore, we suggest that upon the completion of a Ph.D., we give the new assistant professor an immediate salary raise of $5,000 (not enough to completely make up the difference) and then reevaluate her quality points on the Ph.D. scale. The function

$$
q_{\mathrm{Ph.D.}} = f^{-1}\big(g(q_{\mathrm{Inst.}}) + 5\big)
$$

calculates the position on the Ph.D. curve for an instructor receiving a promotion and receiving a $5,000 benefit.

# What Will It Cost?

For small $p$, the salary functions look too much like lines and salaries do not converge much; for high $p$, the cost of paying the faculty is high. We feel that $p = 2$ is a reasonable value that ABC might want to use.

Since we do not have the actual histories of the employees, we cannot calculate the actual sum of the salaries on the new system. Instead, we created randomized histories to get an idea of the monetary demands of the new system. This required a simulation satisfying a number of constraints, the details of which are in the Appendix.

Several iterations of the simulation on different random faculties estimate that a highly experienced faculty with approximately the same number of faculty members in each rank would cost about $10.0 \pm 0.1$ million to pay, for a value of $p = 2$. This is not an average-case estimate, but more of a "worst of the average" case estimate, because the simulated faculty have all been at ABC for their entire careers.
Since most of the actual faculty have been at ABC for only a few years, they are making less money as a whole than the simulated faculties, so $10.0 million is more than the college should expect to pay under our system.

The least the college could expect to pay out in a year would correspond to replacing the entire faculty with people from other schools with no experience in the rank at which they enter ABC. The faculty would come in with no credit, so each faculty member would make the baseline salary for her rank. The total minimum salary for the new system with $p = 2$ is $8.7 million, which is roughly the same as the current payroll ($8.8 million).

The 1993 national average salaries for faculty members, from instructor to full professor, were $28,300, $38,600, $46,900, and $68,700 for a private college (as ABC must be with a name like that). For $p = 2$, the baseline salaries at ABC for each category with promotions on time (in the same order) would be $27,000, $32,000, $46,100, and $56,800. These values are all less than the national average, but they are just the baseline values. Because of years in rank, many faculty members will be making more, creating an average closer to the national one.

# The Cost of Living

We have determined the salary functions for faculty members with a Ph.D. and for instructors in terms of the number of quality points (we have not, however, determined a value for the parameter $p$). We need to incorporate annual increases in the cost of living into this model. The Consumer Price Index (CPI) is a good measure of year-to-year changes in the cost of living [Parkin 1993, 628].

We propose to increase everybody's salary each year by a proportion equal to the increase in the cost of living. So, if after one year the cost of living has increased $4\%$, then we just multiply $f(x)$ and $g(x)$ by 1.04.

Note that the $5,000 salary increase that promoted instructors receive must also be adjusted for the cost of living.
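The instructor-to-Ph.D. conversion and the cost-of-living stretch can both be sketched on top of the salary curves. A Python sketch; $f$, $g$, and a closed-form inverse of $f$ are restated here with $p = 2$ as in the text, and the 4% inflation figure is only an example:

```python
P = 2

def f(x, p=P):
    """Ph.D.-track salary in thousands for x quality points."""
    return 32 * (1 + (2**p - 1) * x / 39) ** (1 / p)

def g(x, p=P):
    """Instructor-track salary in thousands for x quality points."""
    return 0.5 * f(x, p) + 11

def f_inv(y, p=P):
    """Quality points producing Ph.D. salary y (thousands); inverts f algebraically."""
    return 39 * ((y / 32) ** p - 1) / (2**p - 1)

def promote_instructor(q_inst):
    """q_PhD = f^{-1}(g(q_inst) + 5): points after the $5,000 promotion benefit."""
    return f_inv(g(q_inst) + 5)

# A brand-new instructor (0 points) promoted immediately lands at f's starting point:
# g(0) + 5 = 32 = f(0), so she enters the Ph.D. track with 0 quality points.
print(promote_instructor(0))  # → 0.0

# Cost-of-living adjustment: stretch every salary (and the $5,000 benefit) by 1 + CPI.
cpi = 0.04  # example 4% increase
adjusted = (1 + cpi) * f(10)
```

The design choice here is that the $5,000 benefit is added in salary space and then mapped back to quality points, so a slow instructor ends up with fewer Ph.D. points than a career-long assistant professor, exactly as the text requires.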
# When Funds Run Short

Up until this point, we have been assuming that ABC has enough money to pay everyone on the new salary schedule. Currently, ABC is paying out $8.8 million; unfortunately, according to our model, ABC has been paying out too little. This raises the question of what to do if ABC cannot afford the new system: how to pay everyone if funding is low for a year.

We must make several assumptions about priorities.

- Give the usual raise and a bonus to faculty who are promoted.
- Give the usual raise (if there is enough money) to faculty members not getting a promotion.
- Spend any excess money (if there is any after giving raises) on replacing faculty members who leave (about $7\%$ each year; see the Appendix).
- If there is not even enough to give everyone their raise, do not hire replacements and give each faculty member a fraction of a quality point instead of a whole one.
- If the budget cut is very serious, other measures, such as cutting salary for some and laying off others, may need to be taken. We cannot dictate firing people, but the best method for reducing salary is to subtract the same number of quality points from each faculty member.

Since we do not have the actual salary histories of the employees at ABC, we created another random data set to experiment with. This data set, unlike the previous one, creates a population of faculty members in which some have spent their whole career at ABC and some have not (see the Appendix). Using our random data, we then created a MathCAD template to calculate the salary of each faculty member when there is not enough money. The data can easily be changed to calculate these amounts based on the actual numbers.
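The shortfall rule (everyone advances by the same fraction of a quality point) amounts to solving for the fraction $\alpha$ that makes total payroll match the budget. A minimal Python sketch, assuming the Ph.D. salary curve $f$ from earlier with $p = 2$; the point totals and budget are invented for illustration:

```python
def f(x, p=2):
    """Ph.D.-track salary in thousands for x quality points (p = 2)."""
    return 32 * (1 + (2**p - 1) * x / 39) ** (1 / p)

def fractional_point(points, budget):
    """Find alpha in [0, 1] so that paying everyone at points + alpha meets the budget.

    Bisection on the (increasing) total-payroll function; assumes the budget
    lies between the no-raise total and the full-raise total.
    """
    lo, hi = 0.0, 1.0
    for _ in range(60):  # 60 halvings: interval width far below rounding error
        mid = (lo + hi) / 2
        if sum(f(q + mid) for q in points) < budget:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

points = [5, 10, 20, 30]   # hypothetical quality-point totals
budget = 191.0             # thousands; between the 0-point and 1-point payroll totals
alpha = fractional_point(points, budget)
```

Each faculty member then receives $\alpha$ quality points instead of a full one; with the numbers above, $\alpha$ is strictly between 0 and 1 and the payroll matches the budget to within rounding.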
If there is not enough money to pay everyone the full expected raise (not taking into account cost-of-living increases), we first pay everyone who is being promoted and then divide the remaining money among the remaining people by assigning each faculty member the same fraction of a quality point as a yearly increase. If the administration decides that it is not proper to hire someone during the money shortage, that faculty member's history should just be left out of the data file.

We ran a simulation with a budget of $9.45 million for 1994–95. The amount needed to give everyone their raise and fulfill promotion raises was $9.50 million. Since there is not enough money, the template calculates the appropriate number of quality points to give out after raises. The result was to give each faculty member 0.735 points.

# The Transition

We created the following procedure for moving current low-paid faculty members to the new scale without cutting anyone's salary. We must assume that there is enough money for the transition. The major points are the following:

- Anyone making more than they should receives a minimum raise (because the basic structure requires that each faculty member must receive a raise any year that money is available). We make this minimum raise $100. We do not, however, include an increase for cost of living. In adding the minimum raise, we never allow an employee's actual salary to drop below her deserved salary.
- First, we pay everyone their previous year's salary, and people making more than they should are paid the minimum raise.
- We distribute the remaining money as raises to the people making less than they should, proportional to the amount they are below the salary that they deserve, including a cost-of-living increase.
- We assume that the college budgets $10 million in 1994–95 dollars each year until the current salaries catch up to the new payment system.
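The transition rule in the bullets above can be sketched directly. A Python sketch; the salaries, deserved amounts, and budget below are invented for illustration, and cost-of-living adjustments are left out:

```python
MIN_RAISE = 100  # dollars; the minimum raise for overpaid faculty

def transition_pay(prev, deserved, budget):
    """One transition year: overpaid faculty get the $100 minimum raise,
    underpaid faculty split the leftover budget in proportion to their deficit."""
    pay = []
    deficits = []
    for p, d in zip(prev, deserved):
        if p >= d:                 # overpaid: previous salary plus minimum raise
            pay.append(p + MIN_RAISE)
            deficits.append(0.0)
        else:                      # underpaid: start from previous salary
            pay.append(p)
            deficits.append(d - p)
    leftover = budget - sum(pay)
    total_deficit = sum(deficits)
    return [p + leftover * gap / total_deficit for p, gap in zip(pay, deficits)]

prev     = [50_000, 30_000, 40_000]  # hypothetical previous salaries
deserved = [48_000, 35_000, 42_000]  # hypothetical deserved salaries
new = transition_pay(prev, deserved, budget=126_100)
```

The whole budget is paid out each year, and the raises for underpaid faculty are proportional to how far below their deserved salary they fall, so the most underpaid catch up fastest.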
In order to create and test a template, we again need actual values. We use the current salaries and generate possible histories to go along with them. We calculate how long it would take for each overpaid faculty member to receive only as much money as she deserves. (We wanted to calculate how long it would take each underpaid faculty member to catch up, but we could not fix an error in our program.) Five runs of the program were combined. The number of people with a given "catch-up" time appears to follow an exponential distribution with a mean of 3.55. A quick computation shows that it should take 4.6 years for $95\%$ of the overpaid people to catch up to their salaries, although some extremely overpaid people take much longer.

# Strengths

- Our model is simple. Using a single salary curve for all doctorates is much simpler than using a separate curve for each rank. The concept of quality points is a convenient simplifying device that anybody could use. Furthermore, the cost-of-living adjustment that we make is simple.
- The model rewards those instructors who advance in rank quickly, as we believe is appropriate, without making the penalty for late promotions too drastic: late promotees still get a raise, just not as much.
- The template that we used with simulated data can easily be used with actual data.
- Our model is flexible. The parameters to which we assigned values (such as $p = 2$) can easily be changed.
- Our model conforms to all of the principles and circumstances of the problem statement.
- The model seems stable and consistent. Different random sets of data do not produce large changes in the results, and our results agree with values for average faculty salary in the U.S.
- If so desired, the concept of quality points could be extended to situations beyond the scale of this model.
For example, if the administration wanted to give a faculty member a special raise or cut for extraordinary performance, or to make especially good offers to desirable faculty from other schools, it could adjust the faculty member's number of quality points.

# Weaknesses

- The parameters used in our model are arbitrary, since we are missing data that would be useful in identifying them.
- We have no basis for setting the instructor slope at half of the Ph.D. slope.
- It is very difficult to perform sensitivity analysis on our model. We have no way of estimating how much our simulated data, upon which we should not rely too heavily, differ from the real data.
- We assumed that the number of faculty members in each rank remains reasonably constant. It might not, which would alter our results.
- Implementing our model assumes adequate funding. If money is a problem, we could decrease $p$, but this may not totally remedy the situation.

# Appendix

# Creating Random Faculty

Since it does not seem to take very long to be promoted from instructor to assistant professor, the number of years as instructor is generated by taking a random number from an exponential distribution with a mean of 3. (This means that more than half of all faculty members spend 3 years or less as instructors, but a few spend much longer.) The number of years as an assistant professor is similarly taken to be 7 plus a random number from an exponential distribution with a mean of 3. The 7 must be added because it takes a minimum of 7 years to be promoted to associate professor. Similarly, the number of years as an associate professor is generated from the same distribution. The number of years as a full professor is generated by 1 plus a random number from an exponential distribution with a mean of 5.

Not very much information is given in the problem to support this model. However, it does have some basis.
For assistant professors, the average number of years at ABC is roughly exponentially distributed, with a mean of 9.4 (see Figure 3). This suggests that exponentially distributed random numbers are decent estimates of this number. Most of the simulated times are between 7 and 10 years, which roughly agrees with the minimum and average time that faculty members spend as assistant professors. The time spent as an associate professor is roughly the same, so we use the same random variable for years as an associate professor.

![](images/09e2ff63114929d33c5b8c1b5e5a4ab6a4c379a672c9802664547c29327022f1.jpg)
Figure 3. Number of assistant professors vs. years at ABC. The dotted curve is the function used in the simulations.

The mean of 5 for years-in-rank as a full professor comes from a total career of about twenty-five years: 7 + 3-ish + 7 + 3-ish + 5-ish = 25-ish.

The simulation creates the same distribution of positions as the given salary list; we assume that the makeup of the faculty does not change.

The second simulation creates a random number of years that each faculty member has been at ABC, based on the given data and using an exponential distribution with a mean of 13.3 (the mean of the data) (see Figure 4).

![](images/6ceadb49ded1cd165fede7ee4e44593b019962dcc59535154bd96ddaabd5b21d.jpg)
Figure 4. Number of faculty members vs. years at ABC.

The number of faculty members who leave after $t$ years is the difference between the numbers there $t$ years ago and $t - 1$ years ago, which is approximately equal to the derivative of the density function at $t$. By explicit calculation, or from properties of the exponential distribution, the duration of stay is exponential with the same mean.

# How Many People Are Replaced Each Year?

We assume that the same fraction of the faculty is replaced each year and that a faculty member stays at ABC an average of 13.3 years.
For a continuous model, we use an exponential distribution with probability density function

$$
f(t) = \frac{e^{-t/\lambda}}{\lambda}, \qquad t > 0,
$$

which has mean $\lambda$. The area lying to the right of $\lambda$ is $e^{-1}$. We set $\lambda = 13.3$, so that after 13.3 years, the fraction of faculty remaining is $e^{-1} \approx .368$. For the exponential "decay" in the number of faculty, let $\mu$ be the "half-life" and $r$ the decay constant; $r$ is the fraction of faculty who remain from one year to the next. Let $N_0$ be the initial number of faculty in a cohort, and let $N_\mu$ be the number of those faculty remaining after one "half-life." We have

$$
N_{\mu} = N_{0} r^{\mu} = \frac{1}{e} N_{0}, \qquad r = e^{-1/\mu}.
$$

For $\mu = 13.3$, we have $r = 0.928$, so roughly $93\%$ of the faculty remain from one year to the next, and on average $.072 \times 204 \approx 15$ faculty members are replaced every year. Depending on the ranks involved, 15 faculty members cost anywhere from $405,000 (inexperienced instructors) to $900,000 (experienced full professors). If the college runs short of money, some money could be saved by not replacing these faculty members.

For a discrete model, we use a geometric distribution with probability mass function

$$
f(n) = p(1 - p)^{n - 1}, \qquad n = 1, 2, \ldots,
$$

where $p$ is the fraction of faculty who leave each year. The mean of this distribution is $1/p$. We set $1/p = 13.3$, getting $p = .075$, in good agreement with the continuous model.

# References

Parkin, Michael. 1993. Economics. New York: Addison-Wesley.

U.S. Department of Labor, Bureau of the Census. 1994. Statistical Abstract of the United States: 1994. 114th ed. Washington, DC: U.S. Government Printing Office.
# Long-Term and Transient Pay Scale for College Faculty

Christena Byerley

Christina Phillips

Cliff Sodergren

Southeast Missouri State University

Cape Girardeau, MO 63701

Advisor: Robert W. Sheets

# Introduction

Our assignment was to design a salary system for the college's faculty without cost-of-living increases, then incorporate cost-of-living increases, and finally design a transition model to move all current faculty towards our model. We decided that one of the most important issues to consider was the quality of education at the college. Hence, we established a model that pushes faculty to work toward promotion; otherwise, their raise possibilities are minimized. This should raise faculty incentive and increase publications, research, and instruction, which will raise the overall quality of education at Aluacha Balaclava.

The equation for our model without regard to rank is

$$
P_{x} = P_{1} + m\left(1 - e^{-k(x - 1)}\right).
$$

This model satisfies the given criteria and also leaves room for variations.

To incorporate cost-of-living increases into our model, we use the Consumer Price Index, a national measure of inflation.

For the transition, we take a faculty member's current salary and years in current rank and determine a salary curve that starts at that point and follows our model in the best possible way. We tested the transition model as well as possible with insufficient data by modeling salaries for fictional faculty. Results show that the transition model moves all faculty from their current salaries toward our desired system.

# Assumptions

- In the data, number of years of service means years of teaching at that institution, not overall career and not teaching at that rank. However,

Table 1.
List of variables.
| Variable | Meaning |
|---|---|
| $\mathrm{CPI}_{x-1}$ | cost-of-living factor from the previous year |
| $F_{\text{required}}$ | funds needed to give all faculty their full raise |
| $F_x$ | available funds for raises in the $x$th year |
| $k$ | decay constant |
| $m$ | multiplicative factor |
| $P_1$ | entry-level salary at a given rank |
| $P_x$ | salary in the $x$th year |
| $P'$ | salary the year before the transition plan begins |
| $x$ | years in rank |
| $x'$ | years at a given rank when the transition plan begins |
many equations throughout our model make use of the variable $x'$, which is the number of years that a faculty member has been at his or her current rank. This information was not given to us, so it will be the responsibility of the Provost to obtain it.

- Each rank should have a minimum base salary and a maximum base salary for newly hired faculty. We also presumed that some faculty with Ph.D.s could be hired above the assistant professor level, based on experience.
- The minimum time for an instructor to complete the Ph.D. degree and be promoted is approximately two years, and there is a four-year minimum of service at the assistant professor level before promotion to associate professor.
- A "normal raise," in reference to our requirement that the promotion benefit be equivalent to seven years of raises, can be approximated by the average raise over the first twenty years, $(P_{21} - P_1)/20$.
- A decaying exponential curve gives a good basis for the model.

# Motivation for the Model

We choose a decaying exponential model for a number of reasons.

- It allows for considerable raises at the beginning of a rank, but as time passes, the raises decrease. We think that this is important because it gives faculty the incentive to work toward promotion by contributing to the college through research, publications, or excellence in teaching. If they are not promoted after a certain amount of time, their salary will top out at a value that reflects their rank. Not only would promotion offer a raise, but it would also result in higher future raises.
- By decreasing the multiplicative factor while at the same time increasing the starting salary, we establish a model that causes salaries in the same rank to grow closer over time, as required.
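The convergence described in the last bullet can be seen numerically. A minimal Python sketch of the base curve $P_x = P_1 + m(1 - e^{-k(x-1)})$; the instructor-like parameters (top salary 42,000, $k = 0.10$) are illustrative, with $m$ chosen as the gap between the top salary and the starting salary:

```python
import math

def salary(P1, m, k, x):
    """Base salary curve: P_x = P_1 + m * (1 - e^{-k(x-1)})."""
    return P1 + m * (1 - math.exp(-k * (x - 1)))

TOP, K = 42_000, 0.10  # illustrative instructor-like parameters

def instructor(P1, x):
    # A larger starting salary gets a smaller multiplicative factor m = TOP - P1,
    # so all curves in the rank approach the same top salary.
    return salary(P1, TOP - P1, K, x)

gap_year_1  = instructor(37_000, 1)  - instructor(27_000, 1)   # 10,000
gap_year_20 = instructor(37_000, 20) - instructor(27_000, 20)  # about 1,500
```

The gap between the highest and lowest starters in a rank shrinks by the factor $e^{-k(x-1)}$, so two salaries that begin 10,000 apart are only about 1,500 apart after twenty years.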
The motivation behind choosing the decay constant is the number of years of experience that we want the faculty to have before the exponential term begins to level off and salary raises decrease dramatically from one year to the next. For each rank, we decide on the number of years for which a faculty member at that rank would receive $50\%$ of his or her total raises at that rank and solve for the decay constant $k$:

$$
0.5 = 1 - e^{-k(x - 1)}.
$$

We want an instructor to receive $50\%$ of his or her raises after seven years, in an attempt to motivate the instructor to work for promotion. If a promotion does not occur, raises decrease more rapidly.

For assistant and associate professors, we want this time to be longer, because they are less likely to be promoted. They take a couple of years longer than an instructor to receive about $50\%$ of their raises, reaching that point by their tenth year of service at that rank.

For a full professor, because there is no possibility for promotion, this amount of time should be even longer. A full professor will receive about $50\%$ of his or her raises by his or her twelfth year in rank.

In choosing the high and low salaries for the first year, we took into consideration a number of things. We averaged the current salaries of faculty, getting $31,919, $35,908, $44,286, and $54,228 for the ranks in ascending order. We set our base salaries taking these averages into consideration so that new faculty would be paid reasonably. We also took into consideration the statistics from the May 1994 Occupational Outlook Handbook [U.S. Department of Labor, Bureau of Labor Statistics, 1994], which gives national average salaries as $27,700, $36,800, $44,100, and $59,500.

Another source that we took into account was the *Statistical Abstract of the United States: 1994* [U.S. Dept.
of Labor, Bureau of the Census, 1994], which lists average beginning salaries offered to candidates according to their degree level and their field of concentration. By taking all of these statistics into account, we attempt to establish a fair window for entry-level salaries for all ranks and a top salary at each rank below full professor (see Table 2).

The reason that we have a different model for full professors is that we think that there should not be a maximum salary for them, because they have no possibility for promotion; if we kept the same type of model, there would be a ceiling. So, although the salaries of full professors starting at the minimum and the maximum bases do not converge as quickly as those of the other ranks, they do get slightly closer over time.

Table 2. Entry-level base salaries and top salaries.
| Rank | Minimum base | Maximum base | Top salary |
|---|---|---|---|
| Instructor | 27,000 | 37,000 | 42,000 |
| Assistant Professor | 32,000 | 47,000 | 52,000 |
| Associate Professor | 37,000 | 52,000 | 62,000 |
| Full Professor | 47,000 | 62,000 | (none) |
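The decay constants discussed earlier follow from solving $0.5 = 1 - e^{-k(x-1)}$ for $k$, i.e. $k = \ln 2 / (x - 1)$. A quick Python check; the target years are those stated in the text, and rounding to two decimals roughly reproduces the constants used in the salary curves of Table 3:

```python
import math

def decay_constant(half_raise_year):
    """Solve 0.5 = 1 - exp(-k * (x - 1)) for k, where x is the year by which
    a faculty member has received half of the total raises at that rank."""
    return math.log(2) / (half_raise_year - 1)

print(round(decay_constant(8), 2))   # instructor: 50% of raises after 7 years → 0.1
print(round(decay_constant(10), 2))  # assistant/associate: by the tenth year → 0.08
```

For full professors (half of the raises by the twelfth year), the same formula gives about 0.063, a little below the 0.07 used in Table 3, so that constant appears to have been rounded up.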
To determine the raise for promotion from one rank to another, we looked at the time to promotion and the maximum and minimum entry levels at the next rank. If a faculty member is promoted on time, he or she should get a raise equivalent to seven years' worth of normal raises (according to the problem statement). We chose $3,500 as the raise for a promotion achieved on time. This is a compromise among the calculations we made for normal raises over seven years, which were between $2,000 and $5,000, depending on the entry-level salary for the next rank. We chose $3,500 because that amount keeps faculty receiving promotions within the entry-level salary range of the next rank and is in between the high and low normal raises over seven years.

Another issue is how to allocate available funds. The first thing to consider is how much would be required to give everyone the raise that coincides with his or her salary-scale curve. If the required amount is available, then each faculty member is given his or her expected raise. If the required amount is not available, then each faculty member is given a proportion of his or her raise. If an excess is available, and if all faculty members are where they should be on the curve according to entry salary and the number of years at that level, then the excess is held over until the next year.

# The Model

Once the starting salary has been established, our model gives a salary curve for the faculty member to follow, as shown in Table 3. See Figure 1 for maximum and minimum salary curves in each rank and how they converge. We also present a salary schedule for minimum and maximum salary according to year and rank. [EDITOR'S NOTE: We omit the detailed salary schedule.]

# What If There Isn't Enough Money?

When determining the amount of the raise, it is necessary to take into consideration the available funds for the year vs. the amount that it would

Table 3.
Salary curves.
| Rank | Curve without inflation |
|---|---|
| Instructor | $P_1 + (42{,}000 - P_1)\left(1 - e^{-0.10(x-1)}\right)$ |
| Assistant Professor | $P_1 + (52{,}000 - P_1)\left(1 - e^{-0.08(x-1)}\right)$ |
| Associate Professor | $P_1 + (62{,}000 - P_1)\left(1 - e^{-0.08(x-1)}\right)$ |
| Full Professor | $P_1 + \left[10{,}000 \times \dfrac{62{,}000 - P_1}{15{,}000}\right]\left(1 - e^{-0.07(x-1)}\right)$ |
![](images/7b11f258f777402480f303c309027c9884833bb77c5eb43d478aba5f79495acb.jpg)
Figure 1. Maximum and minimum salary curves for each rank.

cost to give everyone the raise according to his or her curve. If the amount required exceeds funds available, then the raise for each faculty member will be determined in the following manner.

For each faculty member, divide the expected raise by the total of expected raises and multiply this fraction by the available funds. An exception: If the available amount is less than $10\%$ of the required amount, then no raises are given and the funds are held over until the next year. (This ensures that only substantial raises will be given, i.e., no one will receive a $0.40 raise.) We multiply each of the formulas in Table 3 by

$$
\frac{(P_{x} - P_{x - 1}) F_{x}}{F_{\text{required}}}.
$$

If adequate funds are not available in a certain year, then everyone will be below their salary curve. The following year, the way to determine who gets what percentage of the available funds is again to use the formulas to apportion the funds proportionally.

A faculty member who receives a promotion will jump to the equation for the next rank. The entry-level salary at the new rank will be the previous salary plus a raise of:

- $3,500, if promoted in the least amount of time (i.e., two years for instructor to assistant professor, four years for assistant professor to associate professor, and seven years for associate professor to full professor);
- $2,500, if promoted within five years of the minimum number of years;
- $1,000, if promoted any time later.

# Cost of Living

The model is the same as the previous model with salaries multiplied by a factor that takes cost of living into consideration. We chose the Consumer Price Index (CPI) because it gives an all-encompassing measure of the percentage increase in prices of goods and services in the U.S.
We use the CPI from the previous year to determine the rise in cost of living for the current year. Many other indices can be inaccurate because they are based on projections of what is expected to happen in the future. We multiply each of the formulas in Table 3 by $(1 + \mathrm{CPI}_{x - 1})$.

# Transition

The transition model takes a current faculty member and finds a salary curve that fits our model while considering current salary and years in rank. For each rank, current salary and years in rank will fall

- above our maximum salary curve for that rank,
- below the minimum curve, or
- between the curves.

We consider each possibility for each rank.

# Above the Maximum Salary Curve

We cannot fit faculty who currently are above the maximum salary curve for their rank into our salary range, because we cannot cut their salary. Because we have to allow them to receive raises but do not want them to receive very large ones, we let them increase at only the same rate as the maximum salary curve for their rank. To find their salary curve, we need to project backwards to determine a corresponding $P_{1}$. We use the formulas in Table 3, substituting $P'$ (salary the year before the transition begins) for $P_{x}$ and $x'$ (number of years at a given rank when the transition begins) for $x$.

# Below the Minimum Salary Curve

For those who fall below the minimum salary curve for their rank, we increase their salaries so that over a five-year period they move into the salary range for their years at that rank. Our model for this transition calculates what the faculty member's salary should be in five years to fit the minimum salary curve, then divides the difference into five equal increments, for an equal raise each year.

We have

$$
P_{x} = P' + (x - x') \frac{P_{1} + m\left(1 - e^{-k(x' + 4)}\right) - P'}{5}.
$$

This equation is used for only five years after the implementation of our model.
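Both transition computations can be sketched in a few lines of Python. `project_back` solves $P' = P_1 + m(1 - e^{-k(x'-1)})$ for $P_1$ with $m$ taken as the gap to the top salary, as in the back-projection described above; `catch_up` is the five-year formula for faculty below the minimum curve. The instructor parameters (top 42,000, $k = 0.10$, minimum base 27,000) come from the tables; the sample salaries are invented:

```python
import math

TOP, K, P1_MIN = 42_000, 0.10, 27_000  # instructor-rank parameters from the tables

def curve(P1, x, top=TOP, k=K):
    """Salary curve P_1 + (top - P_1)(1 - e^{-k(x-1)})."""
    return P1 + (top - P1) * (1 - math.exp(-k * (x - 1)))

def project_back(P_prime, x_prime, top=TOP, k=K):
    """Solve curve(P1, x') = P' for P1: the virtual starting salary whose
    curve passes through the current point (x', P')."""
    decay = math.exp(-k * (x_prime - 1))
    return (P_prime - top * (1 - decay)) / decay

def catch_up(P_prime, x_prime, x, P1=P1_MIN, top=TOP, k=K):
    """Five equal yearly raises that reach the minimum curve after five years."""
    target = curve(P1, x_prime + 5)  # minimum-curve salary five years out
    return P_prime + (x - x_prime) * (target - P_prime) / 5

# The back-projected curve reproduces the current salary:
p1 = project_back(40_000, 6)  # hypothetical instructor above the maximum curve
assert abs(curve(p1, 6) - 40_000) < 1e-6
```

The same `project_back` also covers the "between the two curves" case, since there the faculty member's own curve is found the same way; only the above-maximum case yields a $P_1$ beyond the normal entry window.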
At the end of this five-year period, all faculty originally falling below the minimum salary curves will have caught up to these minimums, and their salaries will follow the minimum curve from then on.

# Between the Two Curves

If a faculty member's current year in rank and salary fall between our maximum and minimum salary curves, we implement our original model. For each rank, we can find the curve that fits our model and passes through the faculty member's current point. To do this, we project backwards to find the value of $P_{1}$ for such a curve, then substitute into the appropriate formula.

# Testing the Model

As a preliminary test of our model, we developed tables of salaries for the minimum and maximum entry salaries for each rank based on the number of years at that rank. These tables were the basis for Figure 1.

Since we do not know years in rank for actual faculty members in the given data, we could not examine how their salaries would change under the transition model (which would be ideal). Instead, we generated several contrived faculty members at random in each rank who are above the maximum salary curve, below the minimum salary curve, or between the two curves. We modeled the curves for these fictitious faculty and observed a smooth transition from current salaries to our model.

For further analysis, we could also test whether the model fits within the financial capabilities of the college. If not, then the model could be scaled down by lowering the entry salaries or by increasing the decay constant.

# Strengths and Weaknesses

The most effective way to improve an institution's prestige and the quality of its degrees is to improve the quality of its faculty. This is one of the most important strengths of our model.
We have attempted to improve the quality of the faculty at Aluacha Balaclava College by creating a large window for entry salaries, pushing faculty toward promotion, and setting salaries comparable to the national averages.

Our window of entry has established a very wide range of salaries for prospective faculty. This gives Aluacha Balaclava College the opportunity to hire the best available faculty. In turn, these new faculty will boost the quality of teaching and raise the overall rating of the college. At the same time, our minimum entry salary helps keep the faculty salary budget down by not overpaying existing faculty who meet only average criteria.

This window of entry salary may also be a weakness, because it will not allow the college to bring in more-prestigious instructors who expect a higher salary than the scale allows. We compensate with an "extraordinary circumstances" clause: when a faculty member does enter at a salary above what our maximum curve allows, he or she follows a salary scale equivalent to that of an existing faculty member who is making more than our curve allows during the transition.

The strength of the transition phase is that it brings current faculty up to a fair salary in a relatively short amount of time (five years). The shortness of the transition period should help alleviate any animosity among current faculty that has been caused by their salaries and that resulted in the departure of the previous Provost. The shortness, however, may also produce a financial crunch for the college, which is a weakness. A longer transition period would lessen the financial burden in the first five years.

Our model does not take economic deflation into consideration. Although deflation rarely occurs in the U.S., if it does occur, we do not want to give the faculty a salary cut. Instead, we use an inflation multiplier of 1 instead of $(1 + \mathrm{CPI}_{x - 1})$ .
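The five-year catch-up formula and the deflation floor can be sketched together in a few lines. This is a minimal illustration, not the authors' implementation: `P1`, `m`, and `k` stand in for a rank's Table 3 parameters, and the numeric values below are hypothetical.

```python
import math

def catch_up_salary(x, P_prime, x_prime, P1, m, k):
    """Below-minimum transition: each year adds one fifth of the gap to the
    minimum-curve target five years out (valid for x' < x <= x' + 5)."""
    target = P1 + m * (1 - math.exp(-k * (x_prime + 4)))
    return P_prime + (x - x_prime) * (target - P_prime) / 5

def inflation_multiplier(cpi_prev):
    """Last year's CPI, floored at 1 so deflation never cuts a salary."""
    return max(1.0, 1.0 + cpi_prev)

# Hypothetical assistant professor: transition begins at x' = 3 years in
# rank on $31,000; P1 = $30,000, m = $25,000, k = 0.05 are illustrative
# stand-ins, not the paper's Table 3 values.
fifth_year = catch_up_salary(3 + 5, 31_000, 3, 30_000, 25_000, 0.05)
# In the fifth year (x = x' + 5) the salary reaches the minimum-curve target.
```

After the fifth year, the faculty member simply follows the minimum curve, with each year's result multiplied by the inflation factor.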
The CPI factor also poses another weakness, because it is a national rather than a local average of cost-of-living increases. If an index were available that estimates the cost-of-living increase in the area surrounding Aluacha Balaclava College, it would likely give a more accurate estimate of inflation. Another weakness is that, by using the previous year for reference, there is a lag in cost-of-living adjustments.

# References

U.S. Department of Commerce, Bureau of the Census. 1994. Statistical Abstract of the United States: 1994. 114th ed. Washington, DC: U.S. Government Printing Office.

U.S. Department of Labor, Bureau of Labor Statistics. 1994. Occupational Outlook Handbook. May 1994. Washington, DC: U.S. Government Printing Office.

# How to Keep Your Job as Provost

Liam Forbes

Marcus Martin

Michael Schmahl

University of Alaska Fairbanks

Fairbanks, AK 99775

lforbes@arsc.edu

{fsmgm, fvmgs}@aurora.alaska.edu

Advisor: John P. Lambert

# Introduction

Our salary system proposal quickly brings faculty salaries into line without cutting any salaries. In accordance with the principles given us, we have created a function that assigns each faculty member an ideal salary, which they will be paid if the budget is large enough.

Bringing salaries up to scale is easy if there is enough money. For the quite likely case of not enough money, we present an algorithm that brings as many faculty as possible up to the curve as rapidly as possible, within the limits of the available budget. We call this algorithm for figuring annual raises the scaling method.

After giving everyone a minimum nominal raise, the scaling method finds a scaled-down version of the ideal salary function and brings all faculty below this curve up to the curve, using the rest of the money available for raises in the process.

Cost of living is applied each year before the annual raises.
The first method applies the cost-of-living increase to the ideal salary curve and allows the scaling method to naturally bring faculty salaries up to this curve. The second method applies the cost-of-living increase simultaneously to the ideal curve and the current faculty salaries. We recommend the first method. + +The transition phase will be very easy. There is no difference in our algorithm over the long term and for the short-term transition period. Our method instantly brings those faculty who are drastically behind scale up to a salary similar to their peers and continues to collapse the discrepancies between faculty at the same rank and experience. + +Table 1. Glossary of symbols. + +
| Symbol | Meaning |
|--------|---------|
| $a$, $b$ | parameters in the logistic equation |
| $c$ | scaling factor for the ideal salary curve |
| $e$ | base of the natural logarithm, approximately 2.718 |
| $\mathrm{COL}$ | cost-of-living increase, expressed as a percentage |
| $D_i$ | salary deficit of faculty member $i$ |
| $i$ | a given faculty member |
| $f(x)$ | the general logistic function |
| $I(x)$ | the ideal salary function |
| $I'(x)$ | a scaled-down version of $I(x)$, with parameter $k'$ instead of $k$ |
| $k$ | a parameter in the logistic equation; the absolute maximum value for the curve |
| $k'$ | the scaled-down constant for the scaled-down ideal salary curve |
| $k_{\mathrm{hi}}$, $k_{\mathrm{low}}$ | the high and low endpoints in a binary search for an optimal $k'$ |
| $M$ | the amount of money available for raises, after nominal raises |
| $R$ | the minimum annual raise |
| $S_i$, $S_i^+$ | the current and next year's salaries for faculty member $i$ |
| $T_i^+$ | temporary guess for next year's salary for faculty member $i$ |
| $x$, $x_i$ | the faculty indexing function; the index of a given faculty member $i$ |
+ +# Assumptions and Justifications + +- No individual faculty member's salary may ever be decreased. This is a clarification and expansion of the principle given that existing faculty salaries cannot decrease during the transition. If this is true of the transition period, then by natural extension it should be true in the future. +- Not all the money available for raises each year necessarily needs to be distributed, if all faculty are at their ideal salary. It is unlikely that an administration would want to pay faculty members more than the salary system suggests that they deserve. +- Salary rates for faculty are not based on merit but solely on longevity and rank. There was no mention of merit in the instructions from the Provost. In our salary system, all faculty with the same longevity and the same rank have the same ideal salary. + +# Building the Salary System + +# The Ideal Salary Function + +The first step in designing a salary system is deciding what salary each faculty member would receive if there were always as much money in the budget as needed. We develop a curve that satisfies some basic criteria. + +# Curve Evaluation Criteria + +1. A promotion is worth seven years of salary increases (Principle 3). +2. New instructors are paid $27,000 (Principle 1). + 3. New assistant professors are paid $32,000 (Principle 1). + 4. Full professors with 25 or more years in service should make approximately $64,000 (Principle 4). +5. Full professors with more than 25 years of service should also make approximately $64,000 but more than a full professor with only 25 years of service (Principles 4, 5). +6. Salary increases should diminish over time (Principle 4). + +We created an indexing system whereby faculty members are given a number equal to their years in service plus seven times their rank. The rank values are 0 for instructors, 1 for assistant professors, 2 for associate professors, and 3 for full professors. 
Thus the index $x$ equals seven times the current rank number plus the number of years in service. A single salary system is used for all of the ranks, and a promotion is equivalent to seven years of service. This index is the input for the salary functions that will be developed in this paper.

# Equations of the Curve Evaluation Criteria

- $x =$ years + 7 × rank.
- Ideal salary is a function $I(x)$ , with

$$
I (0) = \$ 27{,}000.
$$

$$
I (7) = \$ 32{,}000.
$$

$$
I (46) = \$ 64{,}000 \pm 5 \%.
$$

$$
I (71) = \$ 64{,}000 \pm 10 \%; \quad I (71) > I (46).
$$

(The number 71 is arbitrary; it represents a full professor with fifty years in service.)

- $d^{2}I / dx^{2} < 0$ for large $x$.

We examined polynomial, square-root, cube-root, exponential, power, and logistic curves. We chose the logistic curve as the ideal salary curve because it meets all of the criteria.

Logistic function: $f(x) = \frac{k}{1 + ae^{bx}}$ .

The logistic function meets Criterion 6 for $a, k > 0$ and $b < 0$ ; it is bounded above by $k$ as $x \to \infty$ , so $k$ serves as a cap on the ideal salary function. If $k$ is known, then $a$ and $b$ can be solved for analytically using Criteria 2 and 3.

We explored a range of possible $k$ values, calculating $I(46)$ and $I(71)$ for each $k$ . These values are used to extrapolate the appropriate error bounds at both indices. The two ranges did not quite overlap, so a value for $k$ was chosen between the two ranges. The parameters $a$ and $b$ were solved for at this $k$ , yielding

$$
k = 83{,}000, \quad a = 56 / 27 \approx 2.07, \quad b = -0.0376.
$$

This procedure generated the ideal salary function:

$$
\text {Ideal Salary Function:} \quad I (x) = 83{,}000 \left(1 + \frac {56 e ^ {- 0.0376 x}}{27}\right) ^ {-1}.
$$

Figure 1 shows the ideal salary curve and the faculty salaries as a function of index.
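These rounded parameters can be checked numerically against the entry-salary criteria. A minimal sketch (not the team's code), using only the values derived above:

```python
import math

K, A, B = 83_000, 56 / 27, -0.0376   # rounded parameters from above

def ideal_salary(x):
    """I(x) = k / (1 + a e^{bx}), with x = years in service + 7 * rank."""
    return K / (1 + A * math.exp(B * x))

# Criterion 2: I(0) = 83,000 * 27/83 = $27,000 exactly.
assert abs(ideal_salary(0) - 27_000) < 1e-6
# Criterion 3: I(7) comes out within a few dollars of $32,000.
assert abs(ideal_salary(7) - 32_000) < 50
# Criterion 6: the curve is increasing and capped by k = 83,000.
assert ideal_salary(46) < ideal_salary(71) < K
```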
Notice that a few points are drastically above the curve, while most are a few thousand dollars under the curve. Those faculty who are above the curve will henceforth be called Red-Circle faculty [Henrici 1980].

![](images/5876e1831b851584765d8d852b59e6c5249ad870ac39209ddfb78f32c205435e.jpg)
Figure 1. Ideal salary curve and current faculty salaries.

# The Annual Raise

Apportioning the money available for raises is a matter of deciding how to move people toward the ideal salary curve. All faculty must get a raise in any year in which money is available. The principles do not state how much each faculty member must get, so we leave this up to the Provost. Whatever minimum raise is decided upon, we call $R$ .

After minimum raises, the difference between where the salaries are and where the ideal salary curve would have them is called the salary deficit:

$$
D _ {i} = I (x _ {i}) - (S _ {i} + R).
$$

The money left over after minimum raises will be called $M$ . In both methods, if there is not enough money to give everybody the minimum raise, then the money available for raises is divided evenly among the faculty.

# The Deficit-Proportion Method

This method gives everyone a mandatory raise $(R)$ and allocates the remaining money $(M)$ among the faculty in proportion to their deficit. For each faculty member $i$ , next year's salary $S_{i}^{+}$ will increase from the current year's salary $S_{i}$ according to

$$
S _ {i} ^ {+} = S _ {i} + R + D _ {i} \frac {M}{\sum D _ {i}},
$$

where $\sum D_{i}$ is the sum of the deficits of the entire faculty. If a faculty member has a surplus (a negative deficit), it is treated as zero for this sum.

# The Scaling Method

A second scheme is to attempt to bring as many people as possible up to the same fraction of the ideal salary curve.
This is done by scaling the ideal salary curve down so that the amount needed to raise everyone whose salary is below the curve up to the scaled-down curve equals the amount available for raises. The model again starts with the mandatory raise and then uses a binary search to find this scaled curve. The scaled function is called $I'(x) = cI(x)$ . Under this method, we have

$$
S _ {i} ^ {+} = \max \{S _ {i} + R, c I (x _ {i}) \} \qquad \text {for some } c.
$$

The value of $c$ , a constant between 0 and 1, must be found numerically. Because of this fact and roundoff error, we cannot necessarily find a scalar $c$ that makes the total of the new salaries precisely match the salary budget. The largest possible roundoff error for $N$ faculty, using whole dollars, occurs when every faculty member would otherwise be given 50 cents over a whole dollar amount. The roundoff error in that extreme case gives a tolerance value of $\$ N / 2$ .

Notice that $cI(x) = ck / (1 + ae^{bx})$ . Instead of looking for a value for $c$ , our algorithm looks for a value for $k' = ck$ . The upper bound for $k'$ is the actual value of $k$ . The lower bound is $I(0)$ , because we can be sure that all salaries are above that value. Solving gives a new function $I'(x) = k' / (1 + ae^{bx})$ .

# Procedure for the Scaling Method:

1. If there is no money available for raises, then we are finished.
2. If there is money, but not enough to give the full mandatory raise, then divide the money equally among all faculty, and we are finished.

3. If there is money left over after the mandatory raise, then find a $k'$ that will put the total salaries within $\$ N / 2$ of the given salary budget, using the following binary search:

(a) $k_{\mathrm{hi}} \gets k$ .
(b) $k_{\mathrm{low}} \gets I(0)$ .
(c) $k^{\prime}\gets (k_{\mathrm{hi}} + k_{\mathrm{low}}) / 2.$
(d) For each faculty member, assign a temporary salary $T_{i}^{+} = \max \{S_{i} + R, I'(x_{i})\}$ .
(e) If the sum of the $T_{i}^{+}$ is within $\$ N / 2$ of the salary budget, then end the search.
(f) Otherwise, if $\sum T_{i}^{+} > \text{Budget}$ , then $k_{\mathrm{hi}} \gets k'$ and return to Step (c).
(g) Otherwise, $k_{\mathrm{low}} \gets k'$ and return to Step (c).

4. $S_{i}^{+}\gets T_{i}^{+}$ .

# Comparison

The deficit-proportion method gives faculty whose salaries are below the ideal salary curve a raise proportional to the amount below the curve. This is fair in that it gives a larger raise to people who are further away from their deserved salary. It is unfair in that when two faculty at the same index start at different salaries, the person who starts with the higher salary will continue to have the higher salary until they both reach the ideal salary curve. Figure 2 shows this method applied to the original data set. Note that although all of the faculty salaries have risen, those that were the furthest below the curve are still behind their peers.

![](images/a1bdd364cd5a873b1119b5d7da0c084049d32667908925a7ca19fb6519c358ae.jpg)
Figure 2. Deficit-proportional method applied to the faculty data. Budget: $9.5 million.

Like the deficit-proportion method, the scaling method is fair in that it gives a larger raise to people further away from their deserved salary. But it does not have the drawback that faculty with lower salaries will be perpetually lower than their peers. The scaling method equalizes the salaries of faculty with the same index much more rapidly than the deficit-proportion method (see Figure 3). For these reasons, the scaling method is superior.

![](images/7df451853417f8df2f85576aac541c905c47cb0e372051e8b67611b6cc0927ec.jpg)
Figure 3. Scaling method applied to the faculty data. Budget: $9.5 million.

# Cost-of-Living Increases

According to Henrici [1980, 162], "Some employers, when they adjust the salary curve upward, give everyone a 'general increase' at the same time. Others prefer to carry this adjustment into individual salaries throughout the year." Therefore, there are two ways of granting cost-of-living increases. Both involve moving the salary curve upward. One involves moving only the salary curve upward (changing $k$ ), and the other involves moving the ideal salary curve upward and simultaneously raising current salaries by the same factor. We present both options and discuss their advantages.

# COLA Method I: Raising the Curve

For a given cost-of-living increase $\mathrm{COL}$ , set $k \gets k(1 + \mathrm{COL})$ . This has the effect of raising the ideal salary curve by a factor of $\mathrm{COL}$ (see Figure 4). All other parameters are unchanged. The corresponding raises occur naturally at the end-of-year salary computations, which are done with the scaling algorithm.

![](images/f85143da00cae72a045aec5f92e8516826a41fc6142e6d13c58badafab180f3d.jpg)
Figure 4. Scaling method with cost-of-living applied to the salary curve. Cost of living: +5%. New budget: $9.5 million.

# COLA Method II: Raising Salaries, Too

For a given cost-of-living increase $\mathrm{COL}$ , set $k \gets k(1 + \mathrm{COL})$ . Then, if there is enough money, raise all salaries by multiplying them by $(1 + \mathrm{COL})$ and then apply the scaling algorithm. Otherwise, raise all salaries by the maximum percentage possible:

$$
S _ {i} ^ {+} = S _ {i} \frac {\sum S + M}{\sum S}.
$$

# Comparison

By raising the curve but not immediately raising salaries, Method I brings the Red Circles (those faculty being paid above their ideal) into line more quickly. This method does not immediately increase the cost of salaries, which is good because we are not sure there is money available when cost-of-living is factored in. If there is money, Method I will bring more people up more quickly than Method II will, since Method II depletes the discretionary budget. Method II increases the salary of all faculty, including the Red-Circle faculty.
Method I does not, and giving a "cost-of-living" increase selectively could upset these Red Circles, even though they are already making an above-scale salary. + +Once the Red Circles have left or retired, all faculty will be at or below the ideal curve. This neutralizes some of the advantages of Method I. In the ideal case, where all faculty are being paid on the ideal curve and there is enough money to bring them to the new ideal curve, the two methods are equivalent, since the result will always be to bring them up to the curve. + +# Transition + +The transition period for the deficit-proportion and the COLA-II models is poor unless there is a large influx of money. The scaling and COLA-I models are better adapted to lower salary budgets. The length of the transition period is dependent upon how long the Red-Circle employees remain at the college. This is because the people below the curve can make the transition quickly if there is enough money in the budget, but there is no quick way to bring those people who are far above the curve back down. + +The procedure during the transition period is the same as the procedure after the transition period. It is not even necessary to make such a distinction. This is one of the major strengths of our methods. + +# Testing and Analysis + +# Creating a Population Model + +To test our methods, we created a program for each salary scheme. These programs model the movement of faculty through the college and include changing faculty indices based on accumulated years teaching, earning promotions when eligible, retiring eligible faculty, and hiring new faculty (to eliminate any gaps in numbers of faculty at a given rank). + +# Program Assumptions + +- The number of faculty in each rank should be constant. We have a decidedly optimistic view of the development of the college. The computer programs are based on the college remaining at least at the current level of faculty employment. 
Losing faculty would have an adverse effect on the college's income potential and the number of classes offered to students. Expansion of the college can easily be incorporated into the model by allowing more faculty to be hired at each rank. Also, by requiring that the number of faculty at each rank remain constant, we can study the population and salary shifts as individuals move through the system. +- After 25 years, any faculty are eligible to retire, regardless of rank. There is no reason to prevent instructors, assistant professors, and associate professors from retiring once they have achieved longevity. Anyone who reaches twenty-five years of teaching is eligible for the college's retirement plan (whatever that may be). This assumption comes from observing the common practices of universities and businesses across the nation. Retirement is a separate issue from an individual deciding to move on to other job possibilities, which also must be dealt with in the computer program. Retirement is handled as part of promotion considerations. +- There is no forced retirement. In order to allow maximum flexibility in the model, we do not impose forced retirement. This allows the college to add and modify its own policies to the model with a minimum of effort. Also, forcing retirement adds nothing of benefit to our model. +- Promotions are generated on a probabilistic basis. The problem specifically states that promotions are decided by the Provost herself. In order to model how promotions will affect an individual's salary, we needed to generate a probability that an individual will be promoted if eligible. This way we could model a random determination of promotions or tightly control how often promotions are earned. The college could also use this program as a tool to review a wide range of promotion schemes by altering certain parameters in the program. +- An individual eligible for promotion who is passed over will leave the college. 
This is a weakness of the program, but we used it to simplify our model of human behavior. It is hard to say how people will react to being passed over for promotion, so we decided for simplicity that someone passed over for promotion will be more likely to accept a better position somewhere else. This way, the program can simulate individuals leaving for reasons other than retirement.
- Promotion from assistant professor to associate professor is independent of the number of years served. The problem states specific rules for all promotions except this one; therefore, in order to allow the most flexibility in the models, we do not impose any requirements other than that one must be an assistant professor in order to be promoted to associate professor. This way, the college can add requirements or expand them as necessary. This same expansion can easily be applied to existing promotion requirements, due to the modular structure of the final programs.

- The college may hire faculty from outside the college into any rank. Sometimes gaps in the numbers of personnel at a particular rank will appear. In order to keep a constant number of faculty at each rank, whenever a gap occurs, a new faculty member is automatically hired from the outside community. The starting salary for this new person is determined by placing them directly on the ideal salary curve. In this way, the hiring of new faculty members helps develop the transition from the old salaries to the new compensation scheme.

# The Flux Model

The program model resulting from the above assumptions allows a great deal of built-in flexibility. We feel this is crucial: it shows that the model and the testing programs can be applied to current and future situations of the college. The programs themselves can be used to examine the models and changes in data supplied to the models.
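As an illustration of what such a testing program computes each simulated year, the scaling method's raise step (Steps 1-4 of the procedure above) might look like the following. This is a minimal Python stand-in, not the authors' code; the parameter values come from the ideal salary function derived earlier.

```python
import math

K, A, B = 83_000, 56 / 27, -0.0376   # ideal-curve parameters from above

def ideal(x, k=K):
    """Ideal (or scaled-down) salary curve k / (1 + a e^{bx})."""
    return k / (1 + A * math.exp(B * x))

def scaling_raises(salaries, indices, raise_money, R):
    """One year of the scaling method: after the mandatory raise R, binary
    search for k' so total salaries land within $N/2 of the budget."""
    n = len(salaries)
    if raise_money <= 0:                          # Step 1: nothing to give
        return list(salaries)
    if raise_money < n * R:                       # Step 2: split evenly
        return [s + raise_money / n for s in salaries]
    budget = sum(salaries) + raise_money
    k_hi, k_lo = K, ideal(0)                      # Steps (a)-(b)
    temp = list(salaries)
    for _ in range(64):                           # binary search converges fast
        k_prime = (k_hi + k_lo) / 2               # Step (c)
        temp = [max(s + R, ideal(x, k_prime))     # Step (d)
                for s, x in zip(salaries, indices)]
        if abs(sum(temp) - budget) <= n / 2:      # Step (e): within $N/2
            break
        if sum(temp) > budget:                    # Step (f)
            k_hi = k_prime
        else:                                     # Step (g)
            k_lo = k_prime
    return temp                                   # Step 4
```

For example, three underpaid faculty at indices 0, 7, and 14 with $10,000 of raise money are all moved up to the same scaled-down curve (or to the mandatory-raise floor), with the total landing within the whole-dollar tolerance.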
The fluctuation after which this program is modeled is the combined effect of retirement, individuals leaving, promotion, and new hires. Figure 5 shows the origin and direction of each of these conditions.

![](images/8bd5f25efdae44dc913f7c161b75d789695a2d70338654fafe3826b463d91eaa.jpg)
Figure 5. The fluctuation model—a basis for programming the four models.

# Faculty Fluctuations and Program Results

An initial analysis of the output and the performance of the programs was done using Mathematica and its animation functions. The data were plotted, one frame per year. Then the frames were animated using a built-in animation function. In this way, we could see how the faculty population moved towards the ideal salary curve as their indices rose because of time and promotion. Often, unexplained data movement led to a re-analysis of the program and sometimes even of the model.

The conclusion obtained from animating series of data sets over 10, 20, and 100 years is that the programs and models do create a feasible salary scheme that the Provost can use. However, determining which method was the most fair and reasonable required a more rigorous examination of the output data and the models. Once again, Mathematica was used to do statistical analysis of the models.

# Analyzing Results of the Models

A fair salary system gives equal rewards for equal services. Using the index system, all people at a given index are considered to have the same value. Thus, a fair system minimizes the differences in salary between people at the same index. To determine which model is the most fair, and therefore the best, our programs were used to simulate the effects of the different salary schemes on the relative standard deviation of salaries at any given index.

Each salary scheme was run for ten years at budget increases of $2\%$ and $5\%$ per year. The relative standard deviation of salaries at the same index was calculated for each year.
Then the relative standard deviations over the entire range of indices were averaged. Those indices that had 0 or 1 people were not included in this average, since a standard deviation could not be calculated. The results are shown graphically in Figures 6-9 for a $2\%$ increase in budget per year (results are similar for $5\%$ per year). Ten replicates of this experiment were performed to show the consistency of this calculation.

The scaling and the COLA-I models were consistent over multiple runs and decreased relatively quickly and smoothly to a relative standard deviation of $2\%$ . The COLA-I model was superior to the scaling model when there was a larger yearly increase in the budget; because the COLA-I model scales up over time, it can use all of the money that is available. The deficit-proportional model decreased to $2\%$ relative standard deviation almost as well as the scaling and COLA-I models, but it was more prone to temporary jumps up to a higher relative standard deviation than the other two models. The COLA-II model was the least consistent and took ten years to decrease to a relative standard deviation of $7.5\%$ .

![](images/f1fcc3fa81d512b5bfc566055487b7d92af2d1150719aa6c6443d4594e77c649.jpg)
Figure 6. Average relative standard deviations of the deficit-proportional model, run at a $2\%$ increase in budget per year. Ten different runs are displayed to show the uncertainty.

![](images/465bc893230fdacbac8866978d74021814941ea97486773a35dbcd59fac31e95.jpg)
Figure 7. Average relative standard deviations of the scaling model, run at a $2\%$ increase in budget per year. Ten different runs are displayed to show the uncertainty.

![](images/16771e703911248a7422bf8a15ee1c81aa43a49e2808e459f691cd966c9319ac.jpg)
Figure 8.
Average relative standard deviations of the cost-of-living model (COLA-I), run at a $2\%$ increase in budget per year and a $3\%$ increase in the COLA value per year. Ten different runs are displayed to show the uncertainty. + +![](images/0af26e239b5bedd1006de3b315152adaadfbf6a0ebb101e98ad018464caa10cd.jpg) +Figure 9. Average relative standard deviations of the cost-of-living model COLA-II, run at a $2\%$ increase in budget per year and a $3\%$ increase in the COLA value per year. Ten different runs are displayed to show the uncertainty. + +These results show that the scaling and COLA-I models are superior to the deficit-proportional and COLA-II models, because they have less variance in the salaries of equivalent faculty members. + +# References + +Henrici, Stanley B. 1980. Salary Management for the Nonspecialist. New York: Amacom. + +Marshall, Don R. 1978. Successful Techniques for Solving Employee Compensation Problems. New York: Wiley. + +# Judge's Commentary: The Outstanding Faculty Salaries Papers + +Donald E. Miller + +Dept. of Mathematics + +Saint Mary's College + +Notre Dame, IN 46556 + +dmiller@saintmarys.edu + +The substance of this problem was recognizable by the judges from academic institutions, causing many of them to comment on how similar it is to the situation at their own institutions. The basic problem of designing a new pay system satisfying the specified criteria was simple enough that most teams recognized it as a curve-fitting problem. They also, with varying degrees of success, were able to find a model that satisfied most of the criteria. + +Implementing the model, however, with limited resources, on salaries of current faculty at Aluacha Balaclava turned out to be deceptively more difficult. Most teams recognized that the salary curve would need to have a negative second derivative in order for salary increments to decrease with experience. 
Models included variations of logistic, power, exponential, root, and polynomial functions, with the exponential function appearing most often. In their Outstanding paper, the team from the University of Alaska Fairbanks indicated that they had experimented with six different functions before settling on the logistic model because it met all the criteria.

Some teams used an appropriate model for the first twenty-five years of experience, then froze everyone's salary, thus violating the principle that all faculty should get a raise any year that money is available. Most teams used the same model for all four ranks and the entire tenure of the faculty member. However, others recognized the instructor rank as different from the other ranks and developed a separate model for it. Still others developed a separate model for the full professor rank, arguing that it was different because a full professor has no chance for promotion. The Outstanding paper from Southeast Missouri State University even developed a separate model for each rank.

Modelers can frequently gain credibility by demonstrating that they understand the problem in its context. With our salary data, this could have been accomplished by recognizing some of the salaries as outliers that would need to be dealt with individually. The data point representing a professor with two years of experience and a salary of $85,500 is clearly an outlier and will be virtually impossible to bring into line with any reasonable model. This person may be some sort of a "superstar" who is not expected to fit into the model salary structure. One of the judges suggested that this salary belonged to the football coach. Whatever the case, the modeler should be willing to point out or question such unusual situations, but few of the teams did. One exception was the Outstanding paper from the North Carolina School of Science and Mathematics.
This team showed that, "...with the exception of a few grossly overpaid faculty, the problem of unfairness would be solved" in five years.

Better papers were distinguished by a complete and mature treatment of the assumptions as well as a precise plan of implementation. The team from the University of Alaska Fairbanks even offered two plans, showed graphically how they differed, and then recommended one over the other. All the Outstanding papers also did some sensitivity analysis and included cost of living in their implementation plans. Another feature of these papers was a well-written executive summary.

# About the Author

Donald Miller is Associate Professor and Chair of Mathematics at Saint Mary's College. He has served as an associate judge for the MCM for three years and prior to that mentored two Meritorious teams. He has done considerable consulting and research in the areas of modeling and applied statistics. He is currently a member of SIAM's Education Committee and president of the Indiana Section of the Mathematical Association of America.

\ No newline at end of file
diff --git a/MCM/1995-2008/1996MCM/1996MCM.md b/MCM/1995-2008/1996MCM/1996MCM.md
new file mode 100644
index 0000000000000000000000000000000000000000..39df241e1bf7df737ef9f02a890029dc305db951
--- /dev/null
+++ b/MCM/1995-2008/1996MCM/1996MCM.md
@@ -0,0 +1,3969 @@

# The U

# M

Publisher

COMAP, Inc.

Executive Publisher

Solomon A. Garfunkel

Editor

Paul J. Campbell

Campus Box 194

Beloit College

700 College St.

Beloit, WI 53511-5595

campbell@beloit.edu

On Jargon Editor

Yves Nievergelt

Department of Mathematics

Eastern Washington University

Cheney, WA 99004

ynievergelt@ewu.edu

Reviews Editor

James M. Cargal

Mathematics Dept.

Troy State University Montgomery

P.O. Drawer 4419

Montgomery, AL 36103

JMCargal@aol.com

Development Director

Laurie W.
Aragon + +Creative Director + +Roger Slade + +Production Manager + +George W. Ward + +Project Manager + +Roland Cheyney + +Copy Editors + +Seth A. Maislin + +Emily T. Sacca + +Distribution Manager + +Bill Whalen + +Executive Assistant + +Annette Moccia + +Graphic Designer + +Daiva Kiliulis + +# AP Journal + +Vol. 17, No. 3 + +# Associate Editors + +Don Adolphson + +Ron Barnes + +Arthur Benjamin + +James M. Cargal + +Murray K. Clayton + +Courtney S. Coleman + +Linda L. Deneen + +Leah Edelstein-Keshet + +James P. Fink + +Solomon A. Garfunkel + +William B. Gearhart + +William C. Giauque + +Richard Haberman + +Charles E. Lienert + +Peter A. Lindstrom + +Walter Meyer + +Gary Musser + +Yves Nievergelt + +John S. Robertson + +Garry H. Rodrigue + +Ned W. Schillow + +Philip D. Straffin + +J.T. Sutcliffe + +Donna M. Szott + +Gerald D. Taylor + +Maynard Thompson + +Ken Travers + +Robert E.D. ("Gene") Woolsey + +Brigham Young University + +University of Houston-Downtown + +Harvey Mudd College + +Troy State University Montgomery + +University of Wisconsin—Madison + +Harvey Mudd College + +University of Minnesota, Duluth + +University of British Columbia + +Gettysburg College + +COMAP, Inc. + +California State University, Fullerton + +Brigham Young University + +Southern Methodist University + +Metropolitan State College + +North Lake College + +Adelphi University + +Oregon State University + +Eastern Washington University + +Georgia College and State University + +Lawrence Livermore Laboratory + +Lehigh Carbon Community College +Beloit College + +St. Mark's School, Dallas + +Comm. College of Allegheny County + +Colorado State University + +Indiana University + +University of Illinois and NSF + +Colorado School of Mines + +# Subscription Rates + +# MEMBERSHIP PLUS FOR INDIVIDUAL SUBSCRIBERS + +Individuals subscribe to The UMAP Journal through COMAP's Membership Plus. 
This subscription includes print copies of the quarterly issues of The UMAP Journal, our annual collection UMAP Modules: Tools for Teaching, our organizational newsletter Consortium, on-line membership that allows members to search our on-line catalog, download COMAP print materials, and reproduce them for use in their classes, and a $10\%$ discount on all COMAP materials.

(Domestic) #2020 $69

(Outside U.S.) #2021 $79

# INSTITUTIONAL PLUS MEMBERSHIP SUBSCRIBERS

Institutions can subscribe to the Journal through either Institutional Plus Membership, Regular Institutional Membership, or a Library Subscription. Institutional Plus Members receive two print copies of each of the quarterly issues of The UMAP Journal, our annual collection UMAP Modules: Tools for Teaching, our organizational newsletter Consortium, on-line membership that allows members to search our on-line catalog, download COMAP print materials, and reproduce them for use in any class taught in the institution, and a $10\%$ discount on all COMAP materials.

(Domestic) #2070 $395

(Outside U.S.) #2071 $405

# INSTITUTIONAL MEMBERSHIP SUBSCRIBERS

Regular Institutional members receive only print copies of The UMAP Journal, our annual collection UMAP Modules: Tools for Teaching, our organizational newsletter Consortium, and a $10\%$ discount on all COMAP materials.

(Domestic) #2040 $165

(Outside U.S.) #2041 $175

# LIBRARY SUBSCRIPTIONS

The Library Subscription includes quarterly issues of The UMAP Journal, our annual collection UMAP Modules: Tools for Teaching, and our organizational newsletter Consortium.

(Domestic) #2030 $140

(Outside U.S.) #2031 $160

To order, send a check or money order to COMAP, or call toll-free 1-800-77-COMAP (1-800-772-6627).
The UMAP Journal is published quarterly by the Consortium for Mathematics and Its Applications (COMAP), Inc., Suite 210, 57 Bedford Street, Lexington, MA 02420, in cooperation with the American Mathematical Association of Two-Year Colleges (AMATYC), the Mathematical Association of America (MAA), the National Council of Teachers of Mathematics (NCTM), the American Statistical Association (ASA), the Society for Industrial and Applied Mathematics (SIAM), and The Institute for Operations Research and the Management Sciences (INFORMS). The Journal acquaints readers with a wide variety of professional applications of the mathematical sciences and provides a forum for the discussion of new directions in mathematical education (ISSN 0197-3622).

Second-class postage paid at Boston, MA and at additional mailing offices.

Send address changes to:

The UMAP Journal

COMAP, Inc.

57 Bedford Street, Suite 210, Lexington, MA 02420

© Copyright 1997 by COMAP, Inc. All rights reserved.

![](images/9d88313964f3964a6a2498ef6aa48a28a689e4fcd30d3d77773e524277768ced.jpg)

# Vol. 17, No. 3 1996

# Table of Contents

# Publisher's Editorial

New Directions

Solomon A. Garfunkel 185

# Modeling Forum

Results of the 1996 Mathematical Contest in Modeling

Frank Giordano 187

# The Submarine Detection Problem

Gone Fishin'

Douglas Martin, Robert A. Moody, and Woon (Larry) Wong 207

How to Locate a Submarine by Detecting Changes in Ambient Noise

Carl Leitner, Akira Negi, and Katherine Scott 227

Detection of a Silent Submarine from Ambient Noise Fluctuations

Andrew R. Frey, Joseph R. Gagnon, and J. Hunter Tart 241

Imaging Underwater Objects with Ambient Noise

Aron C. Atkins, Henry A. Fink, and Jeffrey D. Spaleta 255

Judge's Commentary: The Outstanding Submarine Detection Papers

John S. Robertson 273

Practitioner's Commentary: The Outstanding Submarine Detection Papers

Michael J.
Buckingham 277 + +# The Contest Judging Problem + +The Paper Selection Scheme Simulation Analysis + +Zheng Yuan Zhu, Jian Liu, and Haonan Tan 283 + +Modeling Better Modeling Judges + +Brian E. Ellis, Chad Hall, and Charles A. Ross 299 + +Judging a Mathematics Contest + +Daniel A. Calderón Brennan, Philip J. Darcy, and David T. Tascione... 309 + +Select the Winners Fast + +Haitao Wang, Chunfeng Huan, and Hongling Rao 317 + +The Inconsistent Judge + +Dan Scholz, Jade Vinson, and Derek Oliver Zaba 329 + +Judge's Commentary: The Outstanding Contest Judging Papers + +Veena B. Mendiratta 337 + +Donald E. Miller 339 + +Contest Director's Commentary: Judging the MCM + +Frank Giordano 341 + +Practitioner's Commentary: Computer Support for the MCM + +Steve Harper 345 + +# Publisher's Editorial + +# New Directions + +Solomon A. Garfunkel + +Executive Director + +COMAP, Inc. + +57 Bedford St., Suite 210 + +Lexington, MA 02173 + +s.garfunkel@mail.comap.com + +Sometimes it seems that things at COMAP never stay the same for very long. The second derivative is always positive. No sooner does one project come to fruition than another seems to heat up and still more project ideas emerge. Case in point: Principles and Practice of Mathematics is now out and in use. The text, COMAP's new first-year course for undergraduate mathematics majors, has been published by Springer-Verlag. Desk copies are available from the publisher, and we will have a major event at the joint MAA-AMS meetings in January to feature the book. Publication of Principles and Practice caps five years of effort by a distinguished author team and advisory board, not to mention the tireless efforts of the project editor, Walter Meyer. + +Moreover, the fourth edition of For All Practical Purposes has just been published by W.H. Freeman and Company. The text has a new four-color design and, as with each new edition, a substantial amount of original material not present in earlier editions. 
+ +Work continues apace on the ARISE Project, our grades 9-11 comprehensive secondary-school mathematics curriculum. All of the project materials are currently undergoing revision for publication in the fall of 1997. In addition, we have recently received funding from the National Science Foundation (NSF) through the STREAM Project to prepare staff-development materials (print, video, and interactive) to support all high-school reform efforts. + +Now, about that second derivative. We have just begun a major new undergraduate initiative: Project Intermath, also funded by NSF. This project is part of a new national effort to institutionalize reform. Our vision is to help establish the interdisciplinary cooperation necessary for designing integrated college-level experiences in mathematics, science, and technology. We and our partners at the U.S. Military Academy at West Point intend to foster continuous coordination among departments presenting mathematics-based curricula. The strategies for Project Intermath include the development and dissemination of Interdisciplinary Lively Applications (ILAP) modules as well as the testing of integrated curriculum models. The director of this project and new addition to the COMAP staff is Brig. Gen. (Ret.) Frank Giordano. Frank will continue to direct the Mathematical Contest in Modeling; and we are grateful + +to have his energy, wisdom, and expertise. + +On a different note, by the time you read this editorial, COMAP'S web site, www.comap.com, will be up and running. Initially, as with most organizations, the Web site will be informational--describing our organization, its projects, and its products. But our plans are more grandiose. We intend to put most, if not all, of COMAP's modular materials on the Web. Our intent is to make it possible for faculty to preview our supplemental materials in order to decide better which modules will fit their course structure. 
As with many other companies that act as publishers, we are still debating the economics of this form of distribution. Stay tuned. + +As exciting as new technologies and new modes of delivery can be, nothing is more exciting than new programmatic ideas. COMAP, with the help and guidance of our president, Uri Treisman, is planning a major new initiative in the area of service. Tentatively titled "Volunteering Our Expertise," this program is designed to encourage and reward college mathematics departments for new projects that serve their community. Importantly, these will be projects in which faculty and students make use of their mathematical expertise to aid local schools, hospitals, churches, and community organizations. COMAP plans to publish reports on these programs and to give annual awards. It is our sincere hope that through these efforts we will foster increased service activities by undergraduate mathematics departments and, not coincidentally, demonstrate to our neighbors the centrality of our discipline. + +This annual guest editorial on COMAP activities, summing up where we are and where we hope to go, is a pleasant task. I would like to thank our editor, Paul Campbell, for this opportunity and for all of his hard work in making The UMAP Journal the fine publication it has become. + +# About the Author + +Sol Garfunkel received his Ph.D. in mathematical logic from the University of Wisconsin in 1967. He was at Cornell University and at the University of Connecticut at Storrs for eleven years and has dedicated the last 20 years to research and development efforts in mathematics education. He has been the Executive Director of COMAP since its inception in 1980. + +He has directed a wide variety of projects, including UMAP (Undergraduate Mathematics and Its Applications Project), which led to the founding of this Journal, and HiMAP (High School Mathematics and Its Applications Project), both funded by the NSF. 
For Annenberg/CPB, he directed three telecourse projects: For All Practical Purposes (in which he appeared as the on-camera host), Against All Odds: Inside Statistics, and In Simplest Terms: College Algebra. He is currently co-director of the Applications Reform in Secondary Education (ARISE) project, a comprehensive curriculum development project for secondary-school mathematics. + +# Modeling Forum + +# Results of the 1996 Mathematical Contest in Modeling + +Frank Giordano, MCM Director + +COMAP, Inc. + +57 Bedford St., Suite 210 + +Lexington, MA 02173 + +f.giordano@mail.comap.com + +FRGiordano@aol.com + +# Introduction + +A total of 393 teams of undergraduates (a $23\%$ increase from 1995!), from 225 schools, spent the second weekend in February working on applied mathematics problems. They were part of the twelfth Mathematical Contest in Modeling (MCM). On Friday morning, the MCM faculty advisor opened a packet and presented each team of three students with a choice of one of two problems. After a weekend of hard work, typed solution papers were mailed to COMAP on Monday. Seven of the top papers appear in this issue of The UMAP Journal. + +Results and winning papers from the first eleven contests were published in special issues of Mathematical Modeling (1985-1987) and The UMAP Journal (1985-1995). The 1994 volume of Tools for Teaching, commemorating the tenth anniversary of the contest, contains all of the 20 problems used in the first ten years of the contest and a winning paper for each. Limited quantities of that volume and of the special MCM issues of the Journal for the last few years are available from COMAP. + +# Problem A: The Submarine Detection Problem + +The world's oceans contain an ambient noise field. Seismic disturbances, surface shipping, and marine mammals are sources that, in different frequency ranges, contribute to this field. 
We wish to consider how this ambient noise might be used to detect large moving objects, e.g., submarines located below the ocean surface. Assuming that a submarine makes no intrinsic noise, develop a method for detecting the presence of a moving submarine, its speed, its size, and + +its direction of travel, using only information obtained by measuring changes to the ambient noise field. Begin with noise at one fixed frequency and amplitude. + +# Problem B: The Contest Judging Problem + +When determining the winner of a competition like the Mathematical Contest in Modeling, there is generally a large number of papers to judge. Let's say that there are $P = 100$ papers. A group of $J$ judges is collected to accomplish the judging. Funding for the contest constrains both the number of judges that can be obtained and the amount of time that they can judge. For example, if $P = 100$ , then $J = 8$ is typical. + +Ideally, each judge would read all papers and rank-order them, but there are too many papers for this. Instead, there are a number of screening rounds in which each judge reads some number of papers and gives them scores. Then some selection scheme is used to reduce the number of papers under consideration: If the papers are rank-ordered, then the bottom $30\%$ that each judge rank-orders could be rejected. Alternatively, if the judges do not rank-order the papers, but instead give them numerical scores (say, from 1 to 100), then all papers falling below some cutoff level could be rejected. + +The new pool of papers is then passed back to the judges, and the process is repeated. A concern is that the total number of papers that each judge reads must be substantially less than $P$ . The process is stopped when there are only $W$ papers left. These are the winners. Typically, for $P = 100$ , we have $W = 3$ . 
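The round-by-round screening process just described can be sketched as a toy simulation. This is a sketch only: the per-judge bias model, the mean-centering correction, and the fraction of papers kept each round are assumptions for illustration, not part of the problem statement; only $P = 100$, $J = 8$, and $W = 3$ come from the problem.

```python
import random

def run_contest(P=100, J=8, W=3, readers_per_paper=2, keep=0.5, seed=1):
    """Toy simulation of a multi-round screening scheme.  Paper p has true
    quality P - p, so the 'best' papers are 0, 1, 2, ...  Each judge carries
    a systematic bias, crudely cancelled by centering every judge's scores
    on that judge's own round average before papers are compared."""
    rng = random.Random(seed)
    biases = [rng.uniform(-10, 10) for _ in range(J)]
    papers = list(range(P))
    reads = [0] * J                       # papers read so far, per judge
    while len(papers) > W:
        raw = {j: {} for j in range(J)}   # judge -> {paper: raw score}
        for p in papers:                  # each paper goes to a few random judges
            for j in rng.sample(range(J), readers_per_paper):
                raw[j][p] = (P - p) + biases[j] + rng.gauss(0, 2)
                reads[j] += 1
        centered = {}                     # paper -> bias-corrected scores
        for j in range(J):
            if raw[j]:
                mean_j = sum(raw[j].values()) / len(raw[j])
                for p, s in raw[j].items():
                    centered.setdefault(p, []).append(s - mean_j)
        papers.sort(key=lambda p: sum(centered[p]) / len(centered[p]), reverse=True)
        papers = papers[:max(W, int(len(papers) * keep))]   # cull the bottom
    return papers, max(reads)

winners, heaviest_load = run_contest()
```

With these assumed settings the rounds shrink the pool 100 → 50 → 25 → 12 → 6 → 3, and the busiest judge reads far fewer than $P$ papers in total, which is the constraint the problem emphasizes.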
+ +Your task is to determine a selection scheme, using a combination of rank-ordering, numerical scoring, and other methods, by which the final $W$ papers will include only papers from among the "best" $2W$ papers. (By "best" we assume that there is an absolute rank-ordering to which all judges would agree.) For example, the top three papers found by your method will consist entirely of papers from among the "best" six papers. Among all such methods, the one that requires each judge to read the least number of papers is desired. + +Note the possibility of systematic bias in a numerical scoring scheme. For example, for a specific collection of papers, one judge could average 70 points, while another could average 80 points. How would you scale your scheme to accommodate for changes in the contest parameters $(P, J, \text{and } W)$ ? + +# The Results + +The solution papers were coded at COMAP headquarters so that names and affiliations of the authors would be unknown to the judges. Each paper was then read preliminarily by two "triage" judges at Carroll College, Montana. At the triage stage, the summary and overall organization are the basis for judging a paper. If the judges' scores diverged for a paper, the judges conferred; if they still did not agree on a score, a third judge evaluated the paper. + +Final judging took place at Harvey Mudd College, Claremont, California. The judges classified the papers as follows: + +
| | Outstanding | Meritorious | Honorable Mention | Successful Participation | Total |
|---|---|---|---|---|---|
| Submarine Detection | 4 | 16 | 37 | 69 | 126 |
| Contest Judging | 5 | 38 | 77 | 147 | 267 |
| Total | 9 | 54 | 114 | 216 | 393 |
+ +The nine papers that the judges designated as Outstanding appear in this special issue of The UMAP Journal, together with commentaries by judges and practitioners. We list those teams and the Meritorious teams (and advisors) below; the list of all participating schools, advisors, and results is in the Appendix. + +# Outstanding Teams + +Institution and Advisor + +Team Members + +# Submarine Detection Papers + +"Gone Fishin' + +Pomona College + +Claremont, CA + +Ami Radunskaya + +Douglas Martin + +Robert A. Moody + +Woon (Larry) Wong + +"How to Locate a Submarine by + +Detecting Changes in the Ambient Noise" + +University of North Carolina + +Chapel Hill, NC + +Ancel C. Mewborn + +Carl Leitner + +Akira Negi + +Katherine Scott + +"Detection of a Silent Submarine from Ambient Noise Field Fluctuations" + +Wake Forest University + +Winston-Salem, NC + +Stephen B. Robinson + +Andrew R. Frey + +Joseph R. Gagnon + +J. Hunter Tart + +"Imaging Underwater Objects with Ambient Noise" + +Worcester Polytechnic Institute + +Worcester, MA + +Arthur C. Heinricher + +Aron C. Atkins + +Henry A. Fink + +Jeffrey D. Spaleta + +# Contest Judging Papers + +"The Paper Selection Scheme Simulation Analysis" + +Fudan University + +Shanghai, China + +Yongji Tan + +Zheng Yuan Zhu + +Jian Liu + +Haonan Tan + +"Modeling Better Modeling Judges" + +Gettysburg College + +Gettysburg, PA + +James P. Fink + +Brian E. Ellis + +Chad Hall + +Charles A. Ross + +"Judging a Mathematics Contest" + +St. Bonaventure University + +St. Bonaventure, NY + +Albert G. White + +Daniel A. + +Calderón Brennan + +Philip J. Darcy + +David T. Tascione + +"Select the Winners Fast" + +University of Science and Technology of China + +Hefei, Anhui, China + +Qingjuan Yu + +Haitao Wang + +Chunfeng Huan + +Hongling Rao + +“The Inconsistent Judge” + +Washington University + +St. 
Louis, MO + +Hiro Mukai + +Dan Scholz + +Jade Vinson + +Derek Oliver Zaba + +# Meritorious Teams + +Submarine Detection Papers (16 teams) + +Beijing Institute of Technology, Beijing, China (Liang Sun) + +Chongqing University, Chongqing, Sichuan, China (Fu Li) + +Duke University, Durham, NC (Richard A. Scoville) + +Eastern Mennonite University, Harrisonburg, VA (John L. Horst) + +Nankai University, Tianjin, China (Wu Qun Huang) + +Ohio State University, Columbus, OH (Dijen Ray-Chaudhuri) + +Rhodes College, Memphis, TN (David A. Feil) + +Southeast University, Nanjing, China (Huang Jun) + +Southeast University, Nanjing, China (Zhizhong Sun) + +Trinity University, San Antonio, TX (Diane G. Saphire) + +Tsinghua University, Beijing, China (Celi Gao) + +University of North Florida, Jacksonville, FL (Peter A. Braza) + +University of Northern Iowa, Cedar Falls, IA (Gregory M. Dotseth) + +University of Science and Technology of China, Hefei, Anhui, China (Jixin Cheng) + +Xiangtan University, Xiangtan, Hunan, China (Zhou Yong) + +Zhongshan University, Guangzhou, China (Ren Shu Chen) + +# Contest Judging Papers (38 teams) + +Abilene Christian University, Abilene, TX (Thomas D. Hendricks) + +Bellarmine College, Louisville, KY (John A. Oppelt) + +California Polytechnic State University, San Luis Obispo, CA (Thomas O'Neil) + +Colorado College, Colorado Springs, CO (Deborah P. Levinson) + +East China University of Science and Technology, Shanghai, China (Sanbao Xu) + +Eastern Oregon State College, LaGrande, OR (Mark R. Parker) + +Harvard University, Cambridge, MA (Howard Georgi) + +Harvey Mudd College, Claremont, CA (David L. Bosley) + +Information & Engineering Institute, Zhengzhou, Henan, China (Hongwei Duan) + +Kenyon College, Gambier, OH (Dana N. Mackenzie) + +Lewis & Clark College, Portland, OR (Harvey Schmidt, Jr.) + +Luther College, Decorah, IA (Reginald D. Laursen) + +Macalester College, St. Paul, MN (Karla V. Ballman) + +Messiah College, Grantham, PA (Douglas C. 
Phillippy) + +Mt. St. Mary's College, Emmitsburg, MD (Fred J. Portier) + +Mt. St. Mary's College, Emmitsburg, MD (Theresa A. Francis) + +North Carolina School of Science and Mathematics, Durham, NC (Dot Doyle) + +National University of Defence Technology, Chang Sha, Hunan, China (MengDa Wu) + +New Mexico State University, Las Cruces, NM (Caroline Sweezy) + +Northern Arizona University, Flagstaff, AZ (Terence R. Blows) + +Rose-Hulman Institute of Technology, Terre Haute, IN (Aaron D. Klebanoff) + +South China University of Technology, Guangzhou, China (Lejun Xie) + +Southern Connecticut State University, New Haven, CT (Ross B. Gingrich) + +University College Cork, Cork Ireland (J.B. Twomey) + +University College Galway, Galway Ireland (Patrick M. O'Leary) + +University of Alaska Fairbanks, Fairbanks, AK (John P. Lambert) + +University of Dayton, Dayton, OH (Thomas E. Gantner) + +University of Latvia, Riga, Latvia (Andris B. Cibulis) + +University of Massachusetts-Amherst, Amherst, MA (Edward A. Connors) + +University of Missouri-Rolla, Rolla, MO (Michael G. Hilgers) + +University of Toronto, Toronto, Ontario, Canada (James G.C. Templeton) + +University of Utah, Salt Lake City, UT (Don H. Tucker) + +University of Wisconsin-Madison, Madison, WI (Anatole Beck) + +University of Wisconsin-Platteville, Platteville, WI (John A. Krogman) + +Western Washington University, Bellingham, WA (Tjalling J. Ypma) + +Xidian University, Xian, Shaanxi, China (Mao Yong Cai) + +Youngstown State University, Youngstown, OH (J. Douglas Faires) + +Zhejiang University, Hangzhou, China (Daoyuan Fang) + +# Awards and Contributions + +Each participating MCM advisor and team member received a certificate signed by the Contest Director and the appropriate Head Judge. + +INFORMS, the Institute for Operations Research and the Management Sci + +ences, awarded to each member of two Outstanding teams a cash award and a three-year membership. 
The teams were from Gettysburg College (Contest Judging Problem) and Washington University (Contest Judging Problem). Moreover, INFORMS gave free one-year memberships to all members of Meritorious and Honorable Mention teams. + +The Society for Industrial and Applied Mathematics (SIAM) designated one Outstanding team from each problem as a SIAM Winner. Each team member received a cash prize. The teams were from Pomona College (Submarine Detection Problem) and from St. Bonaventure University (Contest Judging Problem). They gave presentations at a special session at the July SIAM Annual Meeting in Kansas City, MO. + +The Mathematical Association of America designated one Outstanding team as an MAA Winner. The team was from Pomona College (Submarine Detection Problem). + +# Judging + +Director + +Frank R. Giordano, Dept. of Mathematics, Carroll College, Helena, MT + +Associate Directors + +Chris Arney, Dept. of Mathematical Sciences, U.S. Military Academy, West Point, NY + +Robert L. Borrelli, Mathematics Dept., Harvey Mudd College, Claremont, CA + +# Submarine Detection Problem + +Head Judge + +Marvin S. Keener, Mathematics Dept., Oklahoma State University, Stillwater, OK + +Associate Judges + +Courtney Coleman, Mathematics Dept., Harvey Mudd College, Claremont, CA + +Patrick Driscoll, Mathematics Dept., Virginia Polytechnic Institute and State University, Blacksburg, VA + +Ben A. Fusaro, Dept. of Mathematical Sciences, Florida State University, Tallahassee, FL + +Mario Juncosa, RAND Corporation, Santa Monica, CA + +Daphne Liu, Dept. of Mathematics and Computer Science, California State University Los Angeles, Los Angeles, CA + +Jack Robertson, Mathematics Dept., Georgia College, Milledgeville, GA + +Lee Seitelman, Glastonbury, CT + +John L. Scharf, Carroll College, Helena, MT + +Theodore H. 
Sweetser III, Jet Propulsion Lab, Pasadena, CA

Daniel Zwillinger, Zwillinger & Associates, Arlington, MA

# Contest Judging Problem

Head Judge

Maynard Thompson, Mathematics Dept., Indiana University, Bloomington, IN

Associate Judges

Karen Bolinger, Mathematics Dept., Arkansas State University, State University, AR

James Case, Baltimore, MD

Alessandra Chiareli, Computational Science Center, 3M, St. Paul, MN

William Fox, Dept. of Mathematical Sciences, U.S. Military Academy, West Point, NY

Ted Forsman, Onward, Inc.

Jerry Griggs, University of South Carolina, Columbia, SC

John Kobza, Virginia Polytechnic Institute and State University, Blacksburg, VA

Veena Mendiratta, Lucent Technologies, Naperville, IL

Don Miller, Dept. of Mathematics, St. Mary's College, Notre Dame, IN

Keith Miller, National Security Agency, Fort Meade, MD

Peter Olsen, National Security Agency, Fort Meade, MD

Theresa Sandifer, Mathematics Dept., Southern Connecticut State University, New Haven, CT

Robert M. Tardiff, Dept. of Mathematical Sciences,

Michael Tortorella, Lucent Technologies, NJ

Marie Vanisko, Carroll College, Helena, MT

Triage Session (all judges from Carroll College, Helena, MT)

Director

Frank Giordano

Head Judge, Submarine Detection Problem

John L. Scharf

Head Judge, Contest Judging Problem

Marie Vanisko

Associate Judges

Peter Biskis

Philip LaRue

Terry Mullen

Jack Oberweiser

Philip Rose

Anthony M. Szpilka

# Sources of the Problems

Both the Submarine Detection Problem and the Contest Judging Problem were contributed by Daniel Zwillinger, Zwillinger & Associates, Arlington, MA.

# Acknowledgments

The MCM was funded this year by the National Security Agency, whose support we deeply appreciate. We thank Dr. Gene Berg of NSA for his coordinating efforts. The MCM is also indebted to INFORMS, SIAM, and the MAA, which provided judges and prizes.
+ +I thank the MCM judges and MCM Board members for their valuable and unflagging efforts. Harvey Mudd College, its Mathematics Dept. staff, and Prof. Borrelli were gracious hosts to the judges. + +# Cautions + +To the reader of research journals: + +Usually a published paper has been presented to an audience, shown to colleagues, rewritten, checked by referees, revised, and edited by a journal editor. Each of the student papers here is the result of undergraduates working on a problem over a weekend; allowing substantial revision by the authors could give a false impression of accomplishment. So these papers are essentially au naturel. Light editing has taken place: minor errors have been corrected, wording has been altered for clarity or economy, and style has been adjusted to that of The UMAP Journal. Please peruse these student efforts in that context. + +To the potential MCM Advisor: + +It might be overpowering to encounter such output from a weekend of work by a small team of undergraduates, but these solution papers are highly atypical. A team that prepares and participates will have an enriching learning experience, independent of what any other team does. + +# Appendix: Successful Participants + +KEY: + +$\mathbf{P} =$ Successful Participation + +H = Honorable Mention + +$\mathbf{M} =$ Meritorious + +$\mathrm{O} =$ Outstanding (published in this special issue) +A = Submarine Detection Problem +B = Contest Judging Problem + +
| INSTITUTION | CITY | ADVISOR | RESULTS |
|---|---|---|---|
| **ALABAMA** | | | |
| Birmingham-Southern Coll. | Birmingham | Raju Sriram | P |
| University of Alabama | Huntsville | Claudio H. Morales | P |
| **ALASKA** | | | |
| Univ. of Alaska Fairbanks | Fairbanks | John P. Lambert | M, H |
| **ARIZONA** | | | |
| Northern Arizona Univ. | Flagstaff | Terence R. Blows | M, P |
| **ARKANSAS** | | | |
| Hendrix College | Conway | Ze'ev Barel | H, P |
| Williams Baptist College | Walnut Ridge | Lana S. Rhoads | P |
| | | Michael Milligan | P |
| **CALIFORNIA** | | | |
| Calif. Inst. of Technology | Pasadena | Richard M. Wilson | P |
| Calif. Poly. State Univ. | San Luis Obispo | Thomas O'Neil | M, P |
| Calif. State University | Bakersfield | Maureen E. Rush | H, P |
| | Northridge | Gholam-Ali Zakeri | P |
| Harvey Mudd College | Claremont | David L. Bosley | P, M |
| Humboldt State Univ. | Arcata | Kathleen M. Crowe | P |
| Pomona College | Claremont | Ami Radunskaya | O |
| Sonoma State University | Rohnert Park | Clement E. Falbo | P |
| **COLORADO** | | | |
| Colorado College | Colorado Springs | Deborah P. Levinson | M |
| Metro. State College | Denver | Thomas E. Kelley | H |
| Trinidad State Jr. College | Trinidad | A. Philbin | H, P |
| U.S. Air Force Academy | USAF Academy | Scott G. Frickenstein | H |
| | | Jonathan D. Robinson | H |
| U. of Northern Colorado | Greeley | Donald D. Elliott | H |
| U. of Southern Colorado | Pueblo | Bruce N. Lundberg | P |
| **CONNECTICUT** | | | |
| Sacred Heart University | Fairfield | Antonio A. Magliaro | P |
| Southern Connecticut State Univ. | New Haven | Ross B. Gingrich | M |
| Western Connecticut State Univ. | Danbury | Judith A. Grandahl | H |
| | | Edward Sandifer | H |
| **DISTRICT OF COLUMBIA** | | | |
| Georgetown University | Washington | Andrew Vogt | P, H |
| George Washington University | Washington | Daniel H. Ullman | P |
| Trinity College | Washington | Suzanne E. Sands | H |
| **FLORIDA** | | | |
| Florida A&M University | Tallahassee | Bruno Guerrieri | P |
| Florida Inst. of Technology | Melbourne | Laurene V. Fausett | P |
| Florida State University | Tallahassee | Hong Wen | H |
| Jacksonville University | Jacksonville | Robert A. Hollister | P |
| Stetson University | Deland | Lisa O. Coulter | H |
| University of North Florida | Jacksonville | Peter A. Braza | M |
| Univ. of South Florida-Fort Myers | Fort Myers | Charles E. Lindsey | P, P |
| **GEORGIA** | | | |
| Georgia College | Milledgeville | Craig Turner | P |
| **ILLINOIS** | | | |
| Greenville College | Greenville | Galen R. Peters | P |
| Illinois Wesleyan University | Bloomington | Zahia Drici | P |
| Morton College | Cicero | Elaine B. Pavelka | P |
| Northern Illinois University | Dekalb | Hamid Bellout | H |
| | | Linda R. Sons | P |
| Olivet Nazarene University | Bourbonnais | Dale K. Hathaway | H, P |
| Wheaton College | Wheaton | Paul Isihara | BH |
| **INDIANA** | | | |
| Indiana University | Bloomington | James F. Davis | H, H |
| Rose-Hulman Inst. of Tech. | Terre Haute | Aaron D. Klebanoff | M |
| Saint Mary's College | Notre Dame | Peter D. Smith | P, H |
| **IOWA** | | | |
| Clarke College | Dubuque | Carol A. Spiegel | P, P |
| Drake University | Des Moines | Alexander F. Kleiner | P |
| Graceland College | Lamoni | Steve K. Murdock | H |
| Grinnell College | Grinnell | Thomas L. Moore | H, H |
| Luther College | Decorah | Reginald D. Laursen | M |
| Univ. of Northern Iowa | Cedar Falls | Gregory M. Dotseth | M |
| | | Timothy L. Hardy | P |
| **KENTUCKY** | | | |
| Asbury College | Wilmore | Kenneth P. Rietz | P, P |
| Bellarmine College | Louisville | John A. Oppelt | M, P |
| Univ. of Louisville | Louisville | Adel S. Elmaghraby | P |
| Western Kentucky Univ. | Bowling Green | Douglas D. Mooney | P |
| **LOUISIANA** | | | |
| Louisiana State Univ. | Shreveport | Robert J. Fraga | P |
| McNeese State Univ. | Lake Charles | Sid L. Bradley | P |
| Northwestern St. Univ. | Natchitoches | Lisa R. Galminas | P, P |
| **MAINE** | | | |
| Colby College | Waterville | Amy H. Boyd | P, P |
| **MARYLAND** | | | |
| Frostburg State University | Frostburg | Kurtis H. Lemmert | H, P |
| Goucher College | Baltimore | Robert E. Lewand | P |
| Hood College | Frederick | John Boon | P |
| | | Betty Mayfield | P |
| Loyola College | Baltimore | George B. Mackiw | P |
| | | William D. Reddy | H |
| Mt. St. Mary's College | Emmitsburg | Theresa A. Francis | M |
| | | Fred J. Portier | M |
| Salisbury State University | Salisbury | Steven M. Hetzler | P |
| | | Kathleen M. Shannon | P |
| **MASSACHUSETTS** | | | |
| Fitchburg State College | Fitchburg | Richard C. Bisk | P |
| Harvard University | Cambridge | Howard Georgi | M |
| Smith College | Northampton | Ruth Haas | H, P |
| Univ. of Massachusetts | Amherst | Edward A. Connors | M, H |
| Worcester Poly. Inst. | Worcester | Arthur C. Heinricher | O |
| | | Bogdan Vernescu | P |
| **MICHIGAN** | | | |
| Calvin College | Grand Rapids | Thomas L. Jager | H |
| Eastern Michigan Univ. | Ypsilanti | Christopher E. Hee | H, P |
| Lawrence Tech. Univ. | Southfield | Ruth G. Favro | P |
| | | Howard Whitston | H |
| Siena Heights College | Adrian | Toni Carroll | P |
| University of Michigan | Dearborn | Jennifer Zhao | H |
| **MINNESOTA** | | | |
| Bethany Lutheran College | Mankato | Julie M. Kjeer | H |
| Macalester College | St. Paul | Karla V. Ballman | M |
| | | Daniel A. Schwalbe | H |
| University of Minnesota | Duluth | Zhuangyi Liu | P |
| | | Bruce B. Peckham | H |
| **MISSISSIPPI** | | | |
| Belhaven College | Jackson | Robert A. Jones | P |
| | | Janie Smith | P |
| Jackson State University | Jackson | David C. Bramlett | P |
| **MISSOURI** | | | |
| College of the Ozarks | Point Lookout | Albert T. Dixon | P, P |
| Missouri Southern St. Coll. | Joplin | Patrick Cassens | H, H |
Northeast Missouri St. U.KirksvilleSteven J. SmithH,P
Northwest Missouri St. U.MaryvilleRussell N. EulerP
Southeast Missouri St. U.Cape GirardeauRobert W. SheetsH,H
University of MissouriRollaMichael G. HilgersM
Washington UniversitySt. LouisHiro MukaiPO
MONTANA
Carroll CollegeHelenaTerence J. MullenH
Anthony M. SzpilkaP
NEBRASKA
Hastings CollegeHastingsDavid B. CookeH
Nebraska Wesleyan Univ.LincolnP. Gavin LaRoseH
NEVADA
Sierra Nevada CollegeIncline VillageSue WelschP
University of NevadaRenoMark M. MeerschaertP
NEW JERSEY
Camden County CollegeBlackwoodAllison SuttonP
New Jersey Inst. of Tech.NewarkBruce G. BukietP
Rutgers UniversityNewarkLee MosherP
NEW MEXICO
New Mexico State Univ.Las CrucesMarcus S. CohenH
Caroline SweezyM
New Mexico Inst. of Mining and Tech.SocorroBrian T. BorchersP
NEW YORK
Canisius CollegeBuffaloL. Christine KinseyP
City College of CUNYNew YorkIzidor GertnerP
George WolbergH
Colgate UniversityHamiltonThomas W. TuckerH
Hofstra UniversityHempsteadRaymond N. GreenwellP,P
Ithaca CollegeIthacaJohn C. MaceliH
Osman YurekliP
Manhattanville CollegePurchaseEdward SchwartzH
Nazareth CollegeRochesterRonald W. JorgensenH
Queens College/CUNYFlushingAri GrossH
Siena CollegeLoudonvilleThomas H. RousseauH,P
St. Bonaventure Univ.St. BonaventureFrancis C. LearyP
Albert G. WhiteO
U.S. Military AcademyWest PointDouglas L. Bentley, Jr.H
Timothy M. PetitP
Westchester Comm. Coll.ValhallaRowan LindleyP,P,P
Yeshiva CollegeNew YorkThomas H. OtwayP
NORTH CAROLINA
Appalachian State Univ.BooneTerry G. AndersonH
Alan T. ArnhaltP
Holly P. HirstP
Duke UniversityDurhamRichard A. ScovilleM
N.C. School of Sci. & Math.DurhamDot DoyleM
John KolenaH
Daniel J. TeagueP
North Carolina St. Univ.RaleighJohn BishirH
Salem CollegeWinston-SalemDebbie L. HarrellP
Craig J. RichardsonP
Paula G. YoungP
Univ. of North CarolinaChapel HillAncel C. MewbornO
WilmingtonRussell L. HermanP
Wake Forest UniversityWinston-SalemStephen B. RobinsonO
Western Carolina Univ.CullowheeJeff A. GrahamH
NORTH DAKOTA
Univ. of North DakotaWillistonWanda M. MeyerP
OHIO
College of WoosterWoosterMatt BrahmP
Hiram CollegeHiramJames R. CaseH
Michael A. GrajekP
Kenyon CollegeGambierDana N. MackenzieM,H
Miami UniversityOxfordDouglas E. WardP
Ohio State UniversityColumbusDjen Ray-ChaudhuriMP
University of DaytonDaytonThomas E. GantnerM
Xavier UniversityCincinnatiRichard J. PulskampH,P
Youngstown St. Univ.YoungstownJ. Douglas FairesHM
OKLAHOMA
Oklahoma State Univ.StillwaterJohn E. WolfeH
Southeastern Okla. St. U.DurantJohn M. McArthurH
OREGON
Eastern Oregon St. Coll.LaGrandeMark R. ParkerM
Holly S. ZulloP
Lewis & Clark CollegePortlandHarvey Schmidt, Jr.M
Southern Oregon St. Coll.AshlandKemble R. YatesH
PENNSYLVANIA
Bloomsburg UniversityBloomsburgScott E. InchP
Chatham CollegePittsburghAngela A. FishmanP
Franklin & Marshall Coll.LancasterTim C. HesterbergP
Gannon UniversityErieThomas M. McDonaldP
Gettysburg CollegeGettysburgJames P. FinkO
Lafayette CollegeEastonThomas HillH
Messiah CollegeGranthamDouglas C. PhillippyM
Muhlenberg CollegeAllentownDavid A. NelsonP
Susquehanna UniversitySelinsgroveKenneth A. BrakkeP
SOUTH CAROLINA
Central Carolina Tech. Coll.SumterKaren G. McLaurinPP
Coastal Carolina Univ.ConwayPrashant S. SansgiryP
Columbia CollegeColumbiaScott A. SmithP
SOUTH DAKOTA
Northern State UniversityAberdeenA. S. ElkhaderH,P
TENNESSEE
Austin Peay St. Univ.ClarksvilleMark C. GinnH
David Lipscomb Univ.NashvilleGary C. HallP
Mark A. MillerP
Rhodes CollegeMemphisDavid A. FeilM
TEXAS
Abilene Christian Univ.AbileneThomas D. HendricksM
Baylor UniversityWacoFrank H. MathisH
Rice UniversityHoustonDouglas W. MooreH
Southwestern UniversityGeorgetownTherese N. SheltonP
Texas A & M UniversityCollege StationDenise E. KirschnerP
Trinity UniversitySan AntonioDiane G. SaphireMP
University of DallasIrvingCharles A. CoppinP
IrvingEdward P. WilsonP
U. of Texas-Pan AmericanEdinburgRoger A. KnobelPP
U. of Texas-Permian BasinOdessaMarcin PaprzyckiP
UTAH
University of UtahSalt Lake CityDon H. TuckerM,H
VERMONT
Johnson State CollegeJohnsonGlenn D. SproulH
Norwich UniversityNorthfieldLeonard C. GamblerP
VIRGINIA
College of William & MaryWilliamsburgLarry M. LeemisH
Hugo J. WoerdemanP
Eastern Mennonite Univ.HarrisonburgJohn L. HorstM
James Madison Univ.HarrisonburgJames S. SochackiP
Roanoke CollegeSalemRoland B. MintonP
University of RichmondRichmondKathy W. HokePP
WASHINGTON
Pacific Lutheran Univ.TacomaRachid BenkhaltiH,H
Univ. of Puget SoundTacomaRobert A. BeezerH
Martin JacksonH
Andrew F. RexH
Western Washington U.BellinghamTjalling J. YpmaPM
WISCONSIN
Beloit CollegeBeloitPhilip D. StraffinH,H
Northcentral Tech. Coll.WausauFrank J. FernandesP
Robert J. HenningP,P
Northland CollegeAshlandNicholas C. BystromP
St. Norbert CollegeDe PereJohn A. FrohligerP
Univ. of WisconsinGreen BayNikitas L. PetrakopoulosP
MadisonAnatole BeckM
OshkoshK.L.D. GunawardenaP
PlattevilleClement T. JeskeH
John A. KrogmanM
Sherrie NicolP
Stevens PointNorm D. CuretPH
Wisc. Lutheran Coll.MilwaukeeMarvin C. PapenfussP
AUSTRALIA
U. of South QueenslandToowoomba, Qld.Christopher J. HarmanP
CANADA
University of CalgaryCalgary, Alb.David R. WestbrookH,P
Univ. of SaskatchewanSaskatoon, Sask.James A. BrookeH
University of TorontoToronto, Ont.James G. C. TempletonM
York UniversityNorth York, Ont.Jianhong WuH,H
CHINA
Auto. Eng. Coll. of Beijing Union U.BeijingWang Xin FengP
Ren Kai LongH
Beijing Inst. of Tech.BeijingLiang SunMP
Beijing Normal Univ.BeijingLaifu LiuP
Wenyi ZengP
Beijing Univ. of Aero. & Astro.BeijingWeiguo LiP
Ling MaHP
Wei Guo LiP
Beijing Univ. of Post & Tel. Chongqing UniversityBeijingChongqing, SichuanShou-shan LuoFu LiGong ChuQiongsen LiuShanqiang RenHong-Quan YuLi-Zhong ZhaoMing-Feng HeP MP
Dalian Univ. of TechnologyDalian, LiaoningXiwen LuYuanhong LuZhen-Dong YuanSanbao XuP H
E. China Normal Univ.ShanghaiYuan CaoP
E. China U. of Sci. & Tech.ShanghaiXiangji TanWei Liao YouH
Fudan UniversityShanghaiYang SongYunfei YaoPP
FuYang Teachers CollegeFuYang, AnhuiZhenbin GaoJihong ShenP
Harbin Engineering Univ.Harbin, HeilongjiangXiaowei ZhangPeilin ShiH
Harbin Inst. of Tech.Harbin, HeilongjiangChen Dong YanP
Harbin U. of Sci. & Tech.Harbin, HeilongjiangXueqiao DuP
Hefei Univ. of Tech.Hefei, AnhuiYongwu ZhouYoudu HuangP
Hohai UniversityNanjingGen-Hong DingRu-Yun WangH
Huazhong U. of Sci. & Tech.Wuhan, HubeiWuwen GuoHuan QiNanzhong HeXiaoyang ZhouH P
Info. & Eng'ng. InstituteZhengzhou, HenanHongwei DuanZhonggeng HanH P
Inst. of Elec. & Auto. Eng. of Beijing Union Univ.BeijingZifa WangP
Jilin Inst. of TechnologyChangchun, JilinXiaogang DongYunhui XuH H
Jilin UniversityChangchun, JilinZhenghua LinXianyui LuP P
Jinan UniversityGuangzhou, GuangdongYe Shi QiYijun ZengHuai Ping ZhuP P
Nanjing Normal Univ.Nanjing, JiangsuHuai Ping ZhuP
Nanjing U. of Sci. & Tech.NanjingLong Sheng ChengH
Xiong Ping QianP
Xin Min WuP
Chong Gao ZhaoP
Nankai UniversityTianjinTian Ping YeH
Qun Huang WuM
Xia Sheng WangH
Xing Wei ZhouP
National U. of Defence Tech.Chang Sha, HunanMengDa WuM
Yi WuH
Peking UniversityBeijingChong-shi WuH
Qiao-jun GaoH
Sheng HuangP,P
Shandong UniversityJinanBaogang XuH
Guijuie QiP
Shanghai Jiaotong UniversityShanghaiWensong ChuP
Zhihe ChenP
Longwan XiangP
Gang ZhouP
Shanghai Normal Univ.ShanghaiJiaxiang XiangP,P
South China Univ. of Tech.Guangzhou, GuangdongFengfeng ZhuH
Hongzuo FuH
Lejun XieM
Zhihua CanP
Southeast UniversityNanjing, JiangsuJun HuangM
Jianming DengP
Zhizhong SunM
Daoyuan ZhuP
Tsinghua UniversityBeijingBinheng SongHP
Celi GaoM
Jin-Xing XieP
U. of Sci. & Tech. BeijingBeijingXiaoming YangP
U. of Sci. & Tech. of ChinaHefei, AnhuiJixin ChengM
Yu FengP
Zhi ZhouP
Qingjuan YuO
Xian Mining InstituteXian, ShaanxiPanZhu WeiP
Xiangtan UniversityXiangtan, HunanYangjin ChengH
Zhou YongM
Xidian UniversityXian, ShaanxiHu Yu PuP
Ma Jian FengH
Mao Yong CaiM
Yu Ping WangP
Zhejiang UniversityHangzhouDaoyuan FangPM
Huixiang GongHP
Zhongshan UniversityGuangzhou, GuangdongRen Shu ChenM
Liu Jun XuP
HONG KONG
Hong Kong Baptist Univ.Kowloon TongLi Zhi LiaoH
Wai Chee ShiuP
IRELAND
Trinity College DublinDublinTimothy G. MurphyH
James C. SextonP
Univ. College CorkCorkPatrick FitzpatrickH
J. B. TwomeyM
Univ. College GalwayGalwayPatrick M. O’LearyM
Michael P. TuiteP
LATVIA
University of LatviaRigaAndris B. CibulisM
LITHUANIA
Vilnius UniversityVilniusRicardas KudzmaH
Algirdas ZabulionisP
MEXICO
Univ. Autón. de YucatánMerida, YucatánMaría C. Fuente-FlorenciaP
The editor wishes to thank Lay May Yeap of Beloit College for her help with Chinese names.

# Gone Fishin'

Douglas Martin

Robert A. Moody

Woon (Larry) Wong

Pomona College

Claremont, CA 91711

Advisor: Ami Radunskaya

# Abstract

We develop a method to measure the ambient field, using directional transducers and programmable electronic delay circuitry, so that we can determine the ambient sound pressure for a given cell in three-dimensional space.

We discuss the expected observations from the interaction of the target object and the ambient field, and the associated limitations of our method.

We conducted a simple experiment that verified that our approach worked in an anechoic environment (outdoors), and we argue that the results should generalize to the underwater environment.

We show how our method produces data sufficient to reconstruct object size, position, and velocity, and we present the algorithms required to accomplish this reconstruction. We discuss the advantages and problems associated with various reference frequencies, and methods to optimize the technique. Finally, we present a model of the anticipated characteristic performance of our method.

# Ambient Sounds

(All of the background information in this section comes from Naval Sea Systems Command [1984].)

Ambient noise in the ocean can be classified into background or continuous noise, which is present for extended durations of time, and intermittent sounds or noises generated at random intervals for essentially random durations.

The background consists primarily of wind-based noises, noises from commercial shipping, and other similar human-made sources. Some seismic noise is present, particularly at lower frequencies.

Intermittent noises can be further divided into biological and nonbiological. Multi-watt whale sounds range from low-frequency moans to high-frequency clicks and span a frequency range of at least $2\mathrm{Hz} - 60\mathrm{kHz}$.
Intermittent nonbiological noises include rain, earthquakes, explosions, and volcanoes. Although there is some additional noise generated by the surf, the poor propagation characteristics of the shallow water in which such noise is generated cause a rapid decay; thus these sounds do not appear to contribute heavily to the ambient field in the deep sea. The contributions of rain, earthquakes, and volcanoes are continuous enough for us to group them under the general heading “background noise,” since they can reasonably be expected to have periods vastly larger than our scanning period. Explosions are, from our perspective, an overwhelming problem.

Studies have confirmed both the directionality and spatial coherence characteristics of ambient sound, both background and intermittent [Urick 1983]. Shipping noise, which is primarily in the $100\mathrm{Hz}$ range, tends to travel horizontally and at low angles. Wind noise, dominant in the higher frequencies ($1,000\mathrm{Hz}$ and above), travels over relatively direct paths and tends to arrive at angles between $45^{\circ}$ and $80^{\circ}$ relative to the surface.

# Sonar

Active sonar involves generating sound and determining the bearing and distance of the target object by measuring the time that it takes the echo to return to elements of the sensor array. In passive sonar, the target is the source of the detected sound. In both cases, the detected sound is effectively radiated from a point source, the target. The parameters that relate to the efficacy of passive sonar are [Urick 1983]:

SL, source level;

DI, receiving directivity index;

DT, detection threshold;

TL, transmission loss attributable to the medium; and

NL, ambient noise level.

These are related through the passive sonar equation:

$$
\mathrm{SL} - \mathrm{TL} = \mathrm{NL} - \mathrm{DI} + \mathrm{DT}.
$$

Since we are detecting the noise directly and attempting to measure the absence of noise along a particular directional axis, we reverse the SL and NL terms. Thus, we attempt to locate the loud fields and thereby detect the lower-power fields via elimination. Since we predict extremely small deviations in the source and noise levels, we depend quite heavily on the directivity index as the parameter that enables us to construct an effective system.

# Acoustic Impact of Submarine

At low speeds or at rest, the submarine will look like a hole in the noise field. Since the ambient field is homogeneous (that is, of the same intensity from any direction), the submarine appears as an absorber. It reflects sound waves; sound coming toward the observer from the other side of the submarine is deflected away from the observer, while sound coming from the same side as the observer is reflected towards the observer. Both these sound waves have the same intensity, so the net effect is that the submarine will look the same as the ambient field but reduced by a reflection coefficient. Similar considerations apply to sound coming at off-axis angles. See Figure 1, in which $A$, $B$, $C$, and $D$ are incident ambient sound waves of equal intensity, with $aA$, $aB$, $aC$, and $aD$ being their reflections from the submarine, for reflection coefficient $a$. Thus, the submarine masks $A$ and $B$ from the detector but reflects $aC$ and $aD$ toward the detector. Since each wave is of equal intensity, the effect of the submarine is to change the ambient field coming from behind it by a factor of $a$.

![](images/93cff8694ee2e870a0a98a1f514721e8e13aca0789748efad9bb77f73f2b9242.jpg)
Figure 1. Effect of a submarine on ambient noise.
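The masking effect in Figure 1 can be checked numerically. A minimal sketch of our own (Python is our illustration, not part of the paper's apparatus); the reflection coefficient $a$ is treated here as a given parameter, and the sample values are the ones derived in the next section:

```python
import math

def masking_deficit_db(a: float) -> float:
    """Decibel drop of the ambient field seen behind the target,
    given reflection coefficient a (0 < a <= 1): -20 log10(a)."""
    return -20.0 * math.log10(a)

# Sample values discussed in the text: bare steel (a ~ 0.854)
# and a damped, more realistic hull (a ~ 0.1).
for a in (0.854, 0.1):
    print(f"a = {a:5.3f} -> deficit = {masking_deficit_db(a):.2f} dB")
```

For $a = 0.854$ this reproduces the $1.37\mathrm{dB}$ figure quoted below, and for $a = 0.1$ the $20\mathrm{dB}$ figure.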
The formula for the reflection coefficient for a liquid-liquid boundary is

$$
a = \left( \frac{R_{2} - R_{1}}{R_{2} + R_{1}} \right)^{2},
$$

where $R_{2}$ and $R_{1}$ are the acoustic impedances of the reflecting medium and the incident medium [Tucker and Gazey 1966, 91]. The liquid-solid reflection coefficient is far more complicated and, for our purposes, not significantly different. For the case of steel and water, we have $R_{2} = 3,900,000$ and $R_{1} = 154,000$ [Horton 1957, 26], which give $a = .854$. However, practical submarine designs minimize $a$ to avert sonar detection; so a more likely value of $a$ is probably about .1.

We can predict the difference in the ambient field from the submarine's presence. For ambient field intensity $I_{a}$, the reflected intensity is $I_{r} = aI_{a}$. So the difference in decibels between the reflected intensity and the ambient intensity is

$$
20 \log I_{a} - 20 \log (a I_{a}) = 20 \log I_{a} - 20 (\log a + \log I_{a}) = -20 \log a.
$$

For a steel submarine with no noise reflection damping, the difference is $1.37\mathrm{dB}$; for a more realistic submarine, the difference is $20\mathrm{dB}$; and for the perfectly noiseless submarine, the difference is infinite in dB, which corresponds to the entire ambient noise level (that is, for an ambient noise level of $60\mathrm{dB}$, the difference between the submarine and the ambient noise level is also $60\mathrm{dB}$). Using a hydrophone (underwater microphone) array, we can detect the lower level of the ambient noise field and know that a submarine is there.

# Microphones, Hydrophones, and Other Transducers

Most of the literature about hydrophones implies their use in linear arrays, generally either bottom-fixed or fixed to a vertical line that is stretched by a weight affixed to the sunken end of the line.
Sonar receptors are mounted in either circular or cylindrical arrays, with bearing determined by sound arrival times. In both of these general cases, the hydrophones used are omnidirectional, an undesirable feature for our purposes. Albers [1965] and Horton [1957] describe methods to determine source location using either linear or planar arrays of omnidirectional hydrophones or hydrophones with cardioid response patterns.

There are three alternative techniques for limiting microphone pickup patterns that are vastly superior to reliance on a simple cardioid pattern.

- The "tuned port" microphone tube (see Figure 2) cancels off-axis sound by allowing it to arrive at the microphone element via several separate paths, slightly out of phase with each other, so that the off-axis sound interferes with itself, thus canceling the unwanted sound.

![](images/2c672f677fa1947a4be8a4d14f024b1b218e4eab5a635ea128834210e42cbb58.jpg)
Figure 2. Shotgun tuned port microphone.

- A parabolic reflector selectively increases the impact of head-on waves with respect to the microphone element. The parabolic reflector involves several problems with water flow that we are not prepared to address.

The frequency limitations of both techniques present tradeoffs among various aspects of the model. Higher frequencies are absorbed at a greater rate in water (or any other medium), and as a result, the effective distance of the method will be adversely affected by choosing higher frequencies. Conversely, lower frequencies are less directional, have longer wavelengths (and so are less selective), and are much more difficult to deal with when attempting to decrease the beamwidth of the sound pickup pattern.

- "Pressure zone microphones" (PZM), also known as boundary microphones, are basically condenser microphone elements located close to a planar reflective boundary.
Increasing the size of the boundary decreases the frequency at which the microphone becomes directional, in accordance with the formula $F = 188 / D$, where $F$ is the frequency at which the microphone becomes directional and $D$ is the boundary dimension in feet (assuming a square boundary). Thus, a $2 \times 2$-ft boundary results in directionality at or above $94 \mathrm{~Hz}$ and a supercardioid response (rejection greater than $12 \mathrm{~dB}$ at or over $30^{\circ}$ off-axis) at frequencies around $500 \mathrm{~Hz}$ [Bartlett 1991]. By using two panels with two microphones, an extremely directional response pattern can be achieved as a result of phase cancellation. Using $2 \times 2$-ft panels at an angle of $60^{\circ}$ to each other, the on-axis sensitivity angle is approximately $7.5^{\circ}$ (see Figure 3). Although this method is not mentioned in the underwater literature, we believe that we can achieve arbitrarily narrow beam widths (with a $5^{\circ}$ arc probably representing a practical minimum) by using this microphone design, as adapted to underwater use, in conjunction with Horton's phase-delay methods [1957, 247-248].

![](images/839b42d8806ea68f99d617bcf852de7debb3e248884379ce24a9b46faa56b83e.jpg)
Figure 3. Dillon PZM shotgun, with $2 \times 2$-ft panels at an angle of $60^{\circ}$ to each other. The PZMs are the dark objects outside the angle; the apex of the angle is pointed toward the sound source.

# Details of Electronics

With two digital delay circuits between the microphone elements and a summing amplifier, we can bias the beam to the left or right of the center axis. The high accuracy and precision of available delay circuitry enable us to set the center point of the beam with high precision. Thus, we have a method for scanning along the plane parallel to the base of the Pillon structure that contains the two microphones, without the requirement of physical motion.
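The PZM sizing rule quoted in the previous section ($F = 188/D$, with $D$ in feet) is simple enough to check numerically; a minimal sketch of our own, not part of the paper:

```python
def pzm_directional_freq_hz(boundary_ft: float) -> float:
    """Lowest frequency (Hz) at which a square boundary of side
    boundary_ft (feet) makes a pressure-zone microphone directional,
    per the F = 188 / D rule of thumb cited from Bartlett [1991]."""
    return 188.0 / boundary_ft

# The 2 x 2-ft panel proposed for the detector assembly:
print(pzm_directional_freq_hz(2.0))  # 94.0, matching the 94-Hz figure in the text
```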
A second Pillon assembly located on a parallel plane vertically above the first assembly provides a second beam that we can move in conjunction with the first beam. By electronically delaying these summation voltages prior to a second summation, we can bias the combined beam, thereby defining a plane (actually a hemicylindrical surface) over which the beam traverses with discrete scanning intervals with respect to both the $x$- and $y$-coordinates.

Calibrating the beams can be done by using a fixed reference source local to the PZM assemblies. A duplicate assembly located at a distance from the first provides a second plane over the area of observation. The intersections of target measurements between these two planes provide us with the third dimension through a change of basis; thus, we are able to detect the outline of an object in the area intersected by the cones bounded by the detection beams and the detection limits of the microphone elements. We use a notch filter with a very narrow bandwidth (a readily available device) and a computer controller in conjunction with the delay circuits to search selectively for any frequency within the limits of the microphone element range. Initially, we propose $1,500\mathrm{Hz}$ as a reasonable tradeoff between selectivity and sensitivity, but nothing in the mechanism or model requires this particular frequency in preference to any other. We suggest empirical analysis to determine which frequencies produce optimal results in real-world application.

The use of phase cancellation limits the arc angle of the scanning planes. This limit is more practical than theoretical, but realistically we cannot expect to achieve viable cancellations with delays in excess of a single wave period. Beyond this limit, we are faced with superposition problems, including accounting for the reflections from the boundary as they interact with incoming wavefronts.
This means that the effective scanning area is limited by the choice of frequency, the sensitivity of the microphone at the selected frequency, and the quantization of the planar grid required for sufficient object resolution. Closer objects will be "fuzzier" but easier to range. Higher frequencies will improve image definition, but at the expense of range. If we are willing to complicate the detection apparatus, we can defer the selection of tradeoff criteria until measurement time. With this approach, we include motors that can vary the angles between the boundary planes and vary the orientation of the two scanning planes by rotating the vertical assemblies about the vertical axis. We can address the frequency characteristics by simultaneously processing several frequencies through a distribution amplifier and a set of narrow notch filters. The complete apparatus can be generalized as shown in Figures 4 and 5.

![](images/53ee09538da4ad1bf352dfdd8663c355681ee8d4bd0f1b3e1e55a7d0010cb74c.jpg)
Figure 4. Detection array.

![](images/a24571f416aac5c999a0a04c23eed64079e6c37b7c1722b94b13698e4aab0c8b.jpg)
Figure 5. Electronic configuration.

Whether we exploit the characteristics of the boundary microphones or instead rely on conventional directional microphones, the scanning equations are identical. There are two factors of concern, the arrival time differential and the amplitude difference. We compute these as follows [Bartlett 1991, 44-45]:

$$
\Delta T = \frac{\sqrt{D^{2} + \left[ \frac{S}{2} + D \tan \theta_{s} \right]^{2}} - \sqrt{D^{2} + \left[ \frac{S}{2} - D \tan \theta_{s} \right]^{2}}}{c},
$$

where

$\Delta T$ is the time differential between the microphones,

$\theta_{s}$ is the desired source angle,

$S$ is the spacing between the microphone elements,

$c$ is the speed of sound, and

$D$ is the distance from the microphones to the detection plane (the median of the depth plane).
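The arrival-time formula translates directly into code. A minimal sketch of our own (the 1,500 m/s sound speed is a nominal seawater value we assume, since the formula leaves units to the reader):

```python
import math

def steering_delay_s(theta_s: float, S: float, D: float, c: float = 1500.0) -> float:
    """Inter-element delay Delta-T (seconds) that steers the summed
    beam to source angle theta_s (radians), for element spacing S and
    distance D to the detection plane (meters), per the formula above."""
    off = D * math.tan(theta_s)
    # Difference of the two slant paths from source to the two elements:
    return (math.hypot(D, S / 2.0 + off) - math.hypot(D, S / 2.0 - off)) / c

# On-axis (theta_s = 0) the two path lengths agree and no delay is needed:
print(steering_delay_s(0.0, S=0.6, D=100.0))  # 0.0
```

A positive delay biases the beam to one side; sweeping `theta_s` over the grid of desired angles yields the scan described earlier.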
Amplitude signals must be adjusted between the incident pairs in order to flatten the scanning plane. By weighting the amplitude values between the pair, we essentially eliminate the distance factor from the measured sector, which is appropriate since we compute distance via triangulation rather than absolute amplitude. The amplitude difference in decibels between the two microphones of each pair is

$$
\Delta = 20 \log \left[ \frac{a + b \cos \left( \frac{\theta_{m}}{2} - \theta_{s} \right)}{a + b \cos \left( \frac{\theta_{m}}{2} + \theta_{s} \right)} \right],
$$

where $\theta_{m}$ is the angle between the boundary planes (or between the microphone elements, in the case of directional elements), and $a$ and $b$ are constants that define the polar characteristics of the microphone elements.

There are several possible methods to image the three-dimensional target using our methods.

- We could try to use one additional convergence operation and scan both planes synchronously. This would yield sound pressure levels at points in three-space; in other words, it would result in a parallelepiped field of observation, demarcated by a grid on two of its faces, with pressure readings for each three-dimensional segment. This would yield the best image, but we are concerned that we are asking too much of the convergence techniques.
- We could determine the center of the target in each of the two observation planes and then find the distance to the center of the object using a simple triangulation (since we know the distance between the detection arrays and both angles of detection).
- We could apply a change of basis to one of the planes to map it onto the three-space coordinates of the other, weight the measurements in the rotated plane by their $z$-coordinate values, and add the contents of the matrices. This would yield a two-dimensional matrix whose elements contain a value that should be proportional to the location's depth in the $z$-plane.
A simpler interpretation of this is that we are intersecting the cylinder formed by extending the two-dimensional shape in each of the observation planes and interpreting this bounding shape as the shape of the object itself. This is more than sufficient image resolution given the limits of the arc formations. By summing the matrices derived from each frequency observed, we should get a reasonably detailed image with this method.

# Deriving Object Characteristics from Data

# Edge Detection

Through the array of hydrophones, we can construct a three-dimensional description of the target. By phase cancellation, we can construct precise narrow cones from the directional hydrophones and effectively orient them in an arbitrary direction. Each cone measures the sound intensity of a point in space, and we can represent this by a value in a two-dimensional matrix. By varying the delay time, we can sweep the detector field across the plane representing the field of observation; and thus we construct a two-dimensional grid. Each element of the matrix contains the data for the sound intensity of a particular position in space. Since the submarine blocks out the ambient noise, we detect a decrease in sound intensity in the positions of the submarine relative to its surrounding space. This gives us an outline of the submarine in two dimensions relative to the perspective angle of the microphone array (Figure 6).

![](images/c935511bbba01a60b5cd5da61a3b71f61e852c70f848d73fe89f3186bc42f739.jpg)
Figure 6. Plane scanned by an array of microphones. The dark region indicates the change in sound intensity level relative to the surrounding space, yielding the outline of the submarine.
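The edge-detection step reduces to a threshold test on the scanned grid. A minimal sketch of our own (the 3-dB threshold and the toy 60-dB field are hypothetical illustration values, not from the paper):

```python
def outline(grid_db, deficit_db=3.0):
    """Flag grid cells whose sound level falls more than deficit_db
    below a crude ambient estimate (the loudest cell in the scan)."""
    ambient = max(max(row) for row in grid_db)
    return [[level < ambient - deficit_db for level in row] for row in grid_db]

# Toy scan: a 60-dB ambient field with a quieter two-cell "shadow".
field = [[60.0, 60.0, 60.0],
         [60.0, 40.0, 40.0],
         [60.0, 60.0, 60.0]]
print(outline(field)[1])  # [False, True, True] -- the silhouette row
```

Running the same test on each scanned plane gives the two two-dimensional outlines that the triangulation below combines into three-space.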
From a single array of microphones, we cannot obtain the third dimension (the depth) of the submarine; therefore, the model proposes that another array of microphones be oriented at a slight angle to the observation plane, thereby obtaining another two-dimensional view of the submarine relative to the perspective angle of this new array, which we call array2.

Knowing the distance between the two arrays of microphones and the angles of their respective planes of observation, we can determine the intersection set of any given set of values—that is, we can locate the same subfield in three-space by deriving the third spatial vector of the first plane from the two vectors (representing the $x$- and $y$-coordinates) of the second plane. Thus, we can determine the absolute location of the sound field depression in three-space.

# Distance of scanned plane from the microphones

Referring to Figure 7, we observe that by the law of sines we have

$$
\frac{\sin \beta}{b} = \frac{\sin (180^{\circ} - \alpha - \beta)}{a} = \frac{\sin \alpha}{c},
$$

so that

$$
b = \frac{a \sin \beta}{\sin (180^{\circ} - \alpha - \beta)}, \qquad c = \frac{a \sin \alpha}{\sin (180^{\circ} - \alpha - \beta)}.
$$

![](images/a71b05f373156f85571f9dfd1a9db796b0941c932eb8669462e8d5513dc69e47.jpg)
Figure 7. Geometry of microphone arrays.

$a$ = distance between the two arrays of microphones
$b$ = distance from array1 to the target
$c$ = distance from array2 to the target
$\alpha$ = angle between $b$ and the ocean floor
$\beta$ = angle between $c$ and the ocean floor
$E$ = position of the target
$F$ = position of array1
$G$ = position of array2

# Size of each grid box

From Figure 8, we see that the area of each grid point is simply the area of the circle traced out by the cone. The radius of the circle is $r = D \sin \theta$, so the area of each grid box is $A = \pi (D \sin \theta)^2$, where $D$ is the distance from the microphone to the target and $\theta$ is the angle subtended by the cone of the microphone.

![](images/8ae302bafee91bd3438be8e0997ab32580958bc751c88537b78c629fcf0d94df.jpg)
Figure 8. Geometry of grid boxes.

![](images/ab87b5bc45ff53fe8ce2f57e566cc93fbc6f73dc10ff42e656d592de8e41b9f5.jpg)

Circular area of the cone traced out by the microphone.

![](images/c0a16867f8a4c0b4b6b6f43f2e0f75cb0184eb879fac6bd24cc54bc2b48299cd.jpg)

Area of each grid box.

Since the two arrays of microphones are pointed in different directions in space, we have two different directional bases to work with. From array1, we have a basis with unit vectors $X_{1}$, $Y_{1}$, and $Z_{1}$; and each point of the plane represented by array1 is expressed in terms of this basis with $X_{1}$ as the horizontal direction of the plane, $Z_{1}$ as the vertical direction of the plane, and $Y_{1}$ as the depth, perpendicular to the plane. Similarly, each point on plane 2 represented by array2 will be expressed in terms of the basis with respect to array2. Looking at the two planes separately, we cannot determine the $y$-component of the picture in either case. Now, if we perform a change of basis for the second plane, we can obtain the depth of the submarine in terms of the first basis.
This can be done simply by multiplying the coordinates of each of the points on plane 2 by the matrix

$$
\left[ \begin{array}{c c c} \cos \phi \cos \theta & - \sin \theta & - \sin \phi \cos \theta \\ \cos \phi \sin \theta & \cos \theta & - \sin \phi \sin \theta \\ \sin \phi & 0 & \cos \phi \end{array} \right],
$$

where $\theta$ is the angle between $X_{2}$ (the unit vector in the $x$-direction of array2) and the $X_{1}Z_{1}$-plane, and $\phi$ is the angle between $X_{2}$ and the $X_{1}Y_{1}$-plane. Here we are assuming that the distance between the two microphone arrays is small compared to the distance between the submarine and the microphone arrays; therefore, we can take the same origin for both bases and use this single matrix to perform the change of basis. After the transformation of basis, we obtain the $Y_{1}$-components for each scanned point $(X_{1i}, Z_{1j})$ on plane 1. Thus we have constructed a three-dimensional "portrait" of the submarine; the location of each of its points is expressed with respect to the basis defined by array1.

Now we define a standard basis with respect to the ocean. That is, we use the plane of the bottom of the ocean as the $xy$-plane and the vertical as the $z$-axis. We transform every point in the $X_{1}$, $Y_{1}$, $Z_{1}$ basis into this standard basis and obtain an "upright" three-dimensional figure of the submarine. By performing the transformation of bases in real time on the computer, we can generate a continuously moving three-dimensional shadow of the submarine.

# Size Computation

From the three-dimensional view, we can extract the position of any given surface point. Hence we can compute the length and width of the submarine. Better yet, we can determine the volume of the ship by summing up all the grid space taken up by the submarine.
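The change of basis described above amounts to applying a single $3 \times 3$ rotation matrix to each scanned point. A minimal sketch, not part of the original paper (the function names are ours, and any numeric angle values are illustrative assumptions):

```python
import math

def rotation_matrix(theta, phi):
    """Change-of-basis matrix from the array2 basis to the array1 basis,
    as given in the text (theta and phi in radians)."""
    ct, st = math.cos(theta), math.sin(theta)
    cp, sp = math.cos(phi), math.sin(phi)
    return [
        [cp * ct, -st, -sp * ct],
        [cp * st,  ct, -sp * st],
        [sp,      0.0,  cp     ],
    ]

def to_array1_basis(point2, theta, phi):
    """Express a point given in array2 coordinates in the array1 basis."""
    M = rotation_matrix(theta, phi)
    return [sum(M[i][j] * point2[j] for j in range(3)) for i in range(3)]
```

Because the matrix is orthogonal, the transformation preserves distances; it only re-expresses each scanned point so that its $Y_{1}$-component (the depth relative to array1) can be read off directly.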
# Velocity and Direction

To calculate the velocity of the submarine, we pick any point on the three-dimensional graph, locate its position at time $t$ and again at time $t + \Delta t$, and divide the change in each coordinate by $\Delta t$ to find the components of the velocity.

# Fun with Microphones

We experimented to test our model of how the submarine would affect the ambient field. Not having a large body of water, we did an air-based experiment. To simulate the ambient noise field, we used speakers connected to a noise generator. Since we did not have a large number of speakers to simulate the entire ambient noise field, we tried several representative orientations of speakers and detector. Our entire set of equipment was

- noise generator,
- amplifier,
- two speakers,
- sound-level meter,
- rolling trash can, and
- small table.

We set up the equipment in several different orientations. For each setup, we took several readings at each location to get an idea of the uncertainty of measurements. Our results are summarized at the end of this section.

In each setup, we pointed the detector directly toward the far noise source $c_1$ (speaker 1) and took readings at three submarine positions $A$, $B$, and $C$. In every setup, the submarine in location $A$ has a negligible effect on the ambient field.

Figure 9 shows the first setup, to simulate the noise field from the opposite side of the submarine, with the noise field from the detector side reflecting off the submarine back toward the detector.

![](images/cf0377bd4d18a8c44c2e1ba96bbf2b4337cdf376b3d3fb8cf927737d54835067.jpg)
Figure 9. Setup 1.

Figure 10 shows the setup with the sources off at an angle behind the detector, so as to measure the reflection of the ambient field off the submarine.

Figure 11 shows the sources at an angle in front of the detector, to get the opposite of the second setup.
By combining the data from both the second and third setups, we hoped to get an idea of the difference between the magnitude of reflection from the submarine and absorption by the submarine.

Figure 12 is similar to Figure 11 but with much greater distance to the detector (approximately two and a half times the distance for setup 3).

Table 1 summarizes our findings for the experiments. We decided that any measured change with an uncertainty greater than the change was not significant. In setup 1, the difference in the ambient field between the submarine in location $A$ and in location $B$ was negligible, whereas the difference between the submarine in location $A$ and in location $C$ was readily detectable. In the ocean, this would correspond to a measurable decrease in the ambient field when the array is directed toward the submarine, whereas when the array is pointed a little away from the submarine, the submarine would not be detected (i.e.,

![](images/e2d1c813e8b8980956e609313efcbe7facc9132222f186d17a066c2140ece353.jpg)
Figure 10. Setup 2.

![](images/35e393aac9c605f328ac4ecb19a9e97e92dee0ea0debd6f7ff237e0f9b5b039a.jpg)
Figure 11. Setup 3.

Table 1. Results of the experiment.

| Setup | Change (in dB) between $A$ and $B$ | Change (in dB) between $A$ and $C$ |
|-------|------------------------------------|------------------------------------|
| 1 | not significant | -1.34 ± 0.16 |
| 2 | not significant | +0.54 ± 0.27 |
| 3 | -2.20 ± 0.23 | not significant |
| 4 | -1.04 ± 0.25 | not significant |
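The significance rule used for Table 1 (a change counts only if its magnitude exceeds its own uncertainty) is a one-line predicate. A sketch, not from the original paper, that re-checks the four numeric entries of the table:

```python
def significant(change_db, uncertainty_db):
    """The rule applied in Table 1: a measured change is significant
    only if its magnitude exceeds its measurement uncertainty."""
    return abs(change_db) > uncertainty_db

# The four numeric (change, uncertainty) entries reported in Table 1:
reported = [(-1.34, 0.16), (+0.54, 0.27), (-2.20, 0.23), (-1.04, 0.25)]
```

All four reported changes pass the rule; the cells marked "not significant" are those whose measured change fell below its uncertainty.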
![](images/ca696015d22dcff7960b18218ad1461d8763e9e4fcfd175e37bb325bd2262b28.jpg)
Figure 12. Setup 4.

ambient noise coming off-axis onto the submarine has a negligible effect). This conclusion is corroborated by the data in setups 3 and 4: When the submarine was not directly between the source and the detector, it had little effect on the ambient field; when it was directly between the source and the detector, the submarine had a large effect on the field. So, only noise that comes directly through the submarine to the detector is substantially affected.

The data from all four setups show that the reflected noise from the submarine is lower in intensity than the noise absorbed by the submarine. Setup 1 shows this directly. From setup 2 in conjunction with setup 3, we see that sources of the same intensity reflect back to the detector at $+0.54$ dB, while the submarine attenuates sources at the same distance from it by $2.20$ dB. This difference becomes smaller in setup 4, because the detector can now register more of the ambient field, or because the submarine takes up less of the detector's field of view. However, it is still detectable at $1.0$ dB.

The data corroborate the fundamental idea that the submarine is detectable as an absence of noise in the ambient field.

# Limitations

There are several problems that our model must deal with.

For a wave to "see" the target, that is, to be reflected by the target, it must have a wavelength smaller than the target. We assumed that the smallest dimension (length, width, height) of a submarine would be approximately $5\mathrm{m}$. Smaller wavelengths might reflect off large fish; however, we are looking at an array of measurements, and from that we will know whether the target is submarine-size or fish-size.

We also want the frequency to be as low as possible, since low frequencies tend to attenuate in water far less than high frequencies.
We can find the lowest usable frequency from the equation $c = \lambda f$, where $c$ is the speed of sound in water (1,500 m/sec), $f$ is the frequency of the wave, and $\lambda$ is the wavelength (5 m). We find that 300 Hz is the lowest usable frequency. There are other considerations, however. The directionality of the hydrophone array is highly frequency-dependent: the higher the frequency, the more directional the array can become. We decided that a frequency of 1,500 Hz gains this additional directionality without attenuating too much.

We also find that there is a maximum distance at which we can detect the submarine, even under conditions of ambient noise with fixed frequency and amplitude. The passive sonar equation,

$$
\mathrm{SL} - \mathrm{TL} = \mathrm{NL} - \mathrm{DI} + \mathrm{DT},
$$

can be solved to find the maximum distance. Note that for detection of a submarine in an ambient noise field, $(\mathrm{NL} - \mathrm{SL})$ is the relevant noise relation term. This is the ratio of the noise that the submarine will absorb (SL) to the ambient noise level (NL). Rearranging, we get

$$
\mathrm{TL} = \mathrm{NL} - \mathrm{SL} - \mathrm{DT} + \mathrm{DI}.
$$

From Urick [1983, 385], we have

$$
\mathrm{DT} = 5 \log \left(\frac{d w}{t}\right),
$$

where $d$ is a parameter relating the probability of detection and the probability of false alarm, $w$ is the frequency range, and $t$ is the length of time listening for the submarine. From Urick [1983, 23], we have

$$
\mathrm{DI} = 10 \log \left(\frac{P_{\mathrm{equiv}}}{P_{\mathrm{actual}}}\right),
$$

where $P_{\mathrm{equiv}}$ is the noise power generated by an equivalent nondirectional hydrophone and $P_{\mathrm{actual}}$ is the noise power generated by the actual hydrophone.
Another way of looking at this is in terms of the relative area of detection for a nondirectional hydrophone compared to the actual array of hydrophones, so an equivalent equation is

$$
\mathrm{DI} = 10 \log \left(\frac{\text{surface area of } 360^{\circ} \text{ scan}}{\text{surface area of actual scan}}\right).
$$

Since the arclength of a section on a sphere is given by $s = r\phi$ for angle $\phi$ measured in radians, and since for small $\phi$ the arclength $s$ is approximately equal to the diameter of the circle inscribed on the sphere, we get the surface area of the scan as

$$
A = \pi \left(\frac{r \phi}{2}\right)^{2}.
$$

So we have

$$
\mathrm{DI} = 10 \log \left(\frac{4 \pi r^{2}}{\pi \cdot \frac{r^{2} \phi^{2}}{4}}\right) = 10 \log \left(\frac{16}{\phi^{2}}\right) \approx 12 - 20 \log \phi.
$$

Finally,

$$
\mathrm{TL} = 20 \log r + \left(\alpha \times 10^{-3}\right) r,
$$

with $\alpha$ given by [Urick 1983, 108] as

$$
\alpha = \frac{0.1 f^{2}}{1 + f^{2}} + \frac{40 f^{2}}{4{,}100 + f^{2}} + 2.75 \times 10^{-4} f^{2} + 0.003,
$$

for $f$ expressed in kHz. For our purposes, we neglect the term involving $\alpha$.

Using these values, we can now solve for $r$ as

$$
\log r = 0.05 \left(\mathrm{NL} - \mathrm{SL} - 5 \log \left(\frac{d w}{t}\right) + 12 - 20 \log \phi\right).
$$

Included as an appendix are graphs, for a perfectly absorbing submarine, of range vs. time (Figure 13) and, given a sweep time of 1 second, of range vs. $\phi$ and (NL - SL) (Figure 14). Notice that if the submarine is not perfectly absorbing, it quickly becomes hard to detect. For example, recall that the difference between the ambient noise field alone and the field with the submarine present is $-20\log a$. For $a = 0.1$, we have a difference of $20~\mathrm{dB}$.
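The range relation above is easy to evaluate numerically. A minimal sketch, not from the original paper (the function name and all parameter values are illustrative assumptions; logarithms are base 10, as is conventional for decibels):

```python
import math

def detection_range(nl_minus_sl, d, w, t, phi):
    """Maximum detection range r, in units of the TL reference distance,
    from  log10 r = 0.05*(NL - SL - 5 log10(d w / t) + 12 - 20 log10 phi),
    neglecting the absorption term involving alpha."""
    exponent = 0.05 * (nl_minus_sl
                       - 5.0 * math.log10(d * w / t)
                       + 12.0
                       - 20.0 * math.log10(phi))
    return 10.0 ** exponent
```

As expected from Figures 13 and 14, a more absorbing submarine (larger NL − SL), a longer listening time $t$, or a narrower beam (smaller $\phi$) each extends the range.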
If this is the case, the submarine is detectable only within $1\mathrm{km}$ or less, even with a low angle of observation. We can adjust for this somewhat with a slower scan; changing from 1 sec to 3 sec would almost double the range limit for detection. However, the better the absorption factor of the submarine, the easier it is for us to detect. This implies a tradeoff for the submarine between absorption (to evade active sonar) and reflectivity (to evade this passive method).

The uncertainty in successive readings limits the smallest noise difference that we can detect, lowering the range limit by approximately the standard deviation of the noise. A reference deviation for a hydrophone is $0.5\mathrm{dB}$ [Wagstaff and Baggeroer 1985].

There is also variation in the amplitude of the ambient field itself. If the variation is $7\mathrm{dB}$, then a measured difference of $7\mathrm{dB}$ may be nothing more than random noise; again, this lowers the range limit by the deviation level. That is, it lowers the effective (NL - SL) difference. A common standard deviation in the ambient noise field over one-hour periods is about $6\mathrm{dB}$, which results from changing wind speed

![](images/46c44504cff2b706c61c37d82d3ef27008a1078deac289ac986206767d123802.jpg)
Figure 13. Range vs. time of observation and vs. angle of observation.

![](images/1863f17cd5f1c789c3831fa223f53b6302d0264eac3ea3a0c8bf1507c6d177f8.jpg)
Figure 14. Range vs. change in field with reflection and vs. angle of observation.

during the observing period [Urick 1983]. However, we postulate that the deviation over one- or two-second intervals would be far lower, as the wind speed changes far less. A reasonable assumption would probably be about $1-2\mathrm{dB}$.

We can get around these deviations to some extent by taking a lot of data. That is, if on several successive scans we find a deviation of 1 dB in the same location, it is more likely to be caused by a submarine than by a statistical deviation.
Thus, we can probably notice a deviation of less than half the deviation of the ambient noise field or the hydrophone.

A very large problem in positioning the submarine is not directly apparent from our model. The temperature difference at different levels in the ocean, combined with pressure, gives varying speeds of sound at different depths. While the maximum difference in the speed of sound between depths of 0 and $1,500\mathrm{m}$ is only about $10\mathrm{m/s}$, a velocity gradient as well as a speed gradient is forced upon the sound wave. This leads to rather strange characteristics in the path of travel of sound (called a ray) in the deep ocean (see Robinson and Lee [1994] and Tucker and Gazey [1966] for pictures of ray tracings).

An interesting note is that a "shadow zone" occurs at some distance from the source. This impacts the placement of the hydrophone; we do not want it in the upper portion of Region II (200-1,500 m) or the lower part of Region I (0-200 m), as these areas are missed by all sound rays at some distance from the source. It also requires some data processing once the location of the submarine has been found. From the equation for the length of an arc, it is easy to verify that

$$
\cos \phi_{m} = \cos \phi_{n} + \frac{d_{n} - d}{R_{n}},
$$

where $\phi_{m}$ and $\phi_{n}$ are the angles of inclination of the incident wave at the detector and at the target, respectively; $d_{n}$ is the depth of the detector; $d$ is the depth of the source; and $R_{n}$ is the radius of curvature of the ray path [Tucker and Gazey 1966, 105]. The horizontal distance to the target, $s_{n}$, is given by

$$
s_{n} = R_{n} (\sin \phi_{n} - \sin \phi_{m}).
$$

Combining these equations, we can solve for $d$ and $\phi_{n}$ if we use the distance to the target computed by the array as $s_{n}$. Of course, this will not be exact, but iterations using $\phi_{n}$ to find a new $s_{n}$ will give a fairly exact picture.
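Given the detector depth $d_n$, the ray's radius of curvature $R_n$, the arrival angle $\phi_m$, and the array's distance estimate taken as $s_n$, the two ray equations can be solved directly for $\phi_n$ and the target depth $d$; the iteration described above would then feed the recovered $\phi_n$ back into a refined $s_n$. A minimal sketch of the direct solve, not from the original paper (the function name and any numeric values are illustrative assumptions):

```python
import math

def target_depth(s_n, phi_m, d_n, R_n):
    """Solve   s_n = R_n (sin phi_n - sin phi_m)   for phi_n,
    then  cos phi_m = cos phi_n + (d_n - d)/R_n    for d.
    Angles in radians, depths and distances in meters."""
    sin_phi_n = s_n / R_n + math.sin(phi_m)
    phi_n = math.asin(sin_phi_n)          # requires |sin_phi_n| <= 1
    d = d_n - R_n * (math.cos(phi_m) - math.cos(phi_n))
    return phi_n, d
```

Sign conventions follow the two equations in the text; a round trip (generate $s_n$ from a chosen $\phi_n$, then recover it) reproduces the chosen angle and depth.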
The only problem is when the ray has been reflected by the surface of the ocean or the surface between two regions, or if the ray has gone through more than one region. If this is the case, the actual location of the target is far more difficult to ascertain and is beyond the scope of this paper.

# Discussion of Alternative Approaches

There are several other methods of detecting the submarine.

One very interesting and purely hypothetical method is the idea of a "sound camera." This incorporates a sound lens and a nonreflecting box. It works in much the same way as an ordinary camera: It takes the view, inverts it through the lens, and displays it inside the box. The display would probably be in the form of a pressure-sensitive liquid that exhibits some characteristic change in response to very small pressure changes, in proportion to those changes. Thus, one would be able to get a "picture" of the sound that the lens is "seeing" by observing the changes in the liquid in the box. Naturally, there are several problems, the foremost being the velocity of sound. Since the velocity of light is essentially infinite with respect to a camera, the entire plane of observation is "seen" at the same time. However, because the speed of sound is very finite, things at distances differing by kilometers or more would be offset in the picture by times of seconds. If the sources were relatively constant, this would not be of much consequence, as the source 10 sec before the picture would probably be much the same as the source at the time of the picture. This would be a hard device to design, but it would be very useful in searching the ocean for sound sources. Reconstructing a visual image from the sound field could probably be done using acoustic holography, a technique described below as an additional alternative approach.

A second alternative is a variant on a method described by Farrah et al. [1970] regarding sound holography.
They describe techniques for reconstructing a holographic image using sonar data. Just as a laser can be used to read the pits of a compact disc, or to scan the surface of an LP record, it would be possible for us to "read" the sound waves in water by measuring the refraction of a laser beam passing through them. We would be able to read specific waves from within the compendium of the ambient field, theoretically to any precision. Scanning the plane representing the observation threshold, we could reconstruct a plane parallel to it by subtracting the waves whose source can be determined to be off-axis to the observation plane. Time provides the third spatial dimension, so we should be able to locate and measure a target item within the observation space.

# Conclusion

Our model allows for the detection of a perfectly absorbing submarine at extreme distances with exceedingly good technology (see Figure 13). However, with a more realistic model of a submarine, the detection limit falls to less than $5\mathrm{km}$, and possibly less than $1\mathrm{km}$. Nevertheless, the array can also detect a submarine generating non-machinery-based noise (such as flow noise and cavitation), so we may hear a quickly moving submarine at large distances.

# References

Albers, Vernon. 1965. *Underwater Acoustics Handbook*, vol. 2. University Park, PA: Pennsylvania State University Press.
Bartlett, Bruce. 1991. *Stereo Microphone Techniques*. Stoneham, MA: Butterworth-Heinemann.
Cox, Albert. 1982. *Sonar and Underwater Sound*. Lexington, MA: Lexington Books.
Farrah, H.R., E. Marom, and R.K. Mueller. 1970. An underwater viewing system using sound holography. In *Proceedings of the 2nd International Symposium on Acoustical Holography*, edited by A.F. Metherell and L. Larmore, 173-184. London: Plenum.
Hassab, Joseph. 1989. *Underwater Signal and Data Processing*. Boca Raton, FL: CRC Press.
Horton, Joseph. 1957. *Fundamentals of Sonar*. Annapolis: U.S. Naval Institute.
Metherell, A.F., and Lewis Larmore (eds.). 1970. *Acoustical Holography*, vol. 2. New York: Plenum Press.
Naval Sea Systems Command. 1984. *Ambient Noise in the Sea*. Washington: Dept. of the Navy.
Nisbett, Alec. 1974. *The Use of Microphones*. New York: Hastings House.
Robinson, Allan, and Ding Lee (eds.). 1994. *Oceanography and Acoustics*. Woodbury, NY: AIP Press.
Tucker, David, and B. Gazey. 1966. *Applied Underwater Acoustics*. London: Pergamon Press.
Urick, Robert J. 1967. *Principles of Underwater Sound for Engineers*. New York: McGraw-Hill.
_____. 1982. *Sound Propagation in the Sea*. Los Altos, CA: Peninsula Publishing.
_____. 1983. *Principles of Underwater Sound*. 3rd ed. New York: McGraw-Hill.
Wagstaff, Ronald, and Arthur Baggeroer (eds.). 1985. *High-Resolution Spatial Processing in Underwater Acoustics*. NSTL, MS: Naval Ocean Research and Development Activity.
Wilson, Oscar. 1985. *An Introduction to the Theory and Design of Sonar Transducers*. Washington: U.S. Government Printing Office.

# How to Locate a Submarine by Detecting Changes in Ambient Noise

Carl Leitner

Akira Negi

Katherine Scott

University of North Carolina

Chapel Hill, NC 27599-3250

Advisor: Ancel C. Mewborn

# Summary

We generate an artificial ambient noise field and give an algorithm for locating a submarine.

Our model for the ambient noise field allows any frequency range, but we use $30\mathrm{kHz}$, where environmental sounds, such as surface activity and biologics, dominate. We assume a normal distribution of noise in the area of interest and that a submarine impedes measuring the noise beyond it.

Our recognition algorithm is simple: We look for contour lines on our ambient noise field, then for closed contours of the right size, and finally for the intensity patterns that match those of submarines. To help detect significant changes, we smooth the data.
Since our algorithm is reasonably fast, it can also detect the changing locations of the center of the submarine as it moves and hence compute the speed and direction of the submarine. We calculate the size and the depth of the submarine by finding the maximum intensity of the dampening and the average dampening in the area around it.

Using the algorithm on our artificial data, we can spot a submarine within $25\mathrm{m}$ of its actual location, working with an area of $7\mathrm{km} \times 7\mathrm{km}$ and a ratio of maximum dampening effect to standard deviation of ambient noise of about 1.8. Our determination of the speed of the submarine was correct to within computer round-off error. At its latest stage, our model computed the depth and the size of the submarine to within a factor of $10^{2}$.

# Physics Facts

- Acoustic intensity is related to the distance $r$ that sound travels and the attenuation coefficient $\alpha$ [Apel 1987, 368] via

$$
I(r) = I_{0} \left(\frac{r_{0}}{r}\right)^{2} e^{-\alpha (r - r_{0})},
$$

where $I_0$ is the acoustic intensity at distance $r_0$.

- "[B]elow $20\mathrm{Hz}$, a frequency-independent attenuation coefficient, $\alpha_{1}$, occurs and is approximated by: $\alpha_{1} = 6.9\times 10^{-7}\mathrm{m}^{-1}$" [Apel 1987, 341, 369]. At higher frequencies, the attenuation constant is a function of frequency; at still higher frequencies, the dominant factor is water viscosity [Apel 1987, 368-371].
- "[T]he [acoustic] energy radiates in all directions as a spherical wave," which causes an attenuation proportional to $1/\text{distance}^2$ [Dera 1992, 434].
- "[T]he velocity of sound in the ocean varies from $1430-1540\mathrm{m/s}$ near the surface to $1580\mathrm{m/s}$ at great depths" [Dera 1992, 436].
- Different underwater sounds have their own specific frequency ranges, such as shipping and machinery (less than $2\mathrm{kHz}$), biologics (0.1 to $100\mathrm{kHz}$), and ice (several Hz to kHz) [Hassab 1989, 3].

# Assumptions and Justifications

- We monitor a small area of ocean, say $10\mathrm{km} \times 10\mathrm{km}$, with a uniform depth of $D$. In a small area, ocean depth tends to vary less, and we do not have to consider the curvature of the earth. If we need to locate a submarine in a larger area, we divide up the area into $10 \times 10$ squares.
- Effects of submarines on ambient noise:

- When there is a submarine between a sound source and the observation point, the ambient noise field is dampened at nearby observation points, the same effect as the submarine absorbing almost all sound (about $98\%$). Most of the sound reflected off an ellipsoid is scattered far away from our sensors.
- The dampening effects are independent of the speed of the submarine.

- Assumptions about the ambient noise field:

- We have instruments to measure acoustic intensity as a function of position on the plane at a particular depth $d$.
- Sensors do not malfunction.
- Atmospheric sound does not propagate into the ocean [Pain 1983, 158].
- The dominant effect of sound hitting the bottom of the ocean is scattering off bumps.
- We set $d = 100 \mathrm{~m}$, for realism and for simplicity ("sources located in shallow surface ducts can give complex ray arrival patterns" [Munk et al. 1995, 382]).

- Just $14\%$ of sound is transmitted through the steel-water boundary [Pain 1983]; so $(14\%)^2 \approx 2\%$ of the sound is transmitted completely through the submarine's two steel-water boundaries.
- We measure a small interval of frequencies. “At the high-frequency end, sound absorption by seawater is very high.
… At the low end, below 1 cps, one has great difficulty in generating sound (except with earthquakes and very large explosions)” [Tolstoy and Clay 1966, 3].
- We ignore the effects of ambient water velocity on sound propagation, the common practice [Keller 1977, 2].

- Other properties of submarines:

- The submarine's shape is an ellipsoid. The thickness of the submarine is negligible in comparison to the depth of the ocean and the distance from which we are measuring the noise field.
- The speeds of submarines do not exceed 35 knots [Friedman 1984, 105].
- The lengths of submarines vary from $50\mathrm{m}$ to $150\mathrm{m}$; the width is always about $10\mathrm{m}$ [Friedman 1984].
- The submarine is parallel to the ocean surface; for operational reasons, submarines do not tilt by a very large angle.

# Development of the Model

# Choice of Frequency

We limit our noise detection to a convenient frequency range, near $30\mathrm{kHz}$; our model can adapt to different choices.

Noise near $30\mathrm{kHz}$ is caused mostly by surface water movements, thermal activity, and biologics, which either affect relatively large areas uniformly or are distributed randomly throughout the region. Thus, it is reasonable to assume that the intensity of noise in the ambient noise field is distributed according to a smooth function with random fluctuations following a normal or uniform distribution.

We could alternatively look at low frequencies, near a few Hz, for which we would need a different attenuation constant. The dominant noise in this range is seismic activity, and seismic information is readily available in real life.

# Dampening Effects

We assume that reflections off the submarine and off the ocean bottom are dominated by scatterings, which means that the effects of reflection cannot be measured in the range in which we are working.
This means that any sound that hits the submarine effectively disappears, i.e., it has the same effect as if the sound were absorbed by the submarine. Thus, we assume that the submarine blocks all sounds that come toward the observation point from the other side.

An ellipsoidal submarine presents an elliptical profile, which at depth $d$ can be represented by the equation

$$
\frac{(x - x^{\prime})^{2}}{a^{2}} + \frac{(y - y^{\prime})^{2}}{b^{2}} = 1,
$$

where $(x', y')$ is the center of the ellipse. In addition, we assume that the ocean depth is uniformly $D$ (see Figure 1).

![](images/70626d5424f3300923c47978cecdc9588fc548728c9eaea8add0b11f47ca100d.jpg)
Figure 1. Setup for the integral to calculate the dampening caused by the submarine. The $z$-axis points downward, for convenience.

The sound intensity measured is inversely proportional to the square of the distance from the source. Ignoring seawater's absorption of sound, the total sound blocked is expressed by the integral

$$
\iiint_{V} \frac{1}{x^{2} + y^{2} + z^{2}}\, dx\, dy\, dz, \tag{1}
$$

where $V$ is the volume of water blocked by the submarine. The evaluation of the integral proved exceedingly difficult, so we used the following approximation:

$$
\int_{R}^{(D/d)R} \frac{a b \pi \left(\frac{r}{R}\right)^{2} \left(\frac{d}{R}\right)}{r^{2}}\, dr, \tag{2}
$$

where $R = \sqrt{x'^2 + y'^2 + d^2}$ is the distance from the observation point to the center of the submarine. This integral reduces to

$$
\frac{a b \pi}{R^{2}} (D - d). \tag{3}
$$

Note that the dampening is not affected by the direction in which the submarine is pointing.

Water dampens the sound by a factor of $e^{-\alpha (r - r_0)}$, where $r$ is the distance from the source of the sound, $r_0$ is a reference distance (usually taken to be 1), and $\alpha$ is a constant depending on the frequency of the sound that we are measuring.
For example, $\alpha = 3 \times 10^{-2}\mathrm{dB/km}$ at $30\mathrm{kHz}$ [Apel 1987]. Incorporating this factor into (1) makes the integral even harder to evaluate. Thus, we use (2) to get

$$
\int_{R}^{(D/d)R} \frac{a b \pi \left(\frac{r}{R}\right)^{2} \left(\frac{d}{R}\right)}{r^{2}} e^{-\alpha (r - r_{0})}\, dr,
$$

which reduces to

$$
\int_{R}^{(D/d)R} \frac{a b \pi d}{R^{3}} e^{-\alpha (r - r_{0})}\, dr
= \left. \frac{a b \pi d\, e^{-\alpha (r - r_{0})}}{-\alpha R^{3}} \right|_{R}^{(D/d)R}
= \frac{a b d\, e^{\alpha r_{0}} \pi}{\alpha R^{3}} \left(e^{-\alpha R} - e^{-\alpha R D / d}\right).
$$

Setting $r_0 = 1$, we get

$$
\frac{a b d\, e^{\alpha} \pi}{\alpha R^{3}} \left(e^{-\alpha R} - e^{-\alpha R D / d}\right). \tag{4}
$$

We see that the amount of noise measured at each point is approximately proportional to $2\pi e^{\alpha} / \alpha$. For the derivation of the integrals and these values, see the Appendix.

Our computer model reveals that (4) produces maximal dampening effects similar to those of (3), but the effects of (4) are registered in an area roughly one-third to one-half the radius of (3).

We use (3) with the additional constraint of a smaller dampening radius. The dampening exceeds some criterion constant $c$ when

$$
h < \sqrt{\frac{a b \pi (D - d)}{c} - d^{2}} < \sqrt{\frac{a b \pi D}{c}},
$$

where $h$ is the horizontal distance from the observation point. Note that the middle expression depends on the unknown depth of the submarine; the expression on the right contains known quantities, except for the length of the submarine, which we assume varies by a factor of only 3. We can take $c$ to be a relatively small constant, such as the standard deviation of the ambient noise, properly scaled.
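Both dampening models are inexpensive to evaluate, and their limiting behavior can be checked numerically: as $\alpha \to 0$, model (4) should reduce to model (3). A sketch, not from the original paper (function names and all parameter values are illustrative assumptions):

```python
import math

def dampening_unattenuated(a, b, D, d, R):
    """Model (3):  I = a b pi (D - d) / R^2."""
    return a * b * math.pi * (D - d) / R**2

def dampening_attenuated(a, b, D, d, R, alpha):
    """Model (4), with r0 = 1:
    I = a b d e^alpha pi / (alpha R^3) * (e^{-alpha R} - e^{-alpha R D/d})."""
    return (a * b * d * math.exp(alpha) * math.pi / (alpha * R**3)
            * (math.exp(-alpha * R) - math.exp(-alpha * R * D / d)))
```

For very small $\alpha$, the bracketed difference in (4) is approximately $\alpha R (D - d)/d$, and the two models agree, confirming the reduction.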
With such a choice of $c$, the effects of the submarine are detectable in an area of radius approximately $100\mathrm{m}$.

We still need to consider the motion of the submarine. Since the typical speed of about 30 knots (15 m/s) is far less than the speed of sound (1430-1540 m/s), the time delay due to movement is small enough to ignore.

# Analysis of the Problem and Model Design

We developed a graphical simulation in MATLAB, with data on a grid (each square representing a sensor) and with color indicating acoustic intensity.

# Generation of Simple Random Noise

We had MATLAB generate a data set containing only random noise. To start, we used 1 as the mean of the noise intensity. The adjustment values at each point came from a uniform distribution on the interval $(-0.00005, 0.00005)$. Later, we used a normal distribution with mean 0 and standard deviation 0.00005 to create the random noise.

# Dampening Effect of the Submarine

We first guessed that the submarine would create an ellipse-shaped area of dampening, with greater dampening in the center than at the edges.

Later, we derived two dampening functions using (3) and (4). The "attenuated" model included the effect of sound absorption by seawater, while the "unattenuated" model did not.

# Smoothing Functions

Having simulated the data, we sought a way to locate the submarine.

We began by removing the random noise via a smoothing function. The first smoothing method, "consecutive," used least squares to fit a line to the first 5 points in each row, then to the next 5 points, and so on. We specified an acceptable range of "noise" and then checked each point to see whether it was within that (vertical) distance from the least-squares line. If so, the value of the point was reassigned to the value on the regression line. We also wrote a routine that performed the same type of smoothing on each column. This method proved ineffective, but it inspired a second method.

The second method, "overlapping," was very similar to the consecutive method, except that instead of processing points 1 to 5, then 6 to 10, then 11 to 15, we processed points 1 to 5, then 2 to 6, then 3 to 7, etc. This was much more effective. We found that running the overlapping row and column smoothers several times each did a very good job of enhancing the contrast between average noise and dampening effect.

If we take the average of 5 points at each step, we theoretically expect the standard deviation of the mean to be reduced by a factor of $1 / \sqrt{5}$ at each step. After 6 smoothings, the standard deviation was reduced by a factor of about $1/12$ , a smaller reduction than the theoretical $(1 / \sqrt{5})^6 = 1/125$ . However, since the smoother leaves unchanged the many points that fall outside our noise tolerance (twice the standard deviation), this smaller observed reduction is consistent with the theoretical result.

# Location of the Submarine

We first had MATLAB produce a contour map of the data. Then we located contours that might represent a submarine's detectable dampening radius. Finally, we checked these "suspicious" contours for whether they behaved like submarine-dampening contours.

![](images/67027a5d7266ee70ae3883de8ba15cf0e55e7e141578bdb0208826eeb5894b37.jpg)
Figure 2. Cross-section view of the ambient field on an $11 \times 11$ grid. The marked points are minima.

We identified the contours that stayed within the grid. Then we eliminated all contours with diameters significantly greater than the detectable dampening radius. We found the "center of mass" of each contour by averaging all of the contour's vectors. Then we found clusters of contours by grouping those whose "centers of mass" were within one-fourth of the detectable dampening radius. Within each cluster, we averaged the contour centers to get a cluster center.

We eliminated contour clusters with fewer than 4 contours because submarines tend to have 4 or more contours associated with them.
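
The contour-grouping step just described can be sketched in a few lines. The following is an illustrative Python translation (the authors worked in MATLAB); the function name, the greedy grouping order, and the sample data are all invented:

```python
# Sketch of the contour-clustering step: group contour "centers of mass"
# that lie within one-fourth of the detectable dampening radius, then keep
# only clusters with at least 4 contours and report their averaged centers.
# Names and sample data are illustrative, not the authors' MATLAB code.
from math import hypot

def cluster_centers(centers, damp_radius, min_contours=4):
    threshold = damp_radius / 4.0
    clusters = []  # each cluster is a list of (x, y) contour centers
    for cx, cy in centers:
        for cluster in clusters:
            # running "center of mass" of this cluster
            mx = sum(p[0] for p in cluster) / len(cluster)
            my = sum(p[1] for p in cluster) / len(cluster)
            if hypot(cx - mx, cy - my) <= threshold:
                cluster.append((cx, cy))
                break
        else:
            clusters.append([(cx, cy)])
    # average the contour centers of each sufficiently large cluster
    return [
        (sum(p[0] for p in c) / len(c), sum(p[1] for p in c) / len(c))
        for c in clusters
        if len(c) >= min_contours
    ]

# Example: five contours near (10, 10) and one stray contour far away.
centers = [(10, 10), (10.5, 9.5), (9.5, 10.2), (10.2, 10.3), (9.8, 9.9), (50, 50)]
print(cluster_centers(centers, damp_radius=8.0))  # one center near (10.0, 9.98)
```

The stray contour forms a cluster of size 1 and is discarded, mirroring the "fewer than 4 contours" rule above.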

For each cluster, we then searched the $11 \times 11$ square around the center and found the absolute minimum intensity in that square. We looked at the 4 grid squares adjacent to the minimum to see whether the absolute minimum was a local minimum (the 4 adjacent squares might not all be part of the $11 \times 11$ square). If this minimum was not a local minimum, then the cluster marked an area of high intensity rather than an area of dampening, so we eliminated such clusters from our set.

The remaining clusters should indicate the presence of submarines at their centers.

# Speed and Direction

To approximate the speed of the submarine, we locate it at two different times and use the time-distance formula from algebra to find the speed. We would approximate the direction as a vector from the first location of the submarine to the second.

# Size and Depth

We use the unattenuated model, (3), to approximate depth and size. Recall that the dampening is

$$
I = \frac {a b \pi}{R ^ {2}} (D - d).
$$

The maximum of this function occurs when $R$ is smallest, which is when $R = d$ . Also, by rewriting the equation, we see that

$$
\begin{array}{l} a b \pi (D - d) = I \left(h ^ {2} + d ^ {2}\right) \\ a b \pi (D - d) = m \left(I \left(h ^ {2} + d ^ {2}\right)\right) \\ a b \pi (D - d) = d ^ {2} m (I) + m \left(I h ^ {2}\right), \end{array}
$$

where $h$ is the horizontal distance and $m$ is the averaging function over some region. By substitution, we see that

$$
d ^ {2} \max (I) = a b \pi (D - d) = d ^ {2} m (I) + m (I h ^ {2}),
$$

or

$$
d = \sqrt {\frac {m (I h ^ {2})}{\max (I) - m (I)}}.
$$

Since we assume the width to be constant, we can assume $a$ to be constant. So the length $b$ of the submarine is

$$
\frac {d ^ {2} \max (I)}{a (D - d) \pi}.
$$

# Figures

Figures 3-6 are a clear visual representation of our algorithm.
The mean of the measurements in these figures is scaled by $2\pi e^{\alpha} / \alpha \approx 2.095 \times 10^{4}$ . + +![](images/b94a56dafaa8e33f6d8a2fbd5c11eee8606e10f142851ab270407f8c13b719cb.jpg) +Figure 3. Plot of the sample data. + +- Figure 3 is a plot of our sample data. Where the graph dips down is where the submarine is. (The figure is a horizontal view of the 3-D graph.) +- Figure 4 is a contour map of the 3-D data from which Figure 3 was made. + +![](images/2c5f2136007224832b1b1fa7ea9bdd2060a31a53de7168ecbe0306131b556ea4.jpg) +Figure 4. Contour map of the sample data. + +- Figure 5 shows the effect of "overlapping" smoothing on the data from Figure 3. Notice that the noise-to-dampening ratio has drastically improved. Unfortunately, the maximum dampening has also decreased, so it is necessary to return to the original data for information on submarine size. +- Figure 6 is a contour map of the data in Figure 5. Notice how many fewer contours appear on Figure 6 than on Figure 4. + +![](images/5db80476ca8aad990459e6fb8565261e8b617677b914cbe419fe9e9b84cf0b22.jpg) +Figure 5. The effect of "overlapping" smoothing on the sample data from Figure 3. + +![](images/700db0c2cbda36ca660beb00b42752baa59af3fb61a50d796fbcd0efa4d387de.jpg) +Figure 6. Contour map of the data in Figure 5. + +# Results and Model Testing + +Using our algorithm on our modeled data, we located a submarine within $25\mathrm{m}$ of the actual location, working with an area of $7\mathrm{km}$ by $7\mathrm{km}$ , with a ratio of about 1.8 of maximum dampening effect to standard deviation of ambient noise. Considering that our scale is $25\mathrm{m}$ to a computer unit, this is the best that we can hope for. This remained true when we assumed that the mean noise level was not constant and replaced it with some smooth function with relatively small variation. + +We were also within this same range for determining the speed of the submarine. 
Our model correctly located the center of the submarine at each subsequent step in the model, even when we assumed that the submarine was moving.

Since our model is functional with a ratio of about 1.8, we expect it to work just as well when the ratio is lower.

Unfortunately, our model at the latest stage computed the depth of the submarine and the size of the submarine only to within a factor of $10^{2}$ .

We need to test the model with various standard deviations in the random noise and with various means for the random noise as a function of grid location. We also need to do real-world experimentation.

# Strengths and Weaknesses of the Model

# Strengths

- Our computer program contains many constants that are easy to adjust. These include:

- Scale: We considered one unit equal to $25 \mathrm{~m}$ of real scale.
- Size of the field we observe: We chose a grid of $140 \times 140$ , but MATLAB is capable of handling a much larger matrix.
- Depth of ocean.
- Attenuation constant: We used $3 \times 10^{-2} \mathrm{~dB/km}$ as the default, since that is Apel's value for $30 \mathrm{kHz}$ sound [1987].
- Conversion factor from computer scale noise to real noise: We chose a value that was theoretically easy to work with $(2.095 \times 10^{4})$ , but it could be replaced with a real average amount of noise.
- Absorption/reflection factor: In our current model, the amount of noise that is actually transmitted rather than blocked is $98\%$ .
- Radius of the area affected: This is a constant relative to several factors, including what noise level is expected and what level of statistical accuracy we want.

- The graphical interface is a very intuitive way to organize data and to see the effects of each stage of the modeling.
- Our computer model of ambient noise considers dampening caused by a submarine; the model is faithful to the sound wave propagation and absorption patterns in the ocean.
+- The computer algorithm is fast; it ran in a few seconds on a SPARC 20 station. Hence, our model can be applied in real time or near real time. +- We considered many possible factors that could affect the noise field, including marine biology, surface activities, human-generated noise, geothermal activity, and seismic activity. + +- Our calculations are independent of scale, as long as the submarine is more than 1 grid square in length. +- We could incorporate other smoothing functions, such as polynomial smoothing or spline approximation on 20 points or so. + +# Weaknesses + +- Our model assumes a very large number of sensors (the square of the number of grid units on a side), which would mean a high cost of implementation. To get adequate data from fewer sensors would require more-sophisticated sensors and might pose more difficulty if a sensor failed. Having fewer sensors would also mean a more "bumpy" field of data points. +- We could not test our model on real data. +- Our assumptions may make our approximations inaccurate. Such assumptions include the regular shape of the submarine, uniform depth of the ocean within our area of interest, and sound reflection/refraction patterns. +- Our integral approximations may be inaccurate, which would cause our computer-generated noise field to be inaccurate. +- Our model could not calculate the size of the submarine or its depth very accurately. We believe, however, that we have the relative scale on these numbers right, and it is a matter of finding the right corrective factor. +- Our ambient noise field model is just for a single submarine, though incorporating more than one submarine should be a relatively easy project. + +# Appendix: Derivation of Integrals + +To approximate the sound blocked by the submarine, we use two techniques: + +- We draw a line $l_{1}$ from the observation point, through the center of the submarine, to the bottom of the ocean (see Figure 7). 
We attempt to integrate along that line, on a portion of a spherical surface. That is, let the distance from the point of observation to the point on $l_{1}$ be $r$ , consider the sphere of radius $r$ , and consider the portion of the sphere that would project onto the submarine. We take the limits of integration on $r$ to be the center of the submarine and the point where $l_{1}$ hits the ocean floor.
- It is very hard to approximate the portion of the sphere that projects onto the submarine. When $r$ is at the center of the submarine, we can approximate the area by projecting the ellipse of the submarine onto the plane perpendicular to $l_{1}$ . This reduces the length in some direction by a factor of $d / R$ , where $d$ is the depth of the submarine and $R = \sqrt{x^2 + y^2 + d^2}$ is the distance from the observation point to the center of the submarine. On the other hand, in the direction perpendicular to that direction, the length is not reduced, and the area is reduced by a factor of $d / R$ .

![](images/6c35836c14f088237a4d184afd83906b593c8f2bb74bf168eab22b2f09b1b5ed.jpg)
Figure 7. Approximation of the noise dampened by the submarine. The observation point is at the origin.

The area of the ellipse is $ab\pi$ . We approximate the section of the sphere by a projection of this ellipse, which gives the area of $ab\pi (r / R)^2$ , since the dimension in each direction changes by the factor of $r / R$ as we move along $l_{1}$ . Thus, we have the integral

$$
\int_ {R} ^ {R (D / d)} \frac {a b \pi \left(\frac {r}{R}\right) ^ {2} \left(\frac {d}{R}\right) e ^ {- \alpha (r - r _ {0})} d r}{r ^ {2}} = \int_ {R} ^ {R (D / d)} \frac {a b d \pi}{R ^ {3}} e ^ {- \alpha (r - r _ {0})} d r.
$$

To calculate the total ambient noise at a point in this scale, we set up a similar calculation. We ignore the noise that might be coming from above the sensor.
By doing so, we have

$$
\int_ {0} ^ {D} 2 \pi r ^ {2} (1 / r ^ {2}) e ^ {- \alpha (r - r _ {0})} d r + \int_ {D} ^ {\infty} (A (r) / r ^ {2}) e ^ {- \alpha (r - r _ {0})} d r,
$$

where $A(r)$ is the appropriate area function. We approximate $A(r) = 2\pi rD$ , which results in

$$
\int_ {0} ^ {D} 2 \pi e ^ {- \alpha (r - r _ {0})} d r + \int_ {D} ^ {\infty} 2 \pi r D (1 / r ^ {2}) e ^ {- \alpha (r - r _ {0})} d r.
$$

We again cannot integrate this in closed form, so we approximate $1 / r$ by $1 / D$ , its upper bound on this interval (also, we lost some noise in the earlier approximation, so this will make up for the loss). This results in

$$
\begin{array}{l} \int_ {0} ^ {D} 2 \pi e ^ {- \alpha (r - r _ {0})} d r + \int_ {D} ^ {\infty} 2 \pi e ^ {- \alpha (r - r _ {0})} d r = \int_ {0} ^ {\infty} 2 \pi e ^ {- \alpha (r - r _ {0})} d r \\ = \frac {2 \pi}{- \alpha} \left[ e ^ {- \alpha (r - r _ {0})} \right] _ {0} ^ {\infty} \\ = 2 \pi e ^ {\alpha} / \alpha . \\ \end{array}
$$

(The last step uses $r_0 = 1$ , as before.) This integral treats the ocean as infinitely deep, which may make sense due to the constant reflections of sound off the bottom and surface.

# References

Apel, John R. 1987. *Principles of Ocean Physics*. New York: Academic Press.
Centro Interdisciplinaro di Bioacoustica-Universita degli Studi di Pavia. 1996. http://www.unipv.it/~webcib/cib.html#surf
Dera, Jerzy. 1992. *Marine Physics*. Warsaw: Polish Scientific Publishers.
Friedman, Norman. 1984. *Submarine Design and Development*. Annapolis, Maryland: Naval Institute Press.
Hassab, Joseph C. 1989. *Underwater Signal and Data Processing*. Boca Raton, FL: CRC Press.
Keller, Joseph B. 1977. Survey of Wave Propagation and Underwater Acoustics. In *Wave Propagation and Underwater Acoustics*. Berlin: Springer-Verlag.
Munk, Walter H., Peter Worcester, and Carl Wunsch. 1995. *Ocean Acoustic Tomography*. New York: Cambridge University Press.
Pain, H.J. 1983. *The Physics of Vibrations and Waves*. 3rd ed.
Chichester, Great Britain: John Wiley & Sons. +Tolstoy, Ivan, and Clarence Clay. 1966. *Ocean Acoustics: Theory and Experiment in Underwater Sound*. New York: McGraw-Hill. +U.S. Department of Commerce. 1996. Marine mammal acoustics. http://www.pmel.noaa.gov/vents/whales/whales.html + +# Detection of a Silent Submarine from Ambient Noise Field Fluctuations + +Andrew R. Frey + +Joseph R. Gagnon + +J. Hunter Tart + +Wake Forest University + +Winston-Salem, NC 27109 + +Advisor: Stephen B. Robinson + +# Summary + +We developed a method for detecting intrinsically silent submarines in the ocean by measuring only the fluctuations in the ambient noise field. This method allows us to calculate the position, velocity, and approximate size of a submarine. + +Our model relies on measuring the noise field at four different listening stations, with each station composed of four microphones a relatively small distance apart. We calculate an amplitude spectrum of the noise at each microphone using a Fourier analysis and compare this spectrum to the previously measured baseline spectrum for ambient noise. The difference between these spectra is the noise reflected from the submarine. + +We use the four microphones at a particular station to measure the gradient of the peak amplitude from the submarine noise spectrum. Because amplitude varies inversely with distance from the submarine, we can compute the submarine's location from the amplitude and the gradient at each listening station. The approximate size, in terms of the radius of a similarly sized sphere, follows from the distance and peak amplitude. + +Our comparison of the frequencies of the peak amplitudes of the submarine and ambient noise spectra provides a measure of the Doppler shift at each listening station caused by the submarine's motion. The Doppler shift gives us a component of the submarine's velocity in the direction of each station. 
We select a basis from among unit vectors in these four directions and convert the submarine's velocity into standard Cartesian components. + +We wrote a Fortran program to implement our algorithm. Our simulations show that we can determine position with better than $8\%$ accuracy in each dimension. Size calculations suggest a systematic error of roughly $20 - 30\%$ . Error in the velocity computations varied for each component with changes in submarine position and in the dominant frequency of the ambient noise, but was within about $30\%$ for a single frequency of $1,000\mathrm{Hz}$ . + +The model could be modified to remove some of our assumptions, such as the absence of currents. Our model uses a minimum number of listening stations, but a larger number would significantly improve the results. + +# Assumptions + +- The speed of sound in the ocean is constant. Though the speed of sound depends on temperature, the range of detection devices is small enough to render speed of sound fluctuations negligible. +- Ambient noise has the same frequency and amplitude everywhere, so reflections from the surface and bottom of the ocean need not be accounted for [Horton 1959]. +- All submarines are approximately spherical, are made of steel, and reflect a fraction $k$ of the sound energy incident upon them. Although the surface of the submarine, as a two-dimensional surface curved in three dimensions, is not a simple harmonic oscillator (SHO), an SHO is a reasonable analogy. In this case, the SHO is both forced (by ambient noise) and damped (by the water and by the flexibility of the metal). The steady-state solution for such a forced and damped SHO is vibration at the forcing frequency. The high damping coefficient likely with a submarine indicates that the response of the submarine should be independent of frequency (and Krasil'nikov [1963] lists a single reflectivity for all frequencies). 
Certainly, at distances large compared to the dimensions of the submarine, a sound wave reflected off it will approximate a spherical wave.
- Sound waves reflected from a submarine which then reflect off the surface or bottom of the ocean have negligible intensity. This is to say that non-ambient noises detected by our microphones can be considered to have reflected directly from a submarine, and not secondarily from the surface or bottom of the ocean.
- The ocean is of homogeneous consistency, so there are no large animals or objects, aside from the submarine, which significantly influence the transmission of sound waves. Furthermore, we assume that only one submarine is present at any given time.
- The submarine, in addition to creating no intrinsic noise, does not by its movement generate any turbulence that affects noise transmission.
- A typical submarine cannot move faster than $20 \mathrm{~m} / \mathrm{s}$ .
- There is no appreciable current, and the detection stations are at rest with respect to the water.
- We begin by assuming that there is only one frequency (and corresponding amplitude) of ambient noise, and then we generalize to multiple frequencies.

# Description of the Model

Our detection scheme consists of four clusters of microphones in a pyramidal orientation. We require four non-coplanar clusters in order to determine with certainty the submarine's direction of travel in case the submarine is located in the plane of three of the clusters. Four clusters is a minimum; see Analysis of Error and Sensitivity for advantages and disadvantages of having only four clusters. We define a Cartesian coordinate system with the $xy$ -plane parallel to the surface of the ocean and the positive $z$ -axis pointing up. We place one microphone cluster at the origin and one at each of the points $(d,0,0)$ , $(0,d,0)$ , and $(0,0,d)$ . (See Figure 1.)

![](images/5ed5e35c3ff46bf94c7c86a364133a038b9ff4303971f3e8c6e6eb5bca101c25.jpg)
Figure 1.
Array of microphone clusters.

Because submarines typically do not descend below $1,500\mathrm{m}$ , we place the origin of our system $1,000\mathrm{m}$ below the surface of the water, so that one microphone cluster is $(1,000 - d)\mathrm{m}$ below the surface and the other three clusters are $1,000\mathrm{m}$ down. We chose to let $d = 500\mathrm{m}$ so that the detection clusters are well-spaced throughout the potential depth-range of a submarine. We envision the detection clusters as either buoyant anchored rigs or as pods at the end of lines dropped from the surface (e.g., suspended from ships). However, the analysis is not affected by the location of the origin or the orientation of the coordinate axes, as long as the entire array is sufficiently submerged.

Each of the four clusters (listening stations) in turn consists of four microphones, one at the precise location we gave for the cluster and the other three a small distance $\delta$ away, one in each of the coordinate directions. (See Figure 2.)

![](images/d9fb5ed4bde62d4b45d89f41be7c5d348f34aeec319e691da40fb901a24bddbb.jpg)
Figure 2. Microphone arrangement within each detection station.

We first measure the ambient noise waveform when no submarines are present, so that we can determine the ambient noise's frequencies and associated amplitudes (at first, only one frequency was present). We then measure the sound at each microphone location for a short period of time, perform a Fourier analysis on the resulting wave pattern to determine the frequencies present and their respective amplitudes, and use these data to figure out the location, speed, direction of travel, and size of any submarines.

# Data Required for Calculations

We seek the position of a submarine relative to our coordinate system, its velocity vector, and its size. Because we assume that a submarine can be treated as a sphere, its size is just its radius $R$ , and its position can be described by its center.
Hence, a complete solution to our detection problem consists of radius $R$ , position coordinates $(x,y,z)$ , and velocity vector $\vec{v} = (v_x,v_y,v_z)$ .

The data available to calculate these quantities consist of the frequencies and their respective amplitudes received by our array of microphones. We list the constants and variables that are required for our calculations:

$f =$ the frequency of the ambient noise. We chose $1,000\mathrm{Hz}$ , a frequency well within the range of real oceanic ambient noise, for our single-frequency simulations.

$I_0 =$ the intensity of the ambient noise. A reasonable intensity at the frequency $1,000\mathrm{Hz}$ is $5.4457\times 10^{-10}\mathrm{Pa}^2$ [Munk et al. 1995, 179].

$A_0 =$ the amplitude of the ambient noise. Amplitude squared is proportional to intensity. Because the proportionality constant has already been taken into account in calculating $I_0$ , we get $A_0 = \sqrt{I_0} = 2.3336 \times 10^{-5}$ Pa.

$k =$ the fraction of the sound energy (or intensity) reflected by the surface of the submarine. We let $k = 0.86$ [Krasil'nikov 1963, 172]. Since amplitude is proportional to the square root of intensity, the amplitude of sound waves immediately after reflecting from the submarine surface will be $\sqrt{k} A_0 = 0.9274A_0$ .

$A(s) =$ the amplitude of sound waves reflected from the submarine's surface at distance $s$ from the center of the (spherical) submarine. Note that

$$
A (s) = \frac {\sqrt {k} A _ {0} R}{s}.
$$

This formula is a well-known consequence of conservation of energy.

# Step One: Detecting the Submarine

The raw data that we receive through each of the microphones consist of a waveform recorded over a short time interval (see Figure 3 for a sample plot of multiple frequency noise).

![](images/44d8beb7c6122301b8a016be6272173bce0f5e1e4838b36dbc5b3ffde324564f.jpg)
Figure 3. Ambient noise with five frequencies, showing pressure as a function of time.
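
The constants above are enough for a quick numerical sanity check of the $1/s$ amplitude law. This Python sketch uses the listed values of $A_0$ and $k$ ; the 5-m radius is an assumed, purely illustrative value:

```python
# Quick numerical check of the amplitude law A(s) = sqrt(k) * A0 * R / s,
# using the constants listed in the text; the 5 m radius is an assumed,
# purely illustrative value.
from math import sqrt

A0 = 2.3336e-5   # Pa, ambient amplitude (the square root of I0)
k = 0.86         # fraction of incident sound energy reflected
R = 5.0          # m, assumed radius of the spherical submarine

def A(s):
    """Reflected amplitude at distance s (in m) from the submarine's center."""
    return sqrt(k) * A0 * R / s

# Inverse-distance law: doubling the distance halves the amplitude.
assert abs(A(200.0) / A(400.0) - 2.0) < 1e-12

# Conservation of energy: intensity A^2 times spherical area 4*pi*s^2
# is independent of s, which is why A must fall off as 1/s.
assert abs((A(200.0) ** 2 * 200.0 ** 2) / (A(400.0) ** 2 * 400.0 ** 2) - 1.0) < 1e-12
```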
+ +To convert these data into useful frequency and amplitude figures, we use a fast Fourier transform, which isolates the particular frequencies in a given signal. The Fourier transforms that we used were the sine and cosine transforms provided with Press et al. [1986]. Once this computation is completed, we have amplitude values $A_{i,j}$ associated with the frequency $f$ (with each frequency if there is more than one) of recorded noise for each microphone, where $i$ is the station number and $j$ is the microphone number within that station. + +Our algorithm subtracts the ambient noise amplitude spectrum (determined as discussed previously) from the new amplitude spectrum recorded at each microphone. If all the differences are equal to zero, the current noise in the ocean is only the homogeneous ambient noise, so there must not be a submarine within a detectable distance. The microphones then collect a new set of data, and we begin the process again. + +If any differences are nonzero, we must determine whether they are caused by a change in the ambient noise or by the presence of a submarine. If the differences in the spectra are due to a change in the ambient noise, all of the microphones should have recorded the same data (by the definition of ambient noise). However, any changes caused by a submarine should vary from station to station and from microphone to microphone because of varying positions of the microphones relative to the submarine. Therefore, we compare the difference spectrum (the amplitude spectrum from the microphone minus the ambient noise spectrum) of the first microphone of each station to the difference spectra of the first microphones of the other stations. We could compare all the difference spectra for all the microphones, but since the greatest variation is from station to station, we need to compare only among stations. 
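
A minimal sketch of this detection step, with NumPy's FFT standing in for the authors' Fortran sine and cosine transforms; every numeric value below (sampling rate, reflected amplitudes, thresholds) is invented for illustration:

```python
# Sketch of Step One: Fourier-transform each microphone's record, subtract
# the baseline ambient amplitude spectrum, and flag a submarine when the
# difference spectra vary from station to station. NumPy's rfft stands in
# for the authors' Fortran sine/cosine transforms; all values are invented.
import numpy as np

rate = 8192                      # samples per second (the text used > 8,000)
t = np.arange(rate) / rate       # one second of samples
f, A0 = 1000.0, 2.3336e-5        # ambient frequency (Hz) and amplitude (Pa)

ambient = A0 * np.sin(2 * np.pi * f * t)
baseline = np.abs(np.fft.rfft(ambient)) * 2 / rate   # amplitude spectrum

# The reflected wave is weaker at more distant stations, so the difference
# spectra differ from station to station when a submarine is present.
station_records = [ambient + refl * np.sin(2 * np.pi * f * t)
                   for refl in (4e-7, 3e-7, 2e-7, 1e-7)]

diffs = [np.abs(np.fft.rfft(rec)) * 2 / rate - baseline
         for rec in station_records]
peaks = [d.max() for d in diffs]   # peak of each difference spectrum

# Decision rule: nonzero differences that are NOT identical across stations.
submarine = bool(max(peaks) > 1e-12 and max(peaks) - min(peaks) > 1e-12)
print(submarine)  # prints True for this synthetic record
```

If the same spurious amplitude were added to all four records, the peaks would agree and the rule would classify the change as a shift in the ambient noise instead.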
+ +If there is no variation among the difference spectra among the stations, the algorithm must take the change in the ambient noise into account. It replaces the ambient noise spectrum by the new ambient noise spectrum. + +If there is a difference from station to station, our algorithm has detected a submarine! In this case, the difference spectra give us the amplitude values $A_{i,j}$ of the sound reflected by the submarine for each microphone. Now the algorithm finds the frequency with the greatest amplitude for each microphone. Because all frequencies of noise reflect off the submarine with the same proportionality constant $\sqrt{k}$ , this frequency and the corresponding peak amplitude must be the reflection of the frequency with the peak amplitude in the ambient noise. We consider only these peak amplitudes and their corresponding frequencies throughout the remainder of the algorithm, whether the ambient noise is composed of one frequency or many (see Figure 4). + +# Step Two: The Submarine's Position + +Our strategy now is to compute $\vec{\nabla} A_{i}$ for each of the four stations by approximating the derivatives of $A_{i,1} = A_{i}(x,y,z)$ , where $(x,y,z)$ are the coordinates of microphone $(i,1)$ , in each of the $x$ , $y$ , and $z$ directions. This explains our rationale for having four microphones at each of the four stations, since we can compute + +$$ +\frac {A _ {i} (x + \delta , y , z) - A _ {i} (x , y , z)}{\delta} \approx \frac {\partial A _ {i} (x , y , z)}{\partial x}, +$$ + +as well as the derivatives in the $y$ and $z$ directions, to find $\vec{\nabla} A_{i}$ . + +We note that $|\vec{\nabla} A_i|$ is the absolute value of the derivative of $A_i$ with respect to $s$ , the distance from the center of the submarine. Since we now have values + +![](images/f1db38e34f04a82059d23f030c54f43c0376e6704eacedc82c562f2f2f9e08f8.jpg) +Figure 4. Amplitude spectrum of ambient noise, showing amplitude as a function of twice the frequency (in Hz). 
+ +for $A_{i}$ and for $|\vec{\nabla} A_i|$ , and since + +$$ +A _ {i} (s) = \frac {\sqrt {k} A _ {0} R}{s}, \quad | \vec {\nabla} A _ {i} | = \frac {\sqrt {k} A _ {0} R}{s ^ {2}}, \tag {1} +$$ + +we see that $A_{i} / |\vec{\nabla} A_{i}| = s$ . At this point, because we know the vector $\vec{\nabla} A_{i}$ that points to the submarine and the distance $s$ to its center, we know the position of the submarine relative to detection station $i$ . In fact, the coordinates of the submarine's center are given by + +$$ +(a, b, c) = s \frac {\vec {\nabla} A _ {i}}{| \vec {\nabla} A _ {i} |} + (x, y, z). +$$ + +While this calculation for one of the $i$ stations is sufficient to get coordinates for the submarine's position, our four stations allow us to compute these coordinates four different ways. Since there will be some random error in each computation, averaging the four different points provides a better approximation of the submarine's position. + +# Step Three: The Submarine's Size + +With values for the amplitude $A_{i}$ at station $i$ and the distance $s$ of station $i$ from the submarine, computing the radius of the submarine is easy. From (1), we have + +$$ +R = \frac {A _ {i} s}{\sqrt {k} A _ {0}}. \tag {2} +$$ + +We obtain a better estimate by averaging the four values of $R$ . + +# Step Four: The Submarine's Velocity + +To calculate the velocity, we use the frequency $f$ of sound reflected from the submarine's surface. Since we know the frequency of the ambient noise and thus the frequency shift between ambient and reflected sound, we can solve the equation describing the Doppler effect for the speed of the submarine in the direction of a particular detection station. The general Doppler effect for sound [Krane et al. 
1992] is

$$
f _ {o} = f _ {s} \left(\frac {c - v _ {o}}{c + v _ {s}}\right),
$$

where $f_{o}$ is the frequency received by the observer, $f_{s}$ is the frequency of the source (the ambient frequency), $v_{o}$ and $v_{s}$ are the components of the velocities of the observer and the source along the line between them, and $c$ is the speed of sound, in this case in water. Because our detection stations are stationary, we have $v_{o} = 0$ . Also, we let $v_{s}$ be positive if the submarine is moving away from the station. Solving for $v_{s}$ , we get

$$
v _ {s} = \left(\frac {f _ {s}}{f _ {o}} - 1\right) c.
$$

Once this $v_{s}$ has been calculated for a particular listening station (let's name it $v_{i}$ ), we can express this component of the submarine's velocity in terms of a vector. Since $\hat{u}_i = \vec{\nabla} A_i / |\vec{\nabla} A_i|$ is just the unit vector pointing from detection station $i$ to the center of the submarine, the vector $\vec{v}_i = v_i\hat{u}_i$ is the component of the submarine's velocity in the direction of station $i$ .

Note that we need velocity components in only three linearly independent directions to compute the velocity vector. However, we have four potential basis vectors, the four $\hat{u}_i$ . To determine which set of three vectors is the most useful basis for our analysis, we consider the four matrices formed by taking combinations of the $\hat{u}_i$ as column vectors. First, we know that any coplanar set of three vectors will not form a basis at all, so the matrix composed of them will be singular. By perturbation, any set of three vectors that are almost coplanar will form an almost singular matrix; and, in the real world of measurement and computation errors, such a basis would not be useful. Therefore, we choose the set of vectors $\{\hat{u}_i,\hat{u}_j,\hat{u}_k\}$ whose matrix has the largest determinant in absolute value as the basis likely to be most useful to our algorithm.
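
This basis selection, together with the change of basis that follows, can be sketched as follows; the unit vectors and Doppler components are invented test values, and NumPy stands in for the authors' Fortran:

```python
# Sketch of the basis selection: among the four unit vectors u_i pointing
# from the stations to the submarine, pick the triple whose column matrix
# has the largest determinant in absolute value, then apply the paper's
# change-of-basis product to recover Cartesian velocity components.
# The vectors and Doppler speeds below are invented test values.
from itertools import combinations
import numpy as np

u = [np.array([1.0, 0.0, 0.0]),
     np.array([0.0, 1.0, 0.0]),
     np.array([0.0, 0.0, 1.0]),
     np.array([0.577, 0.577, 0.578])]   # roughly the (1,1,1) direction
v = [5.0, -3.0, 1.0, 2.0]               # Doppler speed toward each station

# Pick the triple of indices whose column matrix is farthest from singular.
best = max(combinations(range(4), 3),
           key=lambda idx: abs(np.linalg.det(
               np.column_stack([u[i] for i in idx]))))
M = np.column_stack([u[i] for i in best])

# The paper's change-of-basis product: [u_i u_j u_k] (v_i, v_j, v_k)^T.
velocity = M @ np.array([v[i] for i in best])
print(best, velocity)
```

With these test vectors, the three coordinate-axis directions win (their matrix has $|\det| = 1$ ), and any triple including the nearly diagonal fourth vector is passed over.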
+ +Now, the vector $(v_{i}, v_{j}, v_{k})$ is a coordinate vector in terms of our chosen basis. We want to change our basis to the standard $\{\vec{x}, \vec{y}, \vec{z}\}$ basis to determine the velocity vector of the submarine with respect to our coordinate system. This change of basis can be performed by a simple matrix multiplication, + +$$ +\left[ \begin{array}{c c c} \hat {u} _ {i} & \hat {u} _ {j} & \hat {u} _ {k} \end{array} \right] \left[ \begin{array}{l} v _ {i} \\ v _ {j} \\ v _ {k} \end{array} \right] = \left[ \begin{array}{l} v _ {x} \\ v _ {y} \\ v _ {z} \end{array} \right] = \vec {v}, \tag {3} +$$ + +resulting in the velocity vector $\vec{v}$ . + +# Extensions of the Model + +Although most of our simulations were carried out with only one frequency in the ambient noise, we find a broad range of frequencies in the ambient noise of the ocean. Fortunately, the algorithm can process multiple frequencies because it uses only the peak amplitudes and corresponding frequencies. This technique reduces the scenario to a single-frequency problem. + +The algorithm, as described above and encoded in the computer simulation, can adjust to fluctuations over time in the ambient noise field. However, if both the appearance of a submarine and a significant change in the ambient noise field coincide, the algorithm assumes that all of the difference spectra represent noise due to the presence of the submarine. This effect could cause some error. + +We also can track a single submarine over a period of time, because the algorithm, in the form of a computer program, runs quickly enough to provide frequent data on the position and velocity of the submarine. + +The presence of two or more submarines in the observation region presents more of a problem for the algorithm because it would not recognize the presence of the submarine with the smaller effect on the ambient noise field. 
However, the algorithm could be modified to detect the presence of a single submarine, calculate its effects on the ambient noise field, and compare the recorded data with those effects. + +Our model could be extended to eliminate the assumptions of no current and stationary listening stations. The general Doppler equation already provides for a moving observer, so moving listening stations would be relatively easy to handle. Also, a constant surrounding current could be integrated into the Doppler equation, though this computation would be a bit more complicated. + +# Simulation Results + +We wrote a Fortran program to simulate the implementation of our algorithm. The program uses only the parameter $k$ , the positions of all the microphones, a waveform representing the ambient noise, and a waveform for each of the microphones representing the noise field with the presence of a submarine. + +To run the simulation program, we needed to create sound data for the microphones to receive. We used another Fortran program to produce a discretized version of the soundwaves, with more than 8,000 samples per second. We first created a data set with only the ambient noise present, as a benchmark, and then added in the additional sound caused by a submarine of a particular size with specific position and velocity vectors. This data-generation scheme provided us with an easy check on the accuracy of our simulation program. + +At first, we created an ambient noise data set with one fixed frequency and amplitude. We then created data files for three different selections of submarine radius, position, and velocity. We ran our simulation program for each of these + +three data sets in the presence of a submarine. The results of these simulations are provided in Table 1. These results show that our simulation was relatively successful in picking out the position and the velocity of the submarine, though appreciable error was present. 
While our simulation apparently did not do a particularly good job of calculating the radius of the submarine, the percentage error for the three simulations was fairly consistent, suggesting that this error may be systematic and thus correctable. + +Since Fourier analysis can result in an apparent smearing of a particular frequency over a frequency range, the amplitude we calculate for this frequency may be consistently smaller than it should be. This seems like the sort of problem that could be corrected through more sophisticated, more powerful Fourier analysis, or at least through the inclusion of an amplitude correction factor. + +# Table 1. + +Computer program output of radius, position coordinates, and velocity coordinates for three different submarine data sets, with one ambient frequency. + +
| Simulation | | $R$ (m) | $x$ (m) | $y$ (m) | $z$ (m) | $v_x$ (m/s) | $v_y$ (m/s) | $v_z$ (m/s) |
|---|---|---|---|---|---|---|---|---|
| 1 | Input | 13.8 | -3,000 | -2,000 | 300 | 15 | 10 | 5 |
| | Output | 9.73 | -2,765 | -1,843 | 281 | 14.9 | 10.5 | 4.7 |
| 2 | Input | 5 | 200 | 1,000 | -500 | 2 | 10 | 2 |
| | Output | 4.02 | 190 | 953 | -464 | 1.3 | 9.5 | 1.6 |
| 3 | Input | 10 | 2,000 | 2,000 | -500 | 5 | 5 | 5 |
| | Output | 7.95 | 1,871 | 1,871 | -468 | 4.8 | 4.8 | 3.9 |
+ +While the results of our simulations are encouraging, we point out that our computer program is merely one realization of our general mathematical algorithm for finding a submarine's radius, position, and velocity. The results would be improved by providing more complete and accurate sound data (corresponding to better microphones) or using a more accurate and perhaps more appropriate fast Fourier transform algorithm. + +We also ran the same simulations using ambient noise with multiple frequencies. The results are in Table 2. + +# Analysis of Error and Sensitivity + +Table 2 shows the differences between actual and calculated position, radius, and velocity of a submarine. The error in the position coordinates most likely arises due to the dependence of our algorithm on measured amplitude figures and the amplitude derivative that we compute using them, which appear not to be accurate. At least part of this error may be an artifact of our need to use discrete data. Furthermore, our numerical calculation of $\vec{\nabla} A_{i}$ for station $i$ introduces more error, since it is based on a finite, and in fact quite large, + +Table 2. Radius, position coordinates, and velocity coordinates for three different simulated submarine data sets with ambient noise of five different frequencies. + +
| Simulation | | $R$ (m) | $x$ (m) | $y$ (m) | $z$ (m) | $v_x$ (m/s) | $v_y$ (m/s) | $v_z$ (m/s) |
|---|---|---|---|---|---|---|---|---|
| 1 | Input | 13.8 | -3,000 | -2,000 | 300 | 15 | 10 | 5 |
| | Output | 8.48 | -2,826 | -1,886 | 287 | 14.6 | 11.05 | 2.8 |
| 2 | Input | 5 | 200 | 1,000 | -500 | 2 | 10 | 2 |
| | Output | 4.19 | 192 | 939 | -454 | -0.3 | 6.1 | -4.2 |
| 3 | Input | 10 | 2,000 | 2,000 | -500 | 5 | 5 | 5 |
| | Output | 7.72 | 1,839 | 1,839 | -448 | 5.0 | 5.0 | -1.5 |

The five ambient noise components used:

| Frequencies (Hz) | 100 | 500 | 1,000 | 2,000 | 3,500 |
|---|---|---|---|---|---|
| Amplitudes (μPa) | 30 | 10 | 20 | 12 | 7 |
+ +distance $\delta$ between microphones. This error is difficult to eliminate, since increasing $\delta$ creates a worse approximation of a derivative, while decreasing $\delta$ to very small distances requires unreasonably sensitive microphones to perceive tiny differences in amplitude. Because of this practical consideration, we are forced to accept some error in our calculation of a submarine's position. + +We have a nice measure of position error, since our algorithm computes the position vector four times, each time using only data from the four microphones at one listening station. For the same reason, we also have a measure of the error in the radius measurements (neglecting the apparent systematic error). The relative errors (standard deviation divided by mean) listed in Table 3 are all quite small. + +Table 3. Relative error for radius and position calculations. + +
| Simulation | $R$ | $x$-coordinate | $y$-coordinate | $z$-coordinate |
|---|---|---|---|---|
| 1 | .043 | .034 | .014 | .023 |
| 2 | .078 | .009 | .009 | .023 |
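The relative-error measure above (sample standard deviation of the four per-station estimates divided by the mean) amounts to the following computation; the per-station estimates below are hypothetical, for illustration only:

```python
import statistics

def relative_error(estimates):
    """Relative error of repeated estimates: sample std. dev. over |mean|."""
    return statistics.stdev(estimates) / abs(statistics.fmean(estimates))

# Hypothetical x-coordinate estimates from the four listening stations.
x_estimates = [-2765.0, -2801.0, -2740.0, -2754.0]
rel_x = relative_error(x_estimates)
```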
Error in our radius calculation is largely inherited from the problem with amplitudes discussed above. From (2), we see that the radius is determined by the measured values of $A_{i}$ and $s_i$. Thus, if the $A_{i}$ calculated by Fourier analysis is too small, the calculated radius value will similarly be too small. We suspect that this fact is the cause of the fairly systematic error in our values of $R$ shown in Table 1.

Velocity error arises primarily because of two factors:

- error in our position calculation, since the velocity computation relies upon the submarine's location, and
- error in the observed frequency $f$ of noise reflected from the submarine as determined through the Fourier analysis.

The error in the observed frequency becomes particularly important when low frequencies predominate in the ambient noise. The measurement error is a roughly constant absolute error across all frequencies, so a small absolute error in a frequency measurement becomes a large relative error in the low-frequency range. Note the errors in the velocity components in Table 4, the results of a simulation in which a frequency of $100\mathrm{Hz}$ dominates the ambient noise.

We performed some additional simulations in which we varied the parameter $\delta$, the distance between microphones within stations, in order to determine the sensitivity of our calculations to this parameter. The relative errors between our calculated values and the actual values are shown in Table 4. From this table, it appears that our model is not particularly sensitive to fluctuations in $\delta$.

Table 4.
Relative error of calculations for two different simulations for different values of $\delta$, the distance separating microphones in each station lattice.

| $\delta$ (m) | Simulation | $R$ | $x$ | $y$ | $z$ | $\lvert\vec{v}\rvert$ |
|---|---|---|---|---|---|---|
| 5 | 1 | -.195 | -.041 | -.046 | -.068 | -.058 |
| | 2 | -.295 | -.078 | -.080 | -.054 | .007 |
| 10 | 1 | -.192 | -.049 | -.046 | -.070 | -.058 |
| | 2 | -.294 | -.078 | -.079 | -.061 | .007 |
| 15 | 1 | -.197 | -.057 | -.047 | -.072 | -.057 |
| | 2 | -.278 | -.078 | -.077 | -.068 | .007 |
| 20 | 1 | -.198 | -.067 | -.047 | -.075 | -.056 |
| | 2 | -.272 | -.077 | -.076 | -.075 | .007 |

Using the fast sine and cosine transforms limits our model; their use assumes that the data have an initial phase of zero, so we do not take into account the phase shift in the reflected noise due to travel time from the submarine to the stations.

Finally, we note that some of the error in our results arose because of the small number of microphones in our stations. We designed the model with some frugality, using only four stations because four is the minimum number needed to guarantee that we can pinpoint a submarine's velocity (three fail if the submarine is in the same plane as all three stations). Our calculations would have benefitted from some redundancy in our measurements, for instance, using eight stations arranged as the vertices of a cube. However, our minimal scheme is cheaper, requires less superstructure, and provides for simpler computer calculations than would a more redundant arrangement.
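The finite-difference step behind $\vec{\nabla} A_i$ can be sketched to show the weak $\delta$-dependence reported in Table 4. The inverse-distance amplitude field below is an illustrative assumption for this sketch, not the paper's relation (2):

```python
import numpy as np

SOURCE = np.array([-3000.0, -2000.0, 300.0])  # illustrative submarine position

def amplitude(r, power=1.0e9):
    """Illustrative amplitude field decaying as 1/distance from the source."""
    return power / np.linalg.norm(np.asarray(r, float) - SOURCE)

def grad_amplitude(r, delta):
    """Central-difference gradient from microphone pairs separated by delta."""
    r = np.asarray(r, float)
    g = np.zeros(3)
    for axis in range(3):
        step = np.zeros(3)
        step[axis] = delta / 2.0
        g[axis] = (amplitude(r + step) - amplitude(r - step)) / delta
    return g

def unit(v):
    return v / np.linalg.norm(v)

station = np.zeros(3)
# The estimated direction u_i = grad A / |grad A| barely moves as delta grows,
# consistent with the weak delta-sensitivity seen in Table 4.
directions = {d: unit(grad_amplitude(station, d)) for d in (5.0, 10.0, 15.0, 20.0)}
```

For a source several kilometers away, the direction estimate changes only in the fifth decimal place as $\delta$ grows from 5 m to 20 m; in practice the limit is the microphones' ability to resolve the tiny amplitude differences, not the truncation error of the difference quotient.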
+ +# Conclusions + +This model successfully detects a silent submarine using only distortions of the ambient noise field as data. It accomplishes this task with a small number of microphones (16) arranged in a lattice structure beneath the surface of the ocean, and it provides relatively accurate data for a range of submarine sizes, positions, and velocities. The model may lack realism in that it requires such assumptions as a homogeneous ocean, nearly spherical submarines, and isolated ambient noise frequencies. However, the first two of these assumptions have fairly solid physical bases in most normal circumstances, and our algorithm does provide a solid foundation that can be extended to take into account such complicating factors as a continuous frequency distribution. + +# Acknowledgments + +The authors would like to thank Dr. Edward Allen and Dr. Stephen Robinson for their assistance in preparation for the MCM. + +# References + +Horton, J.W. 1959. Fundamentals of Sonar. Annapolis, MD: United States Naval Institute. +Krane, K.S., D. Halliday, and R. Resnick. 1992. Physics. 5th ed. New York: John Wiley & Sons. +Koffman, Elliot B., and Frank L. Friedman. 1993. Fortran with Engineering Applications. 5th ed. New York: Addison-Wesley. +Krasil'nikov, V.A. 1963. Sound and Ultrasound Waves in Air, Water and Solid Bodies. 3rd ed. Jerusalem: Israel Program for Scientific Translations. +Munk, Walter H., Peter Worcester and Carl Wunsch. 1995. *Ocean Acoustic Tomography*. New York: Cambridge University Press. +Press, W.H., B.P. Flannery, S.A. Teukolsky, and W.T. Vetterling. 1986. Numerical Recipes: The Art of Scientific Computing. New York: Cambridge University Press. + +# Imaging Underwater Objects with Ambient Noise + +Aron C. Atkins + +Henry A. Fink + +Jeffrey D. Spaleta + +{ atkins, haf, spaletaj } @wpi.edu + +Worcester Polytechnic Institute + +Worcester, MA 01609 + +Advisor: Arthur C. 
Heinricher + +# Introduction + +We present a method of locating and tracking undersea objects of various sizes. We list assumptions that help define our approach, together with requirements that must be met for an object to be detectable. We construct a detection scheme for two dimensions, which we generalize to three dimensions. + +In constructing our models, we make several design decisions which include time limitations, range, and space and resource requirements. Many of these choices are desirable for confidently detecting objects. + +# Assumptions + +We make several assumptions regarding the type of environment to which we will be listening. + +- We consider an extreme case in which our object must be a near-perfect acoustic reflector; such an object resembles a submarine, which tends to absorb very little sound. (If submarines absorbed a predominant amount of sound, sonar systems would be ineffective in tracking their movement.) + +By requiring our target object to be a near-perfect acoustic reflector, an individual scanning frequency can be chosen from a range of transmittable frequencies in ocean water. This range is given as between 5 and $50\mathrm{kHz}$ [Buckingham et al. 1992]. We can limit our monitoring process to a single frequency because a near-perfect acoustic reflector will give the same characteristic response at all frequencies [Pedrotti and Pedrotti 1993]. Having information on more than one frequency would produce redundant information. It is better to pick a single frequency that is strongly represented in the ambient noise field than to monitor several different frequencies. + +- We require directionality in the ambient field. Without an ambient noise bias in some direction, there will not be a detectable difference between reflected and background noise. Experimental research shows there is a directional bias in the ambient noise field near the ocean's surface and floor [Stephens 1970, 124–125; Urick 1983, 227]. 
For acoustic absorbers, ambient field directionality would be unnecessary; absorbers would always be detected as “holes” in the ambient noise field intensity regardless of field bias.
- We also require that the ambient noise field remain relatively stable while we are searching. Frequent changes in the field over short time intervals make it nearly impossible to find an object.
- We require that at least one of the following two conditions holds to confidently detect the presence of objects in our scan region:

- The object being scanned is in motion relative to our scan location.
- We perform a background scan prior to the target object entering our scan region, one that remains reliable for the ocean conditions during the search.

- We assume a minimum target size of approximately $10\mathrm{m}$, the average width of submarines in the U.S. Navy. Submarines of other nations are of comparable size.
- Our sensing equipment consists of multiple parabolic reflectors, commonly used for focusing weak signals to one focal point, where we place a hydrophone, a device designed to measure acoustic intensity under water [Geil 1992]. This type of sensor has been used previously in related experiments [Buckingham et al. 1992] with promising results. Additionally, we assume that our equipment can detect angle differences to $0.1^{\circ}$.

We would like a system capable of detecting objects as far away as possible for relatively small sensor size. The relation between resolving distance and sensor diameter is given in Appendix A.

# Object Detection

# Prior Experimental Results

Our procedure relies on the results of an experiment performed by Buckingham et al. [1992] in which neoprene-coated boards submerged near a pier were detected using the ambient noise in the water. In their experiment, three targets each $0.9\mathrm{m}$ high and $0.77\mathrm{m}$ wide were placed $7\mathrm{m}$ from a reflector.
The detector was a hydrophone located at the focal point of a parabolic reflector of diameter $1.22\mathrm{m}$ with a neoprene rubber surface. The back of the hydrophone was shielded to prevent detecting noise from that side. The boards first were + +turned edge-on to the reflector and the noise level was recorded over a frequency range of $5 - 50\mathrm{kHz}$ . When the boards were rotated so that they were face-on to the parabolic dish, the noise spectrum was recorded again. By subtracting the two spectra, the authors found an average intensity difference of $4\mathrm{dB}$ . They reasoned that this difference was due to a directionality in the noise emanating from the nearby pier. + +Our procedure for detecting objects in the ocean using the ambient noise field extends this experiment. We first consider locating and tracking an object in the ocean in two dimensions, using two of these reflectors. + +# Object Detection in Two Dimensions + +We can uniquely determine any point in the plane if we know its distance from two known points. These known points can be our parabolic reflectors. (We assume that the object moving through the ocean is never collinear with our two reflectors; if there is reason to believe this may happen, we can add a noncollinear third reflector.) + +To simplify visualizing the problem, we consider the special case when both of the parabolic reflectors are located at a fixed depth near the shore line; this does not make the problem any less interesting and is likely to be the case in many practical implementations (see Figure 1). The process of detecting a distant object begins with both reflectors sweeping through $180^{\circ}$ while monitoring a fixed frequency. If the background noise were constant for all angles, then a plot of noise vs. angle would yield a horizontal line. However, due to directional biases in the ambient noise field, it is likely that this noise will be a function of the angle at which the reflector is directed. 
For this reason, it would be useful to have a background scan (a scan with no object present) for comparison with later scans. + +![](images/ea9f01c126d9a76116c25cce8127ec62d601e9f34e407e3beb6033ead5e73313.jpg) +Figure 1. Point location in two dimensions. + +Assuming that a background scan has been made, it is a simple matter for both reflectors to scan through angles and record the ambient noise level when searching for an object. By subtracting away the two plots of noise level in dB vs. the angle each reflector has swept from its initial position, a characteristic disturbance pattern should appear over a small angle range, indicating the presence of an object. The angle at the center of this disturbance represents the approximate viewing angle to the center of the object. Knowing this angle measurement from each reflector and the distance between the reflectors, it is a simple matter to triangulate the position of the object. A method to perform this triangulation is described in Appendix C. + +The disturbance pattern in the intensity plot gives more information than just the viewing angle at which the center of the object is located. Since the angle is being swept out, the broader the disturbance pattern, the larger the object. By measuring the angular width of the disturbance region, we can calculate an approximate size of the object. Once the distance to the object has been found, the triangulation formulas can be applied using the extreme angular values of the signal disturbance to find the size of the object. The size of the object is approximated from the difference between the two extreme locations calculated from triangulation. + +Once the reflectors have found the object initially, they can track it by scanning through a small interval centered around the initial position. This interval can come from an estimate of the velocity of the object. 
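The background-subtraction and triangulation steps above can be sketched as follows. The synthetic scans, the 1 dB detection threshold, and all geometry are illustrative assumptions, not data from the experiment:

```python
import numpy as np

def disturbance(scan, background, angles, threshold=1.0):
    """Center angle and angular width of the region where the scan exceeds
    the background by more than threshold (dB)."""
    hit = angles[(scan - background) > threshold]
    return hit.mean(), hit.max() - hit.min()

def triangulate(alpha, beta, baseline):
    """Object position from viewing angles (degrees, measured from the
    baseline) at reflectors placed at (0, 0) and (baseline, 0)."""
    ta, tb = np.tan(np.radians(alpha)), np.tan(np.radians(beta))
    x = baseline * tb / (ta + tb)  # intersection of y = x tan(a) and y = (B - x) tan(b)
    return x, x * ta

# Synthetic scans: flat 60 dB background, 4 dB disturbance around 45 degrees.
angles = np.linspace(0.0, 180.0, 1801)
background = np.full_like(angles, 60.0)
scan = background + 4.0 * (np.abs(angles - 45.0) < 2.0)
center, width = disturbance(scan, background, angles)

# Both reflectors see the object at 45 degrees, 1,000 m apart:
# it sits 500 m along and 500 m off the baseline.
x, y = triangulate(45.0, 45.0, 1000.0)
```

The angular width returned by `disturbance` is what the size estimate uses: applying `triangulate` at the two extreme angles of the disturbance and differencing the resulting positions approximates the object's extent.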
One possibility that can occur while scanning is that the object could be out of range of one of the two reflectors. The position of the object can still be approximated, since the ranges of both of the reflectors are known. The object would lie in a crescent region inside the circle of detection of one reflector and outside the circle of detection of the second. This gives a rough location of the object that may be good enough in practical applications. A potential intersection of two sensor regions is shown in Figure 2.

![](images/64e9ee5f56784b7defd228842fd0beec6dc054f7ad3f542209fd0485bf44eace.jpg)
Figure 2. Intersection of two scan regions.

# Detection without a Background Scan

If a background scan is not available, or if the background noise has changed sufficiently (due to a storm or other disturbances in the ocean), then an object can still be detected and tracked. Again, for simplicity, we consider the case where the reflectors lie along a shore. The process begins as before, with each reflector sweeping through $180^{\circ}$ and recording the ambient noise level. This scan will be a combination of the background noise with the influence of an object, if present. A deviation may appear in this scan, but we cannot rule out that it is merely an artifact of the normal ambient noise at that angle. To determine whether there is a true disturbance, a second full scan is needed a short time later.

Once a second scan has been made, the two can be subtracted as before to yield the difference in noise level at each angle. Assuming that the target object has moved during the two scan periods, there will be a shift in the disturbance on the second noise plot. The subtraction will yield a disturbance pattern; the midpoint of the subtracted disturbance region approximates the angle at which the object was viewed halfway through the time interval.
If the two scans are taken close enough together to narrow down the time, yet far enough apart to detect the shift in the object's position, then the position of the object can be determined with nearly the same accuracy as when a background scan is used. + +The drawback to this approach is that it requires each reflector, at least initially, to scan through a full $180^{\circ}$ ( $360^{\circ}$ when not near the shore) at least twice to detect the object. Once the object is found, two passes over a much smaller angular range can be used to track it as before. A second shortcoming is that the method can detect only moving objects. To detect a stationary object, a background scan is required. + +# Object Detection in Three Dimensions + +Three-dimensional object detection requires substantially more time than the two-dimensional case (see Figure 3). Each reflector has to sweep out the entire volume of space, a formidable task. The position of an object can be determined from the angles to it from two known points, just as in the two-dimensional case. The direction of each reflector is described by two angles, using their spherical coordinate representation, or alternatively by vectors. The case of both reflectors and the object being collinear can be resolved by using a third noncollinear reflector. + +# Ideal Detection of Objects + +To cover a large region effectively, we use an array of reflectors positioned so that each point lies within the range of at least two reflectors. An ideal object-location scheme would use not two but three (noncollinear) parabolic reflectors to detect objects in the scan region. An object entering the region would be detected immediately by at least one reflector. Then the remaining reflectors would scan along the line of sight of the first reflector until they detect the object. Next, the three-dimensional point-location scheme discussed + +![](images/21eef9c8a2fae49e14a999cd1a8147feb9cd1d04a0029425d313c11a44306f84.jpg) +Figure 3. 
Point location in three dimensions. + +in Appendix D determines the object's position. With a second scan a short time later, the two positions determine the heading and velocity. + +# Detection with Reflector Arrays + +A major problem with our scheme is that it does not produce a complete sweep of the full area in a reasonable time. We can compensate for this by adding additional reflectors or by changing the shape of the reflectors. The array configurations we consider are a trough system (Figure 4), a linear system (Figure 5), and a parabolic torus configuration (Figure 6). + +![](images/06920e91dfa2f68540f6629fc984b535922a0b557878cf055ba2a09e6c09a480.jpg) +Figure 4. Parabolic trough. + +# The Parabolic Trough + +Parabolic troughs were used as solar energy collectors as early as the late 1800s, to focus the sun's energy to heat water-filled pipes. Many solar energy plants still use parabolic troughs because of their advantages over flat mirrors or dishes [Smith 1996]. + +![](images/9f44c4ecadc856f9dea30171c66caa1e82c3d0bf6e79cdf460b6cc5bbbfeb149.jpg) +Figure 5. Linear array configuration. + +![](images/0a7ed1a49794e492e90355252f3f322149150113ebc5aedea87c5a80f1f7b9d6.jpg) +Figure 6. Parabolic torus. + +A parabolic trough is shaped something like a soup can cut in half lengthwise. Unlike a parabolic dish, which only focuses rays to a specific focal point, a parabolic trough focuses rays into a line. A parabolic trough is simpler and cheaper to construct than a parabolic dish and can scan a much wider field. An array might be constructed of three of these parabolic troughs oriented at different angles. As each of them rotates, they sweep out a plane. The intersection of these three planes gives the location of the object. It is unlikely that an object would evade detection if the troughs are long enough and are able to rotate through some angle to compensate for their finite length. 
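Intersecting the three swept planes amounts to a 3×3 linear solve. A sketch with illustrative plane normals and offsets (not from the text):

```python
import numpy as np

def intersect_planes(normals, offsets):
    """Point common to three planes n_i . x = d_i.

    The solve raises LinAlgError when the planes share a line, which is why
    the three troughs must be oriented at genuinely different angles."""
    return np.linalg.solve(np.asarray(normals, float),
                           np.asarray(offsets, float))

# Illustrative fix: three planes constructed to pass through one point.
target = np.array([200.0, -150.0, -400.0])
normals = np.array([[1.0, 0.0, 0.0],
                    [0.3, 1.0, 0.0],
                    [0.2, -0.1, 1.0]])
fix = intersect_planes(normals, normals @ target)
```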
+ +A drawback to using a parabolic trough is its inefficiency, since it focuses rays to a line rather than a point. This would lead to lower detected intensities when an object is encountered. + +# The Linear Array + +The same effect of using a trough can be produced using a linear string of dishes. If all the reflectors point in the same direction, then this would act as a trough with a finite number of focus points. The disadvantage is the cost of construction and the need for careful alignment of each dish. + +# The Parabolic Torus + +A parabolic trough still does not sweep out an infinite plane through its rotations. One reflector shape that can focus the ambient noise in an entire + +plane into a detectable area is a parabolic torus, which looks like a tire rim. Each cross section is a parabola, so the parabolic torus focuses energy a constant distance away from the vertex. For this torus, the focus becomes a ring around the reflecting surface. In theory, this provides a tremendous advantage over either troughs or dishes. A trough is limited to scanning areas only as wide as its length. The parabolic torus scans an entire plane during one instant and, when it is swept through $180^{\circ}$ perpendicular to its axis, scans the entire volume around it. Thus, it is impossible to “hide” from this sensor, as is possible with the previous two. + +A weakness of this reflector construction is that even less energy is focused from an object in the surroundings than from a parabolic trough. Unless the object creates a strong disturbance pattern in the ambient noise field, the object may not be seen with this reflector. Additionally, we can expect the range of this system to be less than for a system using only parabolic dishes. + +A detection system could then employ three of these reflectors mounted at different angles. Each would sweep through their surroundings. After a full revolution, they would have two scans and corresponding angles for each scan. 
+ +A more accurate procedure might use two parabolic tori and a parabolic dish. The tori could be used to find the line along which the object lies and the dish could then sweep out along that line with greater accuracy to find the object. A drawback of the tori configuration is that we do not know a way of calculating the resolving power, so we do not know the range. + +# Strengths and Limitations + +We discuss the necessity of some of our assumptions and what happens if they are relaxed or removed. We also reveal some intrinsic limitations due to equipment and environment. + +- We can relax our assumption that the target object is a near-perfect reflector. Strong sound-absorbing objects produce intensity profiles with negative difference regions compared to the background, in contrast to the positive regions for near-perfect reflectors. The similarities suggest that our assumption is not needed. If we allow frequency-dependent reflectivity, we can determine the acoustic color of an object by scanning over a range of frequencies. +- If the assumption requiring directionality is removed, detection becomes difficult for all objects except very strong acoustic absorbers. Absorbers would always show up as "holes" in the background ambient noise field. Since interesting strong acoustic absorbers are rare, we recommend that this constraint remain in any implementation. +- There is no need for a background reference scan if we use a multiple scanning technique on moving objects. If a reliable background scan can be + +obtained, it should be used for validation; for detection of stationary objects, a confident background is still necessary. + +- Ambient noise near the surface of the ocean is unstable because of highly variant surface conditions, so stationary objects would be harder to detect. This is contrasted by the seismically directed field at the ocean floor, which is constant over periods as long as seasons [Stephens 1970]. 
+- We are uncertain which reflector type is the best in practical situations. We suggest experimentation like that in Buckingham et al. [1992]. +- The angle-measuring precision of our tracking equipment is crucial to our ability to precisely find objects. If we alter the precision of our equipment even slightly, we might incorrectly locate the object, especially when it is far away. +- We must limit the total size of our reflector. Large reflectors are impractical because moving them underwater poses a significant problem and because one placed on a ship must not restrict the ship's movement. +- One last limitation that dominates any design is the acoustic energy absorption of sea water. This attenuation of energy effectively cuts our viewing distance to $1\mathrm{km}$ if we are using a frequency of $40\mathrm{kHz}$ [Stephens 1970, 12], which is disappointing. + +# Conclusions + +Our findings lead us to believe that this technology is a viable detection system for most circumstances. Theoretical results for distant objects, or for stationary objects in unstable ambient fields, are not so positive; and this type of system would be not advisable for these conditions. We are disappointed in the limited maximum viewing range in ocean water. The range of $1\mathrm{km}$ greatly limits the ability to detect approaching submarines. + +This is, however, a great way to detect stationary submarines at deeper levels, which are hard to detect using traditional sonar techniques [Stephens 1970]. A scan depth of $1\mathrm{km}$ is a reasonable maximum for submarines. + +Another potential application is the monitoring of otherwise undetectable disturbances in the ocean, including changes in seismic conditions and the behavior of marine life. This application presents a nonmilitary use for equipment that might otherwise lie unused. + +# Appendix A: Equipment Constraints and Resolving Capabilities + +A parabolic reflector has constraints dependent on its diameter. 
Given the diameter $w$ of our reflector, we can apply Rayleigh's criterion [Pedrotti and Pedrotti 1993, 335], which states that our object must be closer than a fixed distance for the object to be resolved:

$$
\theta = \frac {1 . 2 2 \lambda}{w}. \tag {1}
$$

Here, $\theta$ identifies the minimum angle that can be used to resolve our object, and $\lambda$ is the wavelength of the sound waves.

Pedrotti and Pedrotti [1993] give $\lambda$ as the speed of sound in water at a given temperature divided by the frequency of the sound waves:

$$
\lambda = \frac {c}{\mu}.
$$

Substituting into (1) for $\lambda$ , we have

$$
\theta = \frac {1 . 2 2 c}{w \mu}.
$$

The speed of sound in sea water, $c$ , is dependent on the temperature of the water. We assume a constant temperature of $25^{\circ}$ C. For this temperature, $c = 1531$ m/s [Lide 1992, 14-31]. The value of $\mu$ will be determined when we choose the sound wave frequencies to monitor.

We now derive a relation between the smallest angle $\theta$ and the ratio $x/r$, where $x$ is the object size and $r$ is the maximum object distance.

Assuming that $\theta$ is a relatively small angle, we have

$$
{\frac {\theta}{2}} \approx \tan \left({\frac {\theta}{2}}\right) = {\frac {x}{2 r}}, \qquad {\frac {x}{r}} \approx {\frac {1 . 2 2 c}{w \mu}}.
$$

Table 1 lists maximum object distances vs. required reflector diameter to view an object of width $x = 10$ m at a frequency of $\mu = 40$ kHz. To view an object the width of a submarine at a distance of $20$ km, we would need a reflector approximately the length of a submarine, a truly impractical requirement.

# Appendix B: 2-D Example

The following example demonstrates what an expected signal response will be for a specific geometrical situation. We consider a specific two-dimensional case with the following conditions (see Figure 7):

Table 1.
Range vs.
required diameter of parabolic reflector. + +
| Distance $r$ (km) | 1 | 2 | 5 | 10 | 20 |
|-------------------|---|---|----|----|----|
| Diameter $w$ (m)  | 5 | 9 | 23 | 47 | 93 |
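The diameters in Table 1 follow from the relation $x/r \approx 1.22c/(w\mu)$ derived above, rearranged as $w = 1.22\,c\,r/(x\mu)$. A quick check (constants from Appendix A; the tabulated values are the computed diameters rounded to whole metres, rounding up at the short-range end):

```python
import math

C = 1531.0     # speed of sound in sea water at 25 C (m/s) [Lide 1992]
X = 10.0       # object width to resolve (m)
MU = 40_000.0  # monitoring frequency (Hz)

def diameter(r_m: float) -> float:
    """Reflector diameter needed to resolve an object of width X at range r_m,
    from x/r = 1.22 c / (w mu)  =>  w = 1.22 c r / (x mu)."""
    return 1.22 * C * r_m / (X * MU)

for r_km in (1, 2, 5, 10, 20):
    print(f"r = {r_km:2d} km  ->  w = {diameter(r_km * 1000):5.1f} m")
```

The 20-km entry makes the text's point directly: a 93-m dish is indeed about the length of a submarine.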
- We use a single parabolic reflector.
- The object to detect is a plane line segment, and our sensor lies below the outward normal of the object.
- We measure all angles relative to a line parallel to the line segment. The line segment can be viewed as defining the horizontal. The angle $\theta$ formed by the horizontal and the normal to the object directed towards the reflector is then $-90^{\circ}$.
- We define several variables for use in our example:
  - The reflector is located directly below the midpoint of the object at a distance $r = 19.7$ m.
  - The object has a width of $x = 6.74$ m.
  - The diameter of the reflector is $w = 5$ m.

![](images/ccad6a26faefa61e9a06c9f129b06b225a1cef7e044da0d9eed32b3312e8a412.jpg)
Figure 7. Relation between object and reflector.

The distance $d$ from the corner of the object to the center of the reflector is

$$
d = \sqrt{\left(\frac{x}{2}\right)^{2} + r^{2}} = 20\,\mathrm{m}.
$$

The angle when the center of the reflector is pointed directly at the first corner of the object, measured from a line parallel to the object line segment, is

$$
\phi = 90^{\circ} - \arcsin\left(\frac{x}{2d}\right) = 80^{\circ}.
$$

Similarly, the angle when the center of the reflector is pointing at the last corner of the object is

$$
\sigma = 90^{\circ} + \arcsin\left(\frac{x}{2d}\right) = 100^{\circ}.
$$

To visualize the reflection signal from the object, we must introduce an ambient noise field. Let us assume that we are near the ocean bottom, where seismic disturbances produce a sound field that is more intense when scanning is directed at the sea floor than when scanning is directed upward toward the surface [Stephens 1970, 124-125]. The optical analogy is having the sun at your back while looking into a mirror.
We consider a case in which the sound intensity $I$ of the ambient noise field, with no object in the scan region, depends on the scanning angle $\lambda$ as follows:

$$
I(\lambda) = -2\sin\lambda + 90.
$$

Figure 8 shows a plot of the function $I$.

![](images/03952238fd2eef2efe7c7af15fa598d85e084dd919e05fa97ea1eb5745c1e7fa.jpg)
Figure 8. Plot of intensity $I$ vs. scanning angle $\lambda$ (in degrees).

With an object in the scan region, the reflector begins to pick up reflected noise from the object before being pointed directly at it. Figure 9 shows the large region $\Omega$ (between the dotted rays) in which some noise reflected from the object is received. There is a small transition region at each end of the object where the object takes up only part of the view, including a part E in which the view is mostly of the object. Inside the region $\Gamma$, the object is in full view of the reflector and there are no boundary transition effects. Beyond region $\Omega$, the signal received is just background noise.

![](images/9fea6fe3c7fab34e8326652c104d5320272ac97e00ae3632026d245071c9f1e0.jpg)
Figure 9. Reflected noise is received in region $\Gamma$ but also in part in region E and in fact throughout region $\Omega$ (denoted here by $\cap$), which is bounded by the dotted rays.

The transition region can be described in terms of an angle offset $\psi$ from the contact angles ($\sigma$ and $\phi$):

$$
\psi = \arcsin\left(\frac{w}{2d}\right),
$$

which is shown in Figure 10. The outer boundaries of region E are the angles $\phi$ and $\sigma$; the boundaries of region $\Omega$ are the angles $\phi - \psi$ and $\sigma + \psi$; the boundaries of region $\Gamma$ are the angles $\phi + \psi$ and $\sigma - \psi$. The right-hand transition region corresponds to the angle interval $(\phi - \psi, \phi + \psi)$, and the left-hand region to $(\sigma - \psi, \sigma + \psi)$, so that each region has angular width $2\psi$.
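The example geometry can be checked numerically. A short sketch of the quantities defined above ($d$, $\phi$, $\sigma$, $\psi$, and the region boundaries); note that the text rounds $d$ to 20 m, which is why it quotes $80^{\circ}$ and $100^{\circ}$ for the contact angles:

```python
import math

R = 19.7   # reflector-to-object distance (m)
X = 6.74   # object width (m)
W = 5.0    # reflector diameter (m)

d = math.hypot(X / 2, R)                      # corner-to-reflector distance
half = math.degrees(math.asin(X / (2 * d)))   # half-angle subtended by the object
phi, sigma = 90 - half, 90 + half             # contact angles (degrees)
psi = math.degrees(math.asin(W / (2 * d)))    # transition half-width

print(f"d = {d:.2f} m, phi = {phi:.1f}, sigma = {sigma:.1f}, psi = {psi:.2f}")
print(f"Omega: ({phi - psi:.1f}, {sigma + psi:.1f}) deg")  # any reflected noise
print(f"Gamma: ({phi + psi:.1f}, {sigma - psi:.1f}) deg")  # object in full view
```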
![](images/387deea530f3b60d3ed672c0e80c33f258627ee71c0ae9401287d15305acd895.jpg)
Figure 10. Definition of the angle $\psi$.

The intensity pattern from inside the region $\Omega$ depends on the angle between the direction at which the reflector is pointed and the normal from the target object (see Figure 11). All sound reaching the reflector must be parallel to the direction at which it is pointed, due to the characteristics of the parabolic shape of the reflector. Following a ray from the reflector back to the object reveals the direction from which sound is received when reflected.

![](images/6249f6db85586a4b0ef3f7271d29bf8a2855cb1fff2c2a18dd893308f8a36488.jpg)
Figure 11. The scan angle $\lambda$ and the view angle $\eta$.

The parallel rays from the reflector travel back to the object surface and reflect. The angle of incidence must equal the angle of reflection for a nearly perfect reflective surface. The angle the rays make from the horizontal after reflecting off the surface is

$$
\eta(\lambda) = \pi - \lambda - 2\theta.
$$

Using this reflected angle equation, we can produce a graph of the reflected intensity when the object takes up the full width of the sensor view, that is, when the object is in region $\Gamma$, where $\lambda$ ranges from $\phi + \psi$ to $\sigma - \psi$. With $\theta = -90^{\circ}$, the intensity in this region is given by

$$
I(\eta(\lambda)) = 2\sin\lambda + 90
$$

and is plotted in Figure 12.

We approximate the boundary transitions of region $\Omega$ by fitting a Gaussian that matches the background noise intensity at region $\Omega$'s outer edge and the reflected noise intensity at region E's outer edge.
The values of $I$ in the two boundary regions are

$$
I\big(\eta(\phi+\psi)\big)\, e^{\ln\left(\frac{I(\phi-\psi)}{I(\eta(\phi+\psi))}\right)(\lambda-\phi-\psi)^{2}/4\psi^{2}} = I\big(\eta(\phi+\psi)\big)\left[\frac{I(\phi-\psi)}{I(\eta(\phi+\psi))}\right]^{(\lambda-\phi-\psi)^{2}/4\psi^{2}},
$$

![](images/08a2a00653c3b27b80f9e1f5b10713f65eaed078a4f65c2a9bef44b2bec61cf5.jpg)
Figure 12. Object signal response vs. angle $\lambda$ (in degrees).

$$
I\big(\eta(\sigma-\psi)\big)\, e^{\ln\left(\frac{I(\sigma+\psi)}{I(\eta(\sigma-\psi))}\right)(\lambda-\sigma+\psi)^{2}/4\psi^{2}} = I\big(\eta(\sigma-\psi)\big)\left[\frac{I(\sigma+\psi)}{I(\eta(\sigma-\psi))}\right]^{(\lambda-\sigma+\psi)^{2}/4\psi^{2}}.
$$

From these expressions, we can build a piecewise function that approximates the noise signal response from the object. Notice that this function is only an approximation, because it is not differentiable at all points. A better approximation can be obtained by fitting derivatives at the boundaries as well. Figure 13 gives a plot of the total intensity response over the entire domain of scan angle $\lambda$.

![](images/e9ac8f6d58455f10d4eca122aecbc2626972c48022fa4bbf35d503007a9ad06d.jpg)
Figure 13. Total response vs. angle $\lambda$ (in degrees).

To isolate the object image, we subtract the background noise field to obtain a difference plot (Figure 14). The distinct hump seen there will not always appear. The intensity response depends on object surface geometry, reflectivity,

![](images/940ad00ac27c30f219600a10b8e3b943a127a4338139ec2768f3e41736e118fa.jpg)
Figure 14. Difference plot.

and noise field directional characteristics. Certain parameter values will lead to difference plots that contain hills, valleys, and zero regions inside the overall object intensity response.
Note that these complexities do not change the overall process of determining characteristics of the object, as long as minimum and maximum angles of response can be found.

# Appendix C: Point Location in Two Dimensions

We can determine the location of the object by triangulation from two reflectors, provided the object is not collinear with them (see Figure 1). Let $d_{i}$ be the distance from reflector $i$ to the target, $\theta_{i}$ the angle formed at reflector $i$, and $\ell$ the distance between the two reflectors.

Applying the law of sines to the triangle formed by the two reflectors and the target yields

$$
\frac{d_1}{\sin(\pi - \theta_2)} = \frac{d_2}{\sin\theta_1} = \frac{\ell}{\sin\big(\pi - \theta_1 - (\pi - \theta_2)\big)},
$$

$$
\frac{d_1}{\sin\theta_2} = \frac{d_2}{\sin\theta_1} = \frac{\ell}{\sin(\theta_2 - \theta_1)}.
$$

Solving this for $d_{1}$ and $d_{2}$ gives

$$
d_1 = \frac{\ell\sin\theta_2}{\sin(\theta_2 - \theta_1)}, \quad d_2 = \frac{\ell\sin\theta_1}{\sin(\theta_2 - \theta_1)}. \tag{2}
$$

Setting the origin of a coordinate system on reflector 1 and setting the $x$-axis to extend toward reflector 2, the position of the object is then given by the ordered pair $(x,y) = (d_1\cos \theta_1, d_1\sin \theta_1)$.

# Appendix D: Point Location in Three Dimensions

We assume that two reflectors have located a target and are aimed at it with direction vectors $\mathbf{v}_1 = \langle a_1, b_1, c_1 \rangle$ and $\mathbf{v}_2 = \langle a_2, b_2, c_2 \rangle$. We superimpose a three-dimensional coordinate system on our reflectors with the first reflector located at the origin. For simplicity, we build our coordinate system so that the second reflector lies on the $x$-axis a distance $\ell$ from the first one.
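The two-dimensional formulas (2) of Appendix C translate directly into code. A sketch (the function name `locate_2d` is ours), checked against the shore-based worked example given later in the paper ($\theta_1 = 84.3^{\circ}$, $\theta_2 = 91.9^{\circ}$, $\ell = 100$ m):

```python
import math

def locate_2d(theta1_deg: float, theta2_deg: float, ell: float):
    """Triangulate a target from the angles measured at two reflectors a
    distance ell apart, using d1 = ell*sin(theta2)/sin(theta2 - theta1)."""
    t1, t2 = math.radians(theta1_deg), math.radians(theta2_deg)
    d1 = ell * math.sin(t2) / math.sin(t2 - t1)   # range from reflector 1
    return d1 * math.cos(t1), d1 * math.sin(t1)   # (x, y), origin at reflector 1

x, y = locate_2d(84.3, 91.9, 100.0)
print(f"x = {x:.1f} m, y = {y:.1f} m")   # close to the (75.1, 752.0) in the text
```

As the formula shows, the method degenerates when $\theta_2 - \theta_1 \to 0$, i.e., when the target is collinear with the reflectors, which is why a third reflector is suggested in the general case.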
The position of the object is given by the intersection of the two lines with direction vectors $\mathbf{v}_1$ and $\mathbf{v}_2$. The parametric equations for them are

$$
\text{line 1:} \left\{ \begin{array}{rcl} x & = & sa_1 \\ y & = & sb_1 \\ z & = & sc_1 \end{array} \right. \qquad \text{line 2:} \left\{ \begin{array}{rcl} x & = & \ell + ta_2 \\ y & = & tb_2 \\ z & = & tc_2 \end{array} \right. \tag{3}
$$

Their intersection is the solution of the system

$$
sa_1 = \ell + ta_2, \qquad sb_1 = tb_2, \qquad sc_1 = tc_2.
$$

With only two unknowns, one of the three equations is redundant. Solving the first two equations for $s$ gives

$$
s = \frac{\ell b_2}{a_1 b_2 - b_1 a_2}. \tag{4}
$$

Substituting into (3) gives the location of the object as $(sa_1, sb_1, sc_1)$, where $s$ is given by (4).

# Object Velocity in Two Dimensions

We consider a worked-out example of a plausible trial if reflectors were set up and operated as outlined in this paper. In this scenario, two reflectors are placed on shore $100\mathrm{m}$ from each other (for this geometry, a third reflector is not needed to resolve the special case in which the object and the two reflectors are collinear).

During the first sweep of both reflectors through a full $180^{\circ}$, the reflectors find a peak in the ambient noise plot at angles of $\theta_{1} = 84.3^{\circ}$ and $\theta_{2} = 91.9^{\circ}$. Using (2) from Appendix C gives the location of the object:

$$
x = \frac{\ell\sin\theta_2\cos\theta_1}{\sin(\theta_2 - \theta_1)} = \frac{100\sin(91.9^{\circ})\cos(84.3^{\circ})}{\sin(91.9^{\circ} - 84.3^{\circ})} = 75.1\,\mathrm{m},
$$

$$
y = \frac{\ell\sin\theta_1\sin\theta_2}{\sin(\theta_2 - \theta_1)} = \frac{100\sin(84.3^{\circ})\sin(91.9^{\circ})}{\sin(91.9^{\circ} - 84.3^{\circ})} = 752.0\,\mathrm{m}.
$$

Then, 5 sec later, the reflectors find the object at the angles $\theta_{1} = 84.5^{\circ}$ and $\theta_{2} = 92.4^{\circ}$. Using (2) again produces the new location of the object:

$$
x = 69.7\,\mathrm{m}, \qquad y = 723.6\,\mathrm{m}.
$$

Subtracting the coordinates gives the direction in which the object has moved:

$$
\text{direction} = \langle 69.7 - 75.1,\; 723.6 - 752.0 \rangle = \langle -5.4,\; -28.4 \rangle.
$$

So, the object is moving predominantly shoreward and slightly toward reflector 1. Its speed is

$$
\text{speed} = \frac{\text{distance}}{\text{time}} = \frac{\sqrt{(-5.4)^2 + (-28.4)^2}}{5} = 5.8\,\mathrm{m/s}.
$$

Therefore, the object is moving at $5.8\,\mathrm{m/s}$ (11 knots) at an angle of $79.2^{\circ}$ SSW from the line through both reflectors. When first detected, it is $756\,\mathrm{m}$ from the first reflector and $752\,\mathrm{m}$ from the second reflector; at the second detection, it is $727\,\mathrm{m}$ and $724\,\mathrm{m}$ from the reflectors.

# References

Buckingham, M.J., B.V. Berkhout, and S.A.L. Glegg. 1992. Imaging the ocean with ambient noise. Nature 356: 327-329.
French, A.P. 1971. Vibrations and Waves. New York: W.W. Norton.
Geil, F.G. 1992. Hydrophone techniques for underwater sound pickup. Journal of the Audio Engineering Society 40: 711-718.
Lide, D.R. (ed.). 1992. CRC Handbook of Chemistry and Physics. 73rd ed. London: CRC Press.
Pedrotti, F.L., and L.S. Pedrotti. 1993. Introduction to Optics. 2nd ed. Englewood Cliffs, NJ: Prentice Hall.
Smith, C. 1995. Revisiting solar power's past. Technology Review (July 1995). http://web.mit.edu/afs/athena/org/t/techreview/www/articles/july95/Smith (11 Feb 1996).
Stephens, R.W.B. (ed.). 1970. Underwater Acoustics. New York: Wiley-Interscience.
Thornton, S.T., and A. Rex. 1993.
Modern Physics for Scientists and Engineers. New York: Saunders.
Toppan, A. 1996. Active USN ships—Submarines. http://www.wpi.edu/\~elmer/navy/current/usn_submarines.html (11 Feb 1996).
Urick, R.J. 1983. Principles of Underwater Sound. 3rd ed. New York: McGraw-Hill.
Young, H.D. 1992. University Physics. 8th ed. New York: Addison-Wesley.

# Judge’s Commentary: The Outstanding Submarine Detection Papers

John S. Robertson

Dept. of Mathematics and Computer Science

Georgia College and State University

Milledgeville, GA 31061-0490

jroberts@mail.gac.peachnet.edu

# Introduction

The problem of locating, classifying, and tracking objects under the ocean's surface is extremely important and has stimulated a great deal of significant oceanographic research. Despite the collapse of the Soviet Union and the end of the Cold War a few years ago, a number of countries possess submarine fleets that represent a very real strategic threat to other nations. Therefore, this kind of modeling problem will retain its importance for many decades.

# Modeling

The fundamental approach to mathematical modeling can be summed up in three steps:

- Formulate a scientific problem in mathematical terms.
- Solve the underlying mathematical problem, perhaps inventing new mathematical methods in the process.
- Interpret the mathematical results in light of the original problem.

During the last step, the accuracy of the model's predictions is considered. If the predictions are not good enough, they can be used to highlight weaknesses in the model. Refinements are made and the three-step process is repeated as appropriate.

The Outstanding papers excelled in their application of both the first and third steps. For example, a critical factor noted by the judges was whether teams considered the environmental effect that the ocean has on sound propagation. Another factor weighed by the judges involved accounting for the properties of the ambient noise field itself. The literature contains extensive discussions of both ideas, and too many teams did little or nothing in this area. A great number of papers made absolutely no attempt to do any true acoustic modeling. Instead, they looked like homework sets for a signal-processing course, rolling out page after page of theory without ever making a clear linkage to the problem posed. While the judges did not doubt the mathematical prowess present in some of those papers, those papers contained very little modeling—and modeling, after all, is what the contest was all about. Papers with simple models that were well conceived and whose shortcomings were clearly noted tended to fare much better than papers with extremely elaborate calculations and little connection with the real world.

# Novel Approaches

Several papers stood out for the novel ideas that they incorporated into their problem analysis. This usually involved clever schemes for designing receivers so that they would work well under the conditions specified in the problem. Even though the mathematical analysis may have been a bit short, evidence of creative thinking generally gave teams that tried something new a substantial boost in the judges' eyes.

# Literature Searching

Many teams made little or no effort to search the literature to discover relevant references. Between the time the problem was chosen for the MCM and the contest date, Scientific American published an article that treated aspects of this subject [Buckingham et al. 1996]. Very few teams mentioned this paper among their references.

# Conclusion

The very best papers displayed a healthy balance among the three modeling steps. Lots of powerhouse mathematics was certainly not sufficient for a paper to be competitive; teams in future years should bear this point in mind as they organize their write-ups. Mathematical modeling is as much about modeling as it is about mathematical detail.
# Reference

Buckingham, M.J., John R. Potter, and Chad L. Epifanio. 1996. Seeing underwater with background noise. Scientific American 274 (2) (February 1996): 86-90.

# Acknowledgment

The author is particularly grateful to Sedes Sapientiae for her help with this work.

# About the Author

John S. Robertson is Chair of the Dept. of Mathematics and Computer Science at Georgia College and State University. He received his Ph.D. from Rensselaer Polytechnic Institute in 1986. He studied under Mel Jacobson and Bill Siegmann, two applied mathematicians who have made substantial contributions to the understanding of underwater sound propagation. Dr. Robertson subsequently became interested in problems related to atmospheric sound propagation and has written a number of research papers in both ocean and atmospheric acoustic propagation. He is passionately interested in applied mathematics and loves to teach students with all kinds of backgrounds. He enjoys living as a Yankee transplant in Georgia, where he no longer needs to shovel snow.

# Practitioner's Commentary: The Outstanding Submarine Location Papers

Michael J. Buckingham
Marine Physical Laboratory
Scripps Institution of Oceanography
University of California, San Diego
9500 Gilman Drive
La Jolla, CA 92093-0213
mjb@mpl.ucsd.edu
and
Institute of Sound and Vibration Research
The University
Southampton SO17 1BJ
England

# Background

Underwater acoustics has a long history, dating back to ancient Greece, where scholars were interested in the hearing of fish, and to the ancient Chinese, who, taking a more pragmatic approach, listened at the end of a bamboo pole with the other end placed in the water, in an attempt to detect shoals of fish. This idea was further developed by Leonardo da Vinci, who, in 1490, described the use of a listening tube to detect distant shipping.
However, it was not until September 1826 that the first quantitative investigation into underwater acoustics was performed, when the speed of sound in water was measured by two young scientists, Daniel Colladon and Charles Sturm, in a classic experiment conducted on Lake Geneva, Switzerland [Lasky 1977]. Surprisingly, in view of the rudimentary nature of their experiment, the result they obtained was within $3\%$ of the currently accepted value of the speed of sound in water.

Most progress on methods of underwater acoustic detection has, of course, taken place in the twentieth century, with the two world wars providing the primary impetus to the development effort. Passive detection, in which a target is detected simply by listening for the sound that it makes, was the mainstay of submarine detection during World War I. As the war drew to a close, both Britain and the U.S. developed an accelerating interest in active detection techniques, in which a pulse of sound is projected into the water and the presence of a target inferred from the returning echo. However, active systems came too late to play a significant role in influencing the outcome of World War I.

Development of sonar systems continued between the wars at a relatively leisurely pace; but, with the advent of World War II, underwater acoustic detection technology became a principal concern on both sides of the Atlantic. As a result, active echo-ranging systems were used extensively for submarine detection on surface ships and submarines. Other types of submarine detection devices were also used, for example, magnetic anomaly detectors; but acoustics was and is the preferred approach to undersea detection, simply because the ocean is essentially transparent to sound and opaque to all other forms of radiation.

Today, half a century since the end of World War II, underwater acoustic detection is still performed using active and passive techniques.
Although the technology has improved enormously over that period, the basic principles underlying these sonar techniques remain unchanged. Modern applications of underwater sound are not just military in nature but include bottom surveying and mapping by the offshore oil and gas industry, fish location and monitoring, and population studies of marine mammals. In some of these applications, neither active nor passive is ideal. Passive detection fails when the target is very quiet or silent, and active is undesirable in situations where the target, for example dolphins or whales, may be disturbed by the transmitted signal. Active detection has the further disadvantage of giving away the presence of the observer. In many military scenarios, this lack of covertness may preclude the use of active sonar altogether. + +An alternative to passive and active sonar was introduced several years ago, based on the idea that ambient noise in the ocean acts as a form of acoustic illumination. The ambient noise is generated by a variety of sources, including breaking surface waves, precipitation, shipping, and biological sources, such as snapping shrimp in near-shore locations and numerous types of marine mammal throughout the oceans. Far from being the silent deep, the ocean is in fact a naturally noisy environment. The noise has much in common with daylight in the atmosphere in that both are random radiation fields, with components propagating in all directions. + +By thinking of the noise as the acoustic daylight of the ocean, it is natural to pursue the opto-acoustic analogy, with a view to developing a new type of underwater acoustic detection system and ultimately perhaps an underwater imaging capability. The important question to be addressed is: Can an object in the ocean be detected from the disturbance it introduces into the ambient noise field? 
Any such object will scatter and reflect some of the incident noise, suggesting that by focusing the scattered component with a suitable acoustic lens, it should be possible to create an image of the object space. This, after all, is the essential process underlying conventional photography using daylight in the atmosphere. + +Acoustic daylight imaging in the ocean has been the subject of an extensive research program at Scripps Institution of Oceanography for the past five years. + +In the early days, a simple experiment was performed off the end of Scripps pier, the purpose of which was to establish whether detection or even imaging of an object is feasible through its effect on the noise field. The results of the experiment were encouraging [Buckingham et al. 1992], to the extent that a more ambitious program was launched with the aim of creating recognizable images of objects from the noise. This objective has now been achieved. Numerous real-time moving images of silent targets at ranges around $40\mathrm{m}$ have been created using ambient noise as the sole form of acoustic illumination [Buckingham et al. 1996]. The images consist of 126 pixels in an elliptical configuration; they show geometrical targets in the water column, as well as oil drums partially embedded in a sedimentary bottom. + +Several theoretical analyses have been developed in support of the acoustic daylight imaging program. In one approach, Helmholtz-Kirchhoff scattering forms the basis of a numerical simulation of ambient noise imaging [Potter 1994]. The output of the numerical code is in the form of computer-generated images that closely resemble the acoustic daylight images obtained from the ocean experiments. On a different tack, a theoretical analysis of noise anisotropy and its effect on acoustic daylight images [Buckingham 1993] indicates that imaging should be possible under a wide range of illumination conditions. 
This three-pronged attack, comprising numerical simulation, theoretical analysis, and experimental observations, provides a substantial and growing body of evidence in support of the ambient noise imaging concept. + +Ambient noise detection and imaging represents an ambitious technological challenge, hingeing on questions of fundamental physics, engineering design, extensive software development, theoretical analysis, numerical simulation, and delicate experimental procedures. Each of these aspects of the problem has to be integrated with the others in order to achieve a successful imaging capability. Although, as the culmination of a long-term effort, the most recent research has indeed reached a point where images can be created from the noise, the technique is still in its infancy. Perhaps a useful analogy is with the earliest days of television, when the fact that the pictures were of poor quality was beside the point: The critically important issue was that the pictures existed at all. + +# The Outstanding Papers + +Continuing in the tradition of innovation in underwater acoustics, the four student teams addressed the question of detecting or imaging a submarine underwater with ambient noise. All the papers were well written and in each case gave careful consideration to the advantages and limitations of the technique. Since there is little in the literature on the problem, the students had to rely largely on their own imagination and creativity to make progress, the result of which is a number of interesting suggestions on the theory and practice of ambient noise imaging. + +The Worcester Polytechnic Institute paper is interesting in that it considers various types of reflector as candidates for an acoustic lens. In fact, a similar approach was implemented in the original investigations of ambient noise detection, in the first instance using a parabolic dish [Buckingham et al. 1992] and subsequently a larger spheroidal dish [Buckingham et al. 1996]. 
This paper was the only one of the four to cite a reference to the research that has been performed on ambient noise detection and imaging. The authors propose an interesting triangulation technique for locating the position of targets based on the use of several parabolic reflectors, and then go on to consider two other detector geometries: a parabolic trough and a parabolic torus. With regard to detection performance, their conclusions were cautious. They found that the technique would work, but only over limited ranges, out to about $1\mathrm{km}$, due to the absorption of sound by seawater. Indeed, this is a limitation of ambient noise detection at frequencies in the tens-of-kHz range.

The Wake Forest University team took a different approach by developing a computer simulation of noise detection with four close-packed hydrophones at each of four locations. Based on several simplifying assumptions concerning the nature of the noise in the ocean, and by representing the submarine as a sphere, they proposed a detection technique to give the submarine's position and, through a Doppler shift in the noise spectrum, its velocity. The idea of determining target speed from the noise field is novel, although it probably would be difficult to achieve in practice, since the noise is broadband, whereas the authors assumed it to be single-frequency. Another of the assumptions adopted by the authors of this paper is that the intensity of the noise is statistically stationary everywhere, which in reality is definitely not the case. It is not clear whether their algorithm would be successful in detecting the noise fluctuations introduced by the target submarine against the fluctuations that are a natural feature of the ambient noise field. Still, this paper shows a good deal of imagination and gives a thorough discussion of the errors and sensitivity of the proposed technique.
An extensive discussion of ambient noise, its properties, and sources is given by the Pomona College team. The authors go on to propose the use of an array of directional hydrophones for detecting the presence of a submarine from the disturbance that it introduces into the noise field. In an extension of their technique, a second array is introduced to give a three-dimensional view of the submarine. This is the only paper that comes close to the idea that the ambient noise can be used not only for detection but also for imaging of objects. They also went beyond theoretical analysis by performing simple experiments in air, using loudspeakers to simulate an ambient noise field and a rolling trash can as the target. Silhouettes of the target were obtained when the trash can was between a source and the receiver. Toward the end of the paper, various schemes for producing images of a submarine with ambient noise are proposed. One of these is a sound camera, and a second is a variant of Schlieren photography, using a laser interference technique to observe the pressure fluctuations in the sound field. In the latter case, the authors appear to be unfamiliar with Schlieren methods but worked out the essential principles for themselves.

The University of North Carolina team developed a computer-generated model of the ambient noise field and used it in conjunction with a recognition algorithm, in a simulation of submarine detection using ambient noise. The submarine was represented as an ellipsoidal object, which acted as an obstruction in the noise field, giving rise to a reduction in noise intensity (a silhouette) at the receiver. Contouring of the noise intensity forms the basis of the recognition algorithm. As in the Wake Forest paper, the contouring technique is claimed to provide position and velocity information on the target. In this case, however, the velocity is obtained, not by Doppler shifts, but by observing the submarine position at successive times.
Apparently, a grid of sensors was assumed as the detector, with resolution limited to the inter-sensor spacing, although the details of the detection scheme are not clear from the paper. The contouring technique seems to work reasonably well, although some of the arguments in the discussion are difficult to follow.

Overall, the quality of the papers is very high, and some of the ideas presented may well work in practice. It is interesting, though, that none of the candidates considered using a multi-beam phased array as an acoustic lens. Such a system is a strong contender for future ambient noise imaging systems. Perhaps we shall also see some of the novel ideas presented in the students' papers turned into real systems in the future.

# References

Buckingham, M.J. 1993. Theory of acoustic imaging in the ocean with ambient noise. Journal of Computational Acoustics 1: 117-140.
Buckingham, M.J., B.V. Berkhout, and S.A.L. Glegg. 1992. Imaging the ocean with ambient noise. Nature 356: 327-329.
Buckingham, M.J., John R. Potter, and Chad L. Epifanio. 1996. Seeing underwater with background noise. Scientific American 274 (2) (February 1996): 86-90.
Lasky, M. 1977. Review of undersea acoustics to 1950. Journal of the Acoustical Society of America 61: 283-297.
Potter, J.R. 1994. Acoustic imaging using ambient noise: Some theory and simulation results. Journal of the Acoustical Society of America 95: 21-33.

# About the Author

Michael Buckingham is Professor of Ocean Acoustics at Scripps Institution of Oceanography, La Jolla, California, U.S.A., and also a Visiting Professor at the Institute of Sound and Vibration Research, University of Southampton, U.K. He is the originator of the acoustic daylight imaging concept, which his group at Scripps has developed over the past six years. He has been a Visiting Professor in the Department of Ocean Engineering, M.I.T., and an Exchange Scientist at the Naval Research Laboratory, Washington, D.C.
Before joining Scripps in 1990, he was at the Royal Aerospace Establishment, Farnborough, U.K., where he developed his interests in underwater acoustics. In 1984, he received the A.B. Wood medal from the Institute of Acoustics, and he was the recipient of the Clerk Maxwell Premium from the I.E.R.E. in 1972. He is a Fellow of the Acoustical Society of America, the Institute of Acoustics, and the Institution of Electrical Engineers.

# The Paper Selection Scheme Simulation Analysis

Zheng Yuan Zhu

Jian Liu

Haonan Tan

Fudan University

Shanghai, China

Advisor: Yongji Tan

# Summary

We provide five models for selection schemes and, based on computer simulation, we propose an optimal scheme for each model.

In analyzing the problem, we use a cost function to evaluate a scheme.

We quantify a judge's capability in terms of the standard deviation $d$ of accidental error and the magnitude $e$ of systematic bias.

We enumerate our assumptions and give the algorithm for computer simulation. We then discuss the possible ranges of the parameters $d$ and $e$, finding that the judges' capability must reach a certain level if they are to accomplish their task.

We discuss the five models. The Ideal Model explains the selection scheme under ideal conditions. The Round-Table Model and the Classical Model produce expensive solutions. To save money, we put forward the Cutoff Model and the Advanced Round-Table Model.

The Cutoff Model is based on numerical scoring and rejects papers below a certain cutoff level in each round. Its flexibility in changing the rejection proportion in each round, depending on the capability of the judges, leads to a more economical scheme. The Advanced Round-Table Model is a combination of rank-ordering and numerical scoring; it can generate a scheme that is both economical and easy to operate.

We compared all the models (see Table 6).
The schemes produced by the Cutoff Model and the Advanced Round-Table Model reduce the expense drastically.

Later we generalize and find that all models except the Round-Table Model are suitable for different values of $P$, $J$, and $W$. For the purpose of classifying winners into various ranks, the Cutoff Model works best.

We find that the expense depends on the capabilities of the judges. A small decrease in the capability of the judges can lead to a great increase in expense in each model. So the best advice that we can give to the contest committee is to employ the most capable judges available, even though they may cost more.

# Assumptions

- There is an absolute rank-ordering and numerical scoring to which all judges would agree.
- The absolute numerical scores of all the papers are integers from 1 to 100, distributed $\mathrm{N}(70, 100)$ ($\mu = 70$, $\sigma^2 = 100$).
- A scheme is accepted if and only if it guarantees with $95\%$ probability that the final $W$ winners are among the "best" $2W$ papers.
- The judges score papers individually and do not influence one another.
- The judges' scoring has accidental errors that are normally distributed; the magnitude of a judge's errors can be obtained from the judge's past records.
- Some judges have a systematic bias for a specific kind of paper and hence will score such papers higher or lower.
- In using the rank-ordering method, only the bottom $30\%$ that each judge rank-orders may be rejected.

# Analysis of the Problem

Our main task is to provide a scheme that can reliably select the $W$ winners and significantly reduce the number of papers for each judge to read.

# The Evaluation Method

We can adopt either rank-ordering or numerical scoring to select the best papers. We elaborate the case of scoring, since a rank-ordering can be produced on the basis of the scores.

# Expense

The purpose of reducing the number of papers read by each judge is to save on the expense of the contest.
According to the theory of marginal utility, the more papers a judge must read, the more must be paid per paper. So the numbers of papers read by different judges should be as equal as possible. The cost function depends on the actual situation, but we use the following: papers 1 to 20 cost $m$ each, papers 21 to 50 cost $2m$ each, and papers 51 to 100 cost $4m$ each. In mathematical form, the function is

$$
C = m \cdot \sum_{i=1}^{J} \left\{ a_i + (a_i - 20) \cdot u(a_i - 20) + 2 \cdot (a_i - 50) \cdot u(a_i - 50) \right\},
$$

where $a_{i}$ is the number of papers that judge $i$ reads and

$$
u(x - a) = \begin{cases} 0, & x < a; \\ 1, & x \geq a. \end{cases}
$$

Small variation in the cost function has little effect on the scheme; with so little information available, we might as well let $m = 10$.

# The Judge

After preliminary simulation, we found that the capability of the judges is the most important factor in deciding the selection scheme. We use two parameters to describe the capability of a judge:

- The variance of accidental errors in scoring. The smaller the variance, the more experienced the judge and the more accurate the judge's scores, and vice versa. The variance can be obtained from the judge's past performance.
- The magnitude of the systematic bias. This is hard to quantify, since it is difficult to determine the type and the magnitude of an individual's bias in real life. So we simplify the situation and classify both the judges and the papers into three types: radical, neutral, and conservative. A radical judge gives higher scores to radical papers and lower scores to conservative papers and has no bias toward neutral papers. A conservative judge is the opposite of a radical one. A neutral judge has no bias at all.
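As a concrete check, the cost function $C$ above can be sketched in Python; `judging_cost` is an illustrative name, with $m = 10$ as chosen in the text:

```python
def judging_cost(paper_counts, m=10):
    """Cost C of a scheme, given the number a_i of papers each judge reads.

    Papers 1-20 cost m each, papers 21-50 cost 2m each, and papers
    51-100 cost 4m each, matching
    C = m * sum(a_i + (a_i - 20)*u(a_i - 20) + 2*(a_i - 50)*u(a_i - 50)).
    """
    total = 0
    for a in paper_counts:
        cost = a                  # every paper costs at least m
        if a > 20:
            cost += a - 20        # papers beyond the 20th cost an extra m each
        if a > 50:
            cost += 2 * (a - 50)  # papers beyond the 50th cost a further 2m each
        total += m * cost
    return total

# Ideal Model: 4 judges read 13 papers each, 4 read 12 each
print(judging_cost([13] * 4 + [12] * 4))  # 1000
```

For example, eight judges each reading all 100 papers (the worst case) cost $280m$ per judge, or $22,400 in total.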
# Constructing the Model

Because the judging process involves many random factors, it is difficult to solve the problem theoretically. Instead, we adopt computer simulation based on theoretical analysis.

# The Algorithm of the Simulation

1. Generate 100 random integers from 1 to 100, as the papers' "real" scores, from the distribution $\mathrm{N}(70, 100)$. Put them in the array paper_score[1,i].
2. Take a constant $d$ as the upper bound of all the judges' accidental errors. Generate 8 random integers $d_j$ as the standard deviations of the judges' accidental errors, using a discrete uniform distribution on the integers from 0 to $d$. Put these numbers in the array judge[1,j].
3. Take a constant $e > 0$ as the systematic bias value. Let $1$, $0$, and $-1$ represent radical, neutral, and conservative, respectively. Give every paper a number in $\{1, 0, -1\}$ and put them in the array paper_score[0,i]. Give every judge a number in $\{1, 0, -1\}$ and put them in judge[0,j]. We calculate the systematic bias $s$ from the expression

$$
s = e \cdot \mathtt{paper\_score}[0,i] \cdot \mathtt{judge}[0,j].
$$

For example, when a conservative judge meets a radical paper, $s = -e$.

4. The method by which judge $j$ scores paper $i$: Let

$$
u = \mathtt{paper\_score}[1,i] + e \cdot \mathtt{paper\_score}[0,i] \cdot \mathtt{judge}[0,j].
$$

Generate random integers in $[1, 100]$ as scores from the normal distribution $\mathrm{N}(u, d_j^2)$. Put them in the array judge_score[i,j]. Thus we generate the judges' score matrix.

# Determining the Parameters

We need to determine the values of $d$ and $e$. First, we discuss how to determine the range of $d$.

From probability theory, we have

Lemma. Let $X_{1}, X_{2}, \ldots, X_{n}$ be independent random variables with variances $\sigma_{i}^{2}$.
Then $\overline{X} = \sum X_{i} / n$ has variance

$$
\sigma^{2} = \frac{1}{n^{2}} \sum_{i=1}^{n} \sigma_{i}^{2}.
$$

So we have

Corollary 1. $\frac{1}{\sqrt{n}}\min_{1\leq i\leq n}\{\sigma_i\} \leq \sigma \leq \frac{1}{\sqrt{n}}\max_{1\leq i\leq n}\{\sigma_i\}.$

From the corollary, we conclude that the accuracy of judgment can be improved if several judges work on each paper and average their scores.

Using the Cauchy inequality, we have

$$
\sigma^{2} \geq \frac{1}{n^{3}} \left( \sum_{i=1}^{n} \sigma_{i} \right)^{2},
$$

that is,

Corollary 2. $\sigma \geq \frac{1}{\sqrt{n}} \frac{\sum_{i=1}^{n} \sigma_{i}}{n}.$

Since by our assumption the $\sigma_i$ are distributed discrete uniform on $[0, d]$, we have

$$
\frac{\sum_{i=1}^{n} \sigma_{i}}{n} \approx \frac{d}{2}.
$$

For $n \leq 8$, this becomes

$$
\sigma \geq \frac{1}{2\sqrt{2}} \cdot \frac{\sum_{i=1}^{n} \sigma_{i}}{n} \approx \frac{\sqrt{2}}{8} d.
$$

Now we have

Conclusion 1. Generally speaking, accidental errors can be reduced when several judges work on the same paper. The more judges involved, the more accurate the result.

Conclusion 2. For the most part, in scoring a single paper, the standard deviation of the mean accidental error will not be lower than $\frac{\sqrt{2}}{8} d$.

We have not proved the two conclusions; in fact, there are exceptions. But the probability of an exception is too small to matter for the practical problem. So we grant these conclusions in our later discussion.

Based on many computer-simulation experiments, we find the following experimental law.

Law 1. $d < 10$,

where $d$ is the upper bound on the standard deviation of the accidental judging error.

Verification: We need only show that when $d = 10$, there is no selection scheme that guarantees with $95\%$ probability that the final $W$ winners are among the "best" $2W$ papers.
We consider the ideal situation, under which each judge reads all the papers, for the case of 100 papers and 8 judges.

Because of Conclusion 2, the standard deviation of the 8 judges' mean accidental error is $\sigma \geq d\sqrt{2}/8$. We may as well suppose $\sigma = d\sqrt{2}/8$, and here $d = 10$. We set the systematic bias to zero: $e = 0$. In a simulation of the Round-Table Model with 10,000 iterations, the judges chose the 3 winners correctly (i.e., all among the "best" 6 papers) 9,460 times. This probability of $94.6\%$ is a little lower than our standard.

A calculation using Mathematica gave the probability of failure (at least one winner not among the "best" 6 papers) as $5.6\%$.

By running the simulation program for Round-Table again, with changed values of the parameter $d$, we get:

Law 2. When $d \leq 3$, Round-Table meets our $95\%$ standard even if each paper is read by only one judge.

# Comments on Laws 1 and 2

Law 1 points out that to succeed in the task of selecting winners in a contest, the judges' capability must reach a certain level. If a judge's standard deviation $d$ is more than 10, even with no systematic bias, a paper that deserves a score of 70 may be scored higher than 80 or lower than 60 with probability greater than $30\%$. The probability of a score higher than 90 or lower than 50 is no less than $5\%$. Such a person obviously is not qualified to judge any serious competition.

Law 2 points out that if all the judges are reliable enough—in other words, they are all experienced and have little systematic bias—a single judge's score is sufficient for determining the winners. When $d = 3$, running Round-Table 5,000 times shows that the average failure rate is about $1.2\%$.

We conclude that if $e = 0$, we need consider only $d$ from 3 to 10; if $e \neq 0$, then $d$ ranges from 0 to 10.

The range of $e$ is quite difficult to determine.
A reasonable supposition is that $e$ has the same magnitude as $d$ . We ran Round-Table with different values of $d$ and $e$ and used Mathematica to plot a three-dimensional bar chart (see Figure 1). + +![](images/56b6a38fc1b5c342375af1769c70009b70df63df1beca392df02e50799825ffa.jpg) +Figure 1. The failure rate $(r / 100)$ is much more dependent on accidental error $d$ than on bias $e$ . + +From the figure, we find that the standard deviation $d$ has significant effect on the result, whereas the bias $e$ has little effect. + +Later we put forward several practical models for paper-judging and find some optimal schemes for different values of $d$ and $e$ , based on computer simulation. + +We may suppose that $e \in \{0, 5, 10\}$ and $d \in \{1, 3, 5, 7, 9\}$ . These limited ranges suffice to reveal the relationship between the scheme selection and the capability of judges. + +# The Ideal Model + +When $d = e = 0$ , every judge can rank-order or score all the papers as correctly as the absolute rank-ordering. This is the "ideal situation." + +For 100 papers and 8 judges, 4 judges have 13 papers each and 4 have 12 each; a single reading of a paper, by any judge, suffices. The winners are the papers with the highest scores. The total cost is $1,000, and the 3 winners must be the "best" 3 papers. + +A good scheme to rank-order the papers is shown in Figure 2. This scheme guarantees that the winners are the "best" 3 papers. The cost is $1,210. + +![](images/9e9d232d8cd37f17afa6be05e1ea660aca888c2c5c01a84a7fcbbde98ec1dca0.jpg) +Figure 2. A scheme for rank-ordering the papers. The letters represent the 8 judges. The number at the tail of each arrow is the number of papers that a judge reads, and the number at the head of the arrow is the number that the judge selects. + +But the most economical method is presented in Figure 3. Here, judges $A$ , $B$ , and $C$ each rank-order 14 papers, and the other judges rank-order 13 papers each. 
We suppose that $A$ is the Head Judge, who is responsible for picking out the 3 winners from the 8 papers left after the first screening round. The cost of this method is $1,070. Though it cannot ensure that the winners are the top 3, it ensures that the 3 winners are among the "best" 6 papers with probability $99.3\%$.

We prove this assertion. If the "best" 6 papers are distributed among at least 3 groups (i.e., are judged by at least 3 different judges at the initial screening), then the 3 winners are "qualified" (i.e., among the "best" 6). If the top 6 are distributed among no more than 2 groups, there must be some unqualified winners. The number of unfavorable events is

$$
8 + \binom{6}{1} \cdot 8 \cdot 7 + \binom{6}{2} \cdot 8 \cdot 7 + \binom{6}{3} \cdot 8 \cdot 7 \cdot \frac{1}{2} = 1{,}744.
$$

The number of ways to distribute 6 papers among the groups arbitrarily is $8^{6} = 262{,}144$, and $1{,}744 / 262{,}144 = 0.66\%$. In other words, all the winners are qualified in $99.3\%$ of all cases.

![](images/23d12deb8dc3043a508609695bff62e73c2ccd984285eaef28e58edfcbb168f9.jpg)
Figure 3. A more economical scheme for rank-ordering the papers.

The Ideal Model sets a lower bound for the cost function. When $d$ and $e$ are small (e.g., $d = 3$, $e = 0$), which means the judges are experienced and have almost no bias, the Ideal Model can work as the selection scheme.

# The Round-Table Model

The distinguishing feature of this model is its simplicity.

1. Determine $n$, the number of rounds, according to the specific $d$ and $e$.
2. Have all judges sit around a round table (hence the name of the model) and share the papers equally. At each round, after every judge has finished scoring, each judge passes the papers already read to the right, where the neighboring judge scores these papers again in the next round.
3. After $n$ rounds, each judge averages the $n$ scores on a paper; the average score is the final score for the paper.
The final score determines the rank-order of the papers.

The key question for this method is how to determine the value of $n$. Through numerical experiment, we found that systematic bias exerts only a slight influence on $n$. So all our later discussion is based on the assumption that $n$ is determined entirely by $d$.

When the distributions of the judges' accidental errors are all $\mathrm{N}(0, d^2)$, it is not hard to see that after $n$ rounds the standard deviation of the final score's error will be $d_n = d / \sqrt{n}$. For 8 judges, and with $d < 10$, we have $d_n < \sqrt{2}/8 \cdot 10 \approx 1.77$. Further simulation shows that the scheme is desirable if $d_n \leq 1.6$. We have

Law 3. If all judges' errors are distributed $\mathrm{N}(0, d^2)$, and if $d_n = d / \sqrt{n} \leq 1.6$, i.e., $n \geq d^2 / 1.6^2$, then an $n$-round scheme is desirable; if $d_n \geq 1.77$, i.e., $n \leq d^2 / 1.77^2$, then an $n$-round scheme is not desirable.

When all the conditions of Law 3 are satisfied, we can easily find the optimal value of $n$. But the requirement on the error distribution is too harsh. When the standard deviation of the error has the uniform distribution on $[0, d]$, there is an empirical formula:

$$
n = \min \left\{ K \in \mathbb{N} : K \geq \left( \frac{d}{2 \cdot 1.6} \right)^{2} \right\}.
$$

The values of $n$ obtained from the formula agree with the optimal $n$ obtained from computer simulation (see Table 1).

Table 1. Failure rate in 1,000 iterations, and expense, for various numbers of rounds $n$.
| Bias $e$ | Max variance $d$ | Number of rounds $n$ | % Failure rate | Expense |
|---|---|---|---|---|
| 0 | 3 | 1 | 1.2 | $1,000 |
| 0 | 5 | 2 | 4.4 | $2,400 |
| 0 | 7 | 5 | 3.2 | $10,400 |
| 0 | 9 | 8 | 2.8 | $22,400 |
| 5 | 3 | 1 | 3.6 | $1,000 |
| 5 | 5 | 2 | 4.7 | $2,400 |
| 5 | 7 | 5 | 4.8 | $10,400 |
| 5 | 9 | 8 | 4.4 | $22,400 |
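The expense column of Table 1 follows from the cost function: in an $n$-round scheme every paper is read $n$ times, and the reads are split among the 8 judges. A sketch, under the assumption that the reads are split as evenly as possible:

```python
def judging_cost(a, m=10):
    # cost for one judge reading a papers: first 20 at m, next 30 at 2m, rest at 4m
    return m * (a + max(a - 20, 0) + 2 * max(a - 50, 0))

def round_table_expense(n, papers=100, judges=8, m=10):
    """Expense of an n-round Round-Table scheme with reads split evenly."""
    base, extra = divmod(n * papers, judges)
    counts = [base + 1] * extra + [base] * (judges - extra)
    return sum(judging_cost(a, m) for a in counts)

for n in (1, 2, 5, 8):
    print(n, round_table_expense(n))
# 1 -> 1000, 2 -> 2400, 5 -> 10400, 8 -> 22400, matching Table 1
```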
We see from the table that the expense increases rapidly as $d$ rises. So we had better not use this scheme if the capability of the judges is only ordinary.

# The Classical Model

We put forward a classical model by combining the rank-ordering and scoring methods.

1. Distribute the papers to the judges as evenly as possible. If a judge receives a paper that that judge has already scored, the judge exchanges it with another judge. The judges score the papers.
2. Every judge rank-orders the papers that that judge has just scored and determines the bottom $30\%$.
3. Each judge's bottom $30\%$ of papers are rejected.
4. If only 3 papers remain, they are the winners. If there have been 8 rounds, then all the papers left have been scored by all 8 judges; so average each paper's scores and select the highest 3 as the winners. Otherwise go back to Step 1.

This model strictly limits the rejection of papers during each round and refrains as much as possible from rejecting good papers. The stability and precision of the model are very high, but the flexibility is low. Because of the progressive rejection method, it costs less than the Round-Table Model when $d$ is comparatively large; but in general, this scheme is comparatively expensive. See the results of the simulation of the Classical Model in Table 2.

Table 2. Simulation results for the Classical Model.
| Bias $e$ | Max variance $d$ | % Failure rate | Expense |
|---|---|---|---|
| 0 | 0 | 0.0 | $4,851 |
| 0 | 5 | 2.0 | $5,022 |
| 0 | 9 | 4.1 | $5,563 |
| 5 | 0 | 1.1 | $5,462 |
| 5 | 5 | 1.9 | $5,779 |
| 5 | 9 | 4.8 | $6,250 |
| 10 | 0 | 5.5 | $6,528 |
| 10 | 5 | 6.3 | $6,653 |
| 10 | 9 | 10.7 | $7,395 |
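The Classical Model's elimination schedule can be sketched as follows. This is a structural sketch only: it applies the 30% rejection (rounded down) to the surviving pool as a whole rather than per judge, and it ignores the scoring noise that the full simulation models:

```python
def classical_schedule(papers=100, reject_percent=30, max_rounds=8, winners=3):
    """Survivor counts per round and total reads for the Classical Model:
    each round, every surviving paper is read once and the bottom 30% fall."""
    counts, reads = [papers], 0
    for _ in range(max_rounds):
        reads += counts[-1]                              # one read per survivor
        rejected = counts[-1] * reject_percent // 100    # floor of 30%
        counts.append(counts[-1] - rejected)
        if counts[-1] <= winners:
            break
    return counts, reads

counts, reads = classical_schedule()
print(counts)  # [100, 70, 49, 35, 25, 18, 13, 10, 7]
print(reads)   # 320 reads spread over the 8 judges
```

After 8 rounds, 7 papers remain; the model then averages each remaining paper's scores to pick the 3 winners.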
The Round-Table Model and the Classical Model can be used as selection schemes. However, their expenses are not encouraging. So we put forward two models based on them that are more economical.

# The Cutoff Model

This model is based on the Classical Model. However, it changes the cutoff level in each round. Thus, it is not constrained to reject $30\%$ in each round but can set the rejection proportion according to the circumstances. So it is more flexible.

1. Determine the failing percentage of each round. When there are $n$ rounds, the per-round pass rate is $x = \sqrt[n]{0.03}$, so that after $n$ rounds only $3\%$ of the papers survive. Each paper is scored once in each round, so we can complete the judging work in no more than 8 rounds; hence $n \leq 8$.
2. Distribute the papers to the judges equally; each judge should get papers that that judge has not previously scored.
3. The judges score the papers. Average each paper's scores as its this-round score. Determine this round's cutoff level using the failing percentage and the papers' this-round scores. Any paper below the cutoff level fails.
4. If only three papers remain, they are the winners. Otherwise, go back to Step 2.

For fixed variance $d$ and systematic bias $e$, we experimented with different values of $n$ to find the optimal scheme, that is, the scheme with the smallest number of rounds that satisfies the criterion of a failure rate less than $5\%$. Table 3 gives some results.

Table 3. Simulation results for the Cutoff Model, giving percentage failure rates for combinations of $e$ and $d$ with values of $n$.
| Bias $e$ | Max variance $d$ | % Failure rate for $n = 1, 2, \ldots, 8$ rounds | Expense |
|---|---|---|---|
| 0 | 0 | 0 | $1,189 |
| 0 | 3 | 0 | $1,661 |
| 0 | 5 | 9, 6, 1 | |
| 0 | 7 | 25, 17, 8, 6, 7, 4 | |
| 0 | 9 | 47, 28, 16, 15, 6, 6, 7, 5 | |
| 5 | 0 | 16, 10, 7, 3 | $1,725 |
| 5 | 3 | 11, 5, 4, 6, 1 | |
| 5 | 5 | 29, 7, 3, 7, 2 | |
| 5 | 7 | 65, 45, 18, 15, 8, 6, 8, 7 | |
| 5 | 9 | 62, 33, 23, 18, 18, 11, 10, 3 | |
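Reading the Cutoff Model's $x = \sqrt[n]{0.03}$ as the per-round survival fraction (so that $x^n \cdot 100 = 3$ papers remain after $n$ rounds), the elimination schedule can be sketched; rounding survivor counts to the nearest integer is our assumption:

```python
def cutoff_schedule(n, papers=100, winners=3):
    """Survivors after each of n rounds when a fraction
    x = (winners/papers)**(1/n) of the papers passes each round."""
    x = (winners / papers) ** (1.0 / n)
    counts = [papers]
    for k in range(1, n + 1):
        counts.append(max(winners, round(papers * x ** k)))
    return counts

print(cutoff_schedule(2))  # schedule for a 2-round scheme
print(cutoff_schedule(4))  # schedule for a 4-round scheme
```

For the 4-round scheme, for example, the survivor counts are 100, 42, 17, 7, 3, so most papers are read only once or twice, which is where the savings over the Round-Table and Classical Models come from.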
Figure 4 shows the rank of the third-best paper for two of the optimal schemes, and Figure 5 shows that the simulation of these schemes is stable.

We see that the cost is significantly less than for the two previous models; the total number of readings is reduced because many of the unqualified papers are rejected early. However, the method of distributing papers in each round is comparatively complicated and may cause trouble in practice.

Figure 4. Rank of the third-best paper for two optimal schemes, over 1,000 iterations.
![](images/d1faf137ae96c16465ebf973c8a01707d2503d0c30c9ef97b022bcce7a49c92e.jpg)
a. $e = 0$, max $d = 5$, and 2 rounds. b. $e = 5$, max $d = 5$, and 4 rounds.

![](images/496005e59bdbc9dff9998d777fad8b5c7c559b91fa62335e763774077221e73a.jpg)

Figure 5. Rank of the third-best paper vs. iteration number, for two optimal schemes.
![](images/ac42ea13dd12c586aaf3e7438065dab6c1d1504a2310de2b107898f7d1e86ff3.jpg)
a. $e = 0$, max $d = 5$, and 2 rounds. b. $e = 5$, max $d = 5$, and 4 rounds.

![](images/04f01ce3a43d398b21fff587deb06b3612d0b629de82d20546776b497742b0ce.jpg)

# The Advanced Round-Table Model

This model uses a combination of rank-ordering and numerical scoring. In its early stages, we use group ordering and partial exchanging to reject a fixed proportion each round without violating the assumption that only the bottom $30\%$ that each judge rank-orders may be rejected. This selection scheme is stable in structure and easy to implement. The adjustable number of paper exchanges in each round also makes it flexible. In the final stage, we determine the winners from average scores, to speed the selection process.

1. We distribute the papers equally to the judges. The rejection proportion in each round is $30\%$. After $n$ rounds of screening, each judge has only one paper left (when $30\%$ of a judge's papers is less than 1, we take it to be 1).
At the first round, we control the rounding to ensure that each judge has an equal number of papers left; at later rounds, we round down. Given the capability of the judges, we can determine the number of exchanges $K_{i}$ in each round.

For our problem with $n = 6$, the numbers of papers each judge has after each round are 9, 6, 4, 3, 2, and 1. The numbers of papers rejected in each round are 4, 3, 2, 1, 1, and 1.

2. The judges sit around a round table. Let $K_{i} = i$ (later we thoroughly discuss the method of selecting the value of $K_{i}$). At the first round, $K_{0} = 0$: judges do not exchange papers but rank-order them and eliminate the worst $30\%$.
3. At the second round, $K_{1} = 1$. Each judge passes the worst $30\%$ of papers to the right. Then each judge scores the new papers received, re-rank-orders all current papers (including the ones not passed on), and cuts off the worst $30\%$.
4. For $K_{i} \geq 2$, the passing, scoring, re-rank-ordering, and rejection of the worst $30\%$ take place $K_i$ times.
5. When each judge has only one paper, the paper is passed $K_{n}$ times. If the paper has already been scored $K_{n}$ times, the other judges do not score it. From the mean of the $K_{n}$ scores, we select the best three papers as the winners.

The exchange method in this scheme is somewhat like that of the Round-Table Model, so we call it Advanced Round-Table.

# Why Exchange the Bottom $30\%$?

We assumed that only the bottom $30\%$ of the papers that each judge rank-orders could be rejected. Hence the number of papers left after each round would not be a fixed number, which would make the method more complicated and more unstable and would increase costs. But this undesirable feature can be avoided if we circulate the bottom $30\%$ of papers. For example, consider a paper among the bottom $30\%$ that $\mathrm{J}_1$ has passed to $\mathrm{J}_2$.
If it is still among the bottom $30\%$ of $\mathrm{J}_2$'s papers after $\mathrm{J}_2$'s re-rank-ordering, it is certain that it should be eliminated. If this bottom $30\%$ also contains papers that are not from $\mathrm{J}_1$, the paper in question can be eliminated without violating the $30\%$ rejection rule (it has been rank-ordered only by $\mathrm{J}_2$). Moreover, if $\mathrm{J}_2$ believes that it is even worse than some of the bottom $30\%$ from $\mathrm{J}_1$, it is reasonable for $\mathrm{J}_2$ to reject it.

# How Many Times to Pass the Papers?

Since at each round we must determine how many times to pass the papers, the process of searching for the optimal scheme is much more complicated than for the Round-Table and Cutoff Models. However, the flexibility of the model increases with the complexity, which makes it possible to find a selection scheme that is both efficient and economical.

First, we note two properties of $\{K_i\}$:

1. $\{K_i\}$ is bounded, i.e., $0 \leq K_i < J$.
2. $\{K_i\}$ is monotone increasing, i.e., $i < j \Rightarrow K_i \leq K_j$.

There are only $J$ judges, and all the papers are divided into $J$ groups. When $K_{i} > J$, a judge may find that a paper previously passed on comes back again! That is anything but efficient. So $K_{i} < J$.

At the last several rounds, it seems more likely that the qualified papers (the top 6 papers) may be eliminated, so there should be more passings at later rounds than at early ones. So $\{K_i\}$ is monotone increasing.

Second, there is a relationship between $\{K_i\}$ and the cost $C$: the cost is monotonically increasing in the total number $K$ of papers that all judges rank-order, $K = 8\sum_{i=0}^{n} P_i K_i + 100$, where $P_i$ is the number of papers to be eliminated at round $i$. In this model, the numbers of papers read by the judges are almost equal (the difference is no more than one paper), so $C$ is monotonically increasing in $K$.
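The two properties above make the candidate sequences easy to enumerate in order of increasing cost. A hypothetical sketch, in which the rejection counts $P_i = (4, 3, 2, 1, 1, 1, 1)$ pad the per-round rejection numbers given earlier with a final 1 for the last remaining paper (that padding is our assumption):

```python
from itertools import combinations_with_replacement

def candidate_sequences(length=7, J=8):
    """All bounded (0 <= K_i < J), monotone nondecreasing sequences {K_i}."""
    return [list(s) for s in combinations_with_replacement(range(J), length)]

def total_reads(K, P=(4, 3, 2, 1, 1, 1, 1), papers=100, judges=8):
    # K_total = 8 * sum(P_i * K_i) + 100, the number of papers rank-ordered
    return judges * sum(p * k for p, k in zip(P, K)) + papers

# test candidates in order of increasing cost; the first sequence whose
# simulated failure rate is below 5% is the optimal scheme
ordered = sorted(candidate_sequences(), key=total_reads)
print(len(ordered))                         # 3432 candidate sequences
print(ordered[0], total_reads(ordered[0]))  # cheapest: all zeros, 100 reads
```

With $J = 8$ and 7 rounds there are only $\binom{14}{7} = 3{,}432$ candidate sequences, so an exhaustive ordered search is feasible even before resorting to binary search.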
It is clear that the expense and the precision of a selection scheme are at odds: The lower the expense, the greater the probability of producing an unqualified winner. We might as well begin with the $\{K_i\}$ whose corresponding $C$ is the smallest. By testing the candidate sequences $\{K_i\}$ one by one in this order, we increase the expense step by step and at the same time increase the precision of the scheme. When the precision meets the requirement, the scheme that the current $\{K_i\}$ corresponds to is the optimal scheme. We use our simulation program to test the precision of each scheme.

We should say that this method of searching for the optimal scheme is time-consuming. To find an optimal scheme corresponding to a group of specific values of $d$ and $e$, we may spend hours; but compared to the savings, several hours of machine time is nothing.

A more efficient approach is to use binary search to find the "best" $\{K_i\}$.

Table 4. Failure rate and expense for various sets of $\{K_i\}$.
| Bias $e$ | Max variance $d$ | $\{K_i\}$ | Iterations | % Failure rate | Expense |
|---|---|---|---|---|---|
| 0 | 5 | 1, 1, 1, 1, 1, 2, 4* | 20,000 | 1.8 | |
| 0 | 7 | 1, 1, 1, 1, 1, 4, 5* | 10,000 | 3.9 | |
| 0 | 9 | 1, 1, 1, 2, 2, 4, 5 | 5,000 | 4.8 | |
| 5 | 5 | 1, 1, 1, 2, 2, 2, 4 | 1,000 | 0.7 | |
| 5 | 7 | 2, 2, 2, 2, 2, 4, 8 | 1,000 | 2.7 | |
| 5 | 9 | 2, 2, 2, 2, 2, 4, 8 | 1,000 | 6.7 | |
Listed in Table 4 are several sets $\{K_i\}$ (the two starred sets are optimal).

From the table we notice that under the same conditions this model's cost is less than that of all the other models, while its operating process is clear and definite, easy to understand, and easy to apply in practice.

The weakness is also obvious: Finding the optimal $\{K_i\}$ is time-consuming.

# Comparison and Critique of the Models

We have discussed five models, all suited for practical use except for the Ideal Model, which can be used under ideal conditions only. Table 5 compares the precision and expense of the different models under specific conditions.

Table 5. Precision and expense of the different models.
| Model | Bias $e$ | Max variance $d$ | Iterations | % Failure rate | Expense |
|---|---|---|---|---|---|
| Round-Table | 0 | 5 | 1,000 | 4.7 | $2,400 |
| Classical | 0 | 5 | 1,000 | 2.0 | $5,022 |
| Cutoff | 0 | 5 | 1,000 | 1.8 | $1,414 |
| Advanced Round-Table | 0 | 5 | 20,000 | 1.8 | $1,120 |
| Round-Table | 0 | 7 | 1,000 | 4.8 | $10,400 |
| Classical | 0 | 7 | 1,000 | 2.3 | $5,389 |
| Cutoff | 0 | 7 | 1,000 | 4.4 | $1,661 |
| Advanced Round-Table | 0 | 7 | 10,000 | 3.9 | $1,560 |
From the table we see that when the judges' capability is high ($d = 5$), the precision of the Classical, Cutoff, and Advanced Round-Table Models is very sharp, while that of the Round-Table Model is a little lower. As far as cost is concerned, the Classical Model is the most expensive, followed by the Round-Table Model, the Cutoff Model, and the Advanced Round-Table Model. The costs of the latter two are extraordinarily low.

When the judges' capability is comparatively low ($d = 7$), the precision of every model is almost the same. The cost of the Round-Table Model becomes too high to bear, while the Cutoff Model and the Advanced Round-Table Model are both again considerably cheaper.

To get a comprehensive idea of all the models, we summarize various criteria in Table 6. The contest committee can decide which model to use based on this table and the concrete circumstances. We advise that the committee choose the best judges, even though they may cost more than others, for it would prove to be economical on the whole.

Table 6. Comparative features of the models.
| | Ideal | Round-Table | Classical | Cutoff | Advanced Round-Table |
|---|---|---|---|---|---|
| Adaptability | v. low | high | high | high | high |
| Precision | v. high | medium | high | high | high |
| Expense (large $d$) | | v. high | high | low | low |
| Expense (small $d$) | low | low | v. high | low | low |
| Complexity of initializing | low | | | medium | v. high |
| Complexity of execution | low | low | high | high | low |
| Determinacy | absolute | high | low | low | high |
| Flexibility | low | high | low | high | v. high |
# Generalization of the Model

# Different Values for the Parameters $P$, $J$, and $W$

The Classical Model can be applied directly with different parameter values. For the other models, all we need to do is to determine the parameters for the optimal scheme based on the new values of $P$, $J$, and $W$. For the Round-Table Model, the parameter is the number of times to pass the papers; for the Cutoff Model, it is the number of rejection rounds $n$; and for the Advanced Round-Table Model, it is the sequence $\{K_i\}$ of numbers of times to pass papers in each round.

All the empirical formulas and laws in our paper are deduced for the particular given values of $P$, $J$, and $W$ (100, 8, and 3), so they do not apply to a new problem automatically. However, using the methods offered in our paper combined with computer simulation, we can find new empirical formulas and laws and determine the new parameters of the optimal scheme easily and quickly.

For instance, let us take Problem B of the 1995 MCM, which had $P = 174$, $J = 12$, and $W = 4$. Let us assume that $d = e = 5$.

Using the algorithm of the Round-Table Model, we find that $n = 4$ is the best choice; the failure rate is $3.0\%$, and the expense is $13,440. For the Classical Model, we get $2.8\%$ and $21,320. For the Cutoff Model, the optimal value is $n = 4$, the failure rate is $4.2\%$, and the cost is $3,502. Finally, for the Advanced Round-Table Model, the optimal set of parameters is $K_{1} = K_{2} = K_{3} = K_{4} = K_{5} = K_{6} = 0$, $K_{7} = K_{8} = 1$; the failure rate is $3.0\%$ and the cost is $1,700.

# More Winners

Sometimes a few outstanding papers are not the only result of a contest. We may be asked to classify other papers as Meritorious, Honorable Mention, and Successful Participation, as the MCM does, in order to encourage the participants. Except for the Ideal Model, all the selection schemes discussed in our paper are suitable for this task.
Compared with the other schemes, the Cutoff Model is best, since it ranks all the papers.

# Strengths and Weaknesses

# Strengths

The Cutoff Model and the Advanced Round-Table Model successfully generate selection schemes that drastically reduce the cost of judging, and we have given practical methods for determining the optimal selection scheme for both models. Both models and their methods are easy not only to understand but also to apply in practice, and they generalize easily to most situations.

Our simulation program requires little memory and runs very fast, so it is especially suitable for practical application.

# Weaknesses

Since we rely largely on computer simulation to test our models, to select the optimal selection schemes, and to verify our laws, we cannot guarantee that our results are $100\%$ definitive. However, we ran the simulation program more than 1,000 times before drawing any critical conclusion, and the results of the simulation are very stable. We believe that all our results are reliable enough for application.

Because of the absence of relevant information, our cost function may not conform to reality.

# Modeling Better Modeling Judges

Brian E. Ellis

Chad Hall

Charles A. Ross

Gettysburg College

Gettysburg, PA 17325

Advisor: James P. Fink

# Summary

We designed a system for judging a contest of papers with two main goals in mind: to minimize the number of reads by each judge and to ensure a fair contest. We first designed a model that would best predict the choices of human judges comparing just two papers. The basic premise is that the closer two papers are in an absolute ordering, the more likely the order of the papers is to be reversed by a judge, whereas the farther apart they are, the less likely a reversal.

Our model accommodates arbitrary numbers of judges, papers, and winners. The $P$ papers are split into $S$ stacks. To ensure fairness, two judges read each stack.
From every pair of stacks, $W$ papers advance to the next round. If two judges cannot agree on which $W$ should advance, the Head Judge decides. The rounds continue until $2W$ papers remain, when a balloting process among four judges and the Head Judge determines the $W$ winners.

We can predict the total number of reads made in the judging process and the maximum number of reads by any judge. We calculate an optimal number of judges so that all judges have nearly the same number of reads.

Testing on a computer, we found that our model fails to pick $W$ out of the top $2W$ no more than $0.1\%$ of the time. These failures are attributable to the human factor in judging. For the given problem of 8 judges deciding on 3 winners from 100 papers, our model predicts, and testing confirmed, 254 total reads, with 32 papers read by each judge; the model fails to select 3 of the top 6 only $0.08\%$ of the time. For 32 judges deciding on 7 winners from 350 papers, our model predicts, and testing confirmed, 1,162 total reads, with 36 papers read by each judge; the model fails to select 7 of the top 14 only $0.01\%$ of the time.

# Assumptions with Justifications

Papers:

- Ranking: There is an absolute ordering of the papers, so we can determine whether the winning papers are within the top $2W$ papers.

- Number: The number of papers is far greater than the number of winners.

Judges:

- Knowledge: All judges are knowledgeable about the question posed and can easily determine whether a paper has merit; otherwise, a paper cannot be fairly evaluated.
- Preferences: All judges will agree on the ranking of a particular paper within some margin of error. Each judge has personal preferences about what is desirable in a paper. Also, when a judge is asked to read a large number of papers, there must be some margin of error in the ranking process.
- Ability: A judge can read up to 20 papers at a sitting and still pick out the top papers with a reasonable amount of accuracy.
In speaking with a number of professors and contest evaluators, we found that 20 is the upper bound on the number of papers that professors feel they can evaluate fairly at one time.
- Head Judge: The Head Judge only settles disputes and votes in the final round; the Head Judge is not counted in the number $J$ of judges.
- Number: The minimum number of judges is 5, including the Head Judge. There must be enough judges to evaluate all of the papers fairly; the more judges, the more accurate the process will be.

- Fairness is the ultimate consideration. In any contest, judges must be willing to sacrifice time and energy to ensure that the best papers will win the contest. The credibility of the contest rests on the fairness and correctness of the judging.

# Definitions of Constants and Terms

$P$: total number of papers

$J$: total number of judges, not including the Head Judge

$J_{k}$: representation of judge $k$

$W$: total number of winners

read: one judge reading one paper one time

round: a process of elimination in which a set of papers is cut to $W$ papers

$R_{a}$: the representation of round $a$

$S_{a}$: the number of stacks in round $a$. A stack is a set of papers of size $< P$.

$N$: the number of papers in a stack

$S_{jk}$: representation of stack $j$ in round $k$

error: a judge's ordering that contradicts the absolute ordering

# The Paper Contest Model

The model begins by dividing the $P$ papers into $S$ stacks. Judges then perform an elimination round in which two judges work together to combine two stacks into one stack of $W$ papers. The comparisons are made by rank ordering, using no numerical scoring system. The process is repeated for the new stacks until two stacks are left. The final round then enacts a voting process on the last two stacks to declare the winners.

# Preliminaries

We first determine the number of stacks, $S_{1}$, needed for the first round.
To ensure a symmetric elimination, we need $S_{1}$ to be a power of 2. By our assumptions, each judge can read up to 20 papers, so the size of a stack cannot exceed 20. The number of papers in each stack is $N = P / 2^{n}$, where $n$ is the smallest value that satisfies

$$
N = \frac{P}{2^{n}} \leq 20.
$$

If $2^n$ does not divide $P$ evenly, $N$ is rounded up. The papers are distributed as evenly as possible among the $S_1$ stacks. We assign each judge one stack until we run out of either stacks or judges. If we run out of judges, then some judges will be asked to repeat the first round.

# First Round

Judges $J_{1}$ and $J_{2}$ are assigned stacks $S_{11}$ and $S_{21}$. Judge $J_{i}$ chooses $W$ papers from stack $S_{i1}$ (choosing exactly $W$ ensures that none of the $W$ best papers can be eliminated in round $R_{1}$). Once done, they swap stacks. Judge $J_{1}$ then chooses $W$ from $S_{21}$, while $J_{2}$ chooses $W$ from $S_{11}$. Together, they compare their lists and determine $W$ from the union of $S_{11}$ and $S_{21}$. If there is a dispute, the Head Judge determines which paper advances. Each pair of stacks is cut to $W$ papers in the same manner. At the completion of the first round, there are $S_{2} = 2^{n - 1}$ stacks of $N = W$ papers each.

# Why Choose $W$ Every Time?

The scenario could arise in which the top $2W$ papers fall into one stack in any round. If we return any fewer than $W$ papers, the model would automatically fail. To return more than $W$ papers would increase the stability of the model, but not to a degree that would warrant the increased number of reads required.

# Second and Subsequent Rounds

There will be $n - 2$ "middle" rounds (see Appendix A). The procedure for these rounds can be generalized with the introduction of a variable $r$ that holds the value of the round number. At the beginning of $R_{r}$, we have $S_{r} = 2^{n - r + 1}$ stacks of $N = W$ papers each.
The next two available judges are assigned the stacks $S_{1r}$ and $S_{2r}$. Each chooses $W$ papers from the union of stacks $S_{1r}$ and $S_{2r}$, and then they agree upon a final $W$ to advance, with the Head Judge settling any disputes. Every pair of stacks is cut to $W$ papers in the same manner. This is repeated round by round up to and including round $R_{n - 1}$, at the completion of which there will be $2W$ papers remaining.

# Final Round

The final round, $R_{n}$, is a voting process. To ensure fairness and to account for the importance of the final decision, we choose five judges, including the Head Judge, to evaluate the papers. These judges read the remaining $2W$ papers and rank-order them. An official, possibly an extra judge, tallies the votes, giving $W$ points to first place, $W - 1$ points to second place, and so forth, down to 1 point for place $W$. The $W$ papers receiving the most points are the winners. If there are any ties in the points, the ballot of the Head Judge breaks the tie.

# Human Factor

The one variable that this or any model cannot control is the human factor. We simulate this human factor by using a probability distribution that models what an actual judge may do. If all the judges were exactly the same, paper 1 would always be ranked ahead of paper 2. However, individual judges have preferences about what they would like to see in a paper. The most common example is one judge who weights presentation over substance while another judge rates substance over presentation. In this case, paper 2 could easily be rated above paper 1. To model this factor, we chose the following function as the probability that a judge's ranking of two papers differs from the absolute ordering:

$$
E(P, d) = \frac{1.46 + \arctan(1 - 60d/P)}{2.92 + \frac{\pi}{2}},
$$

where there are $P$ papers in the contest and $d$ is the distance between the two compared papers on the absolute scale.
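As a numerical check on this error function, the formula can be evaluated directly. A minimal sketch (the helper name `reversal_probability` is ours; the raw value is clamped to $[0, 1]$, since the formula dips slightly below 0 once $d$ exceeds about $0.17P$):

```python
import math

def reversal_probability(P, d):
    """E(P, d): probability that a judge reverses two papers whose true
    ranks differ by d, in a contest of P papers."""
    raw = (1.46 + math.atan(1 - 60 * d / P)) / (2.92 + math.pi / 2)
    return min(max(raw, 0.0), 1.0)  # clamp: raw is slightly negative for large d
```

For $P = 100$, this gives about 0.41 for adjacent papers ($d = 1$), decays quickly, and is exactly 0 for $d \geq 17$, matching the curve described below.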

![](images/574399dc0f082587d9928a4a17651c54219ad26b1e55daeeac84585ef3b15583.jpg)
Figure 1. The operating characteristic curve for a judge's ranking of two papers. (Note that this is not a probability density function.)

The formula gives the probability of the judge making an error as a function of the true difference in ranks between two papers. As the distance between two ranks increases, the probability of reversing the papers decreases quickly. The probability of error when there is a $0.01P$ difference between two papers is approximately $50\%$. So, the choice between papers 5 and 6 for $P = 100$ is essentially random. The probability of error when there is more than a $0.17P$ difference is 0. In this case, the magnitude of the difference between the papers is too great; it would be impossible to err in the comparison. The values for the probability of error between $0.01P$ and $0.17P$ are representative of real life: the closer two papers are, the more likely a judge's personal preferences of style will influence the ordering of the papers. Similarly, the farther apart two papers are, the less likely the judges' preferences will be able to affect their comparison.

# Results

# Total Reads

The total number of reads, excluding arbitrations by the Head Judge, is given by

$$
2P + \sum_{i = 2}^{n - 1} 2^{n - i + 2} W + 5(2W).
$$

The first term is the number of reads in $R_{1}$, the second takes care of rounds $R_{2}$ through $R_{n - 1}$, and the third is for $R_{n}$ (see Appendix A).

# Number of Judges

The model requires five judges, including the Head Judge. The model can accommodate any $J \geq 4$, but there is an optimal number of judges $J_O$ that minimizes the maximum number of reads per judge. That optimal number of judges equals $S_1$, not counting the Head Judge. All $J_O$ judges are needed in $R_1$, with one-half used in $R_2$, one-fourth in $R_3$, and so on.
We use each judge in the first round and in one subsequent round, leading to nearly the same number of reads for every judge (see Appendix A).

# Maximum Reads per Judge

If $J \geq J_O$, the maximum number of reads is

$$
2 \left\lceil \frac{P}{2^{n}} \right\rceil + 2W.
$$

If $J < J_{O}$, the maximum number of reads can be very large, even unreasonably large. In this situation, some of the $J$ judges will be required to read two or more pairs of stacks in round one. These judges will already have to read at least 40 papers, and possibly more, before the second and subsequent rounds have even started. If a $J < J_{O}$ is chosen, $J$ must be close to $J_{O}$ or there will be many unhappy judges.

# Testing the Model

We implemented the model in the C programming language, making some minor additional assumptions (see Appendix B).

We then ran tests for various combinations of $P$ and $W$, always using the optimal number $J_{O}$ of judges. We did 10,000 iterations each for the cases in Table 1. The test data returned an average failure rate of $0.023\%$.

Table 1. Combinations of number of papers $P$ and number of winning papers $W$, for each of which 10,000 iterations were run.

| $P$ | $W$ | percentage failure | total reads | max reads per judge |
|-----|-----|--------------------|-------------|---------------------|
| 50  | 2   | 0.02%              | 104         | 26                  |
| 100 | 3   | 0.08%              | 254         | 32                  |
| 200 | 5   | 0.01%              | 570         | 36                  |
| 350 | 7   | 0.01%              | 1,162       | 36                  |
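The read-count formulas can be checked against Table 1 numerically. A minimal sketch (the function name `judging_reads` is ours), assuming $n$ is the smallest integer with $P/2^n \leq 20$; it reproduces the rows of Table 1 with $P \geq 100$:

```python
import math

def judging_reads(P, W):
    """Return (total reads, max reads per judge), assuming the optimal
    number of judges J_O = 2**n is used."""
    n = 1
    while P / 2**n > 20:       # smallest n with stack size at most 20
        n += 1
    middle = sum(2**(n - i + 2) * W for i in range(2, n))  # rounds R_2 .. R_{n-1}
    total = 2 * P + middle + 5 * (2 * W)
    per_judge = 2 * math.ceil(P / 2**n) + 2 * W
    return total, per_judge
```

For example, `judging_reads(100, 3)` returns `(254, 32)`.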

The model is valid: it agrees with the formulas for the total number of reads and the maximum number of reads per judge, and, most important, the final $W$ papers are consistently among the top $2W$. The small failure rate is attributable to the human factor. Whenever the human element is involved, rare failures are bound to occur.

# Strengths and Weaknesses of the Model

# Strengths

- The probability of the model failing is extremely low, usually less than $0.1\%$.
- The model takes into account the possibility of human error.
- All judging is done via direct comparison, and at least two judges must concur for a paper to advance. There is no numerical scoring, which can be biased by the grading scale of the judge, so an error by a judge has less of a chance of advancing an unworthy paper.
- Our model performs extremely well for the example posed in the original question ($P = 100$, $W = 3$, and $J = 8$) (see Figure 2). It fails only $0.08\%$ of the time, while limiting the judges to 32 reads each (one-third of the total number of papers) and the total reads to 254.
- Most important, we would feel very comfortable using the model to judge our paper in the 1996 MCM. The model is fair. The top papers win virtually every time.

# Weaknesses

- The model has definite bounds of effectiveness. We have set the bound on the number of papers that a judge can read at one sitting at 20. After the first round, judges read $2W$ papers each round. Thus, the number of winners must be less than or equal to 10. Allowing $2\%$ of all papers to be winners, the total number of papers must be less than or equal to 500. A possible solution for large $P$ is to break the contest into two halves of fewer than 500 papers each and run the model for each half.
- We had to model the human factor, which is difficult. The data on which we based the curve were our best estimate of human nature. We did not have data to consult to see how humans would actually perform under these circumstances.
All of our testing and the validity of our results are based upon the assumption that our equation is actually representative of what occurs in the real world. If further research deems that equation to be inaccurate, it will be easy to adjust our model to a new equation.

# Appendix A: Rationale and Proofs

# There Are $n - 2$ Middle Rounds

In each round, exactly $W S_{r + 1}$ papers advance into the next round, where $S_{r + 1} = 2^{n - r}$. This is true because the number of stacks is halved with each successive round, and there are $W$ papers in each new stack. The final round begins when only $2W$ papers remain. We are in the final round when $2^{n - r} = 1$, or $2^{n} = 2^{r}$. So $n = r$, and the total number of rounds is $n$, including the first round and the final round. Hence there are $n - 2$ middle rounds.

![](images/70bf3343f8781c6921001b539dbdd3528e3a47e7cc8d247fe723e892bbc5ed35.jpg)
Figure 2. Diagram of the operation of the model for the original setting of $P = 100$, $W = 3$, and $J = 8$.

# Total Number of Reads

In $R_{1}$, every paper is read twice, yielding $2P$ reads. In the middle rounds, there are $2^{n - r + 1}$ stacks of $W$ papers each. Each stack is read twice, yielding $2^{n - r + 2}W$ reads per round, for $n - 2$ rounds. Round $R_{n}$ has 5 judges, each reading the final $2W$ papers. Thus, the sum of the reads for all rounds is

$$
2P + \sum_{i = 2}^{n - 1} 2^{n - i + 2} W + 5(2W).
$$

# Rationale for Optimal Number of Judges, $J_{O}$

We would like each judge to read approximately the same number of papers. This happens in the first round (some judges may read one additional paper if the stacks cannot be divided evenly). In each successive round, the number of papers read by a judge is $2W$. If the number of judges is $2^{n}$, each judge is guaranteed exactly 2 rounds of judging. The number of judges needed is halved with each subsequent round.
Thus, there will always be 4 judges for $R_{n - 1}$, leaving 4 judges who have yet to judge a second time. These four judges plus the Head Judge make up the 5 who judge the final round. If $J_O$ judges are used, each judge reads nearly the same number of papers, since every judge reads in the first round and one subsequent round.

# Maximum Number of Reads for a Judge

If $J = J_{O}$, each judge reads in exactly 2 rounds: the first round, with approximately

$$
2 \left\lceil \frac{P}{2^{n}} \right\rceil
$$

papers, and either a middle or a final round, with $2W$ papers. Thus, the maximum number of reads is

$$
2 \left\lceil \frac{P}{2^{n}} \right\rceil + 2W.
$$

If $J > J_{O}$, some judges read in only one round. So long as $J < 2J_{O}$, at least one judge will read in two rounds, so the maximum number of reads is still the same as above.

If $J < J_{O}$, some judges read more than one pair of stacks in the first round, possibly many pairs; the maximum number of reads could become very large.

# Appendix B: The Computer Test Model

In using a computer to test a model, we must make some assumptions about human behavior that can be implemented by the computer. For the most part, our assumptions about human behavior are taken care of in the equation for the error factor. The other assumptions that we make are:

- Judges always require a third party to settle any disputes. It is very difficult to implement the power that persuasion would have in a discussion between two judges.
- Judges are reasonable. If a judge has read two papers earlier and reads them again, they will again receive the same relative ranking, except possibly in the last round, which allows for more scrutiny of each paper.
- There is no Head Judge in the program. The person who settles any disputes is simply the next available judge who has never before compared the papers.
- The optimal number of judges is always used.

# Judging a Mathematics Contest

Daniel A.
Calderón Brennan

Philip J. Darcy

David T. Tascione

St. Bonaventure University

St. Bonaventure, NY 14778

Advisor: Albert G. White

# Overview

Our model is based on breaking the problem down into four main areas and dealing with each: the distribution of papers among the judges, scoring methods, the number of papers to eliminate per round and the number of rounds, and the performance of the model with larger numbers of papers.

In each component, we focused on the goals of maintaining fairness and variety in all judging procedures, eliminating as many papers as possible in each round, minimizing the number of rounds, and, most important, seeing that no one goal was attained at the expense of any other.

Papers are identified by code number and sent on from judge to judge, so that no judge has any prior knowledge of a particular participant or paper. There is no way to avoid having a judge read a paper twice, but a judge will not read a paper twice in a row until perhaps the final two rounds.

# Assumptions

- Budget constraints affect only the number of judges.
- Time constraints affect only the number of papers that each judge can read.
- An approximate "absolute" ranking system exists among the judges; i.e., if every paper were to be scored or ranked by each judge, the results of each judge would generally agree with every other (allowing for a few places where consecutive papers may be "flip-flopped").
- All papers are eligible to win (none disqualified for cheating, missing sections, etc.).
- Judges need not be in the same location, but a copy of each paper (electronic or hard copy) is readily available to each judge.
- Judges remain ignorant of other judges' opinions on all papers.

- There is no way to avoid having a judge read a paper twice (nonconsecutively) during the reading process, but re-reading has no effect on judges' opinions of papers (i.e., a judge rates a paper as effectively on a second read as on the first).

- A minimum of 5 judges per 100 papers is needed.

# Developing the Model

- Paper distribution:

- Maintain judges' ignorance of other judges' opinions of papers.
- Distribute papers so that the top $2W$ (6) are kept in competition throughout the contest.
- Distribute papers such that no judge sees the same paper in consecutive rounds until the final two rounds (this ensures efficiency in judging and fairness, as multiple opinions of papers are necessary).
- Computer distribution both frees one human to judge instead of distributing papers and accomplishes the above tasks.

- Equating numerical judging systems:

- Deals with the possibility of systematic bias in the use of a numerical scoring scheme.
- For each judge in the numerical scoring round, we adjust scores so that the highest score is equated to $100\%$ and the others are adjusted proportionally; for example, scores of 92, 86, 89, 79 are adjusted to the same values divided by 92.
- Given our assumption of an absolute ranking scheme and that a numerical scheme will closely follow it, this method in essence allows us to put a numerical value on a judge's ranking so that it may be compared with the ranks of other judges.

- Cuts at the end of each round; number of rounds; total papers read per judge:

- As few rounds as possible (time/budget constraints).
- As few reads per judge as possible (time/budget constraints).
- Keep the top $2W$ papers intact.
- Cut approximately $44\%$ of the papers remaining in each round. When possible, leave a multiple of $J$ intact, so that only in the first round do some judges read more than others. In the first round, do not cut to fewer than $2W + 1$ papers, in case the best $2W$ papers go to the same judge. After the first round, the distribution of papers will prevent this problem from occurring again (fairness).
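The proportional score adjustment described above can be sketched in a few lines (the helper name `normalize_scores` is ours):

```python
def normalize_scores(raw):
    """Rescale one judge's raw scores so the top score maps to 100%
    and every other score keeps its proportion of the top score."""
    top = max(raw)
    return [score / top for score in raw]
```

For the example above, `normalize_scores([92, 86, 89, 79])` gives each score divided by 92, i.e. roughly 1.00, 0.93, 0.97, 0.86.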

# Methods for Distribution

The method for redistributing the papers after the first round is based upon a matrix created by the judges, each of whom enters in a column, from top to bottom, the numbers of the papers read, in decreasing order of rank. Then the first row consists of the highest-ranked papers of each judge.

Using falling diagonals of the matrix, it is possible to ensure that in the next round no judge receives any paper just read in the previous round. Further, it is possible to ensure that one judge does not receive all $2W$ top papers, which would force some of them to be cut in the next round. The matrix below, in which column $j$ holds the ranked papers of judge $J_j$, illustrates the method for redistributing the papers:

$$
\begin{array}{cccccccc}
J_1 & J_2 & J_3 & J_4 & J_5 & J_6 & J_7 & J_8
\end{array}
$$

$$
\left( \begin{array}{cccccccc}
a_{11} & a_{12} & a_{13} & a_{14} & a_{15} & a_{16} & a_{17} & a_{18} \\
a_{21} & a_{22} & a_{23} & a_{24} & a_{25} & a_{26} & a_{27} & a_{28} \\
a_{31} & a_{32} & a_{33} & a_{34} & a_{35} & a_{36} & a_{37} & a_{38} \\
a_{41} & a_{42} & a_{43} & a_{44} & a_{45} & a_{46} & a_{47} & a_{48} \\
a_{51} & a_{52} & a_{53} & a_{54} & a_{55} & a_{56} & a_{57} & a_{58} \\
a_{61} & a_{62} & a_{63} & a_{64} & a_{65} & a_{66} & a_{67} & a_{68} \\
a_{71} & a_{72} & a_{73} & a_{74} & a_{75} & a_{76} & a_{77} & a_{78}
\end{array} \right)
$$

From the diagonals of the matrix that run downward from left to right (wrapping around at the last column), we get the assignments of papers to judges for the next round:

$$
\begin{array}{lccccccc}
J_1\colon & a_{12} & a_{23} & a_{34} & a_{45} & a_{56} & a_{67} & a_{78} \\
J_2\colon & a_{13} & a_{24} & a_{35} & a_{46} & a_{57} & a_{68} & a_{71} \\
J_3\colon & a_{14} & a_{25} & a_{36} & a_{47} & a_{58} & a_{61} & a_{72} \\
J_4\colon & a_{15} & a_{26} & a_{37} & a_{48} & a_{51} & a_{62} & a_{73} \\
J_5\colon & a_{16} & a_{27} & a_{38} & a_{41} & a_{52} & a_{63} & a_{74} \\
J_6\colon & a_{17} & a_{28} & a_{31} & a_{42} & a_{53} & a_{64} & a_{75} \\
J_7\colon & a_{18} & a_{21} & a_{32} & a_{43} & a_{54} & a_{65} & a_{76} \\
J_8\colon & a_{11} & a_{22} & a_{33} & a_{44} & a_{55} & a_{66} & a_{77}
\end{array}
$$

The judges rank-order the newly distributed papers, and again a matrix is used to redistribute the papers that make the cut:

$$
\left( \begin{array}{cccccccc}
a_{11} & a_{12} & a_{13} & a_{14} & a_{15} & a_{16} & a_{17} & a_{18} \\
a_{21} & a_{22} & a_{23} & a_{24} & a_{25} & a_{26} & a_{27} & a_{28} \\
a_{31} & a_{32} & a_{33} & a_{34} & a_{35} & a_{36} & a_{37} & a_{38} \\
a_{41} & a_{42} & a_{43} & a_{44} & a_{45} & a_{46} & a_{47} & a_{48}
\end{array} \right)
$$

Again, diagonals are used to redistribute four papers to each judge.

$$
\begin{array}{lcccc}
J_1\colon & a_{12} & a_{23} & a_{34} & a_{45} \\
J_2\colon & a_{13} & a_{24} & a_{35} & a_{46} \\
J_3\colon & a_{14} & a_{25} & a_{36} & a_{47} \\
J_4\colon & a_{15} & a_{26} & a_{37} & a_{48} \\
J_5\colon & a_{16} & a_{27} & a_{38} & a_{41} \\
J_6\colon & a_{17} & a_{28} & a_{31} & a_{42} \\
J_7\colon & a_{18} & a_{21} & a_{32} & a_{43} \\
J_8\colon & a_{11} & a_{22} & a_{33} & a_{44}
\end{array}
$$
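The falling-diagonal rule generalizes: judge $J_k$ receives, from each row $i$ of the matrix, the paper in column $((k + i - 1) \bmod J) + 1$ (1-based), so a judge never draws from his or her own column as long as there are fewer rows than judges. A minimal sketch (the helper name `redistribute` is ours; rows and columns are 0-based internally):

```python
def redistribute(matrix, J):
    """Assign papers along falling (wrapping) diagonals: judge k receives
    one paper from each row, stepping one column to the right per row."""
    return {
        k: [row[(k + i) % J] for i, row in enumerate(matrix)]
        for k in range(1, J + 1)
    }
```

With the $4 \times 8$ matrix above, `redistribute(m, 8)[1]` yields $a_{12}, a_{23}, a_{34}, a_{45}$, while `[8]` yields the main diagonal $a_{11}, a_{22}, a_{33}, a_{44}$.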

The next round of judging results in only two papers being passed by each judge. For this round, we convert the $2 \times 8$ matrix of the papers that pass this cut into a $4 \times 4$ matrix as follows:

$$
\left( \begin{array}{cccccccc}
a_{12} & a_{13} & a_{14} & a_{15} & a_{16} & a_{17} & a_{18} & a_{11} \\
a_{23} & a_{24} & a_{25} & a_{26} & a_{27} & a_{28} & a_{21} & a_{22}
\end{array} \right)
\quad \longrightarrow \quad
\left( \begin{array}{cccc}
a_{12} & a_{13} & a_{14} & a_{15} \\
a_{16} & a_{17} & a_{18} & a_{11} \\
a_{23} & a_{24} & a_{25} & a_{26} \\
a_{27} & a_{28} & a_{21} & a_{22}
\end{array} \right)
$$

We split each row between columns 4 and 5 to create four rows of four elements. The papers are once again redistributed among the judges for numerical scoring. We must distribute each paper to two judges, according to our judging scheme, and we have devised a distribution method that ensures that no judge shares more than one paper to be read with any other (so that each paper may be numerically scored and therefore weighted against the widest possible range of papers allowable; in this case, each paper is scored against seven others). We do this first by adopting the diagonal distribution scheme that we have used throughout this problem and assigning the groups of four papers to four of the judges. Then, once we have distributed each paper once, we distribute the papers a second time by assigning each of the next four judges one column of the matrix as it stands above. In this case, we really cannot prevent a judge from reading the same paper as in the previous round, but the different scoring system calls for a more in-depth look at the papers anyway, and previous knowledge of a paper should not hinder the fairness of the distribution/scoring scheme.

In the final round, each judge reads the final eight papers and ranks them.
The papers are then scored according to the low-rank-sum method described in the Judging Methods section. The top three papers, the ones with the lowest rank sums, are the winners.

# Number of Rounds, Papers Read per Judge, Paper Elimination

We must limit the number of rounds of judging and the number of papers read by each judge by eliminating the greatest possible number of papers per round, while protecting papers that have an earnest chance of winning from being eliminated in a round in which too many papers are cut. This amounts to protecting the "best" $2W$ papers until they may compete against each other in the final round of judging.

In the first round, we call for elimination of only so many papers as to leave $2W + 1$ papers from each judge remaining in the competition. This ensures against the unlikely occurrence that a single judge receives the top $2W$ papers in the initial round. After this round, the methods of paper distribution prevent this from happening. A maximum of $W$ papers may be given to a single judge in the second round, which calls for elimination of all but $W + 1$ papers; and in the third round, elimination of one-half of the remaining papers is allowable, since by this time the distribution scheme has spread out the $2W$ "best" papers enough to protect them from being eliminated.

Now that the papers have been thinned out, and the "cream" has been allowed to rise to the top, we may begin more forceful elimination measures. We make the fourth round one in which the judges assign numerical grades to papers, with only the top $2W + 2$ papers surviving elimination. The scoring procedure is discussed in the Judging Methods section; it suffices to say here that the judging saves the top papers from being eliminated while allowing a drastic reduction in the number of papers remaining.

With only $2W + 2$ papers left in the running after the fourth round, we can have each judge read each of them and rank them in order, with the $W$ papers ranked consistently highest emerging as the victors.

The judges have been subjected to the fewest possible number of rounds and the fewest possible number of papers to read.

# Judging Methods

Our model uses two methods of judging, rank-ordering and numerical judging. Rank-ordering offers the advantage that there should not be any bias to interfere with the ranking. We base this on the assumption of an absolute ranking system, i.e., if the judges were to read all the papers, they would agree on an absolute ranking of the papers (with some reversals of consecutive papers). The ranking of any subset of papers must also conform to the absolute ranking; that is, the process of ranking must be order-preserving. The disadvantage of rank-ordering is that if a judge were to receive all the top papers, some of the top papers would get cut.

The other method, numerical grading, has the disadvantage of allowing for systematic bias. If a judge on average gives lower scores, our adjustment procedure compensates.

Since we use both methods of judging, we developed corrective measures for both to prevent these problems. The simpler of the two methods to correct is the rank-ordering method. We base the redistribution of papers that passed the cut on the rank that they had in the previous cut, ensuring that the best $2W + 1$ papers pass the first rank cut. After that cut, the redistribution prevents any judge from receiving a majority of the top $2W$ papers until the numerical grading round.

To prevent bias from removing one of the top $2W$ papers in the numerical round was a difficult task. We decided finally to base the numerical grading on a percentage curve based on the grades that the judges assign to all the papers.
The top-scoring paper in a given judge's group determines the $100\%$ paper, with each of the remaining papers assigned its curved percentage of the top score. The distribution of the papers in the scoring round ensures that any two judges will have only one paper in common and that every paper is graded twice. Therefore, every paper will have two "curved" percentage grades. These two grades are averaged together, yielding an overall weighted percentage ranking. The top $2W + 2$ papers pass this cut, a further correction to ensure that the top $2W$ papers pass into the final round.

The eight papers in the final round are scored by using low-rank scoring. Each of the judges assigns each paper a rank, from 1 (outstanding) through 8 (poor). At the end of the final round, the ranks for each paper are added together, and the papers with the lowest rank sums win.

# Generalizing the Model

Since we have assumed that the ratio of judges to papers is at least .05, we may test our model's growth rate, in terms of reads per judge and percentage of total papers read per judge, for larger ratios. We examined the changes as the number of papers doubles from 100 to 200, and then to 400, for ratios of .05 and .10. We assumed that each time $P$ doubles, so does $W$.

Table 1 gives the details of our analysis for a variety of different situations. For a ratio of .10, we see a tremendous increase in efficiency over the situation for a ratio of .05! Each judge reads about one-third of the total papers instead of more than half.

# Demonstration of the Model

[EDITOR'S NOTE: At this point the authors illustrate in detail the application of their model to two sets of data devised by them, each with $P = 100$, $J = 8$, and $W = 3$. In the first example, the top three papers in fact win. The second example shows what happens when one judge receives all $2W$ top papers in the first round: The top three papers still win.
For reasons of space, we omit these extended examples.] + +Table 1. +Analysis of contest situations for ratios of judges to papers of .05 and .10. + +
| Ratio | $P$ | $J$ | $W$ | Round | Papers | Papers/judge | Method | Eliminated/judge |
|-------|-----|-----|-----|-------|--------|--------------|--------|------------------|
| .05   | 100 | 5   | 3   | 1     | 100    | 20           | rank   | 8 |
|       |     |     |     | 2     | 60     | 12           | rank   | 4 |
|       |     |     |     | 3     | 40     | 8            | rank   | 4 |
|       |     |     |     | 4     | 20     | 8            | score  | bottom 14 averages |
|       |     |     |     | 5     | 8      | 8            | rank   | 5 highest rank sums |
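The curved-percentage grading and the final low-rank scoring described above can be sketched as follows. This is a minimal illustration, not the authors' program; the paper names and scores in the usage below are hypothetical.

```python
def curved_averages(raw):
    """raw: {judge: {paper: numeric grade}}.
    Each judge's top paper sets the 100% mark; every other paper in that
    judge's group gets its percentage of the top score. Since every paper
    is read twice, its two curved grades are averaged."""
    curved = {}
    for scores in raw.values():
        top = max(scores.values())
        for paper, s in scores.items():
            curved.setdefault(paper, []).append(100.0 * s / top)
    return {p: sum(g) / len(g) for p, g in curved.items()}

def low_rank_winners(rankings, W):
    """rankings: one {paper: rank} dict per judge, rank 1 (outstanding)
    through 8 (poor); the W papers with the lowest rank sums win."""
    totals = {}
    for r in rankings:
        for paper, rank in r.items():
            totals[paper] = totals.get(paper, 0) + rank
    return sorted(totals, key=totals.get)[:W]
```

Note how the curve removes systematic bias: a judge who grades harshly still awards $100\%$ to the top paper of that judge's own group, so only relative standing within a group matters.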
# Strengths and Weaknesses

# Strengths

- Paper distribution: The distribution method keeps the required $2W$ papers in circulation while eliminating the maximum number of papers at each round. It also allows a wide variety of judges' opinions to be given on the top $2W$ papers.
- Judging methods: The numerical grades overcome systematic bias by the judges. The rank-ordering allows the maximum number of papers to be culled in early rounds while preserving the top $2W$ papers.
- Elimination methods: We minimize the number of rounds and the number of papers judges must read, while maintaining fairness.
- Growth rate: For a given ratio of papers to judges, as the number of papers increases, the percentage of papers read per judge diminishes rapidly.

# Weaknesses

- Paper distribution: In the worst-case scenario, we cannot prevent a single judge from receiving all of the top $P / J$ papers and eliminating the bottom portion of those.
- Judging methods: The numerical grading adjustments require redundant readings, and extra time is consumed in applying the adjustments. Further, a less-qualified paper may receive a higher weighted average than a paper with a higher absolute ranking, if the lower-ranked paper is judged against papers with even lower ranks and the higher-ranked paper is judged against papers with even higher ranks.
- Elimination methods: Entering the final round, too few papers may be eliminated. This apparent inefficiency is necessary to carry the top $2W$ papers into the final round.
- Growth rate: With a low judge-to-total-papers ratio and a relatively low number of papers, the percentage of total papers read by each judge is quite high. The total number of papers read per judge increases even as the percentage drops.

# References

Decker, Rick, and Stuart Hirshfield. 1995. The Object Concept: An Introduction to Computer Programming Using $C++$. Boston, MA: PWS Publishing. 
The selection sort that we used comes from this book.

# Select the Winners Fast

Haitao Wang

Chunfeng Huang

Hongling Rao

Center for Astrophysics

University of Science and Technology of China

Hefei, China

Advisor: Qingjuan Yu

# Summary

Assuming that judges are ideal, we provide a model to determine the top $W$ papers in almost the shortest time. We use a matrix to record the orderings that we get from judges, and we reject as many papers as possible after each round.

We then consider real-life judges and estimate the probability that the final $W$ papers contain a paper not among the best $2W$ papers.

Furthermore, considering the possibility of systematic bias in a scoring scheme, we improve the model by using a Bayesian estimation method, which makes it possible to some extent to compare different judges' scores.

We performed many computer simulations to test the feasibility of our model. We find that our model would be improved by increasing the number of papers selected from the first round. We also made a stability analysis by altering $P$, $J$, and $W$, and obtained an empirical formula that predicts the total time of judging.

We used data from real life to test our model and got a perfect result: For $P = 50$, $J = 3$, and $W = 2$, we got the first- and third-best papers with our scheme; with $W = 3$, we got the top three papers.

We conclude by summarizing a practicable and flexible scheme, offering some suggestions, and estimating the budget with an empirical formula.

# Assumptions

- The judges are equal. None is more authoritative than the others.
- When a judge is evaluating a paper, the judging result is not influenced by adventitious factors, such as taking bribes.
- The time that a judge takes is proportional to the number of papers to read.
- There exists an objective criterion with which we can tell which of two papers is "better." 
Therefore, we can use an absolute rank-order or absolute scores to describe the quality of the papers measured by the criterion.
- The absolute rank-order is transitive: If $A$ is better than $B$ and $B$ is better than $C$, we can say $A$ is better than $C$.

# Analysis of Problem

Our primary goal is to include the top $W$ papers among the "best" $2W$ papers.

A subsidiary goal is that each judge read the fewest possible number of papers. We interpret this goal in two ways:

- It is the duration of the whole judging process, the total time for all rounds, that is constrained by funding. Since the time for a round is how long it takes the judge who has the most papers to read them, it is wise to distribute the papers to the judges as evenly as possible in each round.
- We want to get as much information as possible from each reading.

The two usual methods of judging are rank-ordering and numerical scoring. Systematic bias is possible in a scoring scheme; that is, each judge may have a subjective tendency in scoring, which results in incomparability among scores given by different judges. However, it is reasonable to believe that the scores that the same judge gives to different papers are comparable, even if they are obtained in different rounds. Therefore, compared with a rank-ordering method, scoring is a more meaningful way to record the results for papers judged in earlier rounds. We use a scoring scheme instead of a rank-ordering scheme in our later model, so a paper need not be read more than once by the same judge. Note that we do not compare the scores of different judges directly; that is, we mainly use scores to obtain a rank-ordering.

We first consider the simplified problem with the significant assumption that the ordering from each judge's evaluation coincides with the absolute ordering. In this event, we can definitely find the best $W$ papers. Furthermore, we can optimally adjust the allocation of papers in every round to get an efficient scheme. 
+ +But judges in real life cannot rate the papers with perfect precision. For example, a paper with absolute rank 7 (we denote it $P_{(7)}$ ) may get a higher score than $P_{(6)}$ from a judge. We call that misjudgment. Misjudgments prevent us from getting the best $W$ papers, so their effect must be taken into account. + +There are also subjective differences among the scorings of different judges. For example, for the same two papers, one judge may give 80 and 83, while another gives 65 and 72. If we know the distribution of each judge's scores, we can to some extent compare scores given by different judges. The real distribution for each judge is unknown, so we have to use estimates. + +Table 1. +Notation. + +
| Symbol | Meaning |
|--------|---------|
| $P$ | total number of papers |
| $W$ | number of winners |
| $J$ | number of judges |
| $T$ | total judging time (or number of papers that can be judged in that time) |
| $P_i$ | paper $i$ |
| $P_{(i)}$ | paper with the absolute rank of $i$ |
| $S_i$ | the absolute score for paper $i$ |
| $P_i > P_j$ | paper $i$ is better than paper $j$ in absolute rank-order |
| $P_i(A) > P_j(A)$ | paper $i$ is better than paper $j$ in judge $A$'s opinion |
| $R_i$ | number of papers currently known to be better than $P_i$ |
| ORD | matrix of currently known relations between pairs of papers |
| $\lceil x \rceil$ | the smallest integer not less than $x$ |
| $N(\mu_0, \sigma_0^2)$ | normal distribution with mean $\mu_0$ and standard deviation $\sigma_0$ |
| $\sigma_1$ | standard deviation of the judges' scoring |
| $\mu_j, \sigma_j$ | mean and standard deviation of judge $j$'s scoring |
| $\hat{\mu}_j, \hat{\sigma}_j$ | estimated values of $\mu_j, \sigma_j$ |
| $P_{\text{error}}$ | probability of an error occurring |
+ +# Design of the Model + +# Top $W$ in the Least Time + +Ideally, the ordering in each judge's opinion coincides with the absolute ordering, expressed mathematically by + +$$ +P _ {i} (A) > P _ {j} (A) \Leftrightarrow P _ {i} > P _ {j}. +$$ + +So, based on the transitivity of the absolute score, if $P_{i}(A) > P_{j}(A)$ and $P_{j}(B) > P_{k}(B)$ , we can say that $P_{i} > P_{k}$ . + +To find the top $W$ as soon as possible, as many papers as possible should be rejected after each round. So if there are $W$ papers or more in the current paper pool that are better than $P_{i}$ , reject $P_{i}$ . + +In the first round, $P$ papers are dispatched to $J$ judges evenly. After performing the above rejection rule, each judge selects $W$ papers. In later rounds, how do we dispatch the remaining $W \cdot J$ papers to judges to obtain the greatest number of new orderings from each round? + +Let us consider the simple case in which $W = 2$ , with $P_{1} > P_{2}$ and $P_{3} > P_{4}$ known after the first round. If we compare $P_{2}$ with $P_{4}$ (or $P_{1}$ with $P_{3}$ ), no matter what the result is, we can always gain an extra relation (if, say, $P_{2} > P_{4}$ , the extra relation is $P_{1} > P_{4}$ ). If we compare $P_{1}$ with $P_{4}$ (or $P_{2}$ with $P_{3}$ ), on some occasions no extra relations can be obtained. A similar result holds for $W = 3$ . + +This example indicates a fact: If we use current rank $R_{i}$ to denote the number of papers known to be better than $P_{i}$ from current known information, we should try to distribute the papers with close current rank to the same judge in order to get more relations in a round. + +Use matrix ORD to describe known orders. 
Define $\mathrm{ORD}_{ij}$ by

$$
\mathrm{ORD}_{ij} = \begin{cases} 1, & \text{if } P_i > P_j; \\ -1, & \text{if } P_j > P_i; \\ 0, & \text{if } P_j = P_i; \\ \infty, & \text{if } P_i \text{ and } P_j \text{ have not been compared by any judge}. \end{cases}
$$

At the beginning of a round, we dispatch papers to judges and judges give each paper a score. We then find every $P_i$ and $P_j$ that have scores from the same judge in the finished rounds and fill in $\mathrm{ORD}_{ij}$ and $\mathrm{ORD}_{ji}$. We replace ORD with its transitive closure [Wang 1986], which, put simply, adds all the indirect orderings implied by $\mathrm{ORD}$ into the matrix. At the end of each round, for each paper $P_i$, calculate $R_i$ from ORD and reject $P_i$ if $R_i \geq W$. Repeat the above process until the final $W$ papers are left.

# Consider Misjudgment

By misjudgment, we mean that the final $W$ papers are not the best $W$. If the final $W$ papers contain a paper not among the best $2W$, an error occurs.

Assume that for a paper with an absolute score of $\mu_1$, the score given by a certain judge is a random number following a normal distribution $N(\mu_1,\sigma_1^2)$. The standard deviation $\sigma_1$ is the parameter that describes the degree of precision in a measurement. Misjudgment originates from the deviation of a judge's scoring from the absolute score, as Figure 1 shows. The shaded area in Figure 1 shows the misjudgment region.

![](images/86f5b77c6af4009f693a914420b1656e2d29ec0e16e2899f8dfdc7231200cccf.jpg)
Figure 1. Possible score distributions for two papers.

There must be a distribution of the absolute scores of all the papers. 
We assume that it is a normal distribution $N(\mu_0, \sigma_0^2)$, so that the ratio $\sigma_1 / \sigma_0$ reflects the judge's ability to distinguish the quality of these papers and also determines the probability of misjudgment. Using the basic model, given $P$, $J$, $W$, and $\sigma_1 / \sigma_0$, we can estimate the probability of error ($P_{\text{error}}$). If the probability is small enough, we can expect the model to provide the desired result.

Taking the random feature of scoring into account, some conflict is likely to happen, such as $\mathrm{ORD}_{ij} = 1$ but judge $A$'s scores show $P_{j}(A) > P_{i}(A)$. One way to resolve the conflict is to find all judges who have read both $P_{i}$ and $P_{j}$, sum up the scores given to $P_{i}$ and $P_{j}$ by these judges, and determine a new $\mathrm{ORD}_{ij}$ by comparing the two sums.

# Systematic Bias Among Judges

Considering differences among the scoring tendencies of different judges (systematic biases), it is undesirable that each judge select the same number of papers in the first round, for then it is more likely that excellent papers will be rejected in the first round.

Instead, when the first round of judging is over, we input the scores of each group of papers into a computer, which estimates each judge's parameters (mean score and standard deviation) and computes each group's score threshold for rejecting papers, corresponding to a certain absolute level. This way, excellent papers have less chance of being rejected in the first round. Estimating the parameters of all the judges enables us to compare the scores from different judges to some extent.

We use Bayesian estimation [Box and Tiao 1973] to determine the estimate of judge $j$'s parameters $(\mu_j, \sigma_j)$. Suppose that judge $j$ gives scores $S_1, \ldots, S_n$ to papers $P_1, \ldots, P_n$. 
We use the method of maximum likelihood to estimate $\sigma_j$:

$$
\hat{\sigma}_j^2 = \frac{1}{n} \sum_i [S_i - E(S)]^2.
$$

We then use Bayes's method to estimate $\mu_j$. In reality, we may have a priori knowledge of each judge's scoring tendency. Even if not, we still have reason to assume an a priori distribution of each judge's score. If the prior parameters are $(\mu_0,\sigma_0^2)$, then the posterior parameter is

$$
\hat{\mu}_j = \frac{n \cdot E(S)}{n + \left(\frac{\hat{\sigma}_j}{\sigma_0}\right)^2} + \frac{\left(\frac{\hat{\sigma}_j}{\sigma_0}\right)^2 \cdot \mu_0}{n + \left(\frac{\hat{\sigma}_j}{\sigma_0}\right)^2}.
$$

Then we can use quantile $(N(\hat{\mu}_j,\hat{\sigma}_j^2),\mathrm{LEVEL})$ as the score threshold, where $1 - \mathrm{LEVEL}$ is the expected proportion of papers to be retained. One suitable value is

$$
\mathrm{LEVEL} = 1 - \frac{W \cdot J}{P}.
$$

# Test of the Model

The most important test is to verify that the model makes sense. We do a computer simulation to see how our model behaves as the two practical factors are gradually taken into account. As detailed later, our results agree with our expectations (Feasibility Test). In addition, a finding in the course of testing leads us to make some improvement to the model. A more thorough test is made by varying the parameters (Stability Test). Lastly, we apply our model to a more complicated real-world example; the results match reality very well (A Real-Life Example).

# Feasibility Test

We fix $P = 100$, $J = 8$, and $W = 3$.

# Test of Basic Model

We assign $P$ papers absolute scores of $1,2,\ldots,100$. These values are used only to provide the relative order of the papers.

All papers are randomly allocated to 8 judges at the beginning of the simulation. We calculate the total judging time. 
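Under the ideal-judge assumption, the basic model's round structure (even dispatch, full within-batch comparisons, transitive closure of ORD, rejection when $R_i \geq W$) can be sketched as below. This is an illustrative reimplementation, not the authors' program: for simplicity it groups papers of close current rank by chunking a sorted list, and it ignores the no-rereading constraint, which is harmless when judges are ideal.

```python
import random

def ideal_orderings(group, score):
    """An ideal judge reports every pairwise ordering within a batch."""
    return [(a, b) for a in group for b in group if a != b and score[a] > score[b]]

def simulate(P=100, J=8, W=3, seed=0):
    rng = random.Random(seed)
    score = list(range(P))                    # absolute order: higher index is better
    better = [[False] * P for _ in range(P)]  # better[i][j]: P_i known better than P_j
    alive = list(range(P))
    rng.shuffle(alive)                        # round 1: random, even allocation
    total_time = 0
    for _ in range(1000):                     # safety cap; converges in a few rounds
        if len(alive) <= W:
            break
        # dispatch papers as evenly as possible into at most J batches of >= 2
        n_groups = max(1, min(J, len(alive) // 2))
        size, extra = divmod(len(alive), n_groups)
        groups, start = [], 0
        for g in range(n_groups):
            end = start + size + (1 if g < extra else 0)
            groups.append(alive[start:end])
            start = end
        total_time += max(len(g) for g in groups)  # a round lasts as long as its largest batch
        for g in groups:
            for a, b in ideal_orderings(g, score):
                better[a][b] = True
        for k in alive:                            # transitive closure of ORD (Warshall)
            for i in alive:
                if better[i][k]:
                    for j in alive:
                        if better[k][j]:
                            better[i][j] = True
        rank = lambda p: sum(better[q][p] for q in range(P))  # current rank R_p
        alive = [p for p in alive if rank(p) < W]  # reject P_p when R_p >= W
        rng.shuffle(alive)                         # break rank ties randomly, then
        alive.sort(key=rank)                       # dispatch close ranks together
    return sorted(alive), total_time
```

Because known-better relations are always true relations under the ideal-judge assumption, the top $W$ papers can never accumulate $W$ known superiors, so they are guaranteed to survive to the end.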
The results of 1,000 iterations of the simulation (see Figure 2) show that the basic model can select the top three in quite a short time, as we analyzed before.

![](images/d06ae229c6506814f5126e39c3e12f4a395e13d8d411ce2bbeea2b41d05bedf1.jpg)
Figure 2. Frequency vs. total judging time.

# Take Misjudgment into Account

These two simulations are vital:

- Simulating the distribution of the absolute score. Generally, we have reason to use a normal distribution. In order to assign the scores of 100 papers, we generate 100 random numbers following $N(60,30^2)$, truncated at 0 and 100.
- Simulating the score given to one paper. We simulate a judge's scoring by adding a normal random number to the absolute score of the paper.

The quantity $\sigma_1 / \sigma_0$ should be fairly small (say, $\leq 0.1$), because a judge should have good competence in judgment. We take $\sigma_1 / \sigma_0 = 1/30$, $2/30$, and $3/30$ as cases in our simulation.

We also made a theoretical estimate of $P_{\text{error}}$ for these cases under the worst of circumstances. Take $\sigma_1 / \sigma_0 = 3/30$ for example. An error occurs when one or more of $P_{(7)}, P_{(8)}, \ldots$ enter the final three. The probability of $P_{(7)}$ entering the final three contributes the most to $P_{\mathrm{error}}$. We let $P_{\mathrm{error}}(i, j)$ be the probability of misjudging papers $i$ and $j$. We approximate $P_{\mathrm{error}}$ by the probability of $P_{(7)}$ entering the final three:

$$
\begin{aligned}
P_{\mathrm{error}} &\approx \frac{1}{8} P_{\mathrm{error}}(3, 7) + \left(\frac{1}{8}\right)^{2} \sum P_{\mathrm{error}}(3, i)\, P_{\mathrm{error}}(i, 7) + \cdots \\
&\approx \frac{1}{8} P_{\mathrm{error}}(3, 7) + \left(\frac{1}{8}\right)^{2} \sum_{i=4}^{6} P_{\mathrm{error}}(3, i)\, P_{\mathrm{error}}(i, 7).
\end{aligned}
$$

We computed $P_{\mathrm{error}}(i,j)$ using Mathematica by calculating the area of the shaded region in Figure 1. In this way, we get the estimate $P_{\mathrm{error}} \approx 0.4\%$.

The results of the simulations accord with the theoretical estimate (see Table 2).

Table 2. Results of 1,000 trials for each value of $\sigma_1 / \sigma_0$ vs. theoretical estimates ($P = 100$, $J = 8$, $W = 3$).
| $\sigma_1/\sigma_0$ | Mean $T$ | Max $T$ | Errors | Observed $P_{\text{error}}$ | Estimate of $P_{\text{error}}$ |
|---|---|---|---|---|---|
| 1/30 | 17.7 | 21 | 0 | .000 | $10^{-7}$ |
| 2/30 | 17.7 | 21 | 0 | .000 | .0006 |
| 3/30 | 17.8 | 21 | 4 | .004 | .004 |
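The pairwise quantity $P_{\mathrm{error}}(i,j)$ was computed with Mathematica as the overlap area in Figure 1; under the same normality assumption it also has a simple closed form. In the sketch below (our illustration, not the authors' code), two papers whose absolute scores differ by $d$ are reversed by a single judge with probability $\Phi\bigl(-d/(\sqrt{2}\,\sigma_1)\bigr)$, since the difference of the two independent $N(\cdot,\sigma_1^2)$ scores is $N(d, 2\sigma_1^2)$:

```python
import math

def p_reversal(d, sigma1):
    """Probability that one judge's noisy scores reverse two papers whose
    absolute scores differ by d >= 0: Phi(-d / (sqrt(2) * sigma1)),
    written via the complementary error function."""
    return 0.5 * math.erfc(d / (2.0 * sigma1))
```

With absolute scores spaced one unit apart as in the simulation and $\sigma_1 = 3$, a single judge reverses adjacent papers with probability about 0.4; it is the multiple rounds and redundant comparisons that drive the overall $P_{\text{error}}$ down to the 0.4% level seen in Table 2.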
# An Extra Improvement to the Model

The simulation results demonstrate that the model behaves reasonably so far. Surprisingly, a slight modification improves the model remarkably. If in the first round we select more papers, say $W_1$ instead of $W$, and select $W$ papers from the next round, we find that $P_{\text{error}}$ declines greatly but the total judging time is scarcely affected. The chances of an excellent paper being rejected in the first round are much greater than in later rounds, because the papers rejected after the first round are read by only one judge, while those rejected later are read by more judges. Table 3 gives simulation results for several values of $W_1$.

Table 3. Results of 1,000 trials for each value of $W_1$ ($P = 100$, $J = 8$, $W = 3$).
| $W_1$ | Mean $T$ | Max $T$ | Errors |
|---|---|---|---|
| 6 | 17.9 ± 0.9 | 20 | 0 |
| 5 | 17.8 ± 0.8 | 20 | 0 |
| 3 | 17.8 ± 0.8 | 20 | 1 |
# Take Judges' Systematic Biases into Account

We simulate the scores of different judges by using normal distributions with randomly generated means and variances.

Using the method offered in Design of the Model, we get the results of Table 4. With increasing LEVEL, the total judging time declines but more errors occur; it is difficult to minimize both time and the number of errors.

Table 4. Results of 1,000 trials for each value of LEVEL ($P = 100$, $J = 8$, $W = 3$).
| LEVEL | Mean $T$ | Max $T$ | Errors |
|---|---|---|---|
| 50% | 22.0 | 24 | 0 |
| 70% | 19.5 | 23 | 3 |
| 75% | 18.7 | 21 | 4 |
| 80% | 18.0 | 21 | 4 |
| 85% | 17.2 | 20 | 7 |
| 90% | 16.3 | 19 | 11 |
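The threshold rule tested above, from the Design of the Model section (maximum-likelihood $\hat{\sigma}_j$, Bayes-shrunk $\hat{\mu}_j$, then the LEVEL quantile of $N(\hat{\mu}_j, \hat{\sigma}_j^2)$), can be sketched as follows; the prior $(\mu_0, \sigma_0)$ and the scores in the test values are illustrative:

```python
from statistics import NormalDist

def score_threshold(scores, mu0, sigma0, W, J, P):
    """Per-judge rejection threshold at LEVEL = 1 - W*J/P."""
    n = len(scores)
    mean = sum(scores) / n
    sigma_j = (sum((s - mean) ** 2 for s in scores) / n) ** 0.5  # ML estimate
    w = (sigma_j / sigma0) ** 2
    mu_j = (n * mean) / (n + w) + (w * mu0) / (n + w)            # Bayes posterior mean
    level = 1 - W * J / P
    return NormalDist(mu_j, sigma_j).inv_cdf(level)
```

The shrinkage factor $w = (\hat{\sigma}_j/\sigma_0)^2$ pulls a judge's estimated mean toward the prior $\mu_0$ when that judge has scored few papers, which is what makes scores from different judges comparable "to some extent."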
# Stability Test

We change the parameters $P$, $J$, $W$ to test the model's stability. Table 5 gives the results of 100 iterations for each of several groups of parameters.

Table 5. Results of 100 trials, for $\sigma_1 / \sigma_0 = 3/30$, for each combination of values of $P$, $J$, and $W$.
| $P$ | $J$ | $W$ | Mean $T$ | Max $T$ | Errors | $\lceil P/J \rceil + W + 2$ |
|---|---|---|---|---|---|---|
| 50 | 4 | 4 | 18.1 ± 1.2 | 22 | 0 | 19 |
| 80 | 8 | 3 | 14.9 ± 0.9 | 17 | 0 | 15 |
| 100 | 7 | 3 | 19.5 ± 0.7 | 21 | 0 | 20 |
| | 8 | 3 | 17.7 ± 0.8 | 20 | 0 | 18 |
| | 8 | 4 | 18.8 ± 0.8 | 21 | 0 | 19 |
| | | 5 | 19.8 ± 0.9 | 22 | 1 | 20 |
| | 10 | 3 | 15.4 ± 0.9 | 18 | 1 | 15 |
| 120 | 8 | 3 | 19.8 ± 0.9 | 22 | 0 | 20 |
| 140 | 8 | 3 | 23.0 ± 1.0 | 27 | 1 | 23 |
| | 13 | 1 | 14.4 ± 0.7 | 16 | 3 | 14 |
| | | 2 | 15.8 ± 0.8 | 18 | 1 | 15 |
| | | 3 | 16.9 ± 0.8 | 19 | 0 | 16 |
| | | 5 | 18.2 ± 0.9 | 21 | 0 | 18 |
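The last column of Table 5 is just the fitted value $\lceil P/J \rceil + W + 2$; it can be reproduced directly:

```python
import math

def predicted_mean_time(P, J, W):
    """Empirical prediction of the mean total judging time."""
    return math.ceil(P / J) + W + 2
```

For example, `predicted_mean_time(140, 8, 3)` gives 23, matching both the table's last column and its observed mean of 23.0.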
Analyzing these data, we discover an empirical formula,

$$
\left\lceil \frac{P}{J} \right\rceil + W + 2,
$$

which fits the data for the average value of $T$ wonderfully. Another finding is that a small $W/P$ causes considerable $P_{\mathrm{error}}$. So when $W/P$ is too small (say, $\leq 1/100$), the model does not work well. But properly reducing the number of papers rejected in each round will reduce $P_{\mathrm{error}}$.

# A Real-Life Example

One idea for testing our model would be to use data from a tennis competition. The table of international standings can be treated as the absolute order, and the result of each formal match acts as a "judge." It is a pity that we have no data!

So we use a substitute for the data of a real competition. We obtained from our department the real scores, over three semesters, of 50 students taking the same three courses. We consider that the sum of each student's three scores stands for the student's level in this major; we take it as an absolute score, and this gives the absolute ordering. In any one semester, the order of students' scores can differ from the absolute order. So we can use these data to simulate a contest, in which each semester acts as a judge assigning a score.

We use these data in our computer program and get the results of Table 6.

Table 6. Results of analysis of departmental data ($P = 50$, $J = 3$).
| $W$ | $T$ | Papers selected |
|---|---|---|
| 2 | 19 | $P_{(1)}, P_{(3)}$ |
| 3 | 22 | $P_{(1)}, P_{(2)}, P_{(3)}$ |
# Generalization

# How to Budget?

Funding for the contest constrains both the number of judges that can be obtained and the amount of time that they can judge.

Assume that each judge can mark $n$ papers/day and that a judge's salary is $s$ dollars/day. We can hypothesize that funding $f$ is a function of $T$, $n$, and $J$: $f = f(T, n, J)$, where $T$ is total judging time. Obviously, $\frac{\partial f}{\partial J} > 0$, $\frac{\partial f}{\partial T} > 0$, $\frac{\partial f}{\partial n} < 0$, and $T = T(J)$. Fortunately, we have the empirical formula

$$
T = \left\lceil \frac{P}{J} \right\rceil + W + 2.
$$

A reasonable functional form for $f$ is

$$
f = \left\lceil \frac{T}{n} \right\rceil \cdot J \cdot s = J \cdot s \cdot \left\lceil \frac{\left\lceil \frac{P}{J} \right\rceil + W + 2}{n} \right\rceil.
$$

Since $k \leq \lceil k \rceil \leq k + 1$, we get

$$
\frac{(P + (W + 2) \cdot J) \cdot s}{n} \leq f \leq \frac{(P + (W + 3 + n) \cdot J) \cdot s}{n},
$$

which allows us to budget for the contest if the number of judges has been given (see Table 7).

Table 7. Cost of the contest, for various combinations of papers per day per judge and number of judges.
| $n$ | $J$ | min $f$ | max $f$ |
|---|---|---|---|
| 15 | 8 | \$3,080 | \$6,067 |
| 20 | 8 | \$2,310 | \$5,250 |
| 15 | 7 | \$2,987 | \$5,600 |
| 20 | 7 | \$2,240 | \$4,812 |
On the other hand, we can turn the equation around into the form

$$
\frac{f \cdot \frac{n}{s} - P}{W + 3 + n} \leq J \leq \frac{f \cdot \frac{n}{s} - P}{W + 2}.
$$

According to this, if funding is known, we can decide the number of judges (see Table 8). Of course, these are rough estimates.

Table 8. Number of judges that can be hired, for various combinations of number of papers per day per judge and budget ($P = 100$, $s = 350$, $W = 3$).
| $n$ | $f$ | min $J$ | max $J$ |
|---|---|---|---|
| 15 | \$5,000 | 6 | 22 |
| 10 | | 3 | 8 |
| 15 | \$7,000 | 10 | 40 |
| 10 | | 7 | 20 |
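Table 8 follows directly from the two-sided bound on $J$; a sketch, using $P = 100$, $s = 350$, $W = 3$ as in the table:

```python
import math

def judge_range(f, n, P=100, s=350, W=3):
    """Feasible numbers of judges for budget f and n papers/day, from
    (f*n/s - P)/(W+3+n) <= J <= (f*n/s - P)/(W+2)."""
    x = f * n / s - P
    return math.ceil(x / (W + 3 + n)), math.floor(x / (W + 2))
```

This reproduces the table exactly: `judge_range(5000, 15)` gives `(6, 22)`.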
# Applying the Model to Different Kinds of Competition

For contests that give awards to just a few winners, our model is an effective and rational scheme. For contests that give various awards at different levels, we can modify a few parameters in our model. There are two methods.

- Method 1: Suppose that the contest committee expects to classify the participants into different levels in some given proportions, say four levels of $5\%$, $10\%$, $35\%$, and $50\%$, similar to the MCM. In the first round, we reject $50\%$ as Successful Participation; in the second round, we reject $35\%$ as Honorable Mention; in the third round, we reject $10\%$ as Meritorious; the remaining $5\%$ are Outstanding.
- Method 2: We set the value of LEVEL as needed in each round to distinguish participants of different levels. This method is more flexible and fairer than Method 1.

# Final Scheme

We summarize our final scheme:

- Divide the judging process into several screening rounds and follow the principles below in each round until $W$ papers remain.
- Use a scoring scheme.
- Do not compare the scores from different judges.
- In the first round, distribute papers to all judges evenly. After scoring, select the top $2W$ papers in each group to enter the next round.
- At the end of each round, for each paper, calculate the number of papers better than it (which we call the current rank of the paper); then reject every paper whose current rank is more than $W - 1$.
- At the beginning of each round, dispatch papers with close current ranks to the same judge, if possible.
- The number of papers distributed to each judge in each round should be as equal as possible.

# Our Suggestions

- Properly reducing the number of papers rejected in the first round would decrease the error probability.
- Altering the number rejected in each round as needed is helpful in competitions that determine different levels of the participants. 
- To be more practical and efficient, we suggest prescreening the papers first, that is, rejecting the papers of distinctly poor quality.
- Between rounds, have some discussion among judges so that they gain some knowledge of the levels of the papers as a whole. Such a feedback mechanism surely helps reduce the standard deviation of judgment.
- When there are about $2W$ papers left, all the judges gather to read the remaining papers together, if time permits, to select the top $W$ papers.

# Strengths and Weaknesses

# Strengths

- We have shown how our model provides an efficient scheme for correctly selecting winners. The model was not only tested in a computer simulation but also proved adaptable to real cases.
- Our model is also very stable. All parameters, which we set arbitrarily, can be changed without changing the quality of the model.
- We obtain from our model an empirical formula for $T$, based on $P$, $J$, and $W$.
- We use Bayesian estimation to take into account the differences among judges.
- Our model is flexible enough to be applied to different kinds of competitions.

# Weaknesses

- We are unable to demonstrate that our model is optimal.
- We would like to be able to improve our estimation of the parameters for each judge.

# References

Box, G.E.P., and G.C. Tiao. 1973. Bayesian Inference in Statistical Analysis. Reading, MA: Addison-Wesley.

Wang, Yihe. 1986. Introduction to Discrete Mathematics. Harbin, China: Harbin Institute of Technology Press.

# The Inconsistent Judge

Dan Scholz

Jade Vinson

Derek Oliver Zaba

Dept. of Systems Science and Mathematics

Washington University

St. Louis, MO 63130

Advisor: Hiro Mukai

# Summary

We provide a judging process that is robust enough to ensure that the intrinsically best papers are chosen in spite of randomness and subjectivity in the judging process. We increase the scope of the problem by introducing inconsistency into the judging process. 
We model this inconsistency by expressing the actual paper score as the sum of the intrinsic numerical score, the overall bias of the judge, and an error term: + +$$ +S _ {j p} = S _ {p} + B _ {j} + \epsilon_ {j p}. +$$ + +We use an iterative computer-guided process to determine the judging procedure. After each round of judging, the computer program uses bias estimates to calculate confidence intervals for the intrinsic score of each paper. These confidence intervals are used to reject as many papers as possible while guaranteeing, within a specified level of confidence, that the top $W$ papers advance to the next round. Less cautious rejection criteria in each round adapt the method to select the winning papers from among the top $2W$ papers. + +We did a computer simulation over a range of values for the parameters. Intrinsic scores were normally distributed with mean 50 and standard deviation 20; bias and consistency parameters were varied. We compare the method results to the intrinsic scores of the papers. For $P = 100$ , $J = 8$ , $W = 3$ , the method proved correct $95\%$ of the time with an average of 175 papers read. + +# Assumptions + +Given that papers have an absolute intrinsic ranking, we assume that papers also have an associated intrinsic numerical score. The score that a judge gives a paper reflects not only the intrinsic score of the paper and the overall bias of the judge but also the inconsistency of the judge. This assumption is more realistic than assuming that all judges would agree to an absolute ranking and will produce a more robust judging procedure. We assume: + +- Papers have intrinsic scores which follow a normal distribution. +- Judges have constant numerical bias. +- The range of biases for all judges follows a normal distribution. +- Judges' inconsistency follows a normal distribution. + +The normal distribution is used for analytical convenience and is justified by historical precedent [DeGroot 1986, 263-264]. 
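The score model $S_{jp} = S_p + B_j + \epsilon_{jp}$ under these normality assumptions can be sketched as a generator. The parameter values are the ones used later in the paper's simulation ($\mu = 50$, $\sigma = 20$, $B = 8$, $\Delta = 8$); the function name is ours.

```python
import random

def make_contest(P=100, J=8, mu=50, sigma=20, B=8, delta=8, seed=0):
    """Draw intrinsic scores S_p ~ N(mu, sigma^2), judge biases
    B_j ~ N(0, B^2), and a scoring function S_jp = S_p + B_j + eps_jp
    with independent errors eps_jp ~ N(0, delta^2)."""
    rng = random.Random(seed)
    intrinsic = [rng.gauss(mu, sigma) for _ in range(P)]
    bias = [rng.gauss(0, B) for _ in range(J)]
    def score(j, p):
        return intrinsic[p] + bias[j] + rng.gauss(0, delta)
    return intrinsic, bias, score
```

Rereading the same paper draws a fresh error term, which is exactly the inconsistency the bias $B_j$ alone cannot explain.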
# The Model

We express our assumptions mathematically by equating the score $S_{jp}$ that judge $j$ assigns paper $p$ to the sum of the intrinsic score $S_p$ of the paper, a bias term $B_j$ for the judge, and an error term $\epsilon_{jp}$ for the score:

$$
S_{jp} = S_p + B_j + \epsilon_{jp}.
$$

Our model has parameters $\mu, \sigma, B$, and $\Delta$. The distribution of intrinsic scores is parameterized by $\mu$ and $\sigma$: $S_p$ is a random variable with distribution $N(\mu, \sigma^2)$. Parameter $B$ is a measure of the bias of all the judges; the bias $B_j$ for a particular judge comes from the distribution $N(0, B^2)$. The parameter $\Delta$ is a measure of the overall consistency of the judges; the error for an individual grading, $\epsilon_{jp}$, comes from the distribution $N(0, \Delta^2)$. The terms $B_j$ and $\epsilon_{jp}$ account for the subjective nature of the judging process.

# The Method

Our model estimates the intrinsic scores of papers by producing estimates for the bias of the judges and adjusting their scores accordingly. Our confidence in the estimated intrinsic scores is used to reject as many papers as possible while maintaining that the probability of rejecting one of the top $W$ papers is less than a predetermined $\alpha$. The method proceeds as follows.

# Distribution of Papers

Our model distributes papers according to the following prioritized criteria:

- No judge reads the same paper more than once.
- The numbers of papers read by each judge in a given round do not differ by more than one. This minimizes time spent reading for a round.
- Workload is distributed equally among the judges.

# Estimation of the Intrinsic Score Distribution

Since the distribution and values of the actual intrinsic scores are not known, we attempt to estimate them. We estimate the mean and variance of the intrinsic scores after the first round. 
The mean and variance are estimated by the following (see Appendix A):

$$
\hat{\mu} = \overline{S}_{jp}, \qquad \hat{\sigma}^2 = \frac{1}{J} \sum_{j} \frac{1}{P(j) - 1} \sum_{i} (X_{ji} - \mu_j)^2,
$$

where $P(j)$ denotes the number of papers judge $j$ has read.

# Calculation of Bias

After round one, each judge will have scored approximately $P / J$ papers. If the average of the scores for a given judge is significantly greater than the mean of all the scores, either the judge is positively biased or the judge happened to receive a sample of unusually good papers, or both. If $X_1, \ldots, X_n$ are the scores of papers read by a particular judge, then the conditional distribution for $B_j$ after round one (see Appendix B) is normal with mean and variance given by

$$
B_j^1 = \frac{\sum (X_i - \mu)}{n + \frac{\sigma^2 + \Delta^2}{B^2}}, \qquad V_j^1 = \frac{1}{\frac{1}{B^2} + \frac{n}{\sigma^2 + \Delta^2}}.
$$

Note that in the special case when $B = 0$, the estimate for $B_j$ is also zero; but if $B$ is large, the distribution has mean approximately $\overline{X} - \mu$ and variance $(\sigma^2 + \Delta^2)/n$.

# Recalculation of the Bias

If the judges are unusually consistent, i.e., if $\Delta$ is very small, we would like our judging procedure to recognize and take advantage of this fact. In the extreme case, when $\Delta = 0$, we can precisely rank all $P$ papers with only $P + J - 1$ readings: Start by dividing the papers evenly in the first round; in the second round, judge $A$ retires while each of the other judges reads one of judge $A$'s papers; by subtracting out each judge's bias relative to judge $A$, we learn the precise ranking of the papers.

The method optimized for the trivial case above is successful because it uses the fact that $\Delta = 0$ to calculate the biases exactly after the second round. We could adapt this simple example to improve our judging procedure. 
In the first round, the biases are estimated according to the preceding section. These are used for the first cut. In subsequent rounds, first re-estimate $\Delta$. Using the new value, re-estimate the biases $B_j$ and the variances $V_j$ of those estimates; if the new value of $\Delta$ is small, so is the uncertainty of our bias estimates. The combination of a small inconsistency $\Delta$ and accurate knowledge of the biases would allow us to calculate an estimated intrinsic score more accurately. With sharpened values of the estimated intrinsic score, more papers could confidently be eliminated after each round. We derive and present the formulas for this bias re-estimation in Appendix B.

# Estimation of Intrinsic Scores

The estimated bias for each judge is used to calculate for each paper $p$ a net score that estimates the intrinsic score after taking into account the bias of the judges and the number of readings. The mean and variance for the net score are (see Appendix B)

$$
\text{mean} = \frac{\frac{\mu}{\sigma^2} + \sum_{j \text{ judged } p} \frac{S_{jp} - B_j}{V + \Delta^2}}{\frac{1}{\sigma^2} + (\#\text{ readings}) \frac{1}{V + \Delta^2}}, \quad \text{variance} = \frac{1}{\frac{1}{\sigma^2} + (\#\text{ readings}) \frac{1}{V + \Delta^2}}.
$$

Here $V = \max V_j$ is used instead of $V_j$ to simplify forthcoming calculations.

# Rejection of Papers

At the end of each round, we seek to eliminate as many papers as possible while still ensuring that the best $W$ papers are selected within a specified degree of confidence. Appendix D derives the inequality

$$
\Pr(\text{mistaken rejection}) < W \cdot \sum_{p=1}^{R} \Phi\left( \frac{S_{j,p} - S_{j,P-W+1}}{\sqrt{2}\, \sigma_T} \right).
$$

The variable $S_{j,p}$ denotes the computed score of paper $p$, with papers sorted in ascending order of score; $S_{j,P-W+1}$ is the score of the paper ranked $W$th from the top; and $\sigma_T^2$ is the total variance of the score distributions. This inequality lends itself to an iterative process in which the lowest papers are rejected one by one until the inequality reaches a desired level of confidence. If more than $W$ papers still remain after the confidence level is reached, a new round is initiated. This iterative process involves repeating the earlier steps of this section.

# Model Implementation

We simulated the model with a C++ program to demonstrate its validity and scope. The simulation compares the actual top $2W$ papers by intrinsic score to the $W$ model-determined winners. Due to time constraints, the re-estimation of the bias was omitted from the simulation.

# Initialization

The simulation assumes that there are 100 papers, 8 judges, and 3 winning papers. It generates absolute intrinsic scores from a normal distribution with mean 50 and standard deviation 20. The parameters $B$ and $\Delta$, which determine the generation of judges' parameters, are varied over a realistic range, empirically determined to be between 5 and 10. For all simulation calculations, the judging process assumes $B = 8$ and $\Delta = 8$, in order to demonstrate the validity of the model with no knowledge of the distributions of the biases and inconsistencies.

# Simulation

We ran the program for 1,000 competitions with various levels of confidence per round and distributions of scores. The model was successful approximately $93-97\%$ of the time with 160-190 readings. Due to the slack in the confidence inequalities, a strict lower bound of 0.7 confidence per round produced these encouraging results while significantly reducing the total number of readings.
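The single-round core of such a simulation can be sketched as follows (in Python rather than the authors' C++; parameter values are taken from the Initialization section, while the multi-round rejection logic, bias re-estimation, and the cut size of 25 papers are simplifying assumptions of this sketch):

```python
import random

def one_round(n_papers=100, n_judges=8, n_winners=3, keep=25,
              mu=50.0, sigma=20.0, B=8.0, Delta=8.0):
    """One screening round: every paper is read once, judge biases are
    estimated with the round-one formula, and the `keep` highest net
    scores survive.  Returns True if all true top papers survive."""
    intrinsic = [random.gauss(mu, sigma) for _ in range(n_papers)]
    bias = [random.gauss(0, B) for _ in range(n_judges)]
    reader = [p % n_judges for p in range(n_papers)]   # even distribution
    score = [intrinsic[p] + bias[reader[p]] + random.gauss(0, Delta)
             for p in range(n_papers)]

    mu_hat = sum(score) / n_papers
    # Posterior bias estimate for each judge (round-one formula)
    b_hat = []
    for j in range(n_judges):
        xs = [score[p] for p in range(n_papers) if reader[p] == j]
        b_hat.append(sum(x - mu_hat for x in xs)
                     / (len(xs) + (sigma**2 + Delta**2) / B**2))

    net = [score[p] - b_hat[reader[p]] for p in range(n_papers)]
    survivors = set(sorted(range(n_papers), key=net.__getitem__)[-keep:])
    winners = set(sorted(range(n_papers), key=intrinsic.__getitem__)[-n_winners:])
    return winners <= survivors

random.seed(0)
trials = 400
hits = sum(one_round() for _ in range(trials))
print(f"All true winners survive the first cut in {hits / trials:.0%} of trials")
```

Even this stripped-down version, with each paper read only once, rarely cuts a true winner; the full iterative procedure tightens the cut further in later rounds.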
# Real-World Implementation

A fully implemented computer program would allow a judging team to input the number of papers to be evaluated, the number of winning papers, and the number of judges. The program would have the judges input the scores for each paper that they judged in round one. The program would then ask for a degree of confidence that the winning papers will be drawn from the top $2W$ papers. Output is the designation of the papers that are to be advanced to the next round. The judges then enter their scores for round two, and the process is repeated until $W$ papers remain.

# Stability

The formulas used thus far have relied upon exact values of the parameters $\mu$ and $\sigma$ for the distribution of intrinsic scores, to greatly simplify calculations. This information, however, would not be available in an actual implementation of our judging process. Fortunately, small inaccuracies in the calculated values of $\hat{\mu}$ and $\hat{\sigma}$ do not undermine the validity of our judging process.

# Strengths and Weaknesses

Our model provides a great deal of flexibility for variations in the judging procedure. We do not assume that the judges will agree on an absolute ranking for a given competition. We are able to select the winners with a confidence of $95\%$ for 3 winning papers with an average of 175 total readings. These numbers reflect simulation without re-estimating the biases. If the re-estimation calculations were implemented, the numbers would improve.

Shortcomings of the model include its assumption that papers have an intrinsic score. This confines the validity of the model to more technical papers. Additionally, the model does not account for a situation in which the inconsistency of judges varies with the distribution of the scores they report. This would happen if one judge used the full range of scores from 0 to 100 and another judge had a tighter range of reported scores. In this sense, the model does not fully reflect reality.
The assumptions of a normal distribution and simulation over normally distributed data are also approximations of reality.

# Appendix A: Estimation of Intrinsic Score Distribution

We seek to estimate the mean and variance of the intrinsic scores by observing the scores of the first round. The most reasonable estimate for the mean of the intrinsic scores is simply the mean of the scores observed in the first round, $\hat{\mu} = \overline{X}$. The scores assigned to various papers in the first round by a particular judge are of the form $S_{jp} = S_p + B_j + \epsilon_{jp}$ for the various values of $p$. The variance of these numbers for fixed $j$ is the sum of the variance of the intrinsic score and the inconsistency $\Delta^2$. Since $\Delta^2$ is insignificant compared to $\sigma^2$, we may approximate $\sigma^2$ by the variance $\operatorname{Var} S_{jp}$ for a fixed value of $j$. Averaging this variance over all judges yields a reasonable estimate for the variance $\sigma^2$:

$$
\hat{\sigma}^2 = \frac{1}{J} \sum_j \operatorname{Var} S_{jp}.
$$

# Appendix B: Estimation of Biases and Net Scores

Theorem. Suppose that $A$ is a random variable with distribution $N(\mu, \sigma^2)$ and $A$ is hidden from observation, but the independent random variables $X_i \sim N(A, \sigma_i^2)$ are observed. Given observations $X_i$, the conditional distribution for $A$ is normal with

$$
\text{mean} = \frac{\frac{\mu}{\sigma^2} + \sum \frac{X_i}{\sigma_i^2}}{\frac{1}{\sigma^2} + \sum \frac{1}{\sigma_i^2}}, \qquad \text{variance} = \frac{1}{\frac{1}{\sigma^2} + \sum \frac{1}{\sigma_i^2}}.
$$

Proof: [EDITOR'S NOTE: This theorem was formulated by the authors, who could not find a reference for it. For reasons of space, we omit their proof, which is based on results in DeGroot [1986].]

Corollary 1.
If a judge's bias comes from the distribution $N(0, B^2)$ and the scores $X_1, \ldots, X_n$ reflect the bias, the variance $\sigma^2$ of intrinsic scores of papers with mean $\mu$, and the inconsistency $\Delta$ of the judge, then our estimate for the bias of this judge and the variance of this estimate are

$$
B_j = \frac{\sum (X_i - \mu)}{n + \frac{\sigma^2 + \Delta^2}{B^2}}, \qquad V_j = \frac{1}{\frac{1}{B^2} + \frac{n}{\sigma^2 + \Delta^2}}.
$$

Corollary 2. Suppose that the intrinsic score of a paper comes from $N(\mu, \sigma^2)$. The scores $S_{jp}$ reflect the intrinsic score of the paper, the biases of the judges, and the inconsistency $\Delta$ of the judging process. Our estimates $B_j$ of the biases each have variance $V$. Then our estimate of the intrinsic score for paper $p$ has mean and variance

$$
\text{mean} = \frac{\frac{\mu}{\sigma^2} + \sum_{j \text{ judged } p} \frac{S_{jp} - B_j}{V + \Delta^2}}{\frac{1}{\sigma^2} + (\#\text{ readings}) \frac{1}{V + \Delta^2}},
$$

$$
\text{variance} = \frac{1}{\frac{1}{\sigma^2} + (\#\text{ readings}) \frac{1}{V + \Delta^2}}.
$$

Each paper has a normal score distribution with mean $\hat{S}_j$ and variance $\sigma_T^2$; the variances are the same for each paper. Then $S_{jp} - B_j$ (based on our best estimate of $B_j$, which may change from round to round), which is our best estimate of a judge's score, has variance $V_j + \Delta^2$. If we round all bias variances up to $V = \max V_j$, this becomes $V + \Delta^2$.
So overall for this paper,

$$
\text{mean} = \frac{\frac{\mu}{\sigma^2} + \sum \frac{S_{jp} - B_j}{V + \Delta^2}}{\frac{1}{\sigma^2} + n \left( \frac{1}{V + \Delta^2} \right)}, \quad \text{variance} = \frac{1}{\frac{1}{\sigma^2} + n \left( \frac{1}{V + \Delta^2} \right)}.
$$

Note that $\sigma_T^2 = V + \Delta^2$.

# Appendix C: Re-estimation of Parameters

First we seek to re-estimate the parameter $\Delta$. If we consider all papers (at least two) read by both judge $j$ and judge $k$, the differences are distributed according to

$$
S_{jp} - S_{kp} = B_j - B_k + \left( \epsilon_{jp} - \epsilon_{kp} \right) = B_j - B_k + N\left(0, 2\Delta^2\right).
$$

By computing the variance of the differences $S_{jp} - S_{kp}$ for a fixed pair of independent judges, we obtain an estimate of the variance $2\Delta^2$. The more papers the pair of judges has read in common, the more precise this estimate will be. We obtain a still more precise estimate of $2\Delta^2$ by averaging these variances over each pair of judges, weighting each average according to the number of papers read by both judges:

$$
\hat{\Delta}^2 = \frac{\frac{1}{2} \sum (P(j,k) - 1) \operatorname{Var}[S_{jp} - S_{kp}]}{\sum (P(j,k) - 1)},
$$

where $P(j,k)$ is the number of papers read by both judges $j$ and $k$. Using the updated estimate of $\Delta$, we may now re-estimate the biases $B_j$ as well as their variances $V_j$. We use an iterative procedure and demonstrate that for $\Delta \neq 0$ the successive calculations for $B_j$ and $V_j$ converge. We cannot rigorously demonstrate the validity of this iterative procedure; instead, we justify it by intuitively motivating each step. [EDITOR'S NOTE: For reasons of space, we omit the details.]

# Appendix D: The Confidence Inequality

Theorem.
Let $S_{j1}, \ldots, S_{jP}$ denote the computed scores of the papers sorted in ascending order and let $\sigma_T$ denote the standard deviation of the score estimates. If $R \leq P - W$, then the probability of accidentally rejecting one of the best $W$ papers by rejecting the $R$ lowest-ranked papers is bounded by

$$
\Pr(\text{mistaken rejection}) < W \cdot \sum_{p=1}^{R} \Phi\left( \frac{S_{j,p} - S_{j,P-W+1}}{\sqrt{2}\, \sigma_T} \right).
$$

Proof: To mistakenly eliminate one of the best $W$ papers, it is necessary that one of the rejected papers have an intrinsic score greater than that of one of the top $W$ papers. Thus

$$
\begin{array}{rl}
\Pr(\text{mistaken rejection}) & < \displaystyle\sum_{p=1}^{R} \sum_{q=P-W+1}^{P} \Pr(S_p > S_q) \\
& < \displaystyle\sum_{p=1}^{R} \sum_{q=P-W+1}^{P} \Pr\left(S_p > S_{P-W+1}\right) \\
& = W \cdot \displaystyle\sum_{p=1}^{R} \Pr\left(S_p > S_{P-W+1}\right) \\
& = W \cdot \displaystyle\sum_{p=1}^{R} \Phi\left( \frac{S_{j,p} - S_{j,P-W+1}}{\sqrt{2}\, \sigma_T} \right).
\end{array}
$$

# References

DeGroot, Morris H. 1986. Probability and Statistics. Reading, MA: Addison-Wesley.

# Judge's Commentary: The Outstanding Contest Judging Papers

Veena B. Mendiratta

Bell Labs

Lucent Technologies

2000 N. Naperville Road

Naperville, IL 60566

veena@lucent.com

The Contest Judging Problem provided the contestants with a challenging real-world problem that lent itself to a range of analysis and modeling methods. In coming up with a "best" selection scheme, the contestants used methods such as rank-ordering, numerical scoring, and bias estimation.

What made the problem interesting and challenging to model, and also to judge, were the less well-defined aspects of the problem. For example, how was bias of judges with respect to ranking and scoring papers handled?
Also, how was the issue of ensuring that the best $2W$ papers were not screened out in the early rounds of reading addressed? Consequently, we could not select the winning papers for MCM 1996 based solely on the criterion of how many paper-readings a team's algorithm required; we also considered how these issues were addressed.

Almost all of the successful papers were able to develop a basic model for the ideal case to select the top $W$ papers for the specific parameter values specified in the problem statement. A key assumption for the ideal case is that every judge rank-orders or scores all the papers in accordance with the absolute rank-ordering; that is, there is no judge bias. The so-called ideal model, though unrealistic, sets a lower bound for the total number of reads. The stronger papers went significantly beyond the ideal case.

Characteristics of the best papers included the following:

- An explicit modeling of judges' bias, addressing the issues of systematic bias and the variance of accidental errors in scoring.
- Estimating statistical bounds on the probability of failure to pick the best $W$ papers out of the top $2W$ papers.
- In addition, some of the papers realized the importance of having more judges read the papers remaining in the later screening rounds and included this factor in their models.
- The better papers provided a clear statement of results in terms of the total number of paper readings, the confidence level of the results, and the sensitivity of the results to the model parameters.

Various approaches were taken to address the above issues, and some of these are summarized below.

The Gettysburg College team minimizes the probability of eliminating the $W$ best papers in the first round by having two judges read each paper in that round.
This same paper models judge error through a functional relationship between the probability of judge error in ranking (with respect to the absolute ranking) and the distance between two compared papers on the absolute scale. The team from the University of Science and Technology of China uses Bayesian estimation to address systematic bias in scoring. They also model the error probability as a function of the percentage of papers eliminated in each round. The Fudan University team shows statistically the conditions under which the "ideal" model can work. The Washington University team models the judge bias and error and, after each round of judging, uses new bias estimates to calculate confidence intervals that determine the number of papers rejected. The St. Bonaventure team implements a novel distribution scheme, which they illustrated very effectively with matrices, to ensure that judges do not receive the same paper more than once and also that the same judge does not receive the top $2W$ papers. + +Lastly, the best papers were characterized by clear and logical presentations that brought forth the team's underlying analytical thought process. These papers were well organized and well written, with appropriate tables and graphics for presentation of results, and included a comprehensive summary, all of which made it easier to understand the material presented. + +The Contest Judging problem was challenging and many excellent solutions were offered. Finally, however, five papers stood out from the others, and the members of those teams should feel proud of their accomplishments. + +# About the Author + +Dr. Mendiratta (Ph.D., Northwestern University, 1981) has been at Bell Labs (now part of Lucent Technologies) since 1984, working on a wide range of systems. She currently works in the Architecture and Performance area. 
Her work at Bell Labs has focused on reliability modeling and performance analysis of switching systems, as well as mathematical programming models for switch configurations. Prior to 1984, she worked for three years as Manager of Operations Research for the Illinois Central Gulf Railroad, where she directed the implementation of an empty-freight-car distribution optimization model that was developed as part of her Ph.D. dissertation. Her professional activities include serving on the MCM Advisory Board as well as an MCM judge, being co-President of the INFORMS Chicago Chapter, and being a SIAM Visiting Lecturer. + +# Judge's Commentary: The Outstanding Contest Judging Papers + +Donald E. Miller + +Department of Mathematics + +Saint Mary's College + +Notre Dame, IN 46556 + +dmiller@saintmarys.edu + +This year's problem, while framed in the terminology of a competition such as the MCM, finds application in several other areas of decision-making. One such area, itself a competition of sorts, occurs when a school makes its decisions on the award of scholarships. Another occurs in the screening of applicants for a specified position. In both of these situations, the potential exists for more nominally qualified applicants than positions. Thus, those making the decision, the judges in our problem, must either rank-order or otherwise quantify the applicants in an attempt to decide which ones are "best." Further, these decisions must be made under time constraints that make it impossible for each "judge" to evaluate every applicant; even if they were to do so, it is doubtful that the evaluations would be in complete agreement. It is this element that complicated evaluation of the contest papers. + +The assumption of an absolute rank-ordering made the problem seem deceptively easy, resulting in a broad range of papers, from quite simple to very elaborate. 
At one end of the spectrum, we found papers that assumed absolute rank-ordering and simply developed a heuristic solution to the basic problem; some even recognized that the assumption was unrealistic but chose to model the problem as stated, since that is what was requested. Others who used this assumption clearly didn't believe it, since they rejected a simple merge-sort in favor of more complicated algorithms that ignored the assumption. Still others attempted to refine the problem using theories ranging from topics in graph theory to concepts of fuzzy sets. Some of the better bias-elimination refinements included matrix reduction, regression with error terms, and scoring normalization with specified probability distributions.

The judges felt that the best models were those that solved the basic problem for 100 papers, 8 judges, and 3 winners, then produced a successful refinement with adequate complexity to model the process accurately but with enough simplicity for the model to be useful. Further, each model would be clearly stated and its use demonstrated with a simple example. Thus, the ideal paper would solve the basic problem and demonstrate that the solution was optimal, or at least close to optimal. It would then generalize the solution to accommodate different numbers of papers, judges, and winners. Having completed this, it would address judge bias and measure the success rate of the algorithm as a function of some quantitative measure of judge bias. It might then examine alternative algorithms, finding a relation between levels of judge bias and the success rates of these algorithms. It would continue by addressing the strengths and weaknesses of these methods, while being sure to address clearly all the points requested in the contest rules.
While many papers showed much insight into the problem and its complexities, the Outstanding papers were distinguished by the way that they addressed the problem of judge bias effectively, and with appropriate documentation, to allow for implementation of the recommended model. The team from the University of Science and Technology of China used Bayesian statistics, with a normal prior, to adjust for judge bias, then ran simulations to test the sensitivity of their method. Normal distributions, a common assumption, were also used by the Washington University team for both the intrinsic score of the paper and the judge bias. A distinguishing feature of the paper from Fudan University was its stability analysis for different levels of judge bias.

Finally, here is a note on good practice in mathematical modeling as related to this problem. Requests for models commonly include unrealistic assumptions that would make any model created with those assumptions of minimal use. Thus, it is necessary for the modeler to evaluate critically all assumptions and, if necessary, refine the problem to one that is realistic. Judges viewed the statement "there is an absolute rank-ordering to which all judges would agree" as such an assumption. Under this assumption, with 100 papers and 8 judges, there are methods of finding the top three papers, in order, with each judge reading at most 14 papers and with at most 109 papers read. In another situation and under different rules, the modeler would ask the author for elaboration on the statement before proceeding with the model. But in the absence of such consultation, the modeler should answer the question as stated and then refine it with realistic assumptions.

# About the Author

Donald Miller is Associate Professor and Chair of Mathematics at Saint Mary's College. He has served as an associate judge of the MCM for four years and prior to that mentored two Meritorious teams.
He has done considerable consulting and research in the areas of modeling and applied statistics. He is currently a member of SIAM's Education Committee and past president of the Indiana Section of the Mathematical Association of America.

# Contest Director's Commentary: Judging the MCM

Frank Giordano

COMAP, Inc.

57 Bedford St., Suite 210

Lexington, MA 02173

f.giordano@mail.comap.com

FRGiordano@aol.com

# Overview

Each paper submitted to the MCM is classified as Non-successful, Successful Participant, Honorable Mention, Meritorious, or Outstanding. Further, from among the Outstanding papers, judges from SIAM, INFORMS, and the MAA pick winners to receive awards from their societies. Typically, the top $2-3\%$ are classified as Outstanding, the next $15\%$ as Meritorious, and the next $25\%$ as Honorable Mention. A paper classified as Successful Participant is a complete paper that satisfies the rules of the contest.

The process of determining the classification of each paper consists of distinct phases: screening, grading, and judging. As detailed below, each of these phases employs a mixture of normalized grading, ranking, classification against a judge's "absolute ideal," and qualitative judging. We now describe each of the phases.

# Screening Rounds

Typically, there are three screening rounds. The primary purpose of the screening rounds is to identify papers that are not going to be among the top $43\%$ (Honorable Mention or above). After a preliminary reading of several papers (not graded), judges jointly design a 7-point grading scale. Discriminators used on the scale are designed to classify the papers into three categories. The top category reflects exceptional quality; the second consists of papers that the judge feels should be retained; and the third consists of papers that the judge thinks should be eliminated.
The grading scale emphasizes heavily the organization of the paper: What ideas have the contestants used in developing and analyzing their models? The summary is weighted heavily, as the contest rules require that it reflect the major ideas that the contestants used. Typically, a judge will devote 10-15 minutes to screening a paper.

The first two of the three screening rounds are accomplished by "Triage Judges." For the 1996 contest, the triage judging took place at Carroll College in Helena, MT. Typically, 10-14 judges participate in the triage judging, and each paper is read by one judge in each round. After the second round, papers that were judged in the lowest category by both of the judges who read the paper are eliminated. If both judges vote the paper weak but their scores differ by more than 2 points, a third judge classifies the paper before it is eliminated. Typically, about $25\%$ of the papers are eliminated after the first two screening rounds.

# Grading Rounds

Grading takes place in Claremont, California, by a different set of judges. Normalized scores from the first two screening rounds are used to rank-order the papers. The normalized scores are used to organize the papers into as many stacks as there are judges. Upon arrival at Claremont, judges are given a stratified packet of papers to read for "calibration" (not to grade). After reading the papers, the judges jointly design a 7-point screening scale. The judges then conduct a third and final screening round.

After the screening round at Claremont, the judges design a 100-point scale to use in the grading rounds. Typically, there are 4 grading rounds. A judge typically spends 30-45 minutes grading each paper. In addition to grading the paper, the judge is asked to rank it against all papers read in the round. For example, "2/7" would be the second best of 7 papers read in a grading round. Ranking requires the judge to pick out the top papers each round.
The judge is also required to assign an "absolute" classification: Successful Participant, Honorable Mention, Meritorious, or Outstanding. This classification permits the judges to render an opinion on the "absolute" quality of the paper independent of their grading and ranking procedures.

# Final Judging

After the three screening rounds and four final grading rounds, typically 6-12 papers remain in the contest. Time is then provided for judges to read papers that they have not yet read. The judges then meet to judge the merits of each of the remaining papers. Judges who have studied the papers debate the strong points and weak points of each paper. The papers are then compared against one another. By consensus, the Outstanding papers are chosen.

# Ranking the Papers

No attempt is made to rank the papers until after the first two screening rounds. Normalized scores are used for ranking the papers thereafter. Beginning with the third screening round, judges are required to rank the papers against all papers read that round. Additionally, they are required to judge the paper as Successful Participant, Honorable Mention, Meritorious, or Outstanding.

# Stratified Packets

Beginning with the third screening round, the papers are organized into as many stacks as there are judges. The papers are distributed modulo the number of judges, using the cumulative normalized scores. The cumulative scores are weighted, with the grading rounds counting more than the screening rounds.

# Eliminating Papers

Typically, about $25\%$ of the papers are eliminated after the first two screening rounds. Beginning with the third screening round, the Contest Director, the Associate Contest Director, and the two Head Judges meet to discuss the elimination of papers. The information that they use is

- the overall rank based on normalized scores,
- the round rank assigned each round, and
- the overall classification given to the paper by the judge.
Each item is quite useful in determining which papers should be eliminated. Since judges receive a stratified packet, the round ranks are especially useful. No "quota" is used to determine how many papers to cut each round. Typically, the top $43\%$ (Honorable Mention, Meritorious, and Outstanding) remain after the first grading round; the top $18\%$ (Meritorious and Outstanding) remain after the third grading round.

# The Judges

Typically there are 10-14 judges accomplishing the first two screening rounds (Triage). These judges are led by experienced Head Judges who have graded several years at Claremont before becoming a Head Triage Judge. Generally, calibration sessions are held, where papers are read but not graded. After the calibration reading, 7-point scales are developed. For the grading and judging at Claremont, 25 judges are chosen. The professional societies choose their own judges from among volunteers with strong credentials. COMAP rounds out the judging team by choosing a field of judges with a wide range of expertise. The judges selected are respected in the mathematical science community for their integrity and dedication. If needed, the field of judges is augmented with subject-matter experts.

# Conclusion

Judging a contest that receives as many creative student solutions as the MCM does is a difficult task. The judges deserve our gratitude for a difficult job that has been done well, and with great dedication and integrity.

# About the Author

Frank Giordano has been the Director of the MCM since 1991.

# Practitioner's Commentary: Computer Support for the MCM

Steve Harper

Mathematics Dept.

Carroll College

Helena, MT 59625

sharper@saints.carroll.edu

Judging the math modeling contest is mostly a human endeavor, with a proper place for a computer to help. Since human judges are, by definition, human, there need to be ways to watch for human bias in order to get fair contest results.
The current scheme that Contest Director Frank Giordano uses has evolved to address these concerns. To make a fair judgment, each judge needs to see excellent, good, and average papers. There are too many papers for each judge to read them all, and assigning papers by random draw will not always give a good balance. After being ranked using the scores from prior rounds, the papers are distributed to ensure that each judge gets a variety in quality.

Since different people do not give the same score to the same paper, the scores are weighted (to account for the natural human tendency to "grade low" or "grade high"). A more subtle problem is that people tend to "root" for a paper that may, for instance, resonate with how the judge would approach the problem (the "right answer," so to speak). Making the scoring scale 40-100, rather than 1-100, can reduce the impact of rooters who give low scores to other papers. Having the judge rank all the papers for that round also gives the contest director more control to ensure that a paper is not arbitrarily eliminated. For instance, if a paper with a low score has a ranking of 2nd out of 15 papers, it may deserve to stay for another round and another opinion. Furthermore, the ranking provides a way for judges to give a numerical score based on the established criteria yet still note that there are unquantifiable factors that make one paper rank better than another paper with a higher score.

The judging for this contest previously used a spreadsheet. This gave some last-minute flexibility, at the cost of data-entry errors plus late-round bleary eyes trying to line up too much numerical data that would not fit on one page. In 1996, the contest used a custom FoxPro database program with three database tables (for scores, for judge information and weightings, and for round names and elimination scores).

Judges are identified by letter codes rather than numbers, since numerical scores abound on the printed score sheets.
Each judge needs a random set of papers containing both good and average papers (that this judge has not read before). After the two triage rounds, the remaining contestant papers are split into four stratified layers based upon accumulated weighted scores. The computer tries to give an equal number of papers to each level and assigns papers so that each judge will get a variety of quality in papers. It also ensures that no judge reads the same paper twice. The contest director resolves any problems generated (such as the computer not assigning any paper to a judge if, by luck, the only papers left in the contest have all been read by that judge).

The computer prints out the judge assignments for the contest director in score order, and for the individual judges in document number order (so the judge doesn't know how the paper has fared so far).

The computer checks scores as they are recorded and alerts the human input operator to typing or recording errors. (The score that a judge intends to put on a paper should not be rendered invalid by a typo or two.) It notes problems if the operator tries to enter a score for a document that doesn't exist or is already out of the contest. It verifies that the actual judge is the same as was assigned (though this can be overridden). Other problems noted are scores out of range, changes to an existing score ("Is this really a change, or did you type the wrong document number?"), and ranking within a round (saying "This is the 5th best paper out of 4" will not pass inspection).

Some individual judges tend to grade harder or easier than others. A weighting formula (from Frank Giordano, who inherited it from Ben Fusaro) is designed to account for that with a minimum of effort.
The formula is:

$$
\text{Weighted Score} =
\begin{cases}
\dfrac{\text{Population Mean}}{\text{Judge's Mean}} \times \text{Raw Score}, & \text{if Population Mean} \leq \text{Judge's Mean}; \\[6pt]
\dfrac{100 - \text{Population Mean}}{100 - \text{Judge's Mean}} \times (\text{Raw Score} - 100) + 100, & \text{otherwise}.
\end{cases}
$$

(Note: no weighted score can be greater than 100, no matter how tough the judge is on the other papers.)

To allow for human modification, this weighting is applied in two steps. The program first calculates the judge's ratio, and the contest director can then individually adjust the judge's weighting. For example, a judge could read only a couple of papers before getting an emergency phone call and having to leave, and the weighting may be too far off (in the opinion of the contest director) to be useful.

Next the weighting ratio that goes with each judge is applied to each document that the judge scored. Again, the contest director can adjust the weighted score for any one paper, if there is good reason to do so.

To calculate the weighting, the only papers considered are the ones still left in the contest. All prior rounds for those papers are considered in calculating the weighting for each judge. Then the new weighted score is figured retroactively for the prior rounds to obtain a new weighted total. Thus, in each round, the weighting has to be recalculated. (Note: In each round, the low-scoring papers drop out. While the number of recorded scores is increasing, the number of papers left is decreasing, so the total number of scores included may go up or down. There always is a concern with weighting too small a sample.)
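The two branches of this formula can be sketched in a few lines of Python (the function name and example values are ours for illustration; the contest software was a FoxPro database):

```python
def weighted_score(raw_score: float, judge_mean: float, population_mean: float) -> float:
    """Adjust a judge's raw score toward the population mean.

    A lenient judge (mean above the population's) has scores scaled down
    toward 0; a tough judge (mean below the population's) has the gap
    between each score and 100 stretched, so no weighted score exceeds 100.
    """
    if population_mean <= judge_mean:
        # Lenient judge: shrink scores proportionally.
        return population_mean / judge_mean * raw_score
    # Tough judge: stretch the distance below 100.
    return (100 - population_mean) / (100 - judge_mean) * (raw_score - 100) + 100

# A tough judge (mean 50) giving 80 when the population mean is 70:
# (100 - 70) / (100 - 50) * (80 - 100) + 100 = 0.6 * (-20) + 100 = 88
```

Note how a raw 100 stays at 100 under the second branch, which is exactly the property claimed in the parenthetical note above.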
+ +The length of time that a judge spends on the paper is also a factor, since a score for a Triage Round reading of 5 minutes does not deserve the same confidence as a 30-minute reading in the Finals. + +Then, based on the weighted score, the computer suggests which papers to eliminate. (The elimination scheme is stored in the database and is easy to change.) It calculates what score will leave a (preselected) percentage of the original contestants after the current round, compares that to a (preselected) minimum score, and suggests the higher number. + +After review, the contest director can decide to draw the elimination line in a different place, as well as check the set of scores for individual papers below the line to decide which papers deserve another chance. (If a paper got a 99 and a 40, it would probably deserve another read before being tossed out of the contest.) + +The percentage and minimum scores for the 1996 contest are shown in Table 1. + +Table 1. Percentage and minimum scores for judging rounds of the 1996 contest. + +
| Round | Score range | Minimum continuation score | Percentage remaining |
|-----------|---------|-----|-----|
| Triage 1 | 0–7 | 1 | 99% |
| Triage 2 | 0–7 | 6 | 90% |
| Screening | 0–7 | 12 | 43% |
| Final 1 | 20–50 | 25 | 30% |
| Final 2 | 40–100 | 85 | 18% |
| Final 3 | 40–100 | 150 | 10% |
| Final 4 | 40–100 | 220 | 6% |
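The elimination suggestion described above (keep a preselected percentage of the original field, but never cut below a preselected minimum total) reduces to a small computation. A sketch, with hypothetical scores, since the actual program was a FoxPro database:

```python
def suggest_cutoff(weighted_totals, original_count, keep_fraction, minimum_score):
    """Suggest an elimination score for the current round.

    Finds the score that would leave `keep_fraction` of the original
    contestants, compares it with the preselected minimum score, and
    returns the higher of the two (papers at or above the cutoff stay).
    """
    ranked = sorted(weighted_totals, reverse=True)
    keep = min(len(ranked), max(1, round(original_count * keep_fraction)))
    percentile_cutoff = ranked[keep - 1]
    return max(percentile_cutoff, minimum_score)
```

For example, with the Final 2 parameters from Table 1 (18% remaining, minimum 85), six surviving totals `[92, 88, 86, 84, 80, 75]` out of an original field of 20 give a percentile cutoff of 84, so the suggested cutoff is the minimum score, 85. The contest director can still move the line, as described above.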
+ +The computer also records the final classification based on when the document is eliminated: + +- Survive Final 4: Outstanding +- Survive Final 3: Meritorious +- Survive Final 2: Meritorious +- Survive Final 1: Honorable Mention +- Survive Screening: Honorable Mention +- Survive Triage 1: Successful + +The contest director can review these classifications and change them. + +To help the contest director make elimination decisions, the computer prints out reports in weighted score order. At all times, a printout is available (in document number order) to answer the question, "Whatever happened to paper number such and such?" + +Using this database program solves many problems. However, as one would expect, there is a cost—flexibility. The data are not readily accessible to noncomputer folks, so changing the structure of how the contest is judged is not too easy. It was not feasible in the time allotted to make a program generally usable for every judging situation possible. Given the stability of the contest over the last several years plus time constraints, trading future flexibility to solve present needs (for data entry, judge assignment, score weighting and elimination) seemed a fair trade. + +Are there improvements? Are you kidding? The final round ends with judges arguing over the top $3\%$ of papers about which ones are really Outstanding and which of those deserve special awards. For all judges to participate, each needs to have read several of the top papers—more than just luck would allocate. Next year's program already has a better judge assignment algorithm in place! + +# About the Author + +Steve Harper teaches at Carroll College in Helena, MT, where students are encouraged to have a variety of background work in areas other than just computing. (Steve himself has a background in accounting, politics, wind energy, and consulting.) He designed and coded the database and program for the 1996 contest. 
+ +# THE MATHEMATICAL + +# CONTEST IN + +# MODELING + +# FEBRUARY 7-10, 1997 + +The thirteenth annual international Mathematical Contest in Modeling will be held February 7-10, 1997. The contest will offer students the opportunity to compete in a team setting, using mathematics to solve real-world problems. + +For registration information, contact: + +MCM 1997, COMAP, Inc., Suite 210, 57 Bedford Street, Lexington, MA 02173 + +email: mcm@comap.com voice: 617-862-7878 + +Major funding provided by the National Security Agency. \ No newline at end of file diff --git a/MCM/1995-2008/1997MCM/1997MCM.md b/MCM/1995-2008/1997MCM/1997MCM.md new file mode 100644 index 0000000000000000000000000000000000000000..32db53e1773d8e4e6cf2748db31cbdf125c538ef --- /dev/null +++ b/MCM/1995-2008/1997MCM/1997MCM.md @@ -0,0 +1,3547 @@ +# The U + +# M + +Publisher + +COMAP, Inc. + +Executive Publisher + +Solomon A. Garfunkel + +Editor + +Paul J. Campbell + +Campus Box 194 + +Beloit College + +700 College St. + +Beloit, WI 53511-5595 + +campbell@beloit.edu + +On Jargon Editor + +Yves Nievergelt + +Department of Mathematics + +Eastern Washington University + +Cheney, WA 99004 + +ynievergelt@ewu.edu + +Reviews Editor + +James M. Cargal + +Mathematics Dept. + +Troy State University + +Montgomery + +P.O. Drawer 4419 + +Montgomery, AL 36103 + +JMCargal@aol.com + +Development Director + +Laurie W. Aragon + +Creative Director + +Roger Slade + +Production Manager + +George W. Ward + +Project Manager + +Roland Cheyney + +Copy Editors + +Seth A. Maislin + +Emily T. Sacca + +Distribution Manager + +George Jones + +Production Secretary + +Gail Wessell + +Graphic Designer + +Daiva Kiliulis + +# AP Journal + +Vol. 18, No. 3 + +# Associate Editors + +Don Adolphson + +Ron Barnes + +Arthur Benjamin + +James M. Cargal + +Murray K. Clayton + +Courtney S. Coleman + +Linda L. Deneen + +Leah Edelstein-Keshet + +James P. Fink + +Solomon A. Garfunkel + +William B. Gearhart + +William C. 
Giauque + +Richard Haberman + +Charles E. Lienert + +Peter A. Lindstrom + +Walter Meyer + +Gary Musser + +Yves Nievergelt + +John S. Robertson + +Garry H. Rodrigue + +Ned W. Schillow + +Philip D. Straffin + +J.T. Sutcliffe + +Donna M. Szott + +Gerald D. Taylor + +Maynard Thompson + +Ken Travers + +Robert E.D. ("Gene") Woolsey + +Brigham Young University + +University of Houston-Downtown + +Harvey Mudd College + +Troy State University Montgomery + +University of Wisconsin—Madison + +Harvey Mudd College + +University of Minnesota, Duluth + +University of British Columbia + +Gettysburg College + +COMAP, Inc. + +California State University, Fullerton + +Brigham Young University + +Southern Methodist University + +Metropolitan State College + +North Lake College + +Adelphi University + +Oregon State University + +Eastern Washington University + +Georgia College and State University + +Lawrence Livermore Laboratory + +Lehigh Carbon Community College +Beloit College + +St. Mark's School, Dallas + +Comm. College of Allegheny County + +Colorado State University + +Indiana University + +University of Illinois + +Colorado School of Mines + +# Subscription Rates for 1998 Calendar Year: Volume 19 + +Individuals subscribe to The UMAP Journal through COMAP's MEMBERSHIP PLUS. This subscription includes quarterly issues of The UMAP Journal, our annual collection UMAP Modules: Tools for Teaching, our organizational newsletter Consortium, a $10\%$ discount on COMAP materials, and a choice of free materials from our extensive list of products. + +(Domestic) #MP9920 $64 + +# MEMBERSHIP PLUS FOR INDIVIDUAL SUBSCRIBERS + +(Foreign) #MP9921 $74 + +Institutions can subscribe to the Journal through either Institutional Membership or a Library Subscription. Institutional Members receive two copies of each of the quarterly issues of The UMAP Journal, our annual collection UMAP Modules: Tools for Teaching, and our organizational newsletter Consortium. 
They also receive a $10\%$ discount on COMAP materials and a choice of free materials from our extensive list of products. + +(Domestic) #UJ9940 $165 + +# INSTITUTIONAL MEMBERSHIP SUBSCRIBERS + +(Foreign) #UJ9941 $185 + +The Library Subscription includes quarterly issues of The UMAP Journal and our annual collection UMAP Modules: Tools for Teaching. + +(Domestic) #UJ9930 $140 + +# LIBRARY SUBSCRIPTIONS + +(Foreign) #UJ9931 $160 + +To order, send a check or money order to COMAP, or call toll-free 1-800-77-COMAP (1-800-772-6627). + +The UMAP Journal is published quarterly by the Consortium for Mathematics and Its Applications (COMAP), Inc., Suite 210, 57 Bedford Street, Lexington, MA, 02173, in cooperation with the American Mathematical Association of Two-Year Colleges (AMATYC), the Mathematical Association of America (MAA), the National Council of Teachers of Mathematics (NCTM), the American Statistical Association (ASA), the Society for Industrial and Applied Mathematics (SIAM), and The Institute for Operations Research and the Management Sciences (INFORMS). The Journal acquaints readers with a wide variety of professional applications of the mathematical sciences and provides a forum for the discussion of new directions in mathematical education (ISSN 0197-3622). + +Second-class postage paid at Boston, MA + +and at additional mailing offices. + +Send address changes to: + +The UMAP Journal + +COMAP, Inc. + +57 Bedford Street, Suite 210, Lexington, MA 02173 + +© Copyright 1997 by COMAP, Inc. All rights reserved. + +# Vol. 18, No. 3 1997 + +# Table of Contents + +# Publisher's Editorial + +Full Plate Solomon A. Garfunkel 187 + +# Modeling Forum + +Results of the 1997 Mathematical Contest in Modeling Frank Giordano. 191 + +# The Velociraptor Problem + +Pursuit-Evasion Games in the Late Cretaceous +Edward L. Hamilton, Shawn A. Menninga, and David Tong 213 + +The Geometry and the Game Theory of Chases +Charlene S. 
Ahn, Edward Boas, and Benjamin Rahn 225 + +Gone Huntin': Modeling Optimal Predator and Prey Strategies +Hei (Celia) Chan, Robert A. Moody, and David Young 243 + +Lunch on the Run Gordon Bower, Orion Lawler, and James Long 255 + +A Three-Phase Model for Predator-Prey Analysis +Lance Finney, Jade Vinson, and Derek Zaba 277 + +Judge's Commentary: The Outstanding Velociraptor Papers +John S. Robertson 293 + +# The Mix Well for Fruitful Discussions Problem + +An Assignment Model for Fruitful Discussions Han Cao, Hui Yang, and Zheng Shi 297 + +Using Simulated Annealing to Solve the Discussion Groups Problem David Castro, John Renze, and Nicholas Weininger 307 + +Meetings, Bloody Meetings! +Joshua M. Horstman, Jamie Kawabata, and James C. Moore, IV....321 + +A Greedy Algorithm for Solving Meeting Mixing Problems Adrian Corduneanu, Cyrus C. Hsia, and Ryan O'Donnell 331 + +Judge's Commentary: The Outstanding Discussion Groups Papers Donald E. Miller 343 + +Practitioner's Commentary: The Outstanding Discussion Groups Papers Vijay Mehrotra 347 + +COMAP ANNOUNCES + +# THE MATHEMATICAL CONTEST IN MODELING + +FEBRUARY 6-9, 1998 + +The fourteenth annual international Mathematical Contest in Modeling will be held February 6-9, 1998. The contest will offer students the opportunity to compete in a team setting, using mathematics to solve real-world problems. + +For registration information, contact: + +Attn: Clarice Callahan + +MCM, COMAP, Inc., Suite 210, 57 Bedford Street, Lexington, MA 02173 + +email: mcm@comap.com voice: 781/862-7878 ext. 37 + +Major funding provided by the National Security Agency. + +Additional support for this project is provided by the Institute for Operations Research and the Management Sciences, the Society for Industrial and Applied Mathematics, and the Mathematical Association of America. + +# Publisher's Editorial Full Plate + +Solomon A. Garfunkel + +Executive Director + +COMAP, Inc. 
+ +57 Bedford St., Suite 210 + +Lexington, MA 02173 + +s.garfunkel@mail.comap.com + +# ARISE: A New High-School Curriculum + +This is an incredibly exciting time for COMAP. The first in our series of ARISE Project texts has come out: Mathematics: Modeling Our World—Course 1, published by South-Western. Course 2 will be available in early March and Course 3 by next summer. Each course has a student text, an annotated teacher's edition, a teacher's resource guide, a solutions manual, a CD-ROM with all of our calculator and computer software, and a videotape containing introductory segments for each chapter. These texts represent the culmination of over five years of effort by project staff and by our author and field-test teams. + +With the launching of this new comprehensive secondary-school curriculum, our work has just begun. Now is the time to spread the word—leadership institutes, presentations at regional and national meetings, and teacher training sessions. A great deal of our energies over the coming months and years will be devoted to putting the show on the road. For COMAP this represents new territory. We cannot rely on the slogan from Field of Dreams, "If you build it, they will come." We need to have a presence in the community, showing our work and explaining the goals that we are trying to attain. + +ARISE began as a standards-based curriculum, part of the larger reform movement to change the content, applications, pedagogy, technology, and assessment of high-school mathematics. But what has emerged in ARISE, perhaps not surprisingly, is an integrated curriculum built around mathematical modeling. As with all of COMAP's materials, ARISE is a rigorous program; and also as with all of our materials, the applications and models are real, for both students and teachers. + +The UMAP Journal 18 (3) (1997) 187-189. ©Copyright 1997 by COMAP, Inc. All rights reserved. 
Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice. Abstracting with credit is permitted, but copyrights for components of this work owned by others than COMAP must be honored. To copy otherwise, to republish, to post on servers, or to redistribute to lists requires prior permission from COMAP. + +There has been a great deal written of late about "fuzzy math," "whole math," "politically correct math," etc. The notion that critics have put forth is that by using technology and centering on constructivist theories of learning, reformers have short-changed basic skills; that in the name of conceptual understanding for all, the curriculum is being watered down. Nonsense! + +I cannot speak for all of the new curricula being written, but Mathematics: Modeling Our World is a great deal more rigorous and richer than the standard Algebra 1—Geometry—Algebra 2 sequence that schools now use. I honestly believe that if students entered college or the world of work having mastered these new ARISE materials, there would be reason to celebrate. One goal of ARISE has always been to increase the mathematical understanding of as large a percentage of students as possible; we believe (and have early evidence to show) that this can be achieved without diminishing the level and rigor of the curriculum. + +As you can tell from the above remarks, it is difficult to work this hard and this long on a project without believing in it completely. This is an occupational hazard of working in educational reform. But all proselytizing aside, we hope that everyone will have a chance to look at these materials and give them a fair trial for what we have achieved or, at least, hoped to achieve. + +# HiMCM: An MCM for High Schools + +There are two new COMAP projects underway that I want to describe. 
The first of these, called HiMCM, is under the direction of Frank Giordano, the retired chair of the U.S. Military Academy at West Point Mathematics Dept. who is now (I'm happy to say) at COMAP. HiMCM is a three-year grant from NSF to plan and test a high school version of the Mathematical Contest in Modeling. We are working closely with all of the major professional societies, with the goal of having them cooperate on running the formal contest nationwide once we have developed a model that we all feel works well. Unlike the undergraduate contest, as you might imagine, there are a number of issues of access (to technology, libraries, classrooms, etc.) and equity at the secondary level which need to be worked out. We expect to have a planning year and then run one or more versions of the contest with a set of test schools to refine our plans. We are tremendously excited by the opportunity. We feel that the undergraduate MCM, now in its 14th year, has been one of our most successful and influential programs. We look forward to having a similar influence on the secondary school community. + +# MathServe: Sharing Mathematical Expertise + +The second project that I am pleased to announce is MathServe, under the direction of Uri Treisman (Dana Center, University of Texas at Austin), Frank Giordano, and myself. This program has been funded for three years by the Alfred P. Sloan Foundation. The purpose of MathServe is to promote discipline-based service to the community by mathematics faculty and students, primarily at the undergraduate level. Service frequently has meant helping to paint a community center or volunteering to head a Girl Scout troop. But mathematics as a discipline can and should be used in community service programs--for example, helping to plan an efficient school bus route or to devise an inventory program for a local hospital. MathServe will try to connect the service and volunteer community with mathematics departments. 
The centerpiece of the program will be an annual set of awards for the most effective projects, along with a special publication (perhaps an additional issue of *The UMAP Journal*) containing descriptions of those judged to be outstanding. + +As you can see, COMAP has a pretty full plate. We are extremely gratified to be able to work on so many exciting ideas. And as always, we are most grateful to all of you who continue to work with us to improve the teaching and learning of mathematics at all educational levels. + +# About the Author + +Sol Garfunkel received his Ph.D. in mathematical logic from the University of Wisconsin in 1967. He was at Cornell University and at the University of Connecticut at Storrs for eleven years and has dedicated the last 20 years to research and development efforts in mathematics education. He has been the Executive Director of COMAP since its inception in 1980. + +He has directed a wide variety of projects, including UMAP (Undergraduate Mathematics and Its Applications Project), which led to the founding of this Journal, and HiMAP (High School Mathematics and Its Applications Project), both funded by the NSF. For Annenberg/CPB, he directed three telecourse projects: For All Practical Purposes (in which he appeared as the on-camera host), Against All Odds: Inside Statistics, and In Simplest Terms: College Algebra. He is currently co-director of the Applications Reform in Secondary Education (ARISE) project, a comprehensive curriculum development project for secondary school mathematics. + +# Reminder from the Editor + +Through August 1998, manuscripts and editorial correspondence should go to: + +Paul J. Campbell + +c/o Lst. Prof. 
Pukelsheim

Institut für Mathematik der Universität Augsburg

Universitätstrasse 14

D-86135 Augsburg

Germany

voice: 011-49-821-598-2162 fax: 011-49-821-598-2280

email: campbell@math.uni-augsburg.de

www: http://cs.beloit.edu/campbell/

# Modeling Forum

# Results of the 1997 Mathematical Contest in Modeling

Frank Giordano, MCM Director

COMAP, Inc.

57 Bedford St., Suite 210

Lexington, MA 02173

f.giordano@mail.comap.com

# Introduction

A total of 409 teams of undergraduates, from 226 schools, spent the second weekend in February working on applied mathematics problems. They were part of the thirteenth Mathematical Contest in Modeling (MCM). On Friday morning, the MCM faculty advisor opened a packet and presented each team of three students with a choice of one of two problems. After a weekend of hard work, typed solution papers were mailed to COMAP on Monday. Nine of the top papers appear in this issue of The UMAP Journal.

Results and winning papers from the first twelve contests were published in special issues of Mathematical Modeling (1985-1987) and The UMAP Journal (1985-1996). The 1994 volume of Tools for Teaching, commemorating the tenth anniversary of the contest, contains all of the 20 problems used in the first ten years of the contest and a winning paper for each. Limited quantities of that volume and of the special MCM issues of the Journal for the last few years are available from COMAP.

# Problem A: The Velociraptor Problem

The velociraptor, Velociraptor mongoliensis, was a predatory dinosaur that lived during the late Cretaceous period, approximately 75 million years ago. Paleontologists think that it was a very tenacious hunter and may have hunted in pairs or larger packs. Unfortunately, there is no way to observe its hunting behavior in the wild, as can be done with modern mammalian predators.
A group of paleontologists has approached your team and asked for help in modeling the hunting behavior of the velociraptor. They hope to compare your results with field data reported by biologists studying the behaviors of lions, tigers, and similar predatory animals. + +The average adult velociraptor was $3\mathrm{m}$ long with a hip height of $0.5\mathrm{m}$ and an approximate mass of $45\mathrm{kg}$ . It is estimated that the animal could run extremely fast, at speeds of $60\mathrm{km/hr}$ , for about 15 sec. After the initial burst of speed, the animal needed to stop and recover from a buildup of lactic acid in its muscles. + +Suppose that velociraptor preyed on Thescelosaurus neglectus, a herbivorous biped approximately the same size as the velociraptor. A biomechanical analysis of a fossilized thescelosaurus indicates that it could run at a speed of about $50\mathrm{km/hr}$ for long periods of time. + +# Part 1 + +Assuming the velociraptor is a solitary hunter, design a mathematical model that describes a hunting strategy for a single velociraptor stalking and chasing a single thescelosaurus as well as the evasive strategy of the prey. Assume that the thescelosaurus can always detect the velociraptor when it comes within $15\mathrm{m}$ but may detect the predator at even greater ranges (up to $50\mathrm{m}$ ) depending upon the habitat and weather conditions. Additionally, due to its physical structure and strength, the velociraptor has a limited turning radius when running at full speed. This radius is estimated to be three times the animal's hip height. On the other hand, the thescelosaurus is extremely agile and has a turning radius of $0.5\mathrm{m}$ . + +# Part 2 + +Assuming more realistically that the velociraptor hunted in pairs, design a new model that describes a hunting strategy for two velociraptors stalking and chasing a single thescelosaurus as well as the evasive strategy of the prey. Use the other assumptions and limitations given in Part 1. 
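The speed, endurance, and turning-radius limits above translate directly into a discrete-time chase simulation. A minimal sketch (the structure, parameter names, and pure-pursuit/straight-flight strategies here are our own illustration, not any contest team's model):

```python
import math

DT = 0.1                  # time step (s)
PRED_SPEED = 60 / 3.6     # velociraptor: 60 km/hr in m/s
PREY_SPEED = 50 / 3.6     # thescelosaurus: 50 km/hr in m/s
PRED_TURN_R = 3 * 0.5     # predator turning radius: 3 x hip height (m)
PREY_TURN_R = 0.5         # prey turning radius (m)

def max_turn(speed, radius):
    """Largest heading change (rad) possible in one step at this speed and radius."""
    return speed * DT / radius

def step(pos, heading, speed, radius, desired_heading):
    """Turn toward desired_heading as sharply as the radius allows, then move."""
    delta = (desired_heading - heading + math.pi) % (2 * math.pi) - math.pi
    limit = max_turn(speed, radius)
    heading += max(-limit, min(limit, delta))
    return ((pos[0] + speed * DT * math.cos(heading),
             pos[1] + speed * DT * math.sin(heading)), heading)

def chase(prey_start, burst_sec=15.0, catch_dist=1.0):
    """Pure pursuit vs. prey fleeing straight away; True if caught within the burst."""
    pred, prey = (0.0, 0.0), prey_start
    pred_h = prey_h = 0.0
    for _ in range(int(burst_sec / DT)):
        bearing = math.atan2(prey[1] - pred[1], prey[0] - pred[0])
        # Predator turns toward the prey; prey flees along the same ray.
        pred, pred_h = step(pred, pred_h, PRED_SPEED, PRED_TURN_R, bearing)
        prey, prey_h = step(prey, prey_h, PREY_SPEED, PREY_TURN_R, bearing)
        if math.dist(pred, prey) < catch_dist:
            return True
    return False
```

With these numbers, a straight-line chase closes the 10 km/hr speed gap at about 2.8 m/s, so prey detected at the minimum 15 m range is overtaken well inside the 15-second burst, while prey alerted at 50 m escapes; the interesting strategies exploit the prey's far tighter turning radius.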
+ +# Problem B: Mix Well For Fruitful Discussions + +Small group meetings for the discussion of important issues, particularly long-range planning, are gaining popularity. It is believed that large groups discourage productive discussion and that a dominant personality will usually control and direct the discussion. Thus, in corporate board meetings, the board will meet in small groups to discuss issues before meeting as a whole. These smaller groups still run the risk of control by a dominant personality. In an + +attempt to reduce this danger, it is common to schedule several sessions with a different mix of people in each group. + +A meeting of An Tostal Corporation will be attended by 29 board members of which nine are in-house members (i.e., corporate employees). The meeting is to be an all-day affair with three sessions scheduled for the morning and four for the afternoon. Each session will take 45 minutes, beginning on the hour from 9:00 A.M. to 4:00 P.M., with lunch scheduled at noon. Each morning session will consist of six discussion groups with each discussion group led by one of the corporation's six senior officers. None of these officers is a board member. Thus, each senior officer will lead three different discussion groups. The senior officers will not be involved in the afternoon sessions, and each of these sessions will consist of only four different discussion groups. + +The president of the corporation wants a list of board-member assignments to discussion groups for each of the seven sessions. The assignments should achieve as much of a mix of the members as possible. The ideal assignment would have each board member with each other board member in a discussion group the same number of times while minimizing common membership of groups for the different sessions. The assignments should also satisfy the following criteria: + +1. For the morning sessions, no board member should be in the same senior officer's discussion group twice. +2. 
No discussion group should contain a disproportionate number of in-house members. + +Give a list of assignments for members 1-9 and 10-29 and officers 1-6. Indicate how well the criteria in the previous paragraphs are met. Since it is possible that some board members will cancel at the last minute or that some not scheduled will show up, an algorithm that the secretary could use to adjust the assignments with an hour's notice would be appreciated. It would be ideal if the algorithm could also be used to make assignments for future meetings involving different levels of participation for each type of attendee. + +# The Results + +The solution papers were coded at COMAP headquarters so that names and affiliations of the authors would be unknown to the judges. Each paper was then read preliminarily by two "triage" judges at Southern Connecticut State University (Problem A) or at Carroll College, Montana (Problem B). At the triage stage, the summary and overall organization are the basis for judging a paper. If the judges' scores diverged for a paper, the judges conferred; if they still did not agree on a score, a third judge evaluated the paper. + +Final judging took place at Harvey Mudd College, Claremont, California. The judges classified the papers as follows: + +
| | Outstanding | Meritorious | Honorable Mention | Successful Participation | Total |
|---|---|---|---|---|---|
| Velociraptor | 5 | 37 | 58 | 134 | 234 |
| Discussion Groups | 4 | 25 | 43 | 103 | 175 |
| Total | 9 | 62 | 101 | 237 | 409 |
+ +The nine papers that the judges designated as Outstanding appear in this special issue of The UMAP Journal, together with commentaries. We list those teams and the Meritorious teams (and advisors) below; the list of all participating schools, advisors, and results is in the Appendix. + +# Outstanding Teams + +Institution and Advisor + +Team Members + +# Velociraptor Papers + +"Pursuit-Evasion Games in the Late Cretaceous" + +Calvin College + +Edward L. Hamilton + +Grand Rapids, MI + +Shawn A. Meninga + +Gary W. Talsma + +David Tong + +"The Geometry and the Game Theory of Chases" + +Harvard University + +Charlene S. Ahn + +Cambridge, MA + +Edward Boas + +Howard Georgi + +Benjamin Rahn + +"Gone Huntin': Modeling Optimal Predator and Prey Strategies" + +Pomona College + +Hei (Celia) Chan + +Claremont, CA + +Robert A. Moody + +Richard Elderkin + +David Young + +"Lunch on the Run" + +University of Alaska Fairbanks + +Gordon Bower + +Fairbanks, AK + +Orion Lawler + +John P. Lambert + +James Long + +"A Three-Phase Model for Predator-Prey Analysis + +Washington University + +St. Louis, MO + +Hiro Mukai + +Lance Finney + +Jade Vinson + +Derek Zaba + +# Discussion Groups Papers + +"An Assignment Model for Fruitful Discussions" + +East China Univ. of Science and Technology + +Shanghai, China + +Lu Xiwen + +Han Cao + +Hui Yang + +Zheng Shi + +"Using Simulated Annealing to Solve the Discussion Groups Problem" + +Macalester College + +St. Paul, MN + +Karla V. Ballman + +David Castro + +John Renze + +Nicholas Weininger + +"Meetings, Bloody Meetings!" + +Rose-Hulman Institute of Technology + +Terre Haute, IN + +Aaron D. Klebanoff + +Joshua M. Horstman + +Jamie Kawabata + +James C. Moore, IV + +"A Greedy Algorithm for Solving Meeting Mixing Problems" + +University of Toronto + +Toronto, Ontario, Canada + +Nicholas A. Derzko + +Adrian Corduneanu + +Cyrus C. Hsia + +Ryan O'Donnell + +# Meritorious Teams + +Velociraptor Papers (37 teams) + +Beijing Univ. 
of Aeronautics and Astronautics, Beijing, China (Li Weiguo)

Brandon University, Brandon, Manitoba, Canada (Doug Pickering)

California Polytechnic State Univ., San Luis Obispo, CA (Thomas O'Neil) (two teams)

Dalian University of Technology, Dalian, Liaoning, China (He Ming-Feng)

Duke University, Durham, NC (David P. Kraines)

East China Univ. of Science and Technology, Shanghai, China (Shao Nianci)

Experimental High School of Beijing, China (Zhang Jilin)

Goucher College, Baltimore, MD (Megan Deeney)

Harvey Mudd College, Claremont, CA (David L. Bosley)

Hebei Institute of Technology, Tangshan, Hebei, China (Wan Xinghuo)

Hope College, Holland, MI (Ronald Van Iwaarden)

Macalester College, St. Paul, MN (Daniel A. Schwalbe)

N.C. School of Science and Mathematics, Durham, NC (Dot Doyle)

Nankai University, Tianjin, China (Ruan Jishou)

National Univ. of Defence Technology, Chang Sha, Hunan, China (Cheng LiZhi)

North Carolina State University, Raleigh, NC (Robert T. Ramsay)

Rose-Hulman Inst. of Technology, Terre Haute, IN (George Berzsenyi)

Seattle Pacific University, Seattle, WA (Steven D. Johnson)

Southeast University, Nanjing, Jiangsu, China (Xu Liang)

Swarthmore College, Swarthmore, PA (Stephen B. Maurer)

Trinity University, San Antonio, TX (Diane G. Saphire)

Tsinghua University, Beijing, China (Wang Siqun)

United States Air Force Academy, USAF Academy, CO (Scott G. Frickenstein)

Univ. of Science and Technology of China, Hefei, Anhui, China (Yu Feng)

Univ. of Science and Technology of China, Hefei, Anhui, China (Yu Tianyue)

University of Colorado, Boulder, CO (Anne M. Dougherty)

University of Puget Sound, Tacoma, WA (Robert A. Beezer)

University of Saskatchewan, Saskatoon, Canada (Raj Srinivasan)

Wake Forest University, Winston-Salem, NC (Stephen B. Robinson)

Wake Forest University, Winston-Salem, NC (Edward Allen)

Washington University, St. Louis, MO (Hiro Mukai)

Western Washington University, Bellingham, WA (Igor Averbakh)

Western Washington University, Bellingham, WA (Saim Ural)

Wuhan Univ. of Hydraulics and Engineering, Wuhan, Hubei, China (Peng Zhuzeng)

Youngstown State University, Youngstown, OH (J. Douglas Faires)

Zhejiang University, Hangzhou, China (Fang Daoyuan)

Discussion Groups Papers (25 teams)

Colorado College, Colorado Springs, CO (Deborah P. Levinson)

David Lipscomb University, Nashville, TN (Gary C. Hall)

Eastern Mennonite University, Harrisonburg, VA (John L. Horst)

Eastern Oregon State College, LaGrande, OR (Holly S. Zullo)

Eastern Oregon State College, LaGrande, OR (Mark R. Parker)

Gettysburg College, Gettysburg, PA (James P. Fink)

Graceland College, Lamoni, IA (Ronald K. Smith)

Grinnell College, Grinnell, IA (Thomas L. Moore)

Harvey Mudd College, Claremont, CA (David L. Bosley)

Hebei Institute of Technology, Tangshan, Hebei, China (Liu Baoxiang)

Hiram College, Hiram, OH (Larry Becker)

Ithaca College, Ithaca, NY (James E. Conklin)

Kenyon College, Gambier, OH (Brian D. Jones)

National Univ. of Defence Technology, Chang Sha, Hunan, China (Wu MengDa)

Peking University, Beijing, China (Huang Sheng)

South China Univ. of Technology, Guangzhou, China (Hao Zhifeng)

Southeast University, Nanjing, Jiangsu, China (Zhu Dao-yuan)

Trinity College Dublin, Dublin, Ireland (Timothy G. Murphy)

Univ. of Northern Colorado, Greeley, CO (William W. Bosch)

University College Cork, Cork, Ireland (Martin Stynes)

University College Cork, Cork, Ireland (Gareth Thomas)

University of Colorado, Boulder, CO (Bengt Fornberg)

University of Richmond, Richmond, VA (James Davis)

University of Southern Queensland, Toowoomba, Queensland, Australia (Christopher J. Harman)

Xidian University, Xian, Shaanxi, China (Mao Yong-cai)

# Awards and Contributions

Each participating MCM advisor and team member received a certificate signed by the Contest Director and the appropriate Head Judge.
+ +INFORMS, the Institute for Operations Research and the Management Sciences, awarded to each member of two Outstanding teams a cash award and a three-year membership. The teams were from Calvin College (Velociraptor Problem) and Rose-Hulman Institute of Technology (Discussion Groups Problem). Moreover, INFORMS gave free one-year memberships to all members of Meritorious and Honorable Mention teams. + +The Society for Industrial and Applied Mathematics (SIAM) designated one Outstanding team from each problem as a SIAM Winner. Each team member received a cash prize. The teams were from Washington University (Velociraptor Problem) and from University of Toronto (Discussion Groups Problem). Both teams presented their results at a special Minisymposium at the SIAM Annual Meeting at Stanford University in July. + +The Mathematical Association of America (MAA) designated one Outstanding team from each problem as an MAA Winner. The teams were from Harvard University (Velociraptor Problem) and Macalester College (Discussion Groups Problem). The Macalester team gave a presentation at a special session of MAA Mathfest in Atlanta, GA, in August. + +# Judging + +Director + +Frank R. Giordano, COMAP, Lexington, MA + +Associate Directors + +Chris Arney, Dept. of Mathematical Sciences, U.S. Military Academy, West Point, NY + +Robert L. Borrelli, Mathematics Dept., Harvey Mudd College, Claremont, CA + +William Fox, Dept. of Mathematical Sciences, U.S. Military Academy, West Point, NY + +# Velociraptor Problem + +Head Judge + +Marvin S. Keener, Mathematics Dept., Oklahoma State University, Stillwater, OK + +Associate Judges + +James Case, Baltimore, Maryland + +Alessandra Chiareli, Computational Science Center, 3M, St. Paul, MN + +Courtney Coleman, Mathematics Dept., Harvey Mudd College, Claremont, CA + +Patrick Driscoll, Dept. of Mathematical Sciences, U.S. Military Academy, West Point, NY + +Ben A. Fusaro, Dept. 
of Mathematical Sciences, Florida State University, Tallahassee, FL

Mario Juncosa, RAND Corporation, Santa Monica, CA

Mark Levinson, Edmonds, WA

Rick Mabry, Wolfram Research, Inc., Champaign, IL

Keith Miller, National Security Agency, Fort Meade, MD

Mike Moody, Mathematics Dept., Harvey Mudd College, Claremont, CA

Peter Olsen, National Security Agency, Fort Meade, MD

Jack Robertson, Mathematics Dept., Georgia College, Milledgeville, GA

John L. Scharf, Carroll College, Helena, MT

Lee Seitelman, Glastonbury, CT

Theodore H. Sweetser III, Jet Propulsion Lab, Pasadena, CA

Robert M. Tardiff, Dept. of Mathematical Sciences, Salisbury State University, Salisbury, MD

Beverly West, Cornell University, Ithaca, NY

Daniel Zwillinger, Zwillinger & Associates, Arlington, MA

# Discussion Groups Problem

Head Judge

Maynard Thompson, Mathematics Dept., Indiana University, Bloomington, IN

Associate Judges

Victor Adamchik, Wolfram Research, Inc., Champaign, IL

Karen Bolinger, Mathematics Dept., Arkansas State University, State University, AR

Jerry Griggs, University of South Carolina, Columbia, SC

John Kobza, Virginia Polytechnic Institute and State University, Blacksburg, VA

Daphne Liu, Dept. of Mathematics and Computer Science, California State University Los Angeles, Los Angeles, CA

Vijay Mehrotra, Onward Inc., Mountain View, CA

Veena Mendiratta, Lucent Technologies, Naperville, IL

Don Miller, Dept. of Mathematics, St. Mary's College, Notre Dame, IN

Cathy Roberts, Northern Arizona University, Flagstaff, AZ

Kathleen M. Shannon, Salisbury State University, Salisbury, MD

Michael Tortorella, Lucent Technologies, NJ

Marie Vanisko, Carroll College, Helena, MT

# Triage Session

# Velociraptor Problem

Head Triage Judge

Theresa M. Sandifer, Southern Connecticut State University, New Haven, CT

Associate Judges

Therese Bennett, Southern Connecticut State University, New Haven, CT

Susanna D. Fishel, Southern Connecticut State University, New Haven, CT

Ross B. Gingrich, Southern Connecticut State University, New Haven, CT

Cynthia B. Gubitose, Western Connecticut State University, Danbury, CT

C. Edward Sandifer, Western Connecticut State University, Danbury, CT

Xiaodi Wang, Western Connecticut State University, Danbury, CT

# Discussion Groups Problem

(all judges from Carroll College, Helena, MT)

Head Triage Judge

Marie Vanisko

Associate Judges

Peter Biskis

Terence Mullen

Jack Oberweiser

Philip Rose

# Sources of the Problems

The Velociraptor Problem was contributed by Jack Robertson, Mathematics Dept., and William Wall, Dept. of Biological and Environmental Sciences, both of Georgia College, Milledgeville, GA. The Discussion Groups Problem was contributed by Don Miller, Dept. of Mathematics and Computer Science, St. Mary's College, Notre Dame, IN.

# Acknowledgments

The MCM was funded this year by the National Security Agency, whose support we deeply appreciate. We thank Dr. Gene Berg of NSA for his coordinating efforts. The MCM is also indebted to INFORMS, SIAM, and the MAA, which provided judges and prizes.

I thank the MCM judges and MCM Board members for their valuable and unflagging efforts. Harvey Mudd College, its Mathematics Dept. staff, and Prof. Borrelli were gracious hosts to the judges.

# Cautions

To the reader of research journals:

Usually a published paper has been presented to an audience, shown to colleagues, rewritten, checked by referees, revised, and edited by a journal editor. Each of the student papers here is the result of undergraduates working on a problem over a weekend; allowing substantial revision by the authors could give a false impression of accomplishment. So these papers are essentially au naturel. Light editing has taken place: minor errors have been corrected, wording has been altered for clarity or economy, and style has been adjusted to that of The UMAP Journal. Please peruse these student efforts in that context.

To the potential MCM Advisor:

It might be overpowering to encounter such output from a weekend of work by a small team of undergraduates, but these solution papers are highly atypical. A team that prepares and participates will have an enriching learning experience, independent of what any other team does.

# Appendix: Successful Participants

KEY:

P = Successful Participation

H = Honorable Mention

M = Meritorious

O = Outstanding (published in this special issue)

A = Velociraptor Problem

B = Discussion Groups Problem

INSTITUTION CITY ADVISOR A B

ALABAMA

University of Alabama Huntsville Boris Kunin P

ALASKA

Univ. of Alaska Fairbanks John P. Lambert O,P

ARIZONA

Northern Arizona Univ. Flagstaff Terence R. Blows P

University of Arizona Tucson Bruce J. Bayly H

CALIFORNIA

Calif. Poly. State Univ. San Luis Obispo Thomas O'Neil M,M

Calif. State Univ. Monterey Bay Richard Brooks P,P

Northridge Gholam-Ali Zakeri P

Harvey Mudd College Claremont David L. Bosley M M

Humboldt State Univ. Arcata Mark Rizzardi P

Loyola Marymount Univ. Los Angeles Thomas M. Zachariah P

Pomona College Claremont Richard Elderkin O P

Sonoma State University Rohnert Park Clement E. Falbo P

COLORADO

Colorado College Colorado Springs Deborah P. Levinson M

Trinidad State Jr. College Trinidad George E. Leone P

Robert A. Philbin P

U.S. Air Force Academy USAF Academy Scott G. Frickenstein M

Harry N. Newton H

Jonathan D. Robinson H

University of Colorado Boulder Anne M. Dougherty M

Bengt Fornberg M

U. of Northern Colorado Greeley William W. Bosch M,P

U. of Southern Colorado Pueblo Bruce N. Lundberg P

INSTITUTION CITY ADVISOR A B
CONNECTICUT
Southern Conn. State U.New HavenRoss B. GingrichH
University of BridgeportBridgeportNatalia RomalisP
Western Conn. State U.DanburyEdward SandiferH
DISTRICT OF COLUMBIA
Georgetown UniversityWashingtonAndrew VogtPP
George Washington Univ.WashingtonDaniel H. UllmanPP
Trinity CollegeWashingtonSuzanne E. SandsPP
FLORIDA
Florida Southern CollegeLakelandWilliam G. AlbrechtP
Florida State UniversityTallahasseeHong WenH
Jacksonville UniversityJacksonvilleRobert A. HollisterHP
D. Neal BoehmkeP
Stetson UniversityDelandLisa O. CoulterPP
Univ. of North FloridaJacksonvillePeter A. BrazaH
GEORGIA
Georgia College & State U.MilledgevilleCraig TurnerPP
Wesleyan CollegeMaconJoseph A. IskraP
IDAHO
Boise State UniversityBoiseAlan R. HausrathH
ILLINOIS
Illinois CollegeJacksonvilleDarrell E. AllgaierP
Illinois Wesleyan Univ.BloomingtonZahia DriciP
Northern Illinois Univ.DeKalbLinda R. SonsH
Olivet Nazarene Univ.BourbonnaisDale K. HathawayP
Wheaton CollegeWheatonPaul IsiharaP
INDIANA
DePauw UniversityGreencastleRichard SmockP,P
Indiana UniversityBloomingtonDaniel P. MakiP
South BendMorteza Shafii-MousaviPH
Rose-Hulman Inst. of Tech.Terre HauteAaron D. KlebanoffO
George BerzsenyiM
Frank YoungP
Saint Mary's CollegeNotre DameJoanne SnowPH
Valparaiso UniversityValparaisoRick GillmanH,P
Wabash CollegeCrawfordsvilleEsteban PoffaldP,P
IOWA
Drake UniversityDes MoinesAlexander F. KleinerP
Graceland CollegeLamoniRonald K. SmithM
Grinnell CollegeGrinnellThomas L. MooreM
Luther CollegeDecorahRuth BergerH
Mt. Mercy CollegeCedar RapidsKent KnoppP
Simpson CollegeIndianolaChristopher SmithP
Murphy WaggonerP
Wartburg CollegeWaverlyLynn J. OlsonP
KENTUCKY
Asbury CollegeWilmoreKenneth P. RietzP
Georgetown CollegeGeorgetownAnn C. HeardP
LOUISIANA
McKinley High SchoolBaton RougeDavid BrantonP
McNeese State Univ.Lake CharlesRobert L. DoucetteP
Northwestern St. Univ.NatchitochesLisa R. GalminasP
MAINE
Bowdoin CollegeBrunswickAdam B. LevyP
Colby CollegeWatervilleJan HollyP
University of MaineOronoGrattan P. MurphyP
MARYLAND
Goucher CollegeBaltimoreMegan DeeneyM
Hood CollegeFrederickJohn BoonP
Betty MayfieldP,P
Loyola CollegeBaltimoreDipa ChoudhuryPH
William D. ReddyH
Mt. St. Mary's CollegeEmmitsburgTheresa A. FrancisH
Fred J. PortierP
Salisbury State UniversitySalisburyMichael BardzellP
Steven M. HetzlerP
MASSACHUSETTS
Harvard UniversityCambridgeHoward GeorgiO
Mass. Inst. of TechnologyCambridgeGilbert StrangP
Smith CollegeNorthamptonRuth HaasPP
Simon's Rock CollegeGreat BarringtonAllen B. AltmanH,P
Univ. of MassachusettsAmherstEdward A. ConnorsPP
Kiwi Graham-EagleP
LowellLou RossiP
Western New England Coll.SpringfieldLorna HanesP
MICHIGAN
Calvin CollegeGrand RapidsGary TalsmaO
Eastern Michigan Univ.YpsilantiChristopher E. HeePP
Hope CollegeHollandRonald Van IwaardenM
Lawrence Tech. Univ.SouthfieldRuth G. FavroP
Howard WhitstonP
Siena Heights CollegeAdrianRick TrujilloP
University of MichiganAnn ArborTava OlsenP
MINNESOTA
Gustavus Adolphus CollegeSt. PeterJohn M. HolteP
Macalester CollegeSt. PaulKarla V. BallmanO
Daniel A. SchwalbeM
University of MinnesotaDuluthPaul BoisenP
Zhuangyi LiuP
MISSOURI
Missouri Southern St. Coll.JoplinPatrick CassensP,P
Northwest Missouri St. U.MaryvilleRussell N. EulerPP
Rockhurst CollegeKansas CityPaula ShorterP
Truman State UniversityKirksvilleSteve SmithPH
Washington UniversitySt. LouisHiro MukaiO,M
Wentworth Military Acad.LexingtonCharles MordanP
MONTANA
Carroll CollegeHelenaTerence J. MullenP
Phil RoseP
Anthony M. SzpilkaP
NEBRASKA
Hastings CollegeHastingsDavid B. CookeH
Nebraska Wesleyan Univ.LincolnP. Gavin LaRoseP,P
NEVADA
Sierra Nevada CollegeIncline VillageSteve EllsworthP
NEW JERSEY
Camden County CollegeBlackwoodPenny L. BrowerP
Allison SuttonP
NEW YORK
Great Neck South H.S.Great NeckAlbert CavallaroP
Hofstra UniversityHempsteadRaymond N. GreenwellP
Ithaca CollegeIthacaJames E. ConklinHM
Nazareth CollegeRochesterNelson G. RichP
Queens College, CUNYFlushingAri GrossP
St. Bonaventure Univ.St. BonaventureFrancis C. LearyP
Albert G. WhiteP
St. John Fisher CollegeRochesterDaniel CassP
U.S. Military AcademyWest PointErik BolltH
Kellie SimonH
Wells CollegeAuroraCarol C. ShilepskyP
Westchester Comm. Coll.ValhallaNeil BasescuPP
Rowan LindleyP,P
Yeshiva CollegeNew YorkYakov KarpishpanP
NORTH CAROLINA
Appalachian State Univ.BooneHolly P. HirstH
Duke UniversityDurhamDavid P. KrainesM
N.C. School of Science & Math.DurhamDot DoyleM,H
North Carolina St. Univ.RaleighRobert T. RamsayM,P
Univ. of North CarolinaPembrokeRaymond E. LeeP
Wake Forest UniversityWinston-SalemEdward AllenM
Stephen B. RobinsonM
Western Carolina Univ.CullowheePaul BrandtH
Jeff A. GrahamP
OHIO
College of WoosterWoosterMatt BrahmH
Hiram CollegeHiramLarry BeckerM
James R. CaseP
Brad GubserH
Kenyon CollegeGambierBrian D. JonesPM
Miami UniversityOxfordDouglas E. WardP
University of DaytonDaytonRalph C. SteinlageP
Xavier UniversityCincinnatiTheresa BrightP
Richard J. PulskampH
Youngstown St. Univ.YoungstownJ. Douglas FairesMP
Paul MullinsHH
OKLAHOMA
Oklahoma State Univ.StillwaterJohn E. WolfeHH
Southeastern Okla. St. U.DurantJohn M. McArthurP
Karla OtyP
Southern Nazarene UniversityBethanyPhilip CrowP
OREGON
Eastern Oregon St. Coll.LaGrandeMark R. ParkerM
Norris PreyerHH
Holly S. ZulloM
Lewis & Clark CollegePortlandRobert W. OwensH
Southern Oregon St. Coll.AshlandKemble R. YatesH
PENNSYLVANIA
Allegheny CollegeMeadvilleDavid L. HousmanP
Bucknell UniversityLewisburgDavid FarmerP
Sally KoutsoliotasH
Chatham CollegePittsburghJonathan AronsonP
Angela A. FishmanP
Gannon UniversityErieRafal F. AblamowiczP
Thomas M. McDonaldP
Gettysburg CollegeGettysburgJames P. FinkHM
Lafayette CollegeEastonThomas HillH
Messiah CollegeGranthamDouglas C. PhillippyP
Lamarr C. WidmerP
Penn State Berks CampusReadingLeila Miller-Van WierenP
Douglas M. Van WierenP
Swarthmore CollegeSwarthmoreStephen B. MaurerM
SOUTH CAROLINA
Charleston Southern Univ.CharlestonJeryl JohnsonP
The CitadelCharlestonKanat DurgunP
Coastal Carolina Univ.ConwayPrashant S. SansgiryP
Columbia CollegeColumbiaScott A. SmithP
Francis Marion UniversityFlorenceCatherine A. AbbottP
Midlands Technical CollegeColumbiaRichard BaileyP
John R. LongP
SOUTH DAKOTA
Northern State UniversityAberdeenA.S. ElkhaderP
TENNESSEE
Austin Peay St. Univ.ClarksvilleMark C. GinnPP
Christian Brothers UniversityMemphisCathy W. CarterPP
David Lipscomb Univ.NashvilleGary C. HallM
Mark A. MillerP
TEXAS
Abilene Christian Univ.AbileneDavid HendricksP
Rice UniversityHoustonDouglas W. MooreP
Southwestern UniversityGeorgetownTherese N. SheltonP
Trinity UniversitySan AntonioDiane G. SaphireM,P
University of HoustonHoustonBarbara Lee KeyfitzP
University of Texas at AustinAustinMike OehrtmanHH
U. of Texas-Pan AmericanEdinburgRoger A. KnobelPP
UTAH
University of UtahSalt Lake CityChris JohnsonP
Don H. TuckerP,P
Utah State UniversityLoganMichael C. MinnetteP
VERMONT
Johnson State CollegeJohnsonGlenn D. SproulH,P
Norwich UniversityNorthfieldLeonard C. GamblerP
VIRGINIA
College of William & MaryWilliamsburgLarry M. LeemisH
Eastern Mennonite Univ.HarrisonburgJohn L. HorstM
James Madison Univ.HarrisonburgJames S. SochackiH
Roanoke CollegeSalemChris LeeP
Roland B. MintonP
Thomas Jefferson H.S. for Science & TechnologyAlexandriaPatricia GabrielP
University of RichmondRichmondJames DavisM
Virginia Western Comm. Coll.RoanokeRuth ShermanPP
WASHINGTON
Pacific Lutheran Univ.TacomaRachid BenkhaltiP
Seattle Pacific UniversitySeattleSteven D. JohnsonM
Univ. of Puget SoundTacomaRobert A. BeezerM
Perry FizzanoH
Martin JacksonP
John RiegseckerP
Western Washington U.BellinghamIgor AverbakhMH
Saim UralMP
WISCONSIN
Beloit CollegeBeloitPaul J. CampbellP,P
Carroll CollegeWaukeshaDennis MickP
John SymmsP
Edgewood CollegeMadisonKen JewellP
Steven PostP
Northcentral Tech. Coll.WausauFrank J. FernandesP
Robert J. HenningP
Ripon CollegeRiponRobert FragaPP
Univ. of WisconsinPlattevilleClement T. JeskeP
John A. KrogmanP
Sheryl WillsP
Sheela Yadav-OnleyP
Stevens PointNorm D. CuretP
Wisc. Lutheran Coll.MilwaukeeMarvin C. PapenfussH
AUSTRALIA
U. of South QueenslandToowoomba, Qld.Christopher J. HarmanM
Tony RobertsH
CANADA
Brandon UniversityBrandon, Man.Doug PickeringM
Earl Haig Secondary SchlNorth York, Ont.John CaranciPP
Memorial U. of NfldSt. John's, Nfld.Andy FosterP
University of CalgaryCalgary, Alb.David R. WestbrookP,P
Univ. of SaskatchewanSaskatoon, Sask.James A. BrookeH
Raj SrinivasanM
University of TorontoToronto, Ont.Nicholas A. DerzkoPO
Univ. of Western OntarioLondon, Ont.Peter H. PooleH
York UniversityNorth York, Ont.Neal MadrasP,P
Anthony SzetoP,P
CHINA
Anhui UniversityHefei, AnhuiYang ShangjunP
Zeng JianjunP
Beijing Inst. of Tech.BeijingBao Zhu GuoH
Cui Xiao DiP
Beijing Normal Univ.BeijingZhang JilinH
Beijing Union UniversityBeijingRen KaiLongP
Wang XinfengH
Zeng QingliP
Beijing Univ. of Aero.BeijingLi WeiguoMP
China Pharmaceutical Univ.NanjingQiu JiaxueP
Yang JinghuaP
Chongqing UniversityChongqing, SichuanLi FuH
Gong QuH
He ZhongshiP
Liu QiongsenP
Dalian Univ. of TechnologyDalian, LiaoningYu Hong-QuanP
He Ming-FengMP
Da Tong High SchoolShanghaiGong ChanganH
E. China U. of Sci. & Tech.ShanghaiShao NianciM
Lu XiwenO,H
Lu YuanhongH
Experimental H.S. of Beijing Fudan UniversityBeijing ShanghaiZhang JilinM
Cao YuanH
Tan YongjiH
Liao You WeiPP
Harbin Inst. of Tech.Harbin, HeilongjiangShang ShoutingH,P
Wang YongH,H
Hebei Institute of Tech.Tangshan, HebeiLiu BaoxiangM
Wan XinghuoM
Huazhong U. of Sci. & Tech.Wuhan, HubeiShi Bao-changP
Qi HuanH
Gao JianP
He NanzhongP
Info. & Eng'ng. Institute Jilin Inst. of TechnologyZhengzhou, Henan Changchun, JilinHan ZhonggengH
Dong XiaogangH
Gao TianP
Sun ChangchunP
Xu YunhuiH
Jilin UniversityChangchun, JilinLiu QinghuaiP
Lu XianruiH
Ma FumingP
Xue Yin JingP
Jilin University of Techn.Changchun, JilinZhang KuiyuanH
Fange PeichenH
Jinan UniversityGuangzhou, GuangdongFan SuohaiP,P
Ye Shi QiP
Nanjing U. of Sci. & Tech.Nanjing, JiangsuZhao Chong GaoH
Nankai UniversityTianjinHuang WuqunP
Ruan JishouM
Zhou XingWeiP
National U. of Defence Tech.Chang Sha, HunanWu MengDaM
Cheng LiZhiM
Peking UniversityBeijingHu XiaodongPH
Huang ShengHM
Shandong UniversityBeijingQu ChunjiangP
Long HepingH
Yu TiamH
Cui YuquanP
Shanghai Jiaotong UniversityShanghaiChu WensongP
Chen ZhiheP
Huang JianguoP
Zhou GangH
Shanghai Normal Univ.ShanghaiGuo ShenghuanHH
South China Univ. of Tech.Guangzhou, GuangdongZhu FengfengH
Fu HongzuoH
Hao ZhifengM
Chang ZhihuaH
Southeast UniversityNanjing, JiangsuHuang JunH
Xu LiangM
Sun ZhizhongP
Zhu DaoyuanM
Southwest Jiaotong UniversityChengdu, SichuanDeng PingP
Li TianruiH
Yuan JianP
Zhao LianwenP
Tsinghua UniversityBeijingWang SiqunM,H
U. of Sci. & Tech. of ChinaHefei, AnhuiXue JiangengH
Yu TianyueM
Wang XinmaoH
Yu FengM
Wuhan U. of Hydraulic & Engin.Wuhan, HubeiPeng ZhuzengM
Xian Jiaotong UniversityXian, ShaanxiDai YongHongH
Dong TianxinH
He XiaoliangH
Zhou YicangH
Xidian UniversityXian, ShaanxiHu YupuH
Li YoumingH
Mao YongcaiM
Zhejiang UniversityHangzhouFang DaoyuanMP
ZhengZhou Univ. of Tech.ZhengZhou, HenanJia JunguoH
Wang ShubinH
Zhongshan UniversityGuangzhou, GuangdongTang MengxiH
Wang Yuan ShiPH
Xu Liu JunP
Yu Jin HuaP
HONG KONG
Hong Kong Baptist Univ.Kowloon TongTong Chong Sze Shiu Wai CheePH
IRELAND
Trinity College DublinDublinTimothy G. Murphy James C. SextonHM
Univ. College CorkCorkPatrick Fitzpatrick Martin StynesHM
Gareth ThomasMM
Univ. College GalwayGalwayMartin MeerePP
University of LimerickLimerickGordon S. LessellsPP
LITHUANIA
Vilnius UniversityVilniusRicardas Kudzma Algirdas ZabulionisPH
SOUTH AFRICA
University of StellenboschStellenboschThomas P. Dreyer Juan VuurenPP

The editor wishes to thank Yeap Lay May and Chen Rong of Beloit College for their help with Chinese names.

# Pursuit-Evasion Games in the Late Cretaceous

Edward L. Hamilton

Shawn A. Menninga

David Tong

Calvin College

Grand Rapids, MI 49546

{ehamil28,smenni23,dtong23}@calvin.edu

Advisor: Gary W. Talsma

# Summary

Using techniques from differential game theory, we model the velociraptor hunting problem by means of a semi-discrete computer algorithm.

By defining predator and prey behaviors in terms of simple, intuitive principles, we identify a set of strategies designed to counter one another, such that no one pure predator strategy or prey strategy defines an optimal behavior pattern. Instead, the ideal strategy switches between two or more pure strategies in an essentially unpredictable, or protean, manner. The resulting optimum behaviors show a mixture of feints, bluffs, and true turns for the thescelosaurs, and a mixture of predictive interception and simple pursuit for the velociraptors.

Finally, using these strategies, we demonstrate a conclusive advantage for velociraptors hunting in pairs over velociraptors hunting in isolation.

# Introduction

We describe hunting strategies for a velociraptor and flight strategies for its prey using a computational semi-discrete representation of a differential game of pursuit and evasion.

First, we review the formalism of traditional non-differential game theory and the extension of its principles into the analysis of differential systems, taking careful note of the unique aspects of the velociraptor problem.

Second, we propose a minimal set of assumptions required to reduce the analytical game to a numerical time-iterated computer algorithm, and submit a small set of intuitively simple strategies for each participant to execute.
+ +Third, we examine the implications of both the assumptions of perfect and imperfect information for the optimization of strategies for a single pursuer and single evader and then extend these conclusions to more pursuers. + +Finally, we comment on the inherent limitations of this model and offer our evaluation. We conclude that for the game of imperfect information there exists no pure strategy for the prey, but that the best alternative is to behave in an unpredictable fashion with respect to the alternation of turns and feints. The predator, however, has a clearly dominant strategy of compromising between a predictive algorithm and simple pursuit. + +# Mathematical Formalization in Game-Theoretic Terms + +Contests of interception and avoidance between a predator and its prey fall into the broad and diverse category of linear differential games. In a differential game of pursuit and evasion, two or more opposing players attempt either to maximize or to minimize their separation, subject to certain limitations on their motion. + +Unlike the games of classical game theory, differential games involve the application of methodology from differential equations and make use of continuous fluctuations that define the states and objectives of the players. + +In either a traditional or differential game, players seek to maximize some value, known as the payoff function, by selecting among a set of alternatives, with the payoff to each player determined by some function of all the players' choices. In a zero-sum game, all players are in direct competition with one another, such that the objectives expressed by their payoff functions are exactly opposite. Games of pursuit and evasion fall within this classification; the pursuer seeks to minimize the distance between itself and the evader, and the evader seeks to maximize this distance. 

In the case of the velociraptor-thescelosaur pursuit-evasion game, the payoff function is a simple binary function: The only relevant outcomes are capture or escape. Such games are referred to as games of kind, as opposed to games of degree, in which the payoff can take on a larger set of possible values. In some cases, it may be helpful to consider a game of kind as being embedded in a game of degree, with the time to capture (or the separation at evasion) as the payoff function. Although this is not necessary in purely deterministic trials, the average time to capture for a particular strategy pair may provide a useful statistical measure of the efficacy of a strategy.

There are two classes of variables in a differential game: state variables, which specify the complete configuration of the entire system at any given point in time, and control variables, which participants in the game use to alter the state function in ways favorable to their attempts to maximize their own payoff function. Common state variables in traditional pursuit-evasion games are the spatial coordinates of the participants; control variables may include an angle of maximum turn, or acceleration vectors.

Games in which all players have access to a complete set of state variables at any given point in time are known as games of perfect information; games in which this is not true are referred to as games of imperfect information. Typically, games of imperfect information lack exact analytical methods of solution. Thus, one of the best methods of evaluating such games is to use a discrete model, which can be implemented in terms of a simple computational algorithm. Unfortunately, purely discrete models often run the risk of obscuring essential details of the system that may depend on continuity of values without quantization.
A reasonable compromise that still permits a computational method of solution is a semi-discrete method, in which time is iterated discretely, but spatial values are allowed to extend over the entire domain of real numbers [Isaacs 1967, 42]. We chose to implement the raptor hunting game as a semi-discrete computational algorithm. + +The velociraptor problem embodies one of the most interesting situations in a two-player pursuit-evasion game. If the maximum velocity of the evader is greater than the maximum velocity of the pursuer, then the evader has an optimal strategy of moving directly away from the pursuer at maximal velocity and will always successfully evade capture. Similarly, if the pursuer is superior in both speed and maneuverability, then the pursuer has an optimal strategy of moving directly toward the evader and is guaranteed of a successful capture. The case in which the pursuer is swifter while the evader is more agile, however, has no trivial solution and may require complex or nondeterministic strategies for optimal play. + +# Assumptions and Development of the Model + +# Initial Configuration + +Prey animals generally ignore predators until they have moved within a well-defined flight radius; when the distance between themselves and a predator falls below this value, a flight response is triggered. We are given that the flight distance of a thescelosaurus is less than $50\mathrm{m}$ and greater than $15\mathrm{m}$ , such that the moment a velociraptor is detected within this distance, the thescelosaur will immediately flee. Similarly, we assume that the raptor has been deliberately stalking the thescelosaur with the intent of capturing it; thus, the predator will be in a set crouch position and will pursue at the first sign of flight. Although the thescelosaur may be startled or surprised, it initiates the flight and chase sequence, and thus both it and the raptor begin moving (nearly) simultaneously. 
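
The initial configuration can be sketched in code. This is our own minimal illustration, not the authors' program: the function and variable names are invented, and only the 15 m and 50 m flight-distance bounds come from the problem statement.

```python
import math
import random

FLIGHT_MIN, FLIGHT_MAX = 15.0, 50.0  # flight-distance bounds (m) from the problem

def initial_state(separation=FLIGHT_MIN):
    """Place the prey at the origin with a random facing and the predator
    `separation` metres away on a random bearing, crouched facing the prey.
    Starting near the 15-m minimum is where strategy differences matter most."""
    assert FLIGHT_MIN <= separation < FLIGHT_MAX
    bearing = random.uniform(0.0, 2.0 * math.pi)
    prey = {"pos": (0.0, 0.0), "heading": random.uniform(0.0, 2.0 * math.pi)}
    predator = {"pos": (separation * math.cos(bearing),
                        separation * math.sin(bearing)),
                "heading": bearing + math.pi}  # facing back toward the prey
    return predator, prey
```

For the two-predator case, a second predator placed at `bearing + math.pi` would match the assumption of nearly opposite sides of the 15-m circle.
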
+ +Our model allows us to specify the initial state either through selection of locations and facings for each predator and prey, or by implementing a simple probabilistic stalking model to determine the initial separation. In most cases, we set the starting separation of the predator and prey close to the minimum possible value, given to be $15\mathrm{m}$ . This is because, for most large separations, the optimal strategy of the prey is to move directly away from the predator at maximum speed until the predator is very close. Thus, differences in strategic behavior do not typically become manifest until the prey is forced to deviate from a linear path due to the proximity of the predator. + +In the case of multiple predators, we assume either that they have stalked out nearly opposite sides of a circle of radius $15\mathrm{m}$ or that they start from approximately the same position. + +# Sensory Acuity + +For the simplest level of approximation, it is sufficient to assume that all participants in a pursuit-evasion game have complete and instantaneous access to the state function for all times, such that they are involved in a game of perfect information. However, this is a relatively inaccurate assumption for most real physical systems. The ability to estimate distances and directions is always subject to random error, and vision is limited to a field of sight subtending an angle of less than $180^{\circ}$ . Being forced to rely on other senses, usually hearing, to track a moving object is less than ideal, and can lead to sizable error in the assessment of distances in particular. These limitations are particularly problematic for predators, who have a narrowly focused forward field of vision and depend heavily on being able to estimate not only the present but the future locations of their prey. + +Our model functions under assumptions of perfect information but with the introduction of both random and systematic error in sensory perception. 
The former is enforced by multiplying the magnitude and angle of the displacement vector between predator and prey each by a different random number between 0.95 and 1.05 before passing it to the routine governing the selection of control variables. Additionally, uncertainty in these measurements due to the limited field of sight (systematic error) is estimated using a linearly increasing statistical spread proportional to the angular displacement between the direction of sight and the observed object. The distance and angle are each multiplied by a different random number in the range $1.00 \pm s\theta/\pi$, where $s = 0.05$ for angular displacement and 0.25 for linear displacement. Finally, to emphasize the importance of visual contact, these factors are further increased for the predator by 50% to $s = 0.075$ and 0.375, respectively.

A related issue is reaction time. In the perfect information case, knowledge of the state vector is imparted instantly. A more realistic assumption is that these data become available for use only after they have been psychologically processed by the brain. Ordinarily, the process of updating awareness of the environment could be considered to be immediate; but in a contest measured in hundredths of a second, to deny the prey the ability to have an advantage over the predator in its knowledge of its own actions would obscure an essential element of the model. To prevent either dinosaur from being able to react instantaneously to changes in its environment, we delay information about the state variables of other dinosaurs by 0.05 s for the raptor and 0.037 s for the thescelosaur.

# Physical Constraints on Motion

The center of mass of each of the dinosaurs is assumed to be a point particle moving in a two-dimensional plane in accordance with Newtonian mechanics. Two available options in altering the motion of a point (subject to Newtonian kinematic equations) are to impose either a linear or an angular acceleration.
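
A minimal sketch of one semi-discrete update using these two controls, assuming a fixed 0.01-s time step; the names and step size are our own, since the paper does not publish its code.

```python
import math

def step(pos, heading, speed, d_speed, d_theta, dt=0.01):
    """Advance one animal by a single discrete time step. The two control
    variables are a change in speed (linear acceleration times dt) and a
    change in heading (angular). Positions stay real-valued, so the scheme
    is semi-discrete: time is quantized, space is not."""
    speed += d_speed      # assumed pre-clipped to the animal's acceleration limit
    heading += d_theta    # assumed pre-clipped to the maximum turn rate
    x, y = pos
    return ((x + speed * math.cos(heading) * dt,
             y + speed * math.sin(heading) * dt),
            heading, speed)
```

Iterating this function for each animal, with the controls chosen by its strategy routine, is all the simulation loop requires.
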
+ +The data provided by biomechanical analysis indicate that the maximum turn radius of the raptor was $1.5\mathrm{m}$ and that of the thescelosaur was only $0.5\mathrm{m}$ . (We understand the problem statement to mean that these values apply at top speed, even though this leads to the thescelosaur being capable of a 32-g turn.) This suggests, in each case, that a maximum possible angular displacement in any given time interval may be defined as $d\theta = a(dt) / v$ , where $v$ is the current velocity, $dt$ is the length of the time interval, and $a$ is the centripetal acceleration, where $a = v^2 /r$ , and $v$ and $r$ here are the speed and maximum radius of curvature at maximum velocity. + +Without associated data on the linear acceleration of the dinosaurs, it is necessary to invoke an argument by analogy. The African cheetah is a modern predator filling an ecological role similar to that of the raptor. Cheetahs also share many of the same strategic attributes with velociraptors (high speed, limited endurance, and a turning radius inferior to that of their primary prey). One might reasonably assume that the linear acceleration capabilities of the raptor would have been similar. A cheetah can accelerate to peak velocity (over $90\mathrm{km / hr}$ ) in about 2 s. However, the velociraptor has a lower maximum speed, and is also lighter than the cheetah. This suggests that the acceleration for a raptor may be somewhat greater. Operating under the assumption that the velociraptor possesses the same relative acceleration ability compared to its body size, and recognizing that the force exerted by muscle tissue is proportional to the square of the linear dimensions of the body, while body mass is equal to the cube, a factor of $(2 / 3) / (1.25)^{2 / 3} \approx .57$ is not an unreasonable correction to the acceleration of the lighter raptor (where 1.25 is an approximate ratio of the mass of a cheetah to the mass of a raptor). 
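The turning constraint above is easy to make concrete. A minimal Python sketch (the paper's simulator was C++; the function and constant names here are our own), in which the lateral-acceleration budget $a = v^2/r$ is held fixed at its top-speed value, as the text assumes, so that the permissible heading change per time step grows as the animal slows:

```python
# Problem-given top speeds (m/s) and minimum turning radii at top speed (m).
V_RAPTOR, R_RAPTOR = 16.7, 1.5
V_THESC, R_THESC = 13.9, 0.5

def max_heading_change(v_now, v_top, r_top, dt):
    """Maximum angular displacement (rad) in a time step dt:
    d_theta = a*dt / v_now, with a = v_top**2 / r_top held fixed at the
    centripetal acceleration available at top speed."""
    a = v_top ** 2 / r_top
    return a * dt / max(v_now, 1e-9)   # guard against division by zero

def limit_turn(requested, v_now, v_top, r_top, dt):
    """Clamp a requested heading change to the physically possible range."""
    cap = max_heading_change(v_now, v_top, r_top, dt)
    return max(-cap, min(cap, requested))

# e.g., the heading-change cap for a raptor at top speed, over a 0.01-s step:
cap = max_heading_change(V_RAPTOR, V_RAPTOR, R_RAPTOR, 0.01)   # ~0.11 rad
```

At top speed the cap reduces to $v\,dt/r$: about 0.11 rad per 0.01-s step for the raptor, versus about 0.28 rad for the tighter-turning thescelosaur.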
# Strategy

An animal might be expected to behave in accordance with straightforward heuristic principles, and each of the strategies that we tested reflects such a principle. For the sake of the simulation, we assume each raptor and each thescelosaur must adopt only one strategy at the start of a pursuit-evasion game, although we later consider the advantages that could be gained by switching strategies during the course of a chase.

# Predator Strategies

- Strategy Predator-0 (default): Move directly toward the present location of the prey at the maximum possible speed without deceleration (i.e., speed cannot be decreased, even if facing away from the prey).
- Strategy Predator-1 (predictive): Estimate the future position of the prey by assuming that it will continue moving in its current direction at the present velocity, and plot an intercept course.
- Strategy Predator-2 (half-predictive): Take the average of the angles indicated by strategy Predator-0 and strategy Predator-1.

# Prey Strategies

- Strategy Prey-0 (default): Move directly away from the present location of the predator at the maximum possible speed without deceleration (i.e., speed cannot be decreased, even if facing toward the predator).
- Strategy Prey-1 (constant turn): Similar to Prey-0, but every time the predator comes within $1.5 \mathrm{~m}$, make a sharp $90^{\circ}$ turn, and then go straight again.
- Strategy Prey-2 (variable turn): Similar to Prey-1, but instead of turning at $90^{\circ}$, turn constantly at a rate proportional to the distance to the predator.
- Strategy Prey-3 (feint and bluff): Similar to Prey-1, but instead of going on straight after the $90^{\circ}$ turn, turn back at maximum rate by $270^{\circ}$.
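The three predator heuristics differ only in the heading they produce. A Python sketch (our own illustration, not the authors' C++ code): Predator-0 heads at the prey's present position, Predator-1 solves the constant-velocity intercept triangle, and Predator-2 averages the two angles.

```python
import math

def pursuit_heading(pred, prey):
    """Predator-0: head straight for the prey's current position."""
    return math.atan2(prey[1] - pred[1], prey[0] - pred[0])

def intercept_heading(pred, prey, prey_vel, v_pred):
    """Predator-1: aim at where the prey will be, assuming it keeps its
    current velocity.  Solve |prey + t*prey_vel - pred| = v_pred*t for t."""
    px, py = prey[0] - pred[0], prey[1] - pred[1]
    vx, vy = prey_vel
    a = vx * vx + vy * vy - v_pred * v_pred   # < 0 when the predator is faster
    b = 2.0 * (px * vx + py * vy)
    c = px * px + py * py
    disc = b * b - 4.0 * a * c
    if a >= 0.0 or disc < 0.0:                # no guaranteed intercept: fall back
        return pursuit_heading(pred, prey)
    t = (-b - math.sqrt(disc)) / (2.0 * a)    # the positive root
    return math.atan2(py + vy * t, px + vx * t)

def half_predictive_heading(pred, prey, prey_vel, v_pred):
    """Predator-2: average of the Predator-0 and Predator-1 headings
    (vector average, to avoid trouble at the +/-pi wrap-around)."""
    h0 = pursuit_heading(pred, prey)
    h1 = intercept_heading(pred, prey, prey_vel, v_pred)
    return math.atan2(math.sin(h0) + math.sin(h1),
                      math.cos(h0) + math.cos(h1))
```

For example, against prey 20 m due east fleeing due north, Predator-0 heads due east while Predator-1 leads the target by aiming at the point the prey will reach when the closing triangle closes.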
# Capture and Non-Capturability

Conventionally, in analytical studies of differential games of pursuit and evasion, the condition for capture is just the decrease of the distance between the pursuer and evader below some set value, the radius of capture. In our model, we permit capture more realistically by one of two mechanisms.

- If the raptor is within one meter of the thescelosaur, it may attempt to "lunge" and score a critical strike, thrusting the parts of its body (jaws and talons) employed to injure or subdue prey forward with a sudden burst of acceleration. This is represented by a probabilistic capture condition consisting of two factors, one varying linearly with the distance over the range 0.5 to $1.0\mathrm{m}$, and one varying linearly with the angle over the range 0 to $\pi$ radians. If a random number generated between 0 and 1 is less than the product of these two factors, then capture occurs.

- Alternatively, if the raptor and the thescelosaur move within $0.5\mathrm{m}$ of one another, they physically collide, with invariably catastrophic consequences for the prey. We choose not to model the possibility that the raptor may be physically injured in this case.

If capture has not been attained within 15 s, the simulation ends, and the thescelosaur is assumed to escape, as the raptor is forced to stop and rest. Although the raptor may not spend all of the 15 s at top speed, the process of accelerating and decelerating is at least as energetically demanding as running at a constant speed.

# The Simulator

The simulator consists of two main portions: the outer loop, in charge of updating the game status at each iteration, and the movement generators. The latter, one each for the predator(s) and prey, implement the strategies. This is performed in five steps:

1. Data Acquisition Phase: The bearing and distance to each opponent are determined and recorded.
2.
Data Manipulation Phase: Each of the above values is randomly perturbed to simulate inaccuracies in sensory acuity.
3. Strategizing Phase: Based on the data and the chosen strategy, a "best-case" move is chosen.
4. Limitation Phase: Physical limitations are imposed on the chosen move in accordance with the associated capabilities of the animal.
5. Movement Phase: This final value is passed back to the outer loop for implementation.

If at any point the outer loop determines that capture has occurred, or that the 15-s time limit has expired, the simulation is halted and the final status is reported.

![](images/fc4bd9c8c2df7b862192695a44466af08f67c1a37162a57a32798582ba48c226.jpg)
Figure 1. The simulator window.

The simulator was written in ANSI C++ (with the Hewlett-Packard standard template library). The program uses an X Windows (X11R5) display interface to graphically track the positions of the predator(s) and prey. The code was compiled and linked using the Free Software Foundation's freely available GNU C++ compiler and was executed on a Sun Microsystems SparcStation 5 workstation. Source code is available via e-mail to smenni23@calvin.edu.

# Results

# One Raptor

We studied the case of a solitary hunter in detail, both under the assumption of perfect information and under the restrictions on sensory acuity defined above. Under the first assumption, the outcome of every initial position is deterministic and repeatable. Thus every strategy by the predator is either successful, in that it results in the capture of the prey, or unsuccessful, because it does not lead to a capture within the allotted time. This leads to a natural formulation in terms of a binary matrix (Table 1), where 1 represents capture and 0 denotes an escape.

Table 1. Capture in a game of perfect information.
|            | Prey-0 | Prey-1 | Prey-2 | Prey-3 |
|------------|--------|--------|--------|--------|
| Predator-0 | 1      | 0      | 0      | 1      |
| Predator-1 | 1      | 1      | 1      | 0      |
| Predator-2 | 1      | 1      | 1      | 1      |
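Read as a zero-sum matrix game, with the raptor maximizing the entries of Table 1 and the thescelosaur minimizing them, these outcomes can be checked mechanically for a pure-strategy saddle point. A small Python sketch (our own illustration):

```python
def pure_saddle_value(A):
    """Return the game value if the matrix game A (row player maximizing)
    has a pure-strategy saddle point, else None."""
    maximin = max(min(row) for row in A)                                 # row guarantee
    minimax = min(max(row[j] for row in A) for j in range(len(A[0])))    # column guarantee
    return maximin if maximin == minimax else None

# Table 1: rows Predator-0..2, columns Prey-0..3.
table1 = [[1, 0, 0, 1],
          [1, 1, 1, 0],
          [1, 1, 1, 1]]
```

The full game has saddle value 1 (Predator-2's row is all ones), while the subgame of Predator-0/1 against Prey-1/3 has maximin 0 but minimax 1, i.e., no saddle point: exactly the intransitivity the text goes on to discuss.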
The intransitivity of the relative ranking of Predator-0 and Predator-1 (or Prey-1, Prey-2, and Prey-3) merits further comment. This corresponds to a situation in which no optimum strategy exists for the predator (or the prey) with respect to the reduced game containing only those choices. Due to the tighter turning radius of the thescelosaur, the only way the raptor can catch it as it moves through its sharp turn is by anticipating its future position. However, this leads to the potential that the thescelosaur may switch to Prey-3, a devious strategy whereby the predator is forced to overcommit as a result of a false turn by the prey, something to which the less sophisticated strategy Predator-0 was immune. The same analysis applies in reverse to the thescelosaur; for any one strategy she commits to consistently, the raptor can switch to a new strategy capable of beating her every time. Thus, if either player insists on playing with a single deterministic strategy, the other can take advantage of it.

The most obvious option is to switch to a random selection of bluffs and real turns. This would at least guarantee that the other dinosaur could not anticipate the tactic consistently and preemptively account for it. Fortunately, in a game of perfect information, the raptor has another option: Predator-2, a deterministic strategy that combines the other two in a nonrandom way. This same alternative would not be available if the raptor suffered from a reaction time delay, as is the case with the imperfect-information variant described next.

With the introduction of effects resulting in imperfect information, the deterministic outcomes are replaced by probabilistic outcome distributions. Some strategies will still virtually always succeed in defeating others, but in many cases the outcome will be sufficiently random that it can be treated only by statistical methods.
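The statistical treatment in question is plain Monte Carlo: run many randomized chases per strategy pair and report the kill fraction. A Python sketch, in which the `run_chase` argument is a hypothetical stand-in for the full simulator:

```python
import random

def estimate_capture_prob(run_chase, trials=10000, seed=1):
    """Fraction of randomized trials ending in capture.  run_chase takes a
    random.Random instance and returns True iff the raptor makes the kill."""
    rng = random.Random(seed)
    return sum(bool(run_chase(rng)) for _ in range(trials)) / trials

# Stand-in chase: capture with probability 0.7 (purely illustrative).
def toy_chase(rng):
    return rng.random() < 0.7
```

With only 10 trials per cell, the standard error of a 50% entry is about 16 percentage points, which is why the tabulated values should be read as rough approximations.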
To get some idea of the relative probabilities in the game of imperfect information, we have run 10 tests at each pair of strategies, with random variation in the starting configuration. The matrix values now reported are the probability of a kill as determined by the above testing (Table 2). These values should be taken as rough approximations only. + +Table 2. Capture in a game of imperfect information. + +
|            | Prey-0 | Prey-1 | Prey-2 | Prey-3 |
|------------|--------|--------|--------|--------|
| Predator-0 | 100%   | 30%    | 100%   | 70%    |
| Predator-1 | 10%    | 70%    | 90%    | 10%    |
| Predator-2 | 100%   | 90%    | 90%    | 30%    |
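Treating Table 2 as a zero-sum matrix game (entries are capture probabilities; the raptor maximizes, the thescelosaur minimizes), there is no pure saddle point, but the mixed-strategy value can be bracketed by fictitious play, in which each side repeatedly best-responds to the opponent's empirical mixture. A Python sketch (our own illustration, not part of the authors' simulator):

```python
# Table 2: rows Predator-0..2, columns Prey-0..3, entries P(capture).
table2 = [[1.0, 0.3, 1.0, 0.7],
          [0.1, 0.7, 0.9, 0.1],
          [1.0, 0.9, 0.9, 0.3]]

def fictitious_play(A, iters=50000):
    """Return (lower, upper) bounds on the value of the zero-sum game A
    (row player maximizing), from the two players' empirical mixtures."""
    m, n = len(A), len(A[0])
    row_cum = [0.0] * m   # payoff of each pure row vs. the column history
    col_cum = [0.0] * n   # payoff of each pure column vs. the row history
    for _ in range(iters):
        i = max(range(m), key=lambda k: row_cum[k])   # row best response
        j = min(range(n), key=lambda k: col_cum[k])   # column best response
        for k in range(m):
            row_cum[k] += A[k][j]
        for k in range(n):
            col_cum[k] += A[i][k]
    return min(col_cum) / iters, max(row_cum) / iters
```

Solving the game exactly (Prey-2 is dominated by Prey-1, after which the raptor never uses Predator-1) gives the value 0.54, with the raptor mixing Predator-0 and Predator-2 in the ratio 3:2 and the thescelosaur mixing Prey-1 and Prey-3 in the ratio 2:3; the bounds returned above bracket this value.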
Many of the essential features of the game of perfect information remain: Predator-2 is still an excellent strategy, and the intransitivity between Predator-0 and Predator-1 with respect to Prey-0 and Prey-1 is also evident. However, Predator-2 is no longer a strictly dominant strategy, and Prey-2 loses much of its merit. In purely statistical terms, the optimal strategy is not to select any one pattern of behavior, but to mix them randomly. This game in normal form has no saddle-point in pure strategies. By defining the probabilistic outcome in terms of a payoff function, however, an optimum randomized behavior defining a saddle-point in mixed strategies could probably be found.

# Two Raptors

An identical analysis can be carried out for the two-raptor case (Table 3).

Table 3. Capture with two predators.
|               | Prey-0 | Prey-1 | Prey-2 | Prey-3 |
|---------------|--------|--------|--------|--------|
| Predators 0,0 | 90%    | 50%    | 100%   | 40%    |
| Predators 1,1 | 90%    | 100%   | 100%   | 100%   |
| Predators 2,2 | 100%   | 100%   | 100%   | 70%    |
| Predators 0,1 | 100%   | 100%   | 100%   | 70%    |
In all but two cases, the two-raptor strategy is superior to the one-raptor strategy, assuming both raptors behave the same way. If one functions in a predictive fashion (Predator-1), while the other accepts the default strategy (Predator-0), the outcome is remarkably similar to two "half-predictive" (Predator-2) raptors, suggesting that specialization of hunting roles might offer considerable advantages. (Examine, for example, the difference between the outcome of a 0,0 strategy vs. the 0,1 strategy.) These results suggest that velociraptors would have strong incentives to hunt in groups, particularly if less experienced raptors (using Predator-0) could cooperate with more experienced raptors (using Predator-1). The total number of kills by mixing two "0" raptors with two "1" raptors is not only greater than for all four hunting in isolation, it is also greater than the total kills with non-experience-mixed pairings.

# Weaknesses

- Because of the lack of direct evidence for the biomechanical properties of dinosaurs, the numerical values for certain parameters, particularly the maximum linear acceleration and the reaction time delay, are matters of speculation, and subject to legitimate dispute. Arguments by analogy to modern predators are of dubious value, and if the intent is to compare the results of the simulation to the experimentally observed behavior of modern mammalian predators, then using such references as empirical parameters could be a source of positive bias.
- Because of the intrinsic limitations of a semi-discrete methodology, we cannot be assured that we have really found an optimal solution, only that a given solution is optimal with respect to the others considered.
- The strategies for the two-raptor case are unrealistic, in that they assume that the thescelosaur completely ignores the more distant of the two raptors at any time.
- The introduction of a reaction time delay substantially complicates the task of the raptor(s), and leaves them vulnerable to deception by a fairly simple feinting turn. The model lacks the ability to "learn" even repeated behaviors without having a new strategy explicitly designed to counter them incorporated into the text of the program.
- By omitting factors such as terrain, visibility, and obstacles, the model gives considerably more advantage to predators than is the case in nature. This accounts for the unreasonably high capture percentages.

# Conclusion

Our model provides conclusive support for the hypothesis that the raptors would be advantaged by hunting in pairs. Moreover, it demonstrates that there is no one optimum pursuit or evasion strategy, and that the flexibility to switch between techniques would serve to favor both the predator and the prey.

# Appendix: Typical Simulator Output Illustrating Prey Strategies in Progress

While the predator strategies are fairly obvious, the prey strategies may be more difficult to visualize. The following figures illustrate each of the non-default prey strategies in the best possible situation, that is, against the correspondingly weakest predator strategy. Additionally, Figure 4 shows the complications arising from imperfect information. Note, especially as compared to Figure 2, the thescelosaur's tendency to turn too far and the raptor's delayed reaction to the turn.

![](images/93e8db0526b7c04cc5cd4627ed4e0a41711a2de18215e3465449d2bd309cfbd7.jpg)
Figure 2. Prey-1 with perfect information.

![](images/dc5b98fcdd0e975b5c8f76d05c13a814a0b7d117524b7f864027840ba62fd62c.jpg)
Figure 3. Prey-2 with perfect information.

![](images/d30b04244ecaf05a7991c191a2e4e0a9accaa32f74f72d6dc83fcd8f4ac79aed.jpg)
Figure 4. Prey-1 with imperfect information.

![](images/9bfebddeaba3a0ca91894847b5fd40226ce6b3e177e74ce06a102d6335ab8d24.jpg)
Figure 5. Prey-3 with perfect information.
+ +# References + +Crichton, Michael, and Steven Spielberg. 1993. Jurassic Park. Motion picture. Hollywood, CA: Universal Pictures. +Curio, Eberhard. 1976. The Ethology of Predation. New York: Springer-Verlag. +Isaacs, Rufus. 1967. Differential Games. New York: John Wiley and Sons. +Miller, Geoffrey F., and Dave Cliff. 1997a. Co-evolution of pursuit and evasion. I: Biological and game-theoretic foundations. Submitted to Adaptive Behavior. Available via the World Wide Web at http://www.cogs.susx.ac.uk/cgi-bin/htm1cogsreps?csrp311. +1997b. Co-evolution of pursuit and evasion. II: Simulation methods and results. In From Animals to Animats 4: Proceedings of the Fourth International Conference On Simulation of Adaptive Behavior, edited by Patti Maes et al. Cambridge, MA: MIT Press Bradford Books. Available via the World Wide Web at http://www.cogs.susx.ac.uk/users/davec/sab96.ps.Z. +Webb, Paul G. 1986. Locomotion in predator-prey relationships. In *Predator-Prey Relationships*, edited by Martin E. Feder and George V. Lauder, 24-41. Chicago, IL: University of Chicago Press. + +# The Geometry and the Game Theory of Chases + +Charlene S. Ahn +Edward Boas +Benjamin Rahn +Harvard University +Cambridge, MA 02138 + +Advisor: Howard Georgi + +# Introduction + +We investigate the hunting strategies of predators and the fleeing strategies of their prey in a chase of finite time. For the velociraptor and thescelosaur, there is no overwhelming advantage for either animal; the velociraptor is faster, but the thescelosaur is more agile. We find optimal strategies for both the hunter and the hunted, for a velociraptor hunting a thescelosaur, as well as for a pair of velociraptors hunting a thescelosaur. + +This original problem actually has a low probability of having occurred, as fossil remains of the velociraptor have been found only in Mongolia, while fossil remains of the thescelosaur have been found only in the Midwestern region of the United States and Canada [Weishampel et al. 
1990, 270, 500]. However, this model can be useful in the study of a wide range of such problems, simply by varying the parameters. In studying these particular creatures, we may come to understand the tradeoff between speed and maneuverability.

# Assumptions and Preliminary Calculations

We are given that the velociraptor moves at a speed of $v_{v} = 60 \, \mathrm{km/h}$ (16.7 m/s), the thescelosaur moves at a speed of $v_{t} = 50 \, \mathrm{km/h}$ (13.9 m/s), and the velociraptor's hip height is 0.5 m. It is estimated that a velociraptor's turning radius is three times its hip height; thus, the velociraptor can turn in a minimum radius of $r_{v} = 1.5 \, \mathrm{m}$, while the thescelosaur's minimum turning radius is $r_{t} = 0.5 \, \mathrm{m}$. We assume that both always find it more advantageous to turn with this wide radius rather than to decelerate, stop, change direction, and re-accelerate.

Hunts are limited in time, e.g., by the maximum endurance (or patience) of the predator, or by the onset of night or day. In this case, the limit is the pitiful endurance of the otherwise fearsome velociraptor. After a burst of speed for $T = 15$ s, the velociraptor must stop to rest, while the thescelosaur can run for a comparatively unlimited length of time. We make the additional assumption that the velociraptor must rest for more than $T(v_{v} - v_{t}) / v_{t} = 3$ s, i.e., more than the time required for the thescelosaur to run as far as the maximum distance that the velociraptor could close in 15 s. Thus, the velociraptor must catch the thescelosaur in the first 15 s after the thescelosaur senses the velociraptor.

The thescelosaur first detects the velociraptor at a distance $D$, with $15\mathrm{m} < D < 50\mathrm{m}$, while the velociraptor can detect its prey farther than $50\mathrm{m}$ away.
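The derived quantities above follow from simple arithmetic on the given speeds; a quick numerical check (a Python sketch, with the speeds converted exactly from km/h):

```python
V_V = 60.0 / 3.6   # velociraptor speed, m/s (~16.7)
V_T = 50.0 / 3.6   # thescelosaur speed, m/s (~13.9)
T = 15.0           # velociraptor endurance, s

# Minimum rest: the time the prey needs to re-open the largest gap the
# raptor can close in one 15-s sprint.
rest = T * (V_V - V_T) / V_T      # = 3 s exactly, since (60-50)/50 = 0.2
sprint_gap = (V_V - V_T) * T      # ~42 m, the gap closable in one sprint
```

The ~42 m figure reappears below as the threshold under which an undetected velociraptor is guaranteed a shot at its prey.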
We assume that $D$ is not dependent upon angle; i.e., if each of a pair of velociraptors approaches a thescelosaur from a different angle, the thescelosaur detects each when its distance is less than $D$ . + +We assume that because of the position of the eyes on opposite sides of the dinosaurs' heads, their vision is virtually $360^{\circ}$ ; thus, independent of its own orientation, each is aware of the position of the opponent. + +The average human reaction time is $\approx 0.1$ s; the thescelosaur can turn around $180^{\circ}$ in this amount of time. We assume that animals with the speed and agility of these dinosaurs have a considerably smaller effective reaction time, which we vary between 0.005 s and 0.05 s in our study. Furthermore, we assume that the additional burden on the senses of the thescelosaur of the presence of two velociraptors instead of one does not change its effective reaction time. + +From a picture of the velociraptor [Czerkas and Olson 1987, 28, compared with Paul 1988, 363], we deduce approximate measurements: a body length of $3\mathrm{m}$ , foreclaw length $0.5\mathrm{m}$ , hip-to-foreclaw distance $0.6\mathrm{m}$ , and hip-to-head distance $1.2\mathrm{m}$ . Moreover, a running bipedal dinosaur, because of its long tail, has a center of gravity close to the hips [Alexander 1989, 69]. Based on these measurements, we assume that at top speed the velociraptor will catch anything that comes within a distance $\delta_v = 0.6\mathrm{m}$ of its position, which we define to be the place on its torso from which the foreclaws extend. At the widest point of its torso, the velociraptor is only $0.4\mathrm{m}$ wide; we can ignore this thickness, as it is contained well within the reach of its foreclaws. Note that the location of the center of this reach is not at the hips, which is the point from which we assume the turning radius was calculated by the scientists. 
However, this slight incongruity does not qualitatively change our approach to the problem.

The thescelosaur is a biped of similar size. For the velociraptor to catch it, we assume that the velociraptor must be able to grab it at the torso, as the head and tail are too thin to grab easily at $60~\mathrm{km / h}$. So, we represent the thescelosaur as a circle of radius $\delta_t = 0.2$ m over its hips. If the grabbing region of the velociraptor intersects this circle, the thescelosaur is caught.

To facilitate calculation, we assume that both predator and prey move at full speed for the entire time of the hunt $T$, even when they are moving in curves, with radii of curvature no less than $r_v$ and $r_t$. This assumption is not entirely reasonable, as one can calculate the centripetal accelerations to be 19 g's and 39.4 g's, respectively. Given time for further investigation, it would be appropriate to model the dinosaurs with a maximum acceleration up to a top speed.

For the second part of the problem, with two velociraptors, we assume that the velociraptors work perfectly together:

- A velociraptor has just as much incentive to let its companion catch the thescelosaur as to catch the prey itself.
- The velociraptors are perfectly coordinated and can communicate their plans.
- The velociraptors allow each other space to move; we assume that this is equivalent to preventing their grabbing regions from intersecting.

# Analysis: One Velociraptor

# Approach

We begin with the simple case of the velociraptor initially chasing the thescelosaur along a straight line, separated by a distance $d$ significantly larger than the turning radii of the dinosaurs. The thescelosaur's goal is to evade the velociraptor for however much time $(T - t)$ remains before the velociraptor runs out of endurance.
Thus, if $d > (v_v - v_t)(T - t)$, the thescelosaur can run directly away from the velociraptor and the velociraptor cannot close the distance in the time remaining.

But what if $d < (v_{v} - v_{t})(T - t)$? (Certainly this will be the case if the velociraptor can approach the thescelosaur undetected to a distance closer than $(v_{v} - v_{t})T = 42 \, \text{m}$.) In this scenario, the thescelosaur must make use of its superior maneuverability if it is to survive. For sufficiently large $d$, no matter how the thescelosaur turns, it is easy for the velociraptor to adjust its course to keep heading directly toward its prey.

# Encounter Strategies

The thescelosaur must now make some decisions: When has the velociraptor come near enough for the thescelosaur to make use of its superior agility (while not getting eaten), and how should it let the velociraptor approach? We consider two representative strategies.

# Encounter Strategy A

The thescelosaur initially runs directly away from the velociraptor. This costs the velociraptor time, since it can close the relative distance at a rate of only $(v_{v} - v_{t})$. Once the velociraptor has closed to within a distance $k$, the thescelosaur uses its superior maneuverability to "dive" out of the way. It turns at its minimum turning radius; the velociraptor then turns at its minimum turning radius to intercept, but it is too late (see Figure 1). The distance $k$ must be chosen with great care: If it is too large, the velociraptor can adjust its angle and close on the thescelosaur; if it is too small, the thescelosaur will not be able to get out of the way of the velociraptor's grabbing radius.

![](images/5ba51fdf52a0ea012f8fe9dee3753290f3959e1dc2790c2bf103bf30c93e7e50.jpg)
Figure 1. Encounter Strategy A.

![](images/22fa430152af46c4574faf4dbd7421d20a2ae29c2606ed8881833061576ea9dc.jpg)
Figure 2. Encounter Strategy B.
# Encounter Strategy B

The thescelosaur allows the velociraptor to close only to a distance $l$ considerably greater than the $k$ in Strategy A. At this point, the thescelosaur turns around and heads directly toward the velociraptor (see ① in Figure 2). The velociraptor, of course, continues to close; the distance between them now shrinks at a rate $(v_v + v_t)$. At an appropriate distance $m$, the thescelosaur again dives out of the way (see ② in Figure 2). Compared to Strategy A, however, the thescelosaur will be even more successful at dodging the velociraptor, as it need only change its course by a small amount to fly by the velociraptor at a relative velocity of approximately $(v_v + v_t)$. The value for $m$ must be chosen with great care: if it is too small, the thescelosaur will not be able to stay outside the reach of the velociraptor, while if it is too large, the velociraptor will be able to compensate and intercept the thescelosaur.

# Endgame

If the thescelosaur survives the encounter, the velociraptor will attempt to turn around and once again close in on its prey. The thescelosaur then has two endgame strategies.

# Endgame Strategy A

Run away! If the distance between the two is greater than $(v_{v} - v_{t})(T - t)$, the thescelosaur escapes unscathed as the velociraptor runs out of endurance.

# Endgame Strategy B

This is a more daring maneuver but will take a big chunk of the velociraptor's time. Instead of running away from the velociraptor, the thescelosaur should try to curve around it, ending up directly behind it. The velociraptor must turn around to come at the thescelosaur; because of its superior agility, the thescelosaur may be able to remain in this position relative to the velociraptor for some time. If the velociraptor starts to turn left, the thescelosaur also starts to turn left, attempting to remain $180^{\circ}$ behind it.
Because of its superior speed, however, the velociraptor will eventually outdistance the thescelosaur, and the thescelosaur will no longer be able to stay directly behind it. At this point, the thescelosaur should resort to Endgame Strategy A, as the velociraptor will soon turn around and chase it.

At the end of this post-encounter "endgame," the velociraptor will once again be chasing the thescelosaur, and we return to the "approach" phase.

# Modeling the Chase: One Velociraptor

# The Velociraptor Metric

How does the velociraptor get from point A to point B? More precisely: If the velociraptor is at the origin of the plane, facing in the positive $y$-direction, how would it get to the point $(x,y)$? Since we are considering the velociraptor to have constant speed, it should simply take the shortest path from the origin to the point. Unfortunately for the velociraptor, this distance is not given by the Euclidean metric, since the velociraptor has a limited turning radius! It cannot take a Euclidean straight-line path. So what is the appropriate path?

In Figure 3a, the velociraptor is at the origin facing upward. The two circles to either side represent the path of minimum turning radius. For points (such as $A$ and $B$) outside these circles, the choice of minimum distance path is fairly clear: The velociraptor turns around the circle of minimum radius until it is heading directly toward the destination point; it then leaves the circle and heads straight toward the point. Representative paths are shown at right as dashed lines. Note that it is always advantageous to turn toward the side of the plane on which the destination point lies. (For the calculation of this length, refer to the Appendix.)

![](images/bf959e20a9543ed0371b5cea926448e4188d66dac7dc56a9a320beaa75dceab4.jpg)
Figure 3a. How a velociraptor at the origin and facing upward gets to points $A$ (at upper left) or $B$ (at lower right).
![](images/7d73dae0f5533fd36bca8d11be1af2ccae972b4ce47f18cebb27701dd5328031.jpg)
Figure 3b. How it gets to a point $C$ inside the circle of minimum turning radius to its right.

Points inside the circles of minimum turning radius are more difficult for the velociraptor to access. It must somehow move such that destination points inside these circles (e.g., point $C$ in Figure 3b) come to lie on or outside the circles of minimum curvature. The shortest way to do this is to turn away (in this case, to the left) from the destination point, following the other circle of minimum radius. Once the destination point lies outside the circles of minimum radius, the velociraptor proceeds as above to reach the point. (For the calculation of this length, refer to the Appendix.)

We now define a new metric (velociraptor metric 1) on the plane: the length of the path, described above, that the velociraptor follows from the origin (the location of the velociraptor, facing the positive $y$-direction) to the point. This metric is represented in Figure 4 as a density plot with contour lines superimposed. Darker regions correspond to shorter distances for the velociraptor. The circles of minimum turning radius are easy to see, because of the discontinuity of the metric on the portions of the circles above the $x$-axis.

Thus far, we have considered the velociraptor as a point, though it has a grabbing radius of $\delta_v$. To reach a point, it need only come within a distance $\delta_v$ of the desired point. We assume that the velociraptor minimizes how far it has to travel. Therefore, we replace the value of the metric at each destination point with the minimum value of the original metric on a disk of radius $\delta_v$ surrounding the destination point, yielding Figure 5 (velociraptor metric 2).
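Velociraptor metric 1 admits a closed form: for each of the two minimum-radius circles, the path is an arc followed by a tangent straight run, and the metric is the shorter of the two. A circle is skipped when the destination lies inside it, which forces the path onto the other circle (always possible, since the two circles' interiors do not overlap). A Python sketch of this computation (our own implementation, not the authors' code):

```python
import math

def raptor_metric1(x, y, r=1.5):
    """Shortest path length from the origin (facing +y, minimum turning
    radius r) to (x, y): turn along one minimum-radius circle, then run
    straight along the tangent.  Minimized over the two circles."""
    best = math.inf
    for side in (1.0, -1.0):                  # right / left turning circle
        cx = side * r                         # circle center (+-r, 0)
        dx, dy = x - cx, y
        d = math.hypot(dx, dy)
        if d < r:                             # destination inside this circle:
            continue                          # only the other circle works
        straight = math.sqrt(d * d - r * r)   # tangent-line segment
        gamma = math.atan2(dy, dx)            # bearing of target from center
        alpha = math.acos(r / d)              # center angle to tangent point
        if side > 0:   # clockwise arc, starting from polar angle pi
            raw = math.pi - (gamma + alpha)
        else:          # counter-clockwise arc, starting from polar angle 0
            raw = gamma - alpha
        arc = raw % (2.0 * math.pi)
        if arc > 2.0 * math.pi - 1e-9:        # guard against roundoff at 0
            arc = 0.0
        best = min(best, r * arc + straight)
    return best
```

Metric 2 then replaces the value at each point by the minimum of metric 1 over a disk of radius $\delta_v$ around it. Sanity checks: a point straight ahead costs its Euclidean distance, while the point $(2r, 0)$, dead right, costs a half-circle $\pi r$.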
Note that the only parameters on which metrics 1 and 2 depend are the grabbing radius and minimum turning radius of the velociraptor, and that metric 1 is simply metric 2 with a grabbing radius of zero.

![](images/573f8763abd7707057e534ddfaddfdc2d8bda56c7a10b95f5a199dab3cafc78c.jpg)
Figure 4. Velociraptor metric 1, depicted as a density plot.

![](images/25d74a77026532867a0bd382950ea930b332ac324a1a104714775f278c588128.jpg)
Figure 5. Velociraptor metric 2, depicted as a density plot.

Now we treat a subtlety that we alluded to in the previous section. The origin of the coordinate system (the velociraptor's center of gravity) is actually 0.6 m behind the center of the grabbing radius. However, one can see from Figure 6 that given the circular and straight-line motions discussed above, the model of the situation remains exactly the same if we shift the origin to the center of the velociraptor-ball and simply change the effective minimum turning radius to $\sqrt{1.5^2 + 0.6^2} = 1.6$ m.

![](images/5cf059c325f6a0fff6ab71dbf9e48b71c0ccdc21e50ba6fbbf8a52ec631fb137.jpg)
Figure 6. Result of shifting the origin to take into account the fact that the velociraptor is not a point.

We can further simplify our model in the following manner: We know that the velociraptor has caught the thescelosaur if their effective regions (circles of radius $\delta_v$ and $\delta_t$) overlap. This is equivalent to saying that the centers of the two circles are separated by a distance less than $\delta = \delta_v + \delta_t$, or that the velociraptor has a grabbing radius of $\delta$ and the thescelosaur is a point. We thus define our metrics as described above, with effective turning radius $1.6\mathrm{m}$ and effective grabbing radius $\delta = 0.8\mathrm{m}$.

# Dinosaurs Have Peanut-Sized Brains

We first assume that the velociraptor acts to minimize, and the thescelosaur to maximize, the value of the velociraptor metric.
We will see later that this is not sufficient to encompass all strategies (in particular, Encounter Strategy B, in which the thescelosaur heads straight for the velociraptor until the last possible moment), so we will soon make our dinosaurs a bit more sophisticated. + +To evolve the system in time at a given time $t$ , each dinosaur considers the location and heading of its opponent. It then chooses how to move during the next time step $\Delta t$ . (Note that $\Delta t$ is approximately equal to the effective reaction time, as it is only after the time $\Delta t$ that the dinosaur can next evaluate the movement of its opponent.) As a range of options, the dinosaur chooses from a selection of arcs with length $v\Delta t$ and radii between the minimum turning radius and infinity (a line segment), shown in Figure 7. The dinosaur may choose the path with the most advantageous endpoint, or it may extrapolate each path several more time steps and choose from among those based on the metric evaluated at their endpoints. We vary this choice of strategy in our analysis, to optimize success rates of both predator and prey. + +![](images/131a679a5d6916330ee5bf44c3240903538e169dbf7b35cc5e5dae5d03731644.jpg) +Figure 7. Possible strategies for the velociraptor. + +We developed a computer simulation of the dinosaurs' behavior. Using the strategy of the velociraptor attempting to minimize the metric and the thescelosaur attempting to maximize it, we observed several phenomena discussed in the previous section. Most important, the dinosaurs chose paths similar to those used in determining the metric. + +When the dinosaurs were separated by a distance larger than approximately $3\mathrm{m}$ , we observed the "approach" phase of the chase. The thescelosaur would run directly away from the velociraptor, while the velociraptor would adjust its course to trail directly behind, closing the distance. 
Under most circumstances, the thescelosaur would attempt to "shake" the velociraptor; but since the velociraptor was a sufficient distance behind, it was easy for it to adjust its course appropriately. Thus, we observed a rapid (on the order of a time step of $\approx 0.01$ s) small-amplitude oscillation of the thescelosaur's direction in the approach phase. In the simulation, once the velociraptor gets close enough to the thescelosaur, the thescelosaur adopts Encounter Strategy A. If it survives, it adopts one of the endgame strategies. In Figures 8 and 9, we show a hunt in which the thescelosaur successfully evades the velociraptor by using Encounter Strategy A followed by Endgame Strategy B.

![](images/bd97f58474247cffbdd102a31eda01a2ef67482f1b32216de8897b96b1ced5e6.jpg)
Figure 8. Encounter Strategy A, close up. Dotted lines connect points on the two curves that correspond to the same time. The velociraptor makes the big loop. This encounter strategy allows the agile thescelosaur to escape virtually every time if the velociraptor's grabbing radius is below a critical value (0.45 m). Unfortunately, the strategy fails with equal certainty if the velociraptor's grabbing radius is above that critical value. The velociraptor in this figure has a grabbing radius of 0.4 m. A few key stages are:

A. The thescelosaur runs away from the velociraptor.
B. When the velociraptor gets too close, the thescelosaur quickly turns out of the way.
C. The velociraptor cannot respond to this sudden turn fast enough, letting the thescelosaur duck behind it. Now, the velociraptor must loop around to continue chasing its meal.
D. The thescelosaur escapes before the velociraptor completes its loop.

We further found that the thescelosaur performed better using metric 2, looking only one time step ahead.
Metric 2 is clearly advantageous for the prey, because this metric teaches it to stay out of the path of the predator's grabbing radius, rather than simply avoiding its center. The thescelosaur relies on its ability to maneuver quickly; it constantly adjusts its heading, so estimating several time steps into the future is of no use to it.

The velociraptor performed optimally using metric 1, looking 5 time steps ahead. We originally programmed the velociraptor to use metric 2, but it turned out to be a bit too cocky; the velociraptor was constantly disappointed as the thescelosaur barely slipped out of reach. When we changed its strategy to employ metric 1, this problem was eliminated. In a future investigation, it may be useful to make the velociraptor use metric 2 with a nonzero grabbing radius smaller than the actual value.

![](images/5265875b931d9308807e56268579a1c75ada15b0fa0e07aa7145ad6eee1604bd.jpg)
Figure 9. Encounter Strategy A, far away. After a round of Encounter Strategy A, the velociraptor soon catches up with the thescelosaur, creating a new encounter every $3.2\mathrm{s}$. This figure shows a typical series of encounters for a 15-s chase when the thescelosaur detects the velociraptor at the minimum detection radius $(15\mathrm{m})$. Notice that the angle between the chase and escape paths is highly variable and sensitive to initial conditions. After $15\mathrm{s}$, the velociraptor gets tired and the thescelosaur can simply run away.

We now ask what parameters allow the thescelosaur to survive. For the given speeds and minimum turning radius, the thescelosaur will always survive for values of the effective grabbing radius $\delta < 0.4\mathrm{m}$ and is always captured for $\delta > 0.5\mathrm{m}$. In the region in between, the outcome is highly sensitive to initial conditions. Unfortunately for the thescelosaur, the given value of $\delta$ is actually $0.8\mathrm{m}$. Thus, the thescelosaur should try Encounter Strategy B.
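For reference, the core of the simulation step can be sketched as follows: each candidate arc of length $v\Delta t$ is tried, and metric 1 (derived in closed form in the Appendix) is evaluated at its endpoint. The speed, time step, and candidate curvatures below are illustrative values, not the exact ones used in our simulator:

```python
import math

def metric1(l, theta, r=1.6):
    """Closed-form turn-then-straight distance (metric 1; see the Appendix)
    to a point at distance l and bearing theta from the heading.  Valid when
    the point lies outside the minimum turning circles of radius r."""
    theta = abs((theta + math.pi) % (2 * math.pi) - math.pi)  # fold to [0, pi]
    s = math.sin(theta)
    ac = math.sqrt(max(l * (l - 2 * r * s), 0.0))             # tangent segment
    beta = math.acos((r - l * s) / math.sqrt(l * l - 2 * l * r * s + r * r))
    return ac + r * (beta - math.atan2(ac, r))                # segment + arc

def step(pos, psi, target, v=16.7, dt=0.01, r_min=1.6):
    """One greedy time step: try arcs of length v*dt at several curvatures
    and keep the endpoint minimizing metric1 to the target (the predator's
    viewpoint; the prey would maximize instead).  Heading psi is measured
    from the +y axis, as in the Appendix."""
    best = None
    for k in (-1 / r_min, -0.5 / r_min, 0.0, 0.5 / r_min, 1 / r_min):
        if k == 0.0:
            dx, dy, dpsi = 0.0, v * dt, 0.0       # straight segment
        else:
            dpsi = k * v * dt                     # heading change along the arc
            dx = (1 - math.cos(dpsi)) / k         # body-frame lateral offset
            dy = math.sin(dpsi) / k               # body-frame forward offset
        x = pos[0] + dx * math.cos(psi) + dy * math.sin(psi)
        y = pos[1] - dx * math.sin(psi) + dy * math.cos(psi)
        l = math.hypot(target[0] - x, target[1] - y)
        theta = math.atan2(target[0] - x, target[1] - y) - (psi + dpsi)
        cost = metric1(l, theta, r_min)
        if best is None or cost < best[0]:
            best = (cost, (x, y), psi + dpsi)
    return best[1], best[2]
```

With a distant target dead ahead, the straight segment wins and the heading is unchanged, as one would expect.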
# The Thescelosaur Learns to Play "Chicken"

Encounter Strategy B requires the thescelosaur to head directly toward the velociraptor, which is incompatible with looking forward a few time steps to see which path will maximize its distance from the velociraptor (based on metric 2). Thus, we must modify our simulation to study this strategy.

We assume that the thescelosaur has sufficient time and distance to turn around and head directly toward the velociraptor when the encounter begins. We therefore consider only part 2 of Encounter Strategy B, in which the thescelosaur dives out of the path of the velociraptor just before collision.

![](images/fcd33aae3ddd938756a8f405d48453800d2041cc83de24fb363c6750df1fba0b.jpg)
Figure 10. Encounter Strategy B. The thescelosaur must follow Encounter Strategy B if the velociraptor has a grabbing radius above $0.45\mathrm{m}$ (at which Encounter Strategy A no longer works). The velociraptor in this figure has a grabbing radius of $0.6\mathrm{m}$. A few key stages are:

A. The thescelosaur runs towards the velociraptor.
B. When the velociraptor is too close, the thescelosaur dodges off to a side and the velociraptor follows.
C. The thescelosaur is now to the velociraptor's left, but the velociraptor curves to the right because it knows that it cannot make a sharp enough left turn to catch its meal.
D. The thescelosaur has sent the velociraptor on a huge loop while it swiftly makes a much tighter turn to come in behind the velociraptor.
E. The thescelosaur is far away even before the velociraptor completes its loop. Soon after this, the thescelosaur must turn around and run towards the velociraptor again.

We assume that once the thescelosaur begins to dodge, it is simply resuming its original strategy of maximizing distance (according to metric 2) between the two dinosaurs (see Figure 10).
Thus, we can simulate this strategy using the original simulation, with the initial condition that the dinosaurs are heading straight toward each other separated by a small distance $m$, as shown in the second part of Figure 3.

We found that if the thescelosaur runs towards the velociraptor and starts turning when it is $2.15\mathrm{m}$ from the velociraptor, it will escape every time from a grabbing radius of $0.6\mathrm{m}$, even if the other parameters are changed slightly. Thus Encounter Strategy B is a clear improvement over Encounter Strategy A.

A rough calculation reassures us that such a strategy indeed works for multiple passes, confirming our original assumption. After one such pass, the thescelosaur has a $7\mathrm{m}$ lead, and in the time it takes for the thescelosaur to turn around $180^{\circ}$, the velociraptor will gain about $1.9\mathrm{m}$; this leaves the thescelosaur a bit of maneuvering room before the critical $2.15\mathrm{m}$ turning point.

# The Velociraptor Takes a Gamble, or, The Rational Dinosaurs

If the thescelosaur can escape using Encounter Strategy B, what is the velociraptor to do? We found that for a grabbing radius of $0.6\mathrm{m}$, the $2.15\mathrm{m}$ critical distance had very little room for error—if the thescelosaur dodges too early or too late by $0.1\mathrm{m}$, it will be caught. Thus, the velociraptor knows exactly when the thescelosaur will make its dodge.

If the velociraptor pursues its until-now optimal strategy of minimizing its metric, it will lose its dinner every time. Therefore, it should try to anticipate the movement of the thescelosaur; if the velociraptor guesses correctly which way the thescelosaur will swerve, and corrects its own course accordingly, it can gain valuable time and thus catch its prey. However, if it does not guess correctly, it loses even more time than if it had merely gone straight.
Moreover, the thescelosaur, pursued by such a decision-oriented velociraptor, for its part also wants to anticipate the movement of its predator.

We can model this as a game-theory problem. Consider the last possible moment before the thescelosaur must swerve. Under our original Encounter Strategy B, the thescelosaur swerves either left or right. The velociraptor, knowing this, should arbitrarily choose to swerve either left or right in this instance, giving it a $50\%$ chance of guessing correctly and catching its prey. However, the thescelosaur knows this! So, if it is sure that the velociraptor will swerve, it should keep going straight past the critical point, and once the velociraptor swerves it can dodge the other way an instant later. The velociraptor, knowing this, realizes it is not always to its advantage to anticipate the movement of the thescelosaur; perhaps the thescelosaur will anticipate its anticipation. In this situation, the velociraptor's optimal strategy is to keep going straight! The thescelosaur, then past the critical point, will be eaten.

Thus, it may be reasonable for the thescelosaur to move left (L), right (R), or stay straight toward the center of the velociraptor (C). The velociraptor can choose to anticipate these moves; we thus denote the velociraptor's strategy by L, C, or R. If the velociraptor's guess is correct, we assume that it catches the thescelosaur, receiving a normalized payoff of 1, and the thescelosaur receives a payoff of 0. If the velociraptor guesses incorrectly, the thescelosaur survives the encounter, and the game will be played again at the next encounter, and so on until the velociraptor's endurance runs out.

If the thescelosaur swerves one way and the velociraptor anticipates the other (a "large miss"), then there will be a decent interval of time before the velociraptor catches up to the thescelosaur for the next encounter.
If, however, one of the dinosaurs goes straight and the other swerves (a "small miss"), it will take less time for the velociraptor to catch up. Thus, there will be fewer encounters for the remainder of the hunt following a large miss than after a small miss, and thus the probability $q$ that the thescelosaur survives the hunt after a small miss is less than the probability $p$ that it survives after a large miss. In the small-miss case, the thescelosaur's payoff is therefore $q$, and that of the velociraptor is $1 - q$. In the large-miss case, the thescelosaur's payoff is $p$, and that of the velociraptor is $1 - p$.

In this analysis, we have simplified the payoffs so that all small misses result in the same payoffs and all large misses result in the same payoffs. This may not be entirely correct, as small misses come in two different forms: those in which the velociraptor goes straight, and those in which the thescelosaur goes straight. We have also assumed that the dinosaurs are symmetric and do not prefer one side to the other.

We would like to find a Nash equilibrium of this payoff matrix. It is clear that there is no pure equilibrium, so we look for a mixed strategy. Let $a$ and $b$ be the respective probabilities that the velociraptor and thescelosaur choose L. Since there is no difference between right and left, the probability of a dinosaur picking L equals the probability that it picks R. Thus, the probabilities that the dinosaurs choose strategy C are $1 - 2a$ and $1 - 2b$, respectively.

Finding the mixed Nash equilibrium is now easy. As in the elementary game-theory problem, each dinosaur wants to maximize its own expected payoff and minimize that of the other. This occurs when the expected payoffs of the opponent are equal for any of its strategies.

Let $P_{t}(V|L)$ be the expected payoff for the thescelosaur if the velociraptor chooses L. Thus we have $P_{t}(V|L) = P_{t}(V|R) = (1 - 2b)q + bp$ and $P_{t}(V|C) = 2qb$.
Setting these thescelosaur payoffs equal, we find $b = q / (4q - p)$.

Similarly, letting $P_{v}(T|L)$ be the expected payoff for the velociraptor if the thescelosaur chooses L, we have $P_{v}(T|L) = P_{v}(T|R) = a + (1 - q)(1 - 2a) + a(1 - p)$ and $P_{v}(T|C) = 2a(1 - q) + (1 - 2a)$. Setting these equal, we find that $a$ is also $q / (4q - p)$.

To determine the probabilities $a$ and $b$, we must determine $p$ and $q$. If there is time remaining in the chase for only one encounter, then $p = q = 1$, as the thescelosaur will not have another chance. Thus, $a = b = 1/3$, each dinosaur chooses each of its three strategies with equal probability, and the thescelosaur escapes $2/3$ of the time! Thus, if both dinosaurs know that only one encounter remains, their expected payoffs from that encounter are $1/3$ (for the velociraptor) and $2/3$ (for the thescelosaur).

Now suppose (for example) that if there is a miss, there will be time for one more encounter if it is a small miss but not if it is a large miss. Thus, $p = 1$, since a large miss means that the thescelosaur survives the chase, and $q = 2/3$, since the probability of the thescelosaur surviving the next encounter is $2/3$, from the previous paragraph. Therefore, $a = b = 2/5$.

Given more time for this study, time-dependent values of $p$ and $q$ could be determined, and we could determine the subgame-perfect Nash equilibrium of the dinosaurs for the entire chase.

# Two Velociraptors: What Changes?

# Approach

It is to the thescelosaur's advantage to run directly away from the velociraptors when the distance from them is large. With two velociraptors, this translates to the thescelosaur running so that the distance between it and one velociraptor remains the same as the distance between it and the other velociraptor. In this large-distance limit, the strategy for the velociraptors is also clear: They should run towards the thescelosaur using the same strategies as above.
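As a quick numerical aside before analyzing the two-velociraptor case in detail: the equilibrium values derived in the previous section can be checked with exact rational arithmetic. A sketch (here $q$ and $p$ are the survival probabilities after a small and a large miss, as in the formula $b = q/(4q - p)$; the function names are ours, not from the simulator):

```python
from fractions import Fraction as F

def mix(p, q):
    """Equilibrium probability of swerving left (= right), b = q/(4q - p);
    q and p are survival probabilities after a small and a large miss."""
    return q / (4 * q - p)

def prey_payoffs(b, p, q):
    """Thescelosaur's expected payoff against each velociraptor pure strategy."""
    swerve = b * p + (1 - 2 * b) * q   # velociraptor plays L (or R)
    straight = 2 * b * q               # velociraptor plays C
    return swerve, straight

# last possible encounter: p = q = 1 gives b = 1/3 and a 2/3 escape chance
b = mix(F(1), F(1))
assert b == F(1, 3)
assert prey_payoffs(b, F(1), F(1)) == (F(2, 3), F(2, 3))

# one further encounter only after a small miss: p = 1, q = 2/3 gives b = 2/5
assert mix(F(1), F(2, 3)) == F(2, 5)
```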
There is one substantial difference between this case and that of one velociraptor: At the very beginning of the chase, the initial configuration is specified not only by $D$ (which suffices for the single velociraptor and prey), but also by the angle made by the two velociraptors with the thescelosaur as the vertex. It is to the velociraptors' advantage to start out $180^{\circ}$ apart, assuming that the thescelosaur moves in a straight line and the velociraptors continually adjust their directions to intercept it (a reasonable assumption in the far-distance limit). This arrangement produces the maximum possible initial approach velocity: the velociraptors' maximum velocity $v_{v}$.

# Encounter

By the time the velociraptors close in on the thescelosaur to the point that the prey will have to start to curve, the configuration of the two velociraptors plus thescelosaur approaches one of only two cases. In the first case, both velociraptors run side by side and hence act roughly as one velociraptor with rather large turning and grabbing radii. In the second case, the velociraptors and the thescelosaur form a straight line, but one velociraptor is behind the other.

There are, then, two main strategies for the velociraptors: either to run side by side, or for one to run behind the other and pounce as soon as the thescelosaur starts turning. The side-by-side strategy is quite easy to model; as the two predators act as one, this case is a simple variant of the one-predator case. As one might expect, the critical grabbing radius for switching from Encounter Strategy A to Encounter Strategy B turns out to be half that of the one-predator case.

The consecutive-velociraptor strategy, on the other hand, is a bit trickier. This strategy is meant for very small critical radii (0.3 m or less).
The idea is for one velociraptor to "corral" the thescelosaur by curving toward it, even though according to the distance metric this actually makes the thescelosaur farther away. This maneuver restricts the movement of the thescelosaur by a great deal; the other velociraptor, which has been cruising behind the corraling velociraptor during this maneuver, can then circle in for the kill.

Preliminary studies using our simulation suggest that this strategy has a great deal of merit. However, the simulator breaks down: the would-be corraling velociraptor curves the wrong way. Although we did not program the simulator to allow the animals to work together as the preceding paragraph implies, we are confident that this strategy will work for small radii if such experiments are done. Note also that it becomes much harder for the thescelosaur to dive between the predators, although this is probably possible for a small enough critical radius.

# Sensitivity of the Model

We tested the model by varying the parameters of the simulator, most notably the reaction time and the grabbing radius. Changing the turning radii and the relative velocities, while undoubtedly having a pronounced effect on the outcome, does not produce counterintuitive, extremely sensitive, or chaotic behavior.

On the other hand, Encounter Strategy A is rather sensitive to the reaction times of the dinosaurs. For instance, the critical grabbing radius varies from $0.4\mathrm{m}$ to $0.6\mathrm{m}$ over the range of reaction times that we tested. Changes in reaction time, along with small changes in initial position and velocity, significantly affected behavioral parameters, such as the angle between chase and escape paths. These effects have a biological interpretation: Escaping from a velociraptor using Strategy A involves exploiting a small window of opportunity during which the thescelosaur can duck out of the way before the velociraptor has time to respond.
The thescelosaur must respond boldly to the slightest opportunity, resulting in a sensitivity that ensures that no two encounters are alike.

Encounter Strategy B exhibits a different kind of unpredictability. Although this strategy is less sensitive to initial conditions and reaction times, considerations of game theory require that each dinosaur choose randomly the direction in which to swerve during the encounter.

Thus, as expected, both models exhibit some unpredictable behavior. After all, the chase that we are modeling is a mortal battle of wits, not a preplanned ritual.

# Strengths and Weaknesses of the Model

Our model has many strengths, perhaps the greatest of which is that the model is easy to understand: Minimization and maximization of a metric is a simple concept, and one that is not hard to implement.

Another prime strength of our model is the extreme robustness of the simulator. Not only can the simulator handle a wide range of similar scenarios simply by changing the parameters involved, but it can also handle a variety of different strategies simply by adjusting the initial conditions accordingly, as we did with Strategy B. However, we were not able to reprogram the simulator to deal with cooperation between two predators.

Moreover, we feel that the part of our model incorporating Strategy B has many virtues to recommend it. Its robustness, as discussed in the previous section, lends it credibility as a feasible strategy. In addition, its applicability to a relatively large subset of turning radii makes it a better strategy than any other we could find. Furthermore, the game theory presented can be applied to most such finite games. One caveat, however: in an actual situation involving live creatures, the prey would almost certainly not have the presence of mind to realize that running straight toward its predator is the optimal strategy.
One of the main weaknesses of our model is the assumption that the dinosaurs go at top speed even when they are turning. More realistically, they should slow down to go around curves at a reasonable centripetal acceleration.

# Addendum: Realism Rears Its Ugly Head

The assumption that the dinosaurs are going at their top speeds even on curves is a rather poor one, due to the tremendous centripetal accelerations that would be involved. A better approximation is to model the dinosaurs' velocity on a circular arc as a function of the radius of that circle. Since the centripetal acceleration is $a = v^2 / r$, a first approximation is to assume that the dinosaurs can sustain an acceleration of, say, $3\mathrm{g's}$, and can keep that maximum acceleration no matter what the radius of curvature is. Then we can model the velocity as a function of the radius as $v(r) = \min(\sqrt{a r}, v_{\max})$.

The natural question we must ask is: Does this change the optimal strategy for the velociraptor?

A second approximation would also take into account the tangential acceleration and deceleration entering and leaving curved paths as the radius of curvature changes.

# References

Alexander, R. McNeill. 1989. *Dynamics of Dinosaurs and Other Extinct Giants*. New York: Columbia University Press.
Czerkas, Sylvia J., and Everett C. Olson, eds. 1987. *Dinosaurs Past and Present*. Vol. II. Seattle, WA: University of Washington Press.
Paul, Gregory S. 1988. *Predatory Dinosaurs of the World*. New York: Simon and Schuster.
Weishampel, David B., Peter Dodson, and Halszka Osmolska. 1990. *The Dinosauria*. Los Angeles, CA: University of California Press.

# Appendix: The Velociraptor Metric

# Outside the Minimum Turning Radius

Let the velociraptor (currently considered to be a point) be located at the origin, facing in the positive $y$-direction.
Let the destination point be $A$, a distance $l$ from the velociraptor, at an angle $\theta$ from the velociraptor's heading; thus, $A = (l \sin \theta, l \cos \theta)$. We abbreviate the minimum turning radius $r_v$ as $r$. Let $B = (0, r)$ be the center of the circle of minimum turning radius, and let $C$ be the point at which the velociraptor leaves the circle and moves along a straight line to point $A$. (Thus line $AC$ is tangent to the circle.) Let $\alpha = \angle ABC$ and $\beta = \angle OBA$ (see Figure 11).

![](images/2f99851dd17d2deb749c8d370cfc0215354eb71bbff4c4d30ff86b57ebe591a4.jpg)
Figure 11. Diagram for the velociraptor metric outside the minimum turning radius.

The total distance that the velociraptor must travel is the sum of the length $AC$ and the length of the arc $OC$, which is $r(\beta - \alpha)$. From the given coordinates, we have $AB = \sqrt{(l\sin\theta - r)^2 + (l\cos\theta)^2}$ and $BC = r$; since triangle $ABC$ has a right angle at $C$, the length $AC = \sqrt{AB^2 - BC^2} = \sqrt{(l\sin\theta - r)^2 + (l\cos\theta)^2 - r^2} = \sqrt{l(l - 2r\sin\theta)}$. The law of cosines tells us that for a triangle with sides $a$, $b$, and $c$, the angle opposite the side with length $c$ is $\cos^{-1}[(a^2 + b^2 - c^2)/2ab]$. We found $AB$ above; we know $OB = r$ and $OA = l$; thus, we have the angle $\beta$. The angle $\alpha$ is one of the acute angles in right triangle $ABC$, so $\alpha = \tan^{-1}(AC/BC)$. Thus, the total length is

$$
\sqrt{l(l - 2r\sin\theta)} + r\cos^{-1}\left(\frac{r - l\sin\theta}{\sqrt{l^{2} - 2lr\sin\theta + r^{2}}}\right) - r\tan^{-1}\frac{\sqrt{l(l - 2r\sin\theta)}}{r}.
$$

# Inside the Minimum Turning Radii

Let $A$ be the destination point, $P$ and $Q$ the centers of the circles of minimum turning radius, and $B$ the center of the right-hand circle of minimum turning radius after the velociraptor moves through arc $OC$, as shown in Figure 12. Let angles $\phi, \alpha,$ and $\beta$ be as shown.
The velociraptor must move through minor arc $OC$ and major arc $AC$, for a total length of $r(2\pi - \phi + \alpha + \beta)$. Let $A$ have coordinates $(x, y)$. The Pythagorean theorem yields $AP = \sqrt{(x + r)^2 + y^2}$ and $AQ = \sqrt{(x - r)^2 + y^2}$. The lengths $PB$, $PQ$, and $AB$ are $2r$, $2r$, and $r$. Thus, we may apply the law of cosines as described in the preceding subsection and find the angles in question.

![](images/417c4e5c453d3c5d608469dc724b3c5c29870dc2eb0fe195568bde815468d730.jpg)
Figure 12. Diagram for the velociraptor metric inside the minimum turning radius.

# Gone Huntin': Modeling Optimal Predator and Prey Strategies

Hei (Celia) Chan

Robert A. Moody

David Young

Pomona College

Claremont, CA 91711

Advisor: Richard Elderkin

# Summary

We develop a model for the hunting strategy of Velociraptor mongoliensis pursuing Thescelosaurus neglectus. Regarding their characteristics, there are discrepancies between the problem statement and the literature; so we parameterize our model in terms of both physical and mechanical characteristics.

The primary locomotive differences between the animals are their relative speeds and turning radii. We show that the optimal strategies are simple, and we present equations and illustrations for the key components of the model. Since the optimal strategy for the predator includes a stochastic component, we present an equation for the probability of a successful encounter.

We also model the interaction of multiple predators and multiple prey. With a reasonable assumption regarding cooperative hunting, two or more velociraptors should have an insurmountable advantage, barring earlier detection.

Finally, we discuss an alternative approach, outlining a genetic programming solution that would evolve optimal strategies for both animals. We begin with the primitives required to evolve such a solution, and we discuss the nature of the evolution required to produce optimal solutions.
We show that the evolutionary traits identified by this supposition mirror the known traits.

# Background

Velociraptor, a theropod, lived in Central and East Asia during the Late Cretaceous period (97.5 to 66.4 million years ago). It was a fairly small animal, reaching a length of roughly $1.8\mathrm{m}$ and a mass of no more than $45\mathrm{kg}$ [Encyclopaedia Britannica 1997]. Velociraptor is thought to have hunted small herbivorous dinosaurs. Current speculation is that velociraptors hunted in pairs or packs. It is estimated that a velociraptor was capable of brief bursts of speed, roughly $15 - 20\mathrm{s}$ in duration, of up to $60~\mathrm{km / hr}$. They possessed a sickle claw on each foot and an ossified tendon in the tail, which together enabled them to strike and slash while maintaining balance. The velociraptor is closely related to the slightly larger Deinonychus, indigenous to North America during the Early Cretaceous period (144 to 97.5 million years ago).

Thescelosaurus neglectus, an ornithopod, lived in North America during the Late Cretaceous period. Thescelosauri were herbivores roughly $3.4\mathrm{m}$ in length. They were fast runners, capable of sustained speeds of up to $40 - 50\mathrm{km / hr}$. While they were of the general type of prey that velociraptors would probably hunt, it is not apparent that the two species inhabited the same continent at the same time. It is more likely that Deinonychus or its descendants were predators of Thescelosaurus. In any event, Mongolia during the Cretaceous period was generally arid, and vegetation was probably quite sparse. Thus, the habitat and hunting grounds of Velociraptor were probably large open areas with few obstacles and fewer hiding places [American Museum of Natural History 1997].
North America during the Cretaceous was quite wet and heavily vegetated, an environment probably less conducive to high-speed pursuit and better suited to stealthy hunters. Forests and streams may have provided hiding places for the prey but also served as boundaries or obstacles to escape routes.

# Assumptions and Limitations

We assume that both predator and prey adopt optimal strategies. This assumption is probably not realistic, since it requires the prey to have relatively perfect information about an approaching predator. Since turning and looking would undoubtedly decrease speed, it is not clear that the prey can both maintain optimal speed and possess optimal information. However, sufficient information might be gathered from other senses to approximate an approaching predator's velocity vector and form a reasonable distance estimate.

The problem statement advises that Velociraptor can turn with a $1.5\mathrm{m}$ radius, and that Thescelosaurus can turn with a $0.5\mathrm{m}$ radius, while running at full speed. It is highly unlikely that these radii were achievable by either animal. At 60 $\mathrm{km / hr}$, Velociraptor would experience a centripetal acceleration of $185\mathrm{m} / \mathrm{s}^2$, or roughly $20\mathrm{g's}$. Thescelosaurus at $50\mathrm{km / hr}$ would experience $40\mathrm{g's}$. We doubt that the animals could make these turns. A more reasonable acceleration would be $2\mathrm{g's}$, corresponding to radii of $14.2\mathrm{m}$ and $9.8\mathrm{m}$, respectively. We are also somewhat suspicious of the assertion that Thescelosaurus has a shorter turning radius than Velociraptor. Velociraptor's higher muscle-to-mass ratio, smaller size, and longer claws suggest that it should be capable of tighter turns at equivalent speeds; being more compact, it should be able to tolerate higher g-forces as well.
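The g-force arithmetic above is easy to reproduce; a quick sketch (taking $g = 9.81\,\mathrm{m/s^2}$; the function name is ours):

```python
G = 9.81                           # standard gravity, m/s^2

def kmh_to_ms(v):
    """Convert km/hr to m/s."""
    return v * 1000.0 / 3600.0

# centripetal acceleration a = v^2 / r at the stated speeds and radii
a_v = kmh_to_ms(60.0) ** 2 / 1.5   # velociraptor: 1.5 m turn at 60 km/hr
a_t = kmh_to_ms(50.0) ** 2 / 0.5   # thescelosaurus: 0.5 m turn at 50 km/hr

# radii that would keep the same speeds at a more plausible 2 g
r_v = kmh_to_ms(60.0) ** 2 / (2 * G)
r_t = kmh_to_ms(50.0) ** 2 / (2 * G)
```

This yields about 185 m/s² (19 g) and 386 m/s² (39 g) for the stated turns, and 2-g radii of roughly 14.2 m and 9.8 m, matching the figures quoted in the text.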
Our model ignores the direct impact of terrain, for several reasons:

- We parameterize velocity, acceleration, and maximum duration of maximum speed for both predator and prey, so it would be easy to accommodate in our model the adverse impact of terrain on either animal.
- We are not confident that we can estimate the impact of terrain on the relative velocity of the two animals. We have not found sufficient information on their foot structures to judge their abilities to pass through non-ideal terrain.
- We are not sure what terrain is appropriate for an encounter. Mongolian terrain of the Cretaceous period is close to ideal for optimal speed, whereas North American terrain could vary significantly.

We have assumed that random movements (left or right) are captured in a probability function. This seems a reasonable guess, though the literature indicates that mammals favor a particular direction. This is a rather important issue, since the predator would presumably learn the prey's pattern, anticipate any favored turn, and improve its success rate significantly.

We parameterize acceleration as the maximum acceleration that the animal would tolerate in any direction, though we could assign different values for linear acceleration and centripetal acceleration, or separately limit positive and negative acceleration (starting up and stopping, respectively). We also assume that acceleration is constant at this maximum rate, and that changes in speed are made prior to initiating a turn. Since the optimal turning radius is achieved by decreasing speed prior to turning, this assumption is beneficial to both predator and prey. However, it is somewhat unlikely that an animal would slow down, turn tightly, then accelerate to full speed, rather than begin the turn at full speed and decrease linear velocity while turning.

We make no attempt to quantify the costs to each animal of the pursuit game.
For example, since the prey dies if it loses, it would be reasonable to assume that the prey would adopt a risk-averse strategy until capture is imminent, followed by a "try anything" strategy, including attacking the predator at the last minute. The predator, on the other hand, can make multiple attempts at the game, so it would be reasonable for it to attempt a high-risk maneuver with a reasonable probabilistic success rate, since failure implies only a delay in lunch rather than death.

For flexibility, we parameterize several variables, such as reaction time, attack success probability, and attack radius.

In the multiple-predator model, we focus on the two-predator game. Two predators are virtually certain to win; adding more just doesn't seem fair.

We do not consider multiple-prey models, since a single predator is not going to attempt to capture multiple prey, and multiple predators vs. multiple prey ultimately resolves into multiple instances of "two or greater vs. one" or "one vs. one."

We ignore search costs, encounter rates, and stealth strategies, except to note that the predator benefits from minimizing the distance prior to beginning the chase and the prey benefits from maximizing the detection distance. We include a parameter for the detection distance in our model, and we compute the maximum distance at which the predator can expect success.

Finally, we make no attempt to deal with visibility, weather, presence of other animals, presence of other prey, obstacles, obstructions, boundaries, bodies of water, or other possibilities. Mathematically speaking, these are relatively minor omissions. Boundary conditions limit the regions of safety and danger and as such introduce variations to the probability of capture, but these calculations are fairly simple. Obstacles tend to favor the predator, since its reaction time will be longer and it will have seen the prey's response to the obstacles.
This is somewhat equivalent to a shortening of the relative distance between the two animals, which can be easily accommodated in our model. Water, weather, and similar environmental considerations favor the animal with superior physical adaptations, and we are not clear as to which animal that is under each of the possible conditions. Again, the impact is most likely to be a change in relative velocities, a condition our model can accommodate.

# Model 1: Single Prey, Single Predator

We consider the optimal strategy for a single predator and a single prey. We initially ignore stalking activity by the predator. At some time $t_i$ and location $d_i$ , the prey becomes aware of the presence of the predator and flees. The speeds of the predator and prey are $v_v$ and $v_t$ , the maximum durations over which the predator and prey can maintain their maximum speeds are $M_v$ and $M_t$ , and their minimum turning radii are $r_v$ and $r_t$ .

At any time during the chase, the prey has the option of changing direction, subject to the minimum turning radius. If the distance between the predator and the prey is sufficiently large—specifically, if the predator is capable of adjusting its approach trajectory to intercept all points on any circular path taken by the prey—it is never prudent for the prey to make such a direction change, since such a path would decrease the net distance between the two animals without increasing the chance of escape.

If, as shown in Figure 1, the predator can reach the point where its minimum turning circle touches the circle representing the prey's minimum turning radius prior to the prey arriving at that point, then the prey has made a highly unfavorable move. As long as this condition persists, the prey can maintain the maximum distance between itself and the predator by fleeing along a linear path directly away from the predator.

Provided that the minimum turning radius of the prey is smaller than the minimum turning radius of the predator and the prey can execute a turn whose path crosses the intersection before the predator's does, then the prey can exploit this advantage by executing a minimum-radius turn to escape from the danger area [Howland 1974, 334-335]. Such escape is a temporary solution, since the predator will eventually adjust its approach vector and resume the chase. However, if the prey is capable of executing these maneuvers for a sufficiently long period of time, then the predator may have to abandon the chase. Therefore, the optimal strategy for the prey is to run directly away from the predator until the distance decreases to the point where the turning gambit is effective.

![](images/0fa3250430572f848ac48ebf24b787af966a474035305ea8199ced5ea3981631.jpg)
Figure 1. Ineffective turning region.

Both the predator and prey are assumed to be traveling at extremely high speeds. In the problem statement, the maximum speeds of predator and prey are $60\ \mathrm{km/hr}$ and $50\ \mathrm{km/hr}$ , with minimum turning radii $1.5\ \mathrm{m}$ and $0.5\ \mathrm{m}$ . The equation for centripetal acceleration is

$$
a = \frac {v ^ {2}}{r}.
$$

Therefore, the centripetal accelerations of the predator and prey, given their maximum speeds and turning radii, are

$$
\begin{array}{l} a_{v} = \frac{\left(\frac{60\,\mathrm{km}}{\mathrm{hr}} \cdot \frac{1000\,\mathrm{m}}{1\,\mathrm{km}} \cdot \frac{1\,\mathrm{hr}}{3600\,\mathrm{s}}\right)^{2}}{1.5\,\mathrm{m}} = 185.2\ \mathrm{m/sec^{2}} = 18.9\ \mathrm{g's}, \\ a_{t} = \frac{\left(\frac{50\,\mathrm{km}}{\mathrm{hr}} \cdot \frac{1000\,\mathrm{m}}{1\,\mathrm{km}} \cdot \frac{1\,\mathrm{hr}}{3600\,\mathrm{s}}\right)^{2}}{0.5\,\mathrm{m}} = 385.8\ \mathrm{m/sec^{2}} = 39.4\ \mathrm{g's}.
\\ \end{array}
$$

These are not reasonable acceleration rates. Either the turning radii must be significantly larger, or else the animals must decelerate prior to entering the turns. We define $G_{v}$ and $G_{t}$ as the maximum number of g's that the animals can tolerate, either as linear or centripetal acceleration. We estimate 2.0 to be a reasonable value for both of these constants, which results in turning radii of $14.2\ \mathrm{m}$ for the predator and $9.8\ \mathrm{m}$ for the prey, or else speeds of $5.4\ \mathrm{m/sec}$ for the predator and $3.1\ \mathrm{m/sec}$ for the prey.

We include a reaction time and a deceleration period during which the animal adjusts its velocity to achieve its minimum turning radius. The ratio of the turning radii is more relevant than their actual values, since this ratio determines whether the prey will successfully reach the safe area. We therefore normalize the radii relative to the radius of the predator's minimum turn. Following the example of Howland [1974], we normalize the speeds in a similar manner. Therefore the predator's speed is arbitrarily set to 1, as is the predator's radius. The prey's speed is set to $v_{t} / v_{v}$ , and the prey's radius is set to $r_t / r_v$ . To create a parametric equation in dimensionless units, we normalize time, $x$ , and $y$ as follows:

$$
t = \frac {T v _ {v}}{r _ {v}}, \qquad x _ {v} = X _ {v} / r _ {v}, \qquad y _ {v} = Y _ {v} / r _ {v}.
$$

We define the starting point of the turning gambit to be $T_0 = 0$ , which implies that $t = 0$ at the beginning of the maneuver. The total time of the chase is the sum of the time spent in the linear chase, plus the time spent in the maneuver, plus the time spent following the maneuver, assuming that it is successful. In Figure 2, we label the four critical time events.

![](images/953406c6a83433d2b074b139f8d42f5a2e1e6b89fb51d7f254709d3982112332.jpg)
Figure 2. Turning gambit.

At $T_{0-3}$ , the prey begins to decelerate.
At $T_{0-2}$ , the prey enters the turn. At this point the predator recognizes the turn and experiences a reaction period. At $T_{0-1}$ , the predator begins to decelerate while the prey continues on the turn. At $T_{0}$ , the predator begins its turn. The gambit is successful if the prey is able to reach the intersection prior to the predator reaching that point. Since the interval between $T_{0-3}$ and $T_{0}$ is easily calculable, and the distance traveled by the predator is along a straight path, we calculate this time and distance separately:

$$
\left[ T _ {0} - T _ {0 - 3} \right] \mathbf {s}, \qquad \left[ (T _ {0 - 1} - T _ {0 - 3}) V _ {v} + (T _ {0} - T _ {0 - 1}) V _ {v} - \frac {1}{2} G _ {v} g (T _ {0} - T _ {0 - 1}) ^ {2} \right] \mathbf {m},
$$

and begin our calculations for the turning gambit at point $T_0$ . (The last term is negative because the predator is decelerating over the interval from $T_{0-1}$ to $T_{0}$ .) The equations for the two arcs are as follows:

$$
x _ {v} = \sin t, \qquad y _ {v} = 1 - \cos t;
$$

$$
x _ {t} = x _ {0} + r \sin \left(\frac {v [ t + (t _ {0} - t _ {0 - 3}) ]}{r}\right), \qquad y _ {t} = r - r \cos \left(\frac {v [ t + (t _ {0} - t _ {0 - 3}) ]}{r}\right).
$$

The prey reaches the safe area if it arrives at the intersection of the two arcs prior to the arrival of the predator. The intersection occurs where $x_{v} = x_{t}$ and $y_{v} = y_{t}$ . These values can be computed using the bisection method, or alternatively by Newton's method, since the predator's path is limited to the first quadrant. Using a relative speed of 0.33, a combined normalized reaction and deceleration time of 0.2 (which corresponds to a real time significantly less than 0.2 s, so in essence this is reaction time only, with no deceleration), and a normalized $x_{0}$ value of 1.06, intersection occurs at (0.9, 0.66), and both animals arrive simultaneously. Thus, this starting point was a poor choice for the prey.

Once we have solved the intersection problem, it is easy to find the minimum distance required for the gambit to be successful.
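The bisection computation is easy to sketch numerically. The following Python fragment is ours, not the authors' code (`R`, `X0`, and the function names are our choices): it locates where the predator's normalized unit-radius arc crosses the prey's turning circle; the paper's timing bookkeeping (head start, reaction time) then decides which animal arrives at that point first.

```python
import math

# Normalized geometry from the text: the predator's turn is the unit circle
# through the origin (center (0, 1)); the prey's turn is a circle of radius
# R through (X0, 0) (center (X0, R)).  R and X0 are the example values above.
R, X0 = 0.33, 1.06

def f(t):
    """Squared distance from the predator's position at parameter t to the
    prey's turn center, minus R^2; a root marks a crossing of the arcs."""
    px, py = math.sin(t), 1.0 - math.cos(t)
    return (px - X0) ** 2 + (py - R) ** 2 - R ** 2

def bisect(lo, hi, tol=1e-10):
    """Plain bisection; assumes f(lo) and f(hi) bracket a root."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if f(lo) * f(mid) <= 0 else (mid, hi)
    return 0.5 * (lo + hi)

# Scan the predator's arc for a sign change, then refine the bracket.
ts = [k * math.pi / 200 for k in range(201)]
a, b = next((u, w) for u, w in zip(ts, ts[1:]) if f(u) * f(w) <= 0)
t_star = bisect(a, b)
print(f"arc crossing at t = {t_star:.3f}, "
      f"point = ({math.sin(t_star):.2f}, {1 - math.cos(t_star):.2f})")
```

The same scan-then-bisect pattern extends to sweeping $x_0$ to find the minimum distance at which the gambit succeeds.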
If detection has not occurred outside of this range, then the prey's gambit will fail and the predator will win. An effective stealth strategy is therefore beneficial to the predator. + +The intersection calculated above, which reflects the assumptions in the problem statement, represents the critical point: Any $x_0$ less than 1.06 that does not permit the predator to catch the prey on the predator's linear path will result in escape. Assuming a 1.5 m radius for the predator, this represents a negligible elapsed time; but if we assume a more reasonable radius of 14.2 m, this corresponds to 23.9 m, or roughly 1.5 s. During this time, the predator has completed roughly one-fourth of a circle, so an additional 3.0 s will expire prior to its re-establishing a linear vector with respect to the prey. So if the prey bolted when the predator was 30 m or farther away, and $M_v$ is 15 s or less, then the predator will have failed to capture the prey and the game is over. + +In fact, with a relative radius of 0.33, the prey can repeat the winning strategy indefinitely, regardless of the actions of the predator, assuming that the predator reacts to the prey's maneuvers. Therefore, we would recommend that a single predator attempt to anticipate the optimal distance for a turning maneuver, guess the direction, and turn preemptively. If the predator notices that the prey has not turned after the beginning of the predator's preemptive move, then the predator should change to the opposite direction. This will force the prey to turn in the revised direction, decreasing the length of the arc prior to intersection with the predator's path. 
This results in a probability function for the predator of

$$
\frac {\left(t _ {0} - t _ {0 - 1}\right) + \left(t _ {0 - 2} - t _ {0 - 3}\right)}{t _ {0} - t _ {0 - 3}} \cdot P [ \text {successful hunt} ].
$$

# Model 2: Multiple Predators, Single Prey

Our first objective is to define a distance function representing the prey's possible destinations, given a finite escape window. We assume that the prey can either continue in the forward direction at its maximum speed, or make a turn with a radius no smaller than its minimum turning radius, or come to a full stop and resume in any direction. Although we account for acceleration and deceleration time when the animals change direction, we assume that their deceleration time going into a curve and their acceleration times coming out of curves are instantaneous. This somewhat overstates the available forward region; but it overstates it for both predator and prey, and it doesn't change the characteristics of the safety and danger regions (as we will show). So we believe that it will result in a slight understatement of the "double danger" region, and an even smaller impact upon the overall probability function.

As we discussed in the one-on-one model, a tighter turning radius implies a lower speed, given a limiting acceleration rate. In Figure 3, the region that can be reached in a finite time period, in the forward direction, is the area marked "A".

![](images/1e07aa4e2e94b90ac29a537d07718d4fd627b3a347199e34839ecdcd0871f203.jpg)
Figure 3. Accessible area.

The area marked "B" represents the reachable area if the animal comes to a complete stop, turns $180^{\circ}$ , then resumes travel in the reverse direction and remains subject to the minimum turning radius.
In fact, the animal could resume travel in any direction; however, given the reality of high-speed pursuit, it likely will resume maximum speed as quickly as possible, so the area inscribed by the figure is accurate, although the direction may not be. The areas of regions A and B are described by the same equation, although the time value for region B is smaller, since the deceleration time must be deducted. The equations for determining these areas are

$$
r = \frac {V ^ {2}}{a}, \qquad \theta = \frac {t v}{r} = t \sqrt {\frac {a}{r}},
$$

$$
x = r (1 - \cos \theta), \qquad y = r \sin \theta ,
$$

$$
\operatorname {Area} (A) = 2 \int_ {V ^ {2} / M} ^ {\infty} r \sin \left(t \sqrt {\frac {a}{r}}\right) d r - 2 \int_ {0} ^ {D} \sqrt {\left(\frac {V ^ {2}}{M}\right) ^ {2} - x ^ {2}}\, d x,
$$

where

$$
D = \left[ 1 - \cos \left(t \sqrt {\frac {a}{V ^ {2} / M}}\right) \right] \frac {V ^ {2}}{M}.
$$

The calculation for $\operatorname{Area}(B)$ uses the same equation, except that the time value $t$ is replaced by $(t - t_b)$ , where $t_b$ is the total time required to decelerate, stop, and accelerate to full speed in the new direction.

Given our area equation, we now have the necessary tools to develop an optimal chase strategy. The probability of a successful hunt is

$$
\begin{aligned}
P[\text{successful hunt}] = {} & \left[ \operatorname{Overlap}(\text{Prey}, \text{Predator1}) + \operatorname{Overlap}(\text{Prey}, \text{Predator2}) \right. \\
& \left. {} - 2 \operatorname{Overlap}(\text{Predator1}, \text{Predator2}) \right] \times P[\text{successful hunt with single predator}] \\
& {} + \operatorname{Overlap}(\text{Predator1}, \text{Predator2}) \times P[\text{success of two predators}],
\end{aligned}
$$

where $\operatorname{Overlap}(A, B)$ is a function computing the area of the overlapping region between areas $A$ and $B$ . This function can be constructed as a combination of two iterations of the same area integral used previously, with adjustment for the relative positions and orientations of the two areas.
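An Overlap function of this kind can also be approximated by Monte Carlo sampling rather than by nested integrals. The following Python sketch is ours, not the authors': it uses circular discs as stand-ins for the mushroom-shaped accessible regions, so that the estimate can be checked against the exact circle-lens area.

```python
import math, random

random.seed(1)  # deterministic estimate

def overlap(c1, r1, c2, r2, samples=200_000):
    """Monte Carlo estimate of the area shared by two discs, standing in
    for Overlap(A, B); sampling is over a bounding box of the first disc."""
    hits = 0
    for _ in range(samples):
        x = c1[0] + (2 * random.random() - 1) * r1
        y = c1[1] + (2 * random.random() - 1) * r1
        if ((x - c1[0]) ** 2 + (y - c1[1]) ** 2 <= r1 ** 2
                and (x - c2[0]) ** 2 + (y - c2[1]) ** 2 <= r2 ** 2):
            hits += 1
    return hits / samples * (2 * r1) ** 2

# Sanity check on two unit discs with centers 1 apart; the exact lens
# area is 2*acos(d/2) - (d/2)*sqrt(4 - d^2) for unit radii.
d = 1.0
exact = 2 * math.acos(d / 2) - (d / 2) * math.sqrt(4 - d * d)
est = overlap((0.0, 0.0), 1.0, (d, 0.0), 1.0)
print(est, exact)  # compare the estimate with the exact lens area
```

Replacing the disc membership tests with membership tests for the parametric A and B regions gives the Overlap needed by the probability function, at the cost of more samples for comparable accuracy.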
We have not constructed this function during this project.

Given our probability function, and the existence of an Overlap function, we can numerically solve for the optimal displacement and orientation vectors that maximize the value of $P[\text{successful hunt}]$ . With reasonable values of the two success functions, we would expect a strategy of converging attack, with one predator remaining sufficiently behind, such that the probability of capturing the prey throughout its entire region is at least $P[\text{successful hunt with a single predator}]$ , as shown in Figure 4.

![](images/74e95cbd6ddc5e8fbf95e048c84b3f0704b12fe094a26ae7859227b67f6116e3.jpg)
Figure 4. Optimal attack vectors.

Note that we have drawn the accessible regions considerably smaller than their actual size, to illustrate the relationships without the clutter of overlapping lines and curves. Any strategy in which the predators are parallel to the $x$ -axis is inferior, since at some point in the chase the $B$ region of the prey will be covered by neither predator's region. This in turn gives rise to an effective prey strategy. It is in the prey's interest to have such an uncovered region exist. Therefore, the prey's best strategy is to alter course in the direction of the trailing predator, in an attempt to achieve an escape vector normal to the line between the two predators. If the prey is able to get directly between the two predators, with a direction vector directly towards one of the two, it can employ the strategy outlined in the first model, effectively reducing the problem to a single-predator problem. These simple assertions address all of the possible cases of the two-on-one problem.

Adding additional predators increases the probability of a successful hunt and effectively eliminates the prey's linearization strategy, since getting three or more predators collinear is more difficult than getting two collinear.
Whether this total probability is superior to the case where two or more separate games (of one-one or two-one) are conducted concurrently depends on the probability values assigned to each function. With the exception of assuring that all predators avoid collinearity, the strategy employed by each predator in a group, except for the lead predator, is identical. This results in the prey being coerced into a spiral path with decreasing predator-to-prey distances. In any case of two or more predators, the prey is in a very poor situation. + +# Limitations of the Model + +As with most mathematical models, our model has a number of limitations. + +- Our model deals with a one-time chase and gives no indication of the result of a second encounter between the same prey and the same predator. We have not explored the learning that each animal undergoes between chases. +- Our model deals solely with the optimum distance at which the prey should initiate a turning strategy and does not take into account the scenario of detection of the predator by the prey. If the distance at which the prey first detects the predator is smaller than the optimal distance at which the prey should employ its turning maneuver, then the prey will not escape successfully. +- Although the probability functions that assign probabilities of a successful hunt to various areas of overlap exist, these functions may be difficult to derive explicitly. +- Our models do not rely extensively on the actual biological or mechanical aspects of the dinosaurs themselves, and this limitation is probably the most difficult one to overcome, since the only data regarding these dinosaurs are found in fossils. + +- We have limited knowledge of the Velociraptor's hunting habits. We are uncertain whether they are search-oriented creatures (as we have assumed from the outset) or whether they wait and hide for their prey. +- We have treated the dinosaurs as point masses. 
In reality, even if their trajectories do not cross at exactly the same time, it is still possible that the two dinosaurs come sufficiently close that the predator can reach the prey, or the predator may otherwise jump out to capture the prey. + +# Discussion of Alternatives + +Our model addresses the most obvious variable characteristics of a predator-prey relationship but it does so in an idealized manner. We assume that the prey and the predator are making relatively informed decisions and that they are making rational choices based on perceived information. While we are confident that we in fact do model the behavior of the animals, we are uncomfortable with the premise that the animals are solving optimization problems while running for their lives. In this section, we discuss a method by which the predator and prey can essentially choose the expected actions but that does not require the assumption of advanced cognition. Specifically, we are searching for general strategies that optimize success in all of the various $n \times m$ models while minimizing both the information-gathering and cognitive-processing requirements. + +Our method uses genetic programming to develop both the prey and predator models. The essence of this method is the generation of "chromosomes" that contain "genes" in a manner that selects for fitness. In this context, the chromosomes represent entire programs and the genes represent individual program steps. Given their differing objectives, the predators and prey have different genes and chromosomes. The process of genetic programming is to generate randomly several hundred "individuals," to test each individual's fitness, then to select the most fit individuals for reproduction. + +Reproduction involves copying the individuals in the culled pool and randomly applying certain mutations. 
At the end of the reproduction stage, the new pool of individuals is again tested; and the process is repeated until the pool consists exclusively of fit individuals.

It is fairly easy to develop a predator model that can always converge on a stationary prey or set of prey. Koza [1992] describes two genetic programs relating to ant behavior. The first involves a trail of food with an objective of passing over each item in a limited amount of time [1992, 251-257]. Using an extremely limited function set consisting of

- terminal actions Move, Right, and Left;
- decision functions If-food-present-do-NEXT-else-do-subsequent, Do-two-actions-sequentially, and Do-three-actions-sequentially;
- 20 genes,

it took only 21 generations to evolve a program that successfully located 89 of 89 food objects. Expanding the function set to a total of nine operations, including Drop-pheromone and Move-to-adjacent-pheromone, with 47 genes, resulted in a single program produced by generation 38 that could be executed by each member of an ant colony, with the result that the colony was able to locate and transport 144 food objects from two locations within a limited time period [1992, 310-317].

We developed a set of primitives for both the prey and predator programs that could be used to evolve effective strategies genetically. The requirement of adapting to the presence of one or more additional predators makes both the prey and predator programs significantly more complex than Koza's ant functions, but they are entirely reasonable as possibilities for higher animals. Thus, we would expect to need a chromosome with roughly 1,000 genes, and we would expect a minimum of several hundred generations before an adequate program evolved. If dinosaur cognition is primarily instinctive, as is the case for ants, one might argue that the predators are unlikely to survive a learning process of such a duration.
On the other hand, if the dinosaurs learn their hunting strategies, then an individual need only participate in a few hundred hunts to master or develop a successful technique. The prey do not get the opportunity to learn from their mistakes. Thus, we would suspect that their responses need to be more instinctual. We would therefore argue that necessary conditions for this mechanism to be adopted by both prey and predator would be $r$ -favored reproduction by the prey and relatively high intelligence in the case of the predators. These traits are consistent with current theory regarding both animals. + +# References + +American Museum of Natural History. 1997. Information pages at www.amnh.org. +Encyclopaedia Britannica. 1997. Encyclopaedia Britannica CD 97. Chicago, IL: Encyclopaedia Britannica. +Howland, Howard C. 1974. Optimal strategies for predator avoidance: The relative importance of speed and maneuverability. Journal of Theoretical Biology 47: 333-350. +Koza, John R. 1992. The genetic programming paradigm: Genetically breeding populations of computer programs to solve problems. In Dynamic, Genetic and Chaotic Programming, edited by Branko Soucek, 203-321. New York: Wiley. + +# Lunch on the Run + +Gordon Bower + +Orion Lawler + +James Long + +University of Alaska Fairbanks + +Fairbanks,AK 99775 + +Advisor: John P. Lambert + +# Introduction + +We devised two different models for this pursuit: a purely mathematical model, and a computer model. Both models use the following assumptions: + +- All animals are represented as points at their respective centers of mass. +- The simulations and chase both begin when the predator, slowly sneaking up on the prey, is spotted by the prey. +- The chase lasts 15 s or until the prey is killed, whichever occurs first. +- No animal may travel at more than its specified maximum speed. +- Each time that the distance from the prey to the predator is at a local minimum, the prey takes a chance of being killed. 
Table 1 summarizes the notation used in the paper.

# Introduction to the Mathematical Model

The mathematical model also assumes that it takes negligible time to start and stop moving but that the turning radius imposes a maximum angular velocity. This assumption makes the analysis possible without a computer but neglects the finite acceleration capability of real animals. This model can analyze only the one-predator/one-prey situation; it works quickly and well.

We analyze two predator strategies and one prey strategy. The prey strategy is to run directly away from the predator until the predator gets closer than some critical distance, then make a sharp turn to the left or right. The prey can just squeak by the predator, whose larger turning radius makes it lose a few meters. The predator strategies considered were the hungry predator, which always heads straight for the current location of the prey if possible, and the maximal-turning predator, which uses knowledge of the prey's strategy to turn more sharply than the hungry predator in an effort to cut off the prey.

We calculate the probability of the prey's survival, given parameters about the predator and prey paths and the distribution of initial separations.

# Introduction to the Computer Model

We use an iterative algorithm to determine where the predators and prey go and which prey survive. By running several thousand 15-s scenarios, we determined how survival rates change depending on predator strategy, prey strategy, initial separation, the elevation and terrain, animal reaction times, and so on.

The computer model supports $n$ predators chasing $m$ prey across any terrain. It takes into account acceleration/deceleration, reaction times, two predator strategies, and three prey strategies. Written in C++ and running on a PowerPC 604-based 132-MHz workstation, the program can calculate 600 15-s scenarios using a 1-ms timestep in 1 min.
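The inner loop of such a simulation is short. Below is a stripped-down Python sketch (not the authors' C++ program) of one scenario: a hungry predator chasing a single prey that flees straight ahead, with each animal's speed capped and the predator's heading change capped at $v/r$ per second.

```python
import math

DT = 0.001            # 1-ms timestep, as in the runs described above
T_MAX = 15.0          # maximum chase duration, s
V1, V2 = 50/3, 125/9  # predator and prey top speeds, m/s
W1 = V1 / 1.5         # predator's max angular velocity v/r, rad/s

def chase(h0):
    """Closest approach during a 15-s chase starting h0 m apart:
    hungry predator vs. prey fleeing straight ahead on flat ground.
    A sketch of the simulation's inner loop, not the authors' code."""
    x1, y1, th1 = 0.0, 0.0, 0.0   # predator at origin, facing +x
    x2, y2, th2 = h0, 0.0, 0.0    # prey directly ahead, fleeing along +x
    closest = h0
    t = 0.0
    while t < T_MAX:
        # Steer the predator toward the prey, capped at W1 rad/s.
        want = math.atan2(y2 - y1, x2 - x1)
        err = (want - th1 + math.pi) % (2 * math.pi) - math.pi
        th1 += max(-W1 * DT, min(W1 * DT, err))
        x1 += V1 * math.cos(th1) * DT
        y1 += V1 * math.sin(th1) * DT
        x2 += V2 * math.cos(th2) * DT
        y2 += V2 * math.sin(th2) * DT
        closest = min(closest, math.hypot(x2 - x1, y2 - y1))
        t += DT
    return closest

# Straight-line case: separation shrinks at 25/9 m/s, so about 50 - 125/3 m.
print(chase(50.0))
```

The full model adds reaction times, finite acceleration, terrain, the other strategies, and a kill check at each local minimum of the separation.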
# Non-Chasing Phases of the Hunt

# Locating Prey

Thescelosaurus is similar in size to Velociraptor, so a single kill would provide the predator with sufficient meat for several meals; Velociraptor need bring down only one victim every few days. Assuming that each predator-prey interaction is independent, the number of chases required to kill one animal has a geometric distribution with parameter $1 - P$ , the probability of a kill on one chase. Thus the expected number of chases per kill is $1 / (1 - P)$ , with variance $P / (1 - P)^2$ . Assuming a typical value of $P = 0.3$ , the predator usually succeeds on the first or second attempt. More than four chases will be required only $1\%$ of the time.

# Stalking

A predator, having discovered prey as yet unaware of the predator's presence, normally attempts to approach the prey stealthily, in an effort to reduce the length of the inevitable high-speed chase. Likewise, the prey normally attempts to be vigilant, making it difficult for the predator to approach unnoticed.

Table 1. Notation.

| Symbol | Meaning |
| --- | --- |
| $(x_1, y_1)$ | location of the predator, *Velociraptor mongoliensis* |
| $(x_2, y_2)$ | location of the prey, *Thescelosaurus neglectus* |
| $v_1$ | speed of the predator; usually 50/3 m/s, or 60 km/h |
| $v_2$ | speed of the prey; usually 125/9 m/s, or 50 km/h |
| $r_1$ | full-speed turning radius of the predator; usually 3/2 m |
| $r_2$ | full-speed turning radius of the prey; usually 1/2 m |
| $r_v = v_1/(v_1 - v_2)$ | ratio of predator speed to closing speed, i.e., how far the predator must run in a straight-line chase to get 1 m closer to the prey; usually 6 |
| $\theta_1$ | orientation of the predator, in radians |
| $\theta_2$ | orientation of the prey, in radians |
| $t$ | time |
| $t_{\max}$ | maximum chase duration (running time for the predator); usually 15 s |
| $h_I$ | separation of predator and prey at the beginning of the chase |
| $h_B$ | separation of predator and prey when the prey attempts to break away by turning |
| $d_I = r_v(h_I - h_B)$ | distance traveled by predator during initial straight-line pursuit |
| $d_B$ | distance traveled by predator during the prey's breakaway maneuver |
| $d_G$ | distance traveled by predator to regain ground lost during a successful breakaway maneuver by the prey |
| $S_i(d)$ | closest approach of predator and prey during the first $d$ meters traveled by the predator during the $i$th phase of the chase ($i = 1$: initial straight pursuit; $i = 2$: breakaway maneuver; $i = 3$: catch-up pursuit following successful breakaway) |
| $p_S(S)$ | probability that the prey survives one close approach to the predator, during which its closest approach to the predator is $S$ |
| $p_H(h_I)$ | probability that the prey survives a complete attack, given initial separation $h_I$ |
| $f(h_I)$ | probability density function for the initial separation of predator and prey |
| $a, b$ | shape and scale parameters for a gamma distribution |
| $\nu, \omega$ | shape parameters for a beta distribution |
| $P$ | probability of prey surviving a typical attack |

Neither of these behaviors appears explicitly in our models. Instead, our models calculate $p_{H}(h_{I})$ , the conditional probability of the prey's survival, given the separation at the start of the chase, and combine $p_{H}$ with $f(h_{I})$ , the probability density function for the initial separation. We account for the stealth of the predator and the vigilance of the prey by our choice of $f$ . The more attentive the prey, the higher the mean of $f$ ; the more stealthy the predator, the lower the mean. We model $f$ using gamma and beta distributions, whose parameters can be altered easily to reflect conditions.

# Analytic Model for Trajectory Selection

# Model Development

The problem statement does not specify the acceleration capabilities of predator or prey, but it does specify turning radii (which, considering the speeds involved, are perhaps too small). We choose to permit infinite acceleration, i.e., at any given time, each animal may select any speed between zero and its maximum speed. However, to preserve the turning-radius limitations, we also introduce a maximum angular velocity.

The maximum angular velocity is calculated from the maximum speed and minimum turning radius. Given a speed $v$ and radius $r$ , turning $180^{\circ} = \pi$ radians requires traveling $\pi r \, \text{m}$ , which can be done in $\pi r / v \, \text{s}$ . Thus, we obtain

$$
\left| \frac {d \theta}{d t} \right| _ {\mathrm {max}} = \frac {v}{r},
$$

which, using the constraints in the problem, results in maximum angular velocities of $100 / 9$ rad/s for Velociraptor and $250 / 9$ rad/s for Thescelosaurus.

Further, we assume that the predator begins running directly toward the prey and the prey begins running directly away from the predator.

The pursuit and evasion strategies consist of recipes for choosing new angular and linear velocities, given current positions and velocities.
+ +# Strategy for Predator and Prey Far Apart + +If the predator is sufficiently far away, since the predator tires quickly, the prey can escape by simply outrunning the predator. In general, this is possible when $v_{1}t < v_{2}t_{\max} + h_{I}$ , or, rearranging terms, when $h_{I} > (v_{1} - v_{2})t_{\max}$ . In the situation of the problem statement, $(v_{1} - v_{2})t_{\max} = 125 / 3$ m. + +If the predator and prey are closer than $(v_{1} - v_{2})t_{\mathrm{max}}$ meters, but the separation is still large in comparison to the turning radii, then neither predator nor prey gains any benefit from being the first to deviate from a straight path. If the predator turns, it increases the minimum path length required to catch up to prey moving straight ahead. In addition, when the prey sees the predator turn one way, the prey can respond by turning in the opposite direction, further increasing the separation between them. Similarly, if the prey deviates from its straight course, it is inviting the predator to turn in the same direction; the predator's arc will lie inside the prey's, meaning that the prey has unnecessarily helped the predator to catch up. + +# Strategy When Predator and Prey Are Near + +If the prey is close to the predator, a sharp turn by the prey may enable it to reach a "safe zone" that the predator—faster, but handicapped with a wider turning radius—cannot enter without stopping and turning around. + +Since the prey's goal is to reach the "safe zone" while giving the predator a minimum amount of time to respond, the prey should always make a minimum-radius turn. But when is the ideal moment to make the turn? If the prey turns while the separation is still wide, it will run into the waiting jaws of the predator; if it waits too long, it may be within the predator's grasp before + +the turnaround maneuver is complete. 
The answer depends on what strategy the predator uses to respond to the prey's sharp turn, which depends on how much the predator knows about the prey's intentions.

If the predator turns itself around and catches up to the prey again, a second breakaway maneuver may be necessary, then a third, and so on. Even if the idealized "point animals" never touch each other, the breakaway maneuver is not risk-free. We assign the prey a probability of surviving each breakaway, which depends on the closest-approach distance.

By computing $d\left[(x_1 - x_2)^2 + (y_1 - y_2)^2\right] / dt$ and seeing how it depends on $dx_1 / dt$ and $dy_1 / dt$ , we learn that the predator should continue moving forward only if the prey is in front of, rather than behind or directly alongside, the predator. Computationally, this amounts to finding the angle between the predator's course and the azimuth from predator to prey:

$$
\text{Move forward if } \left| \theta_ {1} - \tan^ {- 1} \left(\frac {y _ {2} - y _ {1}}{x _ {2} - x _ {1}}\right) \right| < \frac {\pi}{2},
$$

with the minor caveat that it may be necessary to add $2\pi$ to or subtract $2\pi$ from $\theta_{1}$ to ensure that $\theta_{1}$ and the calculated (normally the principal) inverse tangent differ by $\pi$ or less. This decision procedure tells the predator only whether it should move forward, not how fast; it is not always best for the predator to travel at top speed.

For a predator always running at top speed, it is not particularly difficult to compute the locus of all paths that it can follow in a given time.
For a predator with speed $v$ and turning radius $r$ at the origin, initially moving along the $y$-axis, the region reachable in time $t$ is bounded by three curves: the two circles of radius $r$ centered at $(\pm r, 0)$ and the arc described by the parametric equations
+
+$$
+x = \pm r \left[ 1 - \cos u + \left(\frac{vt}{r} - u\right) \sin u \right], \qquad y = r \left[ \sin u + \left(\frac{vt}{r} - u\right) \cos u \right]
+$$
+
+for $u \in [0, vt/r]$. (Each point of the arc corresponds to a minimum-radius turn through angle $u$ followed by a straight run for the remaining time.) The resulting mushroom-shaped region is shown in Figure 1. The problem is isomorphic to the familiar geometric problem of a swinging rope hanging between two tangent circles and closely related to the swinging rope hanging from the cusp of a cycloid (i.e., a pendulum of uniform period). These problems are considered in various common references, such as Wells [1991].
+
+For the parameter values of the problem statement, the prey can always get outside of this locus by executing a sharp turn, provided the prey is less than about $1.57\mathrm{m}$ away. (We conjecture that the condition is $h_B < \pi /2$, but we did not have the time to prove it.)
+
+# The Hungry Strategy
+
+"Run directly toward the prey, if possible; if not, turn to face the prey, moving in its general direction if possible." This type of strategy is traditionally called a "greedy algorithm"; in the context of this problem, "hungry" seems a more evocative name.
+
+![](images/10408286f89356c67e661d0cdbc1f73a63de32d4391e1685388e961be6f8cacd.jpg)
+Figure 1. Locus of predator's paths 0.2 s after breakaway.
+
+For the predator to select an angular velocity, it needs to know how the angle between its course and the direction to the prey is changing. 
This amounts to calculating the derivative of $\tan^{-1}[(y_2 - y_1) / (x_2 - x_1)]$, which is
+
+$$
+\frac{x_2 - x_1}{(x_2 - x_1)^2 + (y_2 - y_1)^2} \left(\frac{d y_2}{d t} - \frac{d y_1}{d t}\right) - \frac{y_2 - y_1}{(x_2 - x_1)^2 + (y_2 - y_1)^2} \left(\frac{d x_2}{d t} - \frac{d x_1}{d t}\right),
+$$
+
+where
+
+$$
+\frac{d x_i}{d t} = v_i \cos \theta_i, \qquad \frac{d y_i}{d t} = v_i \sin \theta_i.
+$$
+
+Note that all quantities in this formula— $x_1, x_2, y_1, y_2, v_1, v_2, \theta_1$, and $\theta_2$ —are assumed known to both animals at all times. In our basic hungry algorithm, the predator charges ahead at full speed all the time (which turns out to be slightly wasteful in certain situations).
+
+A computer simulation plotted paths resulting from a hungry predator pursuing prey that executed a breakaway maneuver at arbitrary distance $h_B$ from the predator. The ideal moment for the prey to begin turning is when predator and prey are $1.62 \, \text{m}$ apart; this guarantees that the predator and prey never come nearer to one another than $0.88 \, \text{m}$. The predator runs $1.94 \, \text{m}$ before the hungry algorithm demands that it stop. (Note the inefficiency of the predator's course; it was shown above that for $h_B > 1.57 \, \text{m}$, there is a predator path that prevents the prey's escape.)
+
+Following the breakaway maneuver, the prey is standing very close to, but slightly behind, the predator. At this point, the prey's greater maneuverability is sufficient to prevent the predator from turning around to face the prey. Unable to move toward the prey until facing toward it, the predator will come to a halt and rotate in place. We assumed that time spent rotating but not moving forward does not count in the predator's 15-s allotment.
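Both the forward/stop test and the bearing-rate computation are easy to express in code. A minimal sketch (the function names are ours; wrapping the angle difference into $[-\pi, \pi]$ implements the $\pm 2\pi$ caveat mentioned above):

```python
import math

def bearing_to_prey(x1, y1, x2, y2):
    """Azimuth from the predator at (x1, y1) to the prey at (x2, y2)."""
    return math.atan2(y2 - y1, x2 - x1)

def should_move_forward(theta1, x1, y1, x2, y2):
    """Move forward only if the prey is in front: |theta1 - azimuth| < pi/2
    after wrapping the difference into [-pi, pi]."""
    diff = theta1 - bearing_to_prey(x1, y1, x2, y2)
    diff = (diff + math.pi) % (2 * math.pi) - math.pi  # the +/- 2*pi caveat
    return abs(diff) < math.pi / 2

def bearing_rate(x1, y1, x2, y2, v1, theta1, v2, theta2):
    """Time derivative of the predator-to-prey azimuth."""
    dx, dy = x2 - x1, y2 - y1
    dvx = v2 * math.cos(theta2) - v1 * math.cos(theta1)
    dvy = v2 * math.sin(theta2) - v1 * math.sin(theta1)
    return (dx * dvy - dy * dvx) / (dx**2 + dy**2)
```

Note that a prey directly alongside ($|\theta_1 - \text{azimuth}| = \pi/2$ exactly) fails the strict inequality, matching the rule that the predator stops unless the prey is in front.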
+ +Given the predator's angular velocity limit of $100 / 9 \, \text{rad/s}$ , the prey can retreat to a distance of $1.25 \, \text{m}$ from the predator (the greatest radius at which a speed of $125 / 9 \, \text{m/s}$ and an angular velocity of at least $100 / 9 \, \text{rad/s}$ are compatible) and still prevent the predator from turning around. + +The prey neither gains nor loses by remaining at the $1.25\mathrm{m}$ distance; since the prey can't run in a circle forever, eventually it has to make a break for it again. We could not determine analytically the precise distance that the prey could travel before the predator catches up and forces another breakaway attempt. + +If the prey begins to run directly away from the predator instead of circling, the predator has to run about $14\mathrm{m}$ under the strict greedy algorithm to force another breakaway maneuver. By turning in place a while longer, a smarter predator could cut this down to just under $12\mathrm{m}$ . + +Alternatively, the prey could simply begin to run in a straight line. The hungry predator then requires about $15\mathrm{m}$ to catch up. By turning in place longer, the predator can reduce this to $13\mathrm{m}$ . Experimentation indicates that an optimal run-at-full-speed strategy is unlikely to change these numbers significantly. + +The distances $(h_B = 1.62\mathrm{m},d_B = 1.94\mathrm{m},d_G\approx 14\mathrm{m},S(d_B) = 0.88\mathrm{m})$ are used in a statistical procedure, described later, that uses the probability density of $h_I$ to calculate the prey's chance of surviving for $15\mathrm{s}$ . The procedure returned survival probabilities of $P = 0.20$ for a reference beta density, $P = 0.28$ for a reference gamma density, $P = 0.44$ for a modified beta density, and $P = 0.43$ for a modified gamma density. + +A numerical model discussed later implements a similar greedy algorithm. 
The results of that algorithm were qualitatively similar, though the exact distances, turning points, and so on were slightly different.
+
+# The Maximal Turn Strategy
+
+The hungry strategy is a good all-around strategy for a predator that does not know what its prey is likely to do next. However, an intelligent predator might notice that, once the prey has begun its tight circle, it plans to continue circling tightly. This more intelligent predator, rather than aiming at the prey, might respond by turning in the same direction, as sharply as possible, in an effort to cut off the prey's anticipated escape.
+
+Suppose the predator is at the origin, running along the $y$-axis, and the prey is at $(0, h_B)$, also running along the $y$-axis, when the prey initiates a breakaway maneuver. If the predator responds by simultaneously making a minimum-radius turn in the same direction, then the paths of predator and prey are given by the following parametric equations:
+
+$$
+\begin{array}{l}
+x_1 = r_1 \left[ 1 - \cos \left(\frac{v_1 t}{r_1}\right) \right], \qquad y_1 = r_1 \sin \left(\frac{v_1 t}{r_1}\right), \\
+x_2 = r_2 \left[ 1 - \cos \left(\frac{v_2 t}{r_2}\right) \right], \qquad y_2 = h_B + r_2 \sin \left(\frac{v_2 t}{r_2}\right).
+\end{array}
+$$
+
+With the aid of a Mathematica-type application or a simple program, it is straightforward to calculate $\min_t \left\{(x_1 - x_2)^2 + (y_1 - y_2)^2\right\}$, the square of the closest approach of predator to prey during the breakaway. The prey's survival depends on choosing an optimal breakaway moment; that is, the optimal $h_B$ is given by
+
+$$
+\arg\max_{h_B} \, \min_t \left\{ (x_1 - x_2)^2 + (y_1 - y_2)^2 \right\},
+$$
+
+which comes to $h_B = 0.66 \, \text{m}$, $d_B = 1.52 \, \text{m}$, $S(d_B) = 0.25 \, \text{m}$. This indicates that the prey stands a much smaller chance of surviving a single encounter with this more intelligent predator. 
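This optimization is easy to reproduce numerically. A sketch under our own simplifications: we use the problem-statement parameters ($v_1 = 50/3$ m/s, $v_2 = 125/9$ m/s, $r_1 = 1.5$ m, $r_2 = 0.5$ m), limit the scan to the prey's half-turn, and search breakaway separations only up to 1.2 m:

```python
import math

V1, R1 = 50/3, 1.5     # predator speed (m/s) and minimum turning radius (m)
V2, R2 = 125/9, 0.5    # prey

def closest_approach(h_b, steps=2000):
    """Minimum separation while both animals hold same-direction,
    minimum-radius turns, the prey starting h_b ahead; the scan is
    limited to the prey's half-turn (a simplification of ours)."""
    t_end = math.pi * R2 / V2           # time for the prey to turn 180 degrees
    best = h_b
    for i in range(steps + 1):
        t = t_end * i / steps
        x1 = R1 * (1 - math.cos(V1 * t / R1))
        y1 = R1 * math.sin(V1 * t / R1)
        x2 = R2 * (1 - math.cos(V2 * t / R2))
        y2 = h_b + R2 * math.sin(V2 * t / R2)
        best = min(best, math.hypot(x1 - x2, y1 - y2))
    return best

# Grid search for the breakaway separation maximizing the closest approach.
h_best = max((i / 100 for i in range(10, 121)), key=closest_approach)
```

With these restrictions the grid search lands near $h_B \approx 0.66$ m with a closest approach of roughly a quarter meter, in line with the figures quoted above.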
Should it survive one breakaway maneuver, the earlier remarks about circling at $1.25 \, \text{m}$ and attempting to flee apply almost unchanged; we have $d_G \approx 14 \, \text{m}$ for this case also. The calculated survival probabilities were $P = 0.03$ for the reference beta distribution, $P = 0.15$ for the reference gamma distribution, $P = 0.21$ for the modified beta distribution, and $P = 0.29$ for the modified gamma distribution.
+
+# A Refined Trajectory Model
+
+It is sometimes right for the predator not to run at full speed. Neither the hungry strategy nor the maximal-turn strategy is optimal—the predator does better by turning in the same direction as the prey and simultaneously slowing down. This prevents the prey from exploiting the time during which the prey and predator are running in opposite directions on a non-collision course.
+
+Can such a strategy be determined? Doing so requires an algorithm for minimizing travel time given initial position, initial velocity, and final position, subject to the constraints that $v(t) \leq v_{1}$ and $|d\theta_1 / dt| \leq v_1 / r_1$. Then we must determine what point along the prey's circular path is the best destination, with the prey trying to maximize, and the predator trying to minimize, the closest-approach distance. We could neither solve this problem analytically nor implement a good numerical approximation; it is surely unrealistic, then, to expect a Cretaceous dinosaur to calculate the solution in its head instantaneously. This ideal strategy, though of theoretical interest, thus likely would not be usable.
+
+# Flaws of the Analytic Approach
+
+The ability of both dinosaurs to select any speed they wish, that is, to be capable of brief bursts of essentially infinite acceleration, is unrealistic. Neither animal can leap into the chase at full speed or stop on a dime. The predator has to waste some time running in the wrong direction if the prey survives a breakaway. 
The effect is that $d_{G}$ , in a practical sense, will always be in the $20 - 25\mathrm{m}$ range, not $14\mathrm{m}$ as proposed above. The prey's survival probability is correspondingly higher. Table 2 gives $P$ for both the hungry and maximal-turn strategies, recalculated using $d_{G} = 22\mathrm{m}$ . The results agree more closely with the computer simulations than do the earlier numbers for $14\mathrm{m}$ . + +Table 2. Survival rates with revised $d_{G}$ + +
| Distribution | $P$ for hungry strategy | $P$ for maximal-turn strategy |
+| --- | --- | --- |
+| Reference beta | .38 | .03 |
+| Reference gamma | .35 | .15 |
+| Modified beta | .51 | .21 |
+| Modified gamma | .49 | .29 |
+ +# Effect of Changing Minimum Turning Radii + +Extreme forces are involved in the sharp high-speed turns that we have discussed. Doubling the turning radius of each species would halve the force that each animal would need, increasing the realism of the scenario. The shape of all trajectories would remain unchanged but the scale would be doubled. This increase in realism would greatly improve the prey's chances: The closest approaches would be $1.76\mathrm{m}$ and $0.50\mathrm{m}$ , resulting in much lower death rates than the current $0.88\mathrm{m}$ and $0.25\mathrm{m}$ . Similarly, $d_{I}$ would remain unchanged, but $d_{B}$ and $d_{G}$ would be doubled, which would substantially reduce the number of breakaway cycles that prey would have to endure to survive a 15-s chase. Table 3 gives $P$ , recalculated using the doubled values, including $d_{G} = 44\mathrm{m}$ . + +Table 3. Survival rates with doubled turning radii. + +
| Distribution | $P$ for hungry strategy | $P$ for maximal-turn strategy |
+| --- | --- | --- |
+| Reference beta | .84 | .08 |
+| Reference gamma | .94 | .19 |
+| Modified beta | .95 | .27 |
+| Modified gamma | .95 | .33 |
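The claim that doubling both turning radii rescales, but does not reshape, the trajectories can be checked directly on the maximal-turn parametric equations: replacing $(r_1, r_2, h_B, t)$ by $(2r_1, 2r_2, 2h_B, 2t)$ doubles every coordinate, so every separation, including the closest approach, doubles. A minimal sketch (parameter values from the problem statement; the doubling check itself is ours):

```python
import math

V1, V2 = 50/3, 125/9            # speeds are unchanged by the rescaling

def separation(t, r1, r2, h_b):
    """Predator-prey distance on the maximal-turn paths at time t."""
    x1 = r1 * (1 - math.cos(V1 * t / r1)); y1 = r1 * math.sin(V1 * t / r1)
    x2 = r2 * (1 - math.cos(V2 * t / r2)); y2 = h_b + r2 * math.sin(V2 * t / r2)
    return math.hypot(x1 - x2, y1 - y2)

# Doubling radii, initial separation, and time doubles the separation exactly.
for t in (0.02, 0.05, 0.08):
    s1 = separation(t, 1.5, 0.5, 0.66)
    s2 = separation(2 * t, 3.0, 1.0, 1.32)
    assert abs(s2 - 2 * s1) < 1e-12
```

The identity holds because each coordinate has the form $r\,g(vt/r)$: doubling $r$ and $t$ leaves the argument $vt/r$ unchanged and scales the prefactor by 2.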
+
+# Implications
+
+When the prey is forced to pass very close to the predator to break away, its chance of surviving a single breakaway maneuver is very low, and the chance of surviving two or more such turns is virtually nil. In this case, the probability of survival is essentially equal to the probability of spotting the predator at a distance of $125/3$ m (about 42 m) or more. Thus, increased vigilance of the prey would be expected to carry a strong evolutionary reward. On the other hand, if the prey can escape easily during a breakaway cycle (e.g., hungry predator and doubled radii), the predator is exerting minimal selection pressure against inattentive prey.
+
+Comparing our calculated probabilities with a subjective assessment of the survival rates of modern prey, it seems that the hungry strategy with tight turning radii, and the maximal-turn strategy with doubled radii, yield the most plausible results. This seems reasonable: The strategies involved are simple and "obvious" enough that moderately intelligent animals could probably implement them.
+
+# Converting Trajectory Descriptions to Survival Probabilities
+
+# Overview of Procedure
+
+The trajectories produced by all of our models have some common structural features. In particular, the trajectories can be divided into three phases:
+
+- The first phase is a straight-run segment, during which the predator gains on, but is virtually never able to kill, the prey.
+- The second phase is the breakaway maneuver, during which the prey makes its closest approach to the predator and runs a significant risk of being killed.
+- The third phase is a brief period of almost-straight chasing as the predator attempts to reclaim the distance the prey gained via a successful breakaway.
+
+Each trajectory, regardless of the model that produced it, begins in phase one, then alternates between phases two and three for the remainder of its length. 
Within a given trajectory, all complete phase-two episodes will be the same length, as will all complete phase-three episodes. The final segment, during which the predator's allotted time expires, will of course be incomplete.
+
+We assume that the prey has a nonzero chance of being killed each time the distance between predator and prey reaches a minimum. We model the chance of survival of each encounter as a function $p_S$ of the minimum separation. The probability $p_H$ of the prey surviving the entire 15-s encounter depends primarily on the number of close encounters, i.e., the number of times the prey must attempt the breakaway maneuver. This number depends on two things. The first is the length of the phase-two and phase-three episodes. This depends only on the model design; we calculated these lengths for each model we subjected to the statistical procedure. The second factor is the length of the initial sprint. This depends on the optimal breakaway separation $h_B$, which depends on model design, and on the separation at the beginning of the chase, $h_I$. It is thus possible to plot $p_H$ as a function of $h_I$.
+
+The initial separation will be different every time the predator goes hunting. Initial separation is influenced by terrain and vegetation, visibility conditions, the alertness of the prey, the stealth of the predator, and countless other factors. Rather than trying to account explicitly for the effect of each factor, we opt instead to treat $h_I$ as a random variable, the shape of its probability density function $f(h_I)$ being chosen based on these factors. Generally speaking, bare ground, good lighting, and attentive prey cause the mean of $f(h_{I})$ to be high, while obstacles, fog, darkness, or particularly stealthy stalking on the predator's part cause the mean of $f(h_{I})$ to be low. In addition, the problem statement proposed $15\mathrm{m}$ and $50\mathrm{m}$ as minimum and maximum values for $h_{I}$. 
+
+Once a distribution of $h_I$ has been selected, we can use $p_H(h_I)$ and $f(h_I)$ together to determine $P$, the probability that prey survive a 15-s attack, given the conditions specified by model design, the choice of $p_S$, and the choice of $f$.
+
+# Method of Computing $P$
+
+Three numbers and one function, based on the model design, are required for the computation of $P$. The three numbers are
+
+- $h_{B}$, the optimal separation for attempting the breakaway maneuver;
+- $d_B$, the distance traveled by the predator during a breakaway attempt; and
+- $d_{G}$, the distance traveled by the predator while regaining ground lost during a successful breakaway.
+
+The function $S(d)$ is the minimum separation during the first $d$ meters traveled by a predator during a phase. During phase one, the function $S_{1}(d)$ is linear; during phase two, $S_{2}(d)$ decreases rapidly in a complicated way to a minimum value; that minimum value is characteristic of the model design. The definition of $S_{3}(d)$ is slightly different, since phase three begins with the prey close to but behind the predator, out of harm's way. The separation rapidly increases to a maximum, then drops approximately linearly as the predator catches back up to the prey, following an ever-straighter path. We define $S_{3}(d)$ as the minimum separation after maximum separation has already been reached, taking it to be infinite before that time. (Some of the path-determining models assume that, immediately after a successful breakaway, the prey can briefly move about in perfect safety.) For the purposes of computing $p_{H}$ and $P$, $S_{2}$ is the only function of real interest, since the prey has virtually no chance of dying unless $S$ is small.
+
+It is necessary to choose a function $p_S$ that relates the minimum separation to the prey's probability of surviving a close encounter with the predator. 
The choice of this function is arbitrary, subject to some obvious constraints:
+
+- $p_S(0) = 0$: If the predator and prey actually contact each other, the prey will surely be killed. (Actually, one could realistically let $p_S(0)$ be a small positive number; it is plausible that the prey still has a slight chance of surviving a direct assault by the predator.)
+- $p_S(x) \to 1$ as $x \to \infty$: If the predator never comes close to the prey, the prey clearly will survive.
+- $\frac{dp_S(x)}{dx} \geq 0$: It is always safer to be farther away from the predator.
+
+Velociraptor's most potent weapon was the large claw on each foot, and its legs were approximately $0.5\mathrm{m}$ long. Its mouth and forearms posed a significant but much smaller danger to the prey [Sattler 1983]. On the basis of these facts, it seems reasonable that the prey is in great danger if it is within one leg length $(0.5\mathrm{m})$. A distance of two leg lengths $(1\mathrm{m})$ ought to bring safety from the claws but not from the jaws. At distances significantly greater than $1\mathrm{m}$, the danger should be negligible. We decided that $p_S(1) = 0.8$ seemed like a reasonable figure. For general distance $x$, we use
+
+$$
+p_S(x) = \left(1 - \frac{1}{1 + 4x^4}\right) = \frac{4x^4}{1 + 4x^4},
+$$
+
+though any of several other S-shaped functions would serve as well.
+
+Calculating $p_{H}$ as a function of $h_{I}$ is the most complex portion of the computation of $P$. The function is piecewise defined. Letting
+
+$$
+d_I = \left(\frac{v_1}{v_1 - v_2}\right) h_I,
+$$
+
+we treat the three phases of the trajectory separately, saving phase two for last because it is computationally most difficult.
+
+Phase One: If $d_I \geq v_1 t_{\max}$, then $p_H(h_I) = p_S(h_I - (v_1 - v_2) t_{\max})$.
+
+If the prey is able to outrun the predator, then the closest approach of predator and prey occurs at time $t_{\max}$, when the predator is forced to abandon the chase. Unless the prey was about to make its first breakaway attempt, the probability of survival is essentially 1. In Figure 2, this phase produces a long plateau at the right.
+
+Phase Three: If $d_{I} + k(d_{B} + d_{G}) - d_{G} \leq v_{1}t_{\max} \leq d_{I} + k(d_{B} + d_{G})$ for some $k \in \{1,2,3,\ldots\}$, then let
+
+$$
+d^* = v_1 t_{\max} - d_I - k(d_B + d_G) + d_G, \qquad p_H(h_I) = p_S[S_2(d_B)]^k \, p_S[S_3(d^*)].
+$$
+
+This looks like an uglier computation than it is. Unless $h_B$ is very small, $p_S[S_3(d^*)]$, the final term in the product, representing the probability of surviving the beginning of a brief chase, is very close to 1. The first term in the product is simply the probability of surviving a single breakaway maneuver, raised to the $k$th power; each breakaway attempt is viewed as an independent event for the purposes of this calculation. In Figure 2, this phase produces the equally spaced plateaus that occupy most of the left portion of the plot.
+
+Phase Two: If neither of the above is true, then
+
+$$
+d_I + k(d_B + d_G) < v_1 t_{\max} < d_I + k(d_B + d_G) + d_B
+$$
+
+for some $k \in \{0,1,2,3,\ldots\}$; and letting $d^* = v_1 t_{\max} - d_I - k(d_B + d_G)$, we have $p_H(h_I) = p_S[S_2(d_B)]^k \, p_S[S_2(d^*)]$.
+
+![](images/72bf6d03c353db54cf0fd27542738e1ed32ef1a8ffca64a9ad3376496de01d85.jpg)
+Figure 2. Prey survival rate vs. initial separation (mathematical model).
+
+Phase two comes into play if the 15 s expires while the prey is in the act of attempting to break away. There is no way around computing $S_{2}(d^{*})$ this time; $S_{2}$ has to be calculated numerically from a simulation of the phase-two trajectory. 
It makes a great deal of difference to the prey exactly how close the closest approach is: The prey is almost twice as likely to survive at 36 cm as at 30 cm! The term raised to the $k$th power represents the previously completed breakaways, while the last term is the probability of surviving the final, incomplete, breakaway. In Figure 2, this phase produces the S-shaped pieces connecting the plateaus on the left side of the graph.
+
+Figure 2 is typical of most graphs of $p_H$. If $S$ is never ignored, then $p_H(h_I)$ is a continuous monotonically increasing function with domain $[0, \infty)$ and range $[0, 1)$. As $p_S(S_2(d_B))$ becomes small, $p_H \to 0$ for all $h_I < (v_1 - v_2)t_{\max}$.
+
+The conditional probability of survival given initial separation is given by $p_{H}(h_{I})$. Hence, by the law of total probability [Freund 1992], the unconditional probability of survival is given by $\int_0^\infty p_H(h_I)f(h_I)\,dh_I$, which, once $p_{H}$ has been calculated, is extremely easy to evaluate numerically.
+
+For our tests, we use the gamma and beta distributions, two flexible and widely known families. We adapt the descriptions by Evans et al. [1993].
+
+Gamma-distributed variates can take on any positive value. The density function has two parameters, with $\alpha$ controlling the shape and $\beta$ controlling the scale:
+
+$$
+f_{\mathrm{gamma}}(x) = \left(\frac{1}{\beta^{\alpha} \Gamma(\alpha)}\right) x^{\alpha - 1} e^{-x/\beta}.
+$$
+
+The gamma distribution has mean $\alpha \beta$, mode $(\alpha - 1)\beta$, and variance $\alpha \beta^2$. To reflect the conditions of the problem, we selected Gamma(6,5) as our reference gamma distribution; a variate from this density is less than 15 about $8\%$ of the time and greater than 50 about $7\%$ of the time. 
To determine how much it benefits the prey to be alert, thereby increasing the chance of seeing the predator at considerable distance, we repeated our calculations using the Gamma(6,6) distribution.
+
+Beta variates, in their original form, have a domain of $[0,1]$. Multiplying by 35 and then adding 15 produces a beta distribution on $[15,50]$, reflecting the stipulations of the problem statement. The transformed beta density function has two parameters, $\nu$ and $\omega$, which together determine the shape of the density:
+
+$$
+f_{\mathrm{beta}}(x) = \left(\frac{\Gamma(\nu + \omega)}{35 \, \Gamma(\nu) \Gamma(\omega)}\right) \left(\frac{x - 15}{35}\right)^{\nu - 1} \left(\frac{50 - x}{35}\right)^{\omega - 1} \qquad \text{for } 15 \leq x \leq 50,
+$$
+
+with mean $15 + 35\nu /(\nu +\omega)$, mode $15 + 35(\nu -1) / (\nu +\omega -2)$, and variance $1225\nu \omega /[(\nu +\omega)^{2}(\nu +\omega +1)]$.
+
+The ratio of $\nu$ to $\omega$ controls the mean of the distribution, while the variance is inversely proportional to $(\nu + \omega)$. We selected the Beta(2,3) distribution as our reference beta distribution. It has a mean, mode, and central "hump" shape very similar to the reference gamma distribution, but no tails sticking out below 15 and above 50. We also examined the Beta(3,2) distribution, with the same shape but a higher mean and mode, to check the effect of increased prey alertness. Figures 3 and 4 show the four chosen densities. Note, however, that nothing in the calculation procedure limits us to the use of these densities.
+
+![](images/5ad11562254676937db01bf14d5fa288ac95f63b7487b6a6d3e85627a6638110.jpg)
+Figure 3. Reference and modified gamma densities.
+
+![](images/8a80948b5dec30e5f41fd20fd9ebb9a63d752e19462f7f6a6e2af8b6e7181506.jpg)
+Figure 4. Reference and modified beta densities.
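The whole pipeline, $p_S$, the piecewise $p_H$, and the integration against $f(h_I)$, fits in a short script. A sketch using the hungry-strategy figures quoted earlier ($h_B = 1.62$ m, $d_B = 1.94$ m, $d_G = 14$ m, $S_2(d_B) = 0.88$ m); we take $p_S[S_3(d^*)] \approx 1$ as the text suggests and substitute a crude linear interpolation for the phase-two $S_2(d^*)$, which the authors computed from simulation:

```python
import math

V1, V2, T_MAX = 50/3, 125/9, 15.0               # speeds (m/s), chase limit (s)
H_B, D_B, D_G, S_MIN = 1.62, 1.94, 14.0, 0.88   # hungry-strategy model figures

def p_s(x):
    """Survival probability for one close pass at minimum separation x."""
    return 4 * x**4 / (1 + 4 * x**4)

def s2(d):
    """Placeholder for S2(d): linear drop from H_B to S_MIN over a breakaway.
    (The paper computes this curve from a trajectory simulation.)"""
    return H_B + (S_MIN - H_B) * d / D_B

def p_h(h_i):
    """Piecewise survival probability for the whole chase (phases 1-3)."""
    d_i = V1 / (V1 - V2) * h_i        # predator run needed to close the gap
    d_tot = V1 * T_MAX                # 250 m: total ground covered in 15 s
    if d_i >= d_tot:                  # phase one: the prey simply outruns it
        return p_s(h_i - (V1 - V2) * T_MAX)
    q = p_s(S_MIN)                    # survive one complete breakaway
    k, off = divmod(d_tot - d_i, D_B + D_G)
    if off < D_B:                     # time expires mid-breakaway (phase two)
        return q**k * p_s(s2(off))
    return q**(k + 1)                 # phase three, taking p_S[S_3] as 1

def f_gamma(x, a=6.0, b=5.0):
    """Gamma(6,5) reference density for the initial separation."""
    return x**(a - 1) * math.exp(-x / b) / (b**a * math.gamma(a))

# P = integral of p_h(h) f(h) dh, by a plain midpoint sum on [0, 120].
n, top = 6000, 120.0
dh = top / n
P = sum(p_h((i + 0.5) * dh) * f_gamma((i + 0.5) * dh) for i in range(n)) * dh
```

Despite the placeholder $S_2$, this sketch lands within a few points of the paper's $P = 0.28$ for the reference gamma density, and the same `f_gamma` reproduces the quoted tail masses (about 8% below 15 m, 7% above 50 m).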
+
+# The Computer Simulation Model
+
+# Model Overview and Assumptions
+
+The computer model uses physics-based motion, where accelerations are finite, instead of the more theoretical motion of the mathematical model (in which acceleration is not considered). While there is no explicit limit on the turning radius (it isn't even calculated in the simulation), when we restrict our creatures' acceleration to the assumed value, then at top speed the animals can turn in no less than their minimum turning radius. Hence, although the mathematical and computational models' methods differ, the two models give the same answer.
+
+# Computer Model Assumptions
+
+- The predator(s) slowly sneak up on the prey as long as the prey doesn't start to run away. At some distance $h_{I}$ (where $15\mathrm{m} < h_{I} < 50\mathrm{m}$), the prey notices the approach and starts running.
+- Each animal has a maximum speed.
+- Each animal has a maximum acceleration, derived from $a = \frac{v^2}{r}$, where $v$ is the animal's maximum speed and $r$ is its minimum turn radius at top speed. We further assume that this acceleration can be applied in any direction—centripetal, tangential, or a combination thereof.
+- At each instant, each animal figures out in which direction it would like to apply its acceleration, based on its present position and the position of every other animal. However, an animal knows only the position of the other animals $20\mathrm{ms}$ ago (its reaction time).
+- Whenever a predator stops getting closer to the prey (i.e., the predator has made a close pass by the prey), a probability-of-death function of the smallest distance between the predator and prey is evaluated. The prey's probability of death depends only on the closest approach distance.
+- Simulation continues until the prey dies or 15 s elapses.
+
+# Model Design
+
+Our simulation models each animal as a point at its center of mass, which can accelerate in any direction. 
The simulation outputs smooth curves of the paths of the centers of mass of the animals, by iteratively solving these vector differential equations using Euler's method (outlined below):
+
+$$
+\frac{d\vec{P}}{dt} = \vec{V}, \qquad \frac{d\vec{V}}{dt} = \vec{A},
+$$
+
+where $\vec{P},\vec{V}$, and $\vec{A}$ are the position, velocity, and acceleration as functions of time.
+
+The simulation cycle begins by determining the optimal direction for the animal to accelerate, based on the animal's strategy. The animal accelerates at its maximum acceleration in that direction. Values for upper bounds on acceleration are parameters to the simulation, derived from the minimum turn radius at top speed via the central acceleration formula, $a = v^2 / r$.
+
+The acceleration vector is then added to the local elevation gradient multiplied by the acceleration of gravity, making it easier to go down a hill than up. The elevation gradient is determined from a bilinearly interpolated elevation grid, which is read in from an elevation file.
+
+Once the animal has decided on a direction, the simulation applies one step of Euler's method to update its position and velocity vectors. This method of solving differential equations works by noting that the first two terms of the Taylor series expansion of a function of time depend only on the present condition of the system:
+
+$$
+f(t + h) \approx f(t) + h \frac{df}{dt}.
+$$
+
+Hence, if we know an animal's position, velocity, and acceleration at some time $t$, we can figure out a first-order approximation for where the animal will be at time $t + \Delta t$ by substituting the vector equations into the Taylor series expansion:
+
+$$
+\vec{P}(t + \Delta t) = \vec{P}(t) + \Delta t \, \vec{V}(t), \qquad \vec{V}(t + \Delta t) = \vec{V}(t) + \Delta t \, \vec{A}(t).
+$$
+
+Taking $\Delta t = 0.001\mathrm{s}$, and given the initial conditions and an acceleration vector, our simulation computes the velocity and position of each animal at each time step.
+
+# Life and Death in the Computer
+
+In the computer model, each time the prey passes near the predator, a probability-of-death function is evaluated. The formula that we chose for the probability of death, given minimum separation, is
+
+$$
+P_{\mathrm{death}}(x) = \frac{1}{1 + 4x^4},
+$$
+
+whose graph is shown in Figure 5. In the notation of the section Converting Trajectory Descriptions . . . , this function is $(1 - p_{S})$.
+
+![](images/8174b10019a55fd1f97b18938d0a2c77ac448d1fe6db39cd2ef28301729adf60.jpg)
+Figure 5. Probability of death vs. closest approach to predator (in meters).
+
+# Modeled Hunting Strategies
+
+Every orientation of two animals can be simulated by starting the predator at the origin and the prey a distance $h_I$ along the $x$-axis, where $h_I$ varies in this simulation from 15 m to 50 m.
+
+# The Hungry Hunter
+
+A hungry hunter heads straight for the current position of the closest prey. This is the only strategy that we model both analytically and numerically.
+
+# The Smart Hunter
+
+A smart hunter determines the point where it can intercept the closest prey and heads straight for that point. A derivation of the quadratic equation that the smart hunter solves for the intercept point is given in Appendix A. [EDITOR'S NOTE: Omitted for space reasons.]
+
+# Modeled Evasion Strategies (One Predator)
+
+# The Frightened Prey
+
+Frightened prey flee straight away from the nearest predator. These prey always die if the predator can close the distance between them before its 15 s are up. At a closing rate of $2.78\mathrm{m / s}$, the prey will always be overtaken and die if $h_I$, the initial separation of the predator and prey, is less than $41.7\mathrm{m}$. 
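The straight-line case can be reproduced with the Euler scheme just described. A minimal sketch, reduced by us to one predator and one fleeing prey on flat ground, with the stated top speeds and the $a = v^2/r$ acceleration caps:

```python
DT = 0.001                          # Euler time step from the text (s)
V1, A1 = 50/3, (50/3)**2 / 1.5      # predator: top speed and a = v^2/r cap
V2, A2 = 125/9, (125/9)**2 / 0.5    # prey

def chase(h_i, t_max=15.0):
    """Hungry predator chasing frightened prey straight down the x-axis,
    both starting from rest; returns the smallest gap reached in t_max s
    (a non-positive value means the prey was overtaken)."""
    px = pv = 0.0                   # predator position, speed
    qx, qv = h_i, 0.0               # prey
    closest = h_i
    for _ in range(int(t_max / DT)):
        pv = min(V1, pv + A1 * DT)  # Euler step: v <- v + a*dt, capped
        qv = min(V2, qv + A2 * DT)
        px += pv * DT               # x <- x + v*dt
        qx += qv * DT
        closest = min(closest, qx - px)
    return closest
```

With these parameters `chase(15.0)` overtakes the prey well inside 15 s, while `chase(45.0)` never brings the gap below a few meters, consistent with the 41.7-m threshold quoted above.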
+ +# The Smart Prey + +When the nearest predator is far away, smart prey act like frightened prey and flee straight away. But when the predator closes to a critical distance $h_B$ (1.619 m—see Appendix B for derivation [EDITOR'S NOTE: Omitted for space reasons]), the prey darts either to the left or to the right. By using its much smaller turn radius, the prey buys some distance, which is again closed by the predator, whereupon the prey can dart again. + +For the two-animal case, there are two possibilities for what this looks like: smart prey versus hungry predator, and smart prey versus smart predator. As Figures 6-7 show, the prey succeeds in outrunning the predator for $15\mathrm{~s}$ . + +![](images/c9b08b8a473c110ba6d65b73bc0dc259267382e6f97ea96cc2965b51b7c1b03b.jpg) +Figure 6. Smart prey vs. hungry predator, $h_{I} = 15$ m. + +# The Gradient Prey + +Gradient prey are the same as smart prey when the nearest predator is very close (less than $h_B$ ) or when there is only one predator. When there are two or more predators, the gradient prey runs in the direction of least danger, i.e., along the gradient of the danger function. If the danger from each predator decreases with the inverse square of its distance, and the danger from each predator is added to produce the danger function, then the danger gradient can be computed by adding the gradient of the danger from each predator: + +$$ +\bigtriangledown \left(\sum \frac {1}{d _ {i} ^ {2} (x , y)}\right) = \sum \bigtriangledown \left(\frac {1}{d _ {i} ^ {2} (x , y)}\right). +$$ + +This can be done quickly and easily in the simulation. The gradient prey is different from the smart prey only when there are two predators; their graphs and analysis are presented in the next section. + +![](images/8020f55666cc091bbc83cd50b4e7111680ac67c7677500973cdcc1e3107e512f.jpg) +Figure 7. Smart prey vs. smart predator, $h_{I} = 15$ m. 
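The gradient computation is a one-liner per predator: for $d_i^2 = (x - x_i)^2 + (y - y_i)^2$, $\nabla(1/d_i^2) = -2\,(x - x_i,\; y - y_i)/d_i^4$, and the prey flees along the negative of the summed gradient (the direction of steepest danger decrease). A sketch with hypothetical coordinates (the function names are ours):

```python
def danger_gradient(x, y, predators):
    """Gradient of sum(1 / d_i^2) at the prey's position (x, y)."""
    gx = gy = 0.0
    for px, py in predators:
        d2 = (x - px)**2 + (y - py)**2
        gx += -2 * (x - px) / d2**2
        gy += -2 * (y - py) / d2**2
    return gx, gy

def flee_direction(x, y, predators):
    """Unit vector along which the danger function decreases fastest."""
    gx, gy = danger_gradient(x, y, predators)
    norm = (gx**2 + gy**2) ** 0.5
    return -gx / norm, -gy / norm
```

For the initial positions used in the two-predator runs (predators at $(-8, 0)$ and $(0, 0)$, prey at $(15, 0)$), both gradient terms point back toward the predators, so the prey flees straight along the positive $x$-axis.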
# The Two-Predator Situation

We determined that a good strategy for the predators is for the second predator to follow the first one (by about $8\mathrm{m}$), since our prey's strategies hinge on getting behind the first predator.

# Frightened Prey

Because frightened prey always die unless they detect the predator(s) more than $41.7\mathrm{m}$ away, there is no advantage in chasing them with two predators.

# Smart Prey and Gradient Prey

Figures 8-11 illustrate typical trajectories taken by smart and by gradient prey when pursued by smart and hungry predators. Each plot shows $x = 60$ to $90\mathrm{m}$, $y = -30$ to $5\mathrm{m}$. Initial positions are $(-8,0)$ for predator 1, $(0,0)$ for predator 2, and $(15,0)$ for the prey.

![](images/6688944700cd398ef4168fe3786ffe665ae2ac86364395ba39303796bcaa38eb.jpg)
Figure 8. Smart prey, smart predators.

![](images/1974c6865f6387a8171a4cbaf7c562a1fbf1055dbef77cf0e87b16282a349b63.jpg)
Figure 9. Smart prey, hungry predators.

![](images/6d9e50515362f9cc2ce4aac9c63bf2ac50155061d8ab074b0fedda7ced416bef.jpg)
Figure 10. Gradient prey, smart predators.

![](images/d0208be51c99e751391f5826ab24217067fdad830dfa5da0c21d1a457f54c946.jpg)
Figure 11. Gradient prey, hungry predators.

# Computational Results

If the animals started running with separation determined by the reference beta distribution, the estimates of overall survival rates and their standard errors (2,000 runs) are shown in Table 4. Note that the error bars for some strategies overlap; for instance, we cannot be certain that smart prey do significantly better than gradient prey against two hungry predators.

# Prey Strategies

Frightened Prey: Always fleeing the predator unconditionally leads to the lowest survival rates. This is a bad choice.

Smart Prey: The smart prey, which darts to the side when the predator closes to some critical distance behind it, did quite well in the single-predator runs.
This is the best overall evasion strategy for Thescelosaurus. + +Gradient Prey: On two-predator runs, the gradient prey did quite well against the smart predators but more poorly than the smart prey against the hungry ones! + +Table 4. Estimated survival rates. + +
| Prey strategy | One hungry predator | One smart predator |
|---|---|---|
| Frightened prey | 3.2 ± 0.4% | 3.9 ± 0.4% |
| Smart prey | 35.7 ± 1.1% | 35.5 ± 1.1% |
| Gradient prey | 35.7 ± 1.1% | 35.5 ± 1.1% |
| | **Two hungry predators** | **Two smart predators** |
| Frightened prey | 3.2 ± 0.4% | 3.9 ± 0.4% |
| Smart prey | 3.7 ± 0.4% | 4.0 ± 0.4% |
| Gradient prey | 3.5 ± 0.4% | 9.4 ± 0.7% |
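The ± figures in Table 4 are consistent with ordinary binomial standard errors over the 2,000 runs (the paper does not say explicitly that this is how they were computed; the check below is our own sketch):

```python
import math

def survival_se(p, n=2000):
    """Standard error of a survival proportion p estimated from n
    independent Bernoulli runs: sqrt(p*(1-p)/n)."""
    return math.sqrt(p * (1.0 - p) / n)

# e.g. a 3.2% survival rate over 2,000 runs has an SE of about
# 0.4 percentage points, matching the table's "3.2 +/- 0.4%"
```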
# Predator Strategies

Hungry Predator: This predator killed the most prey in both the single-predator and two-predator scenarios; it is never misled by the prey's movements. Our data indicate that this is the best overall hunting strategy for Velociraptor: Always head straight for the prey.

Smart Predator: This predator is too easily misled by the prey's movements and, except when alone against smart prey, is always worse than the hungry predator. It is especially poor in the two-predator scenario against the gradient prey.

# Conclusion

The single most important factor in survival is the distance $h_I$ between the prey and predator at the moment of detection. This is because the most successful prey strategy is to flee the predator directly until the predator closes to another critical distance, $h_B$ (1.6 m), then make a sharp turn. By utilizing its smaller turn radius in this fashion, the prey can get slightly ahead of the predator. The predator rapidly closes this distance, whereupon the prey can pull the same trick again. Each time it does so, however, it takes another chance that the predator will win. Hence the number of breakaway turns is the single most important factor in determining chances of surviving one predator. The chances of survival are very low (< 4%) in the two-predator case when one predator trails the first by 8 m.

Figure 12 reproduces Figure 2, a predicted-probability-of-survival graph produced by the mathematical model, for a smart prey pursued by a hungry hunter. Figure 13 graphs the estimated survival probability from the computer model (50,000 runs, 100 at each $h_I$). The graphs, aside from the noise produced by the random variation of our small sample sizes, are nearly identical.

Using the mathematical model, we found that changing the turn radii affects the sharpness of curvature of the probability graphs but does not diminish the importance of $h_I$.
With the computational model, and with many different hunting and evasion strategies, $h_I$ was the most important factor in every case.

![](images/a2ffb0b8d09f7f7a2d8c6a7e0872ced506509373e28f9cdb34da039336870fe3.jpg)
Figure 12. Prey survival rate vs. initial separation (mathematical model).

![](images/6aa8f03890c94be0049cfabfca2cb4571c49d1eced8e99e8e2fe0a93150b2955.jpg)
Figure 13. Prey survival rate vs. initial separation (computer model).

# References

Evans, Merran, Nicholas Hastings, and Brian Peacock. 1993. Statistical Distributions. 2nd ed. New York: Wiley.
Freund, John E. 1992. Mathematical Statistics. 5th ed. Englewood Cliffs, NJ: Prentice-Hall.
Sattler, Helen R. 1983. The Illustrated Dinosaur Dictionary. New York: Lothrop, Lee, and Shepard.
Swinton, W.E. 1970. The Dinosaurs. New York: Wiley.
Wells, David. 1991. The Dictionary of Curious and Interesting Geometry. New York: Penguin.

# A Three-Phase Model for Predator-Prey Analysis

Lance Finney

Jade Vinson

Derek Zaba

Washington University

1 Brookings Drive, CB 1040

St. Louis, MO 63130

Advisor: Hiro Mukai

# Abstract

We model the hunt as a game of three explicit stages: the stalk, the attack, and the subdual. We implemented this model in MATLAB to simulate a velociraptor hunting a thescelosaurus and an African lion hunting a gazelle.

During the attack phase, also known as the macroscopic game, the speed constraints are binding but the curvature and acceleration constraints are not. The predator's goal for this game is to minimize the time that it takes to intercept the prey. We introduce a two-predator strategy that is space-optimal for both the predator and prey and conjecture that it is also time-optimal.

In the subdual phase, also known as the microscopic game, the time constraints are insignificant. In this game, the predator's goal is to minimize its closest approach to the prey.
The prey's strategy is to use its smaller turning radius to outmaneuver the predator until time runs out. + +Based on our model and simulations, we conclude that the hunting strategies of an African lion and of a velociraptor differ. The lion has a slower maximum speed but has greater acceleration and is more maneuverable than a gazelle. Conversely, the velociraptor has greater speed but less maneuverability than a thescelosaurus. An ambush is the most effective strategy for a pair of lions, and chasing from behind is the most effective strategy for a pair of velociraptors. + +The three-phase model may apply to other situations, such as a guided missile chasing an aircraft. + +# Introduction + +We have structured our model in accordance with a modified version of Elliott et al.'s framework that divides the hunt into three distinct sections: stalk, attack, subdue [1977]. + +Stalking refers to the predator's attempt to reduce the predator-prey distance while the prey is unaware of the predator's actions. The predator has the choice of whether to use a sneaking approach or a running approach. In the sneaking approach, the chance of detection by the prey is given by a probability distribution based on the predator-prey distance. In the running approach, the predator rushes the prey, attempting to benefit from the element of surprise while sacrificing the chance that the prey remains unaware of its position. + +The attack phase refers to an active approach taken by the predator seeking to maximize the chance of predator-prey contact. The attack phase effectively begins when the prey detects the predator and ends when the predator-prey separation is reduced to a specified level, beginning the subdue stage. + +The subdue stage can be viewed as microscopic, as opposed to the macroscopic attack stage. Curvature constraints become increasingly important when the predator-prey distance is reduced to this level, and the chance of physical contact approaches certainty. 
When the predator achieves a specified level of separation (the capture radius), the prey is deemed captured.

In the stalking phase, the predator seeks to minimize the effective separation. As the predator sneaks toward the prey, there is a point at which the risk of detection balances the benefit of a closer approach. We find (depending on the choice of reflex times and a probability distribution for the detection range) that the optimal target distance for a stalk is $24.7\mathrm{m}$.

The two-player phase begins when either the prey or predator starts running and the other reacts. We model the positions $x_{P}(t)$ ("$P$" for pursuer) of the predator and $x_{E}(t)$ ("$E$" for evader, as in Hajek [1975]) of the prey as moving subject to maximum speed, minimum curvature radius, and, eventually, acceleration constraints. Let $R_{\mathrm{min}}$ be the closest approach of the predator to the prey within 15 s; it is the predator's goal to minimize $R_{\mathrm{min}}$ and the prey's goal to maximize it. We show that for the given parameters of the velociraptor and thescelosaurus, the two-player game may be decomposed into phases: the macroscopic and microscopic games.

In the macroscopic game, distances are large enough that the curvature constraints are nonbinding. Since velociraptors have a turning radius of $1.5\mathrm{m}$, the microscopic game begins at about 3 to $5\mathrm{m}$ of separation. The predator's goal is to minimize the time needed to approach the prey and enter the microscopic game; it is the prey's goal to maximize this time. We show analytically that the prey's best macroscopic strategy when facing a single predator is to run directly away, and (ignoring the 15-s time limit) we explain the optimal macroscopic strategy for two predators and one prey.

The microscopic game is more difficult to analyze directly; we instead simulated it by computer.
Compared with the macroscopic game, the microscopic game takes an insignificant amount of time. Since the microscopic game starts after an extended macroscopic game, we assume that predator and prey enter the microscopic game at maximum velocity. Thus, the only consideration of the microscopic game is the closest approach $R$. We summarize the decomposition in Table 1.

Table 1. Summary of model stages.
| | Lion hunt | Our model | Predator's goal | Prey's goal |
|---|---|---|---|---|
| Phase 1 | Stalk | One-player game | Minimize effective distance | |
| Phase 2 | Attack | Macroscopic game | Minimize pursuit time | Maximize pursuit time |
| Phase 3 | Subdue | Microscopic game | Minimize closest approach | Maximize closest approach |
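The decomposition in Table 1 amounts to a small state classifier. In this sketch the function name and the 5-m cutoff are our own reading of the text, which places the macroscopic-to-microscopic transition at roughly 3 to 5 m of separation for the velociraptor's parameters:

```python
def hunt_phase(separation_m, prey_alerted):
    """Classify the hunt into the three phases of Table 1."""
    if not prey_alerted:
        return "stalk"            # one-player game: minimize effective distance
    if separation_m > 5.0:        # curvature constraints still nonbinding
        return "macroscopic"      # attack: minimize/maximize pursuit time
    return "microscopic"          # subdual: minimize/maximize closest approach
```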
We prove upper bounds for the closest approach of the predator to the prey given that a microscopic game commences. Based on the bounds, smaller bounds found through simulation, and the sizes of a velociraptor and a thescelosaurus, we conclude that in a microscopic game, a lone velociraptor can achieve physical contact. Since lions are successful in killing $71\%$ of the large prey that they touch [Elliott et al. 1977] and only $17\%$ of all prey hunted (which includes what they do not touch) [Stander 1992], we argue by analogy that the velociraptors always win the microscopic game.

In our model, the only random factor is the distance the predator can approach undetected. Similarly, the stalking phase of a lion hunt is the most important factor in determining success [Elliott et al. 1977].

In our model, the prey lives if and only if the effective distance at the end of the stalking phase is greater than $(S_P - S_E)t_{\mathrm{max}}$. If the value of $t_{\mathrm{max}}$ is exactly 15 s, then during the stalking phase the velociraptor has a trivial strategy: Approach until some critical point is reached, then jump.

# Basic Model: The Two-Car Problem

The logic behind the two-car problem proposed by R. Isaacs [Hajek 1975; Isaacs 1965] is the basis of our model. Car $P$ (the pursuer) chases car $E$ (the evader). If car $P$ ever gets closer to car $E$ than a specified distance $\delta$ (the capture radius), then car $P$ wins the contest. Both cars have minimum turning radii $\rho_{P}$ and $\rho_{E}$ and move at constant speeds $S_{P}$ and $S_{E}$. In the special case of perfect capture, $\delta = 0$: $P$ captures $E$ only if their positions coincide exactly. The two-car problem with perfect capture was solved exactly by E. Cockayne [Cockayne 1967; Cockayne and Hall 1975].

Theorem 1 (Cockayne). $P$ can capture $E$ from any initial state if and only if $S_P > S_E$ and $S_P^2 / \rho_P \geq S_E^2 / \rho_E$.
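Theorem 1 can be checked directly against the parameters of Table 2 below (the helper name is ours). Under perfect capture, neither pursuer satisfies both conditions — the velociraptor is faster but out-turned (185 < 386 m/s² of lateral acceleration), and the lion turns harder but is outrun — which is consistent with the paper's move to closest-approach bounds and a nonzero capture radius:

```python
def cockayne_capture(s_p, rho_p, s_e, rho_e):
    """Cockayne's condition: P captures E (perfect capture) from any
    initial state iff S_P > S_E and S_P^2/rho_P >= S_E^2/rho_E."""
    return s_p > s_e and s_p ** 2 / rho_p >= s_e ** 2 / rho_e

# velociraptor (16.67 m/s, 1.5 m) vs. thescelosaurus (13.89 m/s, 0.5 m)
# lion (14.3 m/s, 10.5 m) vs. Thomson's gazelle (27.1 m/s, 80 m)
```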
Much research in differential games was conducted during the Cold War for military purposes. For example, the two-car problem might model a dogfight between two airplanes or one boat chasing another. Tellingly, roughly half the articles with direct military application or support were written in Russian, half in English.

# Additions to the Model

We modify the basic model to incorporate the hunting strategies of the African lion. We assume that the probability of capture depends on the closest approach to the prey:

$$
A = \min_{t} |x_{P}(t) - x_{E}(t)|.
$$

The paths $x_{P}(t)$ and $x_{E}(t)$ of the centers of gravity of the pursuer and evader satisfy the maximum speed and minimum curvature radius constraints. The predator tries to minimize, and the prey tries to maximize, $A$. We introduce delay times $\gamma_{P}$ and $\gamma_{E}$, typically 0.05 s, which may be thought of as either reflex times or imperfect information [Schreuer 1976]. For instance, $P$ can only react to the actions that $E$ took $\gamma_P$ s ago.

The performance data for lions and their prey [Elliott et al. 1977, Stander 1991] do not include the turning radius; those for velociraptors and thescelosauri do not include forward acceleration. Hence, we assume for all species a constant ratio $f$ of maximum forward acceleration $a$ to maximum lateral acceleration $S^2/\rho$. A value of $f = 0.5$ gives reasonable values for the inferred constants.

Table 2. Model parameters for different species. Values for $S$, $\rho$, and $K$ are from the problem statement or from Hajek [1975]; the others are inferred.
| | $S$ (m/s) | $\rho$ (m) | $K$ (1/s) | $S^2/\rho$ (m/s²) | $S^2 f/\rho = KS$ (m/s²) | force in turn (g's) | baton distance (m) |
|---|---|---|---|---|---|---|---|
| Velociraptor | 16.6 | 1.5 | 5.6 | 185 | 93 | 18.9 | 3.0 |
| Thescelosaurus | 13.9 | 0.5 | 13.9 | 386 | 193 | 39.4 | 1.0 |
| African lion | 14.3 | 10.5 | 0.68 | 19.4 | 9.7 | 1.98 | 21 |
| Thomson's gazelle | 27.1 | 80 | 0.17 | 9.2 | 4.6 | 0.94 | 159 |
| Zebra | 16.4 | 26.4 | 0.31 | 10.2 | 5.1 | 1.04 | 53 |
| Wildebeest | 14.7 | 18.9 | 0.39 | 11.4 | 5.7 | 1.16 | 38 |
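The inferred columns of Table 2 follow mechanically from $S$, $\rho$, and the assumed $f = 0.5$; a sketch (the function name is ours) that reproduces, e.g., the lion and thescelosaurus rows:

```python
G, F = 9.81, 0.5     # gravity (m/s^2) and the assumed acceleration ratio f

def inferred_constants(s, rho):
    """Derive the inferred Table 2 columns from top speed s (m/s) and
    minimum turning radius rho (m), assuming a = f * S^2 / rho."""
    k = F * s / rho                  # K = f*S/rho (1/s)
    lateral = s * s / rho            # maximum lateral acceleration (m/s^2)
    return {"K": k,
            "lateral": lateral,      # the S^2/rho column
            "KS": k * s,             # the S^2 f/rho column
            "g_force": lateral / G,  # force in turn (g's)
            "baton": s / k}          # baton distance b = S/K = rho/f (m)
```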
# The Stalk: A One-Person Game

The stalking phase is the most important factor affecting the success of a lion's hunt [Elliott et al. 1977]. During the stalk, the predator tries to minimize its effective separation from the prey. The effective separation accounts for the actual separation, the acceleration capabilities of the two species, and which player jumps first.

For the predator, the advantage of sneaking closer is a decrease in the actual separation. The disadvantage is the risk of being noticed and losing the element of surprise.

We define the effective separation to have the following property. If the prey runs directly away from the predator, then starting from a standstill and assuming that one species surprises the other is "equivalent" to starting from the effective separation with both species running at full speed. By "equivalent," we mean that the time that it takes the predator to catch the prey is the same. Explicitly,

$$
R_{\mathrm{effective}} \approx R_{\mathrm{actual}} - b_{E} + b_{P} + \begin{cases} -\gamma_{E} S_{E}, & \text{if predator jumps first;} \\ +\gamma_{P} S_{P}, & \text{if prey jumps first.} \end{cases}
$$

The prey and predator approach their maximum speed asymptotically but never reach it. However, our model parameters imply that with a 10-m head start, the thescelosaurus will be traveling at nearly full speed. Since the velociraptor is only slightly faster, it too must be running at near full speed when it captures the thescelosaurus. The approximation above would be an equality if both species attained full speed before contact.

The constants $b$ are "baton distances." For example, the baton distance $b_{P}$ is the initial separation that a stationary velociraptor on a relay team needs from its teammate in order to accelerate to full speed and receive the baton as the teammate catches up.
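The effective-separation bookkeeping can be sketched as follows; the function name and argument defaults are ours, and the sign convention follows the prose (whichever species reacts late loses effective distance), using the paper's reflex times $\gamma_P = 0.05$ s and $\gamma_E = 0.20$ s:

```python
def effective_separation(r_actual, b_p, b_e, s_p, s_e,
                         gamma_p=0.05, gamma_e=0.20, predator_jumps_first=True):
    """A standstill start at r_actual is 'equivalent' to a full-speed start
    at this separation; b_p, b_e are baton distances (m), gammas are
    reflex times (s)."""
    r = r_actual - b_e + b_p
    if predator_jumps_first:
        r -= gamma_e * s_e   # prey reacts late, losing gamma_E * S_E of gap
    else:
        r += gamma_p * s_p   # predator reacts late, gaining the prey gamma_P * S_P
    return r
```

For the velociraptor ($b_P = 3.0$ m) stalking a thescelosaurus ($b_E = 1.0$ m) to the 24.7-m target and then jumping, this gives an effective separation of about 23.9 m.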
By the acceleration assumptions of our model, $b = S/K = \rho/f$.

The reflex term can be either positive or negative depending on which species reacts first. If $P$ jumps first, we assume that $E$ immediately notices, takes $\gamma_E$ s to react, and loses an effective distance of $\gamma_E S_E$ m. If $E$ notices the predator and jumps first, the predator needs $\gamma_P$ s to react and loses an effective distance of $\gamma_P S_P$. Since the prey is not anticipating flight, but the predator is anticipating detection, we assume that $0.05 = \gamma_P < \gamma_E = 0.20$.

We now devise a cumulative probability distribution function $P(x)$ for the distance at which the prey first notices the stalking predator. We wish to fit a twice-differentiable and easily analyzable function $P$ to the constraints $P(15) = 0$ and $P(50) = 1$. We choose

$$
P(x) = \frac{x - 15}{35} - \frac{\sin\left(\frac{2\pi(x - 15)}{35}\right)}{2\pi}.
$$

The predator will try to stalk until reaching its target separation $x = R$ and then jump; of course, the predator may have to jump sooner if detected. The expected effective distance is $24.7\mathrm{m}$, as discussed below.

Figure 1 shows the expected effective distance as a function of the target distance. For our choice of parameters and probability distribution, there is a unique minimum, the optimum target separation for the stalking predator. We locate the minimum at $24.7\mathrm{m}$ by taking the derivative of the expected effective distance with respect to target distance. The derivative shown in Figure 2 does not adequately represent the predator's disadvantage in advancing from $20\mathrm{m}$ to $15\mathrm{m}$, because the strategies with targets $20\mathrm{m}$ and $15\mathrm{m}$ vary only in the rare case that the predator reaches $20\mathrm{m}$ undetected.
We account for this effect by considering the expected benefit, for a predator at distance $x$, of advancing infinitesimally; dividing by $P(D)$ then gives the conditional derivative. The conditional derivative is shown in Figure 3.

![](images/3ab541c4bd223fa99e605e468020797d6ae228036ad72ef400c7bb9af16a1e47.jpg)
Figure 1. Effective distance.

# The Macroscopic Game

This phase of the game is the simplest strategically for both the predator and the prey. The predator's goal is to use the least amount of time to reach the prey. The prey's goal is to use as much time as possible before being reached by the predator. Using simple trigonometry, it is easily shown that the prey should run directly away from the predator to extend the time spent in the chase. Obviously, the best strategy for the predator is to run directly toward the prey. Because of differences in top velocity, the predator will reach the prey and the microscopic game will begin in $R_{\text{effective}}/(S_P - S_E)$ s, where $R_{\text{effective}}$ is the initial effective distance between the prey and the predator.

![](images/2a25647f42bcd6163da0e8eb8d0e66bdb9ca9a55615a9a9f1f9db071d2e112fa.jpg)
Figure 2. Derivative of effective distance.

![](images/eba0042d61e2902eb7509b95198d8aefa8baf29d77c83db747037828959e2dbb.jpg)
Figure 3. Conditional derivative of effective distance.

# The Microscopic Game

We prove in Theorem 2 that the predator, simply by running directly toward the prey, can approach very close to the prey no matter what the prey does. The following lemma is instrumental in this proof. When the predator is as close as Theorem 2 allows, then the microscopic game has begun.

Lemma. Suppose that the predator always tries to run directly at the prey. Then the predator can run directly at the prey until the separation distance is $\rho_{P}(S_{E}/S_{P})$ meters.

Proof: We may assume that for large distances the predator can line up with the prey.
The predator can maintain this orientation as long as the predator's ability to change direction is greater than the prey's ability to change the direction of the vector from predator to prey:

$$
\frac{|x_{E}^{\prime}|}{d} \leq \frac{S_{P}}{\rho_{P}}.
$$

Since $|x_E^{\prime}| \leq S_E$, the condition $d \geq \rho_P S_E / S_P$ suffices.

A similar result holds for the prey: If the prey's strategy is to run directly away from the predator, then the prey can keep the predator directly behind it at least until the separation distance is $\rho_{E}(S_{P}/S_{E})$ m.

Theorem 2. A predator with maximum speed $S_P$, turning radius $\rho_P$, and reflex time $\gamma_P$ is guaranteed to approach within $\rho_P(S_E^2 / S_P^2) + \gamma_P S_E$ of a prey with maximum speed $S_E$.

Proof: First, we consider the zero-reflex case when $\gamma_{P} = 0$. The predator can run directly at the prey until the separation is $\rho_{P}(S_{E}/S_{P})$. Then the predator continues in a straight line to the point where the prey is at this instant. In the time that it takes to reach this point, the prey travels at most $\rho_{P}(S_{E}^{2}/S_{P}^{2})$ m.

For nonzero reflex time, the predator should always choose the point where the prey was $\gamma_{P}$ s ago. When the predator comes within $\rho_{P}(S_{E}^{2}/S_{P}^{2})$ of the pursued point, it is within $\rho_{P}(S_{E}^{2}/S_{P}^{2}) + \gamma_{P}S_{E}$ of the prey.

The only physical advantage of the prey over the predator is the prey's much smaller turning radius. However, to use this advantage, the prey must allow the predator to approach close by. Assuming perfect capture and instantaneous reaction, and more than $1.4\mathrm{m}$ separation, if the prey turns as hard as it can, then the predator can capture it. In our model, with nonzero reaction time, the predator instead aims directly for the prey.
Figure 4 shows a typical result of such a turn: The prey can maneuver more quickly and the predator must make a larger turn, using more time. Figure 5 shows the minimum distance between the centers of gravity for the predator and the prey at different turning distances, assuming instantaneous reactions but using the parameters for velociraptor and thescelosaurus. The predator is kept the farthest from the prey if the prey turns when the predator is about $1\mathrm{m}$ behind. Figures 6-7 show the effect of reaction time on the optimal time to turn (optimality is keeping the predator as far away as possible). Figure 6 shows what the minimum approaches are with various reaction times; and Figure 7 shows at what separation the prey should turn to obtain that minimum approach, given the reaction times.

![](images/8f17c31b2c0d1cfe482d3998891fc93f15b80054e9f2ae4c91a729cfb0d1754d.jpg)
Figure 4. Trajectories of hard turn maneuver.

![](images/0425e7813ae4a497f12fffb20a16804a78cf6e445d5bdd152d927bd274a1ff32.jpg)
Figure 5. Minimum approach vs. turning distance.

![](images/5151406bccd62c40d642efca11dc2e0c44ecd94ddf63eedcb40e9b70501a1a8e.jpg)
Figure 6. Minimum approach vs. predator reaction time.

![](images/a04ecb0c0953ea95b6181913139da4540d7fb47c824c60cb1a7b202a432c5da8.jpg)
Figure 7. Optimal turning distance vs. predator's reaction time.

# Hunting Strategy of Multiple Lions

For a multiple-lion hunt, Stander [1992] proposes a three-part framework to model the predators' strategies. The most effective strategy for the lion is a coordinated ambush, with a probability of success of $26\%$ for large prey. A single lion initiates the attack, driving the prey towards the other lions lying in wait. Multiple lions have been observed to use this strategy in $52\%$ of their large-prey hunts.

Another strategy is convergence, in which two lions jointly initiate the attack phase, pursuing the prey from the same direction.
For large-prey hunts with multiple lions, this strategy is pursued $14\%$ of the time with a probability of success of $14\%$.

The least effective strategy for group hunting is an uncoordinated ambush, which usually occurs when one lion startles the herd before the others are in position to receive the prey. While Stander observed a $34\%$ occurrence rate for large-prey hunts, not a single animal was killed under this approach out of 68 attempts [1992].

# The Two-on-One Macroscopic Game

The two-predator game is similar to the one-predator game in several respects. In both, the goal of the predator is to reach the prey as soon as possible. With two predators, one goal for the prey would be to avoid being contacted by any predator as long as possible; another goal would be to avoid being contacted by both predators as long as possible. Under the second goal, quickly encountering one predator might even be preferable if doing so delays the moment when the second predator joins the fight.

For the sake of simplicity, assume that the pursuers are faster than the evader and have no constraints on turning, acceleration, or time. The optimal strategies for each goal are provided by circles of Apollonius.

For two points $P$ and $E$ in the plane and a constant $k$, the locus of all points $Q$ that satisfy $PQ = k \cdot QE$ is a circle (for $k \neq 1$), known as a circle of Apollonius. The points $P$ and $E$ are the positions of the predator and prey, and $k$ is the ratio $S_P/S_E$ of their speeds. For $k > 1$, the circle contains $E$. As the pursuer and evader proceed, their positions and the circle change; the area of the circle decreases continuously as the pursuer and evader approach each other.

We omit proofs of the following results.

Theorem. Regardless of the pursuer's strategy, the evader can reach any point in or on the boundary of the original circle of Apollonius.
If the pursuer follows an optimal strategy, the evader can reach the boundary only by traveling in a straight line.

Theorem. If the evader runs in a straight line, the fastest way for the pursuer to capture the evader is by following a straight line.

![](images/6f8f72d70b1a4f89e0cee6f0ae2d9fa5de8e4f46a939c1f6abc53d73bdee6b7c.jpg)
Figure 8. Example of circles of Apollonius.

Assuming that the prey can survive a microscopic game with one predator but not with two, the prey's goal in the macroscopic game is to delay the entrance of the second predator into the microscopic game. Assuming further that the microscopic game takes up very little space compared to the macroscopic game, the prey's optimal macroscopic strategy is to engage the first predator as far from the second as possible. Thus, the prey aims for the point on the Apollonius circle with $P_{1}$ (the closer predator) that is farthest from $P_{2}$.

Based upon the different assumption that the prey's macroscopic goal is to delay the first contact with either predator, we conjecture that the optimal strategy for the prey is to run towards an intersection of the circles of Apollonius with the two predators, if such a point exists. If none exists, the prey's optimal strategy is to run away from the closest predator.

# The Ambush

The ambush is an effective strategy for lions because they have greater maneuverability and acceleration than their prey. Indeed, Stander empirically confirms that the most effective hunting strategy for multiple lions is a coordinated ambush. He witnessed a $27\%$ success rate for the coordinated ambush strategy vs. an average success rate of $15\%$. In addition, Stander notes that while the lions pursued a coordinated ambush strategy in $68\%$ of the total hunts, the figure increases to $87\%$ if only large prey are considered [1992].

Figure 9 shows the results of our model simulating an ambush of a gazelle by a lion.
We assume, based on Stander's observations, that the lion's ambush is $15\mathrm{m}$ from the gazelle. At this distance, the probability of a single lion capturing a gazelle from behind is essentially zero. The gazelle, after being scared by a second lion, runs in a direction that, if unaltered, would pass within $10\mathrm{m}$ of the hidden lion. Just after the gazelle starts running, the hidden lion leaps out. The gazelle accelerates through a turn in order to escape the lion but is unsuccessful. We experimented with different angles for the lion and different degrees of aiming for a point in front of the gazelle. In this simulation, we assumed zero reaction time for the gazelle.

We also considered a velociraptor attempting to ambush a thescelosaurus. We assume, since the agile thescelosaurus has $15\mathrm{m}$ to accelerate, that it is traveling at full speed when near the ambush. The constant-speed assumption makes an analytic solution possible. Let $D$ be the distance at which the thescelosaurus would pass from the ambush if it continued in a straight path. We assume a reaction time of $\gamma_E = 0.2$ s for the thescelosaurus, whose strategy is to try to cut away from the ambush as soon as the predator reveals itself. We assume also that the predator's strategy is to run in a straight line to intercept the thescelosaurus after the thescelosaurus has turned by an angle of $\theta$. Based on these assumptions, the velociraptor's leap must be perpendicular to the thescelosaurus's original path. For a zero capture radius, the maximum distance $D$ from which the ambush will be successful is

$$
D(\theta) = S_{P}\left(\frac{\rho_{E}\theta}{S_{E}} + \gamma_{E}\right) - \frac{S_{P}}{K_{P}}\left\{1 - \exp\left[-K_{P}\left(\frac{\rho_{E}\theta}{S_{E}} + \gamma_{E}\right)\right]\right\} - \rho_{E}(1 - \cos\theta).
$$

The optimum value of $\theta$ is 1.0681 radians, which allows a range of $1.51\mathrm{m}$ for the ambush, as seen in Figure 10. Note that in our simulation of the velociraptor's ambush, all of the assumptions except zero capture radius favor the velociraptor, so we are confident that $1.5\mathrm{m}$ is an upper bound for the maximum distance of effective ambush.

![](images/fc6aa76d39b9514fdf7e659a186bf24213a1d00d0291b463570c6e4fbbc842a0.jpg)
Figure 9. Ambush of a gazelle by a lion.

![](images/8e9eb099320542c4b7a1ac8d1aa0b447304c96e3ea1a97f1364e55e64a038440.jpg)
Figure 10. Maximum ambush distance by a velociraptor.

Our model accurately predicts that the lion should be able to capture a gazelle from $10\mathrm{m}$ and that a velociraptor should be able to capture a thescelosaurus from $1.51\mathrm{m}$. This difference explains why the ambush is a useful strategy for the lion but not for the velociraptor. The lion has a slower maximum speed but has greater acceleration and is more maneuverable than a gazelle, while the velociraptor has greater speed but less maneuverability than a thescelosaurus. An ambush is the most effective strategy for a pair of lions, and chasing from behind is the most effective strategy for a pair of velociraptors.

# Testing the Model

We determined reasonable parameter values for the velociraptor hunt by comparing the predictions of our model with the success rates of different strategies for the African lion. Our model could be validated by testing it on another known species, such as the cheetah, for which there are data. One particularly significant assumption that should be tested is the ratio $f$ of maximum forward acceleration to maximum lateral acceleration, whose value of 0.5 was chosen because it produced realistic results in the lion-gazelle hunt.

# Extensions

We could make the capture a stochastic process.
One alternative scoring function [Friedman 1970, Marec and Van Nhan 1977] is to integrate some function of time and the positions with respect to time:

$$
B = \int \mu (t, x _ {P} (t), x _ {E} (t)) \, d t.
$$

The probability of a capture is $e^{-B}$ . We briefly considered both

$$
\mu = \frac {1}{| x _ {P} - x _ {E} |} - \frac {1}{3}
$$

and

$$
\mu = 1 + \cos \left(\frac {\pi \left| x _ {P} - x _ {E} \right|}{3}\right)
$$

for $0 \leq |x_P - x_E| \leq 3$ but disagreed with their predictions. The "chicken maneuver" from Cockayne's proof of Theorem 1, in which the prey charges at the predator but swerves at the last moment, gives the prey a good score. Although the predator and prey come close together, they are close for only a short time, hence the integral with respect to time is small.

For the velociraptor-thescelosaurus hunt, the closest approach in the chicken maneuver is less than the size of the prey and would probably result in a kill. We conclude that the probabilistic approach is not appropriate for the given model parameter values.

Since the closest approach scales linearly with turning radius, the probabilistic approach may be appropriate for situations in which the turning radius is much larger than the predator and prey, such as a missile chasing an airplane.

# Strengths and Weaknesses

We took advantage of the fact that the turning radii and the distances needed to accelerate to full speed are negligible compared with the overall distances. This fact allowed us to decompose the chase into a macroscopic phase and a microscopic phase. The players have different restrictions and goals in each phase. We feel that the decomposition of a difficult problem into more manageable problems is the major strength of our approach.

The assumptions that $t_{\mathrm{max}} = 15$ , $\rho_P = 1.5$ , $\rho_E = 0.5$ , $S_P = 16.67$ , and $S_E = 13.89$ are unrealistic for a biomechanical analysis of dinosaurs.
In particular, a thescelosaurus turning at full speed in a radius of $0.5 \mathrm{~m}$ has a centripetal acceleration of 39 times the force of gravity. We might fix the g-force by linearly scaling the given constants $\rho$ and $t_{\mathrm{max}}$ and scaling the assumed reflex time. Since our model is invariant under this change of scale, the strategic analysis would be unchanged, with the possible exception that the capture radius stays constant and thus the microscopic game does not always favor the velociraptor.

A disadvantage of our model is that the transition between the macroscopic and microscopic games is not well defined. The exact transition should matter little when the initial distances are much larger than the turning radii, but we were not able to quantify its effect.

A second disadvantage is that the decomposition does not apply in all situations, since the macroscopic phase explicitly assumes that the predator's maximum speed is greater than the prey's. For instance, the macroscopic phase does not apply to the lion-gazelle hunt, because the lion is slower than its prey.

We could not find general strategies for the microscopic game by mathematical analysis. Instead, we used simulations to conjecture and test strategies.

An unforeseen advantage of our model is its potential application to other situations in which the initial distances are large compared to the turning radii, such as air combat or naval warfare.

# References

Cockayne, J. 1967. Plane pursuit with curvature constraints. SIAM Journal on Applied Mathematics 15: 1511-1516.
_____, and G.W.C. Hall. 1975. Plane motion of a particle subject to curvature constraints. SIAM Journal on Control 13: 197-220.
Elliott, J.P., I.M. Cowan, and C.S. Holling. 1977. Prey capture by the African lion. Canadian Journal of Zoology 55: 1811-1828.
Friedman, A. 1970. Existence of value and of saddle points for differential games of pursuit and evasion.
Journal of Differential Equations 7: 92-110.
Hajek, O. 1975. Pursuit Games. New York: Academic Press.
Isaacs, R. 1965. Differential Games. New York: Wiley.
Marec, P., and Nguyen Van Nhan. 1977. Two-dimensional pursuit-evasion game with penalty on turning rates. Journal of Optimization Theory and Applications 23: 305-345.
Schreuer, M. 1976. Stochastic pursuit-evasion games with information lag. I. Perfect observation. Journal of Applied Probability 13: 248-256. II. Observation with error. 13: 313-328.
Stander, E. 1992. Foraging dynamics of lions in a semi-arid environment. Canadian Journal of Zoology 70: 8-21.

# Judge's Commentary: The Outstanding Velociraptor Papers

John S. Robertson

Dept. of Mathematics and Computer Science

Georgia College and State University

Milledgeville, GA 31061-0490

jroberts@mail.gac.peachnet.edu

# Introduction

The life sciences provide a particularly fertile area in which a mathematical modeler can apply the craft. Physics and engineering are the traditional playgrounds of applied mathematicians, while biology and its subdisciplines have often been thought of as areas in which little or no mathematics is necessary—soft subjects, if you will. That stereotype is just not true. (Just look at any volume of The UMAP Journal!)

Paleontology is a particularly rich area for the mathematical modeler because there are simply no data available from observations. It is not possible, for example, to watch a tyrannosaur hunt down its dinner or a pterodactyl soar through the air. What we do know about ancient animals is based on a lot of inferential detective work, something that often requires a substantial amount of applied mathematics.

The success of Michael Crichton's novels *Jurassic Park* and *The Lost World* (and their extraordinary popularity as movies under the masterful direction of Steven Spielberg) only served to heighten interest in paleontology among many people, young ones in particular.
In that sense, the timing for this problem could not have been better. + +# The Importance of Assumptions + +The modeling process hinges critically on the assumptions made. In the cases of the velociraptor and the thescelosaurus, paleontologists provided data about certain physical characteristics of these dinosaurs, based on analysis of fossil remains. The paleontologists needed help in analyzing hunting strategies used by the raptors. + +The data provided suggested that unrealistically high forces were present as either animal made a sharp turn, a maneuver almost certainly necessary for survival. Most teams realized this; and since they were unable to obtain revised data from the paleontologists, they made reasonable moderating assumptions. This type of difficulty is not unknown in the real world, and it was important to the success of the teams that they identified the data provided as containing an evident weakness and reacted in some reasonable and appropriate way. For example, quite a few teams drew on information readily available in the literature concerning mammalian species similar in size and behavior traits to the dinosaurs in the problem. This enabled the teams to adjust the data in a realistic way, raising the likelihood that their subsequent results would be of real utility to their clients. + +Another major issue that had to be treated with assumptions dealt with the geometry and mechanisms of the stalk, the chase, and the capture. The best teams provided clear, detailed thinking about their choices. Good work here inevitably eased the problem of interpretation that the teams eventually had to face. + +# The Choice of Models + +Once teams clarified their assumptions, they set to the task using a surprising diversity of approaches. Some teams were able to formulate models that used just algebra and geometry and no calculus. Others used differential equations, and one of the best papers submitted used differential game theory. 
In almost every case, teams turned to a computer to perform model calculations. The judges saw all manner of approaches to this. Computer algebra systems were very popular, as was Matlab. And many teams used a good old-fashioned programming language (C++ in most cases) for this purpose.

This problem was especially well suited to graphical interpretation of one sort or another, and most teams provided graphs and charts that depicted the conduct and outcome of chases with one or two predators. Graphical analysis is particularly important when working with clients who may not grasp all the technical mathematics (such as paleontologists). The old saying—that a picture is worth a thousand words—is decidedly true in this case, as the illustrations produced by teams were absolutely vital to their analysis of the model's predictions.

# Analysis of Results

The very best papers gave a thorough treatment to the interpretation of their results and predictions. In most cases, this involved a consideration of model weaknesses uncovered by those results. These weaknesses often were traced back to assumptions made at the beginning of the process. This step is also important to the modeler. The client is not always in a position to assess the existence, never mind the seriousness, of weaknesses. Such weaknesses are not necessarily fatal, but they do serve to point out where better data, more accurate assumptions, or different methods are called for. This leads to an important interaction between the modeler and the client. Without this, the whole process loses its focus.

# Conclusion

The Outstanding papers showed remarkable depth of insight into both the biology and the mathematics used, as well as into the process of melding the two together. The judges had a great time reading them. It was evident that the topic stimulated an intense interest on the part of the contest participants.
No doubt the dinosaur problem has prompted many students to contemplate investigating biology as an exciting and fruitful area of work for a mathematician.

# About the Author

Dr. John S. Robertson is Chair and Professor of Mathematics and Computer Science at Georgia College and State University, the alma mater of the great southern writer Flannery O'Connor. He describes himself as a "dirt-under-the-fingernails" applied mathematician, and is as fascinated by the applications of mathematics to other disciplines as he is by the mathematics itself. He and his family are happily ensconced in the Deep South, from where he enjoys watching weather reports about all the winter snow that falls in the northeastern United States. He doesn't miss it a bit.

# An Assignment Model for Fruitful Discussions

Han Cao

Hui Yang

Zheng Shi

East China University of Science and Technology

Shanghai, China 200237

Advisor: Lu Xiwen

# Introduction

We present a Boolean (0-1) programming model to solve a practical problem of assigning discussion groups for a meeting.

Because the objective function of the model is nonlinear and because the model has a great many variables, the problem is quite difficult to solve by means of general methods from integer programming. We use a greedy algorithm to get an initial feasible solution, then we optimize locally and iterate to approach the optimal solution.

We believe that our algorithm solves the given problem quite well. For the possibility that some board members will cancel at the last minute or that some not scheduled will show up, we give an adjustment method that makes the fewest necessary changes in assignments.

Our ideas admit of generalization. The parameters, such as the number of members, the number of types of attendees, and the number of levels of participation, can be varied, and the model and the algorithm still give a good solution.
The model has the following advantages:

- It solves the given problem successfully, and it can quickly generate a set of near-optimal solutions.
- The model is general; it gives quite good solutions for different parameter values.
- It has many applications.

# Assumptions

- Three kinds of members attend the meeting:

- senior officers (6),
- in-house members (9),
- other members (20).

- The whole meeting is divided into two stages:

- A.M.: The morning meeting includes 3 sessions, each session consists of 6 discussion groups, and each group is led by a senior officer.
- P.M.: The afternoon meeting includes 4 sessions, each session consists of 4 discussion groups, and no senior officer attends.

- The assignments should satisfy the following two criteria:

- For the morning sessions, no board member should be in the same senior officer's discussion group more than once.
- No discussion group should contain a disproportionate number of in-house members.

- No member changes groups during the meeting.

Table 1. Description of the variables.
| Symbol | Description |
| --- | --- |
| $X$ | strategy vector; $x_{ijk}$ indicates whether or not member $i$ is in group $k$ in session $j$ |
| $P$ | dividing matrix |
| $P_j$ | the dividing matrix of session $j$ |
| $Q$ | acquaintance matrix |
| $Q_j$ | the acquaintance matrix of session $j$ |
| $Q_{\mathrm{sum}}$ | the summary acquaintance matrix, $Q_{\mathrm{sum}} = \sum_{j=1}^{7} Q_j$ |
| $f(X)$ | objective function: the number of 0s in the matrix $Q_{\mathrm{sum}}$ |
| $g(X)$ | objective function: the norm of the matrix $T = Q_{\mathrm{sum}} - K(J - I)$, where $K$ is a constant, $J$ is the all-ones matrix, and $I$ is the identity |
# Analysis and Model Design

# Preparation Knowledge

Divide the set $S = \{s_1, \ldots, s_m\}$ into $n$ groups $G_1, \ldots, G_n$ and represent the division by the $m \times n$ matrix $P = (p_{ij})$ , where $p_{ij} = 1$ if $s_i \in G_j$ and 0 otherwise. We call $P$ the dividing matrix. We also consider the $m \times m$ matrix $Q = (q_{ij})$ , where $q_{ij} = 1$ if $s_i$ and $s_j$ are in the same group ( $i \neq j$ , $1 \leq i, j \leq m$ ) and 0 otherwise (in particular, 0 on the diagonal). We call this the acquaintance matrix. We have a basic theorem that relates the dividing matrix and the acquaintance matrix.

Theorem. Let $P$ be a dividing matrix. Then the corresponding acquaintance matrix is $Q = PP^T - E$ , where $E$ is the identity matrix.

Proof: Because each $s_i$ can be in only one group, exactly one element of each row of $P$ is 1 and the others are 0. We can easily calculate the elements of $Q = PP^T - E$ :

$$
q _ {i j} = \left\{ \begin{array}{l l} \sum_ {k = 1} ^ {n} p _ {i k} p _ {j k}, & i \neq j; \\ \sum_ {k = 1} ^ {n} p _ {i k} ^ {2} - 1, & i = j. \end{array} \right.
$$

For $i = j$ , exactly one $p_{ik}$ is 1, so $q_{ii} = 1 - 1 = 0$ .

For $i \neq j$ , if there is no group $k$ for which $p_{ik}$ and $p_{jk}$ are both 1, then $s_i$ and $s_j$ are not in the same group, so $q_{ij} = 0$ ; if $p_{ik}$ and $p_{jk}$ are both 1 for some $k$ , then $s_i$ and $s_j$ are in the same group, so $q_{ij} = 1$ .

Set

$$
x _ {i j k} := \left\{ \begin{array}{l l} 1, & \text{if member } i \text{ is assigned to group } k \text{ in session } j; \\ 0, & \text{otherwise.} \end{array} \right.
$$

For our problem, $i$ ranges from 1 to 29 (let $i = 1, \ldots, 9$ be the in-house members), $j$ from 1 to 7, and $k$ from 1 to 6 in the morning sessions (1 to 4 in the afternoon sessions). For each assignment (session) $j$ , there is a dividing matrix $P_j$ and a corresponding acquaintance matrix $Q_j$ .
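The theorem is easy to check numerically. A minimal sketch in plain Python (the five-member division below is made-up example data, not one of the paper's assignments):

```python
# Acquaintance matrix Q = P P^T - E for a small example division:
# members s1..s5 split into groups {s1, s3}, {s2, s5}, {s4} (hypothetical).
groups = [[0, 2], [1, 4], [3]]
m, n = 5, len(groups)

# Dividing matrix P: P[i][j] = 1 if member i is in group j.
P = [[0] * n for _ in range(m)]
for j, g in enumerate(groups):
    for i in g:
        P[i][j] = 1

# Q = P P^T - E (E the identity): Q[i][j] = 1 iff i != j and i, j share a group.
Q = [[sum(P[i][k] * P[j][k] for k in range(n)) - (1 if i == j else 0)
      for j in range(m)] for i in range(m)]

assert Q[0][2] == Q[2][0] == 1              # s1 and s3 share a group
assert Q[0][1] == 0                         # s1 and s2 do not
assert all(Q[i][i] == 0 for i in range(m))  # zero diagonal, as the proof shows
```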
# Constraints

- Each member is assigned to only one group in each session:

$$
\sum_ {k = 1} ^ {6} x _ {i j k} = 1, \qquad i = 1, \dots , 2 9; j = 1, \dots , 3;
$$

$$
\sum_ {k = 1} ^ {4} x _ {i j k} = 1, \qquad i = 1, \dots , 2 9; j = 4, \dots , 7.
$$

- Each discussion group should contain a proportionate number of in-house members in each session:

$$
1 \leq \sum_ {i = 1} ^ {9} x _ {i j k} \leq 2, \quad k = 1, \dots , 6; j = 1, \dots , 3;
$$

$$
2 \leq \sum_ {i = 1} ^ {9} x _ {i j k} \leq 3, \quad k = 1, \dots , 4; j = 4, \dots , 7.
$$

- In the morning sessions, each of the six discussion groups is led by a senior officer, and no board member should be in the same senior officer's discussion group more than once:

$$
0 \leq x _ {i 1 k} + x _ {i 2 k} + x _ {i 3 k} \leq 1, \quad i = 1, \dots , 2 9; k = 1, \dots , 6.
$$

We seek a 0-1 three-dimensional matrix $X$ that satisfies all these constraints. How can we judge whether it is good for our purposes?

# Objective Function

The problem requires that the assignments should

- mix all the board members well,
- have each pair of board members share a discussion group the same number of times, while minimizing the common membership of groups across the different sessions.

Consider an extreme situation: If the 29 members are divided into just one group, the goal of mixing well is satisfied, but the number of repetitions (session after session) is greatest. That is, when the number of dividing groups is small, more sessions will increase the repetitions. At the same time, we must avoid groups with so many people that productive discussion is discouraged or that can be controlled or directed by a dominant personality. Therefore, during the course of model design and solution, we consider only the situation of the 29 board members divided as equally as possible into groups. We think that such a plan should minimize the number of repetitions, though we can't prove this claim.
So we divide each morning session into 6 groups of 5, 5, 5, 5, 5, and 4 members, and each afternoon session into 4 groups of 7, 7, 7, and 8. How can we judge mathematically whether the results of the seven divisions are good? The number of times that each pair of members meets may be calculated by the following formula:

$$
Q _ {\text {s u m}} = \sum_ {l = 1} ^ {7} Q _ {l} = \left(q _ {i j} ^ {\text {s u m}}\right) _ {2 9 \times 2 9},
$$

where

$$
q _ {i j} ^ {\text {s u m}} = \sum_ {l = 1} ^ {3} \sum_ {k = 1} ^ {6} x _ {i l k} x _ {j l k} + \sum_ {l = 4} ^ {7} \sum_ {k = 1} ^ {4} x _ {i l k} x _ {j l k}, \quad i, j = 1, \dots , 2 9, \ i \neq j,
$$

and $q_{ii}^{\text{sum}} = 0$ on the diagonal. The matrix $Q_{\mathrm{sum}}$ is called the summary acquaintance matrix of the division. Considering the goals, we think that the final summary acquaintance matrix should have as few zero elements as possible. It would be ideal for each nondiagonal element to be 1, with each main diagonal element 0.

Altogether, the assignments provide

$$
3 \times \left(5 \times \binom {5} {2} + \binom {4} {2}\right) + 4 \times \left(3 \times \binom {7} {2} + \binom {8} {2}\right) = 532
$$

chances for two individuals to meet each other. On the other hand, there are only $\binom{29}{2} = 406$ pairs of members. So every pair meets $K = 532 / 406 \approx 1.31$ times on average.
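The meeting-chance count is a one-line computation:

```python
from math import comb

# Each session, a group of size g yields C(g, 2) meeting chances.
morning = 3 * (5 * comb(5, 2) + comb(4, 2))    # groups of 5, 5, 5, 5, 5, 4
afternoon = 4 * (3 * comb(7, 2) + comb(8, 2))  # groups of 7, 7, 7, 8
chances = morning + afternoon                  # total meeting chances
pairs = comb(29, 2)                            # distinct pairs of members

assert chances == 532 and pairs == 406
print(round(chances / pairs, 2))  # average meetings per pair: 1.31
```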
Let

$$
T = (t _ {i j}) _ {2 9 \times 2 9} = Q _ {\mathrm {s u m}} - K \left( \begin{array}{c c c c c} 0 & 1 & \dots & \dots & 1 \\ 1 & 0 & \dots & \dots & 1 \\ \vdots & \vdots & \vdots & \vdots & \vdots \\ 1 & 1 & \dots & \dots & 0 \end{array} \right)
$$

and let

$$
f (X) = \text{the number of 0 elements in } Q _ {\mathrm {s u m}}, \qquad g (X) = \| T \| = \left(\sum_ {i = 1} ^ {2 9} \sum_ {j = 1} ^ {2 9} t _ {i j} ^ {2}\right) ^ {1 / 2},
$$

where

$$
t _ {i j} = \left\{ \begin{array}{l l} \sum_ {l = 1} ^ {3} \sum_ {k = 1} ^ {6} x _ {i l k} x _ {j l k} + \sum_ {l = 4} ^ {7} \sum_ {k = 1} ^ {4} x _ {i l k} x _ {j l k} - K, & i \neq j; \\ 0, & i = j \end{array} \right.
$$

(the diagonal entries vanish because each acquaintance matrix $Q_l$ has zeros on its diagonal).

When the function $f(X)$ is minimized, the goal of mixing well will be satisfied best; and we think that an assignment would have each board member in a discussion group with each other board member the same number of times when the function $g(X)$ attains a minimum.

# Model

We have two objective functions to minimize, $f(X)$ and $g(X)$ , and constraints as indicated earlier. This is a standard 0-1 integer programming and multiobjective programming problem. The task now is to solve this model.

# Model Solution

# Analysis

Although the constraints are linear, the objective functions are nonlinear. For this kind of integer programming problem, there is no general method to get the optimal solution efficiently; in fact, it is an NP-complete problem. Our problem instance has 986 binary variables: each of the 29 members chooses one of 6 groups in each of the 3 morning sessions and one of 4 groups in each of the 4 afternoon sessions, for $6^{87} \times 4^{116} \approx 10^{137}$ candidate assignments, so the method of exhaustion is infeasible. We must devise an efficient algorithm.

We find a good feasible solution and adjust it iteratively to approach the optimal solution, arriving at an acceptable, approximately optimal solution.
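The pairwise-exchange step used in the iteration (described in the following sections) can be sketched on a toy instance. The group sizes and the deviation objective here are illustrative stand-ins for the paper's $g(X)$ (the squared deviation has the same minimizers as the norm), not the authors' C program:

```python
from math import comb

def pair_counts(sessions, m):
    """meet[i][j] = number of sessions in which members i and j share a group."""
    meet = [[0] * m for _ in range(m)]
    for assign in sessions:                  # assign[i] = group label of member i
        for i in range(m):
            for j in range(i + 1, m):
                if assign[i] == assign[j]:
                    meet[i][j] += 1
    return meet

def g(sessions, m, K):
    """Squared deviation of the pair-meeting counts from the average K."""
    meet = pair_counts(sessions, m)
    return sum((meet[i][j] - K) ** 2 for i in range(m) for j in range(i + 1, m))

def sweep(sessions, m, K):
    """One sweep of pairwise exchanges, accepting a swap only if it lowers g."""
    for assign in sessions:
        for i in range(m):
            for j in range(i + 1, m):
                if assign[i] == assign[j]:
                    continue                 # same group: swapping changes nothing
                before = g(sessions, m, K)
                assign[i], assign[j] = assign[j], assign[i]
                if g(sessions, m, K) >= before:
                    assign[i], assign[j] = assign[j], assign[i]  # reject: undo

# Toy instance (hypothetical sizes): 8 members, 3 sessions, 2 groups of 4 each.
m = 8
sessions = [[i // 4 for i in range(m)] for _ in range(3)]  # deliberately bad start
K = 3 * 2 * comb(4, 2) / comb(8, 2)          # average meetings per pair
before = g(sessions, m, K)
sweep(sessions, m, K)
assert g(sessions, m, K) < before            # the sweep strictly improves g
```

Repeating such sweeps until no swap is accepted gives the kind of local optimization the iteration performs; swapping two members never changes the group sizes, so the balance constraints are preserved automatically.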
# Initial Feasible Solution

We use a greedy heuristic to get an initial feasible solution. The heuristic assigns the 29 members to the groups one by one in each session. Before each member is assigned to one of the groups, we examine which group would be best for that member.

# Iteration

Because there are two objective functions, we use the strategy of multiobjective programming. We optimize first with respect to $f(X)$ , until its value cannot be reduced. We can always get assignments that minimize $g(X)$ when $f(X)$ is minimized.

In the first step, we adjust the vector for the seventh division to reduce the value of $f(X)$ ; then we perform exchanges (see below) to reduce $g(X)$ without affecting $f(X)$ .

Then we similarly adjust the vector for the sixth division, then the fifth, and so on. Repeating the procedure until the values of the objective functions cannot be reduced further in this way, we obtain an approximate optimal solution.

# Permutation

Whether the members of a session are divided into 6 groups or into 4, there are still more than $10^{80}$ different combinations of assignments overall. It is impossible for a computer to investigate each possibility, so we use a simple strategy. We exchange the seats of two members who are not assigned to the same group. If the exchange reduces the value of the objective function, we accept it; otherwise, we reject it. We repeat the procedure over all such pairs, thereby driving down the value of the objective function.

# Steps of the Algorithm

[EDITOR'S NOTE: For space reasons, we omit the detailed pseudocode of the algorithm.]

# Solution

Using the greedy heuristic gives the plan of Table 2 as the initial feasible solution, which has $f(X) = 209$ and $g(X) = 28.25$ . Applying our iteration yields the result in Table 3, with $f(X) = 81$ and $g(X) = 19.44$ .

# Table 2.

Assignment table for initial feasible solution from greedy heuristic.
+ +Morning. + +
|  | Group 1 | Group 2 | Group 3 | Group 4 | Group 5 | Group 6 |
| --- | --- | --- | --- | --- | --- | --- |
| Session 1 | 1 7 13 19 25 | 2 8 14 20 26 | 3 9 15 21 27 | 4 10 16 22 28 | 5 11 17 23 29 | 6 12 18 24 |
| Session 2 | 4 11 14 21 26 | 5 10 15 23 28 | 6 12 16 20 29 | 1 8 17 24 27 | 2 7 18 22 | 3 9 13 19 25 |
| Session 3 | 2 9 15 22 27 | 1 11 16 21 29 | 4 7 17 24 28 | 3 12 18 23 | 6 8 13 19 25 | 5 10 14 20 26 |
+ +Afternoon. + +
|  | Group 1 | Group 2 | Group 3 | Group 4 |
| --- | --- | --- | --- | --- |
| Session 4 | 3 7 11 14 20 22 27 | 1 5 9 16 18 24 26 | 4 8 12 15 19 23 28 | 2 6 10 13 17 21 25 29 |
| Session 5 | 2 7 10 16 19 23 27 | 3 6 11 15 17 22 26 | 4 8 12 13 18 21 28 29 | 1 5 9 14 20 24 25 |
| Session 6 | 1 6 10 14 18 21 27 | 2 5 11 13 19 22 26 29 | 3 7 12 16 20 24 28 | 4 8 9 15 17 23 25 |
| Session 7 | 1 6 11 13 18 22 25 | 2 7 9 15 20 23 28 | 3 5 12 16 19 24 27 | 4 8 10 14 17 21 26 29 |
+ +# Table 3. + +Assignment table after application of iteration. + +Morning. + +
|  | Group 1 | Group 2 | Group 3 | Group 4 | Group 5 | Group 6 |
| --- | --- | --- | --- | --- | --- | --- |
| Session 1 | 1 6 13 19 24 | 2 3 14 25 26 | 5 8 10 15 27 | 7 16 20 22 28 | 9 11 21 23 29 | 4 12 17 18 |
| Session 2 | 3 10 11 26 28 | 5 12 19 22 23 | 6 16 20 21 29 | 1 2 17 24 27 | 4 7 15 18 | 8 9 13 14 25 |
| Session 3 | 5 7 14 17 29 | 4 13 15 16 21 | 1 9 19 26 28 | 3 11 18 23 | 6 8 12 25 27 | 2 10 20 22 24 |
+ +Afternoon. + +
|  | Group 1 | Group 2 | Group 3 | Group 4 |
| --- | --- | --- | --- | --- |
| Session 4 | 4 6 11 14 20 23 27 | 7 8 9 16 18 24 26 | 2 5 17 19 21 25 28 | 1 3 10 12 13 15 22 29 |
| Session 5 | 1 4 10 14 16 23 25 | 5 6 11 15 17 22 26 | 2 8 13 18 19 27 28 29 | 3 7 9 12 20 21 24 |
| Session 6 | 6 9 10 18 21 22 27 | 1 2 11 12 14 15 16 19 | 7 8 13 17 20 23 26 | 3 4 5 24 25 28 29 |
| Session 7 | 1 5 11 13 18 20 25 | 2 6 9 15 23 24 28 | 3 7 10 16 17 19 27 | 4 8 12 14 21 22 26 29 |
We compare these two plans in Table 4. The initial feasible solution satisfies the criterion of balanced assignment. The final solution, as optimized by the iterative method, not only keeps the balance of the initial solution but also reduces the objective functions $f(X)$ and $g(X)$ . We believe that the final solution satisfies the criteria of the problem. It attains the two goals of the problem, reducing not only the number of pairs that fail to meet (to 26) but also the number of multiple meetings.

Table 4. Comparison of initial feasible solution with result after iteration.
| Number of times a pair meets | 0 | 1 | 2 | 3 | 4 | 5 |
| --- | --- | --- | --- | --- | --- | --- |
| Initial feasible solution from greedy heuristic | 90 | 154 | 119 | 33 | 9 | 1 |
| Solution after iteration | 26 | 253 | 102 | 25 | 0 | 0 |
# Adjusting Assignments

Let's consider how to adjust the assignments when some board members cancel at the last minute or some not scheduled show up. We could just renumber the attendees and solve the problem again. But in real life, we would not like to change the existing assignments too much. So we give another strategy for adjustment.

# Case 1

When some members not scheduled show up at the last minute, we consider only how to assign them without changing the given assignments. We use an adjustment method that is similar to the greedy heuristic. The additional members are assigned to the groups one by one. We always try to find the best group for each member, according to the given constraints of keeping the assignments balanced and mixing the attendees well.

# Case 2

When some board members cancel, we delete one absentee at a time. From the original assignments we select a board member of the same type whose removal would lead to the best mixing, that is, whose absence would reduce the value of the objective functions most. Then we let that member fill the absentee's place and delete the absentee from the list of assignments.

# Case 3

When some board members cancel and some not scheduled show up at the same time, we classify them according to their types. Let $a$ stand for the number of absentees and $b$ stand for the number of additional members. For the members of each type, we do the following:

- If $a = b$ , then use the additional members to replace the absentees.
- If $a < b$ , then first replace all of the absentees by some of the additional members. Then assign the remaining additional members using the method of Case 1.
- If $a > b$ , then first replace $b$ of the absentees by the additional members, keeping the balance of the assignments. Then delete the remaining absentees using the method of Case 2.
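The Case 1 rule can be sketched as a small greedy routine (the schedule, the member IDs, and the `insert_member` helper are hypothetical illustrations, not the authors' program):

```python
def insert_member(sessions, meet, new_id):
    """Case 1 sketch: place one extra member, session by session, into the
    smallest group whose members the newcomer has met the least so far."""
    for groups in sessions:              # groups: list of member-id lists
        def cost(g):
            # balance first (group size), then mixing (prior meetings)
            return (len(g), sum(meet.get(frozenset((new_id, m)), 0) for m in g))
        best = min(groups, key=cost)
        for m in best:                   # record the new meetings
            key = frozenset((new_id, m))
            meet[key] = meet.get(key, 0) + 1
        best.append(new_id)

# Tiny hypothetical schedule: 6 members, 2 sessions of 2 groups each.
# Only pairs involving the newcomer affect the costs, so meet starts empty.
sessions = [[[1, 2, 3], [4, 5, 6]], [[1, 4, 5], [2, 3, 6]]]
meet = {}
insert_member(sessions, meet, 7)
assert all(any(7 in g for g in s) for s in sessions)  # placed in every session
```

In the paper's setting the routine would run once per additional member of each type, updating `meet` as it goes so that later placements avoid the newcomer's existing partners.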
+ +Table 5 compares the results of such adjustment with the greedy heuristic and with the iterative algorithm, for several cases of absentees and additional members. +Table 5. Comparison of results for the greedy heuristic (G), the iterative algorithm (I), and the adjustment algorithm (A) for various cases of absentees and additional members. + +
Columns 0-5 give the number of pairs that meet that many times.

| Senior | In-house | Other | Method | 0 | 1 | 2 | 3 | 4 | 5 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 6 | 9 | 20 | G | 90 | 154 | 119 | 33 | 9 | 1 |
|  |  |  | I | 26 | 253 | 102 | 25 | 0 | 0 |
|  |  |  | A | no adjustment needed |  |  |  |  |  |
| 6 | 9 | 21 | G | 88 | 171 | 132 | 39 | 5 | 0 |
|  |  |  | I | 33 | 253 | 128 | 21 | 0 | 0 |
|  |  |  | A | 33 | 264 | 108 | 28 | 2 | 0 |
| 6 | 9 | 19 | G | 76 | 145 | 126 | 29 | 2 | 0 |
|  |  |  | I | 24 | 236 | 100 | 16 | 2 | 0 |
|  |  |  | A | 26 | 235 | 93 | 24 | 0 | 0 |
| 6 | 10 | 20 | G | 87 | 164 | 146 | 36 | 2 | 0 |
|  |  |  | I | 35 | 247 | 134 | 19 | 0 | 0 |
|  |  |  | A | 33 | 263 | 104 | 28 | 2 | 0 |
| 6 | 8 | 20 | G | 74 | 140 | 140 | 24 | 0 | 0 |
|  |  |  | I | 26 | 231 | 103 | 17 | 1 | 0 |
|  |  |  | A | 23 | 237 | 95 | 23 | 0 | 0 |
The adjustment strategy and the iterative algorithm are both satisfactory. One might expect the solution from the adjustment strategy to be at best as good as that from the iterative method; but in several cases (e.g., for 8 in-house members), the solution from the adjustment strategy is actually better.

# Extension of the Model

Our model and solution method are completely general and apply to any number of members, kinds of members, and levels of participation.

Assume that

- there are $d$ types of attendees,
- the whole meeting is divided into $w$ stages,
- there are $S_{i}$ sessions for the $i$ th stage, and
- each session consists of $G_{i}$ discussion groups for the $i$ th stage.

The assignments should also satisfy the following requirements:

- Each member can be assigned to only one group in each session.
- The attendees of type $\alpha$ are to be divided equally in a session.
- The whole assignment must always be balanced.
- An attendee of type $L$ is not allowed to meet a particular senior officer more than $c$ times in stage $r$ .

Assume that there are $b_{i}$ members of the $i$ th type of attendee. Then there are $m = \sum_{i=1}^{d} b_{i}$ attendees, and we number them from 1 to $m$ . The whole meeting involves $w$ stages with $S_{i}$ sessions in the $i$ th stage, for a total of $S = \sum_{i=1}^{w} S_{i}$ sessions. Each session in the $i$ th stage is divided into $G_i$ discussion groups. We define all variables analogously to the simpler setting of the original problem, and we arrive at a generalized Boolean programming problem.

# Strengths and Weaknesses of the Model

The model has quite good practicality, and the given algorithm has low time complexity. For the given problem size, our C program for the greedy algorithm and iterative method runs in less than 5 min on a Pentium-100 computer. That means the model can give a list of assignments quickly when the number of attendees is not too large.
The assignments produced are close to the optimal solution.

The model is quite easy to extend to different numbers of attendees, numbers of groups, types of attendees, and levels of participation.

We can adjust to last-minute changes either by re-running our program or (to minimize the effect on assignments already made) by using our adjustment procedure.

The weakness of the model is that there is some difference between the true optimal solution and the solution obtained from the model.

# References

Churchman, C.W. 1957. Introduction to Operations Research. New York: Wiley.

Hwang, C.L., and K.S. Yoon. 1981. Multiple Attribute Decision Making. New York: Springer-Verlag.

# Using Simulated Annealing to Solve the Discussion Groups Problem

David Castro

John Renze

Nicholas Weininger

Macalester College

St. Paul, MN 55105

Advisor: Karla V. Ballman

# Introduction

Our task is to assign 29 corporate board members to a sequence of discussion groups organized into seven sessions, of which three are morning sessions led by senior officers and four are afternoon sessions not led by senior officers. We wish to find a combination of group assignments that best satisfies the following objectives:

- In the morning sessions, no board member should be in the same senior officer's discussion group twice.
- No group should have too many in-house members (there are nine in-house corporate employees among the 29 members).
- The number of times any two board members are in the same group should vary as little as possible.
- No two groups should have a large number of common members.

The specific case given assumes that all members will participate in all sessions and that there will be six discussion groups for each morning session and four for each afternoon session. Our model ought to be capable of producing good answers quickly under these assumptions and of adjusting to more general configurations of board members.
In particular, it should be able to adjust to small changes, such as individual additions or subtractions of board members, without recalculating all assignments from scratch. + +# Model Design Considerations + +This problem is essentially one of optimization. The different potential assignments of board members form a solution space, and we must optimize the "fitness" of our assignments (how well they satisfy our objectives) over that solution space. Thus we have to consider the issues that come up when designing any optimizer: time and space efficiency, flexibility, and optimality of the solution. We also have concerns particular to this problem. + +# Multiple Objective Satisfaction + +Most optimization problems involve maximizing a single objective function; here we have four objectives. The problem statement doesn't tell us whether, for example, to consider minimizing common group membership more important than minimizing the number of times any two board members meet. Presumably, we want to use some sort of weighted combination of the objectives as the function to optimize over the solution space—but determining how to combine and weight them is nontrivial. + +# Large Solution Space + +The number of ways of assigning the board members to the discussion groups is enormous. Since each board member goes into seven sessions, we have a total of $29 \times 7 = 203$ variables; since there are six possible assignments for each member in the morning sessions and four for the afternoon sessions, the total number of possible solutions is on the order of $6^{87} \times 4^{116} \approx 3 \times 10^{137}$ . Furthermore, no matter how we define our objective function, the solution space will probably contain a large number of local optima. This has two implications. First, it's probably impossible to find a global optimum (and it isn't absolutely necessary here). 
Second, we need a solution model that doesn't require searching over a significant fraction of the solution space. + +# Fast Readjustment + +Since board members may drop out or in at the last minute, we want a way of taking a precomputed solution and adjusting it for small changes in the member configuration. This adjustment should be significantly faster than a complete recomputation and should also involve fewer changes of assignment. + +# Simulated Annealing + +We chose simulated annealing as the basis for our model, as it offers the best chance of constructing an effective solution-finding algorithm. + +# The Simulated Annealing Process + +The simulated annealing algorithm can be described as follows: + +1. Start with an objective function that is evaluable for every point in the solution space, a randomly chosen point in that space, and an initial "temperature" value. +2. Evaluate the initial point's objective function value. +3. Perturb the point by a small amount in a random direction. +4. Calculate the objective function value of the perturbed point. +5. If the new function value is better than the old one, accept it automatically, staying at the perturbed point. +6. If the new function value is worse, decide probabilistically whether to stay at the perturbed point or go back to the old one. The probability of not staying at the new point should depend on how low the temperature is and how much worse the new point is than the old one. +7. Lower the temperature slightly and go to step 3. Iterate until the temperature is close to 0 or the solution stops changing. + +The algorithm essentially does a hillclimb through the solution space, with occasional random steps in the wrong direction. The temperature controls how likely the algorithm is to take a step the wrong way; at the beginning, the algorithm jumps almost completely randomly around the solution space, but by the end it almost never takes a wrong step. 
The idea is that the random steps allow the hillclimber to avoid getting stuck at local minima or maxima. In most implementations of simulated annealing, the probability of acceptance of a wrong-way step is $e^{-d / T}$ , where $d$ is the difference between the old and new objective functions and $T$ is the temperature. The temperature typically starts at a value sufficient to give almost any degree of wrong-way movement a significant probability of acceptance, and decreases exponentially from there. + +The original idea for simulated annealing comes from statistical mechanics. The molecules of a liquid move randomly and freely at high temperatures; when the liquid is cooled and then frozen, the molecules essentially "search" for the lowest energy state possible. The minimum energy state occurs when the molecules are arranged in a specific crystalline structure. If you cool the liquid slowly, the molecules will have time to redistribute themselves and find this structure; if you cool it quickly, they will typically get "stuck" at a higher-energy, noncrystalline state. + +# Reasons to Use Simulated Annealing + +A number of factors influenced our decision to apply simulated annealing to our problem: + +- It's fast. Simulated annealing requires evaluation of the objective function over a relatively small number of points to achieve its result. +- It's simple. Everything except the fitness function evaluation can be done in well under 100 lines of C code. It isn't difficult to understand or debug. +- It lends itself well to discrete solution spaces and nondifferentiable objective functions. As long as you can provide a random jump between solution points and a way of evaluating the function at any point, it can work; many other methods require a continuous solution space, or demand that you evaluate the function's derivative. +- It has a track record of success. 
Simulated annealing has been used for applications as diverse as stellar spectrum analysis and chromosome research. +- We were all well acquainted with simulated annealing, and one of us had previously used it to solve a similar partitioning problem. + +# Alternatives + +# Heuristic Algorithms + +One could try to get a solution by using some sort of intuitive rules about how to rearrange things, much as a human secretary would. This would likely be too slow for a large number of members, though, and coming up with good intuitive rules that a computer can implement is difficult. + +# Gradient Methods + +Traditional gradient descent methods are another possibility. These, however, generally require that the objective function's derivative be evaluated or at least estimated, and our objective function is extremely difficult to express mathematically. Furthermore, they run the risk of getting stuck at local minima. + +# Integer Programming + +Integer programming is commonly used for discrete optimization problems like this one. But none of us had experience implementing it, and we feared that the large size of the solution space might make it too slow. + +# Genetic Algorithms + +We could establish a pool of potential solutions that would "evolve" toward an optimal solution through a process analogous to natural selection. But this, too, would likely be too slow and complex for our problem. + +# Modeling the Solution + +We designed and coded in C a program that uses simulated annealing to solve a general set of discussion-group problems, including the one given. + +# The Data Structure + +Our approach to the problem began with data structure design: We needed a way to encode the whole list of assignments that would allow us to code the annealing process straightforwardly, and preserve important testing and organization data for analysis of the annealing results. + +The data structure partition has three levels of organization: + +1. The top level, partition. 
This contains a lot of data pertaining to the whole solution structure: the number of morning and afternoon sessions, the number of groups in each type of session, and the total number of members and of in-house members participating in each session. It also contains some data relevant to the objectives concerning pairs of members and common membership (more on that below). +2. The list of groups contained in the partition. Each group corresponds to one of the discussion groups in one of the sessions and has a size variable. +3. The list of people contained in each group. Each person has a name, a number and a flag indicating whether they're in-house. Thus each instance of person corresponds to one member's participation in one session. Note that this allows members to participate in only some of the sessions. + +# The Objective Function + +We chose as the objective function a weighted sum of four subfunctions. Each subfunction takes a partition and calculates how close it comes to satisfying one of the objectives given in the problem. The results are all expressed in terms of "badness," or the degree to which the partition fails to satisfy an objective; thus our problem becomes the minimization of the "badness function." + +# Senior officer nonrepetition + +The first objective we consider is that no board member should be in the same senior officer's morning group twice. Our first subfunction simply takes the morning groups headed by each senior officer and checks for repetitions in their member lists. The total number of repetitions—that is, the total number of times a board member is in the same senior officer's morning group more than once—is multiplied by one weight to form the first part of the badness function. Since we want zero repetitions, and want the annealer to stay with + +solutions with zero repetitions once it finds them, we also have a “zero bonus” that subtracts a given amount from the badness if there are zero repetitions. 
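As a rough illustration, the three-level structure and the first subfunction might be coded as follows. Field names, array bounds, and the assumption that officer $g$ leads group $g$ of every morning session are ours, not the authors'.

```c
/* Condensed sketch of the three-level partition structure described above,
   plus a repetition counter for the senior-officer objective.  All names
   and bounds are illustrative; the authors' actual layout is not given. */
#include <assert.h>
#include <string.h>

#define MAX_MEMBERS 64
#define MAX_GROUPS  64
#define MAX_SIZE    32

typedef struct {        /* level 3: one member's participation in one session */
    int number;         /* member id, 1..n */
    int in_house;       /* flag: is this member a corporate employee? */
} person;

typedef struct {        /* level 2: one discussion group in one session */
    int    size;
    person people[MAX_SIZE];
} group;

typedef struct {        /* level 1: the whole assignment */
    int   n_morning, groups_per_morning;
    group groups[MAX_GROUPS];   /* morning groups first, session by session */
} partition;

/* Count how often some member lands in this officer's group more than once
   across the morning sessions (assumes officer g leads group g of each
   morning session). */
int officer_repetitions(const partition *p, int officer) {
    int seen[MAX_MEMBERS + 1], reps = 0;
    memset(seen, 0, sizeof seen);
    for (int s = 0; s < p->n_morning; s++) {
        const group *g = &p->groups[s * p->groups_per_morning + officer];
        for (int i = 0; i < g->size; i++)
            if (++seen[g->people[i].number] > 1)
                reps++;
    }
    return reps;
}
```

Because each `person` records one member's participation in one session, members who skip sessions simply never appear in those sessions' groups.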
# In-house members

The second objective states that no group should have a disproportionate number of in-house members. Our second subfunction calculates the amount of disproportionality in the distribution of in-house members among groups. For each session, it uses the number of total members and in-house members participating in the session to compute ideal floor and ceiling values for the number of in-house members in each group. (If the number of in-house members is a multiple of the number of groups in the session, then the floor and ceiling are the same; otherwise they differ by one.) Then it goes through each group in the session and tests whether the number of in-house members is less than the floor or more than the ceiling; if so, it adds to the disproportionality sum the difference between the actual number and the violated bound. So, if the ideal ceiling were two in-house members per group, a group with four in-house members in it would add two to the disproportionality sum.

We want the disproportionality sum, like the repetition sum, to be zero, and so implement a zero bonus; we set this equal to the disproportionality weight rather than making it separately adjustable, for reasons discussed below.

# Pairings of board members

Our third objective is to try to ensure that each board member meets each other board member approximately the same number of times. Our third subfunction does this by computing ideal floor and ceiling values for that meeting number, just as the second does for the proportion of in-house members. In the partition structure we keep a matrix whose elements correspond to the number of times each pair of board members are in the same group.
The third subfunction recalculates this matrix by looking at each pairing in each discussion group; derives the ideal floor and ceiling values from the matrix; and then goes through the matrix again, adding up the instances in which the number of times a pair of members meets is less than the floor or more than the ceiling. This sum becomes the pairwise anomaly count.

We also calculate the maximum number of times any two board members meet, and use that as well as the pairwise anomaly count in the badness calculation. The reasoning is that not only is it desirable for most pairs of members to meet the same number of times, but we also want to ensure that no pair of members is in a hugely disproportionate number of groups together. We don't want a solution with one pair that is in every group together but with no other anomalous pairs to be preferable to a solution with many slightly anomalous pairs but with no hugely anomalous ones.

Here, we have no zero bonus for the anomaly sum, because the meeting criterion ought to be satisfied "as much as possible," rather than absolutely, and because it's impossible in many cases (including ours) to drive it to zero.

# Common membership

Our final objective is to ensure that no two groups have a disproportionate number of common members. The fourth subfunction is almost precisely analogous to the third. Again, we have a matrix in the partition structure, listing for each pair of groups how many members they have in common; again, we calculate this matrix, use it to derive an ideal mean, and find the sum of deviations from that mean and the maximal deviation. Here, however, we don't count as deviant groups those that have fewer than the mean number of common members. It doesn't matter if two groups have no members in common, only if they have too many.

# The Annealing Iteration Process

Once we have the "badness function" described above, we perform simulated annealing.
The relevant implementation details are: + +- The perturbation: A single swap of two members is the unit of random perturbation. Our program randomly chooses a session, two discussion groups within that session, and a member in each group, and then swaps them. +- The initial configuration: The file-reader dumps the members from the file into the partition in order, giving an extremely bad configuration. Our program does random swaps on that configuration for a random number of times (between 1 and 32,767), producing a “shuffled” initial configuration. +- The starting temperature setting and the rate of exponential decay: These are two key variables that determine how long the annealing takes and what the temperature profile is. + +For the starting temperature, we just use the badness value of the starting configuration; this means that at the starting temperature, a solution as bad as our starting one would have a $10\%$ chance of being accepted as a step away from a perfect (zero-badness) configuration. + +For the decay, we used a variety of different rates. Multiplying the temperature by 0.998 at the end of each iteration tends to give the best time/accuracy tradeoff. Slower decay makes for long annealing runs that don't get much better; faster decay doesn't give the process enough room to "jump around" randomly at the beginning, resulting in much worse solutions. 
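A minimal sketch of the swap perturbation, under the simplifying assumption that assignments are kept in a flat session-by-member array (the paper's actual partition structure stores member lists per group, so its swap touches two lists instead):

```c
/* Sketch of the unit perturbation described above: pick a session and two
   members of that session who sit in different groups, then swap their group
   assignments.  The flat array layout is our stand-in for the partition
   structure; it assumes each session occupies at least two groups. */
#include <assert.h>
#include <stdlib.h>

/* assignment[s * n_members + m] = group of member m in session s */
void random_swap(int *assignment, int n_sessions, int n_members) {
    int s = rand() % n_sessions;
    int a = rand() % n_members;
    int b;
    do {                    /* keep drawing until b sits in a different group */
        b = rand() % n_members;
    } while (assignment[s * n_members + a] == assignment[s * n_members + b]);
    int tmp = assignment[s * n_members + a];
    assignment[s * n_members + a] = assignment[s * n_members + b];
    assignment[s * n_members + b] = tmp;
}
```

Because the perturbation is a swap rather than a move, group sizes are invariant across iterations, which is what lets the annealer ignore size constraints entirely.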
# Setting the Weights

We anneal in an attempt to minimize the following badness function:

$$
\begin{aligned}
\text{badness} = {} & w_0 \cdot \text{reptotal} - w_1 \cdot \text{repzero} + w_2 \cdot \text{disprop} - w_2 \cdot \text{diszero} \\
& + w_3 \cdot \text{pairanom} + w_4 \cdot \text{maxpair} + w_5 \cdot \text{commanom} + w_6 \cdot \text{maxcomm},
\end{aligned}
$$

where

- reptotal is the total number of instances of a member being in the same senior officer's group more than once;
- disprop is the disproportionality sum for in-house members;
- repzero is 1 if reptotal is 0 and 0 otherwise, and likewise diszero is 1 if disprop is 0 and 0 otherwise;
- pairanom and maxpair are the anomaly score for pairwise meetings and the maximum number of times a pair meets;
- commanom and maxcomm are the anomaly score for group common membership and the maximum number of members any two groups have in common; and
- $w_{0}, \ldots, w_{6}$ are integer weights.

How should we set the weights? We did so by trial and error. The set of weights $w_0 = w_1 = 1200$, $w_2 = 1000$, $w_3 = 400$, $w_4 = 4000$, $w_5 = 100$, and $w_6 = 500$ produces extremely good results for a 9,000-iteration annealing run on the standard problem configuration. Such a run takes about 5 min on an HP 712/60 workstation. Using these weights, we could repeatedly produce solutions that had no senior officer repetitions, no instances of in-house member disproportion, no pair of members that met more than three times, and no pair of groups with more than three members in common.

We used somewhat different weights for a longer annealing run to produce our very best solution, and also adjusted the weights to produce solutions for different session configurations, as we will describe below. But the set of defaults above appeared to work remarkably well for a variety of configurations.
+ +Some things to note about the default weights: + +- The zero bonus for senior officer nonrepetition equals the minimization weight. That works so well that we hard-coded in the same equality for in-house disproportionality, rather than making another adjustable zero bonus. +- The ratio of pairwise minimization weight to pairwise maximum weight is 1 to 10. That means that the algorithm considers reducing the pairwise anomaly score by 10 equivalent to reducing the pairwise maximum by 1. Lower ratios tend not to force the pairwise maximum low enough; higher ratios tend to prevent the annealer from occasionally making the pairwise maximum higher by 1 on its way to a much lower pairwise anomaly score. + +- The common membership weights are quite small. We found that good common membership configurations were strongly correlated with good pairwise meeting configurations; that is, configurations in which no pair of members met an inordinate number of times also tended to be configurations in which no two groups had too many common members. + +# Our Solution + +Using extra-long runs and adjusting the weights by trial and error, we eventually produced the solution in Tables 1-2. The in-house members are 1-9 and the non-in-house members are 10-29. + +Table 1. Assignments by discussion group. + +
| Session | Group | Members |
|-------------|-------|------------------------|
| Morning 1 | 1 | 27 22 20 17 1 |
| | 2 | 4 28 21 3 13 |
| | 3 | 10 11 8 29 18 |
| | 4 | 14 19 24 7 23 |
| | 5 | 15 9 5 16 12 |
| | 6 | 6 25 2 26 |
| Morning 2 | 1 | 13 29 2 6 28 |
| | 2 | 10 16 9 19 25 |
| | 3 | 7 23 20 22 12 |
| | 4 | 5 11 26 17 3 |
| | 5 | 8 14 18 27 4 |
| | 6 | 24 21 1 15 |
| Morning 3 | 1 | 16 8 19 21 5 |
| | 2 | 2 18 22 1 23 |
| | 3 | 15 26 17 9 14 |
| | 4 | 12 25 4 29 27 |
| | 5 | 24 13 20 11 6 |
| | 6 | 28 7 3 10 |
| Afternoon 1 | 1 | 27 26 24 2 8 10 20 |
| | 2 | 6 3 22 15 19 4 18 |
| | 3 | 11 16 23 5 25 14 1 28 |
| | 4 | 21 9 13 17 29 7 12 |
| Afternoon 2 | 1 | 23 6 5 21 10 27 12 |
| | 2 | 3 15 29 14 2 25 20 |
| | 3 | 26 9 1 8 28 19 22 13 |
| | 4 | 7 24 4 17 16 11 18 |
| Afternoon 3 | 1 | 15 4 1 13 2 10 16 |
| | 2 | 5 24 12 25 22 17 8 |
| | 3 | 21 28 14 20 7 26 18 6 |
| | 4 | 27 23 11 19 3 9 29 |
| Afternoon 4 | 1 | 7 25 27 13 19 5 18 |
| | 2 | 21 14 22 11 10 2 9 |
| | 3 | 12 29 3 6 16 24 26 1 |
| | 4 | 4 23 17 15 28 20 8 |
+ +Table 2. Assignments by board member. + +
| Member | Morning 1 | Morning 2 | Morning 3 | Afternoon 1 | Afternoon 2 | Afternoon 3 | Afternoon 4 |
|--------|-----------|-----------|-----------|-------------|-------------|-------------|-------------|
| *In-house members* | | | | | | | |
| 1 | 1 | 6 | 2 | 3 | 3 | 1 | 3 |
| 2 | 6 | 1 | 2 | 1 | 2 | 1 | 2 |
| 3 | 2 | 4 | 6 | 2 | 2 | 4 | 3 |
| 4 | 2 | 5 | 4 | 2 | 4 | 1 | 4 |
| 5 | 5 | 4 | 1 | 3 | 1 | 2 | 1 |
| 6 | 6 | 1 | 5 | 2 | 1 | 3 | 3 |
| 7 | 4 | 3 | 6 | 4 | 4 | 3 | 1 |
| 8 | 3 | 5 | 1 | 1 | 3 | 2 | 4 |
| 9 | 5 | 2 | 3 | 4 | 3 | 4 | 2 |
| *Other members* | | | | | | | |
| 10 | 3 | 2 | 6 | 1 | 1 | 1 | 2 |
| 11 | 3 | 4 | 5 | 3 | 4 | 4 | 2 |
| 12 | 5 | 3 | 4 | 4 | 1 | 2 | 3 |
| 13 | 2 | 1 | 5 | 4 | 3 | 1 | 1 |
| 14 | 4 | 5 | 3 | 3 | 2 | 3 | 2 |
| 15 | 5 | 6 | 3 | 2 | 2 | 1 | 4 |
| 16 | 5 | 2 | 1 | 3 | 4 | 1 | 3 |
| 17 | 1 | 4 | 3 | 4 | 4 | 2 | 4 |
| 18 | 3 | 5 | 2 | 2 | 4 | 3 | 1 |
| 19 | 4 | 2 | 1 | 2 | 3 | 4 | 1 |
| 20 | 1 | 3 | 5 | 1 | 2 | 3 | 4 |
| 21 | 2 | 6 | 1 | 4 | 1 | 3 | 2 |
| 22 | 1 | 3 | 2 | 2 | 3 | 2 | 2 |
| 23 | 4 | 3 | 2 | 3 | 1 | 4 | 4 |
| 24 | 4 | 6 | 5 | 1 | 4 | 2 | 3 |
| 25 | 6 | 2 | 4 | 3 | 2 | 2 | 1 |
| 26 | 6 | 4 | 3 | 1 | 3 | 3 | 3 |
| 27 | 1 | 5 | 4 | 1 | 1 | 4 | 1 |
| 28 | 2 | 1 | 6 | 3 | 3 | 3 | 4 |
| 29 | 3 | 1 | 4 | 4 | 2 | 4 | 3 |
+ +# How Good Is This? + +In this configuration, no member is ever in the same senior officer's morning discussion group twice. No group contains a disproportionate number of in-house members (the morning groups contain 1 or 2, the afternoon groups 2 or 3). + +No pairs of members are in the same group together more than 3 times. Of the possible pairs of members, 40 never meet; 214 meet once; 138 meet twice; and 14 meet three times. Thus the pairwise anomaly score is 54, and the vast majority of members meet one another a "reasonable" number of times (the mean number of meetings is about 1.3). It is possible to achieve a configuration in which no two members meet more than twice, but not while preserving the other objectives. + +Also, no two discussion groups have more than two members in common. + +This is as low as possible. + +We conclude that this configuration satisfies all the objectives well and three of the four perfectly; it is likely extremely close to the global optimum. + +# Typical Results on the Standard Configuration + +A typical annealing run, with default weights, will zero out the senior officer repetition and in-house disproportionality. It will also reduce the maxima of member-pair meetings and group common membership to 3. It will not usually reduce maximum group common membership to 2, and it typically gives a pairwise anomaly score of 50 to 60. + +Thus, the standard annealing run's result isn't quite as good as our best; but it's nearly as good and much easier to achieve. It considers only about 9,000 different sets of assignments in reaching its solution, and takes about 6 min. + +# Generalizing the Model + +# More and Different Board Members + +We tested the annealing process on data sets with a variety of different numbers of board members, ranging from 20 to 100. We also tried keeping the number of members at 29 and adjusting the proportion of in-house members. 
Finally, we tried keeping the existing set of 9 in-house and 20 other members but changed the number of senior officers, the number of morning and afternoon sessions, and the number of groups in each afternoon session. In all cases, we annealed with the same default weights used for the standard configuration. + +The model responded extremely well in each case. Table 3 shows the times required for a full annealing run on the standard session configuration with various numbers of members. + +Table 3. +Run-time results. + +
| In-house | Others | Total | Run-time |
|----------|--------|-------|----------|
| 5 | 10 | 15 | 2:56 |
| 9 | 20 | 29 | 6:38 |
| 12 | 27 | 39 | 8:25 |
| 15 | 33 | 48 | 9:53 |
| 18 | 41 | 59 | 22:06 |
| 30 | 70 | 100 | 58:01 |
+ +We also tried changing the profile to 4 in-house and 25 other members, and to 14 in-house and 15 other members. In both cases, the annealing still drove the + +in-house disproportionality to zero and reduced the pairwise and commonality maxima to 3. + +We tried a total of five different changes in the session/group profile; these ranged from decreasing the number of groups in each morning session to two to having only one morning session and six afternoon sessions. In almost all cases, the annealing produced solutions that were as good as could be expected, given the limitations of the sessions and groups. + +The only exception occurred when we tried having three officer-led groups for each of the three sessions. Then the default weights failed to minimize the senior officer nonrepetition criterion, which is much harder to achieve with three groups than with six. Increasing $w_0$ to 2,000 and rerunning gave much better results. + +# Different Levels of Session Participation + +The data structure and annealing code are designed to deal transparently with members who attend some but not all of the sessions, by considering each member's participation in each session as a separate variable. We tested our code on several different session-participation variations, including: + +- making the in-house members not attend the afternoon sessions, +- making some of the non-in-house members not attend the morning sessions, and +- introducing new members who go only to one morning session. + +In all cases, the annealing produced good results—zero senior officer repetitions, no more than two instances of in-house disproportionality, maximal pairwise meetings, and group common memberships at one more than the ideal mean. + +# Adjusting Quickly to Last-Minute Changes + +Another important consideration is how well our model can deal with small changes in the configuration of board members—one or two new board members added or deleted, say. 
We investigated two approaches to adjusting an already-annealed configuration, reannealing and flip-path search. Both are motivated by the idea that the new configuration's best solution should be quite close to the old one's. We tested these approaches on several small modifications: single and double additions and deletions to all sessions, plus additions to only one or two sessions. + +# Reannealing + +Since we want to stay in the neighborhood of the old configuration and to take less time than the original annealing, we use a much lower starting temperature. Experimentation shows that dividing the regular starting temperature by 1,000 works best. The new configuration often is very different from the old one. This would be undesirable in real-world applications, where you might want to make as few changes to group assignments as possible. + +# Greedy path search + +The alternative is to try to reduce the badness function with as few changes as possible. One way is to try all possible single flips in the configuration, then try all possible combinations of two possible single flips, and so on, always looking for the flip sequence that produces the lowest badness function value. + +This approach quickly becomes too slow. There are 2,310 possible flips in the standard configuration, and evaluating all of them takes more than 1 min on an HP 712/60; thus, evaluating all two-flip sequences would take at least $20\mathrm{h}$ . An alternative, much faster approach is to try all single flips, take the one that produces the lowest badness function, perform that flip, try all single flips from the resulting flipped configuration, and so on. We call this a greedy path strategy, and in our tests it brought the new configuration's badness function down acceptably close to the old one's in seven to ten flips. + +Furthermore, we observed during our tests of the greedy path search that it tended to do best after doing one flip involving groups in each of the seven sessions. 
This was especially true when we added a new member to all of the sessions. This makes sense, because it ought to take one flip to put a new member in the "right" place in each session. + +So we tried the following variation: Perform the best of the possible flips involving groups in the first session, then perform the best of the flips involving the second session, and so on, to the last session. This requires considering all of the 2,310 possible flips only once. This does not work nearly as well as the original greedy search, probably because this approach fixes the order in which the best flips in the sessions are taken. + +We finally found a workable hybrid approach. This approach starts out by finding and taking the best flip from all the sessions, then takes the best flip in each session in order, and finally again finds and takes the overall best flip. It runs in roughly 5 min for the standard configuration and gives test results about as good as for the original greedy path search. It doesn't always work as well as reannealing, but it never requires more than nine (in the standard configuration) single-flip changes. + +# Improving the Model + +# Complexity + +Our algorithm runs in time approximately proportional to the square of the number $n$ of board members. The pairwise part of the badness calculator goes through a matrix including all $n(n - 1) / 2$ pairs of members. The run-time is also quadratic in the number of groups, because of the pairs-of-groups common membership matrix. + +It would be nice to develop a badness function that runs in linear time and produces a good approximation to our original function value. We could also make smaller efficiency improvements, such as developing a way to update efficiently the pairwise and commonality matrices with each single flip instead of recalculating them on each iteration. 
Time considerations prevented us from doing this (we tried doing it with the pairwise matrix but never got it to run significantly faster than straight recalculation). We believe that our algorithm runs remarkably well as it is; finding a near-minimum over a solution space of $10^{137}$ points in 6 min is no small task. + +# Flexibility + +The present model allows only two kinds of group sessions—morning and afternoon—and assumes that all sessions have the same number of groups of the same size (or differing by only one). It wouldn't be too difficult to extend the partition structure to allow for a more complex session structure, such as one that would allow for different numbers of senior officers at each morning session. + +We could also add different types of board members and specify new objectives based on them. Perhaps, for example, one might want to stipulate that some board members are new, and that new board members should all be in the same discussion groups so that they can get to know each other (or at least that every new member should meet every other member once). + +Finally, we could try to devise a general method for setting annealing weights, given a configuration. The default standard weights appear to work well for a large range of configurations, but they are almost certainly not optimal for all configurations. + +# References + +Kirkpatrick, S., C.D. Gelatt, and M.P. Vecchi. 1983. Optimization by simulated annealing. Science 220 (13 May 1983): 671-680. + +Press, William, et al. 1992. Numerical Recipes in C. 2nd ed. New York: Cambridge University Press. + +# Meetings, Bloody Meetings! + +Joshua M. Horstman + +Jamie Kawabata + +James C. Moore, IV + +Rose-Hulman Institute of Technology + +Terre Haute, IN 47803 + +Advisor: Aaron D. Klebanoff + +# Introduction + +We present a model and three algorithms for solving the problem of scheduling An Tostal Corporation's upcoming board meeting. 
To determine how well a schedule will fit An Tostal Corporation's needs, we establish a badness function based on assumptions as to what kinds of schedules are desirable. These assumptions include that a good mix of people in meetings is pivotal for effective idea-sharing. + +The algorithms that we compare are based on random selection, greedy assignment, and greedy assignment followed by hill-climbing. The random algorithm places board members at random. The greedy algorithm assigns one member at a time by examining which possibility is locally optimal, hoping for a solution that is globally optimal. The greedy algorithm with hill-climbing enhances the greedy algorithm's effectiveness by tweaking the schedule in a manner that will cause the schedule to edge closer and closer to optimality. + +We also provide a simple algorithm that will allow a secretary to handle any unforeseen additions or cancellations. This algorithm is designed to alter as few board members' schedules as possible while still obtaining a good mix of people. + +Finally, we summarize the strengths and weaknesses of the algorithms presented, present a sample solution for use by the An Tostal Corporation, and conclude that the "greedy-twiddle" algorithm is most effective and provides a very good solution nearly every time. + +The UMAP Journal 18 (3) (1997) 321-329. ©Copyright 1997 by COMAP, Inc. All rights reserved. Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice. Abstracting with credit is permitted, but copyrights for components of this work owned by others than COMAP must be honored. To copy otherwise, to republish, to post on servers, or to redistribute to lists requires prior permission from COMAP. 
+ +# Assumptions and Justifications + +- All discussion groups in a given session should be of approximately the same size. A discussion group that is too large will not facilitate good discussion. If any of the discussion groups are too small, then some of the discussion groups will be too large (since there are a fixed number of discussion groups and board members), and we will have the same problem. +- Since concurrent discussion groups will be of roughly equal size and we are to put a proportionate number of in-house board members in each discussion group, each concurrent discussion group should have approximately the same number of in-house members. +- All board members are treated equally, with the exception of in-house employees as outlined in the problem statement. There are no special needs considered in the assignment of schedules, such as a board member desiring to avoid a particular individual. +- All of the senior officers are present for the three morning meetings. +- The schedules will not be adjusted for board members who are late or have to leave during the middle of the day. +- When the secretary is performing a last-minute change to the schedule, it is desirable to minimize the number of members who experience a schedule adjustment. That is, a change that completely rearranges one member's schedule is better than a change that slightly alters many members' schedules. + +# Defining the Model + +A natural model is a hypergraph in which each vertex corresponds to a board member and each edge corresponds to a discussion group. However, the added constraints, such as the in-house board members' even distribution and the limitation that each board member see each officer at most once, disrupt the structure of the hypergraph enough to render many of the well-known theorems inapplicable. On the other hand, some of the matrices derived from the hypergraph model, such as the adjacency matrix and the incidence matrix, are quite useful for our computations. 
+ +With this as the underlying model, each board member corresponds to a row and a column of the adjacency matrix, and to a row of the incidence matrix. A column of the incidence matrix corresponds to a discussion group. Each board member is represented by a number from 1 to 29, with 1 through 9 representing the in-house board members (corporate employees). + +A good schedule is one that does a good job of meeting the following criteria: + +- The total number of board members in any two concurrent discussion groups does not differ by much. +- The number of in-house board members in any two concurrent discussion groups does not differ by much. +- In the morning sessions, no board member is in multiple discussion groups led by the same senior officer. + +We design our algorithm so that every schedule produced satisfies the third condition; so we attempt to devise a schedule that does a good job of meeting the first two criteria. We measure how good a schedule is by defining a function that assigns a "badness" to each schedule. First, we define an encounter as one instance of two board members being in the same discussion group. We want our badness function to penalize, by increasing the badness, a schedule that contains a pair of board members who have too many encounters. Making sure that no board members have too many encounters automatically ensures that no board members have too few encounters. Since there is a fixed number of encounters (as long as the discussion group sizes stay constant), then if one pair of board members has too few encounters, it is only because another pair has too many. We define $e_i$ to be the number of pairs of board members who encountered each other $i$ times. A reasonable function for the badness $b$ of a particular schedule $S$ is + +$$ +b (S) = \sum_ {i = 2} ^ {\infty} e _ {i} \cdot 4 ^ {i - 2}. 
$$

By making the badness penalty increasingly higher for each additional encounter between a pair of board members, we create a function that is minimized when every pair of board members has the same number of encounters. This badness function is not good enough by itself, though, since such a function must also take into account the distribution of the in-house board members. We will say that all schedules for which the numbers of in-house members in any two concurrent discussion groups differ by no more than one are equally good. However, a very heavy penalty will be imposed for each discussion group that contains a number of in-house board members that is more than one away from the average number of in-house board members in a discussion group. Define $d$ to be the number of discussion groups in the entire schedule that fall into this category; then we can revise our badness function to

$$
b(S) = 1000d + \sum_{i=2}^{\infty} e_i \cdot 4^{i-2}.
$$

This penalty prevents deviation from an even distribution of in-house board members. Minimizing the badness of a schedule will take care of the first two criteria of an optimal schedule referred to above. The minimum theoretical value for $b(S)$ is 129, calculated using the Pigeonhole Principle.

# Random Selection Algorithm

There is a huge search space, containing more than a googol ($10^{100}$) of schedules, so a brute-force attack is out of the question. To provide a basis for comparison, we see how effective a random selection algorithm is. This algorithm assigns each board member to a random discussion group in each session, making sure that no board member is in multiple discussion groups led by the same senior officer. Although we impose no limit on the size of a discussion group, we hope that a random algorithm will distribute the board members evenly. We implemented this algorithm on a computer and ran it 1,024 times. Figure 1 displays the badness of these executions.
The results are not good.

![](images/7e4200ee038b2145cfb63afe605567781f95becf01207869c683fbc5e55d1bbc.jpg)
Figure 1. Badness results for random assignments.

# Greedy Algorithm

We sought a heuristic requiring a minimum amount of backtracking, and a greedy algorithm seemed logical. We assign the in-house board members separately before assigning the rest of the board members. Each board member is placed in the first available position that minimizes the number of encounters that board member has had with any other board member. The algorithm, shown in Figure 2, works by assigning all of the in-house board members to each session, then going back and filling each session with the remaining board members. The order in which board members are placed and the order in which the various discussion groups are tried are random. This feature facilitates distributing the board members evenly, by eliminating regular patterns that may occur when placing the board members in the same way every time.

```txt
for member from 1 to 9 (in random order)
    for session from 1 to 3
        select the groups in which member has never been
        of them, choose the groups containing the fewest other members
        of those, choose the group containing members with whom member has been
            the fewest times
        place member in this group
    for session from 4 to 7
        select the groups containing the fewest other members
        of those, choose the group containing members with whom member has been
            the fewest times
        place member in this group
for session from 1 to 7
    for member from 10 to 29 (in random order)
        set greedlevel to 0
        select a group not led by an officer already encountered by member
        repeat
            does this group have a member with more than greedlevel encounters?
            if no, place member in this group
            if yes, select a group that hasn't been tried yet
            if every group has been tried, increment greedlevel and consider all
                groups untried
        until member is placed
```

Figure 2.
Greedy algorithm.

Figure 3 shows the badness from 1,024 runs of the greedy algorithm. The greedy algorithm is a dramatic improvement over random assignment: all badnesses are below 300, while the best that random assignment could do was more than 5,000. However, there are many final schedules that the greedy algorithm can never reach. For example, in the first several sessions, no board member will see any other more than once, since the greedy algorithm won't place together two board members who have already encountered each other until it is unavoidable. The best schedule, on the other hand, may have a pair of board members who encounter each other in both the first and second sessions; the greedy algorithm will never find it.

# Greedy-Twiddle Algorithm

We can do better by using hill-climbing together with the greedy algorithm. This means that we make small changes to our schedule, determine whether the slightly modified schedule is better or worse using our badness function, and proceed making small adjustments until we can do no better. The greedy algorithm helps a lot, in that we start our hill-climbing from a schedule that is much better than a random assignment. We call the small changes twiddles. Each twiddle consists of swapping two board members between meetings in a session or simply moving a board member from one meeting to another. After twiddling, we compute the badness function and determine whether the twiddle made our schedule better or worse. If it made it worse, we undo it. We continue until no swap or move improves our schedule.

![](images/8242cef06e068cf24658868bfb36d226d440fcf999e94fb1cb27c8be60462365.jpg)
Figure 3. Badness results for assignments made by the greedy algorithm.

Figure 4 shows the badness of 679 runs of the greedy-twiddle algorithm, which shows significant improvement on the simple greedy algorithm: all badnesses are below the best that the greedy algorithm could do.
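The badness function and a twiddle pass can be sketched in Python. This is a sketch under our own encoding (a schedule is a list of sessions, each a list of member-id sets), and the senior-officer restriction is omitted for brevity:

```python
import itertools
import random

def badness(schedule, n_inhouse=9):
    """Badness b(S) = 1000*d + sum over pairs of 4**(i-2) for pairs meeting
    i >= 2 times; d counts groups whose in-house head count is more than one
    away from the session's average in-house count (our reading of "average")."""
    encounters = {}   # pair of members -> number of shared groups
    d = 0
    for session in schedule:
        counts = [sum(1 for m in g if m <= n_inhouse) for g in session]
        avg = sum(counts) / len(session)
        d += sum(1 for c in counts if abs(c - avg) > 1)
        for group in session:
            for pair in itertools.combinations(sorted(group), 2):
                encounters[pair] = encounters.get(pair, 0) + 1
    return 1000 * d + sum(4 ** (i - 2) for i in encounters.values() if i >= 2)

def twiddle(schedule, n_inhouse=9, tries=2000, seed=0):
    """Hill-climbing pass: swap two members between groups of one session,
    keeping the swap only if the badness does not get worse."""
    rng = random.Random(seed)
    best = badness(schedule, n_inhouse)
    for _ in range(tries):
        session = rng.choice(schedule)
        g1, g2 = rng.sample(session, 2)
        a, b = rng.choice(sorted(g1)), rng.choice(sorted(g2))
        g1.remove(a); g2.remove(b); g1.add(b); g2.add(a)
        new = badness(schedule, n_inhouse)
        if new <= best:
            best = new
        else:                            # undo a bad twiddle
            g1.remove(b); g2.remove(a); g1.add(a); g2.add(b)
    return best
```

Starting this pass from the greedy algorithm's output, and repeating until no swap helps, reproduces the hill-climbing loop described above.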
+ +Table 1 shows our final recommendation to the An Tostal Corporation on scheduling their meeting; it is the best schedule from the 679 runs mentioned above. + +# Secretary's Algorithm + +To maintain the near-optimality of the schedule while changing as few members' schedules as possible, the secretary will do the following when performing last-minute changes: + +- Determine the net change in the number of in-house members who will be attending and the net change in the number of out-of-house members who will be attending. If you can pair up an in-house member who canceled and an in-house member who added, then just give the added member the schedule originally assigned to the canceled member, and similarly for out-of-house members. Do this for as many pairs as possible. +- Treat the situation for in-house members as follows: + +Table 1. Recommendation on scheduling. + +
| Session | Members |
|---|---|
| Morning 1 | 1 3 10 12 26 |
| | 2 8 17 21 25 |
| | 6 13 22 28 |
| | 5 11 18 23 27 |
| | 4 14 15 19 24 |
| | 7 9 16 20 29 |
| Morning 2 | 7 11 13 17 24 |
| | 1 4 15 22 27 |
| | 8 9 16 18 26 |
| | 6 10 14 20 25 |
| | 3 5 21 28 29 |
| | 2 12 19 23 |
| Morning 3 | 4 5 25 29 |
| | 6 7 11 19 26 |
| | 3 15 17 20 23 |
| | 9 12 13 21 24 |
| | 2 10 16 18 22 |
| | 1 8 14 27 28 |
| Afternoon 1 | 2 3 6 18 24 26 27 29 |
| | 8 9 11 12 14 15 22 25 |
| | 1 5 13 16 19 20 21 |
| | 4 7 10 17 23 28 |
| Afternoon 2 | 5 6 7 12 15 18 21 |
| | 2 4 11 14 20 26 28 |
| | 1 9 17 19 22 24 29 |
| | 3 8 10 13 16 23 25 27 |
| Afternoon 3 | 5 8 20 22 23 24 26 |
| | 3 7 9 18 19 25 28 |
| | 1 2 10 11 13 15 29 |
| | 4 6 12 14 16 17 21 27 |
| Afternoon 4 | 5 9 10 15 17 26 27 |
| | 4 8 12 13 18 19 20 29 |
| | 1 6 11 16 23 24 25 28 |
| | 2 3 7 14 21 22 |
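The recommended schedule can be checked mechanically. Transcribing Table 1 (the encoding below is ours) and asserting the properties claimed in the text:

```python
# Each session's discussion groups; members are numbered 1-29,
# with 1-9 the in-house board members.
schedule = {
    "Morning 1":   [[1,3,10,12,26],[2,8,17,21,25],[6,13,22,28],
                    [5,11,18,23,27],[4,14,15,19,24],[7,9,16,20,29]],
    "Morning 2":   [[7,11,13,17,24],[1,4,15,22,27],[8,9,16,18,26],
                    [6,10,14,20,25],[3,5,21,28,29],[2,12,19,23]],
    "Morning 3":   [[4,5,25,29],[6,7,11,19,26],[3,15,17,20,23],
                    [9,12,13,21,24],[2,10,16,18,22],[1,8,14,27,28]],
    "Afternoon 1": [[2,3,6,18,24,26,27,29],[8,9,11,12,14,15,22,25],
                    [1,5,13,16,19,20,21],[4,7,10,17,23,28]],
    "Afternoon 2": [[5,6,7,12,15,18,21],[2,4,11,14,20,26,28],
                    [1,9,17,19,22,24,29],[3,8,10,13,16,23,25,27]],
    "Afternoon 3": [[5,8,20,22,23,24,26],[3,7,9,18,19,25,28],
                    [1,2,10,11,13,15,29],[4,6,12,14,16,17,21,27]],
    "Afternoon 4": [[5,9,10,15,17,26,27],[4,8,12,13,18,19,20,29],
                    [1,6,11,16,23,24,25,28],[2,3,7,14,21,22]],
}

for name, groups in schedule.items():
    # every member appears in exactly one group per session
    members = sorted(m for g in groups for m in g)
    assert members == list(range(1, 30)), name
    # in-house members (1-9) are spread evenly: counts differ by at most 1
    inhouse = [sum(1 for m in g if m <= 9) for g in groups]
    assert max(inhouse) - min(inhouse) <= 1, name
```

Both assertions hold for every session, confirming that the schedule is a valid partition with an even in-house distribution.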
+ +![](images/947a2ea11ead154814f237dc8b5ae5d8ce25af285f644ce85b535254a4cd6c17.jpg) +Figure 4. Badness results for assignments made by the greedy-twiddle algorithm. + +- If there are still in-house cancellations that could not be paired with in-house additions: + +* Remove from each discussion group the member(s) who canceled. +* If this makes the number of in-house members in a discussion group disproportionately low, move an in-house member from a group with more than the optimal number of in-house members to this group. Move as few members as possible, choosing first from members whose schedules have already been changed. Repeat for each discussion group. + +- If there are still in-house additions that could not be paired with in-house member cancellations: For each session, place each added member in a discussion group by following these steps: + +* Eliminate all discussion groups led by a senior officer whom the member has already encountered. +* From the discussion groups remaining, eliminate any group that has more in-house members attending than any other remaining discussion group. +* For each remaining group, determine which member has had the most encounters with the member to be added. Choose the group for which this number of encounters is the lowest. + +- Treat the situation for out-of-house members in analogous fashion. + +# Strengths and Weaknesses + +We detail the strengths and weaknesses of the various algorithms in Table 2. + +Table 2. Strengths and weaknesses of the algorithms. + +
| Algorithm | Strengths | Weaknesses |
|---|---|---|
| Random | very simple, fast, requires no computer | bad schedules, uneven group sizes |
| Greedy | relatively simple, fast on a computer | mediocre schedules, need computer for large problems |
| Greedy-twiddle | very good schedules | slow, difficult to program |
| Secretary's | secretary could do it by hand, many schedules stay unchanged | not as good as completely recomputing the schedule |
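The first step of the secretary's procedure, pairing cancellations with additions of the same type so that as few members as possible see their schedules change, can be sketched as follows (function name and encoding are ours):

```python
def reassign(cancelled, added, schedules):
    """Pair each newly added member with a cancelled member of the same
    type (in-house or out-of-house) and hand over the cancelled member's
    schedule unchanged.  `schedules` maps member -> schedule.
    Returns the cancellations and additions that could not be paired."""
    for old, new in zip(cancelled, added):
        schedules[new] = schedules.pop(old)   # inherit the old schedule
    n = min(len(cancelled), len(added))
    return cancelled[n:], added[n:]
```

Calling this once for the in-house members and once for the out-of-house members handles the paired cases; only the leftover cancellations or additions require the group-by-group adjustments described above.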
+ +# Other Applications + +Although the problem that motivated this model has many details unique to its situation, the model that we have developed has many other applications, such as a similar meeting with different numbers of board members or discussion groups, and also for other situations. + +Consider, for example, the case of Mardi Gras Junior High School. The 120 eighth-grade students there are required to take four core classes: English, mathematics, history, and science. All four classes are offered during each of four class hours. The students also take one of six concurrent home rooms and one of six concurrent study halls. They must attend each of the core classes exactly once, but a student may have the same teacher for home room and study hall. + +The school board believes that the students' educational experience can be enhanced by ensuring that the students are mixed as well as possible. That is, every student should have roughly the same number of classes with every other student. Furthermore, the board feels that the 30 honors students should be distributed so that each class (including home room and study hall) has a proportionate number. The model and algorithms developed for use at the An Tostal Corporation could be applied with minimal changes at Mardi Gras Junior High School. + +# A Greedy Algorithm for Solving Meeting Mixing Problems + +Adrian Corduneanu + +Cyrus C. Hsia + +Ryan O'Donnell + +hsia@math.toronto.edu + +{ a.corduneanu, ryan.odonnell } @utoronto.ca + +University of Toronto + +Toronto, Ontario M5S 3G3 + +Canada + +Advisor: Nicholas A. Derzko + +# Introduction + +We consider the problem of how to best arrange a large number of people into small discussion groups so that the groups are well mixed. It is important to have well-mixed groups because any meeting runs the risk of being controlled or directed by a dominant personality. Thus, we wish to ensure schedules give different mixes of people for each group. 
This problem relates directly to the case of the An Tostal Corporation. The company wants to place its board members in small groups within each session so that the board members are well mixed throughout the day. The company's schedule must also satisfy other constraints:

- At the morning session, there is a senior officer assigned to each group, and no board member is to be placed with any senior officer more than once.
- A percentage of the board members are in-house members, and the company wishes that no group should have a disproportionate number of in-house members.

To solve the problem, we first develop a scoring system for schedules. There must be a schedule or schedules that achieve the best possible score; but the total number of schedules is astronomical, so we cannot check all of them. Consequently, the problem reduces to finding as good a schedule as possible in a short period of time. To find good schedules, we wrote a computer program in C that uses a greedy algorithm. The algorithm places the board members into the schedule one by one, at each step making the schedule placement that gives the best possible score. Our algorithm also uses a switching procedure to improve on the greedy placement at each step.

# Assumptions

- The day is split into a number of sections, and each section has a fixed number of sessions and a fixed number of groups per session. We assume that the senior officer constraint applies in a fixed subset of the sections. For these sections, we assume that the number of senior officers is the same as the number of discussion groups, and that there are at least as many discussion groups in an officer-led section as there are meeting sessions (not assuming this makes the problem unsolvable).
- The secretary in charge of scheduling can change the parameters of our computer program to reflect the discussion day at hand.
+- It is not vitally important to get the perfect mix of group members; close-to-optimal solutions are acceptable. + +# Choosing an Incidence Scoring System + +We need to develop some scoring system to provide a total ordering on the set of all possible configurations; this way, there will be some best configurations, and our goal is a configuration with a score that is as good as possible. Although there are a number of side constraints imposed by the An Tostal Corporation (relating to the in-house members and the senior officers), we assume that our computer algorithm will take care of these and that the main criterion—mixing well—is what our score should measure. + +The obvious first choice for an incidence score is just the incidence sum. However, this scoring system turns out to have a major flaw: The minimal incidence sum is easy to achieve, but in many cases the incidence elements vary widely. An incidence matrix composed mostly of 1s and 2s is better than one with lots of 0s but also lots of 5s, 6s, and 7s; we don't want to minimize just overall incidence, we want to try to minimize everyone's incidence, which presumably involves keeping everyone's incidence about the same. At the other extreme, the variance or standard deviation of the incidence elements in the lower incidence matrix could be used as the scoring system. However, using just variance does nothing to keep the overall incidence low; it merely keeps all the incidence elements close. As a compromise, the scoring system + +Table 1. Definitions and notations (in parentheses, values for the An Tostal problem instance). + +
| Term | Definition |
|---|---|
| Day | The time block that encompasses all the meetings for a given problem. |
| Session | A block of time within a day, to which meetings are devoted. Each person goes to only one meeting during a session. |
| Groups | Actual meetings attended by sets of board members. |
| Section | A set of sessions with the same number of groups per session. |
| Configuration | An assignment of people into groups so that each person is in one and exactly one group per session (sometimes referred to as a schedule). |
| $S$ | The number of sections. (2) |
| $N_1, \ldots, N_S$ | The number of sessions per section. (3, 4) |
| $G_1, \ldots, G_S$ | The number of groups per session. (6, 4) |
| $B$ | The number of people to be scheduled (board members). (29) |
| $I$ | The number of board members who are in-house members. We number the board members $1, \ldots, B$, and $1, \ldots, I$ are the in-house members. (9) |
| $O$ | The number $O$ such that the senior officer constraint applies to sections $1, \ldots, O$. For simplicity, we number the groups in any officer-led section $1, \ldots, G_i$ and regard each officer as leading the same group number for the entire section. Thus, a configuration satisfies the officer constraint if and only if every board member is in a different group number (within each officer-led section). (1) |
| Incidence | The incidence of person X with person Y is the number of times that X is in the same group as Y within a given configuration. |
| Incidence matrix | The incidence matrix for a given configuration is the $B \times B$ matrix $IM = (a_{ij})$, where $a_{ij}$ is the incidence of person $i$ and person $j$. Elements of this matrix are incidence elements. |
| Lower incidence triangle | Since any incidence matrix has zeros down its main diagonal and is symmetric, we often consider just the lower triangle minus the diagonal, i.e., $a_{ij}$ for $j < i$. |
| Incidence score | A rating given to the configuration that reflects how well mixed it is; a function of the configuration's lower incidence triangle. Lower scores are better. |
| Incidence sum | The sum of the incidence elements in the configuration's lower incidence triangle, which gives the total number of common memberships over all groups. |
| Optimal configuration | A configuration with the lowest possible incidence score. |
that we use is a sum of squares:

$$
\text{the incidence score of } IM = \sum_{1 \leq i \leq j \leq B} a_{ij}^{2}.
$$

This scoring system requires the incidence elements to be close to one another. At the same time, if we can bound this score, we can also automatically bound the incidence sum using the Cauchy-Schwarz inequality (see Theorem 3).

# Theoretical Lower Bounds for Scores and Sums

We wish to find a theoretical lower bound for all possible incidence sums for a given day. We first find a closed form for this total.

Theorem 1. For any configuration, the incidence sum is

$$
\sum_{\text{all groups } G} \binom{n_G}{2},
$$

where $n_G$ is the number of people in the group $G$.

Proof: The sum of the incidence elements in the lower incidence matrix of a configuration is the same as the sum over all possible pairs in all the groups. For each group with $n_G$ people, there are $\binom{n_G}{2}$ pairs. The total number of pairs for all groups is the indicated sum.

This closed form allows us to calculate a lower bound in terms of the problem parameters.

Theorem 2. For a given day, the incidence sum of configurations is bounded below by

$$
\frac{1}{2} B^2 \sum_{i=1}^{S} \frac{N_i}{G_i} - \frac{1}{2} B \sum_{i=1}^{S} N_i.
$$

(For An Tostal, this value is 529.25.)

Proof: From the previous theorem, we have that a configuration's incidence sum is

$$
\sum_{\text{all groups } G} \binom{n_G}{2} = \sum_{\text{all groups } G} \frac{n_G^2 - n_G}{2} = \frac{1}{2} \sum_{\text{all groups } G} n_G^2 - \frac{1}{2} B \sum_{i=1}^{S} N_i.
$$

To get a lower bound for this value, we apply the Cauchy-Schwarz inequality [Plank and Williams 1992, 46] to each section, since the number of groups may differ across sections.
We have

$$
\left( \sum 1^2 \right) \left( \sum n_G^2 \right) \geq \left( \sum n_G \right)^2 = B^2 N_i^2,
$$

where the sums are each over all groups $G$ of section $i$. This yields

$$
(G_i N_i) \left( \sum n_G^2 \right) \geq B^2 N_i^2,
\qquad \text{so} \qquad
\sum n_G^2 \geq \frac{N_i}{G_i} B^2,
$$

with equality holding iff all the $n_G$ are equal (this may not be achievable in our discrete case, as these numbers must be integers). So taking the sum of all these inequalities over all the possible sections, we have

$$
\sum_{\text{all groups } G} n_G^2 \geq B^2 \sum_{i=1}^{S} \frac{N_i}{G_i}.
$$

Thus, we reach the conclusion claimed, with equality holding when in each section the groups have the same number of people.

This minimal value cannot be achieved when the number of people cannot be divided evenly among the groups in the sessions. However, by distributing the people as evenly as possible among the groups, the sum will be as small as possible.

Fact. The minimal sum for An Tostal Corporation is 532.

Proof: Distributing the 29 people as evenly as possible among the 6 groups in each morning session, we get groups of 4, 5, 5, 5, 5, and 5 (in some order); for the 4 groups in the afternoon sessions, we get groups of 8, 7, 7, and 7 (in some order). The resulting incidence sum is then

$$
3 \left[ \binom{4}{2} + 5 \binom{5}{2} \right] + 4 \left[ \binom{8}{2} + 3 \binom{7}{2} \right] = 532.
$$

Theorem 3. For a given day, a configuration's incidence score is bounded below by

$$
\frac{B}{2(B-1)} \left( B \sum_{i=1}^{S} \frac{N_i}{G_i} - \sum_{i=1}^{S} N_i \right)^2.
$$

For An Tostal, this value is about 689.9.
Proof: By using the Cauchy-Schwarz inequality and Theorem 2, we have that

$$
\left( \sum_{1 \leq i \leq j \leq B} a_{ij}^2 \right) (1 + \cdots + 1)
\geq \left( \sum_{1 \leq i \leq j \leq B} a_{ij} \right)^2
\geq \left( \frac{1}{2} B^2 \sum_{i=1}^{S} \frac{N_i}{G_i} - \frac{1}{2} B \sum_{i=1}^{S} N_i \right)^2,
$$

so

$$
\left( \sum_{1 \leq i \leq j \leq B} a_{ij}^2 \right) \binom{B}{2}
\geq \frac{1}{4} B^2 \left( B \sum_{i=1}^{S} \frac{N_i}{G_i} - \sum_{i=1}^{S} N_i \right)^2
$$

and

$$
\sum_{1 \leq i \leq j \leq B} a_{ij}^2
\geq \frac{B}{2(B-1)} \left( B \sum_{i=1}^{S} \frac{N_i}{G_i} - \sum_{i=1}^{S} N_i \right)^2.
$$

The first line demonstrates that the incidence score can be used to bound the incidence sum from above. Hence, incidence scores are also bounded below, and incidence sums are squeezed in between. So, if we achieve a small incidence score, we also achieve a small incidence sum.

In our case, the minimum sum of squares cannot be achieved, for reasons similar to those for which the minimum sum could not be achieved. We can still calculate the minimum possible sum of squares by distributing the $B$ people as evenly as possible among the groups of a session. By doing so, we can find a closed form that may be achieved.

Theorem 4. The minimum incidence score possible given a fixed minimum incidence sum $MS$ and a fixed number of people $B$ is

$$
(2d+1)\, MS - d(d+1) \binom{B}{2},
\qquad \text{where} \quad
d = \left\lfloor \frac{MS}{\binom{B}{2}} \right\rfloor.
$$

For An Tostal, this minimum incidence score is 784.

Proof: We wish to make the incidence elements as close as possible, which means we must have a number of them, say $a$, which have value $d$, and the rest, say $b$, which have value $d+1$.
The total $a + b$ must be the total number of pairs of people, $\binom{B}{2}$. The minimum sum of the incidence elements, $MS$, is then $da + (d+1)b$. Solving for $a$ and $b$ in the two equations, we get $b = MS - d\binom{B}{2}$ and $a = (d+1)\binom{B}{2} - MS$. This gives the incidence score the value

$$
a d^2 + b(d+1)^2 = (2d+1)\, MS - d(d+1) \binom{B}{2},
$$

which upon substitution for $a$ and $b$, and some manipulation, yields the desired result.

Fact. Any incidence matrix for the An Tostal case must contain some values greater than 1, even in the optimal distribution.

Proof: By the previous Fact, the optimal distribution has incidence sum 532. The number of distinct pairs of board members is $\binom{29}{2} = 406$. Thus, at least one pair must have an incidence of at least 2.

This fact means that in our results for the An Tostal problem, we expect to see some 2s, and not only 0s and 1s, in any incidence matrix.

# Theoretical Bounds on Computer Run Time

We show that the total number of possibilities is exponential in $B$.

Theorem 5. The total number of possible configurations is

$$
\left( \prod_{i=1}^{S} G_i^{N_i} \right)^B.
$$

For An Tostal, this is $(6^3 4^4)^{29} > 3 \times 10^{137}$.

Proof: Each of the $B$ people may be in any of the $G_i$ groups for any of the $N_i$ sessions, for all possible sections $i$ ranging from 1 to $S$. $\square$

We can do better than this by considering only cases in which every group in a session has at least one member. However, even if we have an even distribution (e.g., 4, 5, 5, 5, 5, 5 / 8, 7, 7, 7 for An Tostal), we have a corresponding bound.

Theorem 6. The total number of possible configurations with even distribution is bounded below by

$$
\prod_{i=1}^{S} \left[ \frac{B!}{\left( \left\lceil \frac{B}{G_i} \right\rceil ! \right)^{G_i}} \right]^{N_i}.
+$$ + +For An Tostal, this is + +$$ +\left[ \frac {2 9 !}{(5 !) ^ {6}} \right] ^ {3} \left[ \frac {2 9 !}{(8 !) ^ {4}} \right] ^ {4} > 3 \times 1 0 ^ {1 0 5}. +$$ + +Proof: In the even distribution, each group in a session of section $i$ has at most $\lceil B / G_i\rceil$ people. Given a group of $n_1 + \dots +n_k$ numbers with $n_1$ 1s, . . . , and $n_k$ + +ks, the total number of ways of rearranging them is given by the multinomial coefficient + +$$ +\left( \begin{array}{c} n _ {1} + \dots + n _ {k} \\ n _ {1}, \ldots , n _ {k} \end{array} \right). +$$ + +Therefore, in each session, the total number of possible group placements is bounded below by + +$$ +\frac {B !}{\left(\left\lceil \frac {B}{G _ {i}} \right\rceil !\right) ^ {G _ {i}}}. +$$ + +Thus, the total number of possible configurations with even distribution is bounded below by the claimed quantity. + +# How We Implement a Greedy Algorithm + +The algorithm that we use to generate configurations has two main ingredients, which we call Greedy Placement and Switching. The principal ingredient is Greedy Placement; Switching is merely a tweak to give slightly better scores. + +Greedy Placement proceeds through the $B$ board members one by one; on the $i$ th iteration, it finalizes the placement of person $i$ in all the groups. The placements are "greedy," that is, we "[make] the choice that looks best at the moment" [Cormen et al. 1996, 329]. At each iteration, the algorithm looks at every possible way to place the person subject to the senior officer restriction and chooses the one that leads to the best possible incidence score. The first $I$ board members are the in-house members, so Greedy Placement distributes the in-house members as evenly as it distributes all the board members. Thus, if the algorithm is successful overall, it also satisfies the criterion that no group should have a disproportionate number of in-house members. + +Finally, we describe the Switching add-on. 
After every iteration of Greedy Placement, we do Switching. Switching looks at our current configuration and tries to find a case where, within a given session, it is possible to switch the placement of two board members and consequently get a better score. If there is such a case, we make the switch and reevaluate the configuration to see if there is another useful switch. We continue making switches until we are in a state from which any switch would be detrimental; then we move on to the next iteration of Greedy Placement. In this way, we get a score that is a local minimum after every iteration. Although many switches could take place at each iteration, causing an unpredictable increase in running time, we found that the average number of switches per iteration was about 1.

Switching introduces two complexities.

- We must make sure not to make any switches that would cause the senior officer restriction to be violated.
- If we allow all other switches, we run the risk of switching in-house members with non-in-house members, which destroys our argument that the number of in-house members per group will not be disproportionately high. To counter this, we restrict Switching to switch in-house members only with in-house members and non-in-house members with non-in-house members.

# Justification for Our Algorithm

- Trying every possible configuration is impossible, as Theorems 5 and 6 show.
- Our algorithm is fast. When the An Tostal day was run on an SGI Challenge, our algorithm took less than 45 sec. Even when it was run on a Pentium 90 MHz with 32 MB RAM, it took only about 7 min. Thus, if any board members don't show up, or extra board members do, the secretary can surely calculate a new assignment in under an hour.
- Our algorithm is flexible. All parameters to the problem (i.e., every variable defined near the beginning of this paper) can be altered simply in a text file.
- Our algorithm always gives a configuration that satisfies the senior officer criterion. Since the configuration satisfies the main criterion of mixing well (see below), it also does not have a disproportionate number of in-house members in any group.
- Our algorithm produces configurations that satisfy the main criterion (mixing well). To demonstrate this, we created some different days (parameter setups). We ran our algorithm for these cases and compared the scores produced by the algorithm with calculated lower bounds from Theorems 2 and 3 (see the Results section below).

# Results

To determine the effectiveness of our algorithm, we ran a number of test days. In all cases, the incidence sum found by the algorithm is very close to the minimal theoretical sum; this means that the total number of common memberships is essentially minimized. Also, in each case the incidence score is quite small. In cases similar to the An Tostal problem, the scores found never exceeded the theoretical bound by more than 14%. Even in a huge test case, the score exceeded the theoretical bound by only 29%. This shows that another important constraint is achieved by our algorithm: Each board member meets others a similar number of times.

We present in Table 2 the results of our greedy algorithm as run on the An Tostal day. Table 3 shows the distribution of incidence elements and gives other data.

# Table 2.

Recommendation to An Tostal. The in-house members are numbers 1 through 9.
**Morning Section**

| Session | Group 1 (Officer 1) | Group 2 (Officer 2) | Group 3 (Officer 3) |
|---|---|---|---|
| Session 1 | 1 4 14 22 25 | 2 9 12 21 28 | 3 10 15 23 27 |
| Session 2 | 2 7 18 19 27 | 1 8 13 23 25 | 4 9 17 20 29 |
| Session 3 | 3 9 17 23 26 | 4 10 19 20 24 | 1 11 12 18 |

| Session | Group 4 (Officer 4) | Group 5 (Officer 5) | Group 6 (Officer 6) |
|---|---|---|---|
| Session 1 | 5 6 18 20 26 | 7 11 17 24 29 | 8 13 16 19 |
| Session 2 | 3 11 16 22 28 | 5 10 14 21 26 | 6 12 15 24 |
| Session 3 | 2 13 14 15 29 | 6 8 16 22 27 | 5 7 21 25 28 |

**Afternoon Section**

| Session | Group 1 | Group 2 | Group 3 | Group 4 |
|---|---|---|---|---|
| Session 4 | 1 5 9 16 23 24 27 | 2 6 10 13 17 22 26 28 | 3 7 12 14 19 20 25 | 4 8 11 15 18 21 29 |
| Session 5 | 1 3 6 17 19 21 27 29 | 2 5 11 15 20 22 | 8 9 10 14 18 24 25 28 | 4 7 12 13 16 23 26 |
| Session 6 | 1 2 10 15 16 17 25 | 7 8 9 20 21 26 27 | 3 5 12 13 18 22 24 29 | 4 6 11 14 19 23 28 |
| Session 7 | 1 7 15 19 22 26 28 29 | 2 3 4 16 18 21 24 | 6 9 10 11 13 25 27 | 5 8 12 14 17 20 23 |
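The configuration in Table 2 can be verified mechanically (transcription and encoding are ours): each session partitions the 29 members, and in the morning section no member ever repeats a group number, which is exactly the officer constraint.

```python
# Morning sessions 1-3 (groups 1-6) and afternoon sessions 4-7 (groups 1-4).
morning = [
    [[1,4,14,22,25],[2,9,12,21,28],[3,10,15,23,27],
     [5,6,18,20,26],[7,11,17,24,29],[8,13,16,19]],
    [[2,7,18,19,27],[1,8,13,23,25],[4,9,17,20,29],
     [3,11,16,22,28],[5,10,14,21,26],[6,12,15,24]],
    [[3,9,17,23,26],[4,10,19,20,24],[1,11,12,18],
     [2,13,14,15,29],[6,8,16,22,27],[5,7,21,25,28]],
]
afternoon = [
    [[1,5,9,16,23,24,27],[2,6,10,13,17,22,26,28],
     [3,7,12,14,19,20,25],[4,8,11,15,18,21,29]],
    [[1,3,6,17,19,21,27,29],[2,5,11,15,20,22],
     [8,9,10,14,18,24,25,28],[4,7,12,13,16,23,26]],
    [[1,2,10,15,16,17,25],[7,8,9,20,21,26,27],
     [3,5,12,13,18,22,24,29],[4,6,11,14,19,23,28]],
    [[1,7,15,19,22,26,28,29],[2,3,4,16,18,21,24],
     [6,9,10,11,13,25,27],[5,8,12,14,17,20,23]],
]

# Every session places each of the 29 members in exactly one group.
for session in morning + afternoon:
    assert sorted(m for g in session for m in g) == list(range(1, 30))

# Officer constraint: in the morning section, each member sits in a
# different group number in each of the three sessions.
for member in range(1, 30):
    seats = [next(i for i, g in enumerate(s) if member in g) for s in morning]
    assert len(set(seats)) == 3
```

Both checks pass, confirming the claim below that the officer constraint is indeed satisfied.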
+ +# Table 3. + +Distribution of incidence elements. + +
| Incidence | 0 | 1 | 2 | 3 | 4+ |
|---|---|---|---|---|---|
| Number of incidence elements | 33 | 226 | 134 | 13 | 0 |

Mean: 1.39; Standard Deviation: 0.67
Incidence Score: 879; Incidence Score Lower Bound: 784
Incidence Sum: 533; Incidence Sum Lower Bound: 532
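The bounds quoted in Table 3 can be reproduced directly from Theorems 2-4 (variable names are ours):

```python
from math import comb

B = 29                  # board members
N = [3, 4]              # sessions per section (morning, afternoon)
G = [6, 4]              # groups per session in each section

# Theorem 2: lower bound on the incidence sum (529.25 for An Tostal).
sum_lb = 0.5 * B**2 * sum(n / g for n, g in zip(N, G)) - 0.5 * B * sum(N)
assert sum_lb == 529.25

# The Fact: minimal sum achievable with integer group sizes is 532.
def even_sizes(people, groups):
    q, r = divmod(people, groups)
    return [q] * (groups - r) + [q + 1] * r

MS = sum(n * sum(comb(s, 2) for s in even_sizes(B, g)) for n, g in zip(N, G))
assert MS == 532

# Theorem 3: lower bound on the incidence score (about 689.9).
score_lb = B / (2 * (B - 1)) * (B * sum(n / g for n, g in zip(N, G)) - sum(N)) ** 2
assert abs(score_lb - 689.9) < 0.05

# Theorem 4: minimum score given the fixed minimal sum MS (784 for An Tostal).
pairs = comb(B, 2)          # 406 pairs of board members
d = MS // pairs             # d = 1 here
score_min = (2 * d + 1) * MS - d * (d + 1) * pairs
assert score_min == 784
```

The achieved score of 879 thus exceeds the 784 lower bound by about 12%, consistent with the 14% figure reported in the Results section.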
+ +Note that the officer constraint is indeed satisfied, the in-house members are in as even a distribution as possible, and that the resultant score was very close to its theoretical lower bound. + +# Limitations of Our Model + +- Our algorithm does not guarantee an optimal incidence score. We have found cases where an optimal solution is known but is not found by our algorithm. +- Our greedy algorithm with switching may take too much time for very large parameter sizes. +- Our algorithm also does not guarantee a minimal incidence sum. On the other hand, the incidence sums we got in our test were very close and should really be good enough. +- We were able to provide only theoretical lower bounds on incidence scores; these lower bounds may not be achievable. +- Our algorithm is not easy to change. A secretary could not easily change it to use a different incidence scoring system or to consider additional constraints. + +# Conclusion + +We generalized the problem to allow for any number of board members and in-house members; any values for sections, sessions, and groups; and any number of officer-led sessions. We defined a scoring system to evaluate possible configurations that successfully encapsulated the well-mixing criterion. Following this, we determined theoretical lower bounds on scores in terms of the problem parameters. We created a computer program that uses a modified greedy algorithm to come up with good schedules according to our scoring system; the program also took care of the other An Tostal constraints. We used this algorithm to provide the An Tostal Corporation with a well-mixed schedule for their original problem. Finally, we ran tests to verify that our algorithm came up with schedule scores close to the theoretical lower bounds. We believe that the results more than adequately achieved the criteria specified by the original problem, and that our algorithm is a valuable tool for use in scheduling similar planning days. 
+ +# References + +Cormen, Thomas H., et al. 1996. Introduction to Algorithms. Cambridge: MIT Press. + +Plank, A.W., and N.H. Williams. 1992. Mathematical Toolchest. Canberra: Australian International Centre for Mathematics Enrichment. + +# Judge's Commentary: The Outstanding Discussion Groups Papers + +Donald E. Miller + +Department of Mathematics + +Saint Mary's College + +Notre Dame, IN 46556 + +dmiller@saintmarys.edu + +Making the An Tostal situation particularly open-ended is the fact that a "good mix" of board members is not clearly defined. This also makes it a particularly realistic problem. In practice, it is not uncommon that those requesting the solution don't know exactly what they want or what is possible. They look to the modeler for these answers and related suggestions. While the problem statement provides some guidance of the mixing desired, it allows for a lot of interpretation by the modeler. Thus, in order to evaluate any set of board member assignments to the seven sessions it is imperative that some sort of measure of the "goodness" of a particular solution be identified. + +To establish this measure, or objective function, it is necessary to make several assumptions. These assumptions can be made by answering questions such as: + +- Is the third meeting of two board members worse than the second? +- Is the second meeting of two in-house members worse than the second meeting of two regular members? +- How should the second meeting of an in-house member with a regular member be evaluated? +- Does increasing the time between sessions in which two members are in the same group reduce the "cost?" +- How does the "cost" of having two board members fail to meet compare to that of having them meet more than once? + +The UMAP Journal 18 (3) (1997) 343-346. © Copyright 1997 by COMAP, Inc. All rights reserved. 
Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice. Abstracting with credit is permitted, but copyrights for components of this work owned by others than COMAP must be honored. To copy otherwise, to republish, to post on servers, or to redistribute to lists requires prior permission from COMAP.

The name "An Tostal" has no real meaning as far as the problem is concerned. It was the name of the spring-quarter weekend of celebration just before final exams at Kent State University, where I studied as an undergraduate.

- Is common membership in the form of A-B-C worse than common membership of the form A-B and C-D?

It should not come as a surprise that wide variations in these assumptions, and thus in how to measure success, were employed by the modeling teams. The quality and justification of such assumptions were weighted heavily by the judges in their evaluation of papers. While widely varying assumptions were reasonable, others were considered unreasonable, including:

- It is better to have a member skip a session than to be in a session with the same member again.
- To minimize common membership, the number of groups for the afternoon sessions may be increased from four to five.
- To ensure that everyone meets at least once, there will be only one group for the seventh session.

A common weakness of papers in past competitions has been their failure to provide both a functional model and a solution. Frequently, they have provided either a "brute force" solution or a simulated solution without a model. At the other extreme are those providing creative models without demonstrating their functionality in solving the problem at hand.
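The kinds of assumptions listed in these questions can be encoded directly in an objective function. The following sketch is purely illustrative: the weights, the squared penalty for repeat meetings, and the extra cost for repeat in-house pairings are my assumptions, not the judges' or any team's actual choices.

```python
from itertools import combinations

def schedule_cost(schedule, members, in_house,
                  w_repeat=1.0, w_house=2.0, w_never=1.0):
    """Penalize repeat meetings superlinearly, weight repeat meetings of
    in-house pairs more heavily, and charge for pairs that never meet."""
    meets = {p: 0 for p in combinations(sorted(members), 2)}
    for session in schedule:             # session = list of groups
        for group in session:
            for p in combinations(sorted(group), 2):
                meets[p] += 1
    cost = 0.0
    for (a, b), k in meets.items():
        if k == 0:
            cost += w_never              # the pair never interacts
        elif k > 1:
            penalty = w_repeat * (k - 1) ** 2   # third meeting worse than second
            if a in in_house and b in in_house:
                penalty *= w_house
            cost += penalty
    return cost
```

With the in-house weight active, the same schedule costs more when the repeated pair is an in-house pair, mirroring the question of whether such repeats should be judged worse.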
+ +In the statement of this year's problem, we attempted to avoid these pitfalls by calling for a solution to the current problem as well as a simple algorithm that can be used in the event that the problem parameters are changed. Thus, papers providing only a "brute force" solution for the existing problem were screened out early in the process. The judges were unanimous in their opinion that the general quality of the entries was improved over those of a year ago. There seemed to be an increased understanding and a willingness to discuss what form a realistic solution might take as well as the related mathematics and bounds on solutions. + +While methods of solution varied from the brute-force listing, with creative matrix methods of accounting, to orthogonal Latin squares, the most common methods were simulated annealing and the greedy algorithm. However, few teams addressed the generalized problem for future meetings with any or all parameters changed. One team that used the greedy algorithm noted that local optimization—that is, optimization at the session level—does not guarantee global optimization, and thus it may be desirable to allow a second encounter of two members early in the day. + +Several teams used a different algorithm to make last-minute changes than to make the initial assignments. One team doing this made a conscious effort to alter drastically the schedule of a few people rather than modify slightly several schedules. One judge commented that the team must have contained a social scientist, while another retorted, "Or someone who had worked with college faculties." Another team noted that in measuring success it is desirable to consider balance (balance of membership of groups) and mixture (pairs that + +meet a disproportionate number of times). To accomplish this, the geometric mean of these two measures was minimized. 
Another team decided that if a pair of members must meet twice during the day, this would be less "costly" if the time between these two meetings were maximized. + +The four papers judged outstanding had many similarities. All decided it would be desirable to keep the size of the groups for each session as equal as possible. With this agreement, most observed that 532 pairings would need to be made in order to complete the day's schedule, a value achieved in their final solution only by the team from East China University of Science and Technology. Some argued effectively that uneven group sizes increase the number of pairings needed. All four teams recognized that 406 pairings are necessary if each board member were to meet each other board member. That is, for a group of 29 people it would take 406 handshakes for each to shake hands with every other person. In fact, all these teams reported the number of times each of these pairings (handshakes) occurred in their final or "best" solution, as shown in Table 1. For example, the University of Toronto team reported that 33 of the pairs never met, 226 of the pairs met exactly once, 134 of the pairs met twice, and 14 of the pairs met three times. + +Table 1. Occurrences of numbers of pairs in final solutions. + +
| Team | 0 | 1 | 2 | 3 |
| --- | --- | --- | --- | --- |
| East China Univ. of Science and Tech. | 26 | 253 | 102 | 25 |
| Macalester College | 40 | 214 | 138 | 14 |
| Rose-Hulman Inst. of Tech. | 32 | 218 | 152 | 4 |
| University of Toronto | 33 | 226 | 134 | 14 |
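The 406-pair count is just C(29, 2), and tallies like those in Table 1 can be recomputed directly from any proposed schedule. A small illustrative helper (the names are mine, not from any paper):

```python
from itertools import combinations
from math import comb

def pair_counts(schedule, n_members):
    """Histogram of how many pairs met 0, 1, 2, ... times over all sessions."""
    meets = {p: 0 for p in combinations(range(n_members), 2)}
    for session in schedule:
        for group in session:
            for p in combinations(sorted(group), 2):
                meets[p] += 1
    hist = {}
    for k in meets.values():
        hist[k] = hist.get(k, 0) + 1
    return hist

# 29 board members give comb(29, 2) = 406 distinct pairs ("handshakes"),
# so each row of Table 1 must account for 406 pairs in total.
assert comb(29, 2) == 406
```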
In spite of the fact that the teams used quite different objective functions, these results are quite similar. Some of the differences might be predicted from the differences in objective functions. The team from Rose-Hulman Institute of Technology had only four pairs meeting three times, since their penalty for this situation was modeled as powers of four. The team from Macalester College had the largest number of pairs that never met, 40. This is because they were the only one of the four teams that looked beyond individual pairs in developing their objective function. They even reported that for their solution, "No two discussion groups have more than two members in common."

Three of the four Outstanding papers used a form of the greedy algorithm to obtain a solution. The other, from Macalester College, used simulated annealing. This paper stood out for its explanation of how simulated annealing is used and for its strong objective function. This function was the sum of four objectives and included a penalty for more than one repeat pairing in a group. In addition, the paper contained a nice proof that if no pair of board members is together more than two times, then some pair is never together.

The paper from the Rose-Hulman team stood out for its comparison of three solution methods and the fact that it was well written for the intended audience. Schedules made randomly, with the greedy algorithm, and with a modified greedy algorithm were compared statistically. The random method could then be used as a baseline for comparison of the other methods.

Finally, the paper from the University of Toronto team provided some excellent proofs on bounds for solutions. These results were proven in general and then demonstrated for the An Tostal situation.

# About the Author

Donald Miller is Associate Professor and Chair of Mathematics at Saint Mary's College.
He has served as an associate judge of the MCM for five years and prior to that mentored two Meritorious teams. He has done considerable consulting and research in the areas of modeling and applied statistics. His current research, with a colleague in political science, involves the statistical analysis of the politics related to the adoption of state lotteries and state approval of other forms of gambling. He is currently a member of SIAM's Education Committee and Past President of the Indiana Section of the Mathematical Association of America.

# Practitioner's Commentary: The Outstanding Discussion Groups Papers

Vijay Mehrotra

CEO

Onward Inc.

888 Villa St., Suite 210

Mountain View, CA 94041

vijay@onward-net.com

http://www.onward-net.com

As an operations management consultant, I am used to dealing with difficult problems, incomplete information, and unclear objectives. My profession requires

- a willingness to wrestle with such assignments by understanding the key business goals and issues;
- a desire to solve the problems by finding the right roles for the right people, models, processes, and information systems;
- and an ability to "sell" our solutions by presenting both our methods and our results clearly to diverse and demanding audiences.

That's why I loved this problem.

This problem is a decidedly nontrivial combinatorial optimization problem with lots of different dimensions. It features 7 different time slots, with multiple concurrent meetings per time slot, 3 different classes of people, and a whole lot of restrictions on how these people are to be scheduled. Moreover, the students were also asked to create a solution method that

- could be run by an individual with no technical knowledge,
- could be re-run in less than one hour if inputs were changed slightly, and
- was sufficiently general to tackle slightly different versions of the same problem, with different parameters or more general constraints.
In summary, it is a tough, practical problem requiring creative thinking, thorough analysis, and diverse skills.

That's why I was so impressed with the winning papers (and with several other high-quality submissions).

Because this is not a cookie-cutter problem, there is no "right" way to solve it. The problem's complexity—different types of board members, different sessions with different constraints, different parameter values—prevents teams from using a standard modeling framework and turning the crank. Accordingly, the best papers, including the winners published here, tackled the problem with many different methods, including simulated annealing, greedy algorithms, graph theory, and integer programming.

From the practitioner perspective, this is a key aspect of what the MCM should be teaching: Though the vast majority of the academic curriculum in mathematics is organized around specific methods (e.g., multivariable calculus, probability theory, linear programming, etc.), the actual problems that we face as practitioners are rarely well defined and often require us to stretch beyond what has been presented to us in the classroom. The top submissions in this competition reflect an appreciation for this fact, revealing creativity in the way in which different models and methods are leveraged to tackle a difficult and very real type of problem.

Young practitioners often become discouraged at the lack of direct application (and direct appreciation) of their freshly minted skills. Buried in large organizations or saddled with narrow responsibilities, many leave the mathematical sciences for other pursuits. This is a distressing outcome, because of the following paradox: When models do not fit real problems and decisions, it is often merely evidence that the problem is difficult, which is precisely why we, and our customers, need structured frameworks and models to help solve them! Yet too few of our graduates understand this notion.
+ +The skills that mathematics students possess, even at what most faculty members consider to be an elementary level, are powerful problem-solving tools, when applied creatively and thoughtfully. In addition to modeling skills and mathematical insights, these tools are brought to life by a strong understanding of the business context, by the ability to use the computer to actually solve the problem, and also by the ability to effectively communicate the nature of the problem, the description of the solution methodology, and the results of the analysis. + +However, the same mathematical tools are of limited value when detached from actual problems and viewed in isolation. In this year's competition, papers which looked for a simple "turnkey" solution, or conversely solved the mixing problem narrowly and without regard to extensions or modifications, were not evaluated favorably. The nonstandard and dynamic nature of the problem is an important aspect that had to be addressed in order for a submission to impress the judges in the competition, just as it would in practice in industry and business. + +Conversely, in almost all of the highly rated entries, there were a number of + +characteristics that appeared over and over again. These common themes are evident in all of the papers published here, and are discussed below. + +First of all, the best papers all demonstrate a clear understanding of the problem and its competing objectives. All of the desired elements (minimal common membership, maximal interaction between different board members, in-house representation, senior officer group restriction) are explicitly included in the analysis, whether represented as model constraints, as part of an objective function, or as part of the post-solution verification. 
These award-winning submissions all had a good sense of how these different elements of the problem were in conflict with one another, while also developing solution techniques to reconcile these competing goals. + +Once again, in practice, this is something that we face on a regular basis. In countless presentations throughout my career, I have faced questions like "How did you account for X?" and "Where does Y fit into what you did?" In today's world, where the volume of data grows ever faster but valuable information and knowledge are increasingly hard to find, it is a major challenge for analysts to determine what to include, what not to include, and why. As Einstein said, "Things should be made as simple as possible, but not any simpler." The process of identifying the key aspects of a problem, decidedly a black art, is a huge part of what this competition offers to its participants. + +Another common theme in this year's winning entries was an understanding of the power of good abstraction. As you read each of the papers published here, you will see a precise and well-presented mathematical formulation of the problem that they propose to solve, along with a clear description of how this abstract problem formulation relates to the "real" problem that is being addressed. In turn, the quality of the abstraction that was selected is directly related to + +- the adaptability of the solution to different problems or slightly different conditions, and +- the computational feasibility of the selected solution method. + +Note that there were several elegant formulations that couldn't be solved and many clever solutions that couldn't be extended. None of those papers appears here. + +Finally, each of the winning entries took the time to examine critically the quality of the scheduling solution generated by their modeling methods. 
It is challenging to define a standard for what a "best" solution is, yet this type of yardstick is essential for assessing how well a specific method works. Once more, this is something that we as practitioners struggle with, both in trying to determine how successful we have been and in identifying areas where we can improve. + +While my primary purpose is to celebrate and illuminate the best of the best of this year's papers, I think it is important to step back and examine the gauntlet that is the MCM. Competitions can be frightening and/or overwhelming, especially when we don't quite know what we're doing, we've got very + +little time to do it, and we've no choice but to work with other people to get it done. Contests can bring out the best in us, especially when we are desperately trying to do our best, for we can discover a deeper-than-imagined capacity for hypothesizing, learning, assessing, analyzing, and cooperating. In some sense, the MCM is just an extreme case of what we struggle with today in our projects and in our careers. + +Today, in business and in life, we typically don't have all the information about the problems, don't always know who the judges are and what they are thinking, aren't sure what the absolute best approach is, and are perpetually time-constrained. Our choices are clear: either not to participate (or go through the motions half-heartedly), because of all of the uncertainties; or to dive in, think hard, work with our teammates, struggle and fall down a few times, take our best shot, clean up our mess as best we can, and explain clearly what we did and why we did it. The MCM gives students a chance to go through this experience at a relatively young age, to make use of things that they have learned already, and to learn a good deal more simply by going through the process. + +# About the Author + +Dr. Vijay Mehrotra is the co-founder and CEO of Onward, an operations management consulting firm based in Mountain View, CA. 
He has been a management consultant since 1987, specializing in the application of appropriate mathematical models to key business problems. He has worked with clients in many industries, including semiconductor manufacturing, call center operations management, container shipping, electric power, and sales and marketing management. Vijay holds a Ph.D. in Operations Research from Stanford University and a B.A. in Mathematics and Economics from St. Olaf College.

# The UMAP Journal

Vol. 19, No. 3

Publisher

COMAP, Inc.

Executive Publisher

Solomon A. Garfunkel

Editor

Paul J. Campbell

Campus Box 194

Beloit College

700 College Street

Beloit, WI 53511-5595

campbell@beloit.edu

On Jargon Editor

Yves Nievergelt

Department of Mathematics

Eastern Washington University

Cheney, WA 99004

ynievergelt@ewu.edu

Reviews Editor

James M. Cargal

Mathematics Department

Troy State University

Montgomery

P.O. Drawer 4419

Montgomery, AL 36103

JMCargal@aol.com

Development Director

Laurie W. Aragon

Creative Director

Roger Slade

Production Manager

George W. Ward

Project Manager

Roland Cheyney

Copy Editors

Seth Maislin

Pauline Wright

Distribution Coordinator

Kevin Darcy

Production Secretary

Gail Wessell

Graphic Designers

Ben Blevins

Daiva Kiliulis

# Associate Editors

Don Adolphson

Ron Barnes

Arthur Benjamin

James M. Cargal

Murray K. Clayton

Courtney S. Coleman

Linda L. Deneen

Leah Edelstein-Keshet

James P. Fink

Solomon A. Garfunkel

William B. Gearhart

William C. Giauque

Richard Haberman

Charles E. Lienert

Peter A.
Lindstrom + +Walter Meyer + +Gary Musser + +Yves Nievergelt + +John S. Robertson + +Garry H. Rodrigue + +Ned W. Schillow + +Philip D. Straffin + +J.T. Sutcliffe + +Donna M. Szott + +Gerald D. Taylor + +Maynard Thompson + +Ken Travers + +Gene Woolsey + +Brigham Young University + +Univ. of Houston-Downtown + +Harvey Mudd College + +Troy State Univ.-Montgomery + +Univ.of Wisconsin—Madison + +Harvey Mudd College + +Univ. of Minnesota—Duluth + +University of British Columbia + +Gettysburg College + +COMAP, Inc. + +California State Univ.-Fullerton + +Brigham Young University + +Southern Methodist University + +Metropolitan State College + +North Lake College + +Adelphi University + +Oregon State University + +Eastern Washington University + +Georgia College + +Lawrence Livermore Laboratory + +Lehigh Carbon Comm. College + +Beloit College + +St. Mark's School, Dallas + +Comm. Coll. of Allegheny County + +Colorado State University + +Indiana University + +Univ. of Illinois and NSF + +Colorado School of Mines + +# Subscription Rates for 1998 Calendar Year: Volume 19 + +Individuals subscribe to The UMAP Journal through COMAP's MEMBERSHIP PLUS. This subscription includes quarterly issues of The UMAP Journal, our annual collection UMAP Modules: Tools for Teaching, our organizational newsletter Consortium, a $10\%$ discount on COMAP materials, and a choice of free materials from our extensive list of products. + +(Domestic) #MP9920 $64 + +# MEMBERSHIP PLUS FOR INDIVIDUAL SUBSCRIBERS + +(Foreign) #MP9921 $74 + +Institutions can subscribe to the Journal through either Institutional Membership or a Library Subscription. Institutional Members receive two copies of each of the quarterly issues of The UMAP Journal, our annual collection UMAP Modules: Tools for Teaching, and our organizational newsletter Consortium. They also receive a $10\%$ discount on COMAP materials and a choice of free materials from our extensive list of products. 
+ +(Domestic) #UJ9940 $165 + +# INSTITUTIONAL MEMBERSHIP SUBSCRIBERS + +(Foreign) #UJ9941 $185 + +The Library Subscription includes quarterly issues of The UMAP Journal and our annual collection UMAP Modules: Tools for Teaching. + +(Domestic) #UJ9930 $140 + +# LIBRARY SUBSCRIPTIONS + +(Foreign) #UJ9931 $160 + +To order, send a check or money order to COMAP, or call toll-free 1-800-77-COMAP (1-800-772-6627). + +The UMAP Journal is published quarterly by the Consortium for Mathematics and Its Applications (COMAP), Inc., Suite 210, 57 Bedford Street, Lexington, MA, 02173, in cooperation with the American Mathematical Association of Two-Year Colleges (AMATYC), the Mathematical Association of America (MAA), the National Council of Teachers of Mathematics (NCTM), the American Statistical Association (ASA), the Society for Industrial and Applied Mathematics (SIAM), and The Institute for Operations Research and the Management Sciences (INFORMS). The Journal acquaints readers with a wide variety of professional applications of the mathematical sciences and provides a forum for the discussion of new directions in mathematical education (ISSN 0197-3622). + +Second-class postage paid at Boston, MA + +and at additional mailing offices. + +Send address changes to: + +The UMAP Journal + +COMAP, Inc. + +57 Bedford Street, Suite 210, Lexington, MA 02173 + +© Copyright 1997 by COMAP, Inc. All rights reserved. + +# Table of Contents + +# Publisher's Editorial + +A Time for Reflection Solomon A. Garfunkel 185 + +# Modeling Forum + +Results of the 1998 Mathematical Contest in Modeling Frank R. Giordano 189 + +# The Scanner Problem + +A Method for Taking Cross Sections of Three-Dimensional Gridded Data +Kelly Slater Cline, Kacee Jay Giger, and Timothy O'Conner. 
211 + +A Model for Arbitrary Plane Imaging, or the Brain in Pain Falls +Mainly on the Plane +Jeff Miller, Dylan Helliwell, and Thaddeus Ladd 223 + +A Tricubic Interpolation Algorithm for MRI Image Cross Sections +Paul Cantrell, Nick Weininger, and Tamás Németh-Csöri. 237 + +MRI Slice Picturing +Ni Jiang, Chen Jun, and Li Ling. 255 + +Judge's Commentary: The Outstanding Scanner Papers William P. Fox 273 + +Proposer's Commentary: The Outstanding Scanner Papers Yves Nievergelt. 277 + +# The Grade Inflation Problem + +Alternatives to the Grade Point Average for Ranking Students +Jeffrey A. Mermin, W. Garrett Mitchener, and John A. Thacker 279 + +A Case for Stricter Grading +Aaron F. Archer, Andrew D. Hutchings, and Brian Johnson 299 + +Grade Inflation: A Systematic Approach to Fair Achievement Indexing +Amanda M. Richardson, Jeff P. Fay, and Matthew Galati 315 + +Judge's Commentary: The Outstanding Grade Inflation Papers Daniel Zwillinger 323 + +Practitioner's Commentary: The Outstanding Grade Inflation Papers Valen E. Johnson 329 + +# Publisher's Editorial + +# A Time For Reflection + +Solomon A. Garfunkel + +Executive Director + +COMAP, Inc. + +57 Bedford St., Suite 210 + +Lexington, MA 02173 + +s.garfunkel@mail.comap.com + +Don't worry, this is not yet another Y2K (Year 2000) sentimental musing of an aging mathematics educator. It's just that as I (an aging mathematics educator) write this, we are reviewing the blue lines for the Mathematics: Modeling Our World (ARISE) Course 3 high school textbook. The publication of this text represents the culmination of six years effort (seven, if you count writing the proposal). While we are still working on Course 4, we can quite clearly see the light at the end of this particular tunnel. But more importantly, it is time to take a clear look at where we stand, and here I mean the real we. + +By the end of next year, all of the U.S. national K-12 comprehensive curriculum projects will be well out and published. 
Soon we will begin to see a new generation of students, those who have taken one of Everyday Math, Investigations, Connected Math, Math in Context, ARISE, Core-Plus, or IMP. These students will be our entering undergraduate students. And what will they look like? How different will they be?

They'll be better! They'll have handled a graphing calculator from 4th grade on. They'll know what residuals mean and how to look at messy data. They'll never ever ask, "What's this good for?" They won't be afraid to attack a problem because they don't know how to solve it before they start. They will be mathematical modelers, looking for new means of attack. And yes, they will have solid symbol manipulation skills. I firmly believe that these students will challenge us in much the same way that computer science students challenged their faculties some 25 to 30 years ago, when the world moved from mainframes to PCs.

As the title of this editorial suggests, I have been reflecting on COMAP's next steps. The creation of a complete high school curriculum is a major step in the implementation of reform. Where are the necessary next steps? The common wisdom says that the next major effort must be directed towards in-service education for the K-12 teachers, to better prepare them to present students with this incarnation of change. Certainly, there is much that needs to be done in this arena, but I believe that the important answers lie elsewhere. They lie in the graduate and undergraduate math classrooms.

We cannot always be in this cycle of changing the curriculum, then changing the teachers to teach the new curriculum. The truest truism of education is that people teach as they were taught. We need to change the way we are taught. This means at all levels, including how we are taught as undergraduates and, yes, even as graduate students. For now we are in a transitional phase—teachers teaching curricula they did not take. That will soon change.
People decide to teach mathematics because they enjoy it and are good at math in school. That math will soon be an ARISE-type curriculum.

So, it is time to work seriously on the undergraduate curriculum, to create classroom experiences that will continue the excitement of the K-12 experiences and build a core of ambassadors for mathematics. These will be our future teachers and our future users of mathematics. Just as it is long past time to stop thinking of mathematics in a layer-cake approach—algebra, geometry, trigonometry—it is also long past time to stop thinking of the style of teaching mathematics as grade-level dependent.

With the A Nation At Risk report [1983], we convinced Congress (and I suspect ourselves) that we had the finest undergraduate educational system in the world, and all we needed to do was to "fix" K-12. The truth was and is much subtler. Undergraduate mathematics education needs work. It needs new courses and pedagogies that reflect the best aspects of reform.

When we finally begin to see mathematics education as one common enterprise, interconnected at all levels—with graduate school affecting elementary education and the elementary schools determining how our youngest students do or do not go on to further study—then we will have completed this cycle of reform.

And then it will be time to start over again.

# Reference

United States National Commission on Excellence in Education. 1983. A Nation at Risk: The Imperative for Educational Reform. Washington, DC: Superintendent of Documents, U.S. Government Printing Office.

# About the Author

Sol Garfunkel received his Ph.D. in mathematical logic from the University of Wisconsin in 1967. He was at Cornell University and at the University of Connecticut at Storrs for eleven years and has dedicated the last 20 years to research and development efforts in mathematics education. He has been the Executive Director of COMAP since its inception in 1980.
He has directed a wide variety of projects, including UMAP (Undergraduate Mathematics and Its Applications Project), which led to the founding of this Journal, and HiMAP (High School Mathematics and Its Applications Project), both funded by the NSF. For Annenberg/CPB, he directed three telecourse projects: For All Practical Purposes (in which he appeared as the on-camera host), Against All Odds: Inside Statistics, and In Simplest Terms: College Algebra. He is currently co-director of the Applications Reform in Secondary Education (ARISE) project, a comprehensive curriculum development project for secondary school mathematics.

# Modeling Forum

# Results of the 1998 Mathematical Contest in Modeling

Frank Giordano, MCM Director

COMAP, Inc.

57 Bedford St., Suite 210

Lexington, MA 02173

f.giordano@mail.comap.com

# Introduction

A total of 472 teams of undergraduates, from 246 institutions in 8 countries, spent the second weekend in February working on applied mathematics problems. They were part of the fourteenth Mathematical Contest in Modeling (MCM). On Friday morning, the MCM faculty advisor opened a packet and presented each team of three students with a choice of one of two problems. After a weekend of hard work, typed solution papers were mailed to COMAP on Monday. Nine of the top papers appear in this issue of The UMAP Journal.

Results and winning papers from the first thirteen contests were published in special issues of Mathematical Modeling (1985-1987) and The UMAP Journal (1985-1997). The 1994 volume of Tools for Teaching, commemorating the tenth anniversary of the contest, contains all of the 20 problems used in the first ten years of the contest and a winning paper for each. Limited quantities of that volume and of the special MCM issues of the Journal for the last few years are available from COMAP.
# Problem A: The Scanner Problem

# Introduction

Industrial and medical diagnostic machines known as Magnetic Resonance Imagers (MRI) scan a three-dimensional object, such as a brain, and deliver their results in the form of a three-dimensional array of pixels. Each pixel consists of one number, indicating a color or a shade of gray that encodes a measure of water concentration in a small region of the scanned object at the location of the pixel. For instance, 0 can picture high water concentration in black (ventricles, blood vessels), 128 can picture a medium water concentration in gray (brain nuclei and gray matter), and 255 can picture a low water density in white (lipid-rich white matter consisting of myelinated axons). Such MRI scanners also include facilities to picture on a screen any horizontal or vertical slice through the three-dimensional array (slices are parallel to any of the three Cartesian coordinate axes).

Algorithms for picturing slices through oblique planes, however, are proprietary. Current algorithms

- are limited in terms of the angles and parameter options available,
- are implemented only on heavily used dedicated workstations,
- lack input capabilities for marking points in the picture before slicing, and
- tend to blur and "feather out" sharp boundaries between the original pixels.

A more faithful, flexible algorithm implemented on a personal computer would be useful

- for planning minimally invasive treatments;
- for calibrating the MRI machines;
- for investigating structures oriented obliquely in space, such as post-mortem tissue sections in animal research;
- for enabling cross sections at any angle through a brain atlas consisting of black-and-white line drawings.

To design such an algorithm, one can access the values and locations of the pixels but not the initial data gathered by the scanner.
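The kind of slicing facility described above can be sketched in a few lines. The following illustration is not part of the problem statement: the toy phantom array, the plane parameterization by an origin and two in-plane direction vectors, and the nearest-neighbor lookup are all assumptions chosen for brevity.

```python
import numpy as np

def slice_plane(A, origin, u, v, rows, cols):
    """Sample the 3-D array A on the plane origin + r*u + c*v.

    Uses nearest-neighbor lookup, so original gray-scale values are
    copied rather than blended; samples falling outside A become 0.
    """
    origin = np.asarray(origin, float)
    u, v = np.asarray(u, float), np.asarray(v, float)
    out = np.zeros((rows, cols), dtype=A.dtype)
    for r in range(rows):
        for c in range(cols):
            idx = np.round(origin + r * u + c * v).astype(int)
            if np.all(idx >= 0) and np.all(idx < A.shape):
                out[r, c] = A[tuple(idx)]
    return out

# Toy phantom: density 200 inside a centered ball, 20 outside.
n = 21
x, y, z = np.indices((n, n, n)) - n // 2
A = np.where(x*x + y*y + z*z <= 36, 200, 20).astype(np.uint8)

# An oblique slice at 45 degrees through the center of the array.
img = slice_plane(A, origin=(10, 10, 10),
                  u=(0.7071, 0.7071, 0.0), v=(0.0, 0.0, 1.0),
                  rows=15, cols=15)
print(img[0, 0])  # the plane passes through the ball's center: prints 200
```

Nearest-neighbor lookup is the crudest possible choice: it preserves the original gray levels exactly but produces blocky slices, which is precisely the trade-off (fidelity versus smoothness) that the problem asks teams to negotiate.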
+ +# Problem + +Design and test an algorithm that produces sections of three-dimensional arrays by planes in any orientation in space, preserving the original gray-scale values as closely as possible. + +# Data Sets + +The typical data set consists of a three-dimensional array $A$ of numbers $A(i,j,k)$ , where $A(i,j,k)$ is the density of the object at the location $(x,y,z)_{i,j,k}$ . Typically, $A(i,j,k)$ can range from 0 through 255. In most applications, the data + +set is quite large. Teams should design data sets to test and demonstrate their algorithms. The data sets should reflect conditions likely to be of diagnostic interest. Teams should also characterize data sets that limit the effectiveness of their algorithms. + +# Summary + +The algorithm must produce a picture of the slice of the three-dimensional array by a plane in space. The plane can have any orientation and any location in space. (The plane can miss some or all data points.) The result of the algorithm should be a model of the density of the scanned object over the selected plane. + +# Problem B: The Grade Inflation Problem + +# Background + +Some college administrators are concerned about the grading at A Better Class (ABC) College. On average, the faculty at ABC have been giving out high grades (the average grade now given out is an A—), and it is impossible to distinguish between the good and the mediocre students. The terms of a very generous scholarship only allow the top $10\%$ of the students to be funded, so a class ranking is required. + +The dean had the thought of comparing each student to the other students in each class, and using this information to build up a ranking. For example, if a student obtains an A in a class in which all students obtain an A, then this student is only "average" in this class. On the other hand, if a student obtains the only A in a class, then that student is clearly "above average." 
Combining information from several classes might allow students to be placed in deciles (top $10\%$ , next $10\%$ , etc.) across the college. + +# Problem + +Assuming that the grades given out are $(\mathrm{A + },\mathrm{A},\mathrm{A - },\mathrm{B + },\ldots)$ , can the dean's idea be made to work? + +Assuming that the grades given out are only (A, B, C, ...) can the dean's idea be made to work? + +Can any other schemes produce a desired ranking? + +A concern is that the grade in a single class could change many students' deciles. Is this possible? + +# Data Sets + +Teams should design data sets to test and demonstrate their algorithms. Teams should characterize data sets that limit the effectiveness of their algorithms. + +# The Results + +The solution papers were coded at COMAP headquarters so that names and affiliations of the authors would be unknown to the judges. Each paper was then read preliminarily by two "triage" judges at Southern Connecticut State University (Problem A) or at Carroll College (Montana) (Problem B). At the triage stage, the summary and overall organization are the basis for judging a paper. If the judges' scores diverged for a paper, the judges conferred; if they still did not agree on a score, a third judge evaluated the paper. + +Final judging took place at Harvey Mudd College, Claremont, California. The judges classified the papers as follows: + +
| | Outstanding | Meritorious | Honorable Mention | Successful Participation | Total |
|---|---|---|---|---|---|
| Scanner | 4 | 31 | 47 | 106 | 189 |
| Grade Inflation | 3 | 48 | 69 | 163 | 283 |
| Total | 7 | 79 | 116 | 269 | 472 |
+ +The seven papers that the judges designated as Outstanding appear in this special issue of The UMAP Journal, together with commentaries. We list those teams and the Meritorious teams (and advisors) below; the list of all participating schools, advisors, and results is in the Appendix. + +# Outstanding Teams + +Institution and Advisor + +Team Members + +# Scanner Papers + +"A Method for Taking Cross Sections of Three-Dimensional Gridded Data" + +Eastern Oregon University + +LaGrande, OR + +Norris Preyer + +Kelly Slater Cline + +Kacee Jay Giger + +Timothy O'Conner + +"A Model for Arbitrary Plane Imaging, or the Brain in Pain Falls Mainly on the Plane" + +Harvey Mudd College + +Claremont, CA + +Michael Moody + +Jeff Miller + +Dylan Helliwell + +Thaddeus Ladd + +"A Tricubic Interpolation Algorithm for MRI Image Cross Sections" + +Macalester College + +St. Paul, MN + +Karla V. Ballman + +Paul Cantrell + +Nicholas Wenninger + +Tamás Németh-Csői + +"MRI Slice Picturing" + +Tsinghua University + +Beijing, China + +Ye Jun + +Ni Jiang + +Chen Jun + +Li Ling + +# Grade Inflation Papers + +"Alternatives to the Grade Point Average for Ranking Students" + +Duke University + +Durham, NC + +Greg Lawler + +Jeffrey A. Mermin + +W. Garrett Mitchener + +John A. Thacker + +"A Case for Stricter Grading" + +Harvey Mudd College + +Claremont, CA + +Michael Moody + +Aaron F. Archer + +Andrew D. Hutchings + +Brian Johnson + +"Grade Inflation: A Systematic Approach to Fair Achievement Indexing" + +Stetson University + +Deland, FL + +Erich Friedman + +Amanda M. Richardson + +Jeff P. Fay + +Matthew Galati + +# Meritorious Teams + +Scanner Papers (31 teams) + +California Polytechnic State Univ., San Luis Obispo, CA (two teams) (Thomas O'Neil) + +East China Univ. of Science and Technology, Shanghai, China (Yuanhong Lu) + +Fudan University, Shanghai, China (Xi Zhou) + +Harvey Mudd College, Claremont, CA (Ran Libeskind-Hadas) + +Lawrence Technological Univ., Southfield, MI (Ruth G. 
Favro)

Macalester College, St. Paul, MN (Susan Fox)

N.C. School of Science and Mathematics, Durham, NC (two teams) (Dot Doyle)
Nankai University, Tianjin, China (XingWei Zhou)
Nat'l. Univ. of Defence Technology, Changsha, HuNan, China (Cheng LiZhi)
Nat'l. Univ. of Defence Technology, Changsha, HuNan, China (Wu Yu)
Rose-Hulman Institute of Technology, Terre Haute, IN (Aaron D. Klebanoff)
Seattle Pacific University, Seattle, WA (Steven D. Johnson)
South China Univ. of Technology, Guangzhou, Guangdong, China (Xie Lejun)
Southeast University, JiangSu, Nanjing, China (Zhou Jian Hua)
Southeast University, JiangSu, Nanjing, China (Wu Hua Hui)
Tsinghua University, Beijing, China (Hu Zhiming)
University of Alaska Fairbanks, Fairbanks, AK (John P. Lambert)
University of Colorado-Boulder, Boulder, CO (Anne Dougherty)
University of Massachusetts-Lowell, Lowell, MA (Lou Rossi)
University of Missouri-Rolla, Rolla, MO (Michael G. Hilgers)
University of Puget Sound, Tacoma, WA (Robert A. Beezer)
Univ. of Science and Technology of China, Hefei, Anhui, China (Rong Zhang)
University College-Cork, Cork, Ireland (J.B. Twomey)
Western Washington University, Bellingham, WA (Sebastian Schreiber)
Worcester Polytechnic Inst., Worcester, MA (Bogdan Vernescu)
Xi'an Jiaotong University, Xi'an, Shaanxi, China (He Xiaoliang)
Xi'an Jiaotong University, Xi'an, Shaanxi, China (Zhou Yicang)
XiDian University, Xi'an, Shaanxi, China (Liu Hongwei)
Youngstown State University, Youngstown, OH (Thomas Smotzer)

# Grade Inflation Papers (48 teams)

Benedictine College, Atchison, KS (Jo Ann Fellin, OSB)
Bucknell University, Lewisburg, PA (Sally Koutsoliotas)
Colby College, Waterville, ME (Jan Holly)
College of William and Mary, Williamsburg, VA (Larry Leemis)
Colorado College, Colorado Springs, CO (Barry A. Balof)
David Lipscomb University, Nashville, TN (Mark A. Miller)
E. China Univ. of Sci.
and Tech., Shanghai, China (Xiwen Lu) +Eastern Mennonite University, Harrisonburg, VA (John Horst) +Grinnell College, Grinnell, IA (Marc Chamberland) +Gustavus Adolphus College, St. Peter, MN (Gary Hatfield) +Harvey Mudd College, Claremont, CA (Ran Libeskind-Hadas) +Humboldt State Univ., Arcata, CA (Roland Lamberson) +Johns Hopkins University, Baltimore, MD (Daniel Q. Naiman) +Lafayette College, Easton, PA (Thomas Hill) +Lawrence Technological Univ., Southfield, MI (Howard Whitston) +Loyola College-Maryland, Baltimore, MD (Timothy J. McNeese) +Messiah College, Grantham, PA (Douglas C. Phillippy) +Mt. St. Mary's College, Emmitsburg, MD (John August) +N.C. School of Science and Mathematics, Durham, NC (John Kolena) +Natl. Univ. of Defence Technology, Changsha, HuNan, China (Wu MengDa) +Nazareth College, Rochester, NY (Kelly M. Fuller) +Nebraska Wesleyan University, Lincoln, NE (P. Gavin LaRose) +Pomona College, Claremont, CA (Richard Elderkin) +Rose-Hulman Institute of Technology, Terre Haute, IN (Aaron D. Klebanoff) + +Saint Mary's College, Notre Dame, IN (Joanne Snow) +Salisbury State University, Salisbury, MD (Steven M. Hetzler) +Shanghai Normal University, Shanghai, China (Shenghuan Guo) +Southeast University, JiangSu, Nanjing, China (Shen Yu Jiang) +Southern Connecticut State University, New Haven, CT (Ross B. Gingrich) +Trinity University, San Antonio, TX (Diane G. Saphire) +Tsinghua University, Beijing, China (Ye Jun) +Tsinghua University, Beijing, China (Hu Zhiming) +U.S. Military Academy, West Point, NY (Kellie Simon) +United States Air Force Academy, USAF Academy, CO (Harry N. Newton) +United States Air Force Academy, USAF Academy, CO (Mark Parker) +Univ. of Science and Technology of China, Hefei, Anhui, China (Yi Shi) +Univ. of Wisconsin-Stevens Point, Stevens Point, WI (Nathan Wetzel) +University of Alaska Fairbanks, Fairbanks, AK (John P. Lambert) +University of Dayton, Dayton, OH (J.M. 
O'Hare) +University of Puget Sound, Tacoma, WA (Perry Fizzano) +University of Toronto, Toronto, Ontario, Canada (James G.C. Templeton) +Valparaiso University, Valparaiso, IN (Rick Gillman) +Wake Forest University, Winston-Salem, NC (Edward Allen) +Western Carolina University, Cullowhee, NC (Jeff A. Graham) +Western Carolina University, Cullowhee, NC (Scott Sportsman) +Western Connecticut State Univ., Danbury, CT (Judith A. Grandahl) +Worcester Polytechnic Inst., Worcester, MA (Arthur C. Heinricher) +Xidian University, Xi'an, Shaanxi, China (Mao Yongcai) +Youngstown State University, Youngstown, OH (Paul Mullins) + +# Awards and Contributions + +Each participating MCM advisor and team member received a certificate signed by the Contest Director and the appropriate Head Judge. + +INFORMS, the Institute for Operations Research and the Management Sciences, gave a cash award and a three-year membership to each member of the teams from Macalester College (Scanner Problem) and Stetson University (Grade Inflation Problem). Moreover, INFORMS gave free one-year memberships to all members of Meritorious and Honorable Mention teams. + +The Society for Industrial and Applied Mathematics (SIAM) designated one Outstanding team from each problem as a SIAM Winner. The teams were from Macalester College (Scanner Problem) and Harvey Mudd College (Grade Inflation Problem). The Harvey Mudd team presented its results at a special Minisymposium of the SIAM Annual Meeting in Toronto in July. Each of the three Harvey Mudd team members was awarded a $300 cash prize. Their school was given a framed, hand-lettered certificate in gold leaf. + +The Mathematical Association of America (MAA) designated one Outstanding team from each problem as an MAA Winner. The teams were from Eastern Oregon University (Scanner Problem) and Duke University (Grade Inflation + +Problem). Both teams presented their solutions at a special session of the MAA Mathfest in Toronto in July. 
Each team member was presented a certificate by MAA President-Elect Tom Banchoff.

# Judging

Director

Frank R. Giordano, COMAP, Lexington, MA

Associate Directors

David C. Arney, Dept. of Mathematical Sciences, U.S. Military Academy, West Point, NY

Robert L. Borrelli, Mathematics Dept., Harvey Mudd College, Claremont, CA

# Scanner Problem

Head Judge

Marvin S. Keener, Executive Vice-President, Oklahoma State University, Stillwater, OK

Associate Judges

Kelly Black, Mathematics Dept., University of New Hampshire, Durham, NH

Paul Boisen, Defense Dept., Ft. Meade, MD

Courtney Coleman, Mathematics Dept., Harvey Mudd College, Claremont, CA

Patrick Driscoll, Dept. of Mathematical Sciences, U.S. Military Academy, West Point, NY (INFORMS)

William Fox, Dept. of Mathematical Sciences, U.S. Military Academy, West Point, NY

Debbie Levinson, Dept. of Mathematics, Colorado College, Colorado Springs, CO (SIAM)

Mark Levinson, Edmonds, WA (SIAM)

Jack Robertson, Head, Mathematics and Computer Science, Georgia College and State University, Milledgeville, GA (MAA)

Theresa M. Sandifer, Southern Connecticut State University, New Haven, CT

John L. Scharf, Carroll College, Helena, MT

Lee Seitelman, Glastonbury, CT

# Grade Inflation Problem

Head Judge

Maynard Thompson, Mathematics Dept., Indiana University, Bloomington, IN

Associate Judges

Karen Bolinger, Dept. of Mathematics, Clarion University of Pennsylvania, Clarion, PA

James Case, Baltimore, MD

Doug Faires, Dept. of Mathematics and Statistics, Youngstown State University, Youngstown, OH

Jerry Griggs, University of South Carolina, Columbia, SC (SIAM)

Mario Juncosa, RAND Corporation, Santa Monica, CA

John Kobza, Industrial and Systems Engineering, Virginia Polytechnic Institute and State University, Blacksburg, VA (INFORMS)

Mario Martelli, Dept.
of Mathematics, California State University, Fullerton, CA + +Vijay Mehrotra, Onward Inc., Mountain View, CA (INFORMS) + +Veena Mendiratta, Lucent Technologies, Naperville, IL + +Don Miller, Dept. of Mathematics, St. Mary's College, Notre Dame, IN + +Catherine Roberts, Northern Arizona University, Flagstaff, AZ (SIAM) + +Kathleen M. Shannon, Salisbury State University, Salisbury, MD (MAA) + +Robert M. Tardiff, Dept. of Mathematical Sciences, Salisbury State University, Salisbury, MD + +Michael Tortorella, Lucent Technologies, Holmdel, NJ + +Marie Vanisko, Carroll College, Helena, MT + +Daniel Zwillinger, Zwillinger & Associates, Arlington, MA + +# Triage Session + +# Scanner Problem + +Head Triage Judge + +Theresa M. Sandifer, Southern Connecticut State University, New Haven, CT + +Associate Judges + +Therese L. Bennett, Southern Connecticut State University, New Haven, CT + +Ross B. Gingrich, Southern Connecticut State University, New Haven, CT + +Cynthia B. Gubitose, Western Connecticut State University, Danbury, CT + +C. Edward Sandifer, Western Connecticut State University, Danbury, CT + +# Grade Inflation Problem + +(all were from Mathematics Dept., Carroll College, Helena, MT) + +Head Triage Judge + +Marie Vanisko + +Associate Judges + +Peter Biskis, Terence J. Mullen, Jack Oberweiser, Paul D. Olson, and Phillip Rose + +# Sources of the Problems + +The Scanner Problem was contributed by Yves Nievergelt, Mathematics Dept., Eastern Washington University. The Grade Inflation Problem was contributed by Dan Zwillinger, Zwillinger & Associates, Arlington, MA. + +# Acknowledgments + +The MCM was funded this year by the National Security Agency, whose support we deeply appreciate. We thank Dr. Gene Berg of NSA for his coordinating efforts. The MCM is also indebted to INFORMS, SIAM, and the MAA, which provided judges and prizes. + +I thank the MCM judges and MCM Board members for their valuable and unflagging efforts. Harvey Mudd College, its Mathematics Dept. 
staff, and Prof. Borrelli were gracious hosts to the judges.

# Cautions

To the reader of research journals:

Usually a published paper has been presented to an audience, shown to colleagues, rewritten, checked by referees, revised, and edited by a journal editor. Each of the student papers here is the result of undergraduates working on a problem over a weekend; allowing substantial revision by the authors could give a false impression of accomplishment. So these papers are essentially au naturel. Light editing has taken place: minor errors have been corrected, wording has been altered for clarity or economy, and style has been adjusted to that of The UMAP Journal. Please peruse these student efforts in that context.

To the potential MCM Advisor:

It might be overpowering to encounter such output from a weekend of work by a small team of undergraduates, but these solution papers are highly atypical. A team that prepares and participates will have an enriching learning experience, independent of what any other team does.

# Appendix: Successful Participants

KEY:

P = Successful Participation
H = Honorable Mention
M = Meritorious
O = Outstanding (published in this special issue)

A = Scanner Problem
B = Grade Inflation Problem
INSTITUTION | CITY | ADVISOR | A | B
ALABAMA
Huntingdon CollegeMontgomerySid StubbsP
University of AlabamaHuntsvilleClaudio H. MoralesP
ALASKA
Univ. of AlaskaFairbanksJohn P. LambertMM
ARIZONA
Northern Ariz. Univ.FlagstaffTerence R. BlowsH
University of ArizonaTucsonBruce J. BaylyH
CALIFORNIA
Calif. Inst. of Tech.PasadenaRichard M. WilsonP
Calif. Poly. State Univ.San Luis ObispoThomas O'NeilM,M
Calif. State Univ.BakersfieldJohn DirkseP,P
Harvey Mudd CollegeClaremontMichael MoodyO
Ran Libeskind-HadasMO,M
Humboldt State Univ.ArcataJeffrey B. HaagH
Roland LambersonM
L.A. Pierce CollegeWoodland HillsBob MartinezP
Loyola Marymount U.Los AngelesThomas M. ZachariahP,P
Occidental CollegeLos AngelesRon BuckmireH
Pepperdine Univ.MalibuBradley W. BrockH,H
Pomona CollegeClaremontRichard ElderkinM
Sonoma State Univ.Rohnert ParkSunil K. TiwariP
Univ. of RedlandsRedlandsSteve MoricsP
COLORADO
Colorado CollegeColorado SpringsBarry A. BalofM,P
Fort Lewis CollegeDurangoDick WalkerP
Mesa State CollegeGrand JunctionEdward Bonan-HamadaP
U.S. Air Force AcademyUSAF AcademySteven F. BakerP
Harry N. NewtonM
Mark ParkerPM
Univ. of ColoradoBoulderAnne DoughertyM
Bengt FornbergP
Univ. of South. ColoradoPuebloBruce N. LundbergP
CONNECTICUT
Connecticut CollegeNew LondonKathy McKeonH
Southern Conn. State Univ.New HavenRoss B. GingrichM
Theresa BennettP
U.S. Coast Guard AcademyNew LondonJanet A. McLeaveyP
Western Conn. State Univ.DanburyJudith A. GrandahlM
Paul HinesH
C. Edward SandiferP
DISTRICT OF COLUMBIA
Georgetown UniversityWashingtonAndrew VogtHP
FLORIDA
Florida Inst. of TechnologyMelbourneGary W. HowellP,P
Florida Southern CollegeLakelandWilliam G. AlbrechtP
Charles B. PateP
Allen WuertzP
Jacksonville UniversityJacksonvillePaul R. SimonyP
Robert A. HollisterPP
Stetson UniversityDelandErich FriedmanO
GEORGIA
Agnes Scott CollegeDecaturRobert A. LeslieH
Georgia College & State Univ.MilledgevilleCraig TurnerP
State Univ. of West GeorgiaCarrolltonScott GordonP
Everett D. McCoyP
IDAHO
Boise State UniversityBoiseAlan R. HausrathP
ILLINOIS
Greenville CollegeGreenvilleGalen R. PetersP
Illinois Wesleyan UniversityBloomingtonZahia DriciP
Northern Illinois UniversityDekalbHamid BelloutP
Wheaton CollegeWheatonPaul IsiharaH,P
INDIANA
Ball State UniversityMuncieFred Gylys-ColwellP
Earlham CollegeRichmondMic JacksonP
Charlie PeckH
Tekla LewinH
Indiana UniversityBloomingtonLarry MossPH
South BendMorteza Shafii-MousaviH
Rose-Hulman Inst. of Tech.Terre HauteFrank YoungH
Aaron D. KlebanoffMM
Saint Mary's CollegeNotre DameJoanne SnowM,H
Valparaiso UniversityValparaisoRick GillmanM,P
IOWA
Drake UniversityDes MoinesLuz M. De AlbaP
Alexander F. KleinerH
Graceland CollegeLamoniSteve K. MurdockP
Grinnell CollegeGrinnellMarc ChamberlandPM
Iowa State UniversityAmesStephen J. WillsonP
Luther CollegeDecorahReginald D. LaursenP
Simpson CollegeIndianolaRick SpellerbergH
M.E. “Murphy” WaggonerP
Univ. of Northern IowaCedar FallsGregory M. DotsethH
Timothy L. HardyP
KANSAS
Baker UniversityBaldwin CityBob FragaPP
Benedictine CollegeAtchisonJo Ann Fellin, OSBM
Bethel CollegeNorth NewtonMonica MeissenP
KENTUCKY
Asbury CollegeWilmoreKenneth P. RietzH
Bellarmine CollegeLouisvilleJohn A. OppeltP
Brescia CollegeOwensboroChris A. TiahrtP
LOUISIANA
McNeese State UniversityLake CharlesKaren AucoinH
Northwestern State Univ.NatchitochesLisa R. GalminasP
MAINE
Bowdoin CollegeBrunswickHelen MooreP
Colby CollegeWatervilleJan HollyM,P
MARYLAND
Goucher CollegeBaltimoreDavid HornH
Robert E. LewandP
Hood CollegeFrederickJohn Boon, Jr.P
Johns Hopkins UniversityBaltimoreDaniel Q. NaimanM
Loyola College–MarylandBaltimoreDipa ChoudhuryH,H
Timothy J. McNeeseM
Mt. St. Mary's CollegeEmmitsburgJohn AugustM
Theresa A. FrancisP
Salisbury State UniversitySalisburySteven M. HetzlerM
St. Mary's Coll. of Md.St. Mary's CityJames TantonPP
MASSACHUSETTS
Bentley CollegeWalthamLucia KimballP
Boston CollegeChestnut HillPaul R. ThieP
Boston UniversityBostonGlen HallP
Harvard UniversityCambridgeCurtis McMullenP
Salem State CollegeSalemJoyce AndersonP
Simon's Rock CollegeGreat BarringtonAllen B. AltmanPH
Michael BergmanP
Smith CollegeNorthamptonRuth HaasP
Univ. of MassachusettsAmherstEdward A. ConnorsH
LowellJ. “Kiwi” Graham-EagleP
Lou RossiM
Western New England Coll.SpringfieldLorna HanesP
Williams CollegeWilliamstownStewart JohnsonPP
Worcester Polytechnic Inst.WorcesterArthur C. HeinricherM
Bogdan VernescuM
MICHIGAN
Albion CollegeAlbionScott DilleryPP
David SeelyP
Calvin CollegeGrand RapidsThomas L. JagerH
Eastern Michigan Univ.YpsilantiChristopher E. HeeHP
Hillsdale CollegeHillsdaleJohn P. BoardmanPP
Lawrence Tech. Univ.SouthfieldRuth G. FavroM
Scott SchneiderP
Howard WhitstonM
Michigan State UniversityE. LansingC.R. MacCluerP
MINNESOTA
Gustavus Adolphus Coll.St. PeterGary HatfieldM
Macalester CollegeSt. PaulKarla V. BallmanO
Susan FoxM
Daniel KaplanP
Univ. of MinnesotaDuluthZhuangyi LiuP
MorrisPeh NgP,P
Winona State UniversityWinonaSteven LeonhardiP
MISSOURI
Central Missouri State Univ.WarrenburgL. Vincent EdmondsonP
Northwest Missouri State U.MaryvilleRussell EulerPP
Truman State UniversityKirksvilleSteve SmithPP
Univ. of MissouriRollaMichael G. HilgersM,H
MONTANA
Carroll CollegeHelenaTerence J. MullenP
Jack OberweiserP
Phil RoseH
Anthony M. SzpilkaP
NEBRASKA
Hastings CollegeHastingsDavid B. CookeH
Nebraska Wesleyan Univ.LincolnP. Gavin LaRoseM,P
NEVADA
Sierra Nevada CollegeIncline VillageElizabeth CarterP
NEW JERSEY
Camden County CollegeBlackwoodAllison SuttonP
New Jersey Inst. of Tech.NewarkJohn BechtoldH
NEW MEXICO
New Mexico State Univ.Las CrucesJoseph LakeyP
NEW YORK
Buffalo State CollegeBuffaloRobin SandersH
Great Neck South HSGreat NeckRobert SilverstoneP
Ithaca CollegeIthacaJames E. ConklinP
John C. MaceliP
Nassau Community Coll.Garden CityAbraham S. MantellP
Nazareth CollegeRochesterKelly M. FullerM,H
Niagara UniversityNiagaraSteven L. SiegelP
Pace UniversityPleasantvilleRobert CiceniaP
St. Bonaventure UniversitySt. BonaventureFrancis C. LearyH
Albert G. WhiteP
SUNY GeneseoGeneseoChris LearyP
U.S. Military AcademyWest PointChuck MitchellP
James S. RolfH
Kellie SimonM
Charles C. TappertH
Wells CollegeAuroraCarol C. ShilepskyP
Westchester Comm. CollegeValhallaRowan LindleyP
Sheela WhelanP
NORTH CAROLINA
Appalachian State UniversityBooneHolly P. HirstPP
Duke UniversityDurhamGreg LawlerO
N.C. School of Sci. & Math.DurhamDot DoyleM,M
John KolenaM
Salem CollegeWinston-SalemDebbie L. HarrellH
Paula G. YoungP
Univ. of North CarolinaChapel HillDouglas G. KellyH
Jon W. TolleP
PembrokeRaymond E. LeeP
Wake Forest UniversityWinston-SalemEdward AllenM
Stephen B. RobinsonH
Western Carolina UniversityCullowheeJeff A. GrahamM
Scott SportsmanM
Kurt VandervoortP
NORTH DAKOTA
Univ. North DakotaWillistonWanda M. MeyerP
OHIO
College of WoosterWoosterReuben SettergrenP
Hiram CollegeHiramLarry BeckerP,P
Brad GubserPP
Marietta CollegeMariettaTom LaFramboisePP
Miami UniversityOxfordDouglas E. WardP
Ohio UniversityAthensDavid N. KeckP
University of DaytonDaytonJ.M. O'HareM
Ralph C. SteinlageH,P
Xavier UniversityCincinnatiRichard J. PulskampP
Youngstown State UniversityYoungstownStephen Hanzely Paul Mullins Thomas SmotzerP MM H
OKLAHOMA
Oklahoma State UniversityStillwaterJohn E. WolfePP
Southeastern Okla. State Univ.DurantJohn M. McArthur Karla OtyP PP P
Southern Nazarene UniversityBethanyPhilip CrowPP
OREGON
Eastern Oregon State CollegeLaGrandeDavid Allen Norris Preyer Jenny WoodworthO,HH P
Southern Oregon UniversityAshlandKemble R. YatesPP
PENNSYLVANIA
Allegheny CollegeMeadvilleDavid L. HousmanPP
Bucknell UniversityLewisburgSally KoutsoliotasPM
Chatham CollegePittsburghEric RawdonPP
Gettysburg CollegeGettysburgJames P. FinkHP
Lafayette CollegeEastonThomas HillMM
Messiah CollegeGranthamDouglas S. Phillippy Lamarr C. WidmerHM
Penn State Berks-Lehigh ValleyReadingL. Miller-Van Wieren D.M. Van WierenPP
Shippensburg UniversityShippensburgDoug Ensley Gene FioriniPP
Susquehanna UniversitySelinsgroveKenneth A. BrakkePP
Westminster CollegeNew WilmingtonBarbara FairesHH
RHODE ISLAND
Rhode Island CollegeProvidenceD.L. AbrahamsonPP
SOUTH CAROLINA
Charleston Southern Univ.CharlestonStan Perrine Ioana MihailaPP
Coastal Carolina UniversityConwayNieves A. McNultyPP
Univ. of South CarolinaAiken
SOUTH DAKOTA
Northern State UniversityAberdeenA.S. ElkhaderHH
TENNESSEE
Austin Peay State UniversityClarksvilleMark C. GinnH
Christian Brothers UniversityMemphisCathy W. CarterP,P
David Lipscomb UniversityNashvilleGary C. HallP
Mark A. MillerM
TEXAS
Abilene Christian UniversityAbileneDavid HendricksPP
Angelo State UniversitySan AngeloAndrew B. WallaceP
Baylor UniversityWacoRonald B. MorganP
Trinity UniversitySan AntonioDiane G. SaphirePM
University of DallasIrvingRichard P. OlenickP
Edward P. WilsonP
University of HoustonHoustonBarbara Lee KeyfitzH
University of TexasAustinMike OehrtmanPH
RichardsonAli HooshyarP
T. ConstantinescuP
VERMONT
Johnson State CollegeJohnsonGlenn D. SproulP,P
VIRGINIA
College of William & MaryWilliamsburgLarry LeemisM
Eastern Mennonite UniversityHarrisonburgJohn HorstM,H
Randolph-Macon Woman's Coll.LynchburgEric ChandlerP
Thos. Jefferson HS for Sci.& Tech.AlexandriaJohn DellH,P
University of RichmondRichmondKathy W. HokeP
Virginia Western Comm. CollegeRoanokeRuth ShermanPP
WASHINGTON
Pacific Lutheran UniversityTacomaRachid BenkhaltiP
Seattle Pacific UniversitySeattleSteven D. JohnsonM
University of Puget SoundTacomaRobert A. BeezerMH
Perry FizzanoM,P
Western Washington UniversityBellinghamSebastian SchreiberMP
Saim UralP,P
WISCONSIN
Beloit CollegeBeloitPhilip D. StraffinH,H
Carroll CollegeWaukeshaJohn SymmsP
William WelchP
Edgewood CollegeMadisonKen JewellP
Steven PostP
Northcentral Technical CollegeWausauFrank J. FernandesP
Robert J. HenningPP
St. Norbert CollegeDe PereJohn A. FrohligerP
Univ. of WisconsinEau ClaireCarl SchoenP
PlattevilleSherrie NicolP
Stevens PointNathan WetzelM
UW Colleges-Marathon CountyWausauFe EvangelistaH
Paul A. MartinP
Wisconsin Lutheran CollegeMilwaukeeM.C. PapenfussP
AUSTRALIA
Univ. of Southern QueenslandToowoomba, QLDC.J. HarmanH
Tony RobertsH
CANADA
Univ. of Western OntarioLondon, OntarioPeter H. PooleH
University of AlbertaEdmonton, AlbertaJoseph SoP
University of CalgaryCalgary, AlbertaD.R. WestbrookH
University of SaskatchewanSaskatoon, SKJames A. BrookeH
Raj SrinivasanH
Tom SteeleP
University of TorontoToronto, OntarioN.A. DerzkoP,P
J.G.C. TempletonM
York UniversityToronto, OntarioNeal MadrasPH
CHINA
Anhui Inst. of Mech. & Elec. Eng.Wuhu, AnhuiWang ChuanyuP
Wang GengP
Anhui UniversityHefei, AnhuiWu FuchaoP
Yang ShangjunH
Beijing Institute of TechnologyBeijingBao Zhu GuoPP
Xiao Di CuiP
Beijing Normal UniversityBeijingLaifu LiuPP
Wenyi ZengP,P
Beijing U. of Aero. & Astro.BeijingLi Wei guoH
Beijing Union UniversityBeijingRen KaiLongP
Zeng QingliP
Beijing Univ. of Chem. Tech.BeijingLiu DaminP
Shi XiaodingP
Zhao BaoyuanP
Central South Univ. of Tech.Changsha, HunanHan XuliH,P
Central-south Institute of Tech.Hengyang, HunanLi XianyiH
Central-south Inst. of Tech.Hengyang, HunanLiu YachunP
China U. of Mining & Tech.Xuzhou, JiangsuZhang XingyongH
Zhou ShengwuH
Chongqing UniversityChongqingFu LiH
Gong QuH
He ZhongshiP
Liu QiongsenP
Dalian Univ. of TechnologyDalian, LiaoningHe MingfengH,P
Yu HongquanP
Zhao LizhongP
E. China Univ. of Sci. & Tech.ShanghaiNianci ShaoH
Xiwen LuM,H
Yuanhong LuM
East China Normal Univ.ShanghaiLin WuzhongP
Exp'l HS, Beijing Normal U.BeijingHan LeqingP,P
Math ChairP
First Middle SchoolJiading, ShanghaiChenganP,P
Fudan UniversityShanghaiJin LiuP
Xi ZhouMP
Zhijie CaiP
Harbin Inst. of Tech.Harbin, HeilongjiangShang ShoutingHP
Wang YongHH
Hebei Institute of Tech.Tangshan, HebeiLiu BaoxiangP
Liu ChunfengP
Lu ZhenyuP
Hefei University of Tech.Hefei, AnhuiXueqiao DuH
Yonghua HuP
Yongwu ZhouP
Youdu HuangH
Jilin Institute of TechnologyChangchun, JilinSun ChangchunP
Wang XiuyuP
Xu YunhuiP
Lu Xian RuiP
Shi ShaoYunP
Yin Jing XueP
Fang PeichenP
Zhang KuiyuanP
Jinan UniversityGuangzhou, GuangdongShiqi YeP
Suohai FanH
Lanzhou Railway InstituteLanzhou, GansuBai LihuaH
He ShangluH
Li YonganP
Zhang JianxunH
N.W. Polytech. Univ.Xian, ShaanxiPeng GuohuaP
Rong HaiwuH
Wang MingYuP
Zhang ShengguiH
Nankai UniversityTianjinBin WangH
Jishou RuanH
XingWei ZhouM
Nanyang Model HSShanghaiTuqing CaoP
Natl. Univ. of Defence Tech.Changsha, HunanCheng LiZhiM
Wu MengDaM
Wu YuM
Peking UniversityBeijingJian-hua WuP
Lei Gong-yanH,P
Zhuoqun XuP
Qufu Normal UniversityQufu, ShandongYuzhong ZhangP
Shandong UniversityJinan, ShandongCui YuquanH
Long HepingP
Piming MaP
Zhengyuan MaP
Shanghai Jiaotong Univ.ShanghaiLi ShidongH
Song BaoruiP
Sun ZhulingP
Zhou GangP
Shanghai Normal Univ.ShanghaiShenghuan GuoHM
South China Univ. of Tech.Guang Zhou, GuangdongChang ZhihuaP
Fu HongzhuoH
Hao ZhifengH
Xie LejunM
Southeast UniversityJiangSu, NanjingNie Chang haiP
Shen YujiangM
Wu Hua huiM
Zhou Jian huaM
Southwest Jiaotong Univ.Chendu, SichuanDeng PingH
Li TianruiP
Yuan JianH
Zhao LianwenP
Tsinghua UniversityBeijingHu ZhimingMM
Ye JunOM
Univ. of Elec. Sci. & Tech.ChengduXu QuanzhiH
Zhong ErjieP
Univ. of Sci. & Tech. of ChinaHefei, AnhuiChaoyang ZhuMH
Rong ZhangM
Shizhuo JiH
Yi ShiM
Xi'an Jiaotong UniversityXi'an, ShaanxiZhou YicangM
He XiaoliangMH
Xidian UniversityXi'an, ShaanxiHu YupuH
Liu HongweiM
Mao YongcaiM
Zhejiang UniversityHangzhou, ZhejiangQifan YangHH
Shu Ping ChenHH
Zhengzhou Electr. Pwr Coll.Zhengzhou, HenanLiang HaijiangP
Wang JiadeH
ZhengZhou Univ. of Tech.Zhengzhou, HenanWang JinlingP
Wang ShubinP
Zhang XinyuP
ZhongShan UniversityGuangzhou, GuangdongShe Wei LongP
Tang MengxiH
Wang Yuan ShiH
Zhang LeiP
FINLAND
Paivola CollegeTarttilaBill ShawH
HONG KONG
Hong Kong Baptist Univ.Kowloon Tong, KowloonChong Sze TongP
Wai Chee ShiuH
IRELAND
Trinity College DublinDublinT.G. MurphyH
James C. SextonP
University College, CorkCorkPatrick FitzpatrickH
Finbarr O'SullivanP
Gareth ThomasP
J. B. TwomeyM
University College DublinDublinTed CoxP
Maria MeehanH
University College GalwayGalwayMartin MeereH
Michael P. TuiteH
LITHUANIA
Vilnius UniversityVilniusRicardas KudzmaP
# A Method for Taking Cross Sections of Three-Dimensional Gridded Data

Kelly Slater Cline

Kacee Jay Giger

Timothy O'Conner

Eastern Oregon University

LaGrande, OR 97850

Advisor: Norris Preyer

# Summary

Effective three-dimensional magnetic resonance imaging (MRI) requires an accurate method for taking planar cross sections. However, if an oblique cross section is taken, the plane may not intersect any known data points. Thus, a method is needed to interpolate water density between data points.

Interpolation assumes continuity of density, but there are discontinuities in the human body at the borders of different types of tissue. Most interpolation methods try to smooth these sharp borders, blurring the data and possibly destroying useful information.

To capture qualitatively the key difficulties of this problem, we created a sequence of simulated biological data sets, such as a brain and an arm, each with some specific defect. Our data sets are cubic arrays with 100 elements on each side, for a total of one million elements, specifying water density at each point with an integer in the range [0, 255]. In each data set, we use differentiable functions to describe several tissue types with discontinuities between them.

To analyze these data, we created a group of algorithms, implemented in C++, and compared their effectiveness in generating accurate cross sections. We used local interpolation techniques, because the data are not continuous on a global level. Our final algorithm searches for discontinuities between tissues. If it finds one at a point, it preserves sharp edges by assigning to that point the water density of the nearest data point. If there is no discontinuity, the algorithm does a polynomial fit in three dimensions to the nearest 64 data points and interpolates the water density.
+ +We measured the accuracy of the algorithms by finding the mean absolute difference between the interpolated water density and the actual water density at each point in the cross sections. Our final algorithm has an error $16\%$ lower than a simple closest-point technique, $17\%$ lower than a continuous linear interpolation, and $22\%$ lower than a continuous polynomial interpolation without discontinuity detection. + +# Assumptions + +- An MRI scan is an equally spaced grid of data. We take it to be a $100 \times 100 \times 100$ array. +- Each element of the array is an integer ranging from 0 to 255, representing the water density at that point. +- The resolution of the cross section to be taken is equal to the resolution of the data set. (If the array elements have a spacing of one micron, then the cross section should have the same spacing.) +- We recognize that all methods of interpolation assume continuity between the data points. Thus, we assume that the water density in living tissue can be represented as continuous differentiable functions with discontinuities between tissues. + +# Simulated Data Sets + +Since we could find few existing data sets of three-dimensional arrays, we constructed simulated data sets. While real biological organs are extraordinarily complicated, a set of simulated organs should be able to represent qualitatively the kind of problems that an MRI scan is typically used to investigate. Although actual MRI data have much greater resolution, the characteristics that we are looking for should be the same: tumors, fractures, or general anomalies. Generally, these areas will have different water densities than surrounding tissue, generating discontinuities. We created the following mock organs with imperfections: + +1. Globules: A continuous, repeating spherical pattern, with a density peak at the center (Figure 1). +2. Arm: Smooth tissue with two bones, one containing a small spherical hole (Figure 2). +3. 
Generic organ: A round shape filled with several discontinuous regions (Figure 3). +4. Brain: A dense skull, periodically varying gray matter, and a small area of different density in one lobe (Figure 4). + +![](images/152c6f41adb6970176fa992e61f7f0736e4223440cd0a3ff022e411b7e0361bd.jpg) +Figure 1. Globules. + +![](images/be345483ddfac84be55be32eb1442feec9ef47d3ea7bd04a410cf2a1a8c9144c.jpg) +Figure 2. Arm. + +![](images/8b2125a29a99af5f8318304557d379d7e158c000368bf4fc52306115a27b5b53.jpg) +Figure 3. Generic organ. + +![](images/f6e67ebb915203f193ed84f5bc827e6cf02b378109b5c3f527893322ff252961.jpg) +Figure 4. Brain. + +# Coordinate Systems and Definitions + +The existing array of data imposes a Cartesian coordinate system on the problem. If there are $n$ data points in each direction, then the coordinates range from $(0, 0, 0)$ to $(n, n, n)$ . We define a cross section by picking a point in this coordinate system $(x_0, y_0, z_0)$ and two angles $(\theta, \phi)$ representing the angles that the plane makes with the positive $x$ -axis and the positive $y$ -axis. This point becomes the origin of our plane, with the new $x$ -axis being the projection of the $x$ -axis onto this plane in the $z$ -direction, so the unit vector is + +$$ +\hat {x} ^ {\prime} = \hat {x} \cos \theta + \hat {z} \sin \theta . +$$ + +We can solve for the unit vector $\hat{y}'$ if we require it to be orthogonal to $\hat{x}'$ , to make an angle $\phi$ with the unit vector $\hat{y}$ , and to be of unit length, so + +$$ +\hat {y} ^ {\prime} = - \hat {x} \sin \phi \sin \theta + \hat {y} \cos \phi + \hat {z} \sin \phi \cos \theta . 
$$

Thus, we can convert from the $(x', y')$ coordinate system back to the array system as:

$$
\begin{array}{l} x = x _ {0} + x ^ {\prime} \cos \theta - y ^ {\prime} \sin \phi \sin \theta \\ y = y _ {0} + y ^ {\prime} \cos \phi \\ z = z _ {0} + x ^ {\prime} \sin \theta + y ^ {\prime} \sin \phi \cos \theta \end{array}
$$

We call the known points $(x,y,z)$ data points and the unknown points $(x^{\prime},y^{\prime})$ plane points.

# Interpolation Algorithms

The plane points do not generally coincide with existing data points. We know the water density surrounding each plane point, so we must interpolate to estimate the density at each plane point.

There are two major classes of interpolation techniques:

- global methods, which use every data point in the set to estimate the density at each plane point, and
- local methods, which use only a small subset of the data points.

Because interpolation methods assume continuity, global methods are inappropriate for this problem. We know that the organs are only piecewise continuous and differentiable, with discontinuities between tissues. Thus, all of our algorithms use local interpolation techniques.

# Proximity

This algorithm assigns to the plane point the density of the data point that is closest to it. This method seems naive, but it should preserve sharp edges without blurring. It looks at each point in the plane $(x', y')$, calculates $x$, $y$, and $z$ in the original array, and rounds them to integer values $(X, Y, Z)$, thus giving the closest data point.

# Density Mean

This method uses more information to estimate the water density at each point. We can visualize every plane point as being inside a cube, with data points at the corners. To estimate the value inside, we take the arithmetic mean of the densities at the surrounding eight points. Despite the use of more information, this method blurs the edges of discontinuities.
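For concreteness, the coordinate conversion and the proximity rule can be sketched in C++ (a minimal illustration with our own names and a toy array type; this is not the authors' contest code):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Convert a plane point (xp, yp) to array coordinates, given the plane's
// origin (x0, y0, z0) and the angles theta and phi defined in the text.
struct Point3 { double x, y, z; };

Point3 planeToArray(double xp, double yp, double x0, double y0, double z0,
                    double theta, double phi) {
    return {
        x0 + xp * std::cos(theta) - yp * std::sin(phi) * std::sin(theta),
        y0 + yp * std::cos(phi),
        z0 + xp * std::sin(theta) + yp * std::sin(phi) * std::cos(theta)
    };
}

// Proximity method: round to the nearest data point and return its density.
int proximity(const std::vector<std::vector<std::vector<int>>>& data,
              const Point3& p) {
    int X = static_cast<int>(std::lround(p.x));
    int Y = static_cast<int>(std::lround(p.y));
    int Z = static_cast<int>(std::lround(p.z));
    return data[X][Y][Z];
}
```

The continuous methods below differ from `proximity` only in how they combine the densities surrounding the converted point.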
+ +# Trilinear Interpolation + +This algorithm uses the same eight points as the density mean method but does a weighted average of the density $(\rho)$ values. This method assumes that the slope $d\rho /dx$ is constant between the data points. With a low-resolution dataset, this approach will create inaccuracies; but as the resolution increases, the slope will appear to be more constant, since any differentiable function appears linear when examined on a small enough scale. The formula for linear interpolation [Press 1988, 104-105] is + +$$ +\rho (x ^ {\prime}) = \sum_ {i = 1.. 2} (1 - T _ {i}) \rho_ {i}, +$$ + +where $T_{i} = |x^{\prime} - x_{i}|$ + +The weight that we give to the value $\rho_{i}$ is equivalent to the distance to the opposite point. Here, every pair of consecutive data points has a distance of 1 unit between them, so the distance to the opposite point is $1 - T_{i}$ . For trilinear interpolation, we extend this sum over all the points, so that + +$$ +\rho (x ^ {\prime}, y ^ {\prime}, z ^ {\prime}) = \sum_ {i = 1.. 2} \sum_ {j = 1.. 2} \sum_ {k = 1.. 2} (1 - T _ {i}) (1 - U _ {j}) (1 - V _ {k}) \rho_ {i j k}, +$$ + +where $T_{i} = |x^{\prime} - x_{i}|,U_{j} = |y^{\prime} - y_{j}|$ , and $V_{k} = |z^{\prime} - z_{k}|$ + +# Polynomial Interpolation + +To estimate better the water density function, the polynomial interpolation method uses even more data. We expand the surrounding cube of eight points in every direction to make a cube with four points on each side, getting the 64 nearest points. Polynomials can fit differentiable functions better because they have more derivatives and can incorporate larger trends in the function. Recall that two points determine a unique line, three points determine a unique quadratic, and four points determine a unique cubic. Making use of this, we can develop a method for fitting functions in three dimensions. 
By doing a sequence of fits in the $x$ , $y$ , and $z$ directions, we can synthesize these into a density estimate for a point in space. Thus, we break the problem into a series of one-dimensional interpolations. + +Let $(x,y,z)$ be the plane point, and let the data points be $(x_{1..4},y_{1..4},z_{1..4})$ . First, we fix $x_{1}$ and $y_{1}$ , fit the four points $(x_{1},y_{1},z_{1..4})$ to a cubic, and interpolate the density at $(x_{1},y_{1},z)$ . We increment to $x_{1},y_{2}$ and do the same until we have the densities at the points $(x_{1},y_{1},z)$ , $(x_{1},y_{2},z)$ , $(x_{1},y_{3},z)$ , $(x_{1},y_{4},z)$ . Then we fit a polynomial in the $y$ -direction to these four points and interpolate the density at $(x_{1},y,z)$ . We repeat this whole process to find $(x_{2},y,z)$ , $(x_{3},y,z)$ , and $(x_{4},y,z)$ , and then perform one last polynomial fit to these points to interpolate the density at $(x,y,z)$ . + +There are many techniques for doing polynomial fits. We used the Lagrange formula [Acton 1990, 96] because it is the least computationally intensive: + +$$ +\begin{array}{l} \rho \left(x ^ {\prime}\right) = \frac {(x - x _ {2}) \left(x - x _ {3}\right) \left(x - x _ {4}\right)}{\left(x _ {1} - x _ {2}\right) \left(x _ {1} - x _ {3}\right) \left(x _ {1} - x _ {4}\right)} \rho_ {1} + \frac {(x - x _ {1}) \left(x - x _ {3}\right) \left(x - x _ {4}\right)}{\left(x _ {2} - x _ {1}\right) \left(x _ {2} - x _ {3}\right) \left(x _ {2} - x _ {4}\right)} \rho_ {2} \\ + \frac {(x - x _ {1}) (x - x _ {2}) (x - x _ {4})}{(x _ {3} - x _ {1}) (x _ {3} - x _ {2}) (x _ {3} - x _ {4})} \rho_ {3} + \frac {(x - x _ {1}) (x - x _ {2}) (x - x _ {3})}{(x _ {4} - x _ {1}) (x _ {4} - x _ {2}) (x _ {4} - x _ {3})} \rho_ {4}, \\ \end{array} +$$ + +where $\rho_{i}$ is the density at $x_{i}$ . + +This method will blur edges but should do very well over regions described by differentiable functions. + +# Hybrid Algorithms + +All of the above methods have strengths and weaknesses. 
The methods that are strongest on the differentiable regions (trilinear, polynomial) are weakest at discontinuities, because they try to smooth out the sharp borders. The method that most closely preserves discontinuities (proximity) is weakest at identifying smooth trends in the functions. + +To capitalize on the strengths of both approaches, we created a hybrid algorithm. Before interpolating between a group of points, the hybrid looks for discontinuities within them; it uses the proximity method if it finds any and a continuous method otherwise. This hybrid algorithm locates discontinuities by measuring the difference in density $(\Delta \rho)$ between each pair of extreme opposite points surrounding the plane point. If there is a discontinuity, then $\Delta \rho$ will be large and we use the proximity method. If not, $\Delta \rho$ will be small and we use a continuous method, either trilinear or polynomial. To distinguish between the two cases, we set the threshold value of $\Delta \rho_0$ . Thus, the hybrid algorithm allows us to use each method where it is strongest. + +# Testing and Results + +Because we have defined precise water density functions for each of our four simulated data sets, we can compare the interpolation value with the actual density and find the residual. To measure the accuracy of a cross section, we calculate the mean absolute residual over all of the plane points (i.e., the average of the absolute values of the residuals). + +To compare the algorithms, we took 12 cross sections through each simulated data set, at different angles, points, and discontinuity threshold levels. We generally selected points near the center of the data so as to generate a large planar region. + +Table 1 shows the mean absolute residual for each algorithm applied to each data set and averaged over all data sets. We discuss the results on each of the data sets (columns in Table 1) in turn. + +Table 1. 
Mean absolute residuals for interpolation methods applied to the data sets.

| Algorithm | Globules | Arm | Generic organ | Brain | Combined |
| --- | --- | --- | --- | --- | --- |
| Proximity | 1.82 | 0.97 | 0.89 | 3.25 | 1.73 |
| Density mean | 1.39 | 1.89 | 1.34 | 5.52 | 2.53 |
| Trilinear interpolation | 1.54 | 1.27 | 0.70 | 3.49 | 1.75 |
| Polynomial interpolation | 1.55 | 1.48 | 0.75 | 3.67 | 1.86 |
| Hybrid trilinear interpolation ($\Delta\rho_0 = 20$) | 1.55 | 1.01 | 0.59 | 3.49 | 1.49 |
| Hybrid trilinear interpolation ($\Delta\rho_0 = 30$) | 1.54 | 1.01 | 0.59 | 2.82 | 1.49 |
| Hybrid trilinear interpolation ($\Delta\rho_0 = 40$) | 1.54 | 1.01 | 0.61 | 2.82 | 1.50 |
| Hybrid polynomial interpolation ($\Delta\rho_0 = 20$) | 1.66 | 0.99 | 0.71 | 3.14 | 1.62 |
| Hybrid polynomial interpolation ($\Delta\rho_0 = 30$) | 1.63 | 0.99 | 0.61 | 2.86 | 1.52 |
| Hybrid polynomial interpolation ($\Delta\rho_0 = 40$) | 1.61 | 0.99 | 0.53 | 2.56 | 1.45 |
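On a single cell, the hybrid decision rule reads as follows in C++ (our sketch: the trilinear branch stands in for the continuous method, and all names are ours; the final contest algorithm applies the same test before its 64-point polynomial fit):

```cpp
#include <cassert>
#include <cmath>

// Hybrid rule on one unit cell (a sketch, not the contest code): rho[i][j][k]
// are the corner densities, and (t, u, v) in [0,1] locate the plane point
// relative to the (0,0,0) corner.
double hybridCell(const double rho[2][2][2], double t, double u, double v,
                  double dRho0) {
    // Discontinuity test: compare the four pairs of diagonally opposite corners.
    bool discontinuous = false;
    for (int i = 0; i < 2; ++i)
        for (int j = 0; j < 2; ++j)
            if (std::fabs(rho[i][j][0] - rho[1 - i][1 - j][1]) > dRho0)
                discontinuous = true;

    if (discontinuous) {
        // Proximity branch: preserve the sharp edge by taking the nearest corner.
        return rho[t < 0.5 ? 0 : 1][u < 0.5 ? 0 : 1][v < 0.5 ? 0 : 1];
    }

    // Continuous branch (trilinear): weight each corner by the opposite volume.
    double s = 0.0;
    for (int i = 0; i < 2; ++i)
        for (int j = 0; j < 2; ++j)
            for (int k = 0; k < 2; ++k)
                s += (i ? t : 1 - t) * (j ? u : 1 - u) * (k ? v : 1 - v)
                     * rho[i][j][k];
    return s;
}
```

Raising `dRho0` makes the detector less sensitive, so more of the image is smoothed; this is the trade-off explored in the last rows of Table 1.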
+ +# Globules + +On average, the mean method most accurately generated oblique planes, with both the trilinear and polynomial interpolations providing cross sections of similar accuracy. For this data set, the hybrid algorithms did worse than the purely continuous methods (Figure 5); the proximity method did poorest of all, probably because there are no edges in the globule data sets and continuity is never broken. The trilinear and polynomial interpolations may also have had trouble with the peaks in the center of each sphere. + +![](images/1111a7f4a36680e225860a98efcd5b37bc231c6549c097b2b94df7971769702e.jpg) +Figure 5. Globules: Polynomial hybrid method, with $\theta = 45^{\circ}$ , $\phi = 0^{\circ}$ , (45,45,50). + +![](images/3e99225b472ed08ed168501e042b10f8196a6ce216a6d13cbfc36bf063f66848.jpg) +Figure 6. Arm: Polynomial hybrid method, with $\theta = 10^{\circ}$ , $\phi = 0^{\circ}$ , (40,80,50). + +# Arm + +Here we found the situation almost totally reversed. The proximity method and the hybrid algorithms (Figure 6) all performed significantly better than the purely continuous methods, with the mean method doing particularly badly. + +This is quite reasonable, because our arm has many very sharp edges as we go from bone to muscle. The mean method should fail on any discontinuities, and this is what we see. The discontinuity detection seems to be working, because the hybrid algorithms perform noticeably better than the trilinear and polynomial methods. + +![](images/9f6f6354d929a79c2891a0740390fd6346396c7dcd6b8eb541f65419fc5509f3.jpg) +Figure 7. Generic organ: Polynomial hybrid method, with $\theta = 0^{\circ}$ , $\phi = 30^{\circ}$ , (50, 50, 50). + +![](images/42d31045963f6eef3c2789ac863e6f3a56a7be818a681259357e4acf360782aa.jpg) +Figure 8. Brain, polynomial hybrid method, $\theta = 5^{\circ}$ , $\phi = 0^{\circ}$ , (50, 50, 50). 
# Generic Organ

We found that both of the hybrid formulas, trilinear and polynomial, produced equally favorable results (Figure 7). Once again, the arithmetic mean method performed poorly, followed by the proximity, polynomial, and trilinear methods, whose results were comparable. We suspect that this is the case because for this data set we used smooth functions to generate each of the different tissue types.

# Brain

Due to the general smoothness of each lobe and the sharp contrast of the skull, the hybrid polynomial method with a high value of $\Delta \rho_0$ produced the most accurate results. The proximity method produced very accurate results, surpassing the trilinear and polynomial methods but falling short of the hybrid methods (Figures 8-10). The most inaccurate results were produced by the arithmetic mean method, and Figure 11 clearly shows how it fails by trying to smooth the edges of the skull.

# Residual Plots

Another way to examine the algorithms is to plot the residuals, which allows us to see exactly where a method breaks down. In Figures 12-14, large positive or negative residuals stand out in white or gray.

![](images/997bb6d42d3c692fff1a5ab733fbae8c99efa84310e600810d60528d8216aa44.jpg)
Figure 9. Brain, polynomial method, $\theta = 0^{\circ}$, $\phi = 30^{\circ}$, (50, 50, 50).

![](images/ca1e7d81e6953ad24a28be23f2180e84e95d526efc287c643d46ef4ed0c7f55f.jpg)
Figure 10. Brain, proximity method, $\theta = 5^{\circ}$, $\phi = 0^{\circ}$, (50, 50, 50).

![](images/a28608379f50275dcf29312643e83fef2597c74791156ae3cae5a23f5cbf0e0b.jpg)
Figure 11. Brain, density mean, $\theta = 0^{\circ}$, $\phi = 30^{\circ}$, (50, 50, 50).

![](images/3212d8bc445e443fe490b92d5888e4342411c073b16e86534bd5654f43ab8aa6.jpg)
Figure 12. Brain, proximity residuals, $\theta = 5^{\circ}$, $\phi = 0^{\circ}$, (50, 50, 50).

![](images/b3f4f54fe4b1751e28c9785bcba029dfd5221dcf4d915210388f9aec9e06fbf7.jpg)
Figure 13.
Brain, density mean, residuals, $\theta = 0^{\circ}$ $\phi = 30^{\circ}$ (50,50,50). + +![](images/1edbf73686e2adee75715b96e3ccf37a4f6aa7ea6e0757881b4c04878d551780.jpg) +Figure 14. Brain, polynomial hybrid method, residuals, $\theta = 5^{\circ}$ , $\phi = 0^{\circ}$ , (50,50,50). + +The proximity method was generally accurate over most of the brain, except for the dark area in one of the lobes and a few areas where the sharp discontinuity of the skull also produced errors (Figure 12). + +The arithmetic mean algorithm clearly shows errors around all the edges (Figure 13). + +Finally, the hybrid polynomial algorithm has inaccuracies near the edge of the skull, but it handles the dark area very well (Figure 14). + +# Overall Results + +- Averaging over all four data sets (see the last column of Table 1), the most accurate algorithm for generating oblique planes through arrays of three-dimensional data is the hybrid polynomial algorithm (with $\Delta \rho_0 = 40$ ), which produced an average error $16\%$ less than the proximity method. +- The hybrid trilinear algorithm (with $\Delta \rho_0 = 30$ ) did almost as well, generating an average error $14\%$ less than the proximity method. +- The proximity algorithm produced reasonably accurate planes, with average error $7\%$ less than the continuous polynomial algorithm and about $1\%$ better than the continuous trilinear algorithm. +- The trilinear algorithm performed better than the polynomial method, but neither handled discontinuities well enough to produce results as good as the hybrid methods. +- The arithmetic mean algorithm produced particularly poor results, because of the large errors that it makes around a discontinuity of any kind. + +# Strengths and Weaknesses + +The hybrid polynomial algorithm is flexible and could easily be expanded to handle data arrays of any size. It can take a slice through any point at any angle. 
Moreover, it is effective at interpolating through smooth regions but still preserves the sharpness of edges in the original data. Even if the closest point is on the wrong side of the discontinuity, the image is still qualitatively correct: It shows a sharp edge. The threshold constant $\Delta \rho_0$ allows the user to choose how much smoothing is done.

The hybrid algorithm will miss small discontinuities in the data, and it does not look for nondifferentiable points. If there are cusps, the algorithm will probably not notice them and attempt to smooth through these points, even though to do so is inappropriate.

# Future Work

The next step is to implement better methods for interpolating in the continuous regions, using larger sets of points. Cubic and quartic splines might be effective, as might other types of polynomials, or perhaps a method of rational function interpolation.

The discontinuity detection algorithm could be improved by expanding it to look for cusps and nondifferentiability. The discontinuities could also be used to perform automatic tissue typing; the algorithm might then be able automatically to output images showing just brain gray matter, or showing just tumor tissue. Even with the current software, a more detailed investigation of the dynamics of $\Delta \rho_0$ would be very useful.

Most important, the algorithm needs thorough testing against actual MRI data.

# References

Acton, Forman S. 1990. Numerical Methods That Work. Washington, DC: Mathematical Association of America.
Garcia, Alejandro L. 1994. Numerical Methods for Physics. Englewood Cliffs, NJ: Prentice-Hall.
Hornak, Joseph P. 1997. The Basics of MRI. http://www.cis.rit.edu/htbooks/mri/. (7 February 1998).
Lancaster, Peter, and Kestutis Salkauskas. 1986. Curve and Surface Fitting. London: Academic Press.
Press, William H., et al. 1988. Numerical Recipes in C. New York: Cambridge University Press.
Summit, Steve. 1996. C Programming FAQs.
New York: Addison-Wesley. + +# A Model for Arbitrary Plane Imaging, or the Brain in Pain Falls Mainly on the Plane + +Jeff Miller + +Dylan Helliwell + +Thaddeus Ladd + +Harvey Mudd College + +1250 N. Dartmouth Ave. + +Claremont, CA 91711 + +{ jmiller, dhelliwe, tladd } @math.hmc.edu + +Advisor: Michael Moody + +# Summary + +We present an algorithm for imaging arbitrary oblique slices of a three-dimensional density function, based on a rectilinear array of uniformly sampled MRI data. + +We + +- develop a linear interpolation scheme to determine densities of points in the image plane, +- incorporate a discrete convolution filter to compensate for unwanted blurring caused by the interpolation, and +- provide an edge-detecting component based on finite differencing. + +The resulting algorithm is sufficiently fast for use on personal computers and allows control of parameters by the user. + +We exhibit the results of testing the algorithm on simulated MRI scans of a typical human brain and on contrived data structures designed to test the limitations of the model. Filtering distortions and inaccurate modeling due to interpolation appear in certain extreme scenarios. Nonetheless, we find that our algorithm is suitable for use in real-world medical imaging. + +# Constructing the Model + +Our model consists of four main parts: + +- First, we develop a technique for positioning a plane anywhere in $\mathbb{R}^3$ . +- Then we interpolate data from the region in $\mathbb{R}^3$ that contains data onto the plane. +- Next, we use a sharpening technique to remove extra blur caused by the interpolation. +- Finally, we construct a difference array and use it to create a line drawing representing edges in the image. + +# Assumptions + +- Density variations in the source object are reasonably well behaved and continuous. Discontinuities such as sharp edges will be approximated in the model but only if they are isolated on a scale of several array elements. 
Similarly, erratic behavior and wild fluctuations can be accurately modeled only if they exist on a scale of several pixels. The model should image the source of the data array, not the array itself; but the accuracy of the oblique slice images depends on the accuracy of the data in the array. +- The data array represents isotropically spaced samples. The array $A(i,j,k)$ contains discretized samples from a continuous three-dimensional space, for which we use coordinates $(x,y,z)$ . The component $A(i,j,k)$ represents a density $f$ at some point $(x_i,y_j,z_k)$ . We assume that the source was uniformly sampled, so that + +$$ +x _ {i} = i \delta x, \quad y _ {j} = j \delta y, \quad z _ {k} = k \delta z, +$$ + +where $\delta x$ , $\delta y$ , and $\delta z$ are constant distances between samples (typically close to $1\mathrm{mm}$ ). We also assume that these distances are all equal (if not, then we rescale the coordinate system to compensate). + +- The destination computing platform is a typical contemporary personal computer. Thus, input data arrays may not be larger than the memory of a typical PC. We assume arrays of up to $256 \times 256 \times 256$ integer elements (16 MB in size) each in [0, 255]. Also, we gauge computing time by typical PC processor speeds. + +# Plane-Array Intersection + +To represent the slice of the object on the computer screen, we establish a mapping between the three-space of the object and the plane of the monitor. + +We represent an arbitrary plane in $\mathbb{R}^3$ in terms of the angles that it makes with the $xy$ -plane and the $z$ -axis, together with a displacement of the origin. 
The map $T: \mathbb{R}^2 \to \mathbb{R}^3$ given by + +$$ +T (u, v) = R \left( \begin{array}{l} u \\ v \\ 0 \end{array} \right) + \left( \begin{array}{l} x _ {0} \\ y _ {0} \\ z _ {0} \end{array} \right) +$$ + +transforms a point in $\mathbb{R}^2$ into a point on the plane, where the point $(x_0,y_0,z_0)$ is a displacement of the origin and $R$ is the rotation matrix + +$$ +R = \left( \begin{array}{c c c} {\cos \phi \cos \theta} & {- \sin \theta} & {\sin \phi \cos \theta} \\ {\cos \phi \sin \theta} & {\cos \theta} & {\sin \phi \sin \theta} \\ {- \sin \phi} & 0 & {\cos \phi} \end{array} \right). +$$ + +The angles $\phi$ and $\theta$ are the polar and azimuthal angles in spherical coordinates for the vector normal to the plane. + +# Interpolation + +To represent the image, we seek a regularly spaced array of discretized points $(u_{p}, v_{q})$ corresponding to the pixels of a computer monitor. Since the points $T(u_{p}, v_{q})$ need not coincide with the points $(x_{i}, y_{j}, z_{k})$ for the data, we need to be able to approximate density values at arbitrary points in $\mathbb{R}^3$ . Thus, we interpolate the data from nearby points whose values are given by $A$ . + +With a slight abuse of notation, let $g(x,y,z)$ be the gray-scale value of the image at $(x,y,z)$ , so that $g\big(T(u,v)\big) = g(u,v)$ and $g(x_{i},y_{j},z_{k}) = A(i,j,k)$ . + +From the numerous techniques for interpolation, we seek an algorithm that will smoothly approximate the density without being computationally intractable. + +# Nearest-Neighbor Approach + +Let $(x^{*},y^{*},z^{*})$ be the point for which we want to know the density. This point is contained in a cubic cell, of size $\delta x\times \delta y\times \delta z$ , that has corners of known density given by the array $A$ . From these eight corners, we simply find the point $(x_{a},y_{b},z_{c})$ that is closest to $(x^{*},y^{*},z^{*})$ and set $g(x^{*},y^{*},z^{*}) = A(a,b,c)$ . 
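In C++, the map $T$ can be sketched as follows (a minimal illustration with our own names; the matrix entries follow $R$ above):

```cpp
#include <cassert>
#include <cmath>

// Sketch of the map T(u, v): phi and theta are the polar and azimuthal angles
// of the plane's normal, and (x0, y0, z0) displaces the origin. Each component
// is one row of the rotation matrix R applied to (u, v, 0), plus the offset.
struct Vec3 { double x, y, z; };

Vec3 T(double u, double v, double theta, double phi,
       double x0, double y0, double z0) {
    return {
        std::cos(phi) * std::cos(theta) * u - std::sin(theta) * v + x0,
        std::cos(phi) * std::sin(theta) * u + std::cos(theta) * v + y0,
        -std::sin(phi) * u + z0
    };
}
```

With $\phi = \theta = 0$, the map reduces to a pure translation of the screen plane, which is a quick sanity check on the matrix entries.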
+ +# 3-D Linear Interpolation + +We also develop a technique that we call 3-D linear interpolation. For this method, we hope to find a smooth continuation of the data within the cubic cell, starting from the density values of the corners of the cubic cell containing $(x^{*},y^{*},z^{*})$ . We base our approach on solving the Laplace equation + +$$ +\Delta g = \frac {\partial^ {2} g}{\partial x _ {1} ^ {2}} + \dots + \frac {\partial^ {2} g}{\partial x _ {n} ^ {2}} = 0 +$$ + +successively in one, two, and three dimensions. + +We choose two adjacent corners of the cell and solve the one-dimensional Laplace equation, using the densities at the corners as boundary values. This gives the smoothest function between the two corners—a straight line. We then solve the two-dimensional Laplace equation on the faces of the cubic cell, using the straight lines as boundary conditions. Finally, we fill the cube with the three-dimensional solution to the Laplace equation, using as boundary conditions the values on the faces. + +The details are not difficult. Denote the points in the cubic cell as in Figure 1. The values along edges are constructed by simple linear interpolation; for example, the values along the lower left edge of the cube in Figure 1 are given by + +$$ +\begin{array}{l} g \left(x ^ {*}, y _ {j}, z _ {k}\right) = g \left(x _ {i}, y _ {j}, z _ {k}\right) + \frac {g \left(x _ {i + 1} , y _ {j} , z _ {k}\right) - g \left(x _ {i} , y _ {j} , z _ {k}\right)}{x _ {i + 1} - x _ {i}} \left(x ^ {*} - x _ {i}\right) \\ = A (i, j, k) + \frac {A (i + 1 , j , k) - A (i , j , k)}{x _ {i + 1} - x _ {i}} \left(x ^ {*} - x _ {i}\right). \\ \end{array} +$$ + +![](images/667baa410c91de49ce007b79d157103455ad05eb3cc29ee73f20e64567eb3063.jpg) +Figure 1. A cubic cell demonstrating the notation for the 3-D interpolation scheme. 
Similarly, we find

- values $g(x^{*},y_{j + 1},z_{k})$ along the lower right edge in terms of $A(i,j + 1,k)$ and $A(i + 1,j + 1,k)$,
- values $g(x^{*}, y_{j}, z_{k + 1})$ along the upper left edge in terms of $A(i, j, k + 1)$ and $A(i + 1, j, k + 1)$, and
- values $g(x^{*}, y_{j+1}, z_{k+1})$ along the upper right edge in terms of $A(i, j+1, k+1)$ and $A(i+1, j+1, k+1)$.

We continue to use linear interpolation to get the value $g(x^{*},y^{*},z_{k})$ on the bottom face in terms of the value $g(x^{*},y_{j},z_{k})$ on the lower left edge and the value $g(x^{*}, y_{j+1}, z_{k})$ on the lower right edge, as well as the value $g(x^{*}, y^{*}, z_{k+1})$ on the upper face in terms of the value $g(x^{*}, y_{j}, z_{k+1})$ on the upper left edge and the value $g(x^{*}, y_{j+1}, z_{k+1})$ on the upper right edge.

As a last step, we use linear interpolation yet again to obtain the value $g(x^{*},y^{*},z^{*})$ in terms of the value $g(x^{*},y^{*},z_{k})$ on the lower face and the value $g(x^{*},y^{*},z_{k + 1})$ on the upper face. The result is a unique value for $g(x^{*},y^{*},z^{*})$, in terms of the eight closest corners, which does not depend on the order of interpolation. [EDITOR'S NOTE: We omit the authors' proof of this fact, which they arrive at by explicit calculation of $g(x^{*},y^{*},z^{*})$ and the observation that the result is symmetric in $x,y$, and $z$.] In addition,

$$
\frac {\partial^ {2} g}{\partial x ^ {2}} (x ^ {*}, y ^ {*}, z ^ {*}) = \frac {\partial^ {2} g}{\partial y ^ {2}} (x ^ {*}, y ^ {*}, z ^ {*}) = \frac {\partial^ {2} g}{\partial z ^ {2}} (x ^ {*}, y ^ {*}, z ^ {*}) = 0,
$$

which means that the Laplace equation is satisfied in the cubic cell. The uniqueness theorem for the Laplace equation with Dirichlet boundary conditions implies that only one solution may be obtained by this method.
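The edge-face-interior construction amounts to three rounds of 1-D linear interpolation, which fits in a few lines of C++ (a sketch with our own helper names, not the authors' implementation):

```cpp
#include <cassert>
#include <cmath>

// Edge-face-interior interpolation as repeated 1-D linear interpolation (our
// sketch). A[i][j][k] holds the corner values of one cubic cell; (tx, ty, tz)
// in [0,1] are fractional offsets of the target point within the cell.
double lerp(double a, double b, double t) { return a + (b - a) * t; }

double cellInterp(const double A[2][2][2], double tx, double ty, double tz) {
    // 1. Interpolate along the four edges parallel to the x-axis.
    double e00 = lerp(A[0][0][0], A[1][0][0], tx);
    double e10 = lerp(A[0][1][0], A[1][1][0], tx);
    double e01 = lerp(A[0][0][1], A[1][0][1], tx);
    double e11 = lerp(A[0][1][1], A[1][1][1], tx);
    // 2. Interpolate across the bottom and top faces in y.
    double f0 = lerp(e00, e10, ty);
    double f1 = lerp(e01, e11, ty);
    // 3. Interpolate between the two faces in z.
    return lerp(f0, f1, tz);
}
```

Expanding the three steps shows that the result is linear in each of `tx`, `ty`, and `tz` separately, which is why every pure second derivative vanishes and the order of the axes does not matter.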
+ +# Other Techniques + +We considered a three-dimensional spline, in which a cubic interpolation is chosen to make first derivatives continuous. Unfortunately, the complexity and the computing time of this technique made it intractable. We also looked at spatially weighted averaging techniques. However, in applying this method to a simple two-dimensional case, we found gross discrepancies with the true image (a simple linear ramp was altered to look like a series of wavy steps). + +Ultimately, we found: + +- The nearest-neighbor technique is the most useful for cursory image analysis, and +- 3-D linear interpolation is the most efficient method for a more realistic image. + +# Image Sharpening + +Interpolation inevitably blurs, or low-pass filters, the actual image $f$ . Hence, we add a stage to the algorithm that sharpens, or high-pass filters, the recorded image $g$ . We considered various techniques for sharpening. + +# Revert from Boundaries + +One approach to sharpen is to detect the location of edges or boundaries in the image and then revert to a nearest-neighbor pixel determination near those locations. We discovered that this approach has the adverse effect of increasing graininess and pixelation. + +# Point-Spread Function + +Another approach, following Andrews and Hunt [1977], is to assume that the recorded image is the actual image convolved with a point-spread function (PSF), denoted $h(x,y)$ . Thus, + +$$ +g (u, v) = \int_ {- \infty} ^ {\infty} \int_ {- \infty} ^ {\infty} h (x - \xi , y - \eta) f (\xi , \eta) d \xi d \eta . \tag {1} +$$ + +The actual image is then obtained by deconvolution with a discrete Fourier transform. The PSF may be calculated a priori or measured a posteriori. 

Alternatively, the discretized nature of the data and the linear nature of the interpolation procedure lead us to recast (1) into the matrix equation

$$
g (u _ {p}, v _ {q}) = \sum_ {m = 1} ^ {N} \sum_ {n = 1} ^ {N} a (p, m) b (q, n) f (u _ {m}, v _ {n}),
$$

where $a(p, m)$ is an $N \times N$ matrix that blurs the columns of the digitized plane image and $b(q, n)$ is an $N \times N$ matrix that blurs the rows. The "blurring" matrices may be approximated with components near unity on the leading diagonals and components equal to some small "mixing" parameter on the adjacent off-diagonals. The image $f$ may then be restored by inverting these matrices.

Ultimately, we deemed this and the Fourier PSF approach to be too computationally expensive.

# Convolution Filter

Our favored technique, following Rosenfeld and Kak [1982], is to use a convolution filter. This approach rests on the assumption that the blurring occurred as a diffusion process. If the actual image $f$ is an initial condition to the diffusion equation

$$
\kappa \nabla^ {2} g = \frac {\partial g}{\partial t},
$$

then by expanding a time-dependent $g(u,v;t)$ about a small value $\tau$ of time we obtain

$$
\begin{array}{l} f (u, v) = g (u, v; 0) = g (u, v; \tau) - \tau \frac {d g}{d t} (u, v; \tau) + O \left(\tau^ {2}\right) \\ = g - \kappa \tau \nabla^ {2} g + O \left(\tau^ {2}\right). \\ \end{array}
$$

Thus, $f$ may be restored by subtracting the Laplacian of $g$ from $g$ . This technique, commonly called unsharp masking, is especially appealing in our model, since we chose an interpolation scheme that forces interpolated regions of the image $g$ to satisfy Laplace's equation, $\nabla^2 g = 0$ .

In practice, the Laplacian is approximated using finite differences. Define

$$
\Delta_ {u} g (u _ {p}, v _ {q}) = g (u _ {p}, v _ {q}) - g (u _ {p - 1}, v _ {q}),
$$

$$
\Delta_ {v} g (u _ {p}, v _ {q}) = g (u _ {p}, v _ {q}) - g (u _ {p}, v _ {q - 1}). 
$$

Higher-order difference operators are defined by repeated first differencing, as in

$$
\Delta_ {u} ^ {2} g (u _ {p}, v _ {q}) = \Delta_ {u} g (u _ {p + 1}, v _ {q}) - \Delta_ {u} g (u _ {p}, v _ {q}),
$$

leading to

$$
\begin{array}{l} \nabla^ {2} g = \Delta_ {u} ^ {2} g \left(u _ {p}, v _ {q}\right) + \Delta_ {v} ^ {2} g \left(u _ {p}, v _ {q}\right) \tag {2} \\ = g (u _ {p + 1}, v _ {q}) + g (u _ {p - 1}, v _ {q}) + g (u _ {p}, v _ {q + 1}) + g (u _ {p}, v _ {q - 1}) - 4 g (u _ {p}, v _ {q}). \\ \end{array}
$$

Applying the Laplacian operator to an entire matrix may be viewed as a discrete analog of convolution. That is, we can find the Laplacian of a component $g(u_{p},v_{q})$ of the image matrix by multiplying component-wise each value in the $3\times 3$ neighborhood around the component with the "mask" matrix

$$
\left( \begin{array}{c c c} 0 & 1 & 0 \\ 1 & - 4 & 1 \\ 0 & 1 & 0 \end{array} \right)
$$

and then summing all of the components of the resulting $3 \times 3$ matrix.

In light of (2), we wish to convolve with the mask

$$
\left( \begin{array}{c c c} 0 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{array} \right) - \alpha \left( \begin{array}{c c c} 0 & 1 & 0 \\ 1 & - 4 & 1 \\ 0 & 1 & 0 \end{array} \right) = \left( \begin{array}{c c c} 0 & - \alpha & 0 \\ - \alpha & 1 + 4 \alpha & - \alpha \\ 0 & - \alpha & 0 \end{array} \right),
$$

which subtracts the Laplacian times a control parameter $\alpha$ (analogous to $\kappa \tau$ ) from the function itself.

Natural extensions of this technique are to use higher-order approximations of the Laplacian operator (thus convolving with a larger mask) or to enhance the mask with some sort of high-pass filter. Considerations of time and computational complexity prevented our development of such techniques.

# Edge Detection

To make boundaries more visible in our images, it is useful to detect edges and generate corresponding line drawings. 
To do this, we use a variation of the finite differences method already discussed. At each point $(x_{i},y_{j},z_{k})$ , we evaluate

$$
\Delta_ {2 x} (i, j, k) = A (i - 1, j, k) - A (i + 1, j, k)
$$

$$
\Delta_ {2 y} (i, j, k) = A (i, j - 1, k) - A (i, j + 1, k)
$$

$$
\Delta_ {2 z} (i, j, k) = A (i, j, k - 1) - A (i, j, k + 1).
$$

This set of definitions centers the difference around the point in question. We construct a new array $\Gamma$ , where

$$
\Gamma (i, j, k) = \max \{\Delta_ {2 x} (i, j, k), \Delta_ {2 y} (i, j, k), \Delta_ {2 z} (i, j, k) \},
$$

which gives a measure of how fast the values of $A$ are changing through a given point.

This technique has many strong points:

- The values for $\Gamma$ are easy to compute.
- The method does not bias the type of edge encountered (straight, curved, diagonal, etc.).
- The values of $\Gamma$ remain within the original grayscale range [Rosenfeld and Kak 1982].

We then apply our interpolation techniques to a plane passing through the box of data given by $\Gamma$ rather than by $A$ . Once we have this new two-dimensional image, we convert it into a binary image by applying a threshold condition: Every point with an interpolated value above the threshold value is made black, while every point with an interpolated value below the threshold value is made white. This converts regions where the difference values are high into black regions against a white background. Since edges will exhibit large differences in density, the black regions will thus represent edges.

# Analysis of the Model

We implemented our plane-imaging algorithm on a Unix graphics workstation and analyzed the algorithm's behavior on several different data sets. One data set is a simulated MRI scan of a human brain, an example of a data volume that the model would expect to receive in a real-world medical imaging environment. 
We created several other contrived data sets to test our algorithms on known structures and thereby expose limitations of the model.

# Computer Implementation

We built our model as an interactive graphical application using the C++ language and the OpenGL 3-D graphics library. We designed this program to possess many of the features that a plane-imaging system used in an actual medical situation would contain. It takes as input an arbitrarily sized 3-D block of byte values (0-255) representing the density array $A$ . The program presents two display windows to the user:

- The first window is a view of the $(x, y, z)$ coordinate space, showing wireframe representations of both the input data array and the projection plane.

- The second window shows the scanned image that lies on the plane as generated by our plane-imaging algorithm.

The user can use the keyboard and mouse to move the imaging plane to different positions $(x_0, y_0, z_0)$ and angles $(\phi, \theta)$ within $A$ , viewing in real time how the projected image changes. We used this program to generate all of the figures in this paper.

The coded algorithms for interpolating in $A$ and for creating the sharpened image in the projection plane are all straightforward translations of the mathematical expositions given earlier. We imparted some extra intelligence to the imaging algorithm so that it can determine in which part of the $(u, v)$ plane the source data lie (see Appendix).

The user can control all parameters of the model, including the sharpening control factor and the edge-detection threshold. Different interpolation techniques may be selected, and the sharpening and edge-detecting filters may be toggled as well. The program executes quickly enough, but since we paid little attention to creating optimized algorithms, there is much potential for speeding up operations such as the sharpening filter. 
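As an illustration of how the sharpening stage translates into code, here is a minimal sketch (ours, not the authors' C++/OpenGL implementation) of unsharp masking with the $3 \times 3$ mask from the Convolution Filter section; for simplicity, border pixels, where the mask overhangs the image, are copied unchanged.

```python
def sharpen(img, alpha=0.2):
    """Unsharp masking: convolve with the mask that subtracts alpha
    times the 5-point discrete Laplacian, i.e. out = g - alpha * lap(g).
    img is a 2-D list of grayscale values; border pixels are copied
    unchanged (a simplification for this sketch)."""
    rows, cols = len(img), len(img[0])
    out = [row[:] for row in img]
    for p in range(1, rows - 1):
        for q in range(1, cols - 1):
            lap = (img[p + 1][q] + img[p - 1][q] + img[p][q + 1]
                   + img[p][q - 1] - 4 * img[p][q])
            out[p][q] = img[p][q] - alpha * lap
    return out
```

On regions where the image satisfies the discrete Laplace equation, the filter leaves values untouched, which is the property the authors exploit for their interpolated regions.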

# Results on Brain MRI Data

The principal test data set for our model is a simulated human brain MRI volume containing 181 slices, each $181 \times 217$ pixels. These data are the output of a highly realistic MRI computer simulation [Cocosco et al. 1997] and are thus likely to reflect data from an actual MRI scanner that would be of diagnostic interest to a user.

# Exploration of Obliquely Oriented Structures

Figures 2-5 show sample output of our plane-imaging algorithm on the brain MRI data set, for several different plane orientations, both orthogonal and oblique. Viewing such output dynamically on a graphical computer display would allow a doctor to explore structures in the brain that lie on planes in any possible position and orientation.

# Fine Structure

The brain MRI data set demonstrates the merits of applying the sharpening filter. Figure 6 displays two brain images; the one on the left has been sharpened and the one on the right has not. The sharpened image lacks the overall blurriness of the unsharpened image. It also displays more clearly the fine structure in the brain that would be of interest to a surgeon planning a minimally invasive procedure.

![](images/7926833a57ff3f1c22b58f4cf24820835f4f22874acee91a4c016c8c6210adf3.jpg)
Figure 2. Image centered at (91, 108, 120) with $(\phi, \theta) = (0, 0)$ . The imaging plane is orthogonal to the $z$ -axis.

![](images/1ee6f5351808a73a7e6ba0afefa4377c76d97deaa6968d3c67f3c0d3d2eac772.jpg)
Figure 3. Image centered at (91, 108, 120) with $(\phi, \theta) = (35, 75)$ . The imaging plane lies obliquely within the data volume.

![](images/d386eeb3dc19fc23f0b16845838feb3f16601d02e6e3e8fab6c590dc47c104d1.jpg)
Figure 4. Image centered at (110, 100, 70) with $(\phi, \theta) = (130, -30)$ .

![](images/903b8221aa4aea06083b554b3089780d110695a65825466f5a06dd455e2b85f0.jpg)
Figure 5. Image centered at (100, 100, 48) with $(\phi, \theta) = (-20, 90)$ . 

![](images/63cf8de69c8ed914025ea9545bb31bb552d36ddee33022fd7467bb22d9069544.jpg)
Figure 6. Comparison of two oblique plane scans of the brain MRI data. The image on the right has been passed through the sharpening filter, while the image on the left has not. The imaging plane is centered at (85, 118, 58) with $(\phi, \theta) = (30, 0)$ .

![](images/8f70173c7beded8bb2a5125247a37718810ca2106b1098f0b7d9e4516b893802.jpg)

# Line Drawings

The edge-detection algorithm that we described can be used to generate black-and-white line drawings of the kind found in an anatomical atlas (see Figure 7). Such drawings are useful for seeing clear boundaries of structures in the brain.

![](images/23fb66a15da7be96c02e48300cbf0181d944e0c2516d85847c01188eaca77b34.jpg)
Figure 7. Black-and-white line drawing of an oblique plane through the brain MRI data. Lines were drawn using edge detection. The imaging plane is the same as in Figure 3.

# Results on Contrived Data Sets

A question that arises about our model is whether the sharpening filter removes the blurriness introduced by the interpolation algorithm or just the blurriness inherent in the data array. To attempt to answer this question, we examined the operation of our model on a data set with perfectly discrete boundaries—a data set consisting of "slabs," that is, parallel, evenly spaced planes of some nominal depth containing maximum-intensity pixels. Figure 8 shows the close-up results of passing an imaging plane through this volume at an angle of $35^{\circ}$ to the slabs (an arbitrary choice). Comparing the clarity of the slab edges in the two different images demonstrates that our sharpening filter performs well in removing effects of interpolation, provided that boundaries cross the image plane at a sufficiently large angle. 
When the imaging plane is at a small angle to the slabs, as in Figure 9, we see blurred edges even in a sharpened image—an inevitable consequence of our 3-D interpolation scheme.

![](images/190300e08aa0a22525a9ea3c3b611afc8633a0588c5bc126c0f07f739c95394a.jpg)
Figure 8. Comparison of two oblique plane scans of the "slabs" data. The angle of incidence between the imaging plane and the slabs is $35^{\circ}$ . The image on the right has been passed through the sharpening filter, while the image on the left has not; resulting images are close-up views.

![](images/6d39163c1c0ae2ec824aa558d688af36dd33ba9b554811f8aeea725713c69fbb.jpg)

The values in the data set can be viewed as the discrete extreme of behaviors that our model can expect to encounter. To examine the opposite, continuous extreme, we created a data set whose intensity does not vary in a single $xy$ -plane but instead varies as a cubic function of $z$ . Figure 10 shows the projected image from a plane passing through the data volume at an angle of $35^{\circ}$ to the $xy$ -plane. The interpolation algorithm in our model linearizes much of the nonlinearly varying data, and subsequently the sharpening filter introduces distortion.

# Limitations of the Model

Our model has several limitations:

- Arrays of non-uniformly or anisotropically sampled data are not considered.
- Objects with planar edges parallel or nearly parallel to the projection plane are imaged inaccurately.
- Interpolation and smoothing generate minor distortions that, given more time and work, may be alleviated by the use of higher-order schemes. Using additional contrived data structures as algorithm input may further illuminate the causes of such distortions.

![](images/8b9959aa2e89b1594b7748ae455f036d016048df8bac42e990f1711355924328.jpg)
Figure 9. A nearly parallel oblique plane scan of the slabs data (angle of incidence is $5^{\circ}$ ) with sharpening filter applied. Edge ramping effects are visible. 

![](images/a46073a62f37d49e6fa04607e3769156d8b6831cb59bc4a48d0e955b6cf7966a.jpg)
Figure 10. Oblique plane scan of a continuously varying data volume. Some linearization effects of the sharpening filter are visible.

# Conclusions

Our computer implementation provides convincing pictures that illustrate the ability of our model to depict density variations in oblique planes. Our algorithm surpasses most existing algorithms by allowing any planar orientation, by generating images quickly, and by including sharpening and edge-detecting filters. The results of testing our model on simulated brain MRI images demonstrate its applicability to real-world medical imaging.

# Appendix: Bounds of the Imaging Plane

For computational optimization of our plane-imaging algorithm, we derive bounds for the region of intersection of the plane and the box of data. This allows our algorithm to iterate over the smallest $(u,v)$ region necessary to ensure that all of the intersected data have been obtained.

The transformation $T(u,v)$ gives the three equations

$$
x = \cos \phi \cos \theta u - \sin \theta v + x _ {0}
$$

$$
y = \cos \phi \sin \theta u + \cos \theta v + y _ {0}
$$

$$
z = - \sin \phi u + z _ {0}.
$$

We can rewrite these to get

$$
x - x _ {0} = \cos \phi \cos \theta u - \sin \theta v
$$

$$
y - y _ {0} = \cos \phi \sin \theta u + \cos \theta v
$$

$$
z - z _ {0} = - \sin \phi u.
$$

We want to know when the plane crosses an edge of the box of data in $\mathbb{R}^3$ . This edge will be constant in two of the three variables $(x,y,z)$ . We can plug these two values into the appropriate equations above and solve for $u$ and $v$ .

For instance, suppose we want to know at what point $(u,v)$ the plane intersects one of the edges of the data box that is parallel to the $z$ -axis. In this case, we know what $x$ and $y$ are since we know the size and position of the box in $\mathbb{R}^3$ . 
Using the equations for $x - x_0$ and $y - y_0$ , we have:

$$
\left( \begin{array}{c} x - x _ {0} \\ y - y _ {0} \end{array} \right) = \left( \begin{array}{c c} \cos \phi \cos \theta & - \sin \theta \\ \cos \phi \sin \theta & \cos \theta \end{array} \right) \left( \begin{array}{c} u \\ v \end{array} \right).
$$

Notice that if $\phi = \pi n + \frac{\pi}{2}$ , where $n \in \mathbb{Z}$ , this transformation does not have an inverse. In this case, the plane is parallel to the $z$ -axis, and hence to any vertical edge of the data box. If $\phi \neq \pi n + \frac{\pi}{2}$ , then we can invert the above transformation to obtain

$$
\left( \begin{array}{c} u \\ v \end{array} \right) = \frac {1}{\cos \phi} \left( \begin{array}{c c} \cos \theta & \sin \theta \\ - \cos \phi \sin \theta & \cos \phi \cos \theta \end{array} \right) \left( \begin{array}{c} x - x _ {0} \\ y - y _ {0} \end{array} \right).
$$

We can perform similar operations for the other two directions in $\mathbb{R}^3$ with similar restrictions on $\phi$ and $\theta$ . We thus obtain 12 points in the $uv$ -plane, or fewer if the plane is parallel to one of the axes. With these 12 values, we simply choose the largest and smallest values for $u$ and $v$ to get a rectangle that tightly bounds the intersection of the plane and the data box.

# References

Andrews, H.C., and B.R. Hunt. 1977. Digital Image Restoration. Englewood Cliffs, NJ: Prentice-Hall.
Cocosco, Chris A., Vasken Kollokian, Remi K.-S. Kwan, and Alan C. Evans. 1997. BrainWeb: Simulated Brain Database. http://www.bic.mni.mcgill.ca/brainweb/.
Rosenfeld, Azriel, and Avinash C. Kak. 1982. Digital Picture Processing. 2 vols. San Diego, CA: Academic Press.
Russ, John C. 1995. The Image Processing Handbook. Boca Raton, FL: CRC Press.

# A Tricubic Interpolation Algorithm for MRI Image Cross Sections

Paul Cantrell

Nick Weininger

Tamás Németh-Csőri

Macalester College

St. Paul, MN 55105

Advisor: Karla V. 
Ballman + +# Introduction + +We designed and implemented a program capable of: + +- taking in a large three-dimensional array of one-byte grayscale voxels (volume "pixels"), the output from an MRI machine; +- slicing through that array along an arbitrary plane; +- and using interpolation to produce an image of the cross section described by the plane. + +We allow the user to select the plane of cross section by specifying three points that should be in the plane, or by specifying one point and two angles. We account for the possibility of voxels of unequal size in different dimensions, but presume they are evenly spaced in each dimension. We then use a tricubic interpolation algorithm to produce a cross-sectional image. This method is our extension of bicubic interpolation, an algorithm used widely with two-dimensional images. We chose the tricubic method because it offers an optimal balance of accuracy and computational speed. Finally, we allow the user to "stain," or color, important portions of the data. + +We tested the program on simple geometric figures to verify its correctness. We then tested it on actual MRI image slices of four brains, with very satisfactory results. We found that important image features were preserved well and that image staining was useful in visualization. The interpolation algorithm runs in linear time; it produces an image from a $256 \times 256 \times 256$ data volume in a few seconds. + +Finally, we constructed several data sets that point out the limitations of our algorithm and of the problem itself. These limitations concern the behavior of our algorithm in areas of maximal uncertainty, farthest from the sample points. + +# Design Considerations + +# Typical Uses of MRI Images + +As described in Rodriguez [1995], MRI scans of various parts of the body are used to diagnose a wide range of disorders. One of the most common uses is the detection of abnormal bodies in the brain, such as tumors, cysts, and hematomas. 
Because of variations in appearance of both healthy tissues and tumors, it is critical that the sharpness or fuzziness of boundaries, as well as the general shape and brightness of regions, should be preserved when taking cross sections. + +An analysis program should rely on intuitive spatial understanding and also provide a straightforward way for a user to specify a volume for highlighting in subsequent cross sections. + +# Characteristics of Image Data + +# Data Size + +The usual data size for one MRI slice is $256 \times 256$ grayscale pixels. In order to get data covering an entire 3-D object, multiple slices are required. The time taken for each scanning slice is dependent on a parameter to the scanning process called repetition time; a typical slice might take several minutes to scan, though multiple slices may sometimes be scanned simultaneously [Hornak 1997]. Since the amount of time that patients can spend immobile in the machine is limited, the number of slices that can be taken is small compared to the slice resolution. The database for our real-world test data [Johnson and Becker 1997] typically took 25-60 slices to scan an entire brain. + +This means that the actual volume of space represented by each voxel is likely not to be a cube. Instead, it will be a rectangular prism, significantly longer in one dimension than in the other two; the algorithm will need to take this fact into account so as not to produce distorted output. Furthermore, if a voxel is much longer along one axis than along the others, much more interpolation in that dimension will be required, so images taken in planes parallel to that axis may be especially inaccurate. + +# Data Artifacts + +Many different types of artifacts may be present in MRI image data; some are results of incorrect operation or configuration of the machine, while others + +are products of the physical properties of the scanning process [Ballinger 1997; Hornak 1997]. 

Since most of these types of artifacts reflect problems with the machine's configuration that may produce misleading images, it is important that they be preserved in cross section, so that the MRI operator can see them and recalibrate the machine appropriately.

# Sampling Characteristics

The manifestation of all of these image characteristics in data is fundamentally tied to the properties of discrete sampling. We can classify data in which discrete sample points describe a continuous function (as our data do) as either undersampled, oversampled, or critically sampled (see Figure 1), depending on how the sampling resolution corresponds to the actual detail in the image.

![](images/a8f68047b3f2e1290ba12387277cf58a23d62a35648092b79c952157abdd0573.jpg)
a. Undersampled.

![](images/bc3545faa77a907615ce0ffdef32249af59e0549aa7875f458da562e1ac59c4c.jpg)
b. Oversampled.

![](images/3d4e75fc3a49ee482021bdf8337be73ef51e55acef2abaf9b1deac4edd7479c8.jpg)
c. Critically sampled.
Figure 1. Typical data sampling characteristics for images.

- Oversampled data: The sample grid is finer than the image detail. Such images tend to look very blurry, and neighboring grid points tend to vary only slightly and contain essentially redundant information. This high level of detail lends these images to accurate interpolation and enhancement.
- Undersampled data: The image contains detail finer than the sample grid and there is little correlation between neighboring pixels, especially at the edges of objects in the image. If the actual sample area for each pixel is smaller than the sample area that the pixel represents, the image may be characterized by jagged edges and sharp contrasts. Such images make interpolation and enhancement a matter of heuristics and guesswork.
- Critically sampled data lie at the border of undersampling and oversampling, and MRI data fall into this category. 
As with oversampled data, the edges of boundaries tend to be unaliased (smooth), and the image may even appear slightly blurry; however, as with undersampled data, the detail at the pixel level is important, and interpolation possibilities are limited.

# Interpolation Algorithms

Our input data come as a set of image values taken at discrete points, but the cross sections that we want to take may not pass exactly through any of these points. Therefore, we need a way to estimate image values at arbitrary points based on the image values at the sample points. That is, based on our array of samples $A_{i,j,k}$ , we want an interpolating function $f: \mathbb{R}^3 \to \mathbb{R}$ such that

$$
f (i, j, k) = A _ {i, j, k}
$$

when $i, j$ , and $k$ are integers, and such that $f$ takes on reasonable values for nonintegral $i, j, k$ . (This stipulation that the interpolating function match the sample points is reasonable, as MRI images tend to be very clean and have a high signal-to-noise ratio.)

In choosing an interpolating function, we had to make a trade-off between accuracy of image production and running time, limited by the typically critically sampled nature of MRI data. We chose a cubic method, which we found to be surprisingly fast and quite accurate for actual MRI data.

# Tricubic Interpolation

Cubic interpolation is a special case of Lagrange interpolation, which is a simple method of finding the unique polynomial of degree $(n - 1)$ that passes through $n$ data points [Mnuk 1997].

We consider first the one-dimensional case. Cubic interpolation begins with the four sample points closest to the target point $x$ — its two nearest neighbors on either side, $\lfloor x \rfloor - 1, \lfloor x \rfloor, \lceil x \rceil$ , and $\lceil x \rceil + 1$ — and fits a cubic function $p: \mathbb{R} \to \mathbb{R}$ to them; $p(x)$ gives the interpolated value at $x$ (see Figure 2). 
Note that the particular cubic described by these four points around $x$ gives the values only for the region between the middle two. Thus, the function $f$ that interpolates the whole image is a piecewise composite of many different cubic functions.

![](images/08f5f0b3dd18da01b6538eea7843ea1a4acb61c0470ee3efd0d357e2c90eb847.jpg)
Figure 2. One-dimensional cubic interpolation.

![](images/b60955b88588a02760a3e65caf3954ceac5cfd4baf67d5655d780ef244723495.jpg)
Figure 3. Two-dimensional cubic interpolation.

This procedure generalizes nicely to multiple dimensions. It does not require, as one might expect, the construction of an elaborate multivariate polynomial or the solution of a large system of equations; in fact, it is sufficient to perform the interpolation in each dimension consecutively.

It is perhaps easiest to visualize this process in two dimensions first [Makivic 1996]. As shown in Figure 3, we separate the 16 points surrounding the target point into four lines of four points each. We do a one-dimensional cubic interpolation along each of these lines and evaluate the resulting cubics at points along a perpendicular line containing the target point. We then use these four evaluated points to interpolate another cubic that we can evaluate at the target point.

We can then extend this to three dimensions in the obvious way: Split the 64 points into four planes of 16 points each. In each of these planes, perform the two-dimensional process to get four interpolation points along a line through the target point. Finally, perform an interpolation to get a function value for our target point. This requires a total of 21 one-dimensional interpolations, five for each plane plus the final one. The process is illustrated in Figure 4. It runs in time linear in the total number of voxels in the volume.

![](images/d4eae065a746b2f8eb5607a5484a6950d28dca2b21c58e4e8f43470d9b23f0a6.jpg)
Figure 4. Schematic of three-dimensional cubic interpolation. 
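The dimension-by-dimension procedure just described can be sketched in a few lines; this is our illustrative Python, not the authors' program. `cubic1d` is the Lagrange cubic through four equally spaced samples, and `tricubic` applies it 16 + 4 + 1 = 21 times to a 4 × 4 × 4 neighborhood, with `(tx, ty, tz)` the fractional offsets of the target point between the middle two samples in each dimension.

```python
def cubic1d(p0, p1, p2, p3, t):
    """Lagrange cubic through samples at positions -1, 0, 1, 2,
    evaluated at t (t in [0, 1] lies between the middle two)."""
    return (-p0 * t * (t - 1) * (t - 2) / 6
            + p1 * (t + 1) * (t - 1) * (t - 2) / 2
            - p2 * (t + 1) * t * (t - 2) / 2
            + p3 * (t + 1) * t * (t - 1) / 6)

def tricubic(block, tx, ty, tz):
    """block[i][j][k] (4x4x4) holds the 64 samples around the target.
    Interpolate along z within each of the 16 rows, then along y in
    each of the 4 planes, then once along x: 21 cubic interpolations."""
    col = []
    for i in range(4):
        row = []
        for j in range(4):
            row.append(cubic1d(*(block[i][j][k] for k in range(4)), tz))
        col.append(cubic1d(*row, ty))
    return cubic1d(*col, tx)
```

Because each stage reproduces polynomials of degree up to three exactly, one can also check numerically that permuting the order of the dimensions leaves the final value unchanged.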
+ +One key question arises: Does it matter how we choose the planes, and the lines within each plane? It turns out that the final value obtained at the target point is independent of the order in which the dimensions are chosen for the interpolation. [EDITOR'S NOTE: We omit the authors' proof.] + +Cubic interpolation is particularly appropriate to critically sampled data. It relies on some correlation and continuity between neighboring points, producing smooth curves that fit slightly smoothed object edges very well without introducing artificial detail not present in the original image. However, it does not oversmooth or mangle detail past the single-pixel level, and it introduces minimal artifacts. Although cubic interpolation performs poorly on undersampled or jagged data, it is appropriate for MRI data. + +Furthermore, because it involves only simple arithmetic and is linear in the size of the data set, bicubic interpolation is fast enough to produce typical MRI images in close to real time without a high-end workstation. The results of a two-dimensional bicubic enlargement are in Figure 5a. + +![](images/517e4318337a6cec92017232b1af184c45dc465d5e5991bbfad23117fa2dd3da.jpg) +a. Bicubic. + +![](images/f6e5861eb788a9b54cac9a5d008998db0777f9c18c3074af13aa86df47e95dfb.jpg) +b. Nearest-neighbor. + +![](images/892a713ed1e90f9cab6389e00bc16c3e1a31a890094a75b8f047f25b6de3c754.jpg) +c. Bilinear. +Figure 5. Enlargements by various interpolation algorithms. + +# Alternative Interpolation Methods + +We considered and rejected several alternative interpolation methods. + +# Nearest-Neighbor Interpolation + +In nearest-neighbor interpolation, the image value at an arbitrary point is the value of the nearest sample point. This is a very fast algorithm—it requires only a single rounding operator for each dimension. 

Nearest-neighbor interpolation is often appropriate to undersampled data because it preserves the jaggedness of such data and does not presume that the image fades smoothly across sharp edges. It would probably be the most appropriate method for cross sections of black-and-white line-art medical diagrams.

However, for these same reasons, it performs very poorly on critically sampled data, and its rounding can actually locally distort image proportions where a smoother interpolation would preserve them (Figure 5b).

# Linear Interpolation

Linear interpolation takes the image value at a point to be the average of the image values of all its known neighbors, weighted by how far away they are. This is also a very simple and fast algorithm. However, it is little more than a blurring of the nearest-neighbor method and shares many of its problems. In particular, it leaves object edges jagged and uneven, even when they are not aliased, and tends to blur excessively (Figure 5c).

# Convolution-Based Methods

There is a wide variety of much more intricate interpolation methods based on the convolution of the image matrix, including Fourier-based methods, CMB interpolation, and Wiener enhancement.

Although they perform extraordinarily well, for several reasons they are inappropriate to the task at hand. These methods are primarily targeted at oversampled data and tend to act more as de-blurring algorithms than interpolators. Furthermore, since they require taking the convolution, they are not linear in the data set size and tend to be quite slow (Mahan [1996] describes a run of dozens of hours to enhance a small image of Saturn).

Since MRI data tend not to be particularly blurry, and since enlargement is not our goal, these computationally expensive algorithms are simply overkill. 
Furthermore, especially in critically sampled data, they are likely to produce artifacts with visually striking large-scale structure, which could be misleading to a reader of the image and lead to a misdiagnosis. + +# Image Enhancement + +Either during or after interpolation, we have the option of enhancing the image to sharpen blurred regions, enhance edges, or otherwise bring out details. However, we found that the tricubic method performed well enough that most of these methods were either inappropriate or unnecessary. The human eye is extremely adept at interpolation of obscured detail, and tricubic interpolation tends to capitalize on this by producing blurry but suggestive output in regions of uncertainty. The enhancement algorithms that we examined revealed no details that the eye could not already interpolate. Given the dangers in introducing artificial detail in medical imaging, we decided to leave our tricubic method unenhanced. + +# Anti-Aliasing + +For sharp or undersampled data, it can be beneficial to blur high-contrast edges slightly. However, this is counterproductive in our case; the detail in our images is significant, but the edges of objects are not generally aliased. + +# Sharpening + +Traditional sharpening algorithms work by moving a pixel's value away from the average of its neighbors, perhaps weighted so that the sharpening will be localized to the edges of objects. + +A major problem with this sort of sharpening is that it can produce jagged edges and exaggerate the effects of noise in data (Figure 6). Since tricubic interpolation can produce blurry output in heavily interpolated regions, such sharpening could be of use. However, we found that it made little visible difference in cross sections of real data and revealed no significant new detail. + +![](images/0b102905bf596536b294eaf497ce9b021be37d5182c2fb5ef4e7a12962f17bd7.jpg) +Figure 6. Oversharpening increases noise and aliases edges. 
+ +![](images/4a6ffa04297237d4b87b84faa32295b8d729b8ec12e2c9926da830bae14a95f2.jpg) + +# Edge Fitting + +Many algorithms work to enhance jagged or blurry edges by finding the edges in an image (the areas where pixels differ from the average of their neighbors), fitting curves to them, and enhancing them, either by anti-aliasing or selective sharpening. Such algorithms, however, are usually used as an aesthetic enhancement and not in situations that call for scientific accuracy; although their output can be very pleasing to the eye, they are guilty in the extreme of introducing artificial detail. Thus, although they might produce visually pleasing results in our problem, they should be used very judiciously if at all. + +# Image Staining + +Even with very clear interpolations and a graphical aid to help visualize the orientation of various cross sections, it can be difficult for a human to mentally compose cross sections into a solid and to identify the same object in different cuts. To assist in this visualization process, it would be nice to allow the user to "stain" portions of the data with a color. A physician could mark a reference point or an anomalous object and be sure of its location in different cross sections. Such a feature should work so as to make a colored area visible even in cross sections that do not exactly intersect the region that the user originally marked. Thus, a marked region should be "feathered" or blurred outward slightly from the plane in which the user places it. + +# Implementing Our Algorithm + +# Specifying the Plane of Cross Section + +We allow the user to specify the plane for a cross section by either: + +- selecting three noncollinear points in the plane of the cross section, a good method for selecting an initial arbitrary cut based on image features; or +- selecting an arbitrary point to be included and specifying two angles, known as Euler angles. 
The first angle is the angle between the $xy$ -plane and the plane of cross section; the second is the angle between the $x$ -axis and the intersection of the cross-sectional plane with the $xy$ -plane. This is a good method for continuously traversing the image or for fine-tuning the orientation of a particular cut.

To calculate the cross section, we need to transform the input data into a triplet $(\vec{p},\hat{x},\hat{y})$ , where $\vec{p}$ is an arbitrary point on the plane and $(\hat{x},\hat{y})$ forms an orthonormal basis for the plane.

# Three-Point Representation

To obtain $(\vec{p},\hat{x},\hat{y})$ from the three-point representation $(\vec{p_1},\vec{p_2},\vec{p_3})$ , we take the cross product of the two vectors $\vec{p_2} -\vec{p_1}$ and $\vec{p_3} -\vec{p_1}$ to produce the normal vector $\vec{n}$ , which is perpendicular to the plane. Then we solve the system of equations

$$
\vec {n} \cdot \hat {x} = \vec {n} \cdot \hat {y} = 0, \qquad \hat {x} \cdot \hat {x} = \hat {y} \cdot \hat {y} = 1, \qquad \hat {x} \cdot \hat {y} = 0, \qquad \hat {x} _ {z} = 0
$$

for $\hat{x}$ and $\hat{y}$ . The first two equations ensure that the basis vectors are in the plane; the next two ensure they are of unit magnitude; the fifth makes them perpendicular. Those first five equations in six unknowns do not specify a unique basis, so we need one more constraint. We choose $\hat{x}_z = 0$ as that last constraint because it simplifies the resulting formulas greatly. Finally, we let $\vec{p} = \vec{p_1}$ .

# Point Plus Euler Angles

For input of the form $(\vec{p},\phi ,\theta)$ , we think of the plane as a rotation of the $xy$ -plane, with the origin set to $\vec{p}$ . We first rotate the plane by $\phi$ around the $x$ -axis, and then rotate it by $\theta$ around the $z$ -axis.
The resulting transformations are given by

$$
\hat {x} = R _ {\theta} R _ {\phi} \hat {i}, \quad \hat {y} = R _ {\theta} R _ {\phi} \hat {j},
$$

where $\hat{i},\hat{j}$ are the standard basis for the $xy$ -plane and

$$
R _ {\phi} = \left( \begin{array}{c c c} {1} & {0} & {0} \\ {0} & {\cos (\phi)} & {- \sin (\phi)} \\ {0} & {\sin (\phi)} & {\cos (\phi)} \end{array} \right), \qquad R _ {\theta} = \left( \begin{array}{c c c} {\cos (\theta)} & {- \sin (\theta)} & {0} \\ {\sin (\theta)} & {\cos (\theta)} & {0} \\ {0} & {0} & {1} \end{array} \right).
$$

The vectors that we obtain from these methods are not always the best ones for our purposes. We would like the display orientation to correspond to the user's concept of the volume; that is, up should remain up and left should remain left whenever possible. Therefore, we try to align the basis vectors as closely as possible with the $xy$ -plane's basis. To do this, we rotate the basis vectors in the plane so as to maximize $\hat{x} \cdot \hat{i}$ , thus bringing the $\hat{x}$ vector as close as possible to the true $x$ -axis. We then reverse the direction of $\hat{y}$ (effectively flipping the image over) if that reversal increases the value of $\hat{y} \cdot \hat{j}$ .

Once we have our adjusted basis, we calculate where, if anywhere, the plane intersects each of the 12 edges of the data volume. These edge intersection points define the boundary of the cross section of the volume. We define the data volume to be the parallelepiped with corners at $(0,0,0)$ and $(x_{\max},y_{\max},z_{\max})$ . Each edge has two fixed coordinates; thus, we can compute each edge intersection by solving two equations in two unknowns.
For example, to compute the point of intersection with the edge running along the $z$ -axis, we solve the equation system + +$$ +p _ {x} + c _ {1} \hat {x} _ {x} + c _ {2} \hat {y} _ {x} = 0, \qquad p _ {y} + c _ {1} \hat {x} _ {y} + c _ {2} \hat {y} _ {y} = 0 +$$ + +(where $p_x$ is the $x$ -component of $\vec{p}$ , $\hat{x}_x$ is the $x$ -component of $\hat{x}$ , and so on) for $c_1$ and $c_2$ . This system will always have a unique solution unless the plane is parallel to the $z$ -axis. If that happens, either the plane does not intersect the $z$ -axis, or else the $z$ -axis lies in the plane; in the latter case, we take the two endpoints of the edge at 0 and $z_{\max}$ to be the intersection points. + +If we have unique values for $c_{1}$ and $c_{2}$ , we then solve for the $z$ -coordinate of the intersection: $z_{\mathrm{intercept}} = p_z + c_1\hat{x}_z + c_2\hat{y}_z$ . If $z_{\mathrm{intercept}} \in [0, z_{\mathrm{max}}]$ , then the plane does indeed intersect this edge of the data volume. Otherwise, it intersects the line defined by the edge at a point outside the volume. + +When we have all the edge intersection points, we can define a rectangle bounding the cross section in terms of the basis vectors: Just take the maximum and minimum values of $c_{1}$ and $c_{2}$ over all the points. Finally, we calculate the upper left corner of the bounding rectangle and proceed to the interpolation. + +# Performing the Interpolation + +At this phase in the computation, we scale the individual components of the $\vec{p},\hat{x}$ , and $\hat{y}$ to account for the possibility of voxels with different sizes in different dimensions, which could result from MRI slices taken far apart. In other words, if the actual size of a voxel is $(a b c)$ , we scale from the basis + +$$ +(1 0 0) \quad (0 1 0) \quad (0 0 1) +$$ + +(which reflects geometric reality) to the basis + +$$ +(a 0 0) \quad (0 b 0) \quad (0 0 c) +$$ + +(which is appropriate for our data array). 
This method presumes that the size of a voxel in each dimension is constant and thus that MRI slices are spaced evenly. + +The cross-section sample points are now described by $\vec{p} + a\hat{x} + b\hat{y}$ , where $a$ and $b$ are integers. From this, we can easily construct a double loop to traverse the cross section. + +At each of these cross-section sample points, we perform a tricubic interpolation. Since doing so involves two neighbor points in each direction, we define the value of a sample point outside the data array to be a uniform dark gray, so that we can interpolate for points near the edge. + +The assumption that the data slices are equally spaced allows a number of simplifications in the Lagrange polynomials and thus in the code to perform the interpolation. Allowing for uneven voxel size within a dimension (as might result from an uneven series of slices) would require substantial extension of this portion of the program. + +# Performing Image Staining + +[EDITOR'S NOTE: We omit the authors' description of implementation of the staining feature.] + +# Testing the Algorithm + +# Correctness Testing + +As a simple test that the algorithm was working properly, we used it to take sections at various angles through two different geometric objects (see Figure 7): + +- a cube, filled with smaller cubes alternating black and white in a checkerboard pattern; and +- a torus, filled according to a variable grayscale gradient, with three perpendicular cylinders of different diameters, filled with white, intersecting at the center of the torus. + +We chose these objects because their correct cross sections at any given angle are readily identifiable. The algorithm did indeed take correct cross sections of these objects at a variety of angles. 
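The sampling loop and tricubic interpolation described above can be sketched as follows (a minimal sketch in Python with NumPy; the names, the unit-spaced basis, and the particular dark-gray value for out-of-volume points are our assumptions, and voxel scaling and the authors' Lagrange-polynomial simplifications are omitted):

```python
import numpy as np

GRAY = 64.0  # assumed value for sample points outside the data volume

def cubic(f, t):
    """Lagrange cubic through samples f at positions -1, 0, 1, 2,
    evaluated at t in [0, 1]; the four weights sum to 1."""
    fm1, f0, f1, f2 = f
    return (-t * (t - 1) * (t - 2) * fm1 / 6
            + (t + 1) * (t - 1) * (t - 2) * f0 / 2
            - (t + 1) * t * (t - 2) * f1 / 2
            + (t + 1) * t * (t - 1) * f2 / 6)

def tricubic(vol, x, y, z):
    """Tricubic interpolation of `vol` at a real-valued point, using
    two neighbor samples on each side in every direction."""
    i, j, k = int(np.floor(x)), int(np.floor(y)), int(np.floor(z))
    u, v, w = x - i, y - j, z - k

    def at(a, b, c):
        # Outside the data array, fall back to a uniform dark gray.
        inside = (0 <= a < vol.shape[0] and 0 <= b < vol.shape[1]
                  and 0 <= c < vol.shape[2])
        return float(vol[a, b, c]) if inside else GRAY

    # Interpolate along z, then y, then x.
    planes = [[cubic([at(i + di, j + dj, k + dk) for dk in (-1, 0, 1, 2)], w)
               for dj in (-1, 0, 1, 2)] for di in (-1, 0, 1, 2)]
    rows = [cubic(row, v) for row in planes]
    return cubic(rows, u)

def cross_section(vol, p, xhat, yhat, na, nb):
    """Sample the plane p + a*xhat + b*yhat on an na-by-nb integer grid."""
    p, xhat, yhat = map(np.asarray, (p, xhat, yhat))
    return np.array([[tricubic(vol, *(p + a * xhat + b * yhat))
                      for b in range(nb)] for a in range(na)])
```

Because the Lagrange weights sum to 1, this sampler reproduces constant and linearly varying densities exactly in the interior; blurring appears only where the data vary below the sample spacing.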
+ +# Real-World Testing + +To provide test data reflecting conditions actually encountered in diagnosis, we downloaded four series of axial (xy-plane) MRI slices from the Whole Brain Atlas [Johnson and Becker 1997], a database of information on brain anatomy and pathology. We converted these slices into four three-dimensional arrays of test data using Adobe Photoshop. + +![](images/eb7f345cdba6124a7c769bbf832ec332380b0b83e9dd74d1e5ff6be57f9b9ae9.jpg) +Figure 7. Geometric test objects. + +![](images/24d337d69487e603aeb7bd98d71390cf1d0bafa18047b0cda389c96f2c2439b8.jpg) + +- Data Set 1 was from a normal, healthy brain; +- Data Set 2 was from a brain containing a type of tumor known as a glioblastoma; +- Data Set 3 was from a brain affected by cerebral hemorrhaging; and +- Data Set 4 was from the brain of a woman with advanced Alzheimer's disease. + +Examples of the resulting cross-sectional images are shown in Figure 8. We found that the algorithm worked better in perpendicular planes than in oblique planes, as expected. However, almost all the images were quite sharp and clear, preserving object boundaries and shapes excellently. Note in particular the series of oblique cross sections of Data Set 2, showing the shape and boundaries of the glioblastoma with great clarity. + +When our image data contained artifacts, these too were preserved; they occurred most notably in the images from Data Set 3, which also produced by far the lowest-quality images. The primary reason for this is that it contained only 24 slices, as against 54 for Data Set 1, 56 for Data Set 2, and 45 for Data Set 4. Thus, the pixels in this data set were much more "stretched" in the $z$ -direction, forcing the algorithm to do more vertical interpolation in taking cross sections. + +Finally, we found that staining worked well in highlighting important image features in different cross sections. 
[EDITOR'S NOTE: We must omit the authors' two-color figures, which strikingly highlight the hemorrhage in Data Set 3.] + +# Problems in Our Algorithm + +Our interpolation produces a slight increase in blurriness or fuzziness of edges characteristic of most image interpolation methods. Different cuts of the + +![](images/83be760a8242dbaceb1195a4b314e1293a6a3a2eb845305e8489c707a0e1a6ea.jpg) + +![](images/cef68ed5629247fe5eddbadcdf7b5e9d6e5e5aec94e727c5474139e7af2fc81a.jpg) +Slice from Data Set 1, the brain of a healthy elderly woman. + +![](images/59ab2f52dd722bb29fa07cb1611682c9fc84d0539fa2c2d75d49cae757e26275.jpg) + +![](images/0e397cce10fffcbd0906331445063dae96fde109ff9e44c2b1c8bb740062a6d4.jpg) +Slice from Data Set 2, the brain of a man with glioblastoma. The large bright mass at lower right is the tumor. + +![](images/bd66da7daa4bd0797afc52aed5115d0374bc5599d4ac75b0df29b731a2fc5940.jpg) + +![](images/deea721eca6dacd7dd4a87f6eb3c356cf7f68a025ed776b7ac2fad8438d7bd84.jpg) +Slice from Data Set 3, the brain of a man with acute cerebral hemorrhage (the dark mass on the right side of the brain). Note the image artifacts; this is our lowest-quality data set. + +![](images/d20a53efc9a6ff61cb38c05889fe9ff5336b80a48a1446503c5accdca1e3c133.jpg) +Figure 8. Sample slices from the four data sets. + +![](images/17768c625e699daf548bbb708e6a8eb381ae86254e89fa7381d0eb0f9d9fffe2.jpg) +Slice from Data Set 4, the brain of a woman in the advanced stages of Alzheimer's disease. Note the enlarged lateral ventricles (bright structures in the center of the brain) and unusually large, bright convolutions at the top of the image. + +data can produce widely varying results; compare (see Figure 9): + +A. a cut along a horizontal plane, in which the points in the cut correspond to actual data points and we have maximal clarity (Figure 9a); +B. 
a cut along a horizontal plane halfway between two actual slices, in which points on the slice are blurred between the neighboring layers (Figure 9b);
C. a vertical cut, in which the image is blurry in the $z$ dimension, where the voxels are very tall (Figure 9c); and
D. a maximally oblique cut, in which we cut across the long diagonal of a voxel (Figure 9d; note the jaggedness along the edge of the skull).

We do have some control over this blurring. Examples A and B are essentially the same image, but the former is slightly clearer; a smarter algorithm could take this into account. However, these are special cases; in general, we cannot avoid moving through areas that require a high degree of interpolation.

We can illustrate this problem with the extreme case of a three-dimensional checkerboard of $1 \times 1 \times 1$ -voxel black and white squares. Tricubic interpolation gives an even $50\%$ gray at points midway between data points; thus, a cross section of this object shows high-contrast checkering in areas with a low degree of interpolation, and gray in areas that are heavily interpolated. Note, for example, the effect of shifting a horizontal plane up half a pixel, just as we did in examples A and B above (Figure 10a). Because neighboring pixels differ so much, the blurring effect is much more pronounced.

A nearly horizontal but slightly oblique plane passes closer to and farther from data points, producing an interference pattern (Figure 10b). The gray areas in this image correspond to points where sharp edges would blur slightly in real sample data. We could move these interference patterns around by translating the plane of cross section and rotating our basis vectors. In certain circumstances, this can actually decrease the overall blurriness, as in examples A and B. However, we cannot eliminate the interference pattern without compromising the integrity of the interpolation.

It might seem that we could perturb the sample plane slightly, bending it toward data points (i.e., avoiding the gray areas). However, as the perturbation increases, this algorithm degrades to nearest-neighbor, which distorts proportions locally and essentially defeats the purpose of interpolation. We experimented with restricting this perturbation to a direction normal to the plane; however, we found that unadorned tricubic interpolation worked best.

It is important to realize that the checkerboard is a very poor model of actual data—it is nothing but very high-contrast noise, which is not at all typical of MRI data. Its usefulness lies in illustrating the fundamental problem of discrete sampling: We simply cannot avoid approximating the values for a significant portion of an arbitrary cut.

The blurring that tricubic interpolation produces, however, does not mangle detail beyond the one-voxel level. Even a $2 \times 2 \times 2$ -voxel checkerboard shows its

![](images/5c9499acb67687e462d5b2dae17feb274d88dd8752f0dff5e0925af8cc4ac2ae.jpg)

![](images/ab1267b17996b6a6c152db767bc4bed27573e7eebf0a0fcebf26a8b99ec36e9c.jpg)

a. A cut along a horizontal plane; points in the cut correspond to actual data points and we have maximal clarity.

![](images/2c5d0a8ace31a6b39be153fb812d574b3a4178e201178138652db4fd865377ff.jpg)

![](images/66b496599d779a5f7cd1e6f20b731bb3671a1d45e310fc5f557897f51085bcb7.jpg)

b. A cut along a horizontal plane halfway between two actual slices; points are blurred between the neighboring layers.

![](images/32ab14eecf7d3cc6d54847afe2dfe1c6b0ebff73560d0412947d0b14f0654ff4.jpg)

![](images/1f9023c73f6e9b960e58e3ebeb704f1fc728b207cc9a7fa43da8858a1c879ec2.jpg)

c. A vertical cut; the image is blurry in the $z$ dimension, where the voxels are very tall.

![](images/eee576d055ba2667c8bd5654c9e87bd50c8967bc2175ef1f8946de0f886395e2.jpg)
Figure 9. Slices along various planes through Data Set 1.
+ +![](images/854a65bec6f6cd453420df124ccb47fc5de31393c2e6014fc0b6dcccbe58c843.jpg) + +d. A maximally oblique cut, across the long diagonal of a voxel; note the jaggedness along the edge of the skull. + +![](images/f2506ed81e98e5a73593b0d8792b0c1dfd83878f361a9f837129dc2ccbe84fd0.jpg) +a. A cut along a data plane (center) and a cut halfway between data planes (right). + +![](images/92f032ad46fc5447900faf9a00226819c3b91665992e3716c0cc23523f5eb69a.jpg) +c. A cut along a grossly oblique plane produces two families of interference bands. +Figure 10. Various cross sections of 3-D checkerboards. + +![](images/f1185787b6e41aca77bdb627c1451ad169487dab95ba19c854e9c4aadc47a167.jpg) + +![](images/8f74720fbf5d9e6666bc572b64321625c169a462b03b536795802192f8b3db17.jpg) +b. A cut along a nearly horizontal but slightly oblique plane produces interference bands. +d. A cut along an oblique plane through a checkerboard with $2 \times 2 \times 2$ voxels. + +checked pattern more clearly than the $1 \times 1 \times 1$ at oblique angles (Figures 10cd), and real data are even better behaved. + +One other problematic situation is machine calibration. Suppose that the user has scanned a solid cube to align the machine and now wishes to know the exact angle at which that cube is oriented in the data array. The user could use our algorithm to align a cut with the top of the cube. However, the precision of the image degrades when the angle between the cross section and the cube is very small, especially if the cube is only slightly offset from the axes of the data array. The pixelated top of the cube produces a mild interference pattern, and the user would have to re-scan once or twice to align the scanner past sample resolution. In this case, an edge-fitting enhancement algorithm would be entirely appropriate. 
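The 50%-gray behavior of the checkerboard is easy to check in one dimension, since tricubic interpolation separates into three cubic interpolations. A quick check (Python; the function below is an ordinary four-point Lagrange cubic on equally spaced samples, our own illustration rather than the authors' code):

```python
def cubic(f, t):
    """Lagrange cubic through samples f at positions -1, 0, 1, 2,
    evaluated at t; the four weights always sum to 1."""
    fm1, f0, f1, f2 = f
    return (-t * (t - 1) * (t - 2) * fm1 / 6
            + (t + 1) * (t - 1) * (t - 2) * f0 / 2
            - (t + 1) * t * (t - 2) * f1 / 2
            + (t + 1) * t * (t - 1) * f2 / 6)

# Alternating black/white voxels, sampled midway between data points:
print(cubic([0, 255, 0, 255], 0.5))   # 127.5 -- an even 50% gray
```

Midway between alternating samples the interpolant lands exactly halfway between black and white, which is the uniform gray seen in the heavily interpolated regions of Figure 10.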

# Conclusion

Our algorithm's strengths in working with real MRI data are:

- robust and intuitive specification of the cross-section plane,
- preservation of large-scale image features,
- preservation of fine detail present in the source image,
- preservation of image proportions given knowledge of the voxel dimensions,
- preservation of features of diagnostic interest,
- conceptually useful coloring of three-dimensional features in the image, and
- real-time performance sufficient to allow interactive exploration of the data.

There is, of course, a good deal of room for improvement. Our current implementation is not as fast as it should be, nor as easy to use. There are circumstances where it would be useful to offer several interpolation options (e.g., nearest-neighbor for cross sections of line drawings), or to perform edge-fitting and sharpening on the interpolated image (e.g., for the calibration situation described above). It would also be nice to extend the algorithm to deal with unequally spaced slices; this would require a more general (and significantly slower) implementation of cubic interpolation. However, our algorithm serves its principal purpose very well, giving good results on a variety of real data.

# Acknowledgment

The authors wish to thank Alexa Pragman for her help in preparing the figures for publication.

# References

Ballinger, Ray. 1997. Gainesville VAMC MRI Teaching File. http://www.xray.ufl.edu/~rball/teach/mriteach.html.
———. 1998. The MRI Tutor. http://128.227.164.224/mritutor/index.html.
Hornak, Joseph. 1997. The Basics of MRI. http://www.cis.rit.edu/htbooks/mri/.
Johnson, Keith, and Alex Becker. 1997. The Whole Brain Atlas. http://www.med.harvard.edu/AANLIB/home.html.
Mahan, Steven L. 1996. Resolution enhancement. http://aurora.phys.utk.edu/~mahan/enhancement.html.
Makivic, Miloje. 1996. Bicubic interpolation. http://www.npac.syr.edu/projects/nasa/MILOJE/final/node36.html.
Mnuk, Michal. 1997.
Lagrange interpolation (in German). http://www.risc.uni-linz.ac.at/people/mmnuk/FHS/MTD/MAT2/Skriptum/K5node3.html.
Rodriguez, Paul. 1995. MRI Indications for the Referring Physician. http://www.gcnet.com/maven/aurora/mri/toc.html.

# MRI Slice Picturing

Ni Jiang

Chen Jun

Li Ling

Tsinghua University

Beijing, China

Advisor: Ye Jun

# Summary

We set up two coordinate systems, one in the object space and the other on the computer screen. We introduce six parameters to describe the slice plane, and we formulate the coordinate mapping from the screen to the object space.

We designed six alternative algorithms that use the given data to estimate the density at any location in space and produce a slice of a three-dimensional array. Some of the algorithms exploit global information and some are self-adaptive; all but one have advantages in certain circumstances.

We extended a well-known two-dimensional model of a human head to build a three-dimensional model of a head, consisting of 10 ellipsoids of different size, orientation, and density. We produced the data sets by sampling in the object space (the head model) at evenly spaced intervals; the dimension of the data set is $128 \times 128 \times 128$ .

We devised several test slices to test our model and algorithms. Some test slices have a complex shape, some are critically positioned, and some are disastrous for most algorithms. We also tried different sampling intervals to verify our ideas about the model.

Based on subjective and objective comparisons, we summarize the strengths and weaknesses of the algorithms. For common use, we suggest the gradient algorithm and our GNP-integrated algorithm. In most cases, both can render slices with both sharp and smooth edges well.

# Facts about MRI

MRI has several features relevant to the problem:

- High precision. The scanning precision of MRI is about $1 - 3\mathrm{mm}$ .
That is, MRI can easily distinguish features as small as $1 - 3\mathrm{mm}$ . Commonly used MRI slices are no larger than $25\mathrm{cm} \times 25\mathrm{cm}$ [Gao 1996].
- High contrast. One of the advantages of MRI is the high contrast of its images, which makes the boundaries of the organs sharp enough for diagnosis [Frommhold and Otto 1985; Gao 1996].
- Long scan time. An MRI scan takes several minutes. For example, a typical scan of a two-dimensional image $(128 \times 128 \times 256)$ with pulse repeat time $T_{R} = 1.5$ s needs about 6 min [Gao 1996]. The time required is still one of the main drawbacks of MRI. Thus, we cannot expect the given data set to be thorough enough to make slice pictures trivial (that might require too much scanning time), and our algorithms should not be too complex or time-consuming.
- Reconstruction algorithms. Two commonly used methods to reconstruct the three-dimensional information from the raw data produced by MRI are Projection Reconstruction (PR) and Fourier Transformation. They require that the data be sampled evenly through space, so we assume that.

# Assumptions, Coordinates, and Notations

# Assumptions

Based on the problem statement and the facts about MRI above, we make the following assumptions:

- The examined object fits in $256 \mathrm{~mm} \times 256 \mathrm{~mm} \times 256 \mathrm{~mm}$ , which is big enough in most cases. If a larger object is scanned, we can divide the data into several such cubes.
- The desired precision of pictured slices is $1\mathrm{mm}$ . We will picture the slices produced by our algorithms on the computer screen, using one pixel to represent an area of $1\mathrm{mm} \times 1\mathrm{mm}$ .
- The given data set is a three-dimensional array $A(i,j,k)$ sampled in the whole object space with evenly spaced intervals along the coordinate axes. Such intervals are about 2–4 mm, large enough that MRI can complete the scan in a reasonable time.
Later we discuss the case of the data not being evenly spaced.
- $A(i,j,k)$ takes an integer value from 0 through 255, indicating the water density, from high density to low density. On our screen, 0 is represented by black and 255 is represented by white.

- The examined object consists of several different components. We assume that the density does not change much within one component.
- The object is the body of some animal or of a human being. Since the organs and tissues in such a body are soft, we can imagine that the boundaries are smooth and sharp in most cases. Exceptions occur only at some kinds of bone, such as the backbone (sharp but not smooth), and at some diseased tissues.
- The unknown density of a location is affected by all the given data. However, the distance between points plays an important role in this problem. Locations far away (for instance, $50\mathrm{mm}$ ) from the unknown point are assumed to have little or no effect.

# Coordinate Systems

We set up two coordinate systems, one in the data (or object) space and one on the computer screen, as presented in Figures 1-2. The unit in both systems (one pixel, in the screen image) is $1\mathrm{mm}$ , for convenience. Since the object is $256\mathrm{mm} \times 256\mathrm{mm} \times 256\mathrm{mm}$ , the data space is just $0 \leq x, y, z < 256$ .

![](images/6af9a573dc1170cf4c5219d7dc118a7892f4ada80067d11250aecc5385d5712b.jpg)
Figure 1. The data space coordinate system.

![](images/321892387186e81cb4541ebc7e626dd5407ddf64a03eba109fd444784f8b14c7.jpg)
Figure 2. The screen image coordinate system. The origin $O$ is the left bottom corner of the screen image.

# Notation

"Density" indicates the water concentration in a small region of the scanned object at some location.
The phrase "unknown point" or "unknown location" means the point (or the location) where the density of the object is not given as known data, hence needs to be calculated. + +The symbols commonly used in this paper are: + +$A(i,j,k)$ The given three-dimensional data indicating the density of a location. In some contexts, $A$ also represents the location. + +$s_X, s_Y, s_Z$ The three sampling intervals along the Cartesian axes. Thus, $A(i,j,k)$ is the density of location $(i \cdot s_X, j \cdot s_Y, k \cdot s_Z)$ . + +$\alpha, \beta, \gamma, x_0, y_0, z_0$ The six parameters to define a slice plane. + +$D(x,y,z)$ The density of the object at location $(x,y,z)$ . + +# Analysis of the Problem + +For a plane slicing the object, we want to know the density of the object throughout the plane. If we can convert the coordinates of the points in the slice plane to the real 3-D coordinates in the object, and calculate the corresponding densities, the problem is solved. The first step is simple, with some knowledge of the space geometry. But how about the second step? + +# Can the Unknown Density Be Known? + +From the famous Nyquist sampling theorem, we know that to reconstruct the whole density information of the scanned object exactly, the sampling intervals must satisfy the inequality + +$$ +\max \left(s _ {X}, s _ {Y}, s _ {Z}\right) \leq \frac {1}{2 f _ {m}}, \tag {1} +$$ + +where $f_{m}$ is the upper limit of the spatial frequency of the density. In our problem, (1) would need to be satisfied if a slice is required to be pictured exactly; but we don't need to do that. + +On the one hand, the inequality could never be satisfied, since the $f_{m}$ in an object is always very large—infinity, in reality. No sampling intervals can satisfy such an inequality! On the other hand, to picture the slice we do not need to know exactly what the unknown is. Since the grayscale is from 0 to 255, an error of less than 1 grayscale unit is acceptable. 
In fact, a blur to some extent is always allowable and unavoidable. In this sense, we can know the unknown density.

# How to Know the Unknown Density?

Since we cannot know the unknown density exactly, we must estimate it. In designing an estimator, we weighed several trade-offs:

- Simplicity and Complexity. Our goal is to find an effective but simple algorithm to produce any slice of the object. We also believe that the real object is too complex to describe or estimate by only one kind of algorithm. So our motto is "If it works, it's good enough," and we tried to find several different algorithms to deal with the different aspects of the real object.
- Local and Global Information. Global information is alluring but very difficult to use. As human beings, we can easily locate a vessel or a bone in an MRI image and outline it, using our global impression (thus we can do reconstruction). But it is difficult for a computer to grasp a shape rather than a number. Current algorithms can outline an image, but the information they use is local (e.g., the difference between adjacent pixels) rather than global. So we base our main idea on local information but remain alert to global information. Our experiments show that appropriately using even a little global information brings great benefit.
- Static and Self-adaptive Algorithms. A static algorithm has several advantages: it is fast and simple (in most cases), it is often designed for some specific aspect of the application and may be effective there, and it is easily controlled and safe. As with global information, a self-adaptive algorithm is powerful but difficult to control.

# Description of Our Model

Our model consists of three parts:

- the given data,
- the description of the slice plane, and
- the algorithm to estimate the density of the object at any location, whether or not this location is included in the given data.

The description of the slice plane is used to convert the coordinates of a point on the screen to coordinates in space, while the density-estimating algorithm obtains the density of the point from the known data. Thus, the slice can be easily displayed on the screen.

We propose six different algorithms to estimate the density, discussed in detail in the next section. The given data are already described in the problem and the subsection on Assumptions, so here we treat the mapping from the screen to space.

# Slice Plane

In order to define a slice plane at any orientation and any location in space, we perform four steps to transform from the $XY$ -plane to any other plane in the space:

1. Put a plane, say $P$ , with its own coordinate system $ST$ (the same as the screen coordinate system), onto the $XY$ -plane, so that the two coordinate systems have exactly the same origin and orientation.
2. Rotate $P$ around its normal line (i.e., the $z$ -axis) by angle $\alpha$ to make the orientation of $ST$ differ from $XY$ .
3. Rotate the normal line of $P$ around the origin to a prescribed orientation. In Figure 3, this orientation is defined by angles $\beta$ and $\gamma$ .

![](images/fbafce98d49b9d096f8a12e9f9844383fd8e9d38c41cc8d36ecfc63bde33a178.jpg)
Figure 3. Rotate the normal line to a prescribed orientation.

4. Translate the plane $P$ , moving the origin of $P$ to some predefined point in the space, say $(x_0, y_0, z_0)$ .

Thus, using six parameters $(\alpha, \beta, \gamma, x_0, y_0, z_0)$ , we can define a plane anywhere, with a coordinate system the same as the screen system.

# Mapping from the Screen to the Space

Since the coordinate systems of the screen and of the slice plane are the same, we can carry screen coordinates onto the slice plane and then use the transformation in the last subsection to convert the slice-plane coordinates to space coordinates.
Suppose a pixel on the screen is at position $(s, t)$ and the corresponding point in space is at $(x, y, z)$. From the transformation, we get the mapping equation from the screen to space and thereby solve the first step of the problem:

$$
\begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} \cos \gamma & -\sin \gamma & 0 \\ \sin \gamma & \cos \gamma & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} \cos \beta & 0 & \sin \beta \\ 0 & 1 & 0 \\ -\sin \beta & 0 & \cos \beta \end{pmatrix} \begin{pmatrix} \cos \alpha & -\sin \alpha & 0 \\ \sin \alpha & \cos \alpha & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} s \\ t \\ 0 \end{pmatrix} + \begin{pmatrix} x_0 \\ y_0 \\ z_0 \end{pmatrix}. \tag{1}
$$

# Density-Estimating Algorithms

Consider a pixel on the screen and the corresponding point $U$ at location $(x, y, z)$ in the object space. The task of the density-estimating algorithm is to estimate the density at $U$, which we denote by $D(x, y, z)$.

We tried five basic types of density-estimating algorithms. Based on experimental results, we designed an all-around effective method, which we call GNP-integrated.

# Trilinear Interpolation

In general, linear interpolation can produce satisfactory results. In three-dimensional space, we use trilinear interpolation, which interpolates from the eight neighbors in the three coordinate directions.
That is,

$$
\begin{array}{l} D (x, y, z) = A (i, j, k) \cdot (1 - u) \cdot (1 - v) \cdot (1 - w) \\ + A (i + 1, j, k) \cdot u \cdot (1 - v) \cdot (1 - w) + A (i, j + 1, k) \cdot (1 - u) \cdot v \cdot (1 - w) \\ + A (i, j, k + 1) \cdot (1 - u) \cdot (1 - v) \cdot w + A (i + 1, j + 1, k) \cdot u \cdot v \cdot (1 - w) \\ + A (i + 1, j, k + 1) \cdot u \cdot (1 - v) \cdot w + A (i, j + 1, k + 1) \cdot (1 - u) \cdot v \cdot w \\ + A (i + 1, j + 1, k + 1) \cdot u \cdot v \cdot w, \tag {2} \\ \end{array}
$$

with

$$
i = \left\lfloor \frac{x}{s_X} \right\rfloor, \quad j = \left\lfloor \frac{y}{s_Y} \right\rfloor, \quad k = \left\lfloor \frac{z}{s_Z} \right\rfloor,
$$

$$
u = \frac{x}{s_X} - i, \quad v = \frac{y}{s_Y} - j, \quad w = \frac{z}{s_Z} - k,
$$

where $\lfloor x \rfloor$ is the largest integer no larger than $x$.

This method uses the density values of eight neighbors to resolve the density at $U$. It mitigates discontinuity at boundaries in nearly all cases. However, it tends to blur some sharp edges, because of its intrinsic low-pass filtering.

# Nearest-Neighbor

With the preservation of edge sharpness in mind, we tried the nearest-neighbor method, which assigns to $U$ the density of its nearest neighbor in space.

This method is fairly simple, and its computational load is very low. The results it produces are quite unstable, although it sometimes gives good results. Nevertheless, it partly preserves edge sharpness, and its power can be amplified if properly combined with other methods.

# Median

The idea comes from median filtering, well known in signal and image processing, which smooths a signal while protecting its sharp edges from great damage. In our algorithm, we assign to $U$ the median of the densities of $U$'s eight neighbors.
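For concreteness, these three basic estimators (trilinear interpolation (2), nearest-neighbor, and median) can be sketched in pure Python. Here `data` is assumed to map lattice indices $(i, j, k)$ to sampled densities, and the spacings default to a 2-mm interval; the names and data layout are illustrative, not from our actual program.

```python
from statistics import median

def trilinear(data, x, y, z, sx=2.0, sy=2.0, sz=2.0):
    """Trilinear interpolation, equation (2): blend the eight lattice
    neighbors of (x, y, z) with weights built from u, v, w."""
    i, j, k = int(x // sx), int(y // sy), int(z // sz)
    u, v, w = x / sx - i, y / sy - j, z / sz - k
    est = 0.0
    for di in (0, 1):
        for dj in (0, 1):
            for dk in (0, 1):
                weight = ((u if di else 1 - u) *
                          (v if dj else 1 - v) *
                          (w if dk else 1 - w))
                est += weight * data[(i + di, j + dj, k + dk)]
    return est

def nearest_neighbor(data, x, y, z, sx=2.0, sy=2.0, sz=2.0):
    """Assign the density of the nearest lattice sample."""
    return data[(round(x / sx), round(y / sy), round(z / sz))]

def median_estimate(data, x, y, z, sx=2.0, sy=2.0, sz=2.0):
    """Median of the densities of the eight lattice neighbors."""
    i, j, k = int(x // sx), int(y // sy), int(z // sz)
    vals = [data[(i + di, j + dj, k + dk)]
            for di in (0, 1) for dj in (0, 1) for dk in (0, 1)]
    return median(vals)
```

On a density field that is linear in the coordinates, trilinear interpolation reproduces the field exactly, which is an easy correctness check.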
The result of this algorithm, as expected, gives sharp edges but shows obvious feathering, which results in an unrealistic contour.

# Power-Control

Since we believe that the distance between points is very important, we can conceive that each point within a reasonable distance of $U$ has a "power" to control the density of $U$, forcing the density of $U$ to be similar to its own, and that this power decreases with distance. The overall result should be the average of the densities of those points, weighted by their power.

We define the power of $A(i,j,k)$ at distance $d$ as

$$
p = \frac {1}{1 + e ^ {5 (d / d _ {0} - 1)}},
$$

where $d = \sqrt{(x - i \cdot s_X)^2 + (y - j \cdot s_Y)^2 + (z - k \cdot s_Z)^2}$ and $d_0$ is a distance threshold (when $d = d_0$, the power is $\frac{1}{2}$). Then the density of $U$ is estimated as

$$
D (x, y, z) = \frac {\sum_ {d _ {\xi} \leq 2 d _ {0}} p _ {\xi} \cdot A \left(i _ {\xi} , j _ {\xi} , k _ {\xi}\right)}{\sum_ {d _ {\xi} \leq 2 d _ {0}} p _ {\xi}}, \tag {3}
$$

with the summation over all known points within distance $2d_{0}$. We use $d_{0} = 1$ mm when the sampling interval is $2$ mm.

Though (3) has some similarity to trilinear interpolation (2), the nonlinearity in the definition of power makes the edges produced by this algorithm smoother, but also more blurred.

Here we could adopt another type of power, called the optimal interpolation function:

$$
p = \operatorname{sinc}(\pi d) = \frac{\sin (\pi d)}{\pi d}, \quad \text{where } d = \sqrt{\left(\frac{x}{s_X} - i\right)^2 + \left(\frac{y}{s_Y} - j\right)^2 + \left(\frac{z}{s_Z} - k\right)^2}.
$$

This kind of power is famous because it is an ideal low-pass filter in the frequency domain, and it is what reconstructs the original signal (from frequency information) in the sampling theorem.
But since $f_{m}$ is very large in this problem, we cannot expect such a power to do a good job.

# Gradient

The methods above are all based on the effect of one point on another. If instead we consider the effect of a pair of points on one unknown point, we arrive at the gradient method.

![](images/38ee89b1bc4220e3b4f2c51221085d841e77bdb619fa76e683fd305407a17739.jpg)
Figure 4. Gradient in a point-pair.

Figure 4 shows two given data values $A_{1}(i_{1},j_{1},k_{1})$ and $A_{2}(i_{2},j_{2},k_{2})$ and the unknown point $U$. The distance between $A_{1}$ and $A_{2}$ is $d$, the projection of $\overrightarrow{A_1U}$ on $\overrightarrow{A_1A_2}$ is $d_h$ (negative when the angle between $\overrightarrow{A_1U}$ and $\overrightarrow{A_1A_2}$ is obtuse), and $d_v$ is the distance from $U$ to the line $A_1A_2$. The density at $U$, if estimated only from the gradient from $A_{1}$ to $A_{2}$, is

$$
D (x, y, z) = A _ {1} + \frac {d _ {h}}{d} (A _ {2} - A _ {1}).
$$

When the other data-pairs in the neighborhood of $U$ are considered as well, the density $D$ becomes a weighted average of all these effects, with the weight (similar to the "power") defined as

$$
p = \begin{cases} e^{-d_v}, & \text{when } d_h \geq 0; \\ \frac{1}{4} e^{-d_v}, & \text{when } d_h < 0. \end{cases}
$$

This algorithm exploits not only the density information around the unknown point $U$ but also the local tendency of the density, which makes it self-adaptive to some extent. Further, we can add some global information. For example, in our implementation, when $A_{1}$ and $A_{2}$ are close enough in density $(|A_{1} - A_{2}| < 20)$, we multiply the weight $p$ by 3; in such a case, $A_{1}$ and $A_{2}$ are deemed to be in the same component, which makes it very likely that $U$ is also in that component.
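The contribution of a single pair can be sketched as follows (pure Python; the function and variable names are illustrative, and the density-difference adjustments follow the heuristics described in the text). The perpendicular distance $d_v$ is recovered from $|\overrightarrow{A_1U}|$ and the signed projection $d_h$ by the Pythagorean relation.

```python
import math

def gradient_pair_estimate(p1, a1, p2, a2, u):
    """Estimate of the density at point u from one data pair:
    p1, p2 are lattice-point positions with densities a1, a2.
    Returns (estimate, weight); the full method averages the
    estimates from all nearby pairs, weighted by these weights."""
    d = math.dist(p1, p2)
    v1u = [u[c] - p1[c] for c in range(3)]
    unit12 = [(p2[c] - p1[c]) / d for c in range(3)]
    # Signed projection of A1->U on A1->A2, and distance to the line.
    d_h = sum(a * b for a, b in zip(v1u, unit12))
    d_v = math.sqrt(max(sum(a * a for a in v1u) - d_h * d_h, 0.0))
    # Linear extrapolation along the pair's gradient.
    est = a1 + (d_h / d) * (a2 - a1)
    # Weight: e^{-d_v}, quartered behind A1 (d_h < 0).
    p = math.exp(-d_v) * (1.0 if d_h >= 0 else 0.25)
    # Global hints: boost pairs that look like one component,
    # damp pairs that look like different components.
    if abs(a1 - a2) < 20:
        p *= 3
    elif abs(a1 - a2) > 80:
        p *= 0.7
    return est, p
```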
Similarly, when $|A_{1} - A_{2}| > 80$, we multiply the weight $p$ by 0.7, since $A_{1}$ and $A_{2}$ may be in different components.

# GNP-Integrated

From the experimental results (see the next section), we found that the gradient and power-control methods are good at producing smooth but slightly blurred edges, while the nearest-neighbor method always gives a high-contrast image with rough edges. In an attempt to combine their advantages, we integrated these three methods into one algorithm that we call GNP-integrated. It can be described in brief as the combination of the gradient, nearest-neighbor, and power-control methods in the proportions $3:2:1$.

# Test of the Algorithms

# Data Sets: The Head Model

Suitable data sets are necessary for testing and demonstrating the algorithms, as well as for comparing different algorithms. Real MRI data would be best. However, besides the inconvenience of obtaining such data, an annoying problem is that we would have great difficulty comparing the pictured slices with the actual slices, since we cannot have the latter!

Motivated by the widely accepted two-dimensional Shepp-Logan (S-L) head model [Gao 1996], in which ten ellipses, differing in location, shape, orientation, and intensity, constitute an object representing a head section, we designed a 3-D head model made up of ten ellipsoids differing in location, shape, orientation, and density. The empty space inside the head model is filled with an ambient value that differs from that of the background outside the model.

We adopt ellipsoids for our data model because of their simplicity and because combinations of varied ellipsoids can imitate many real objects, such as a brain or a stomach. We designed three types of ellipsoids with different density distributions:

- Type 1: Uniform density.

- Type 2: The density changes linearly from the center to the surface.
- Type 3: The same as Type 2 but with additive random noise of a specified standard deviation. (In our experiments, we do not analyze this type, since noise filtering is beyond the scope of this paper.)

With the sampling intervals specified, data sets can be produced easily by determining in which ellipsoid each sample point lies. Such data sets are large; for example, when the intervals are all $2\mathrm{mm}$, the data set contains $128 \times 128 \times 128 \approx 2 \times 10^6$ samples, about 2 MB at one byte per sample.

In addition, with our head model we can also compute the actual slice. The computation is similar to the process of producing the data sets.

# Experiments and Results

An important issue is how to evaluate the output of the different algorithms. The two main objectives of this problem, maintaining the sharpness of the edges and the smoothness of the contours, are difficult to measure numerically, so we made comparisons by visual inspection (subjective as that is).

At the same time, although the RMS (root-mean-square) error cannot comprehensively and rationally reflect the quality of a pictured slice, it is still helpful in assessing one. So we take minimization of the RMS error as our third goal. (This makes sense only for our simulated data, since the actual slice is unknowable in the real world.)

We must also take the computational load into consideration, since the data set is comparatively large.

We did a number of comparisons, from which representative slices are presented in Figures 5-8. We examine each in detail and then draw some conclusions. (In all cases, $s_{X} = s_{Y} = s_{Z} = 2$.)

In Figure 5, the slice is in the middle of the scanned object and parallel to the $XZ$-plane. In this case, the slice traverses all ten ellipsoids, so the overall performance of each method is easy to evaluate. In Figure 6, the slice is oblique, and all algorithms work well except for power-control with the sinc function.
In Figure 7, the slice of Figure 6 is translated by just a tiny distance, yet the performance of some algorithms falls dramatically. In Figure 8, the slice plane is at an odd angle and in a critical position, which gives our algorithms a chance to show how they behave in an awful situation.

# Assessment of the Algorithms

Except for the power-control method with the sinc function, which is clearly unsuited to this problem, each method has advantages and disadvantages.

# Trilinear

The trilinear method works well in common cases. It usually has a small RMS error and takes a short time. But it has a tendency to blur the picture, and it disappointed us in the awful situation (see Figure 8b). When the contrast between different components in the scanned object is low, this method is not recommended.

# Nearest-Neighbor and Median

Both the nearest-neighbor method and the median method preserve sharp edges and take the least time. But they lose the smoothness of the contours and cannot discriminate small objects (see Figures 6c-d). A small translation of the slice plane also causes them to produce many more zigzag contours (see Figures 7c-d). Consequently, they have large RMS errors. In spite of these weaknesses, when the data set is very large and time is at a premium, or when the zigzag has little bad effect on the result, these methods are desirable.

![](images/ffe4d7c1644e3dbe978c8424170376c9af85d27085d53287ad3105882508520a.jpg)
a. Actual slice.

![](images/c43cef35af64bad94eb17b9f453525d8fd9f0674073cf57e17c9cf974bd54fa8.jpg)
b. Trilinear interpolation (14.5).

![](images/72c18f146a3fdde36b60024ad8b60d7639e596da815d5b08e7e9456209d3e41f.jpg)
c. Nearest-neighbor (20.0).

![](images/0d7b65706d667780c74cf821a799e09e0632e248fcc5cfbfa69450112f628781.jpg)
d. Median (23.9).

![](images/673b09fc5184331d77588844ca832f35615b5a49826179eda07a85e92f1bc4fc.jpg)
e. Power-control (15.3).
+ +![](images/a0621bc3c83906d4ef675c3e567b98e82df801b57cd59ed3a5b3cd7e5be50ac3.jpg) +f. Power-control (sinc) (29.3). + +![](images/ddba92996a508634b88c6e64483ecd26ae4890e73a7c3d95d5d4ad22839186fe.jpg) +g. Gradient (12.9). + +![](images/7b3ff4d103581efb7273c85439987b0ad8e5d0416c141a5f615e376f4d961528.jpg) +h. GNP-integrated (14.4). +Figure 5. A slice in the middle of the object and parallel to the $XZ$ -plane, with parameters $x_0 = 0$ , $y_0 = 128$ , $z_0 = 0$ , $\alpha = 0$ , $\beta = 90^\circ$ , $\gamma = 90^\circ$ . The number following each algorithm name is the RMS error. + +![](images/19d6de65d67a638eeef993eedbe259697cbd6ed7944a1ae9df724e8dca66c5c5.jpg) +a. Actual slice. + +![](images/a81a1159e2410c0c84163d5aa672b8bc56bc5992fe658b870490dc4d70120a49.jpg) +b. Trilinear interpolation (12.3). + +![](images/aad5e253845b09e57cc10647ab266f38e8e44c0be7c92f10020b41f8a116e6a6.jpg) +c. Nearest-neighbor (15.5). + +![](images/0479a5432749fa1c699992aba4b5bc8843369b4c86816f0ade760aba1a639629.jpg) +d. Median (18.7). + +![](images/399a458fd6356cee45f080c9ad63b1aeee008250057a2a522a8b77f65d517569.jpg) +e. Power-control (13.7). + +![](images/2d886dc8ae12bcadf5db48712549f8e8a27ddbed58438a7fa46f02ff391be4bc.jpg) +f. Power-control (sinc) (55.8). + +![](images/5893d832cd30aa8a88585409e5b3c6cc82e1e765c393556328c68859717c9c2d.jpg) +g. Gradient (11.3). + +![](images/64ed72d940ec321b5585b330a58b81dcbbaa6625d92deb3570009d426b0c2f78.jpg) +h. GNP-integrated (12.0). +Figure 6. An oblique slice, with parameters $x_0 = 0$ , $y_0 = 128$ , $z_0 = 0$ , $\alpha = 0$ , $\beta = 45^\circ$ , $\gamma = 90^\circ$ . The number following each algorithm name is the RMS error. The black area on the left of each slice is outside the scanned object. + +![](images/590f1c03d6e3437ae5c8c111a80606bbef3d75d1ca3730f599e9e58c9b3810d7.jpg) +a. Actual slice. + +![](images/1e716708caed8b800adb30080e182fff9745fa09aafe5bafcefe15a95116774d.jpg) +b. Trilinear interpolation (13.0). 
+ +![](images/91951f70d17ce861431cc2f37d316d40d4896bcb4a5067eb86967c7f2c818ce0.jpg) +c. Nearest-neighbor (18.8). + +![](images/2bf899247125d2281e13e204393a2054c128c6a5089fe64ace75caac58b47bfa.jpg) +d. Median (20.6). + +![](images/ed48f19488f90a0d160294d759d9db2c9c32a15d246194e381ccab1690a0c633.jpg) +e. Power-control (13.8). + +![](images/0a52f73eda019761b1615d781c860b58316bbea97bdb2965b1ba94395ed9aa7c.jpg) +f. Power-control (sinc) (55.0). + +![](images/34fff885097ad811cda865a70fd7c865c444ebe39c95acd8683f7e8ba8813e50.jpg) +g. Gradient (12.3). + +![](images/911dea67a16168901a4c2d72d25d1d1de124c2e488f82659e301d88adf9c8bbb.jpg) +h. GNP-integrated (13.9). +Figure 7. The oblique slice of Figure 6 translated a tiny distance (one unit in the $y$ -direction), with parameters $x_0 = 0, y_0 = 129, z_0 = 0, \alpha = 0, \beta = 45^\circ, \gamma = 90^\circ$ . The number following each algorithm name is the RMS error. + +![](images/5ed6623651ec432c0b78263b09716c90e29ca9bdac160d81506cc775f57c23a0.jpg) +a. Actual slice. + +![](images/22a38d2f78a9c2b29696aa13e371ae8fab92111fb2f0c13e2ad0440c9c71ca31.jpg) +b. Trilinear interpolation (12.3). + +![](images/21e8b323387115677e850c03786be6c601386126d904a65d7bd77053cbc31e5b.jpg) +c. Nearest-neighbor (17.5). + +![](images/12041ad8d8384029c1b7a2b7b4e905c83bb2a9b28ecb8460ffaa1eee38c9df3e.jpg) +d. Median (18.0). + +![](images/c479a17d16fb4035b2482d97863f1f1b1065894017153440cb2ce6f912193e3a.jpg) +e. Power-control (13.8). + +![](images/416d3deaf3e0ebc4e4b295ee82b892ba7213dbbe04979a5361a72c02c877ae69.jpg) +f. Power-control (sinc) (68.0). + +![](images/96269f0bd031eb02b25486dcc396812f38ee1361c439bd038100a979da28bcc1.jpg) +g. Gradient (12.2). + +![](images/1ed45a0e93919ac0e1d7117fe9f2a662c8701a8c341e58561bea972ff2dda5cc.jpg) +h. GNP-integrated (13.4). +Figure 8. An oblique slice at an odd angle and in a critical position, with parameters $x_0 = 0$ , $y_0 = 126$ , $z_0 = 0$ , $\alpha = 0$ , $\beta = 70^\circ$ , $\gamma = 60^\circ$ . 
The number following each algorithm name is the RMS error.

# Gradient

The gradient method has the advantage of the smallest RMS error in all cases. More important, the slice as pictured by this method has rather satisfactory smoothness and sharpness, as is evident in all of the figures.

# GNP-Integrated

The GNP-integrated method has an RMS error a little larger than that of the trilinear method; however, it excels over every other algorithm when sharpness and smoothness are taken into account.

# Conclusion

The gradient and GNP-integrated algorithms are the most capable if runtime (10-14 s on a Pentium 166 for our implementation) is not a serious consideration (the other algorithms take 2-3 s). They are especially powerful in awful situations where some critical oblique slice is desired.

# What Happens When the Interval Is Too Large?

To verify our discussion of the sampling intervals (see the section Can the Unknown Density Be Known?), we also tested the results with $s_X = s_Y = s_Z = 4$. As expected, the quality of the produced slices deteriorated. For example, some connected thin boundaries in Figure 9 are broken because of the insufficiency of the data.

![](images/67ba94afb11d309d58d58d6c7ef8c8af0ce0eb2f6018da67434b571a37d56bdc.jpg)
Figure 9. The slice of Figure 5a, with sampling interval $4\mathrm{mm}$, as rendered by the gradient method; compare with Figures 5a and 5g.

If a data set is not sampled at evenly spaced intervals, or if the data are too scattered, the user should first use simple interpolation to construct a data set with evenly spaced sampling intervals.

# Strengths and Weaknesses

- We present several good algorithms from which the user can select to fit different situations.
- We present a clear assessment of the different algorithms, based on experimentation with simulated data for a head.
- We implemented all of the algorithms in a Windows 95 user-oriented computer simulation with easy input, suitable for repeated experimental research.
- We sought objective measurements of sharpness and smoothness, but time did not permit us to develop them.

# References

Frommhold, H., and R.Ch. Otto. 1985. New Methods of Medical Imaging and Their Application (German). 1988. Chinese translation by Wang Zhen and Gu Ying. Beijing, China: Medical Technology Press.

Gao, S.K. 1996. Imaging System in Medicine. Beijing, China: Dept. of Electrical Engineering, Tsinghua University.

# Judge's Commentary: The Outstanding Scanner Papers

William P. Fox

Dept. of Mathematics

Francis Marion University

Florence, SC 29501

wfox@fmarion.edu

Each of the participating schools is to be commended for its fine effort. The judges did not see a wide range of mathematical modeling approaches among the participants' solutions. Most teams recognized this problem as an image-processing problem.

According to the problem statement, the current family of MRI machines slices a three-dimensional scanned image vertically or horizontally. One component of the problem required teams to obtain an oblique slice. Teams used one of three basic methods to obtain their oblique slice:

- creating a plane, $Ax + By + Cz = D$, and rotating it using a standard matrix transformation (the method seen most often);
- selecting two points in three-space and defining a plane between them; or
- selecting one point and two angles to define the plane.

Teams realized that a critical element was mapping the coordinates of their oblique plane through their three-dimensional data set in order to obtain grayscale values (0-255) for the elements in the plane. The three-dimensional data set was indexed by three integers, while the points in the oblique plane had real coordinates. Methods had to be developed to interpolate the grayscale values for all the points in the oblique plane.
Methods chosen by the teams included:

- a nearest-neighbor algorithm, using eight or more points;
- a weighted-point algorithm;
- splines (linear through cubic); and
- Lagrangian polynomials.

The UMAP Journal 19 (3) (1998) 273-275. ©Copyright 1998 by COMAP, Inc. All rights reserved. Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice. Abstracting with credit is permitted, but copyrights for components of this work owned by others than COMAP must be honored. To copy otherwise, to republish, to post on servers, or to redistribute to lists requires prior permission from COMAP.

Teams usually tried more than one method to obtain the grayscale values. Comparisons of methodologies were generally sketchy and lacked analysis. Teams that critically compared and analyzed their methodology and results, and reached valid conclusions, impressed the judges.

The problem statement required teams to design and test an algorithm to produce sections of three-dimensional arrays made by planes in any orientation in space, preserving as closely as possible the original grayscale values. The use of grayscale was a distinguishing characteristic. Some teams used color to enhance their presentation; this was acceptable provided that the grayscale was not totally removed. Several teams suggested color because their grayscale resolution could not detect certain diagnostic elements; this was viewed as a fatal flaw, since the problem statement required the use of grayscale.

The judges felt that, to distinguish teams better, they would focus more closely on the MUST and SHOULD requirements of the problem statement.

- The team's algorithm MUST produce a picture of the slice of the three-dimensional array by a plane in space.
This became one critical element for teams to move beyond the Successful Participant category. Judges wanted to see a picture, not a matrix portrayal. Pictures were closely scrutinized to see whether they appeared to be oblique slices.
- The teams SHOULD:

- Design data sets to test and demonstrate their algorithm.
- Produce data sets that reflect conditions likely to be of diagnostic value.
- Characterize data sets that limit the effectiveness of their algorithm.

Thus, judges looked for a good description of the data sets chosen and a description of the elements of diagnostic value. Verbal descriptions stating that teams were looking for tumors or anomalies in body parts were acceptable. Some teams created spheres inside cubes as their representative data sets; provided they put something of diagnostic value inside their larger 3-D elements, such data sets were still acceptable.

The characterization of data sets that limited the effectiveness of the team's algorithm was the most often avoided SHOULD requirement. The judges accepted a verbal description of any data-set content that limited effectiveness, and they used this requirement to separate the top-quality papers.

Another important element not uniformly accomplished was some kind of error analysis. Very few teams even checked the accuracy of their interpolated values in the plane against the corresponding integer points in their three-space data set. The judges praised those teams that did. Almost every team referred to its pictures (its "outputs") to explain or attempt to show accuracy, with "blurry versus sharp" edges as the only basis for analysis.

Style and clarity of presentation were viewed as another critical element. Teams' organization and ability to explain their methodologies separated participants. Good organization and a solid layout helped distinguish teams.

The judges who evaluated this problem were impressed by the quality and completeness of the solutions presented.
We were amazed at how much work was accomplished during that weekend.

# About the Author

William P. Fox is Chairman of the Dept. of Mathematics and Professor of Mathematics at Francis Marion University. He received his M.S. in operations research from the Naval Postgraduate School in 1982 and earned his Ph.D. from Clemson University in 1990. He has served as a judge and as the associate contest director of the MCM. Bill will be the contest director for the new High School Mathematical Contest in Modeling under a grant through COMAP.

# Proposer's Commentary: The Outstanding Scanner Papers

Yves Nievergelt

Department of Mathematics, MS 32

Eastern Washington University

526 5th Street

Cheney, WA 99004-2431

yves.nievergelt@mail.ewu.edu

Once again, the problem came from the laboratory of Dr. Mark F. Dubach, who is studying the effects of intracerebral drug injections on monkeys with brain diseases at the University of Washington's Regional Primate Research Center in Seattle, WA.

This year, the striking novelty in the winning solutions of this purely mathematical modeling problem is the student teams' mastery of several electronic tools, which they used very adeptly along with mathematics.

- The first such electronic tool is the World Wide Web, which teams used to various degrees to find general medical information about Magnetic Resonance Imaging, real and simulated three-dimensional data sets for the human brain, and mathematical algorithms for two-dimensional interpolation. Two teams, however, found all the information they needed in printed form and then adroitly generated their own test data.
- The second electronic tool lies in computer graphics, which all teams employed efficiently to communicate their results.
For this problem, as one team noted, there does not seem to exist any numerical estimate of performance—such as a root-mean-square or any other norm—that can substitute for the final visual medical diagnosis; hence, graphics may remain the best way to compare algorithms to reality.
- The third electronic tool consists of computer programming, which the teams used for the change of coordinates, in effect an isometric parametrization of a plane in space, and for three-dimensional interpolation.
- The fourth electronic tool, used appropriately by all teams, is the preparation of a final document containing prose, mathematical formulae, and graphics.

All these tools helped, of course, with the essential part of the problem, namely, the mathematics. Within mathematics, the teams demonstrated a good command of both concepts and details. As a first example, one crucial place for concepts is at the start, where all teams realized that the practical problem could be cast as a mathematical problem in three-dimensional interpolation. As a second example, one place where detail became important is in the generalization from one- or two-dimensional to three-dimensional interpolation. While one team (Tsinghua University) already knew the result, other teams (Eastern Oregon University, Harvey Mudd College) offered excellent explanations and proofs of their mathematical generalizations.

Finally, all teams demonstrated an efficient use of their time in balancing time devoted to searches against time devoted to in-house production of such items as data and algorithms. Such a balancing act between finding the wheel and reinventing it can be critical in practice to delivering a working computer program on time. For example, none of the teams appears to have used a three-dimensional interpolation computer program from the World Wide Web, perhaps because it is not obvious where to get one.
Indeed, a search of Netlib at http://netlib2.cs.utk.edu for "three-dimensional interpolation" shows such one- and two-dimensional routines as toms/474 (bicubic interpolation) but does not reveal any specifically three-dimensional routines. Nevertheless, such routines exist, but finding them and using them may demand more time than available. For instance, there is a multidimensional (with an unlimited number of dimensions) interpolation routine using nonuniform rational B-splines (NURBS) at http://dtnet33-199.dt.navy.mil/dtnurbs/about.htm. + +# About the Author + +Yves Nievergelt graduated in mathematics from the École Polytechnique Fédérale de Lausanne (Switzerland) in 1976, with concentrations in functional and numerical analysis of PDEs. He obtained a Ph.D. from the University of Washington in 1984, with a dissertation in several complex variables under the guidance of James R. King. He now teaches complex and numerical analysis at Eastern Washington University. + +Prof. Nievergelt is an associate editor of The UMAP Journal. He is the author of many UMAP Modules, a bibliography of case studies of applications of lower-division mathematics (The UMAP Journal 6 (2) (1985): 37-56), and Mathematics in Business Administration (Irwin, 1989). + +# Alternatives to the Grade Point Average for Ranking Students + +Jeffrey A. Mermin + +W. Garrett Mitchener + +John A. Thacker + +Duke University + +Durham, NC 27708-0320 + +wgm2@acpub.duke.edu + +Advisor: Greg Lawler + +# Introduction + +The customary ranking of students by grade point average (GPA) encourages students to take easy courses, thereby contributing to grade inflation. Furthermore, many ties occur, especially when most grades are high. We consider several alternatives to the plain GPA ranking that attempt to eliminate these problems while ranking students sensibly. Each is based on computing a revised GPA, called an ability score, for each student. 
We evaluate these alternative methods within the context of the fictitious ABC College, where grades are inflated to the extent that the average grade is A-.

- The standardized GPA replaces each grade by the number of standard deviations above or below the course mean. Students are then ordered by the average of their revised grades.
- The iterated adjusted GPA compares the average grade given in a course to the average GPA of the students taking it, thereby estimating how difficult the course is. It repeatedly adjusts the grades until each course's average grade equals the average GPA of its students, and it uses the corrected GPA to determine rank.
- The least-squares method assumes that the difference between two students' grades in a course equals the difference between their ability scores. It sets up a large system of linear equations, with an optional handicap for courses taken outside a student's major, and solves for the ability scores with a least-squares algorithm.

An acceptable ranking method must reward students for scoring well, while taking into account the relative difficulties of their courses. It must clearly distinguish the top $10\%$ of students. Preferably, the method should make allowances for the fact that students often earn lower grades in courses outside their majors and should not discourage them from taking such courses.

We used a small simulated student body to explore how the different methods work and to test the effects of changing a single grade. The least-squares method gave the most intuitive and stable results, followed by the iterated adjusted GPA, the standardized GPA, and finally the plain GPA. Under the least-squares and iterated adjusted methods, when a certain student's grade was changed in one course, that student and other students in that course changed position, but most of the other students moved very little.
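As an illustration of the first method, the standardized GPA can be sketched in a few lines of Python on a toy transcript table (the data layout and all names here are our own illustrative assumptions, not part of the method's specification):

```python
from statistics import mean, pstdev

def standardized_gpa(transcripts):
    """Standardized GPA: replace each grade by its number of standard
    deviations above or below the course mean, then average per student.
    `transcripts` maps student -> {course: grade points}. Courses in
    which every student receives the same grade contribute 0."""
    # Collect each course's grade list across all students.
    by_course = {}
    for grades in transcripts.values():
        for course, g in grades.items():
            by_course.setdefault(course, []).append(g)
    stats = {c: (mean(gs), pstdev(gs)) for c, gs in by_course.items()}
    # Average each student's z-scores.
    scores = {}
    for student, grades in transcripts.items():
        zs = [(g - stats[c][0]) / stats[c][1] if stats[c][1] > 0 else 0.0
              for c, g in grades.items()]
        scores[student] = mean(zs)
    return scores
```

Ties in the plain GPA tend to disappear under this scheme, because courses with different grade distributions contribute different z-scores.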
We used a larger simulated student body, generated by a computer program, to compare the iterated adjusted and standardized algorithms. They agree on most of the students in the top decile, around $89\%$ if plus and minus grades are included. They do not agree well with the plain GPA ranking, owing to massive ties in the latter.

All four methods are more reliable when plus and minus grades are included, since a great deal of information is lost if only letter grades are given.

We recommend the least-squares method, since it is not very sensitive to small changes in grades and yields intuitive results. It can also be adapted to encourage well-roundedness of students, if the college chooses.

However, if there are more than about 6,000 students, the least-squares method can be prohibitively difficult to compute. In that case, we recommend the iterated adjusted GPA, which is easier to calculate and is the best of the remaining methods.

We recommend against the standardized GPA, because it does not properly correct for course difficulty, makes assumptions that are inappropriate for small or specialized courses, and produces counterintuitive results. We also recommend against the plain GPA, because it assumes that all courses are graded on the same scale and results in too many ties when grades are inflated.

To avoid confusion, we use the following terminology: A class is a group of students who all graduate at the same time, for example, the class of 1999. A course is a group of students being instructed by a professor, who assigns a grade to each student.

# Assumptions and Hypotheses

- It is possible to assign a single number, or "ability score" (this will be the revised GPA), to each student, which indicates the student's relative scholastic ability and, in particular, the student's worthiness for the scholarship. In other words, we can rank students.
+- The rank should be transitive; that is, if $X$ is ranked higher than $Y$ , and $Y$ is ranked higher than $Z$ , then $X$ should be ranked higher than $Z$ . We can + +therefore completely order students by rank. + +- The performances of an individual student in all courses are positively correlated, since: + +- There is a degree of general aptitude corresponding to the ability score that every student possesses. +- All instructors, while their grade averages may differ, rank students within their courses according to similar criteria. + +- While there may be a difference between grades in courses that reflects the student's aptitude for the particular subjects, this has only a small effect, because: + +- Students select electives in a manner highly influenced by their skill at the subjects available, that is, students tend to select courses at which they are most talented. +- All students should major in an area of expertise, so that they are most talented at courses within or closely related to their majors. +- The college may require courses that reflect its emphasis; even if the required courses could be considered "unfair" because they are weighted towards one subject (e.g., writing), that is the college's choice and highly ranked students must do well in such required courses. + +- Not all courses have the same difficulty. That is, it is easier to earn a high grade in some courses than in others. +- The correspondence of grades to grade points is as follows: $A = 4.0$ , $B = 3.0$ , $C = 2.0$ , $D = 1.0$ , $F = 0.0$ . A plus following a grade raises the grade point by one-third, while a minus lowers it by the same amount (i.e., $A - \approx 3.7$ , while $C + \approx 2.3$ ). +- Students take a fixed courseload for each semester for eight semesters. +- The average grade given at ABC College is A-. Thus we assume that the average GPA of students is at least 3.5, the smallest number that rounds to an A-. 
- In general, $X$ should be ranked ahead of $Y$ (we write $X > Y$ ) if:

- X has better grades than Y, and
- X takes a more challenging courseload than Y, and
- X has a more well-rounded workload (we recognize that this point is debatable).

# Analysis of Problem and Possible Models

# The Problem with Plain GPA Ranking

The traditional method of ranking students, commonly known as the grade-point average, or GPA, consists of taking the mean of the grade points that a student earns in each course and then comparing these values to determine the student's class rank.

The immediate problem with the plain GPA ranking is that it does not sufficiently distinguish between students. When the average grade is an A-, all above-average students within any class receive the same grade, A. Thus, with only four to six classes per semester, fully one sixth of the student body can be expected to earn a 4.0 or higher GPA. $^{1}$ This makes it all but impossible to distinguish between the first and second deciles with anything resembling reliability. Furthermore, any high-ranking student earning a below-average grade, for any reason, is brutally punished, dropping to the bottom of the second decile, if not farther. This is a result of the extremely high average grade; if the average grade were lower, there would be a margin for error for top students.

Unfortunately, the plain GPA exacerbates its own problems by encouraging the grade inflation that makes it so useless. Since the plain GPA does not correct for course difficulty, students may seek out courses in which it is easy to get a good grade. Faced with the prospect of declining enrollment and poor student evaluations, instructors who grade strictly may feel pressure to relax their grading standards. Instructors who grade easily may be rewarded with high enrollment and excellent evaluations, potentially leading to promotion.
The entire process may create a strong push towards grade inflation, since the plain GPA punishes both the student taking a difficult course and the instructor teaching it. Any system intended to replace the plain GPA should address this problem, so that grade inflation will be arrested and hopefully reversed.

Another potential concern is that the plain GPA encourages specialization by students. Since students tend to perform better in courses related to their majors, the GPA rewards students who take as few courses outside their "comfort zone" as possible and punishes students who attempt to expand their horizons. We note, however, that individual colleges may or may not regard this as a problem; the relative values of specialization and well-roundedness are open to debate.

# Three Possible Solutions

Several potential alternatives to GPA ranking directly compare grades within each course. Under such a system, the following considerations come into play:

- It is not possible to compare students just to others in their own class. Students often take courses in which all other students belong to another class.
- We have to compute rankings separately each semester, because the pool of students changes due to graduation and matriculation.
- It is not possible to take into account independent studies, because there is nobody to compare to.
- It is not possible to take into account pass/fail courses, because they do not assign relative grades.

We recognize three potential solutions to this problem. The following sections describe them in more detail.

- For the standardized GPA, each student is given a revised GPA based on the position of the student's grade in the distribution of grades for each course.
- The iterated adjusted GPA attempts to correct for the varying difficulties of courses.
In theory, every grade given to a student should be approximately equal to the student's GPA, so that the average grade given in a course should be about equal to the average GPA of students in that course. This scheme repeatedly adjusts all the grade points in each course until the average grade in every course equals the average GPA of the enrolled students. +- The least squares method assumes that, other things being equal, the difference between two students' grades will be equal to the difference in their ability scores. It attempts to find these ability scores by solving the system of equations generated by each course (for example, if student X gets an A but student Y gets a B, then $X - Y = 4.0 - 3.0 = 1.0$ ). Since in any nontrivial population this system has no solution, methods of least-squares approximation are used to approximate these values. The students are then ranked according to ability score. + +# Standardized GPA + +# How It Works + +The standardized GPA is perhaps the simplest method and one most in keeping with the dean's suggestion. In each course, we determine how many standard deviations above or below the mean each student's grade is. This standard score becomes the student's "grade" for the class, the student's standard scores are averaged for a standardized GPA, and students are ranked by standardized GPA. This is a quantified version of the dean's suggestion to rank each student as average, below average, or above average in each class, and then combine the information for a ranking. + +# Strengths + +- The standardized GPA is not much more difficult to calculate than the plain GPA measurement. +- Each course can be considered independently. Instead of waiting for all results to come in, the registrar can calculate the standardized scores for each course as grades come in, possibly saving time in sending grades out. 
- The standard deviations do correct for differing course averages: for example, getting a $\mathrm{B}+$ when the course average is a $\mathrm{C}+$ looks better than getting an $\mathrm{A}-$ when the course average is an A. At the same time, this method continues to rank students in the order in which they scored in each course. Student X is thus always ranked above student Y if X and Y take similar courses and X has better grades.

# Weaknesses

The standardized GPA suffers from many of the same problems as the plain GPA.

- It does not reward students who have a more well-rounded curriculum. Instead, students are punished severely if they perform at less than the course average; for example, a student who takes a course outside his or her major is likely to score worse than students majoring in the course's subject.
- The plain GPA makes no distinction between easy and difficult courses and thus encourages easy courses. The standardized GPA attempts to correct this but ends up claiming that a low average grade is equivalent to a difficult course. This is not always true and has some interesting quirks:

- Higher-level courses may be populated only by students who excel both in the subject of the course and in general, so only high grades are given. But if all grades are high, this method treats the course as easy!
- This method boosts one student's grade if the other students in the course have lower scores.
- Additionally, ability scores may be significantly raised by adding poor students to the course.

- The standardized GPA implicitly assumes that instructors assign grades based on a normal curve or to fit some other prespecified distribution. Not all instructors grade on the normal curve or even on any curve. Some courses may require grades to fit some other distribution in order to be fair, for example, if all the students are extraordinarily talented.
+- The method does not compensate for the skill of the students when deciding the difficulty of a course. A good student who takes courses with other good + +students will look worse than a slightly less able student who takes courses among significantly less able students. The difficulty of a course should be measured not only by the grades of its students but also by the aptitudes of those students. + +# Consequences + +Grading based on deviation from the mean fosters cutthroat competition among students, since any student's ability score may be significantly raised by lowering the ability scores of other students. + +# Iterated Adjusted GPA + +# How it Works + +Rather than directly comparing students, this method compares courses. Suppose that a course is unusually difficult. Then students should receive lower grades in that course relative to their others, so the average grade in that course should be lower than the average GPA of all students enrolled in it. We should therefore be able to correct for courses that are unusually difficult by adding a small amount to the point value of every grade given in that course. Likewise, we can correct for easy courses by subtracting a small amount. Of course, once we have corrected everyone's grades, their new GPAs will be different, and most likely some courses will need further correction. The iterated adjusted GPA method makes ten corrections to all grades, then sorts students in order of corrected GPA. (Our numerical experiments show that ten iterations are sufficient to bring the difference between the average GPA and the average grade down to zero, to three decimal places.) + +# Strengths + +- This algorithm is fairly quick to compute, taking only a couple of minutes for 1,000 students, 200 courses, and 6 courses per student. +- The computation is straightforward to explain and easily understood by non-experts. + +# Weaknesses + +- All grades from all courses must be known in order to run the computation. 
+- The corrected grades cannot be computed independently by students. + +- There is no guarantee that the corrected GPAs will be comparable across semesters; to compute overall class rank at graduation, it will be necessary to average ranks across semesters, rather than average corrected GPAs. + +# Consequences + +This method systematically corrects for instructor bias in giving grades, thus eliminating the tendency of students to select easy courses, and therefore makes progress toward reversing grade inflation. The total correction made for each course may be used as an indicator of the course's grade bias. + +This algorithm tends to "punish" students in courses where grades are unusually high. If students score high in a course relative to their other grades, it could be because the course was easy or because the students put forth extra effort. If the course was easy, then the punishment is due; if the difference was due to extra effort, then such effort is not typical of the students in question and the punishment is arguably due. + +Although the correction can be applied to very small classes and independent studies, strange things are likely to happen. If a student in an independent study gets a grade above his GPA, he is punished by the correction, and if he gets a lower grade, he is rewarded—which is clearly undesirable. Using the sample data set presented later in Table 1, we experimented with independent studies and determined that they had minimal impact on the rank order. However, to avoid the possibility of such strange results, independent studies should be ignored in the computation. + +# The Least-Squares Algorithm + +# How It Works + +The least-squares method assumes that the difference between two students' abilities will be reflected in the difference between their grades. Hence, if $X$ and $Y$ take the same course, and get grades A and B, then we have a difference $X - Y = 4.0 - 3.0 = 1.0$ . 
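The pairwise setup just described can be sketched in a few lines. The code below is our illustration, not the authors' program: the students, courses, and grades are hypothetical, and the major-field handicap introduced next is omitted. It uses `numpy.linalg.lstsq`, which returns the minimum-norm least-squares solution; the scores are then shifted so that the median is 2.0, the normalization the paper adopts.

```python
# Illustrative sketch (not the authors' code): build one equation
#   x_i - x_j = g_i - g_j
# for every pair of students in every course, then solve the
# overdetermined system by least squares.  Data are hypothetical,
# and the major-field handicap H_H is omitted.
from itertools import combinations
import numpy as np

courses = {                       # course -> {student: grade points}
    "Math":    {"X": 4.0, "Y": 3.0, "Z": 3.7},
    "English": {"X": 3.3, "Y": 3.0, "Z": 4.0},
}
students = sorted({s for roster in courses.values() for s in roster})
idx = {s: k for k, s in enumerate(students)}

rows, rhs = [], []
for roster in courses.values():
    for (si, gi), (sj, gj) in combinations(roster.items(), 2):
        row = np.zeros(len(students))
        row[idx[si]], row[idx[sj]] = 1.0, -1.0
        rows.append(row)
        rhs.append(gi - gj)

# The equations fix only differences, so A is rank-deficient; lstsq
# picks the minimum-norm solution, which we shift so the median is 2.0.
x, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
scores = x + (2.0 - np.median(x))
ranking = sorted(zip(scores, students), reverse=True)
```

On this toy data the scores reduce to centered course averages, so Z ranks first, then X, then Y; with real transcripts the matrix is large and sparse.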
We further assume that students majoring in natural science fields perform better in natural science courses than in humanities courses, and vice versa, and that the difference is approximately the same for all students; we call it $H_{H}$ . Hence, if, in the example above, students $X$ and $Y$ are taking a mathematics course, but $X$ is majoring in physics and $Y$ is majoring in literature, we have $X - (Y + H_{H}) = 1.0$ .

A course with $N$ students generates $N(N + 1)/2$ such linear equations; the abilities of each student are the solution to the set of all such equations from every course offered during the semester. In practice, these equations never have a solution. Hence, methods of least-squares approximation must be employed. The system is converted into the matrix equation $Ax = b$ , where $A$ is the matrix of the coefficients of the left-hand side of each equation, $x$ is the vector of the abilities of each student and the constant $H_{H}$ , and $b$ is the right-hand side of each equation. Multiplication by the transpose of $A$ yields the equation $A^T A x = A^T b$ . This matrix equation has a one-dimensional solution set, with nullspace equal to scalar multiples of $(111 \ldots 10)^T$ , where the 1s correspond to the students' abilities and the 0 to the constant $H_{H}$ . Thus, one student's ability score may be assigned arbitrarily, and the rest will then be well determined. This arbitrary assignment will in no way affect the ordering of any two students' ability scores, or the magnitude of the difference between two students. After these scores are determined, the difference between a 2.0 and the median score is added to every student's score, so that the scores will be easily interpretable in terms of the plain GPA. These scores can be averaged over all eight semesters to produce a ranking at graduation.

# Strengths

- Least squares corrects for the difficulty of every student's courseload.
- Least squares can reward students for carrying a well-rounded courseload. This second strength is extremely flexible and deserves further enumeration.

- If a school wishes not to account for well-roundedness, the factor $H_{H}$ may be omitted, with no consequence except that the ability scores will no longer consider the balance or specialization in each student's curriculum.
- If a school wishes to emphasize several areas of specialization rather than just two, it could do so by replacing $H_{H}$ with constants representing the difficulty of the transitions between each pair.
- A school wanting to assure that certain emphasized courses (e.g., a freshman writing course) not unduly benefit students majoring in some departments could categorize such courses as belonging to every area of specialization, or to none.
- Similarly, if a school wishes to dictate that certain de-emphasized courses (e.g., physical education) not reward students with a well-roundedness correction, it may also dictate that they be categorized in every area of specialization or in none.
- Other corrections may be made for students with special circumstances; for example, if a student double-majors in two different areas of specialization, each well-roundedness correction might be replaced by the average of the two corrections from each of the student's major areas.

# Weaknesses

The most glaring weakness of this method is that it involves huge amounts of computation and may severely tax computing resources at larger universities.

For a student body of 6,000, with 1,200 courses of size 20 and each student taking 4 courses, we have $1,200 \times 20(20 + 1) / 2 \approx 250,000$ pairs of grades. This results in a sparse $A$ with 250,000 rows and 6,000 columns, with only two or three nonzero entries in each row (the two students being compared, and possibly the constant $H_{H}$).
Then $A^T A$ has 36,000,000 entries; at 4 bytes per entry, keeping it in memory requires 144 MB, barely within range of current medium-size computers. Computing $A^T A$ takes on the order of $250,000 \times 6,000^2 = 9 \times 10^{12}$ multiplications, computing $A^T b$ takes only about $1.5 \times 10^9$ multiplications, and solving $A^T A x = A^T b$ takes about $6,000^3 = 2.2 \times 10^{11}$ operations. Thus, the time to solve the system is about $10^{13}$ operations, which would take 50,000 sec $\approx$ 14 hr on a 200 MHz computer. + +The memory needed increases with the square of the number of students and quickly becomes infeasible with this approach and current technology. + +# Consequences + +An immediate consequence of changing to this ranking will be that, so long as the average grade remains an A-, all ability scores will be tightly packed into a range between about 1.0 and about 3.0; no student will appear to carry an A average. This will likely result in instructors widening their grading scales, in order to reward their best students, thus reducing grade inflation to something more reasonable. + +# A Small Test Population + +We postulate a minicollege, with 18 students (A-R), that offers only the following courses: Math, Physics, Computer Science, Physical Education, Health, English, French, History, Philosophy, Psychology, and Music History. + +Math, Physics, and English are generally believed to be prohibitively difficult courses, while Physical Education, Health, and Music History are generally considered to be very easy. Students' transcripts are listed in Table 1. 
Just looking at these transcripts, without analyzing them numerically, we find the following relationships, which any valid ranking system must satisfy (recall that $X > Y$ means that $X$ should be ranked above $Y$ ):

- $\mathrm{A} > \mathrm{B}$ ; $\mathrm{C} > \mathrm{D}$ ; and $\mathrm{E} > \mathrm{F}$ , and so on, because A, C, etc., carry better grades than B, D, etc., in courseloads of similar difficulty.
- $\mathrm{O},\mathrm{D} > \mathrm{J}$ because $\mathrm{O}$ and $\mathrm{D}$ have slightly better grades than $\mathrm{J}$ in more difficult courses.
- $\mathrm{E} > \mathrm{D}$ because $\mathrm{E}$ has better grades in a more difficult courseload.

We also recognize the following relationships as desirable:

- $\mathrm{O} > \mathrm{Q}, \mathrm{R}$ and $\mathrm{P} > \mathrm{R}$ , because $\mathrm{O}$ and $\mathrm{P}$ have almost as good grades and much more difficult schedules.

Table 1. Transcripts of the test population. A star indicates the student's major. "CPS" means Computer Science and "PhysEd" means Physical Education.
| Student | Courses |
| --- | --- |
| A | PhysEd 4.3, Health 4.0, *History 3.0, Math 2.3 |
| B | PhysEd 4.3, Health 3.3, *Psychology 2.0, CPS 2.0 |
| C | Math 4.0, *Physics 4.3, CPS 4.0, Philosophy 3.7 |
| D | *Math 4.0, Physics 3.7, CPS 4.0, French 3.0 |
| E | *Math 4.3, Physics 4.0, English 3.3, History 3.7 |
| F | Physics 3.7, *CPS 4.0, French 3.7, History 3.0 |
| G | Math 4.0, *CPS 4.3, Health 4.0, English 3.7 |
| H | CPS 3.0, *Physics 4.0, PhysEd 4.0, Psychology 3.0 |
| I | English 4.0, French 4.3, CPS 3.7, *Philosophy 4.3 |
| J | English 3.7, *French 4.0, Music History 4.0, Math 2.7 |
| K | *English 4.3, Philosophy 4.0, Psychology 4.0, Music History 4.3 |
| L | English 3.7, *History 4.0, Psychology 4.0, Music History 4.0 |
| M | Music History 4.3, Psychology 4.3, *French 4.3, PhysEd 4.0 |
| N | *Music History 4.0, Psychology 4.0, French 4.0, Health 4.0 |
| O | Physics 4.0, English 3.3, *Math 4.0, Philosophy 4.0 |
| P | Physics 3.0, *English 3.7, Math 3.3, Philosophy 4.0 |
| Q | PhysEd 4.0, Health 4.3, Music History 4.3, *Psychology 4.3 |
| R | PhysEd 4.0, Health 4.0, Music History 4.0, *CPS 4.0 |
+ +- $\mathrm{M} > \mathrm{Q}$ and $\mathrm{N} > \mathrm{R}$ , because $\mathrm{M}$ and $\mathrm{Q}$ have similar grades but $\mathrm{M}$ has a more difficult schedule, and similarly for $\mathrm{N}$ and $\mathrm{R}$ . +- $\mathrm{K} > \mathrm{M}, \mathrm{N}, \mathrm{Q}, \mathrm{R}$ because $\mathrm{K}$ has similar grades in a much more difficult schedule. +- C, G, and K should be ranked near each other because they have similar grades in similar schedules. +- $\mathrm{P} > \mathrm{J}$ because $\mathrm{P}$ has similar grades against a significantly more difficult schedule and has higher grades in the two classes that they share. + +If we postulate that the well-roundedness of a student's schedule should affect rank, we also find the following relationships: + +- $\mathrm{E} > \mathrm{C}$ , $\mathrm{D}$ because $\mathrm{E}$ has almost as good grades in a more difficult, much more well-rounded schedule. +- $\mathrm{I} > \mathrm{K}, \mathrm{M}$ because I has similar grades against a more well-rounded schedule. + +The rankings of this sample population are given in the Table 2. A comparison of the different methods relative to the criteria that we have set out is in Table 3. Least squares does best, followed by iterated adjusted, standardized, and plain. + +Table 2. Rankings of the sample population under the various methods. + +
| Rank | Plain (with +/-) | Standardized (with +/-) | Iterated (with +/-) | LS (with +/-) | Plain (without +/-) | Standardized (without +/-) | Iterated (without +/-) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | Q 4.25 | K 0.84 | K 4.22 | E 2.32 | R 4.00 | G 0.53 | L 4.12 |
| 2 | M 4.25 | I 0.81 | I 4.17 | I 2.26 | Q 4.00 | L 0.49 | G 4.07 |
| 3 | K 4.17 | Q 0.60 | M 4.09 | G 2.24 | C 4.00 | C 0.39 | I 4.06 |
| 4 | I 4.08 | M 0.52 | C 4.08 | C 2.24 | N 4.00 | I 0.36 | C 4.05 |
| 5 | R 4.00 | G 0.39 | G 4.07 | O 2.18 | M 4.00 | N 0.34 | K 4.03 |
| 6 | N 4.00 | C 0.22 | L 4.06 | K 2.14 | G 4.00 | K 0.27 | N 3.96 |
| 7 | C 4.00 | E 0.21 | E 4.05 | M 2.05 | K 4.00 | M 0.24 | E 3.92 |
| 8 | G 4.00 | L 0.16 | Q 4.02 | Q 2.03 | I 4.00 | Q 0.24 | M 3.89 |
| 9 | L 3.92 | N -0.01 | O 3.96 | F 2.01 | L 4.00 | R 0.23 | J 3.87 |
| 10 | E 3.83 | O -0.03 | N 3.90 | R 1.99 | O 3.75 | F 0.11 | D 3.84 |
| 11 | O 3.83 | R -0.20 | R 3.76 | P 1.94 | J 3.75 | J 0.07 | F 3.84 |
| 12 | D 3.67 | D -0.26 | D 3.74 | D 1.93 | F 3.75 | E 0.07 | O 3.83 |
| 13 | J 3.58 | A -0.27 | F 3.69 | L 1.92 | E 3.75 | D -0.12 | Q 3.81 |
| 14 | F 3.58 | F -0.28 | J 3.66 | N 1.87 | D 3.75 | O -0.15 | R 3.80 |
| 15 | H 3.50 | H -0.45 | P 3.62 | J 1.74 | H 3.50 | H -0.30 | P 3.58 |
| 16 | P 3.50 | J -0.49 | H 3.41 | H 1.60 | P 3.50 | P -0.60 | H 3.39 |
| 17 | A 3.42 | P -0.59 | A 3.36 | A 1.44 | A 3.25 | A -0.61 | A 3.18 |
| 18 | B 2.92 | B -1.16 | B 2.76 | B 0.89 | B 2.75 | B -1.56 | B 2.59 |
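As a check on the tables, the plain-GPA column can be recomputed directly from Table 1. The sketch below is our illustration, not part of the paper; it stores plus and minus grades as exact thirds (e.g., A+ is 4 + 1/3), which the printed tables round to one decimal (4.3).

```python
# Recompute the plain GPA column of Table 2 from the Table 1
# transcripts.  Plus/minus grades are exact thirds (A+ = 4 + 1/3),
# which the printed tables round to one or two decimals.
from fractions import Fraction as F

def g(points, thirds=0):          # grade points, with +/- as exact thirds
    return F(points) + F(thirds, 3)

transcripts = {                    # transcribed from Table 1
    "A": [g(4, 1), g(4), g(3), g(2, 1)],
    "B": [g(4, 1), g(3, 1), g(2), g(2)],
    "C": [g(4), g(4, 1), g(4), g(3, 2)],
    "D": [g(4), g(3, 2), g(4), g(3)],
    "E": [g(4, 1), g(4), g(3, 1), g(3, 2)],
    "F": [g(3, 2), g(4), g(3, 2), g(3)],
    "G": [g(4), g(4, 1), g(4), g(3, 2)],
    "H": [g(3), g(4), g(4), g(3)],
    "I": [g(4), g(4, 1), g(3, 2), g(4, 1)],
    "J": [g(3, 2), g(4), g(4), g(2, 2)],
    "K": [g(4, 1), g(4), g(4), g(4, 1)],
    "L": [g(3, 2), g(4), g(4), g(4)],
    "M": [g(4, 1), g(4, 1), g(4, 1), g(4)],
    "N": [g(4), g(4), g(4), g(4)],
    "O": [g(4), g(3, 1), g(4), g(4)],
    "P": [g(3), g(3, 2), g(3, 1), g(4)],
    "Q": [g(4), g(4, 1), g(4, 1), g(4, 1)],
    "R": [g(4), g(4), g(4), g(4)],
}
gpa = {s: sum(gs) / len(gs) for s, gs in transcripts.items()}
ranking = sorted(gpa, key=gpa.get, reverse=True)
```

The exact values reproduce the plain column: Q and M tie at 4.25, K follows at 4.17, and B is last at 2.92; the four students at 4.00 are ordered arbitrarily, which is precisely the weakness of the plain GPA discussed earlier.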
+ +Table 3. Number of criteria satisfied by each method on the minicollege data set, for $+ / -$ grades. + +
| Criterion | Plain | Standardized | Iterated | Least Squares |
| --- | --- | --- | --- | --- |
| Required (20) | all | all | all | all |
| Desirable (13) | 5 | 6 | 8 | 9 |
| Well-roundedness (4) | 1 | 2 | 2 | all |
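The iterated correction behind the "Iterated" columns is short to state in code. The sketch below is our illustration on a hypothetical four-student, three-course population, not the authors' program; as in the paper, ten passes are assumed to suffice.

```python
# Sketch (not the authors' code) of the iterated adjusted GPA: in each
# pass, every course's grades are shifted by the difference between the
# enrolled students' average (current) GPA and the course's average
# (current) grade.  The transcripts below are hypothetical.
courses = {
    "Math":   {"X": 3.0, "Y": 2.7, "Z": 3.3},
    "Music":  {"X": 4.0, "Y": 4.0, "W": 4.3},
    "French": {"Z": 3.7, "W": 3.3, "Y": 3.0},
}

def gpas(courses):
    totals = {}
    for roster in courses.values():
        for s, grade in roster.items():
            totals.setdefault(s, []).append(grade)
    return {s: sum(gs) / len(gs) for s, gs in totals.items()}

for _ in range(10):                 # ten passes, as in the paper
    current = gpas(courses)
    for roster in courses.values():
        avg_grade = sum(roster.values()) / len(roster)
        avg_gpa = sum(current[s] for s in roster) / len(roster)
        shift = avg_gpa - avg_grade  # > 0 for hard courses, < 0 for easy
        for s in roster:
            roster[s] += shift

ranking = sorted(gpas(courses).items(), key=lambda kv: -kv[1])
```

After the passes, the average corrected grade in each course matches the average corrected GPA of its students to well within rounding, which is the fixed point the method seeks.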
+ +# Test Population Redux (No +/− Grades) + +We now take the test population and drop all pluses and minuses from the grades. Again, we determine some basic required relationships that any valid ranking system must satisfy: + +- A > B; C > D; and G > H since A, C, and G have better grades in similar courses. + +We also recognize the following relationships as desirable: + +- $\mathrm{O} > \mathrm{P}$ because $\mathrm{O}$ has slightly better grades in the same courseload. +- $\mathrm{E} > \mathrm{F}$ because $\mathrm{E}$ has the same grades in a more difficult course load. +- $\mathrm{O} > \mathrm{Q},\mathrm{R}$ because $\mathrm{O}$ has almost equivalent grades in a much more difficult courseload. +- $C > I$ , $G$ because $C$ has the same grades in a more difficult courseload. +- $\mathrm{I} > \mathrm{K}, \mathrm{L}$ because I has the same grades in a more difficult courseload. +- K, L > M, N because K and L have the same grades in a more difficult courseload. + +- M, N > Q, R because M and N have the same grades in a more difficult courseload. + +If we postulate that the well-roundedness of a student's schedule should affect rank, we also find that C, E, G, and I should be ranked near each other because + +- E has slightly worse grades in a more difficult, better-rounded courseload; and +- C has the same grades as G and I in a slightly more difficult, slightly less well-rounded courseload. + +The rankings of this sample population are given in the right-hand half of Table 2. Table 4 gives a comparison of the methods. + +Table 4. Number of criteria satisfied by each method on the minicollege data set (no $+ / -$ grades). + +
| Criterion | Plain | Standardized | Iterated | Least Squares |
| --- | --- | --- | --- | --- |
| Required (3) | all | all | all | all |
| Desirable (12) | 1 | 6 | 9 | 9 |
| Well-roundedness (6) | 3 | 1 | 3 | 4 |
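For comparison, the standardized GPA used in these tables also reduces to a few lines. The data below are hypothetical (not the Table 1 population); courses in which every grade is identical have zero standard deviation, and this sketch assigns them a standard score of 0.

```python
# Sketch (not the authors' code) of the standardized GPA: each grade is
# replaced by its z-score within the course, and a student's score is
# the mean of those z-scores.  Data are hypothetical; a course whose
# grades are all equal (zero deviation) contributes a score of 0.
from statistics import mean, pstdev

courses = {
    "Math":    {"X": 4.0, "Y": 3.0, "Z": 3.7},
    "English": {"X": 3.3, "Y": 3.0, "Z": 4.0},
    "Health":  {"X": 4.0, "Y": 4.0},
}

zscores = {}
for roster in courses.values():
    mu, sigma = mean(roster.values()), pstdev(roster.values())
    for s, grade in roster.items():
        zscores.setdefault(s, []).append((grade - mu) / sigma if sigma else 0.0)

standardized = {s: mean(zs) for s, zs in zscores.items()}
ranking = sorted(standardized, key=standardized.get, reverse=True)
```

Note how the uniformly graded Health course contributes nothing, illustrating the earlier complaint that this method treats any course with compressed grades as easy.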
+ +# Stability + +# How Well Do the Models Agree? + +We have four ways of ordering students: plain GPA, standardized GPA, iterated adjusted GPA, and least squares. Since all four are more or less reasonable, they should agree fairly well with each other. One way to test agreement is to plot each student's rank under one method with his rank under the others. If the plot is scattered randomly, then the rankings do not agree about anything. If the plot is a straight line, then the rankings agree completely. + +To get an idea for how each model works, we created by means of a computer simulation a population of 1,000 students and 200 courses, with 6 courses per student. The details of the simulator are explained in the Appendix. We implemented all of the algorithms except least squares, which was too difficult for the available time. A single run of the simulation is analyzed here, but these results are typical of other runs. + +# With Plus and Minus Grades + +See Figures 1-3 for graphs of the agreement, using simulated students and courses, and allowing plus and minus grades. The comparisons to plain GPA rankings are rather scattered, especially toward the lower left corner, where + +the highest rankings are. The plain GPA rankings do not appear to agree particularly well with either the iterated adjusted or the standardized rankings. There are lots of scattered points, which is due mostly to the facts that there are lots of ties in plain GPA rankings (especially near the top of the class) and that tied students are ordered more or less at random. Very few ties are present in any of the other methods. + +![](images/1ceb4f11b08c18f312dccd70946d6b05466994b4bcfb4ca907f5c369e3f39a87.jpg) +Figure 1. Plain GPA rankings vs. standardized GPA rankings, using simulated students. + +![](images/8cfb30d8e8cae5f7644ff0e770262c5b10d23916972a5e0313dffdfb35c34b09.jpg) +Figure 2. Plain GPA rankings vs. iterated adjusted GPA rankings, using simulated students. 
+ +![](images/c7430bf5749f7c86a5254f659454d0c392e2d2e7edc61f282ae6d5aebb4c645f.jpg) +Figure 3. Standardized GPA rankings vs. iterated adjusted GPA rankings, using simulated students. + +![](images/07e68346920152e67e8ceeae32b19770587b4fceac0f1a5d77ac1244486ca120.jpg) +Figure 4. Plain GPA rankings vs. standardized GPA rankings, using simulated students, with no plus or minus grades. + +The iterated adjusted and standardized rankings are in better agreement, with fewer outlying points. These two methods agree on 89 of the 100 students in the top decile. + +# Without Plus and Minus Grades + +See Figures 4-6 for graphs of the agreement, using simulated students and courses, and disallowing plus and minus grades. + +A great deal of information is lost without the use of plus and minus grades. In particular, there are many more ties in the plain GPA-based ranking, which show up as large squares of scattered points. The large square at the bottom left shows the massive tie among people with 4.0 averages. Again, the plain GPA is not in good agreement with the nontraditional methods due to these + +![](images/4935ae60b6b90c7ed71374560d3b0c6609c599b3446c67356d4599d38fec1fe0.jpg) +Figure 5. Plain GPA rankings vs. iterated adjusted GPA rankings, using simulated students, with no plus or minus grades. + +![](images/cb5164cc37fea006a90005fbc1d8d021417b76049c7cca17729a50de68d6d916.jpg) +Figure 6. Standardized GPA rankings vs. iterated adjusted GPA rankings, using simulated students, with no plus or minus grades. + +ties. Both new models agree with each other on 79 of the 100 students in the top decile. Apparently, the loss of information is responsible for the greater lack of agreement. + +# How Much Does Changing One Grade Affect the Outcome? + +If one grade of one student is changed, the student's rank can be expected to change as well. For plain GPA rankings, changing one student's grades can only move that student from one place to another. 
In the nontraditional rankings, each student's rank is determined relative to the other students, and one changed grade might trigger a chain of rank changes. + +To test sensitivity, the sample population was modified slightly: Student Q's grade of A+ in Music History was changed to a C-, a very drastic change. The change was tested including plus and minus grades and using only whole letter grades. (When only letter grades are considered, the change is to a C.) + +- Using the GPA ranking and plus and minus grades, Q dropped from 1st to 14th; with only whole letter grades, Q dropped from 2nd to 16th. In both cases, there were no changes in the order of other students except to make room for Q. +- For the standardized GPA with plus and minus grades, Q dropped from 3rd to 12th. L, N, and J improved several places, apparently because they also took Music History and benefitted from the drop in mean grade. R improved one spot, apparently for the same reason. K dropped by one. Without plus and minus grades, Q dropped 8 places, and J, L, and N improved one rank each. Student I dropped three places, perhaps because of how N and K benefitted from Music History. +- The iterated adjusted GPA including plus and minus grades was rather stable. Q dropped 9 places, and J and L improved a couple of places each, + +benefiting from the apparent increase in the difficulty of Music History. G dropped two places, possibly because he scored lower in Health. When only whole letter grades are used, Q dropped from 13th to 16th. J and K improved a couple of places, benefitting from the increased difficulty of Music History, while G dropped again, three places this time. O and F switched places for no obvious reason. + +- Using least squares and plus and minus grades, Q dropped 9 places. Other members of the Music History course J and K improved a bit, and L improved a lot. With letter grades only, Q dropped from 15th to 16th, and J, K, and L improved. 
For no obvious reason, E and C switched places. O dropped by two because of improvements by K and L. + +Thus, it would seem that plain GPA ranking is the most stable, since at most one person changes rank and the rest move up or down at most one rank to compensate. The next most stable seems to be least squares, followed by iterated adjusted, and finally standardized. In each scheme, the coursemates of the person whose grade changed are most likely to change rank. There were a few chain-reaction reorderings, which are harder to explain. Also, having plus and minus grades appears to improve stability in general. + +# How Does Course Size Affect the Outcome? + +Another simulation was run with 1,000 students, 500 courses, and 6 courses per student. Courses came out smaller, and the correlation between the standardized ranking and the iterated adjusted ranking was weaker. This is probably due to the fact that standard deviations computed on smaller data sets tend to be less reliable, as are average grades and average GPAs. + +# Strengths and Weaknesses of Each Model and Recommendations + +If the college wishes to promote well-roundedness over specialization (we would suggest this), and has a fairly small population (fewer than about 6,000 students), we recommend the least-squares method. Otherwise, we recommend the iterated adjusted GPA method. + +We feel that the least-squares method is superior to the other two because: + +- It does not punish students for attempting to expand their horizons. +- It produces results more consistent with intuitive observation than do the iterated or standardized GPA. +- It is more flexible than either the iterated or standardized GPA. + +- It is clear and easily understood. + +The iterated adjusted GPA method has a few definite advantages as well: + +- It is significantly faster than the least-squares method. 
- If the well-roundedness of students is not a consideration, it produces results that are roughly as consistent with intuitive observation as the least-squares method.

We feel that the standardized GPA method is decidedly inferior, and should not be recommended, because:

- It makes no attempt to correct for schedule difficulty or well-roundedness.
- It assumes that all courses have the same range of ability among their students.
- It produces results that are no more consistent with intuitive observation than those produced by the plain GPA.

# Further Recommendations

# Transition from GPA Ranking

The three methods given here all rank an entire student body for one semester of courses. Thus, to rank students just within a single class, we must either average their ability scores (revised GPAs) or their ranks within their class over each semester. The new system could be phased in at any time if grades for enough preceding years are kept on record. The new ranking algorithm could be applied to students who have already graduated in order to determine rankings for the next class. However, we recommend careful testing on several past years of data as well as current grades. The administration should be prepared for a great deal of student and faculty opposition because it is a new, untested system. The standardized and iterated adjusted schemes are likely to encounter opposition because they directly alter the point values of grades during computation. The least-squares method simply reinterprets them and is less likely to make instructors feel that their authority has been violated.

# Transfer Students

ABC College will have to come up with its own policy concerning the ranking of transfer students. One option is to translate transferred grades to an equivalent grade in a particular course at ABC. That allows the ranking algorithm to run on the maximum amount of information.
However, someone will have to compare all other colleges to ABC very carefully to create the official translation policy. Another possibility is to ignore transferred grades when computing the rankings. That avoids the problem of estimating how grades at other schools compare to ABC's, but at the expense of throwing out a lot of information.

# Importance of Plus and Minus Grades

It seems that plus and minus grades are extremely helpful in determining class rank, especially since grades are so heavily inflated. Without them, ABC has to rank its students primarily on the basis of just two grades, A and B, and a considerable fraction of the students have exactly the same grades. With pluses and minuses, there are six different grades, A+, A, A-, B+, B, and B-, which come into play, thus differentiating students more precisely. All four ranking systems appear to work better when plus and minus grades are used. ABC should encourage its instructors to use them with care.

# Appendix: Details of the Simulation

# Simulating Courses

We want to take the following things into consideration when creating courses:

- Students tend to pick more courses in areas they are comfortable in. In particular, they are required to select courses in their majors.
- Courses vary in subject matter. Some require a lot of math and scientific experience, while others focus more on human nature, history, and literature.
- Courses vary in difficulty. Here, we are not considering the difficulty of the material, but rather how difficult it is to get a good grade in the course. Students generally prefer courses where they expect to get better grades.
- Students are able to estimate their grade in a course fairly accurately.

Each simulated course $c$ therefore has three attributes. The first two are fractions, $c_{s}$ and $c_{h}$, which represent how much the course emphasizes the sciences and the humanities, respectively.
Since these are fractions of the total effort required for a course, we have $c_{s} + c_{h} = 1$. In the simulation, $c_{s}$ is determined by generating uniformly distributed random numbers between 0 and 1, and $c_{h} = 1 - c_{s}$.

The third attribute $c_{e}$ is the "easiness" of the course, that is, how easy it is to get a good grade. This number represents the tendency of the instructor to give higher or lower grades. In the simulation, $c_{e}$ is determined by taking a uniformly distributed random number between -0.5 and 0.5, indicating that instructors may skew their grades by up to half a letter grade up or down. We use a uniform distribution rather than a normal distribution so as to make the courses vary in difficulty over the entirety of a small range.

# Simulating Students

We want to take the following things into consideration when creating simulated students:

- Students have varying strengths and weaknesses. In particular, some students have different ability levels in the sciences and humanities. Students prefer courses within their comfort zones.
- Students prefer getting higher grades.

Each simulated student $S$ has two attributes, $S_{s}$ and $S_{h}$. Both of these are numbers representing grades that indicate the student's abilities in the sciences and humanities, respectively. Both range from 0 to $g_{\mathrm{max}}$, which is either 4.0 or 4.3 depending on the grading scale.

Given a course $c$ and a student $S$, the grade for that student in that course is given by

$$
g = \min \left(S_{s} c_{s} + S_{h} c_{h} + c_{e},\, g_{\max}\right). \tag{1}
$$

In the simulation, $S_{s}$ and $S_{h}$ are determined by taking random numbers from a normal distribution with mean 3.5 and standard deviation 1.0, with a maximum of $g_{\mathrm{max}}$.

# Generating a Simulated Population

The simulated population is created by first generating a number of courses and a number of students.
A course load is selected for each student $S$ by repeating the following: First, a course $c$ is selected at random. If the student is weak in science ($S_{s} < 2.5$) and the course is heavy in science ($c_{s} > 0.75$), then the course is rejected. Similarly, if the student is weak in humanities and the course is heavy in humanities, the course is rejected. If the student estimates his or her overall grade at less than 2.5, the course is rejected. This process of selection and rejection is repeated until a course is not rejected, but at most ten times, and then the last course is taken no matter what. The selected course is then added to the student's schedule and the grade computed as stated in (1), rounded to the nearest possible grade.

The rejection process allows for the students' preferences in selecting courses, and the fact that at most ten courses can be rejected allows for distribution requirements.

# Analysis of the Simulated Data

The simulation program was used to create 1,000 students and 200 courses, where the courseload was six. Thus, there were around $1{,}000 \times 6 / 200 \approx 30$ people in each course, which is reasonable. Two runs were made, one with only whole grades, and one with + and - grades allowed.

We can determine a lower bound for the average GPA at ABC College. Suppose we have $N$ students, each of whom takes $M$ courses. Denote by $g_{ij}$ the grade of student $i$ in that student's $j$th course. Then the average grade for that entire class is given by

$$
\frac{\sum_{i=1}^{N} \sum_{j=1}^{M} g_{ij}}{NM},
$$

and the average GPA is given by

$$
\frac{\sum_{i=1}^{N} \frac{\sum_{j=1}^{M} g_{ij}}{M}}{N}.
$$

The two are equal, so if the average grade at ABC College is $\mathrm{A}-$, then the average GPA should be no less than 3.5. Any GPA less than 3.5 would be rounded to a $\mathrm{B}+$ or less, and those greater than 3.5 would be rounded to $\mathrm{A}-$ or better.
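The course, student, and schedule generation described in this appendix can be sketched in Python (the language the simulation was written in). This is a minimal reconstruction from the description above, with our own function names, not the authors' program; rounding each grade to the nearest letter grade is omitted.

```python
import random

GMAX = 4.3  # top of the grading scale (use 4.0 when plus/minus grades are disallowed)

def make_course(rng):
    """Course attributes: science fraction c_s, humanities fraction c_h = 1 - c_s,
    and easiness c_e drawn uniformly from [-0.5, 0.5]."""
    c_s = rng.random()
    return {"c_s": c_s, "c_h": 1.0 - c_s, "c_e": rng.uniform(-0.5, 0.5)}

def make_student(rng):
    """Abilities S_s and S_h drawn from normal(3.5, 1.0), capped at GMAX."""
    return {"S_s": min(rng.gauss(3.5, 1.0), GMAX),
            "S_h": min(rng.gauss(3.5, 1.0), GMAX)}

def grade(student, course):
    """Equation (1): g = min(S_s*c_s + S_h*c_h + c_e, g_max)."""
    g = (student["S_s"] * course["c_s"]
         + student["S_h"] * course["c_h"]
         + course["c_e"])
    return min(g, GMAX)

def pick_schedule(student, courses, load, rng):
    """Fill a schedule of `load` courses, allowing at most ten rejections per slot."""
    schedule = []
    for _ in range(load):
        for _ in range(10):
            c = rng.choice(courses)
            if student["S_s"] < 2.5 and c["c_s"] > 0.75:
                continue  # weak in science, course heavy in science: reject
            if student["S_h"] < 2.5 and c["c_h"] > 0.75:
                continue  # weak in humanities, course heavy in humanities: reject
            if grade(student, c) < 2.5:
                continue  # student expects a poor grade: reject
            break  # course accepted
        # after ten rejections, the last course drawn is taken no matter what
        schedule.append((c, grade(student, c)))
    return schedule
```

Generating 200 courses and 1,000 students with a load of six reproduces the population sizes used in the analysis.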
In both data sets, the median GPA was 3.5, which agrees with the information given about ABC College.

# Strengths and Weaknesses of the Simulation

The computation runs very quickly—in a few minutes—even though it was written in a high-level interpreted language (Python). It is very flexible and can be adjusted to reflect different grade distributions, as may be found in different colleges. It takes into account variation in student interest and in course material.

However, most of the courses turn out roughly the same size. Many colleges have a high proportion of small, seminar-style courses, and there are almost always some very large lectures. The simulation ranks the whole school together and does not distinguish among the classes. There are only two majors in the simulation, sciences and humanities; and while there are forces within the simulation that push students into taking more courses in their preferred area of knowledge, there are no guarantees that the resulting schedules accurately reflect major requirements. There are also no prerequisites enforced, and thus no courses that are predominantly populated by freshmen and seniors. This also means that the simulator cannot realistically create courses for more than one year.

# A Case for Stricter Grading

Aaron F. Archer

Andrew D. Hutchings

Brian Johnson

Harvey Mudd College

Claremont, California 91711

{aarcher,ahutchin,bjohnson}@hmc.edu

Advisor: Michael Moody

# Abstract

We develop a ranking method that corrects the students' grades to take into account the harshness or leniency of the instructor's grading tendencies.

We simulate grade assignment to a student based on the student's inherent ability to perform well, the student's specific aptitude for the course, the difficulty of the course, and the harshness or leniency of the instructor's grading.
We assume that we have access to an instructor's previous grading history, so that we can judge how harsh or lenient a grader each instructor is. After making this determination, we adjust each grade given by that instructor to systematically correct for that instructor's bias.

After correction, the student body has an aggregate GPA of approximately 2.7, corresponding to an uninflated grade of B-. The corrected GPA values do a considerably better job of accurately ranking the students by ability, especially for students in the bottom eight deciles.

# Assumptions

1. We wish to evaluate students purely based on their ability to perform well in courses.
2. Each student has a quality attribute that is not directly measurable but influences the ability to do well in courses. The ideal ranking of students is by highest quality attribute.
3. The instructor has an accurate perception of each student's performance in a course.
4. The cause of high grades is lenient grading practices by the average instructor at ABC College.
5. A more lenient instructor tends to grade all students higher, not just students of a certain ability level.
6. Scholarship selection is completed in the first half of a student's undergraduate career, to allow her to enjoy the scholarships while she is still in school.
7. Because the students are early in their careers at ABC College, they are still taking primarily general education courses, rather than courses in their major. Therefore, we assume that they select courses randomly, and thereby we model the breadth of course selection across disciplines.
8. Since the students know that their grades are going to be adjusted to filter out the harshness of their instructors' grading, they do not gravitate toward courses taught by lenient instructors.
9. Each student has a varying aptitude for each course. Presumably, a student has more aptitude for courses in her major.
But since general course requirements tend to be broad and these are the courses we are examining, we assume that a student's aptitude for a course is random. +10. Each course has an inherent difficulty. In an easy course it is difficult to differentiate the high ability students from the rest, whereas tougher material produces a greater spread of performances. +11. Instructors know when a course is difficult. Presumably all students (even the top ones) will attain a lesser mastery of more difficult material, but the instructor will take this into account when assigning grades. +12. The college is on a semester system and each student takes four courses per semester. +13. A student's performance in a course is not influenced by which other students are taking the course. Neither is the student's grade, since we assume that instructors do not grade the students in a given course on a curve but rather on some absolute standard of performance. +14. An instructor's harshness in grading does not depend on the course and remains constant over a period of several years. Data on instructors' grading histories are available. +15. All instructors rate a student's performance the same, but they have different standards for what grade that performance should earn. + +# Practical Considerations + +The concept of a single quality attribute that describes each student is not one that plays well politically and in the media. Not many people would advocate that a student's overall ability to do well in courses can be accurately characterized by a single real number. Therefore, our adjusted measure of student ability should be some sort of adjusted GPA, which will be easier for a general audience to accept and understand. This does not present a problem from the modeling point of view, as long as we know how quality rankings correspond to GPA values, and vice versa. + +Ultimately, as we construct our model, we will run into a fundamental grading problem. 
The average grade at ABC College is an $\mathrm{A} -$ , which corresponds to a 3.67 GPA. Grade point averages that are this high result in very uninteresting grade distributions. The majority of the grades must be $\mathrm{A} +$ , $\mathrm{A}$ , or $\mathrm{A} -$ . In other words, if we look at the transcript of any above-average student at ABC, we will probably see a page full of $\mathrm{A} +$ , $\mathrm{A}$ , and $\mathrm{A} -$ grades. In this kind of environment, it will be extremely difficult to pick out the top few students, because the top half of the school is separated by only about 0.6 grade points. In contrast, the bottom half of the school is spread over the remaining 3.67 grade points, so it will be much easier to rank them by ability. + +One radical solution to this dilemma is to require additional feedback on student performance from the instructors. We outline one possible system here, before we move on to a less radical approach. In addition to giving grades on the usual A to F scale, we could require an instructor to give each student a ranking between 1 and 10. At least one student in each course must receive a 1, and at least one student must receive a 10. This forces a spread in the instructor's rankings, so that even an easy-grading instructor (all $\mathrm{A}+$ grades) must rank the better-performing students above the less able students. Next, the instructor is allowed to give a context to the scale. If the instructor has taught that course before, she would be asked to rate the current course in terms of previous ones. We ask the instructor to identify, on some absolute scale of ability, which interval corresponds to the 1 to 10 relative scale for the course. For example, if the instructor felt that her best student was about as competent as a student at the 90th percentile, then she would identify the right endpoint of the scale with the 90th percentile of absolute student ability. 
If she felt that her worst student was the poorest student to attend the college over an entire five-year span, then she would identify the left end of the relative scale with that point on the absolute scale. This two-stage evaluation system forces the students to be differentiated by performance and puts the measures of performance into an absolute (rather than instructor-dependent) context.

# What Characterizes a Good Evaluation Method?

As we attempt to rank the students at ABC College, we assume that the students have underlying quality scores that are reflected in their grades. We try to approximate the ranking induced by the hidden quality values. It may be inappropriate (for political reasons) to refer to our rankings as "estimated student qualities," so we instead calculate an adjusted GPA.

As we calculate adjusted GPA values, we keep in mind several goals:

- We wish to allocate correctly the available scholarships to the top $10\%$ of the student body. To test whether or not we succeed, we must compare the ranking induced by our adjusted GPA values with the actual ranking of the students by intrinsic quality. Our first measure of the accuracy of our adjusted GPAs will just be the number of scholarships that we correctly allocated to deserving students.
- If the top-ranked student somehow fails to receive a scholarship, this is considerably more unjust than if a student who just barely deserves a scholarship misses out. Thus, we compute a second measure of accuracy by summing the severity of the mistakes made in awarding scholarships.
- It is important for all of the student rankings to be accurate, not just the top $10\%$, because they are used for much more than just scholarship determination. For instance, class rank is often cited in graduate school and job applications. Therefore, we consider a third measure of accuracy that gives a total error measure for our entire set of adjusted GPA rankings, rather than for just the top decile.
+ +# Modeling College Composition and Grade Assignment + +According to Assumption 13, we do not need to consider the other students in a course when we determine a student's performance in the course and the grade the student receives; in other words, the composition of students in the course does not significantly affect the students' ability to learn, and none of the instructors grades on a curve. Thus, we model a student's grade as a function of + +- her inherent quality, +- her aptitude for the specific course, +- the difficulty of the course, and + +- the harshness of the instructor grading the course. + +We treat each of these quantities as real-valued random variables and generate their values by computer. + +We let $q_{i}$ denote the inherent quality of student $i$ . We will consider $q_{i}$ to be distributed normally with mean 0 and standard deviation $\sigma_{q}$ . This is reasonable, since we know that the normal distribution gives a good approximation for many characteristics of a large population. + +We let $c_{i,j}$ represent the random course aptitude adjustment for student $i$ when she takes course $j$ . Again, it makes sense to let $c_{i,j}$ be normally distributed about 0, and we denote the standard deviation of this aptitude adjustment by $\sigma_c$ . We let the net aptitude of student $i$ in course $j$ be $q_i + c_{i,j}$ , which is normally distributed with mean 0 and standard deviation $\sqrt{\sigma_q^2 + \sigma_c^2}$ . We choose our unit of measure so that $\sigma_q^2 + \sigma_c^2 = 1$ . Furthermore, we estimate that a student's intrinsic quality influences her success at least five times as much as her aptitude adjustment for the particular course she is taking. Hence, we choose $\sigma_c < 0.2$ . + +Next, we consider how the difficulty of a particular course affects the grades that the instructor gives. 
We assume (see Assumption 10) that a difficult course spreads out the distribution of grades given; this means that poor students tend to do worse in difficult courses, but also that excellent students will do better, since they are being given an opportunity to excel. Conversely, in an easy course, the grades tend to bunch closer together, since the poor students are being given an opportunity to excel and the best students' performances are limited by the ease of the subject matter. Let $d_{j}$ denote the difficulty of course $j$ ; then this interpretation leads us to consider a performance rating $N_{i,j}$ of student $i$ in course $j$ given by + +$$ +N _ {i, j} = (q _ {i} + c _ {i, j}) d _ {j}, +$$ + +where $d_{j}$ is a positive number, equal to 1 for a course of average difficulty, greater than 1 for a difficult course, and less than 1 for an easy course. Note that we are assuming the performance of a student in a course is random only in that the student's inherent ability is modified by a random aptitude adjustment factor. Once this factor is applied, the student's performance is determined, given the difficulty of the course. + +Finally, we must take into account the grading philosophy of the instructor. Notice that a difficult course does not shift the performance distribution to the left, because the performance is measured relative to the instructor's expectation. We assume that the instructor is aware of the difficulty of the course and compensates accordingly in grading. This brings up a delicate distinction. An instructor's harshness does not reflect her expectation level but only her tendencies in grading. That is, we assume that the instructor's harshness does not pertain to her assessment of a student's performance but rather to what grade she thinks that performance deserves. 
Let $h_k$ denote the harshness of instructor $k$; then we should let the student's grade depend on $N_{i,j} - h_k$, since the harshness causes a systematic bias in all of the grades that the instructor gives. We let a harshness of 0 correspond to an average instructor at an institution without grade inflation. At Duke University and presumably at other institutions, 2.7 was the average GPA prior to the grade inflation that began to appear in the 1970s [Gose 1997]. Therefore, letting $G$ denote the grading function that maps real numbers to discrete letter grades, we should center $G(0)$ on a grade of B-. Furthermore, among instructors who grade on a curve, an interval of one letter grade is often equated with one sample standard deviation in the course scores. So, we let one standard deviation for a course of average difficulty correspond to a whole letter grade in our model. Thus, the instructors in our model are grading on a virtual curve; that is, they grade on an absolute standard that simulates grading on a curve in a hypothetical course in which the full distribution of students is enrolled.

This analysis leads to a grading function

$$
G: \mathbb{R} \rightarrow \{0, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13\}
$$

defined by

$$
G(x) = \left\{ \begin{array}{ll}
0, & \text{if } x \leq -\frac{11}{6}; \\
3, & \text{if } -\frac{11}{6} < x \leq -\frac{9}{6}; \\
4, & \text{if } -\frac{9}{6} < x \leq -\frac{7}{6}; \\
\vdots \\
12, & \text{if } \frac{7}{6} < x \leq \frac{9}{6}; \\
13, & \text{if } x > \frac{9}{6},
\end{array} \right.
$$

where the values $0, \ldots, 13$ represent the letter grades F, D, D+, C-, C, C+, B-, B, B+, A-, A, A+. To convert the numeric value to grade points, we divide by 3; thus, an A- average means a GPA of 3.67.
If student $i$ takes course $j$ taught by instructor $k$, she will receive a grade of

$$
G\left(N_{i,j} - h_{k}\right) = G\left(\left(q_{i} + c_{i,j}\right) d_{j} - h_{k}\right).
$$

For convenience, we define $l(g)$ and $r(g)$ to be the left-hand and right-hand endpoints of the interval on which $G = g$. For instance, $r(13) = \infty$ and $l(12) = \frac{7}{6}$.

A simple calculation reveals that in a course where $d = 1$ and $h = 0$, the expected grade is 2.63 (see Table 2 and Figure 1). We intend harshness 0 to represent a reasonable level of strictness in grading. It centers the grades at B-, which is the exact middle of all passing grades, and yields a GPA in line with the "reasonable" historical number of 2.7 at Duke University.

We can visualize the grading method by graphing a normal(0,1) density function, which represents $N$ (in the case where difficulty is 1), with the $x$-axis partitioned into intervals representing grades according to the grading function $G$ (see Figure 2). A difficult course spreads out the distribution, resulting in more Fs (because the poor students cannot keep up) and more As (because the top students have an opportunity to shine). A positive (negative) harshness effectively shifts the grade intervals to the right (left).

Table 1.
Symbol table.

| Symbol | Meaning |
|---|---|
| $c_{i,j}$ | course aptitude adjustment for student $i$ taking course $j$ |
| $d$ | estimate of course difficulty |
| $d_j$ | difficulty of course $j$ |
| $G$ | grading function, from performance rating to letter grade |
| $\overline{g}$ | average grade given by instructor, from historical data |
| $g_{\mathrm{adj}}$ | adjusted grade that an instructor of harshness zero would give |
| $h_k$ | harshness of instructor $k$ |
| $I$ | interval in which student's performance value is estimated to lie |
| $l(g), r(g)$ | endpoints of performance rating interval corresponding to letter grade $g$ |
| $N_{i,j}$ | performance rating of student $i$ in course $j$ |
| $N_{\mathrm{est}}$ | estimate of student performance value |
| $N_{i,\mathrm{est}}$ | estimate of student $i$'s performance value |
| $\Phi$ | standard normal cumulative distribution function |
| $q_i$ | inherent quality of student $i$ |
| $q_{i,\mathrm{est}}$ | estimate of inherent quality of student $i$ |
| $\sigma_q$ | SD of inherent quality of student $i$ |
| $\sigma_c$ | SD of course aptitude adjustment of student $i$ taking course $j$ |
| $\sigma(d,h)$ | SD of grades given by a professor of harshness $h$ in a course of difficulty $d$ |
| $t_0$ | left endpoint for performance rating interval corresponding to grade of A+ |
| $t_1$ | right endpoint for performance rating interval corresponding to grade of F |
Table 2.
Expected grade on a 4-point scale as a function of course difficulty $d$ and instructor harshness $h$.

| $h$ | $d=0.5$ | 0.6 | 0.7 | 0.8 | 0.9 | 1.0 | 1.1 | 1.2 | 1.3 | 1.4 | 1.5 |
|---|---|---|---|---|---|---|---|---|---|---|---|
| -3.0 | 4.33 | 4.33 | 4.32 | 4.31 | 4.30 | 4.29 | 4.27 | 4.25 | 4.23 | 4.20 | 4.18 |
| -2.8 | 4.33 | 4.32 | 4.31 | 4.30 | 4.28 | 4.27 | 4.24 | 4.22 | 4.19 | 4.16 | 4.13 |
| -2.6 | 4.32 | 4.31 | 4.30 | 4.28 | 4.26 | 4.24 | 4.21 | 4.18 | 4.15 | 4.12 | 4.08 |
| -2.4 | 4.31 | 4.30 | 4.28 | 4.25 | 4.22 | 4.19 | 4.16 | 4.13 | 4.10 | 4.06 | 4.03 |
| -2.2 | 4.29 | 4.27 | 4.24 | 4.21 | 4.18 | 4.14 | 4.11 | 4.07 | 4.03 | 4.00 | 3.96 |
| -2.0 | 4.26 | 4.22 | 4.19 | 4.15 | 4.11 | 4.08 | 4.04 | 4.00 | 3.96 | 3.92 | 3.88 |
| -1.8 | 4.19 | 4.15 | 4.11 | 4.07 | 4.03 | 3.99 | 3.95 | 3.91 | 3.87 | 3.83 | 3.79 |
| -1.6 | 4.10 | 4.06 | 4.02 | 3.98 | 3.94 | 3.90 | 3.86 | 3.82 | 3.78 | 3.73 | 3.69 |
| -1.4 | 3.97 | 3.94 | 3.90 | 3.86 | 3.82 | 3.78 | 3.74 | 3.70 | 3.66 | 3.62 | 3.58 |
| -1.2 | 3.82 | 3.79 | 3.76 | 3.72 | 3.69 | 3.65 | 3.62 | 3.58 | 3.54 | 3.50 | 3.46 |
| -1.0 | 3.64 | 3.62 | 3.60 | 3.57 | 3.54 | 3.51 | 3.48 | 3.44 | 3.41 | 3.37 | 3.33 |
| -0.8 | 3.45 | 3.44 | 3.43 | 3.41 | 3.38 | 3.35 | 3.32 | 3.29 | 3.26 | 3.23 | 3.19 |
| -0.6 | 3.26 | 3.25 | 3.24 | 3.23 | 3.21 | 3.19 | 3.16 | 3.13 | 3.10 | 3.07 | 3.04 |
| -0.4 | 3.06 | 3.06 | 3.05 | 3.04 | 3.03 | 3.01 | 2.99 | 2.96 | 2.94 | 2.91 | 2.88 |
| -0.2 | 2.86 | 2.86 | 2.86 | 2.85 | 2.84 | 2.82 | 2.80 | 2.78 | 2.76 | 2.74 | 2.72 |
| 0.0 | 2.66 | 2.66 | 2.66 | 2.65 | 2.64 | 2.63 | 2.61 | 2.60 | 2.58 | 2.57 | 2.55 |
| 0.2 | 2.46 | 2.46 | 2.46 | 2.45 | 2.44 | 2.43 | 2.42 | 2.41 | 2.40 | 2.39 | 2.38 |
| 0.4 | 2.26 | 2.26 | 2.25 | 2.24 | 2.23 | 2.23 | 2.22 | 2.21 | 2.21 | 2.20 | 2.20 |
| 0.6 | 2.06 | 2.05 | 2.04 | 2.03 | 2.03 | 2.02 | 2.02 | 2.02 | 2.02 | 2.02 | 2.02 |
| 0.8 | 1.85 | 1.84 | 1.83 | 1.82 | 1.81 | 1.81 | 1.82 | 1.82 | 1.83 | 1.84 | 1.85 |
| 1.0 | 1.63 | 1.62 | 1.61 | 1.60 | 1.60 | 1.61 | 1.62 | 1.63 | 1.64 | 1.66 | 1.67 |
+ +![](images/3bf506f13933392806eaf9ebdab74ec6438eb01248b2d7550606472c8ebd6001.jpg) +Figure 1. Expected grade on a 4-point scale as a function of instructor harshness $h$ , given that course difficulty $d = 1$ . + +![](images/f454ec30d9cdc4b3a59e82feb74921f5308ea1652f3e2061c3a14bb00839890d.jpg) +Figure 2. The probability density of the performance variable $N$ . The vertical bars represent the grade ranges for an instructor of zero harshness. For $h > 0$ , the ranges shift to the right by $h$ , making it harder to earn a high grade. + +Since we have no data on the students, courses, instructors, or grades at ABC College, we generated a random set of students, courses, and instructors. Since we don't know the exact composition of the college, we generated various scenarios. + +- We assigned each instructor to teach five courses per year, which is a typical teaching load at many colleges. +- We computed a quality variable $q$ for each student by sampling randomly from a normal $\left( {0,{\sigma }_{q}}\right)$ distribution. +- We assigned each student to eight courses per year (also a standard load for many colleges), for either one or two years, using a uniform probability of selecting each course. +- We generated course aptitude adjustments $c$ for each course that a student enrolled in by sampling from a normal $\left( {0,{\sigma }_{c}}\right)$ distribution. +- We assigned difficulties to each course by sampling from a symmetric beta distribution centered at 1. A typical choice would be $\mathrm{beta}(3,3)$ on [0.7, 1.3] (see Figure 3). +- We assigned harshnesses to instructors by sampling from an asymmetric beta distribution skewed and translated towards leniency (to represent the tendency to inflate grades at the college). A typical choice would be beta(2, 3) on $[-2, 0]$ (see Figure 4). One can use Table 2 to guide the choice of distribution according to the average GPA that we desire. 
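The grading function $G$ and the expected-grade entries of Table 2 can be checked numerically. The sketch below is our own reconstruction (the names `expected_grade` and `interval` are ours, not from the paper): since the net aptitude $q_i + c_{i,j}$ is standard normal, the probability of grade $g$ in a course of difficulty $d$ under harshness $h$ is $\Phi((r(g)+h)/d) - \Phi((l(g)+h)/d)$.

```python
import math

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Grade values {0, 3, ..., 13} = F, D, D+, ..., A+. For g >= 3, grade g covers
# the performance interval ((2g - 17)/6, (2g - 15)/6]; F covers (-inf, -11/6].
GRADES = [0] + list(range(3, 14))

def interval(g):
    """Return (l(g), r(g)) for the grading function G."""
    if g == 0:
        return -math.inf, -11.0 / 6.0
    lo = (2 * g - 17) / 6.0
    hi = math.inf if g == 13 else (2 * g - 15) / 6.0
    return lo, hi

def expected_grade(d, h):
    """E[G(z*d - h)] / 3 for net aptitude z ~ normal(0, 1)."""
    total = 0.0
    for g in GRADES:
        lo, hi = interval(g)
        # P(lo < z*d - h <= hi) = Phi((hi + h)/d) - Phi((lo + h)/d)
        p_hi = 1.0 if math.isinf(hi) else phi((hi + h) / d)
        p_lo = 0.0 if math.isinf(lo) else phi((lo + h) / d)
        total += g * (p_hi - p_lo)
    return total / 3.0
```

For $d = 1$ and $h = 0$ this gives approximately 2.63, in agreement with Table 2.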
+ +We did not restrict ourselves to considering the case where the average GPA at the college is 3.67. In early 1997, Duke University considered revising + +![](images/6f3b22c6578e28e8ee0e0e6666fd27a93e2d8ad703ee258f82fb104b7ce598ca.jpg) +Figure 3. Typical course difficulty distribution: beta(3,3) on [0.7,1.3]. + +![](images/ff08d106371708e86320d6dffa99bdf89f65a617a96652086dc303c65bc7e1a5.jpg) +Figure 4. Typical instructor harshness distribution: beta(2,3) on $[-2,0]$ . + +its calculation of GPAs to take into account the difficulty of the course in which a grade was received, the quality of the other students in the course, and the historical grading tendencies of the instructor. The reason for this move was alarm that the average GPA at the University had risen from 2.7 in 1969 to 3.3 by fall 1996 [Gose 1997]. If an average GPA of 3.3 is considered evidence of rampant grade inflation, then 3.3 is a more likely estimate of the average GPA at ABC College than 3.67 is. + +Some word on our rationale for choosing distributions is in order here. The normal distribution is a standard choice for representing abilities in a population. Our model of how course difficulty affects student performance and grades loses validity for $d$ outside the range of approximately [0.5, 2], and of course a negative difficulty makes no sense at all. Thus, the normal distribution is not an appropriate choice. We chose a beta distribution because it takes values on a finite interval. Similarly, any harshness value outside the range [-3, 2] is patently ridiculous (see Figure 1). In fact, a harshness value of -2 is fairly ridiculous; but to obtain a school-wide average GPA of 3.67, we have to allow that some instructors are that lenient. In any event, it behooves us to choose a distribution over a finite interval. We also desire an asymmetric distribution, to represent the tendency at the college toward lenient grading. Thus, a beta(a,b) distribution with $a < b$ is appealing for our purposes. 
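Both distributions are standard beta variates rescaled onto the chosen intervals. A short sketch (the helper names are ours), assuming Python's `random.betavariate`:

```python
import random

def sample_difficulty(rng):
    """beta(3, 3) rescaled onto [0.7, 1.3]: symmetric about 1."""
    return 0.7 + 0.6 * rng.betavariate(3, 3)

def sample_harshness(rng):
    """beta(2, 3) rescaled onto [-2, 0]: skewed toward leniency,
    with mean -2 + 2 * (2/5) = -1.2."""
    return -2.0 + 2.0 * rng.betavariate(2, 3)
```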

Our model indicates that the ABC College administrators' concerns about being unable to distinguish among the top students are justified. Indeed, when we generate an incoming freshman class of 500 students and make the instructors lenient enough to yield a 3.64 average GPA, 60 of these students still have a straight $A+$ average after two years! In this environment it is clearly necessary to search for a better evaluation method than simple GPA.

# The Modified GPA Algorithm

Our algorithm for establishing a class rank involves a number of distinct stages. We first attempt to gain additional information from the instructor's historical grade awards and use this information to refine our knowledge of the courses and instructors that the students are taking currently. Using an estimate of the instructor's harshness based on the grades that he has given historically, we correct the grades awarded in a particular course by estimating the mean value by which any leniency or harshness changed a student's letter grade. Incorporating this correction factor allows us to provide an adjusted GPA measure that represents more fully the actual performances and, hence, the quality of the students.

We assume that the instructors have been teaching at the college for at least two years and that we have access to the grades that they assigned during those years. We simulate these data just as we generated the data for the students, as described in the section Modeling College Composition and Grade Assignment. New students are generated randomly as in the original student data. Given the actual harshness of the instructors and the difficulties of the courses taught, numerical performances and grades for all of each instructor's courses are generated as above.

From these historical data, we compute the average grade $\overline{g}$ granted by each instructor.
One can calculate the expected grade granted in a course as a function of difficulty $d$ and harshness $h$ (see Table 2 and Figure 1). For a given value of $d$, this function decreases monotonically with $h$, so we can calculate the inverse function. Assuming that $d = 1$, we estimate the instructor's harshness $h_{\mathrm{est}}$ by evaluating this inverse function at $\overline{g}$.

Notice that we never even try to estimate the difficulty or take the actual grade distribution into account. Despite this crude method of estimating harshness, we achieve surprisingly good results. Using courses of 40 students each, the harshness that we estimate is usually within about 0.05 of the actual harshness, though it is not too uncommon to err by as much as 0.12. The error tends to decrease the closer the actual harshness is to zero.

We can now adjust the grades of the students in each of an instructor's courses based on our harshness estimate $h_{\mathrm{est}}$ for that instructor. For simplicity we assume $d = 1$ for the course. The fact that a student receives a grade $g$ in a course with an instructor of harshness $h$ means that the student's performance value $N$ lies in the interval

$$
\left( l(g) + h, \; r(g) + h \right].
$$

Thus we estimate that $N$ lies in the interval

$$
I = \left( l(g) + h_{\mathrm{est}}, \; r(g) + h_{\mathrm{est}} \right].
$$

We estimate $N$ to be the expected value of the distribution of $N$ given that $N$ lies in $I$. For grades other than $\mathrm{A}+$ and $\mathrm{F}$, this interval has width $\frac{1}{3}$. Assuming $d = 1$, the a priori distribution of $N$ is standard normal. Given that it lies in $I$, the probability density function is just the indicator function for $I$ times the standard normal density times a constant factor.
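The expected-grade function of Table 2 is not reproduced in this excerpt, so the sketch below supplies a stand-in under explicit assumptions: integer grade codes 2 (F) through 12 (A+), with cutoffs $l(g) = (g - 8.5)/3$ and $r(g) = (g - 7.5)/3$ for interior grades. These cutoffs are our own choice, made to be consistent with the width-$\frac{1}{3}$ intervals and the map $g_{\mathrm{adj}} = 3N_{\mathrm{est}} + 8$ used later. Since the expected grade decreases monotonically in $h$, bisection inverts it at the historical mean grade.

```python
import math

def Phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Assumed integer grade codes: 2 = F up through 12 = A+.
GRADES = list(range(2, 13))

def cutoffs(g):
    """Assumed N-cutoffs (l(g), r(g)] for grade code g; the F and A+
    intervals are open-ended, the rest have width 1/3."""
    lo = -math.inf if g == GRADES[0] else (g - 8.5) / 3.0
    hi = math.inf if g == GRADES[-1] else (g - 7.5) / 3.0
    return lo, hi

def expected_grade(h):
    """Expected grade from an instructor of harshness h when d = 1,
    so performance N is standard normal; decreasing in h."""
    total = 0.0
    for g in GRADES:
        lo, hi = cutoffs(g)
        total += g * (Phi(hi + h) - Phi(lo + h))
    return total

def estimate_harshness(mean_grade, lo=-3.0, hi=2.0, iters=60):
    """Invert expected_grade at the instructor's historical mean
    grade by bisection (valid because expected_grade is monotone)."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if expected_grade(mid) > mean_grade:
            lo = mid  # grades still too high: instructor is harsher
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

The search interval $[-3, 2]$ matches the range of plausible harshness values stated earlier in the paper.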
Since the density function for the standard normal distribution is almost linear over any interval of width $\frac{1}{3}$ , we estimate $N$ as + +$$ +N _ {e s t} = h _ {\mathrm {e s t}} + \frac {l (g) + r (g)}{2}. +$$ + +If the grade is $\mathrm{A} +$ or $\mathrm{F}$ , we can calculate the expected value analytically, with the following results: + +$$ +E [ N | N > t _ {0} ] = \frac {e ^ {- t _ {0} ^ {2} / 2}}{\sqrt {2 \pi} (1 - \Phi (t _ {0}))} \tag {1} +$$ + +$$ +E [ N | N \leq t _ {1} ] = - \frac {e ^ {- t _ {1} ^ {2} / 2}}{\sqrt {2 \pi} \Phi (t _ {1})} \tag {2} +$$ + +where $\Phi$ is the standard normal cumulative distribution function and the relevant $t$ values are $t_0 = l(A+)$ and $t_1 = r(F)$ . So when the student's grade is A+ or F, we set $N_{\mathrm{est}}$ to (1) or to (2), respectively. + +Now that we have a value for $N_{\mathrm{est}}$ , we assign the student an adjusted grade for the course. The adjusted grade $g_{\mathrm{adj}}$ is the grade that an instructor of zero harshness would have given, except that we assign a real number grade instead of an integer. Specifically, + +$$ +g _ {\mathrm {a d j}} = 3 N _ {\mathrm {e s t}} + 8. +$$ + +To avoid a discontinuity at $N_{\mathrm{est}} = r(F)$ , we treated an F as a grade of 2 for this purpose. + +Since the grades given by a lenient grader exhibit a smaller spread and hence do not differentiate the students as well as those given by a strict grader, we grant them less import when calculating a student's adjusted GPA. Specifically, the student's GPA is a sum of the grades received weighted by $\sigma(1, h_{\mathrm{est}})$ , where $h_{\mathrm{est}}$ is the estimated harshness of the instructor who assigned the grade and $\sigma(d, h)$ is the standard deviation of grades given by an instructor of harshness $h$ in a course of difficulty $d$ (see Figure 5). 
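The per-grade adjustment can be sketched as follows. The cutoff arguments are placeholders to be supplied by the caller, and we shift the tail thresholds by $h_{\mathrm{est}}$ to parallel the interior case (the paper writes $t_0 = l(\mathrm{A}+)$ and $t_1 = r(\mathrm{F})$ without the shift); both points are our reading, not a confirmed detail of the authors' code.

```python
import math

SQRT2PI = math.sqrt(2.0 * math.pi)

def Phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def phi(x):
    """Standard normal density."""
    return math.exp(-x * x / 2.0) / SQRT2PI

def n_est(l_g, r_g, h_est):
    """Estimated performance N for a grade with cutoffs (l(g), r(g)].

    Interior grades: midpoint of the shifted interval (the normal
    density is nearly linear over a width-1/3 interval).
    A+ (r = inf): truncated-normal mean E[N | N > t0], equation (1).
    F  (l = -inf): truncated-normal mean E[N | N <= t1], equation (2)."""
    if math.isinf(r_g):
        t0 = l_g + h_est
        return phi(t0) / (1.0 - Phi(t0))
    if math.isinf(l_g):
        t1 = r_g + h_est
        return -phi(t1) / Phi(t1)
    return h_est + (l_g + r_g) / 2.0

def g_adj(n):
    """Real-valued grade a zero-harshness instructor would assign."""
    return 3.0 * n + 8.0
```

Note that $\varphi(t)/(1 - \Phi(t))$ and $-\varphi(t)/\Phi(t)$ are exactly the expressions (1) and (2) above, since $\varphi(t) = e^{-t^2/2}/\sqrt{2\pi}$.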
+ +If we wished to refine this method, we might use the spread of grades in a course to estimate its difficulty both when examining the instructors' grading histories and when adjusting grades at the end. However, even without this enhancement, the basic method that we have just outlined performs well, as we demonstrate in Results of the Model. One has to be very clever to estimate the difficulty of a course in a way that is numerically stable. The most obvious method is to note that for student $i$ , we have $N_{i} = q_{i}d$ , then use $N_{i,\mathrm{est}}$ and some estimate $q_{i,\mathrm{est}}$ of $q_{i}$ that we pull from some other source to estimate + +$$ +d \approx \frac {N _ {i , e s t}}{q _ {i , e s t}}. +$$ + +![](images/68653b1380b2b42f259fe2e3c77e9fcdc43c83fb4fed2394c3e412688b5b08af.jpg) +Figure 5. Standard deviation of grade distribution as a function of $h$ , assuming difficulty $d = 1$ . + +The difficulty is that $q_{i}$ is likely to be near zero, so the error $|q_{i} - q_{i,\mathrm{est}}|$ is magnified when we divide. + +# Results of the Model + +We generated a number of scenarios to elucidate the features both of our simulated data and of the modified GPA algorithm. We ran these simulations on a test student population of 500 students, each of whom took 16 courses, with average course size 40. + +We use a number of these scenarios to demonstrate the results of our simulations. Plots of actual quality ranks vs. rank by GPAs or adjusted GPAs demonstrate the effectiveness of each ranking method over all tiers of students. For a perfect ranking of students, this plot would lie along the line $y = x$ . + +We define three error metrics to aid in the comparison between the ranking generated by our revised GPA method and the raw GPA ranking. + +- We define a misassigned scholarship candidate to be a student who either received a scholarship but was not in the top $10\%$ in quality, or who was in the top $10\%$ of quality but did not receive a scholarship. 
A simple count of the number of misassigned scholarship candidates measures the method's effectiveness at identifying the highest caliber of student. We refer to this quantity as the MS (Missed Scholarship) metric for a given estimated ranking. +- For each student who is ranked incorrectly, the rank errs by some number of places. Summing these rank errors over all students gives us a measure of how our ranking compares to the actual quality ranking across the entire spectrum of students. We refer to this as the SE (Scaled Error) metric. + +- Finally, to determine the injustice with which scholarships are assigned, we sum over all misassigned scholarship candidates the distance between their quality ranking and the scholarship cutoff rank. We refer to this measure of error as the SI (Scholarship Injustice) metric. + +The first scenario has difficulty scaled to be between 0.7 and 1.3, while the harshness distribution is relatively lenient, with values ranging between $-2.1$ and $-0.1$ . The variation due to course material is set to be 0. This yields, as one might expect, a student population with rampant grade inflation. Overall GPA is 3.64, with 60 students receiving perfect A+ averages. Figure 6 plots GPA rank against actual quality rank, where we observe significant discrepancies between the estimated and actual rankings. At this level of grade inflation, the top tiers of students are almost entirely indistinguishable by GPA. Attempting to correct for harshness by using the corrected GPA does not significantly improve the results at the high ranks. It does, however, improve the SE number from 9,124 to 5,352, representing a superior evaluation of the middle and lower tiers of students (see Figure 7). + +We now alter the parameters in our model to fit what we feel is a far more realistic situation. Harshness is set to vary between $-1.569$ and .431, yielding a scenario that has an average GPA of 3.34. Now only 38 students have perfect $\mathrm{A + }$ averages. 
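The three error metrics used in these comparisons can be made concrete as a sketch (the function and variable names are ours): rankings are lists of student ids ordered best first, and the scholarship cutoff is the top 10%.

```python
def metrics(true_rank, est_rank, scholarship_frac=0.10):
    """Compute (MS, SE, SI) for an estimated ranking.

    true_rank, est_rank: student ids ordered best first.
    MS: students on exactly one side of the scholarship cutoff.
    SE: total absolute rank error summed over all students.
    SI: for each misassigned candidate, distance from true rank
        to the cutoff rank, summed."""
    n = len(true_rank)
    cutoff = int(n * scholarship_frac)
    true_pos = {s: i for i, s in enumerate(true_rank)}
    est_pos = {s: i for i, s in enumerate(est_rank)}

    top_true = set(true_rank[:cutoff])
    top_est = set(est_rank[:cutoff])
    misassigned = top_true ^ top_est  # symmetric difference

    ms = len(misassigned)
    se = sum(abs(true_pos[s] - est_pos[s]) for s in true_rank)
    si = sum(abs(true_pos[s] - cutoff) for s in misassigned)
    return ms, se, si
```

A perfect ranking gives (0, 0, 0); any disagreement near the cutoff inflates MS and SI, while SE accumulates error over the whole spectrum of students.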
The effect of ranking based on the adjusted GPA is readily apparent in Figures 8-9. The discrepancies are clearly less for middle-ranked and low-ranked students. The SE measure improves from 6,916 using simple GPAs to 4,842 using adjusted GPAs.

Note the loss of ranking accuracy at high quality levels. The two methods of ranking perform nearly identically, with raw GPA giving an MS of 6 while the adjusted GPA rank gives an MS of 7.

Table 3 summarizes results for a sampling of the scenarios, all of which use the usual range of difficulty from 0.7 to 1.3.

As our simulations show, the modified GPA ranking fares well against the raw GPA method, with SE numbers significantly lower in each trial. This means that the students in the lower deciles are ranked much more accurately in each case. This suggests a certain robustness and indicates that judgments based on the modified GPA rank will be fairer.

As in all trials performed, both the raw and adjusted GPA ranking methods did poorly at the high end of the ability curve, according to all three measures.

# Strengths and Weaknesses

One weakness of our model is that it does not allow for a completely analytic solution to the scholarship selection problem. Computer simulation is the only means we have to test and evaluate our methods against the simple-minded raw GPA ranking method.

![](images/1a4fa4185b643a0f0d3fc0ba340b69eadcf6905e96efbb639c00740d4164389e.jpg)
Figure 6. Wild grade inflation resulting in an average GPA of 3.64. The raw GPA estimate makes significant mistakes in the entire range but is especially inaccurate in the top two deciles.

![](images/88132619f4cf525d498d240a3c926e3a2dbbd354bcfd176ed192f50f56c57035.jpg)
Figure 7. The same scenario as Figure 6 with rank determined by GPAs modified for instructor harshness.

![](images/ea363b2b1e943aff6735da120de18b33b6c7c6b7eae0766b9609c6f27fcdbfc3.jpg)
Figure 8. A more reasonable scenario.
The raw GPA rank maintains some level of inaccuracy throughout the spectrum of student ability. + +![](images/399a00dcd71f823bc989f90d4829497b0220c3d581008d64b6bce5f189d06b6f.jpg) +Figure 9. Same scenario as Figure 8 but with ranks determined by the modified GPAs. + +Table 3. The relevant information for several simulations. Note how the modified GPA ranking produces smaller $SE$ numbers in all cases, representing greater overall accuracy. + +
| Trial | Harsh low | Harsh high | Raw GPA | Raw MS | Adj. MS | Raw SE | Adj. SE | Raw SI | Adj. SI |
|---|---|---|---|---|---|---|---|---|---|
| 1 | -2.1 | -0.1 | 3.64 | 9 | 16 | 9124 | 5352 | 139 | 118 |
| 2 | -1.55 | 0.35 | 3.22 | 8 | 10 | 7178 | 3122 | 247 | 596 |
| 3 | -1.569 | 0.331 | 3.34 | 6 | 7 | 6916 | 4842 | 215 | 197 |
| 4 | -1.9 | 0.1 | 3.48 | 8 | 10 | 8896 | 5424 | 101 | 226 |
| 5 | -1.49 | 0.51 | 3.19 | 8 | 5 | 7454 | 4410 | 153 | 81 |

Potentially the greatest weakness in our model and techniques is the lack of a good ranking of the top two deciles. Whether or not a robust method exists is, we believe, debatable. Nothing we have seen indicates that the information required to form a confident ability ranking is even contained in the GPA information we have. It is likely that complete rank-ordering cannot be achieved given the information our model provides. We do not know this to be true, but it is certainly consistent with the results we have witnessed.

Another weakness is that our model cannot take into account the effect of curved grading systems and the possibility of student grades being altered by the performances of the fellow students in the course. Similarly, other interactions between the entities in our model, such as the formation of study groups, can affect the performance (as distinguished from grade) of a student in a course in a way that is dependent upon the other members of the course. Our model also includes parameters, namely, the course difficulties, that are difficult to estimate accurately and thus remain completely unknown throughout our attempts to rank based on ability.

In spite of these shortcomings, our model has a number of compelling features. By changing just a few parameters, one can generate an entirely new scenario that has a plausible distribution of grades and GPAs. Furthermore, it takes into account the three functional parts of any educational experience: the students, the instructors, and the courses. Arguably, no model could be complete without accounting for variations in each of the three parts.

Despite all of our problems in classifying the scholarship winners, the adjusted GPA method we use is almost uncannily good at identifying the lower deciles, which in a real context is important to the students and the school.

From a practical standpoint, our model and methods are fairly simple to implement.
The number and size of the calculations performed are linear in the size of the student body, so the method could be executed with modest computer resources at even a large institution.

To sum up, it would behoove ABC College to use our ranking system, since it more accurately identifies the bottom eight deciles of student ability. However, if the administration seeks to accurately rank the top tier of students, it must realize that a bloated aggregate GPA from excessively lenient grading can quickly lead to a situation where no amount of calculation and statistics can recover the desired information about the intrinsic quality of the students.

# References

Gose, Ben. 1997. Duke rejects a controversial plan to revise the calculation of grade-point averages. Chronicle of Higher Education 43 (21 March 1997): A53.

# Grade Inflation: A Systematic Approach to Fair Achievement Indexing

Amanda M. Richardson

Jeff P. Fay

Matthew Galati

Stetson University

421 N. Woodland Blvd.

Deland, FL 32720

vgalati@bellatlantic.net, jfay@steton.edu

Advisor: Erich Friedman

# Background

Constantly rising grade-point averages at universities across the nation have made it increasingly difficult to distinguish between "excellent" and "average" students. For example, The Chronicle of Higher Education found that the mean grade-point average (GPA) at Duke University was 3.3 in 1997, up from the 1969 mean of 2.7. It also found that Duke is not alone in this trend.

Average grades have consistently increased while the system of measurement has remained unchanged. Receiving an A in a course does not necessarily denote exceptional performance, since the percentage of students receiving As has increased dramatically over the last few decades. In 1995, the Yale Daily News reported that As and Bs constituted $80\%$ of grades at Yale.
According to the New York Times (4 June 1994), nearly $90\%$ of grades at Stanford were As or Bs, and an estimated $43\%$ of grades at Harvard and $40\%$ at Princeton were As. This situation has led many universities to seek new methods for ranking student performance. + +Some say that expectations and grading difficulty have dropped from the faculty point of view, while others argue that the quality of student has been on the rise. Whatever the cause, a major problem arises when scholarship foundations or graduate schools try to distinguish exactly who deserves to be + +The UMAP Journal 19 (3) (1998) 315-322. ©Copyright 1998 by COMAP, Inc. All rights reserved. Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice. Abstracting with credit is permitted, but copyrights for components of this work owned by others than COMAP must be honored. To copy otherwise, to republish, to post on servers, or to redistribute to lists requires prior permission from COMAP. + +in the top $10\%$ , etc. For this reason, an alteration of the current quantitative ranking system is necessary. + +One approach is a system of quality points based on comparative performance within each course. In this system, a student's quality points for a given course can be calculated based on performance relative to students' overall performance in the course. To obtain this objective, overall course performance may be measured by the mean or by the median of all the grades in the course. Then the student's individual performance can be measured in terms of standard deviations from the mean/median. + +Many schools are looking for a feasible system to reach this goal. 
For example, in 1994 the faculty at Dartmouth voted to include on a student's transcript the course size and median course grade next to each grade, plus a summary telling in how many courses the student surpassed the median, met the median, or performed lower than the median (Boston Globe, 4 June 1994). This year, Duke University's faculty considered (but did not implement) using an "achievement index" (AI) to rank students. The factors considered in this index would include the course's difficulty level, the grades received by all the other students taking the course, and the grades those students received in other courses. + +Most arguments against this type of indexed system appear qualitative rather than quantitative. For example, in an article in *The Chronicle* at Duke, Prof. Robert Erickson explains that "the reason [the AI system] won't work is that the faculty will not agree to have their grading system tinkered with." Many faculty fear that such a system may put their students at a disadvantage until such a system became more widely used, and hence students may seek to attend other schools. No one wants to be the guinea pig. + +A quantitative question that arises when determining the best index is, when determining "average" performance in a course, should the index use the course's mean grade or its median grade? Some argue for the median, since it is more robust, while others argue for the mean, since it is a better estimator when the distribution is close to a normal distribution. + +Our model attempts to find a solution to the problem of ranking. + +# Assumptions + +- The sample data that we made up, with grades for 68 students, effectively represents the entire population of 2,000 students at ABC College. +- Past performance has no effect on performance in any given course. +- Results from one semester can be extrapolated to a ranking system cumulative over semesters. +- Course size and difficulty level do not change the model's effectiveness. 
+ +- The system is implemented at the administrative level, after standard grades are reported by professors. +- In comparing a plus-minus grading system (including A+, A, A-, B+, etc.) to one with only straight letter grades (A, B, C, etc.), the latter's A would encompass the former's A+, A, and A-. +- The model may compare students at large universities with those at small colleges without loss of generality. + +# Motivation for the Model + +The goal is to find a way to order, or compare, students with only slightly varying grades; so an index rating students' performance relative to other students is germane. To arrive at this end, we must first determine how students in a particular course performed overall. Assuming that we have determined which estimator (mean or median) to use, we measure a student's standing by how many standard deviations from the given estimator the student's grade lies. Thus, if the student's grade is 2 standard deviations above the estimator, the student receives 2 quality points; if the student's grade equals the mean/median, the student receives 0 points; and if the student's grade falls below the estimator, say by 1.2 standard deviations, negative points are issued $(-1.2)$ . + +# The Model + +We now determine the best index to use in implementing such a system. The question of using mean or median as our standard of comparison is of utmost importance. Perhaps instead of choosing one or the other for all courses, we should determine which is more effective for a particular course. The mean is the preferable estimator for data resembling a normal distribution (Figure 1) but is more sensitive to outlying data than the median when the distribution is strongly skewed. Thus, the skewness of the distribution of grades in a given course can help to determine which estimator to use for grade comparison in that particular course. 

Through trial and error, we decided that if the distribution of grades in a course has skewness of magnitude greater than 0.2, we would use the median as the estimator. If the magnitude of the skewness is smaller than that, the distribution is sufficiently close to normal that we can use the mean as the estimator.

In our indexing system, after determining the most appropriate estimator for comparison, we define a student's relative performance in terms of standard deviations. The quality points awarded for a given course are then weighted by the course's credit hours. Finally, a student's overall index is computed by summing total points and dividing by total credit hours.

![](images/9ce6b29d722218794702c81eca2ff439d6923df70e23cddc1049afaf3a3b067e.jpg)
Figure 1. A normal distribution.

Let $G_{i}$ be the student's grade for course $i$, $G_{i} \in [0, \text{MaxGrade}]$, where course $i$ has $C_{i}$ credit hours. Then

$$
\mathrm{GPA} = \frac{\sum G_{i} C_{i}}{\sum C_{i}},
$$

where the sums are taken from $i = 1$ to the number $n$ of courses taken by the student.

Our procedure is as follows. Let $A_{i} = a_{1}, a_{2}, \ldots, a_{n_{i}}$ be all of the $n_{i}$ grades for course $i$. Let $\mu$ be the mean of $A_{i}$, $\chi$ its median, and $\sigma$ its standard deviation. The skewness $S$ is defined as the third moment about the mean divided by $\sigma^{3}$:

$$
S = \frac{1}{n_{i} \sigma^{3}} \sum_{j=1}^{n_{i}} (a_{j} - \mu)^{3}.
$$

If $|S| < 0.2$, we use $\mu$ as the estimator; if $|S| \geq 0.2$, we use $\chi$ as the estimator. We define

$$
\mathrm{Index} = \left\{ \begin{array}{ll} \dfrac{\sum \dfrac{G_{i} - \mu}{\sigma} \cdot C_{i}}{\sum C_{i}}, & \text{if } |S| < 0.2; \\[2ex] \dfrac{\sum \dfrac{G_{i} - \chi}{\sigma} \cdot C_{i}}{\sum C_{i}}, & \text{if } |S| \geq 0.2. \end{array} \right.
$$

# Analysis

First, we offer some justification of our assumptions about course size and difficulty level.
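The index defined above can be sketched directly in code. We use population moments for $\sigma$ and the skewness, take the 0.2 cutoff from the text, and assume $\sigma > 0$ (the single-student course noted later as a weakness is excluded); the function names are ours.

```python
import statistics

SKEW_CUTOFF = 0.2

def skewness(grades):
    """Third moment about the mean divided by sigma^3 (population moments)."""
    n = len(grades)
    mu = sum(grades) / n
    sigma = (sum((a - mu) ** 2 for a in grades) / n) ** 0.5
    return sum((a - mu) ** 3 for a in grades) / (n * sigma ** 3)

def estimator(grades):
    """Mean if the distribution is near-normal, median if skewed."""
    if abs(skewness(grades)) < SKEW_CUTOFF:
        return sum(grades) / len(grades)
    return statistics.median(grades)

def index(student_courses):
    """student_courses: list of (own_grade, credit_hours, all_grades).

    Returns the credit-weighted average of the student's standardized
    deviations from each course's chosen estimator."""
    num = den = 0.0
    for g, c, grades in student_courses:
        n = len(grades)
        mu = sum(grades) / n
        sigma = (sum((a - mu) ** 2 for a in grades) / n) ** 0.5
        num += (g - estimator(grades)) / sigma * c
        den += c
    return num / den
```

A student whose grade equals the estimator in every course receives an index of 0, matching the quality-point scheme described in the Motivation section.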
+ +Since the model seeks a comparative ranking system, the number of students in a course is not the issue; rather, each student's performance relative to each other student in the course is the important factor. Hence, course size is not directly involved in the model. + +The difficulty level of a course, although not explicitly dealt with in determining a student's index rating, is accounted for indirectly in our model, in the overall course performance. For example, one would suspect that a relatively easy course would tend to have a large percentage of high grades, while grades in a more challenging course might be more evenly distributed or even tend toward lower grades. While the standard grading system would rate students purely on the grade they received, and thus not take difficulty level into account, the comparative nature of our indexing system causes inherent dependence on this factor. + +# Sensitivity + +In the analysis of our data, we have seen a number of cases where the system used had a great effect on the ranking of particular students. We offer three students as examples of what can occur in our minicollege of 68 students. [EDITOR'S NOTE: We do not reproduce here the authors' complete table of grades, indexes, and ranks under the various systems.] + +Student 603 is ranked 15th by GPA with plus/minus grades (Figure 2), with grades of B, B, A, A. This student's rank would be the same for GPA with straight letter grades. However, using our index ranking system, Student 603's rank drops dramatically. With plus/minus grades, the student's rank falls three deciles from the 30th percentile to the 60th percentile; that is, the student's rank drops from 15th to 43rd. With straight letter grades, the drop is even more drastic: The student falls to 5th and plummets to the 70th percentile. + +![](images/16112e0dfc0a471cc9be3958ec07edd6aee6be70ce7c74e1163c5a227d5bd7b0.jpg) +Figure 2. Rankings of some students under various systems. 
The leftmost bar is for GPA with plus/minus grades; the bar second from the left is for GPA with straight letter grades; the bar second from the right is for our index system with plus/minus grades; and the rightmost bar is for our index system with straight letter grades. + +![](images/feaa17af4c1dc148c5a9dad1608e560d19da0e312de0dd96e41b750c9822fc9c.jpg) + +To understand better how such an event could occur, we take a closer look at the courses that Student 603 took. In the first two courses, the student received a B. However, the value of that B is what is under consideration in our model. The grades in the first course are 16 As, 13 Bs, and 1 C. Due to the large skewness coefficient, our formula uses the median, which is an A. So the student's grade is actually "below average." The second course has an + +average of an A-; here too, a B is a relatively low grade. In both of the other two courses, the mean/median grade is an A, and Student 603's As in these courses represent average performance in them. Our ranking system takes the average of the deviations from the estimator to find an index for the student. Specifically, the student has deviated from the estimator by a factor of 0.6387. + +Student 957 is ranked 12th by GPA with plus/minus grades, with grades A, A, A-, B+, A, A, A, B+, A, B, A. In the first six courses, the lowest overall grade out of all of the students is a C, and the mean/median for these courses is A, A, A, B+, A, and A. It is obvious from these statistics that the student's ranking has been overestimated. Under our system, Student 957 is ranked 23rd and drops from the 20th percentile to the 40th percentile. Although this is a dramatic drop, it is not as drastic as for Student 603. Looking more carefully at the other courses, we focus on the tenth course. Here, the mean/median is $\mathrm{C}+$ , while Student 957 received a B. + +Similarly, Student 1028 is ranked 28th by GPA. 
However, when his grades are compared with those in his respective courses, his ranking drops to 22nd. Once again, the strong level of grades throughout his courses lowers the "value" of the grades that he received. + +What about students who fall out of the top $10\%$ under the new rating system? The new system is supposed to determine better which students are worthy of scholarship and advancement, so this distinction is key. + +Student 609 is ranked 7th in GPA with plus/minus grades and 8th with straight letter grades, with grades A, A, B+, A. However, due to the relatively high average grades in these courses, this student suffers a loss in ranking under our system, to 12th with plus/minus grades. This drop causes Student 609 to fall out of the first decile. + +Student 713 benefits from our system. This student's grades are B, A-, A, B, and B, but the mean/median for each course is below the student's respective grades. Due to this, the student's ranking rises from 19th to 8th. As in the case for Student 609, the new system of ranking has a large effect on the awarding of scholarship. + +We need to ensure that our model is not too susceptible to a single grade change in a single course. How does such a change affect the overall ranking of this person and the overall rankings of other students? Since students' rankings are dependent on other students' grades, a single change could affect other students' rankings across the board. + +The most extreme scenario is when the course size is small and the grades are skewed. For example, let's take the case of a course with two students. Suppose that student X and student Y both receive As, so the mean/median is an A. If student X's grade changes to an F, the mean/median becomes a C. This change increases student Y's index, since student Y is no longer "average" in this course but above average. This change in student Y's index could potentially alter the rank of a few students. 
This scenario is the most extreme case but would also be extremely rare.

With a much larger course, a similar grade change would have minimal effect. Suppose that a course has 20 students with a distribution highly skewed towards the high end. The grades could be as follows:

A+, A+, A, A, A, A, A-, A-, A-, A-, B+, B, B, B, B, C+, C, F.

We use the median (A-) instead of the mean. Suppose that a student who received an A+ should have received an F. Once the correction is made, the grades would still be skewed and we would still use the median, which would still be approximately A-. Thus, the only person who would be affected would be the one student whose grade was corrected.

In certain circumstances, a grade change could potentially change the median of a skewed course. Since the grades in the course are skewed, the median will not change by much, and thus the change will not constitute a major problem. A few students may move ranks, and in some cases, deciles.

If the distribution is more normal, then we use the mean; and as long as the course size is relatively large, the effect of a grade change is minimal.

# Strengths

Implementing our system would involve just the introduction of a computer program that would use data already in the current system in the registrar's office.

Since our index system is implemented at the administrative level, professors would not have to alter their methods of grading at all. The index system is merely a new method of interpreting the grades currently issued by professors.

Another strength of the index model relates to the issue of grade inflation itself. One problem with grade inflation is that it may not be universal.
In other words, certain departments or colleges may be more or less affected by grade inflation. Thus, for employers or graduate institutions seeking the best candidates from a wide variety of undergraduate institutions, our model using the index system takes away the problem of comparing universities with varying levels of grade inflation.

# Weaknesses

One possible flaw with our index model is the lack of consideration of the quality of the students in a given course. For example, consider courses X and Y in which all students earn a letter grade of an A. Our model gives each student the same quality points. Perhaps consideration should be given to the quality of the students taking a given course. If all the students in course X also received As in their other courses, while students in course Y had a wide range of grades in their other courses, then performing at the "average" level in course X is theoretically more difficult than in course Y, so awarding equal points does not effectively differentiate as we would desire.

Another uncertainty may arise with courses with multiple sections. It may happen that higher-level students all take a certain section of the course, thus making the comparisons of course performance invalid. For this reason, larger universities especially may need to group all sections of a given course before computing the mean/median and calculating the comparative index to rate each student's performance in that course.

Another weakness is our trial-and-error choice of 0.2 as the value of skewness that determines which formula to use for the index. Our data are limited to fairly small course sizes, and different values may work better for larger course sizes. In rare cases when only one student is enrolled in a course (say, for a senior project or independent study), the student's grade would determine the mean/median and thus would always equal the estimator, resulting in the issuance of zero quality points.
Thus, a student can never be rewarded for doing well or hurt by doing poorly in such a situation.

# Future Models

More research into the effects of different skewness factors is necessary before implementation of such an index system. Also, including a method of evaluating the quality of students in a given course would further help the comparative ranking idea essential to this model. Looking at each student's grades outside a given course may help determine the quality of student. Thus, if a course is full of straight-A students and a particular student performs "above average" in that course according to our index system, that student should be awarded higher points than a student performing "above average" in a course full of students who have lower grades outside that course.

Also, further investigation into the choice of mean and median may yield more effective determination of which is the better estimator for a given course or perhaps for a particular university as a whole. Consideration may also be given to the different effects an index system has in large universities compared to smaller colleges and private universities.

# Judge's Commentary: The Outstanding Grade Inflation Papers

Daniel Zwillinger

Waltham, MA 02453

zwillinger@alum.mit.edu

Grade point average (GPA) is the most widely used summary of undergraduate student performance. Unfortunately, combining student grades using simple averaging to obtain a GPA score results in systematic biases against students enrolled in more rigorous curricula and/or taking more courses. Here is an example [Larkey and Caulkins 1992] of four students (call them I-IV) in which Student I always obtains the best grade in every course that she takes and Student IV always obtains the worst grade in every course that he takes, yet Student I has a lower GPA than Student IV does:
| | Student I | Student II | Student III | Student IV | Course GPA |
|---|---|---|---|---|---|
| Course 1 | B+ | | B- | | 3.00 |
| Course 2 | C+ | | C | | 2.15 |
| Course 3 | | A | | B+ | 3.65 |
| Course 4 | C- | D | | | 1.35 |
| Course 5 | | | A | A- | 3.85 |
| Course 6 | B+ | B | | | 3.15 |
| Course 7 | | B+ | | B | 3.15 |
| Course 8 | B+ | B | B- | C+ | 2.83 |
| Course 9 | | | B | B- | 2.85 |
| Student GPA | 2.78 | 2.86 | 2.88 | 3.0 | |
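The arithmetic behind the paradox is easy to check. Here is a minimal Python sketch (mine, not part of the commentary) using the standard 4.0 grade-point scale; the two schedules are hypothetical course lists chosen to be consistent with the marginal GPAs in the table:

```python
# Standard 4.0 grade-point scale (an assumed convention).
POINTS = {"A": 4.0, "A-": 3.7, "B+": 3.3, "B": 3.0, "B-": 2.7,
          "C+": 2.3, "C": 2.0, "C-": 1.7, "D": 1.0, "F": 0.0}

def gpa(grades):
    """Unweighted grade point average of a list of letter grades."""
    return sum(POINTS[g] for g in grades) / len(grades)

# Hypothetical schedules consistent with the table's student GPAs:
# the stronger student takes the harshly graded courses.
student_I = ["B+", "C+", "C-", "B+", "B+"]   # best grade in each course taken
student_IV = ["B+", "A-", "B", "C+", "B-"]   # worst grade in each course taken

print(round(gpa(student_I), 2))   # 2.78
print(round(gpa(student_IV), 2))  # 3.0
```

Even though the first student outperforms every classmate course by course, the simple average ranks the second student higher, which is exactly the bias the problem asks teams to repair.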
The MCM problem was to determine a "better" ranking than one using pure GPAs; this problem has no simple "solution." Johnson [1997] refers to many studies of this topic and suggests a technique that was considered—but not accepted—by the faculty at Duke University.

Each participating team is to be commended for its efforts in tackling this problem. As in any open-ended mathematical modeling problem, there is not only great latitude for innovative solution techniques but also the risk of finding no results valuable in supporting one's thesis. Solutions submitted contained a wide variety of approaches, including graph theory and fuzzy logic.

Unfortunately, several teams were confused as to the exact problem that the dean wanted solved. Assigning students to deciles, by itself, was not the problem; for example, deciles could be assigned from any list of student names by choosing the first $10\%$ of the students to be in the first decile, etc. The dean wanted meaningful deciles, based on students' relative course performance. Simply re-scaling GPAs so that the average became lower (and the top $10\%$ became more spread out) would not change the inherent problem.

The problem statement suggested that relative rankings of students within courses should be used to evaluate student performance. With this assumption, possible approaches include:

- using relative ranking with grade information

  (A useful additional assumption might be that faculty would give grades based on an absolute concept of what constitutes mastery of a course.)

- using relative ranking without grade information

In the latter approach (chosen by most teams), an instructor who assigns As to all students in a course provides exactly the same information as an instructor who assigns all Cs to the same students in another course.
Specific items that the judges looked for in the papers included:

- Reference to ranking problems in other fields that use relative performance results, such as chess and golf.
- A detailed worked-out example illustrating the method(s) proposed, even if there were only 4 students in the example.
- Computational results (when appropriate) and proper consideration of large datasets. Teams that used only a small sample in their computational analysis (say, 20 students) did not appreciate many of the difficulties with implementing a grade adjustment technique.
- Mention (if not use) of the fact that even though the GPA may not be as "good" a discriminator as the various solutions obtained by the teams, it seems reasonable that there be some correlation between the two.
- A response indicating understanding of the question about the changing of an individual student's grade. Such a grade change could affect that student's ranking, but if it affected many other students' ranks then the model is probably unstable.
- A clear, concise, complete, and meaningful list of assumptions. Needed assumptions included:
  - The average grade was A-. (This must be assumed, as it was stated in the problem statement; amazingly, several teams assumed other starting averages!)
  - In an $\{\mathrm{A+}, \mathrm{A}, \mathrm{A-}, \ldots\}$ system, not all grades were A-. (Otherwise, there is no hope of distinguishing student performance.)

Many teams confused assumptions with the results that they were trying to obtain. Teams also made assumptions that were not used in their solution, were naive, or needed further justification. For example,

- Many teams assumed a continuous distribution of grades. As an approximation of a discrete distribution, this is fine. However, several teams allowed grades higher than $\mathrm{A+}$, and other teams neglected to convert back to a discrete distribution when actually simulating grades.
- Several teams assumed that teachers routinely turn in a percentage score or course ranking with each letter grade. This, of course, would be very useful information but is not realistic.
- Low grades in a course do not necessarily imply that a course is difficult. A course could be scheduled only for students who are "at risk." Likewise, a listing of faculty grading does not necessarily allow "tough" graders to be identified: an instructor may teach only "at risk" students.

The most straightforward approaches to solving this problem were:

- Use of information about how a specific student in a course compared to the statistics of a course. For example, "Student 1's grade was 1.2 standard deviations above the mean, Student 2's grade was equal to the mean, ..." The numbers $\{1.2, 0, \ldots\}$ can be used to construct a ranking.
- Use of information about how a specific student in a course compared to other specific students. For example, "in Course 1, Student 1 was better than Student 2, Student 1 was better than Student 3, ..." This information can be used to construct a ranking.

The judges rewarded mention of these techniques, even if other techniques were pursued.

Other features of an outstanding paper included:

- Clear presentation throughout.
- Concise abstract with major results stated specifically.
- A section devoted to answering the specific questions raised in the problem statement, or stating why answers could not be given.
- Some mention of whether the data available (i.e., sophomores being ranked with only two years' worth of data) would lead to statistically valid conclusions.

None of the papers had all of the components mentioned above, but the outstanding papers had many of these features. Specific pluses of the outstanding papers included:

Duke team

- Their summary was exemplary.
By reading the summary, you could tell what they were proposing and why, what the issues they saw were, and what the models they produced were.

- Their use of least squares to solve an overdetermined set of equations was innovative.
- Their figures of raw and "adjusted" GPAs clearly and visually showed the correlation between the two and also the amount of "error" caused by exclusive use of GPAs.

Harvey Mudd team

- Their sections on "Practical Considerations" and "What Characterizes a Good Evaluation Method" demonstrated a clear understanding of the problem.
- Their figures of "Raw GPA" versus "Student Quality" clearly and visually showed the correlation between the two and also the amount of "error" caused by exclusive use of GPAs.

Stetson team

- Their use of the median, as well as the mean, in comparing a specific student to the statistics of a course was innovative. (Use of the median reduces the effects of outliers.)
- They interpreted the results of rank adjustment for specific individuals in their sample.
- They showed an awareness of the problem, as indicated by literature references in their "Background Information" section.

# References

Johnson, Valen E. 1997. An alternative to traditional GPA for evaluating student performance. Statistical Science 12 (4): 251-278.

Larkey, P., and J. Caulkins. 1992. Incentives to fail. Working Paper 92-51. Pittsburgh, PA: Heinz School of Public Policy and Management, Carnegie Mellon University.

# About the Author

Daniel Zwillinger attended MIT and Caltech, where he obtained a Ph.D. in applied mathematics. He taught at Rensselaer Polytechnic Institute, worked in industry (Sandia Labs, Jet Propulsion Lab, Exxon, IDA, Mitre, BBN), and has been managing a consulting group for the last five years. He has worked in many areas of applied mathematics (signal processing, image processing, communications, and statistics) and is the author of several reference books.
+ +# Practitioner's Commentary: The Outstanding Grade Inflation Papers + +Valen E. Johnson + +Institute of Statistics and Decision Sciences + +Duke University + +Box 90251 + +Durham, NC 27708-0251 + +valen@isds.duke.edu + +# Introduction + +I would like to begin my comments by congratulating all three teams on their innovative solutions to what is both a difficult and important societal problem. The depth of thought given to this problem in the short time each team had to generate a solution is very impressive, and all teams present solutions that are surprisingly close to proposals that have appeared in the educational research literature. In fact, the three proposed solutions span the range of previously proposed alternatives to GPAs in terms of model complexity, ranging from relatively simple to highly complex. In the discussion that follows, I focus on the problems associated with each proposal rather than their strengths. I chose this course not because the proposals are weak but instead because they are of sufficient quality to merit serious criticism. + +As all three teams note, GPA plays an important role in our educational system. In the particular scenario presented, an adjusted GPA is needed to more equitably allocate scholarships to students at ABC College. More generally, however, GPA is arguably the single most important summary of a student's academic performance while in college. It plays a critical role in determining the success of a student in the job market and is influential in determining whether or not a student is admitted to professional or graduate school. + +A more subtle influence of GPA is the impact that GPA has on student course selection. Because GPA is perceived to play such a critical role in a student's career, it is now common for students to select courses based on their expectations of how courses will be graded. 
In a recent survey of Duke University undergraduates, $69\%$ of participating students indicated that expected grading policy had some influence in their decision to enroll in courses taken to satisfy a distributional requirement! This fact suggests that fewer "hard" courses are taken by undergraduates as a result of differential grading policies and probably causes a net decrease in the number of science and mathematics courses taken. To a large extent, it also explains the spiraling assignment of grades that has taken place over the last decade. Students gravitate towards courses that are graded leniently, and professors soften grading standards to ensure adequate course enrollments and favorable course evaluations.

Changing the way GPA is computed can solve all of these problems, but implementing such a change is a difficult proposition. Any change to the current GPA system will be opposed by faculty and students who do not benefit from the change, and every meaningful modification of the GPA will produce a sizable proportion of each. For this reason, it is crucial that any modification to the current GPA system be both logically consistent and fair. Additionally, any change to the GPA must be understandable by nonstatisticians, at least at a rudimentary level. Simplicity is a benefit. Unfortunately, the fact that essentially all American universities report traditional GPA attests to the fact that no alternative is available that is

- simple,
- fair, and
- logically consistent.

Compromises in one or more of these three criteria are therefore inevitable. In evaluating the proposed solutions to ABC College's grade inflation problem, I will attempt to consider all three of these criteria and indicate the extent to which I feel each is compromised.

# Stetson University

The proposal by the team from Stetson University represents the simplest solution to the problem.
Their proposal essentially is to standardize grades in each class using either the median or the mean grade, depending on the value of the coefficient of skewness.

From a statistical standpoint, the use of the median in the standardization offers robustness to outliers, or extreme observations. For grade data, extreme observations are usually Ds and Fs. One or two Fs in a class can significantly affect the mean grade assigned but do not affect the median grade any more than one or two "below average" grades would.

Unfortunately, the use of the median grade in the standardization process introduces several potential difficulties.

- First, for highly discrete data (i.e., data taking on only two or three values), the median value can be very uninformative. As an illustration of how bad the median can be, consider the problem of estimating the probability of success for a new cancer treatment. If cures are coded as 1s and deaths as 0s, the median estimate for the probability of a cure in a clinical trial conducted with an odd number of patients must be either 0 or 1! A similar difficulty arises when analyzing student grades when only two or three unique grades are assigned. When grades are inflated, the median grade is likely to be either an A or A-, but this says little about the relative proportions of As and A-s that were awarded, and less still about the proportion of Bs, Cs, Ds, and Fs.
- Next, if skewness and outliers are considered problematic, then should not a robust estimate of the variance also be used? The Stetson team uses the sample variance to standardize the grades, but this estimate of the spread of a distribution is more sensitive to outliers than is the sample mean when estimating location. As an alternative to the sample variance, I would recommend that the interquartile range (IQR) be used as a measure of distributional spread when the median is used as a measure of centrality.
Like the median, the IQR is robust to outliers; for normally distributed data, it is nominally equal to 1.35 standard deviations.

A compromise between the median (which is robust against outliers) and the mean (which is often statistically efficient) would be a trimmed mean. To compute a trimmed mean, a fixed proportion of the extreme values of the data is ignored. Thus, to compute the $10\%$ trimmed mean, the lowest and highest $5\%$ of grades are thrown out and the mean of the remaining data is computed. Trimmed means have the advantage of offering some robustness against outliers while at the same time maintaining good statistical efficiency. Use of the trimmed mean in the Stetson team's standardization procedure might also eliminate the problem of deciding when to switch between mean and median, which in itself introduces some potentially large jumps in the standardized values for small changes in skewness.

Although the Stetson team's proposal wins in terms of simplicity, it is somewhat weaker in terms of fairness and consistency.

- In my opinion, fairness is compromised because the standardization procedure does not account for the quality of students within a class, as the team members themselves comment. At my institution (Duke University), there are many courses known to be populated by top students, and if implemented, this proposal would encourage students to opt out of these courses. Accounting for the quality of students in a course is an important facet of GPA adjustment, and I would hesitate to recommend any method that didn't account for this aspect of classroom grading. Implementing such a method would encourage students to register for lower-level classes with less talented students.
- Consistency is also a problem. To see why, consider two students who take identical courses through their senior years and receive A+s in all of their courses.
In the last semester of their senior year, the second student develops an interest in art history and takes an introductory course in that subject (in addition to the other courses that both he and the first student take). Both students again receive A+s in all of their courses, but unfortunately everyone in the art history course also receives an A+. Which student graduates on top?

According to the Stetson team's adjustment method, the first student beats out the second student for valedictorian, even though the second student tied the first student in all of the courses they took together and got an $\mathrm{A+}$ in the one course he took above their normal course load. Why? The standardized grade for an $\mathrm{A+}$ in the art history course is 0, which when averaged into the other $\mathrm{A+}$s the student received would lower his adjusted GPA. It is interesting to note that the same problem exists if the art history course is replaced with an independent study course, though in that case it is not clear how the estimate of the standard deviation would be determined.

# Duke University

The team from Duke University discusses three proposals for adjusting GPA, the first of which is a nonrobust version of the standardization scheme proposed by the Stetson team. Their other proposals are based on regression-type adjustments to traditional GPA. In their iterated GPA adjustment, the grades from each class are adjusted for the difference between the mean grade assigned in a class and the mean adjusted GPA of students in that class. This difference is then used to compute new adjusted GPAs, which lead to new adjustments to the class grades. The team's least-squares estimate of the adjusted GPA is based on the assumption that students tend to receive higher grades in classes taken in their academic majors. As I understand this proposal, they estimate the adjustments for each combination of major and course, conditionally on observed grades.
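An iterated adjustment of this general kind can be sketched as an alternating scheme: estimate each course's effect from the residuals of the current student scores, then re-estimate each student's score from the residuals of the course effects. The implementation and toy data below are mine, not the team's:

```python
# Toy sketch of iteratively fitting  grade = student score + course effect.
# Function names and data are hypothetical illustrations.
def fit_additive(records, n_iter=100):
    """records: list of (student, course, grade points) tuples."""
    students = {s for s, _, _ in records}
    courses = {c for _, c, _ in records}
    score = {s: 0.0 for s in students}   # adjusted "true GPA" estimates
    effect = {c: 0.0 for c in courses}   # course leniency/harshness
    for _ in range(n_iter):
        for c in courses:   # course effect: mean residual within the course
            rows = [(s, g) for s, cc, g in records if cc == c]
            effect[c] = sum(g - score[s] for s, g in rows) / len(rows)
        for s in students:  # student score: mean residual over the student's courses
            rows = [(cc, g) for ss, cc, g in records if ss == s]
            score[s] = sum(g - effect[cc] for cc, g in rows) / len(rows)
    return score, effect

# Student "i" beats "iv" in their shared course, but "iv" also takes
# a leniently graded course and so has the higher raw average.
data = [("i", "hard", 3.3), ("iv", "hard", 3.0), ("iv", "easy", 4.0)]
score, effect = fit_additive(data)
print(score["i"] > score["iv"])  # True: the adjustment reverses the raw ordering
```

Because an overall constant can be shifted between scores and effects, only differences in the fitted scores are meaningful, which is all a ranking requires.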
Both adjustment schemes are quite similar to an adjusted GPA proposed in a more formal framework by Caulkins et al. [1996], though the "least-squares" proposal is also similar to the pairwise course/department differences estimated by Strenta and Elliott [1987], Goldman and Widawski [1976], and Goldman et al. [1974]. For comparison, the models proposed by Caulkins et al. [1996] have the general form

$$
\mathrm{Grade}_{ij} = \mathrm{True\ GPA}_{i} + \mathrm{Course\ effect}_{j} + \epsilon_{ij}.
$$

Under this model, the grade received by student $i$ in course $j$ is assumed to be an additive function of their "true GPA" plus a course effect for the $j$th course, plus a (normally distributed) random error. Like the Duke team, Caulkins et al. [1996] also propose an iterative procedure for estimating all students' "true GPAs" along with all course effects.

From a technical standpoint, I regard this modeling approach as a significant improvement on simple standardization schemes. This approach implicitly accounts for both the grading policies of individual instructors and the quality of students within each class. Algorithmically, such models are comparatively simple to estimate and can be computed in less than two minutes on a PC for datasets containing 12,000 students and 17,000 classes.

The primary drawback of these regression-type models is the assumption that grades are intervally scaled. In other words, it usually does not make sense to assume that the difference between an A and A- is the same as the difference between a C and C-, or that a D added to a B is equal to an A. Typical grading scales assign more probability to As and Bs than to Cs and Ds, and by not taking the ordinal nature of grade data into account, substantial statistical efficiency is lost. These models also suffer from the paradox presented above for standardized GPAs but to a lesser extent.
The art history course would lower the second student's "true GPA," but an independent study course would leave it unaffected.

Substantively, I have several minor objections to the modeling assumptions made by the Duke team. Perhaps most important, they premise their model on the assumption that "it is possible to assign a single number, or 'ability score,' to each student, which indicates their relative scholastic ability, and in particular, their worthiness of the scholarship." A similar assumption is made by the team from Harvey Mudd College. In fact, this assumption is neither necessary nor appropriate. Each student's true GPA can instead be interpreted as the ability score for the student in courses that that student chose to take. Some courses are required by the university to satisfy distributional requirements, but most will be courses that students choose to take in their areas of interest and competence. The model proposed by Caulkins et al. [1996] and the variation of it proposed by the Duke team can be applied without difficulty to colleges in which, say, humanities students are completely separated from engineering students in the sense that they have no common classes. In such cases, "true GPA" corresponds to the ability of students in the classes that they took. The grades assigned to engineering students in engineering classes should not be used to estimate the abilities of engineering students in humanities classes.

I also feel that it is important to distinguish difficult courses from courses that are graded stringently. They are not (always) the same, so it does not follow that students should be penalized for taking courses that are graded leniently. In fact, I would argue that any student who receives the highest grade in all classes that she takes should be awarded a high adjusted GPA.

As a final comment on this proposal, I think it is dangerous to devise a ranking algorithm that rewards students for the type of curriculum chosen.
For some students and some majors, a "well-rounded" curriculum is desirable. For others, a more concentrated curriculum is apropos. A student who has satisfied the relevant distributional requirements for their university should be free to choose whatever courses they wish—without penalty. Indeed, I lament the fact that American mathematics undergraduates are often ill-prepared for graduate studies in statistics programs because they have taken so few mathematics courses.

# Harvey Mudd College

The proposal by the team from Harvey Mudd College is also very interesting. Though there are several technical problems in their model specification, the paradigm proposed by this team is surprisingly close to a statistical model called the Graded Response Model (GRM). GRMs are normally introduced in advanced graduate-level statistics courses, and I was impressed by the extent to which this team exposed the underlying assumptions of these models.

As suggested by the Harvey Mudd College team, the basic assumption of a GRM is that instructors choose thresholds on an underlying achievement scale and assign grades to students based on the grade intervals into which their classroom achievement is observed to fall. Letting $z_{ij}$ denote the classroom performance of student $i$ in class $j$, and $\gamma_F^j, \gamma_D^j, \ldots, \gamma_B^j$ the upper cutoffs for each grade on the ability scale, the GRM assumes that student $i$ receives a grade of, say, C in class $j$ if

$$
\gamma_{D}^{j} < z_{ij} < \gamma_{C}^{j}.
$$

A further assumption of the model is that the mean ability of student $i$, say $z_{i}$, for the courses that student $i$ chooses to take, is related to his performance in class $j$ according to

$$
z_{ij} = z_{i} + \epsilon_{ij}.
$$

Here, $\epsilon_{ij}$ denotes a random deviation. In the GRM, grade-cutoff vectors, mean student achievements, and the distribution of the error terms are estimated jointly.
Details of this model are discussed further in Johnson [1997].

In terms of the GRM, the Harvey Mudd team's $q_{i}$ is roughly equivalent to $z_{i}$, while $c_{ij}$ is comparable to $\epsilon_{ij}$, and $d_{j}$ plays a role similar to the variance of the distribution of the error term $\epsilon_{ij}$ if this distribution is assumed to be the same for all students in a given class. The primary difference between the proposed model and the GRM is that the grade-cutoff vectors $\gamma^{j}$ are fixed in the former and estimated in the latter. Although the team takes a reasonable approach toward fixing these cutoffs, doing so leads to several inconsistencies in the resulting model. Partially to overcome these difficulties, the team introduces a harshness term $h_k$. This harshness term models a uniform shift of the grade-cutoff values for all classes taught by professor $k$. The team estimates values of $h_k$ from the mean grades assigned by each professor.

The most important technical defect in this model is caused by the assumption that the grade-cutoffs for each class can be obtained by a contraction and shift of the baseline cutoffs. When harshness is 0, it follows that large increases in $d_{j}$ result in more extreme grades (that is, more As and Fs) and fewer middle grades. Shifts in harshness can change the As to Bs or Cs, but there is still a gap between the high and low marks. This is not typical of the grading patterns observed in actual grade data. To accommodate the distribution of grades actually observed, it is usually necessary to adjust the relative widths of the intervals associated with each grade on a class-by-class basis.

By estimating the grade-cutoffs separately for each class, several of the more controversial assumptions made by this team regarding the properties of undergraduate grades can be eliminated.
For example, if grade-cutoffs are estimated individually for each class, it is not necessary to assume that one letter grade corresponds to one standard deviation in student achievement, or that professors do not grade on curves or compare performances of students within classes, or that professors uniformly adjust for course difficulty.

Other questionable model assumptions include the statements that

- students select courses randomly,
- students do not gravitate to courses that are graded leniently, and
- professors have accurate perceptions of student achievements.

None of these assumptions is required for the GRM; by liberalizing the interpretation of this model's parameters, they could be eliminated here as well.

# Conclusion

Of the three models proposed, only the model proposed by the team from Harvey Mudd College seems to handle the two-student paradox mentioned above. Their model also attempts to combine information about the grading patterns of instructors across classes, which is an aspect of model fitting not normally included even in GRMs. The primary substantive disadvantage of this team's proposal is its complexity. It is clearly the most difficult model to explain.

In summary, all three teams proposed models that would improve the rankings of students within most undergraduate institutions. Importantly, each proposal would also reduce the incentives introduced by traditional GPA for students to enroll in "easy" classes, and would therefore improve the academic environment at colleges where they were applied. Of course, the greatest weakness of each proposal is that it is only a proposal! I encourage each team truly to make their models an application of mathematics by lobbying for the adoption of an adjusted GPA at their institution.

# References

Caulkins, J., P. Larkey, and J. Wei. 1996. Adjusting GPA to reflect course difficulty. Working paper. Pittsburgh, PA: Heinz School of Public Policy and Management, Carnegie Mellon University.
Goldman, R., D. Schmidt, B. Hewitt, and R. Fisher. 1974. Grading practices in different major fields. American Educational Research Journal 11: 343-357.

Goldman, R., and M. Widawski. 1976. A within-subjects technique for comparing college grading standards: implications in the validity of the evaluation of college achievement. Educational and Psychological Measurement 36: 381-390.

Johnson, Valen E. 1997. An alternative to traditional GPA for evaluating student performance. Statistical Science 12 (4): 251-278.

Strenta, A., and R. Elliott. 1987. Differential grading standards revisited. Journal of Educational Measurement 24: 281-291.

# About the Author

Valen E. Johnson is Associate Professor of Statistics and Decision Sciences at Duke University. His research interests include statistical image analysis, ordinal data modeling, and Markov Chain Monte Carlo simulation methods.
\ No newline at end of file
diff --git a/MCM/1995-2008/1999MCM&ICM/1999MCM&ICM.md b/MCM/1995-2008/1999MCM&ICM/1999MCM&ICM.md
new file mode 100644
index 0000000000000000000000000000000000000000..4d6875c6b00a4c7a3bec45c0a95d9d502ecc7e80
--- /dev/null
+++ b/MCM/1995-2008/1999MCM&ICM/1999MCM&ICM.md
@@ -0,0 +1,4542 @@

# The U

# M

Publisher

COMAP, Inc.

Executive Publisher

Solomon A. Garfunkel

Editor

Paul J. Campbell

Campus Box 194

Beloit College

700 College St.

Beloit, WI 53511-5595

campbell@beloit.edu

On Jargon Editor

Yves Nievergelt

Department of Mathematics

Eastern Washington University

Cheney, WA 99004

ynievergelt@ewu.edu

Reviews Editor

James M. Cargal

P.O. Box 210667

Montgomery, AL 36121-0667

JMCargal@sprintmail.com

Development Director

Laurie W. Aragon

Production Manager

George W. Ward

Project Manager

Roland Cheyne

Copy Editors

Seth A.
Maislin + +Pauline Wright + +Distribution Manager + +Kevin Darcy + +Production Secretary + +Gail Wessell + +Graphic Designer + +Daiva Kiliulis + +# AP Journal + +Vol. 20, No. 3 + +# Associate Editors + +Don Adolphson + +Ron Barnes + +Arthur Benjamin + +James M. Cargal + +Murray K. Clayton + +Courtney S. Coleman + +Linda L. Deneen + +James P. Fink + +Solomon A. Garfunkel + +William B. Gearhart + +William C. Giauque + +Richard Haberman + +Charles E. Lienert + +Walter Meyer + +Yves Nievergelt + +John S. Robertson + +Garry H. Rodrigue + +Ned W. Schillow + +Philip D. Straffin + +J.T. Sutcliffe + +Donna M. Szott + +Gerald D. Taylor + +Maynard Thompson + +Ken Travers + +Robert E.D. ("Gene") Woolsey + +Brigham Young University + +University of Houston-Downtown + +Harvey Mudd College + +Troy State University Montgomery + +University of Wisconsin—Madison + +Harvey Mudd College + +University of Minnesota, Duluth + +Gettysburg College + +COMAP, Inc. + +California State University, Fullerton + +Brigham Young University + +Southern Methodist University + +Metropolitan State College + +Adelphi University + +Eastern Washington University + +Georgia College and State University + +Lawrence Livermore Laboratory + +Lehigh Carbon Community College + +Beloit College + +St. Mark's School, Dallas + +Comm. College of Allegheny County + +Colorado State University + +Indiana University + +University of Illinois + +Colorado School of Mines + +# MEMBERSHIP PLUS FOR INDIVIDUAL SUBSCRIBERS + +Individuals subscribe to The UMAP Journal through COMAP's Membership Plus. This subscription includes print copies of quarterly issues of The UMAP Journal, our annual collection UMAP Modules: Tools for Teaching, our organizational newsletter Consortium, on-line membership that allows members to search our on-line catalog, download COMAP print materials, and reproduce for use in their classes, and a $10\%$ discount on all COMAP materials. + +(Domestic) #2020 $69 + +(Outside U.S.) 
#2021 $79 + +# INSTITUTIONAL PLUS MEMBERSHIP SUBSCRIBERS + +Institutions can subscribe to the Journal through either Institutional Plus Membership, Regular Institutional Membership, or a Library Subscription. Institutional Plus Members receive two print copies of each of the quarterly issues of The UMAP Journal, our annual collection UMAP Modules: Tools for Teaching, our organizational newsletter Consortium, on-line membership that allows members to search our on-line catalog, download COMAP print materials, and reproduce for use in any class taught in the institution, and a $10\%$ discount on all COMAP materials. + +(Domestic) #2070 $395 + +(Outside U.S.) #2071 $405 + +# INSTITUTIONAL MEMBERSHIP SUBSCRIBERS + +Regular Institutional members receive only print copies of The UMAP Journal, our annual collection UMAP Modules: Tools for Teaching, our organizational newsletter Consortium, and a $10\%$ discount on all COMAP materials. + +(Domestic) #2040 $165 + +(Outside U.S.) #2041 $175 + +# LIBRARY SUBSCRIPTIONS + +The Library Subscription includes quarterly issues of The UMAP Journal, our annual collection UMAP Modules: Tools for Teaching, and our organizational newsletter Consortium. + +(Domestic) #2030 $140 + +(Outside U.S.) #2031 $160 + +To order, send a check or money order to COMAP, or call toll-free + +1-800-77-COMAP (1-800-772-6627). + +The UMAP Journal is published quarterly by the Consortium for Mathematics and Its Applications (COMAP), Inc., Suite 210, 57 Bedford Street, Lexington, MA, 02420, in cooperation with the American Mathematical Association of Two-Year Colleges (AMATYC), the Mathematical Association of America (MAA), the National Council of Teachers of Mathematics (NCTM), the American Statistical Association (ASA), the Society for Industrial and Applied Mathematics (SIAM), and The Institute for Operations Research and the Management Sciences (INFORMS).
The Journal acquaints readers with a wide variety of professional applications of the mathematical sciences and provides a forum for the discussion of new directions in mathematical education (ISSN 0197-3622). + +Second-class postage paid at Boston, MA + +and at additional mailing offices. + +Send address changes to: + +The UMAP Journal + +COMAP, Inc. + +57 Bedford Street, Suite 210, Lexington, MA 02420 + +Copyright 1999 by COMAP, Inc. All rights reserved. + +# Table of Contents + +Publisher's Editorial + +Mathematics and Its Applications for All + +Solomon A. Garfunkel 185 + +Modeling Forum + +Results of the 1999 Mathematical Contest in Modeling + +Frank Giordano 187 + +The Asteroid Impact Problem + +Deep Impact + +Dominic Mazzoni, Matthew Fluent, and Joel Miller 211 + +Asteroid Impact at the South Pole: A Model-Based Risk Assessment + +Michael Rust, Paul Sangiorgio, and Ian Weiner 225 + +Antarctic Asteroid Effects + +Nicholas R. Baeth, Andrew M. Meyers, and Jacob E. Nelson 241 + +Not an Armageddon + +Mikhail Khlystov, Ilya Shpitser, and Seth Sullivant 253 + +The Sky is Falling! + +Daniel Forrest, Garrett Aufdemberg, and Murray Johnson 263 + +Judge's Commentary: The Outstanding Asteroid Impact Papers + +Patrick J. Driscoll 269 + +The Lawful Capacity Problem + +Determining the People Capacity of a Structure + +Samuel W. Malone, W. Garrett Mitchener, and John Alexander Thacker 273 + +Hexagonal Unpacking + +David Rudel, Joshua Greene, and Cameron McLeman 287 + +Don't Panic! + +Timothy Jones, Jeremy Katz, and Allison Master 297 + +Standing Room Only + +Frederick D. Franzwa, Jonathan L. Matthews, and James I. Meyer 311 + +Room Capacity Analysis Using a Pair of Evacuation Models + +Gregg A. Christopher, Orion Lawler, and Jason Tedor 321 + +Judge's Commentary: The Outstanding Lawful Capacity Papers + +Jerrold R.
Griggs 331 + +Practitioner's Commentary: The Outstanding Lawful Capacity Papers: + +The Answer Is Not the Solution + +Richard Hewitt 335 + +Interdisciplinary Contest in Modeling (ICM) + +The Ground Pollution Problem + +Pollution Detection: Modeling an Underground Spill through Hydro-Chemical Analysis + +James R. Garlick and Savannah N. Crites 343 + +Locate the Pollution Source + +Shen Quan, Yang Zhenyu, and He Xiaofei 355 + +Judge's Commentary: The Ground Pollution Papers + +David L. Elliott 369 + +Author's Commentary: The Outstanding Ground Pollution Papers + +Yves Nievergelt + +# Our Winners Don’t Have All the Answers + +![](images/810d2df374dd559623a6b5fff8f6b430e2b552d745fd5c599066aa026f3156fe.jpg) + +A collection of student solution papers to open-ended problems from the Mathematical Contest in Modeling. + +# With MCM: The First Ten Years you can: + +Build a modeling course + +Twenty interesting, open-ended problems + +Inspire your students + +Creative and eloquent MCM team solution papers + +Prepare a team for MCM + +Advice from successful advisors and judges' commentary on winning papers + +COMAP ANNOUNCES + +# MATHEMATICAL CONTEST IN MODELING + +FEBRUARY 4-7, 2000 + +# MCM 2000 + +The sixteenth annual international Mathematical Contest in Modeling will be held February 4-7, 2000. The contest will offer students the opportunity to compete in a team setting, using mathematics to solve real-world problems. + +For registration information, contact: + +Attn: Clarice Callahan + +MCM, COMAP, Inc., Suite 210, 57 Bedford Street, Lexington, MA 02420 + +email: mcm@comap.com voice: 781/862-7878 ext. 37 + +Major funding provided by the National Security Agency.
+ +Additional support for this project is provided by the Institute for Operations Research and the Management Sciences, the Society for Industrial and Applied Mathematics, and the Mathematical Association of America. + +![](images/0ae49fafcf9f4d8cb4262127fe918ced267af326fa904da77e5da27dc10082a8.jpg) + +# Publisher's Editorial + +# Mathematics and Its Applications for All + +Solomon A. Garfunkel +Executive Director +COMAP, Inc. +57 Bedford St., Suite 210 +Lexington, MA 02420 +s.garfunkel@mail.comap.com + +The MCM issue of the UMAP Journal has become an occasion for me to reflect out loud on where COMAP is and where I feel we are going. Like cramming for finals, or preparing for a Board of Trustees meeting, this is a valuable exercise. It is too easy to get caught up in the day-to-day work, answering email, going to meetings, reading brochure copy, and so on, rather than taking a longer view. + +Two major new COMAP initiatives were funded this year by the National Science Foundation. They are very closely related and move COMAP in an important new direction. The first of these projects is the Developmental Mathematics and its Applications Project (DevMap) funded through the Advanced Technology Education (ATE) section of the Division of Undergraduate Education (DUE). The goal of this project is to create a new two-semester undergraduate developmental mathematics sequence that embodies the philosophy of reform in the spirit of the NCTM Standards. In other words, we are creating a sequence of developmental courses that + +- assumes the use of graphing calculators, spreadsheets, and geometric utility programs; +- presents mathematical ideas in the context of their contemporary applications; and +- encourages group activities and a more open pedagogical approach. + +Clearly, given our recent high-school efforts in producing Mathematics: Modeling Our World, this is a natural direction. But there is much for us to learn.
+ +The great majority of students in developmental courses are in two-year colleges. There we find a bimodal distribution of age cohorts—namely, about half of the students coming directly from high school and the other half returning to school, frequently after a prolonged absence. Most of these students work and are looking to improve their career options. There is a larger responsibility in courses such as these to use contexts that are immediately relevant to these aspirations. + +We have similar concerns in our other major new effort—TechMap. This project's goal is to create a series of modules for use in secondary-school mathematics courses that feature more technically oriented contexts, such as manufacturing, agriculture, finance, and so on. The intent here is to create materials that can serve as replacement units for any core high-school program and at a variety of levels. The important point is that these units are intended for all students, rather than only those students tracked into a vocational course of study. Far too often, important and interesting technical applications of mathematics are considered inappropriate for core curricula. But we all need to learn how the world works and to see the broadest possible range of mathematical models, regardless of whether we know what we want to do when we grow up. + +Working in developmental and technical arenas is a new and important direction for COMAP. We look forward to the challenge of serving this large and diverse student population and expanding our mission to include "mathematics and its applications for all." + +# About the Author + +Sol Garfunkel received his Ph.D. in mathematical logic from the University of Wisconsin in 1967. He was at Cornell University and at the University of Connecticut at Storrs for eleven years and has dedicated the last 20 years to research and development efforts in mathematics education. He has been the Executive Director of COMAP since its inception in 1980.
+ +He has directed a wide variety of projects, including UMAP (Undergraduate Mathematics and Its Applications Project), which led to the founding of this Journal, and HiMAP (High School Mathematics and Its Applications Project), both funded by the NSF. For Annenberg/CPB, he directed three telecourse projects: For All Practical Purposes (in which he appeared as the on-camera host), Against All Odds: Inside Statistics, and In Simplest Terms: College Algebra. He is currently co-director of the Applications Reform in Secondary Education (ARISE) project, a comprehensive curriculum development project for secondary school mathematics. + +# Modeling Forum + +# Results of the 1999 Mathematical Contest in Modeling + +Frank Giordano, MCM Director + +COMAP, Inc. + +57 Bedford St., Suite 210 + +Lexington, MA 02420 + +f.giordano@mail.comap.com + +# Introduction + +A total of 479 teams of undergraduates, from 223 institutions in 9 countries, spent the first weekend in February working on applied mathematics problems. They were part of the 15th annual Mathematical Contest in Modeling (MCM). On Friday morning, the MCM faculty advisor opened a packet and presented each team of three students with a choice of one of two problems. After a weekend of hard work, typed solution papers were mailed to COMAP on Monday. Twelve of the top papers appear in this issue of The UMAP Journal. + +A new feature this year was the inauguration of the Interdisciplinary Contest in Modeling (ICM), an extension of the MCM designed explicitly to stimulate participation by students from different disciplines and to develop and advance interdisciplinary problem-solving skills. The ICM featured a new kind of problem (Problem C) that in 1999 involved downloading and analyzing data and reflected a situation in which concepts from mathematics, chemistry, environmental science, and environmental engineering were useful. Each year's ICM announcement brochure advises which disciplines are likely to be helpful. 
+ +Results and winning papers from the first thirteen contests were published in special issues of Mathematical Modeling (1985-1987) and The UMAP Journal (1985-1998). The 1994 volume of Tools for Teaching, commemorating the tenth anniversary of the contest, contains the 20 problems used in the first ten years of the contest and a winning paper for each. Limited quantities of that volume and of the special MCM issues of the Journal for the last few years are available from COMAP. + +# Problem A: Deep Impact + +For some time, the National Aeronautics and Space Administration (NASA) has been considering the consequences of a large asteroid impact on the earth. + +As part of this effort, your team has been asked to consider the effects of such an impact were the asteroid to land in Antarctica. There are concerns that an impact there could have considerably different consequences than one striking elsewhere on the planet. + +You are to assume that an asteroid is on the order of $1,000\mathrm{m}$ in diameter, and that it strikes the Antarctic continent directly at the South Pole. + +Your team has been asked to provide an assessment of the impact of such an asteroid. In particular, NASA would like an estimate of the amount and location of likely human casualties from this impact, an estimate of the damage done to the food production regions in the oceans of the southern hemisphere, and an estimate of possible coastal flooding caused by large-scale melting of the Antarctic polar ice sheet. + +# Problem B: Unlawful Assembly + +Many facilities for public gatherings have signs that state that it is "unlawful" for their rooms to be occupied by more than a specified number of people. Presumably, this number is based on the speed with which people in the room could be evacuated from the room's exits in case of an emergency. Similarly, elevators and other facilities often have "maximum capacities" posted. 
+ +Develop a mathematical model for deciding what number to post on such a sign as being the "lawful capacity." As part of your solution, discuss criteria—other than public safety in the case of a fire or other emergency—that might govern the number of people considered "unlawful" to occupy the room (or space). Also, for the model that you construct, consider the differences between a room with movable furniture such as a cafeteria (with tables and chairs), a gymnasium, a public swimming pool, and a lecture hall with a pattern of rows and aisles. You may wish to compare and contrast what might be done for a variety of different environments: elevator, lecture hall, swimming pool, cafeteria, or gymnasium. Gatherings such as rock concerts and soccer tournaments may present special conditions. + +Apply your model to one or more public facilities at your institution (or neighboring town). Compare your results with the stated capacity, if one is posted. If used, your model is likely to be challenged by parties with interests in increasing the capacity. Write an article for the local newspaper defending your analysis. + +# Problem C: Ground Pollution + +# Background + +Several practically important but theoretically difficult mathematical problems pertain to the assessment of pollution. One such problem consists in deriving accurate estimates of the location and amount of pollutants seeping inaccessibly underground, and the location of their source, on the basis of very few measurements taken only around, but not necessarily directly in, the suspected polluted region. + +# Example + +The data set (an Excel file at http://www.comap.com/mcm/prodata.xls, downloadable into most spreadsheets) shows measurements of pollutants in underground water from 10 monitoring wells (MW) from 1990 to 1997. The units are micrograms per liter ( $\mu \mathrm{g} / \mathrm{l}$ ). The location and elevation for eight wells are known and given in Table 1.
The first two numbers are the coordinates of the location of the well on a Cartesian grid on a map. The third number is the altitude in feet above Mean Sea Level of the water level in the well. + +Table 1. Locations for eight wells in Problem C. + +
<tr><th>Well Number</th><th>x-Coordinate (ft)</th><th>y-Coordinate (ft)</th><th>Elevation (ft)</th></tr>
<tr><td>MW-1</td><td>4187.5</td><td>6375.0</td><td>1482.23</td></tr>
<tr><td>MW-3</td><td>9062.5</td><td>4375.0</td><td>1387.92</td></tr>
<tr><td>MW-7</td><td>7625.0</td><td>5812.5</td><td>1400.19</td></tr>
<tr><td>MW-9</td><td>9125.0</td><td>4000.0</td><td>1384.53</td></tr>
<tr><td>MW-11</td><td>9062.5</td><td>5187.5</td><td>1394.26</td></tr>
<tr><td>MW-12</td><td>9062.5</td><td>4562.5</td><td>1388.94</td></tr>
<tr><td>MW-13</td><td>9062.5</td><td>5000.0</td><td>1394.25</td></tr>
<tr><td>MW-14</td><td>4750.0</td><td>2562.5</td><td>1412.00</td></tr>
+ +The locations and elevations of the other two wells in the data set (MW-27 and MW-33) are not known. In the data set, you will also see the letter T, M, or B after the well number, indicating that the measurements were taken at the Top, Middle, or Bottom of the aquifer in the well. Thus, MW-7B and MW-7M are from the same well, but from the bottom and from the middle. Also, other measurements indicate that water tends to flow toward well MW-9 in this area. + +# Problem One + +Build a mathematical model to determine whether any new pollution has begun during this time period in the area represented by the data set. If so, identify the new pollutants and estimate the location and time of their source. + +# Problem Two + +Before the collection of any data, the question arises whether the intended type of data and model can yield the desired assessment of the location and amount of pollutants. Liquid chemicals may have leaked from one of the storage tanks among many similar tanks in a storage facility built over a homogeneous soil. Because probing under the many large tanks would be prohibitively expensive and dangerous, measuring only near the periphery of the storage facility or on the surface of the terrain seems preferable. Determine what type and number of measurements, taken only outside the boundary or on the surface of the entire storage facility, can be used in a mathematical model to determine whether a leak has occurred, when it occurred, where (from which tank) it occurred, and how much liquid has leaked. + +# The Results + +The solution papers were coded at COMAP headquarters so that names and affiliations of the authors would be unknown to the judges. Each paper was then read preliminarily by two "triage" judges at Southern Connecticut State University (Problem A), Carroll College (Montana) (Problem B), or University of New Hampshire (Problem C). At the triage stage, the summary and overall organization were important for judging a paper. 
If the judges' scores diverged for a paper, the judges conferred; if they still did not agree on a score, a third judge evaluated the paper. + +Final judging took place at Harvey Mudd College, Claremont, California. The judges classified the papers as follows: + +
<tr><th></th><th>Outstanding</th><th>Meritorious</th><th>Honorable Mention</th><th>Successful Participation</th><th>Total</th></tr>
<tr><td>Asteroid Impact</td><td>5</td><td>34</td><td>61</td><td>112</td><td>212</td></tr>
<tr><td>Lawful Capacity</td><td>5</td><td>39</td><td>72</td><td>91</td><td>207</td></tr>
<tr><td>Ground Pollution</td><td>2</td><td>9</td><td>17</td><td>32</td><td>60</td></tr>
<tr><td>Total</td><td>12</td><td>82</td><td>150</td><td>235</td><td>479</td></tr>
+ +The twelve papers that the judges designated as Outstanding appear in this special issue of The UMAP Journal, together with commentaries. We list those teams and the Meritorious teams (and advisors) below; the list of all participating schools, advisors, and results is in the Appendix. + +# Outstanding Teams + +# Institution and Advisor + +# Team Members + +# Asteroid Impact Papers + +"Deep Impact" + +Harvey Mudd College + +Claremont, CA + +Michael Moody + +Dominic Mazzoni + +Matthew Fluent + +Joel Miller + +"Asteroid Impact at the South Pole: + +A Model-Based Risk Assessment" + +Harvey Mudd College + +Claremont, CA + +Michael Moody + +Michael Rust + +Paul Sangiorgio + +Ian Weiner + +"Antarctic Asteroid Effects" + +Pacific Lutheran University + +Tacom, WA + +Rachid Benkhalti + +Nicholas R. Baeth + +Andrew M. Meyers + +Jacob E. Nelson + +"Not an Armageddon" + +University of California-Berkeley + +Berkeley, CA + +Rainer K. Sachs + +Mikhail Khlystov + +Ilya Shpitser + +Seth Sulivant + +"The Sky Is Falling" + +University of Puget Sound + +Tacom, WA + +Perry Fizzano + +Daniel Forrest + +Garrett Aufdemberg + +Murray Johnson + +# Lawful Capacity Papers + +"Determining the People Capacity of a Structure" + +Duke University + +Durham, NC + +David P. Kraines + +Samuel W. Malone + +W. Garrett Mitchener + +John Alexander Thacker + +"Hexagonal Unpacking" + +Harvey Mudd College + +Claremont, CA + +Ran Libeskind-Hadas + +David Rudel + +Joshua Greene + +Cameron McLeman + +"Don't Panic!" + +North Carolina School of Science and Mathematics + +Durham, NC + +Dot Doyle + +Timothy Jones + +Jeremy Katz + +Allison Master + +"Standing Room Only" + +Rose-Hulman Institute of Technology + +Terre Haute, IN + +Aaron D. Klebanoff + +Frederick D. Franzwa + +Jonathan L. Matthews + +James I. Meyer + +"Room Capacity Analysis Using a Pair of Evacuation Models" + +University of Alaska Fairbanks + +Fairbanks, AK + +Chris Hartman + +Gregg A. 
Christopher + +Orion Lawler + +Jason Tedor + +# Ground Pollution Papers + +"Pollution Detection: Modeling an Underground Spill Through Hydro-Chemical Analysis" + +Earlham College + +Richmond, IN + +Mic Jackson + +James R. Garlick + +Savannah N. Crites + +"Locate the Pollution Source" + +Zhejiang University + +Hangzhou, China + +Zhang Chong + +Shen Quan + +Yang Zhenyu + +He Xiaofei + +# Meritorious Teams + +Asteroid Impact Papers (34 teams) + +Asbury College, Wilmore, KY (Kenneth P. Rietz) + +Beijing University of Post and Telecommunications, Beijing, China (He Zuguo) + +Beloit College, Beloit, WI (Philip D. Straffin) + +Brandon University, Brandon, Manitoba, Canada (Doug Pickering) + +California Polytechnic State University, San Luis Obispo, CA (two teams) (Thomas O'Neil) + +College of Wooster, Wooster, OH (Reuben Settergren) + +Governor's School for Government and International Studies, Richmond, VA (two teams) (Crista Hamilton) + +Greenville College, Greenville, IL (Galen R. Peters) + +Harbin Institute of Technology, Harbin, Heilongjiang, China (Wang Yong) + +Hefei University of Technology, Hefei, Anhui, China (Jie Bao) + +Lewis and Clark College, Portland, OR (Robert W. Owens) + +Montclair State University, Upper Montclair, NJ (Mary Lou West) + +North Carolina School of Science and Mathematics, Durham, NC (Dot Doyle) + +Nanjing University of Science and Technology, Nanjing, Jiangsu, China (Wu Xing Min) + +National U. of Defence Technology, Chang Sha, Hunan, China (Wu MengDa) + +National University of Ireland, Galway, Ireland (Martin Meere) + +Pacific Lutheran University, Tacoma, WA (Rachid Benkhalti) + +Rose-Hulman Institute of Technology, Terre Haute, IN (Frank Young) + +South China University of Technology, Guangzhou, Guangdong, China (Xie Lejun) + +Trinity College Dublin, Dublin, Ireland (Timothy G. Murphy) + +Trinity University, San Antonio, TX (Jeffrey Lawson) + +U.S. Air Force Academy, USAF Academy, CO (Dawn Stewart) + +U.S. 
Military Academy, West Point, NY (Charles C. Tappert) + +University of Alaska Fairbanks, Fairbanks, AK (Chris Hartman) + +University of Colorado, Colorado Springs, CO (Holly Zullo) + +University of South Carolina, Aiken, SC (Nieves A. McNulty) + +Virginia Western Community College, Roanoke, VA (Ruth Sherman) + +Western Washington University, Bellingham, WA (Igor Averbakh) + +Westminster College, New Wilmington, PA (Barbara Faires) + +Wuhan University of Hydraulics and Engineering, Wuhan, Hubei, China (Huang Chongchao) + +Zhejiang University, Hangzhou, Zhejiang, China (Qifan Yang) + +Zhong Shan University, Guangzhou, Guangdong, China (Wang Shousong) + +Lawful Capacity Papers (39 teams) + +Abilene Christian University, Abilene, TX (David Hendricks) + +Asbury College, Wilmore, KY (Kenneth P. Rietz) + +Baylor University, Waco, TX (Frank H. Mathis) + +Beijing University of Chemical Technology, Beijing, China (Zhao Baoyuan) + +Beloit College, Beloit, WI (Philip D. Straffin) + +China University of Mining and Technology, Xuzhou, Jiangsu, China (Zhang Xingyong) + +Drake University, Des Moines, IA (Alexander F. Kleiner) + +Eastern Mennonite University, Harrisonburg, VA (John L. Horst) + +Gettysburg College, Gettysburg, PA (two teams) (James P. Fink) + +Grinnell College, Grinnell, IA (Marc Chamberland) + +Grinnell College, Grinnell, IA (William Case) + +Hefei University of Technology, Hefei, Anhui, China (Youdu Huang) + +Indiana University, Bloomington, IN (Russell Lyons) + +James Madison University, Harrisonburg, VA (James S. Sochacki) + +Macalester College, St. Paul, MN (Daniel Kaplan) + +Michigan State University, E. Lansing, MI (C.R. MacCluer) + +National University of Defence Technology, Chang Sha, Hunan, China (Cheng LiZhi) + +National University of Ireland, Galway, Ireland (Michael P. Tuite) + +Rose-Hulman Institute of Technology, Terre Haute, IN (Aaron D. Klebanoff) + +Seattle Pacific University, Seattle, WA (Steven D.
Johnson) + +Shanghai Jiaotong University, Shanghai, China (Song Baorui) +Shanghai Jiaotong University, Shanghai, China (Zhou Gang) +Trinity University, San Antonio, TX (Jeffrey Lawson) +U.S. Military Academy, West Point, NY (Ed Connors) +University of California, Berkeley, CA (Rainer K. Sachs) +University of Dayton, Dayton, OH (Ralph C. Steinlage) +University of New South Wales, Sydney, Australia (James Franklin) +University of Puget Sound, Tacoma, WA (Robert A. Beezer) +University of Richmond, Richmond, VA (Kathy W. Hoke) +University of Science and Technology of China, Hefei, Anhui, China (Zhou Jingren) +University of Washington, Seattle, WA (Randall J. LeVeque) +Wake Forest University, Winston-Salem, NC (Stephen B. Robinson and Edward Allen) +Western Washington University, Bellingham, WA (Igor Averbakh) +Wheaton College, Wheaton, IL (Paul Isihara) +Wisconsin Lutheran College, Milwaukee, WI (Marvin C. Papenfuss) +Worcester Polytechnic Institute, Worcester, MA (Bogdan Vernescu) +Xidian University, Xian, Shaanxi, China (Mao Yongcai) +Youngstown State University, Youngstown, OH (Thomas Smotzer) + +Pollution Detection Papers (9 teams) +East China University of Science and Technology, Shanghai, China (Shao Nianci) +Eastern Oregon State College, LaGrande, OR (Jenny Woodworth) +Harvey Mudd College, Claremont, CA (Michael Moody) +Humboldt State University, Arcata, CA (Margaret Lang) +South China University of Technology, Guangzhou, Guangdong, China (He Chunxiong) +Tsinghua University, Beijing, China (Ye Jun) +University of Science and Technology of China, Hefei, Anhui, China (Du Zheng) +Xian Jiaotong University, Xian, Shaanxi, China (two teams) (Dai YongHong) + +# Awards and Contributions + +Each participating MCM advisor and team member received a certificate signed by the Contest Director and the appropriate Head Judge. 
+ +INFORMS, the Institute for Operations Research and the Management Sciences, gave a cash award and a three-year membership to each member of the teams from the University of Puget Sound (Asteroid Impact Problem), Duke University (Lawful Capacity Problem), and Zhejiang University (Ground Pollution Problem). Moreover, INFORMS gave free one-year memberships to all members of Meritorious and Honorable Mention teams. + +The Society for Industrial and Applied Mathematics (SIAM) designated as SIAM Winners two teams from Harvey Mudd College—Mazzoni et al. (Asteroid Impact Problem) and Rudel et al. (Lawful Capacity Problem)—and the team from Earlham College (Ground Pollution Problem). Each of the team members was awarded a $300 cash prize. Their school was given a framed certificate hand-lettered in gold leaf. The Harvey Mudd team presented its results at a special Minisymposium of the SIAM Annual Meeting in Atlanta, GA in May. + +The Mathematical Association of America (MAA) designated as MAA Winners the teams from the University of Alaska Fairbanks (Lawful Capacity Problem) and Earlham College (Ground Pollution Problem). Both teams presented their solutions at a special session of the MAA Mathfest in Providence, RI in August. Each team member was presented a certificate by MAA President-Elect Thomas Banchoff. + +# Judging + +Director + +Frank R. Giordano, COMAP, Lexington, MA + +Associate Directors + +Robert L. Borrelli, Mathematics Dept., Harvey Mudd College, Claremont, CA + +William Fox, Chair, Dept. of Mathematics, Francis Marion University, Florence, SC + +# Asteroid Impact Problem + +Head Judge + +Patrick Driscoll, Dept. of Mathematical Sciences, U.S. Military Academy (INFORMS) + +Associate Judges + +Ron Barnes, University of Houston-Downtown, Houston, TX (MAA) + +Paul Boisen, Defense Dept., Ft.
Meade, MD + +Courtney Coleman, Mathematics Dept., Harvey Mudd College, Claremont, CA + +Lisette de Pillis, Mathematics Dept., Harvey Mudd College, Claremont, CA + +Patrick Driscoll, Dept. of Mathematical Sciences, U.S. Military Academy, West Point, NY (INFORMS) + +Ben Fusaro, Mathematics Dept., Florida State University, Tallahassee, FL (SIAM/MAA) + +Richard Haberman, Mathematics Dept., Southern Methodist University, Dallas, TX (SIAM) + +Mario Juncosa, RAND Corporation, Santa Monica, CA + +Mark Levinson, Edmonds, WA (SIAM) + +Keith Miller, National Security Agency, Ft. Meade, MD + +Jack Robertson, Head, Mathematics and Computer Science, Georgia College and State University, Milledgeville, GA (MAA) + +Lee Seitelman, Glastonbury, CT + +Robert M. Tardiff, Dept. of Mathematical Sciences, Salisbury State University, Salisbury, MD + +Daniel Zwillinger, Zwillinger & Associates, Arlington, MA + +# Lawful Capacity Problem + +Head Judge + +Maynard Thompson, Mathematics Dept., University of Indiana, Bloomington, IN + +Associate Judges + +John Boland, Center for Industrial and Applied Mathematics (CIAM), University of South Australia, Australia + +Karen Bolinger, Dept. of Mathematics, Clarion University of Pennsylvania, Clarion, PA + +Doug Faires, Dept. of Mathematics and Statistics, Youngstown State University, Youngstown, OH + +Jerry Griggs, Dept. of Mathematics, University of South Carolina, Columbia, SC (SIAM) + +Jeff Hartzler, Dept. of Mathematics, Penn State University, Middletown, PA (MAA) + +Karla Hoffman, Chair, Dept. of Operations Research, George Mason University, Fairfax, VA (INFORMS) + +Daphne Liu, Mathematics Dept., California State University, Los Angeles, CA +Veena Mendiratta, Lucent Technologies, Naperville, IL + +Don Miller, Dept. of Mathematics, St. Mary's College, Notre Dame, IN +Peter Olsen, Charles Stark Draper Lab, Arlington, VA (INFORMS) + +Mark Parker, Dept. of Mathematical Sciences, U.S. 
Air Force Academy, CO (SIAM) + +Catherine Roberts, Northern Arizona University, Flagstaff, AZ (SIAM) + +Michael Tortorella, Lucent Technologies, Holmdel, NJ + +Marie Vanisko, Carroll College, Helena, MT (Triage) + +Martin Wildberger, Electric Power Research Institute, Palo Alto, CA (SIAM) + +# Ground Pollution Problem + +Head Judge + +David C. Arney, Dept. of Mathematical Sciences, U.S. Military Academy + +Associate Judges + +Kelly Black, Mathematics Dept., University of New Hampshire, Durham, NH (Triage) + +James Case, Baltimore, Maryland (SIAM) + +David L. Elliott, Institute for Systems Research, University of Maryland, College Park, MD (SIAM) + +John Kobza, Industrial and Systems Engineering, Virginia Polytechnic Institute and State University, Blacksburg, VA (INFORMS) + +John L. Scharf, Carroll College, Helena, MT + +Kathleen M. Shannon, Salisbury State University, Salisbury, MD (MAA) + +# Triage Session + +# Asteroid Impact Problem + +Head Triage Judge + +Theresa M. Sandifer, Southern Connecticut State University, New Haven, CT + +Associate Judges + +Therese L. Bennett, Southern Connecticut State University, New Haven, CT +Cynthia B. Gubitose, Western Connecticut State University, Danbury, CT +David Hahn + +Ronald E. Kutz, Western Connecticut State University, Danbury, CT + +Val Pinciu, Mathematics Dept., SUNY at Buffalo, Buffalo, NY + +C. Edward Sandifer, Western Connecticut State University, Danbury, CT + +Lawful Capacity Problem + +(all from Mathematics Dept., Carroll College, Helena, MT) + +Head Triage Judge + +Marie Vanisko + +Associate Judges + +Peter Biskis, Terence J. Mullen, and Jack Oberweiser + +Ground Pollution Problem + +(all from Mathematics Dept., University of New Hampshire, Durham, NH) + +Head Triage Judge + +Kelly Black + +Associate Judges + +A.H. Copeland, John B. Geddes, Loren D. Meeker, Kevin Short, Lee L.
Zia + +# Sources of the Problems + +Contributors of the problems were as follows: + +- Asteroid Impact Problem: Jack Robertson, Mathematics Dept., Georgia College and State University. +- Lawful Capacity Problem: Joe Malkevitch, Mathematics Dept., York College, City University of New York. +- Ground Pollution Problem: Yves Nievergelt, Mathematics Dept., Eastern Washington University. + +# Acknowledgments + +The MCM was funded this year by the National Security Agency, whose support we deeply appreciate. The ICM also received major funding from the National Science Foundation. We thank Dr. Gene Berg of NSA for his coordinating efforts. The MCM is also indebted to INFORMS, SIAM, and the MAA, which provided judges and prizes. + +I thank the MCM judges and MCM Board members for their valuable and unflagging efforts. Harvey Mudd College, its Mathematics Dept. staff, and Prof. Borrelli were gracious hosts to the judges. + +# Cautions + +To the reader of research journals: + +Usually a published paper has been presented to an audience, shown to colleagues, rewritten, checked by referees, revised, and edited by a journal editor. Each of the student papers here is the result of undergraduates working on a problem over a weekend; allowing substantial revision by the authors could give a false impression of accomplishment. So these papers are essentially au naturel. Light editing has taken place: minor errors have been corrected, wording has been altered for clarity or economy, and style has been adjusted to that of The UMAP Journal. Please peruse these student efforts in that context. + +To the potential MCM Advisor: + +It might be overpowering to encounter such output from a weekend of work by a small team of undergraduates, but these solution papers are highly atypical. A team that prepares and participates will have an enriching learning experience, independent of what any other team does. + +# Appendix: Successful Participants + +KEY: + +<table>
P = Successful ParticipationA = Asteroid Impact Problem
H = Honorable MentionB = Lawful Capacity Problem
M = MeritoriousC = Ground Pollution Problem
O = Outstanding (published in this special issue)
+ +
INSTITUTIONCITYADVISORABC
ALASKA
Univ. of Alaska FairbanksFairbanksChris HartmanMO
ARIZONA
Northern Arizona Univ.FlagstaffJames W. SwiftH
CALIFORNIA
California Lutheran Univ.Thousand OaksCynthia J. WyelsP
Calif. Poly. State Univ.San Luis ObispoThomas O'NeilM,M
California State Univ.BakersfieldMaureen E. RushP
SeasideDaniel M. FernandezPH
NorthridgeGholam-Ali ZakeriPP
FullertonMario U. MartelliPP
Harvey Mudd CollegeClaremontMichael MoodyO,OM
Ran Libeskind-HadasO
Humboldt State Univ.ArcataMargaret LangM
Loyola Marymount Univ.Los AngelesThomas M. ZachariahP,P
Pepperdine UniversityMalibuJane GanskeP
Pomona CollegeClaremontRichard ElderkinH
Shasta CollegeReddingCathy AndersonP
Sonoma State UniversityRohnert ParkClement E. FalboP
Univ. of CaliforniaBerkeleyRainer K. SachsOM
COLORADO
Colorado CollegeColorado SpringsSteven JankeP
Mesa State CollegeGrand JunctionEdward Bonan-HamadaH,P
Metro. State CollegeDenverThomas E. KelleyP
Regis UniversityDenverLinda DuchrowP
U.S. Air Force AcademyUSAF AcademyJeff BolengP
Capt Kirsten MesserH
Dawn StewartM
Univ. of ColoradoBoulderAnne DoughertyH
Colorado SpringsHolly ZulloM
Shannon MichauxP
Univ. of Southern Colo.PuebloBruce N. LundbergP
Paul R. ChaconP
CONNECTICUT
Connecticut CollegeNew LondonKathy McKeonP
Sacred Heart Univ.FairfieldAntonio MagliaroP
Southern Conn. State Univ.New HavenRoss GingrichH
Theresa SandiferH
Western Conn. State Univ.DanburyEd SandiferH
Paul HinesH
DISTRICT OF COLUMBIA
Georgetown UniversityWashingtonAndrew VogtPH
FLORIDA
Florida Southern CollegeLakelandAllen WuertzP
Jacksonville UniversityJacksonvilleLucinda B. SonnenbergP
Paul R. SimonyH
Robert A. HollisterP
Stetson UniversityDelandDaniel R. PlanteH
University of North FloridaJacksonvillePeter A. BrazaP
GEORGIA
Georgia College & State Univ.MilledgevilleCraig TurnerP
State Univ. of West GeorgiaCarrolltonScott GordonP
IDAHO
Boise State UniversityBoiseStephen BrillPP
Idaho State UniversityPocatelloJim HoffmanH
ILLINOIS
Greenville CollegeGreenvilleGalen R. PetersM,P
Illinois Wesleyan UniversityBloomingtonZahia DriciP
Northern Illinois UniversityDekalbHamid BelloutP
Wheaton CollegeWheatonPaul IsiharaPM
INDIANA
Earlham CollegeRichmondCharlie PeckP
Mic JacksonO
Goshen CollegeGoshenDavid HousmanP
Patricia OakleyPH
Indiana UniversityBloomingtonRussell LyonsM,H
South BendMorteza Shafii-MousaviP
Steven ShoreP
Rose-Hulman Inst. of Tech.Terre HauteFrank YoungMP
Aaron D. KlebanoffO,M
Saint Mary's CollegeNotre DameJoanne SnowHH
Wabash CollegeCrawfordsvilleEsteban PoffaldP,P
IOWA
Drake UniversityDes MoinesAlexander F. KleinerM
Luz M. De AlbaP
Grinnell CollegeGrinnellWilliam CaseM,P
Marc ChamberlandHM
Iowa State UniversityAmesStephen J. WillsonH,H
Luther CollegeDecorahReginald D. LaursenPH
Marycrest Int'l Univ.DavenportJeff DickersonP
Mt. Mercy CollegeCedar RapidsKent KnoppP,P
Simpson CollegeIndianolaDavid OlsgaardHP
M.E. Murphy WaggonerHH
Univ. of Northern IowaCedar FallsGregory M. DotsethP
Timothy L. HardyP
Wartburg CollegeWaverlyMariah BirgenP
KANSAS
Baker UniversityBaldwin CityBob FragaPP
Bethel CollegeNorth NewtonMonica MeissenP,P
KENTUCKY
Asbury CollegeWilmoreKenneth P. RietzMM
Brescia CollegeOwensboroChris A. TiahrtP
LOUISIANA
McNeese State Univ.Lake CharlesKaren AucoinP
Robert DoucetteP
MAINE
Colby CollegeWatervilleTom BergerHP
MARYLAND
Loyola CollegeBaltimoreJohn HennesseyPP
Mt. St. Mary's CollegeEmmitsburgFred J. PortierP
John AugustP
Salisbury State Univ.SalisburyMichael BardzellP
MASSACHUSETTS
Holy Cross CollegeWorcesterJohn AndersonP
Simon's Rock CollegeGreat BarringtonAllen B. AltmanP,P
Smith CollegeNorthamptonYung Pin-ChenP
Yung-Pin Chen and
Cristina SuarezP
Univ. of MassachusettsLowellJames Kiwi Graham-EagleH
Lou RossiP
W. New England Coll.SpringfieldEric HaffnerP
Williams CollegeWilliamstownStewart JohnsonP
Worcester Poly. Inst.WorcesterArthur C. HeinricherP
Bogdan VernescuM
MICHIGAN
Albion CollegeAlbionScott DilleryPP
Eastern Michigan UniversityYpsilantiChristopher E. HeeP,P
Hillsdale CollegeHillsdaleJohn P. BoardmanH,P
Kettering UniversityFlintCraig AndresP
Lawrence Technological Univ.SouthfieldHoward WhitstonH
Ruth G. FavroH
Scott SchneiderH
Michigan State UniversityE. LansingC. R. MacCluerM,H
George StockmanP
Siena Heights CollegeAdrianRick TrujilloP
Univ. of Michigan-DearbornDearbornDavid A. JamesH
MINNESOTA
Augsburg CollegeMinneapolisRebekah ValdiviaPP
Macalester CollegeSt. PaulSusan FoxP
Daniel KaplanM
University of MinnesotaMorrisPeh NgH,P
Winona State UniversityWinonaBarry A. PerattP
MISSOURI
Missouri Southern State Coll.JoplinPatrick CassensPH
Northeast Missouri State Univ.KirksvilleSteven J. SmithP
Northwest Missouri State Univ.MaryvilleRussell N. EulerPP
Southeast Missouri State Univ.Cape GirardeauRobert W. SheetsP
Univ. of MissouriRollaMichael G. HilgersH
Wentworth Military AcademyLexingtonJacque MaxwellP,P
MONTANA
Carroll CollegeHelenaTerence J. MullenH
Jack OberweiserP
Kevin WolkaP
NEBRASKA
Nebraska Wesleyan Univ.LincolnP. Gavin LaRoseH
NEVADA
University of NevadaRenoMark M. MeerschaertH
NEW JERSEY
Kean UniversityUnionPablo ZafraP
Montclair State UniversityUpper MontclairMary Lou WestM
Michael JonesP
NEW MEXICO
New Mexico State UniversityLas CrucesMarcus S. CohenH
NEW YORK
Ithaca CollegeIthacaJohn C. MaceliP
Nazareth CollegeRochesterKelly M. FullerH,P
Joe LanzafameP
Niagara UniversityNiagaraSteven L. SiegelP
St. Bonaventure Univ.St. BonaventureMaureen P CoxP
Albert G. WhiteP
U.S. Military AcademyWest PointEd ConnorsM
James S. RolfH
Charles C. TappertM
Joanne E. WalserP
Wells CollegeAuroraCarol C. ShilepskyH
Westchester Comm. Coll.ValhallaRowan LindleyP
Sheela WhelanP
NORTH CAROLINA
Appalachian State Univ.BooneHolly HirstH
Duke UniversityDurhamDavid P. KrainesO
Elon CollegeElon CollegeTodd LeePP
Mount Olive CollegeMt OliveOllie J RoseP
N.C. School of Sci. & MathDurhamDot DoyleMO
North Carolina State Univ.RaleighJoyce HatchH
Robert T. RamsayPH
Salem CollegeWinston-SalemPaula G. YoungPP
Univ. of N. CarolinaWilmingtonRussell L. HermanHP
Wake Forest UniversityWinston-SalemStephen B. Robinson and Edward AllenM
Western Carolina Univ.CullowheeJulie BarnesH
Kathy IveyH,P
OHIO
Baldwin-Wallace CollegeBereaSusan D PenkoP
College of WoosterWoosterReuben SettergrenM
Hiram CollegeHiramBrad GubserAP
Miami UniversityOxfordDoug WardH
University of DaytonDaytonRalph C. SteinlageM
Wright State UniversityDaytonThomas P. SvobodnyH
Youngstown State Univ.YoungstownStephen HanzelyH
Robert KramerH
Scott MartinP
Thomas SmotzerHMH
OKLAHOMA
Southeastern Okla. State Univ.DurantJohn McArthurP
Karla OtyP
OREGON
Eastern Oregon State CollegeLaGrandeDavid AllenP
John ThurberH
Anthony TovarP
Jenny WoodworthM
Lewis and Clark CollegePortlandRobert W. OwensMP
Southern Oregon State CollegeAshlandKemble R. YatesH
PENNSYLVANIA
Carnegie Mellon UniversityPittsburghWalkerP
Clarion Univ. of PennsylvaniaClarionMichael BarrettP
Jon BealP
Bill KrughH
Dickinson CollegeCarlisleLorelei KossP
Gettysburg CollegeGettysburgJames P. FinkM,M
Haverford CollegeHaverfordStephanie SingerP
Messiah CollegeGranthamLamarr C. WidmerH
Shippensburg UniversityShippensburgCheryl OlsenP
Westminster CollegeNew WilmingtonBarbara FairesM,P
Richard SprowP
SOUTH CAROLINA
Charleston Southern Univ.CharlestonStan PerrineP
Francis Marion UniversityFlorenceC. AbbottP
Univ. of South CarolinaAikenLaurene FausettH
Nieves A. McNultyM
SOUTH DAKOTA
Northern State UniversityAberdeenA.S. ElkhaderP
TENNESSEE
Carson-Newman CollegeJefferson CityCatherine KongH
Christian Brothers UniversityMemphisCathy W. CarterP
Lipscomb UniversityNashvilleMark A. MillerH
TEXAS
Abilene Christian UniversityAbileneDavid HendricksM
Baylor UniversityWacoFrank H. MathisM
Southwestern UniversityGeorgetownT. SheltonP
Trinity UniversitySan AntonioJeffrey LawsonMM
Fred LoxomH
University of DallasIrvingPete McGillP
University of HoustonHoustonBarbara Lee KeyfitzP
University of Texas at El PasoEl PasoMichael D. O'NeillP
UTAH
University of UtahSalt Lake CityDon H. TuckerH
VERMONT
Johnson State CollegeJohnsonGlenn D. SproulPP
VIRGINIA
College of William & MaryWilliamsburgMichael TrossetH
Eastern Mennonite Univ.HarrisonburgJohn L. HorstP
Governor's School for Gov. & Int'l StudiesRichmondJohn BarnesH
Crista HamiltonM,M
James Madison UniversityHarrisonburgDonna FengyaP
James S. SochackiM
Thomas Jefferson H.S. for Sci. & Tech.AlexandriaJohn DellH
University of RichmondRichmondKathy W. HokeM
Virginia Western Comm. Coll.RoanokeRuth ShermanM,P
WASHINGTON
Pacific Lutheran UniversityTacomaRachid BenkhaltiO,M
Seattle Pacific UniversitySeattleSteven D. JohnsonM
University of Puget SoundTacomaRobert A. BeezerM,H
Perry FizzanoO,P
Univ. of WashingtonSeattleRandall J LeVequeM
Washington State Univ.PullmanRichard GomulkiewiczP
Western Washington Univ.BellinghamIgor AverbakhMMP
Saim UralPP
WISCONSIN
Beloit CollegeBeloitPhilip D. StraffinMM
Northcentral Technical Coll.WausauFrank J. FernandesP
Robert J. HenningP
Univ. of WisconsinMadisonDavid MoultonP
PlattevilleMike PennP
Sheryl WillsH
Stevens PointNathan WetzelH
Robert KrecznerP
Wisconsin Lutheran CollegeMilwaukeeMarvin C. PapenfussM
AUSTRALIA
Curtin University of Tech.PerthYong Hong WuP
Univ. of New South WalesSydneyDr James FranklinM
Univ. of Southern QueenslandToowoombaChristopher J. HarmanH
Tony RobertsP
CANADA
Brandon UniversityBrandon, MBDoug PickeringM
Memorial Univ. of NfldSt. John's, NFAndy FosterP
University of CalgaryCalgary, ABDavid R. WestbrookP
University of SaskatchewanSaskatoon, SKProf. Raj SrinivasanH
Univ. of Western OntarioLondon, ONC. Lindsay DennisonP
Peter H. PooleHP
York UniversityToronto, ONJuris StepransH,P
CHINA
Anhui UniversityHefei, AnhuiSjyang ShangjunH
Wang HaixianH
Anhui UniversityHefei, AnhuiZhang QuanbinP
Beijing Institute of Tech.BeijingYang GuoxiaoP
Xiao Di CuiH
Beijing Union UniversityBeijingRen KaiLongH
Wang XinfengH
Xing Chun FengP
Zeng QingliP
Beijing U. of Aero. & Astro.BeijingPeng LinpingP
Beijing Univ. of Chem. Tech.BeijingLiu DaminP
Shi XiaodingH
Zhao BaoyuanM
Beijing U. of Post & Telecom.BeijingHe ZuguoMH
Luo ShoushanP,P
Central-South Inst. of Techn.Hengyang, HunanLiu YachunP,P
China Univ. of Mining & Techn.Xuzhou, JiangsuZhang XingyongM
Zhou ShengwuH
Chongqing UniversityChongqingFu LiH
Gong QuP
He ZhongshiH
Liu QiongsenP
Yang DaDiH
Dalian University of Tech.Dalian, LiaoningHe MingfengPP
Yu HongquanH,H
Zhao LizhongHP
E. China Univ. of Sci. and Tech.ShanghaiLiu ZhaohuiH
Lu XiwenH
Shao NianciHM
Lu XiwenP
Lu YuanhongH
Exp'l H.S. Beijing U. NormalBeijingHan LeqingP
ZhangjilinP
First Middle School of Jiading DistrictJiading, ShanghaiXiexilinH
XurongH
Fudan UniversityShanghaiWan HuilingP
Zhou XiHP
Cai ZhijieP,PH
Harbin Engineering Univ.Harbin, HeilongjiangLuo YueshengH
Shen JihongHP
Shi JiuyuP
Zhang XiaoweiP
Harbin Inst. of TechnologyHarbin, HeilongjiangLiu JinH
Shang ShoutingHPH
Wang YongM
Yu XiujuanH
Hefei University of Tech.Hefei, AnhuiBao JieM
Du XueqiaoP
Zhou YongwuH
Huang YouduM
JianPing Senior High SchoolShanghaiHuang LiangfuH
Zhu WeizhengH
Jilin UniversityChangchun, JilinLu Xian VuiP
Jilin University of Tech.Changchun, JilinFang PeichenP
Liu XiaoyuH
Zhang PeiyuanH
Jinan UniversityGuangzhou, GuangdongYe ShiqiP
Fan SuohaiH
Mechanical Eng'ng Coll.ShijiazhuangYang PinghuaH
Zhao RuiqingH
Nanjing Univ. of Sci. & Tech.Nanjing, Jiang SuWu Xing minM
Yang XiaopingP
Nankai UniversityTianjinWang BinP
Huang WuqunH
Zhou XingWeiP
Chen ZenzqiangP
National U. of Defence Tech.Changsha, HunanCheng LiZhiM
Wu MengDaM
Northwestern Polytech. Univ.Xian, ShaanxiLu Xiao DongH
Nie Yu FengH
Wang MingYuH
Xu WeiH
Peking UniversityBeijingLei GongyanH,PP
Deng MinghuaP,PP
Shandong UniversityJinan, ShandongCui YuquanP
Shanghai Jiaotong Univ.ShanghaiHuang JianguoH
Song BaoruiHM
Yang BingP
Zhou GangMH
Shanghai Maritime Univ.ShanghaiDing SongkangP
Sheng ZiningP
Shanghai Normal Univ.ShanghaiZhu DetongP,P
Guo ShenghuanP
South China Univ. of Tech.Guangzhou, GuangdongFu HongzhuoH
Hao ZhifengH
He ChunxiongHM
Xie LejunM
Zhu FengfengH
Southeast UniversityJiangsu, NanjingHe Lin and Wei FangfangP
Huang JunH
Zhou Jian huaH
Zhu Dao-YuanP
Tsinghua UniversityBeijingHu ZhimingP,PP
Ye JunH,PM
Univ. of Elec. Sci. & Tech.ChengduDu HongfeiP
Wang JianguoH
Xu QuanzhiP
Zhong ErjieH
U. of Sci. and Tech. of ChinaHefei, AnhuiGao JieH
Xu QingqingH
Du ZhengM
Guo QujiP
Lai JunwenH
Zhou JingrenM
Wuhan U. of Hydraul. & Eng'ngWuhan, HubeiCheng GuixingH
Huang ChongchaoM,H
Peng ZhuzengH
Xian Jiaotong UniversityXian, ShaanxiHe XiaoliangHP
Zhou YicangH,P
Dai YongHongM,M
Xidian UniversityXian, ShaanxiHu YupuH
Li JunminH
Mao YongcaiM
Zhejiang UniversityHangzhou, ZhejiangYang QifanMP
He YongHH
Zhang ChongO
Zhengzhou Electr. Power Coll.Zhengzhou, HenanCheng RuizhongH
Cui YingjianP
Zhengzhou Univ. of Tech.Zhengzhou, HenanWang JinlingH
Wang ShubinP
Zhong Shan Univ.Guangzhou, GuangdongJiang XiaolongH
Wang ShousongM
Wang Yuan ShiP
Yang Yin ZhaoH
FINLAND
Paivola CollegeValkeakoskiMerikki LappiP,P
HONG KONG
Hong Kong Baptist Univ.KowloonChong Sze TongP,P
IRELAND
National Univ. of IrelandGalwayMichael P. TuiteM
Martin MeereM
Trinity College DublinDublinTimothy G. MurphyM,H
University College CorkCorkPatrick FitzpatrickP
Jim GrannellP
Michael QuinlanH
University College DublinBelfield, DublinTed CoxH
Maria MeehanH,P
LITHUANIA
Vilnius UniversityVilniusRicardas KudzmaP
Antanus MitasiusanasP
Saulius RagaisisP
Algirdas ZabulionisP
SOUTH AFRICA
University of StellenboschMatielandJ.H. Van VuurenP
+ +# Acknowledgment + +The editor wishes to thank Chia Tzun Goh '01 of Beloit College for his help in identifying family names of team advisors from China, for whom the family name is listed first. + +# Deep Impact + +Dominic Mazzoni + +Matthew Fluent + +Joel Miller + +Harvey Mudd College + +Claremont, CA 91711 + +Advisor: Michael Moody + +# Abstract + +We consider the impact of a $1,000\mathrm{m}$ -diameter asteroid with the South Pole. Impacts of this magnitude can have substantial effects, including earthquakes and tsunamis on a regional scale and the possibilities of global climatic change and catastrophic agricultural damage from dust ejected into the atmosphere. + +Luckily, an Antarctic collision would result in a far less disastrous scenario. By modeling the possible trajectories of the asteroid, we determined that the angle of incidence would be relatively small, resulting in a smaller, more shallow crater. Since Antarctica is covered by a thick ice cap, very little dust would be ejected into the atmosphere. The heat of the collision would melt an insignificant amount of ice. The worst scenario would be if the shock wave created by the impact resulted in a large tsunami, so we predict which coastal areas would be flooded. + +# Initial Assumptions + +1. The asteroid is spherical, is $1,000 \mathrm{~m}$ in diameter, has a typical composition and density, and strikes the Earth at the South Pole. +2. The asteroid originated in our solar system and so before the collision was orbiting the Sun in the same plane as the Earth [Transcript—Plane of the Solar System 1996]. +3. The only bodies significantly affecting the trajectory of the asteroid are the Sun, the Earth, and the Moon. The trajectories of the four bodies can be predicted using a Newtonian model of gravitation. + +4. Near the South Pole, the Antarctic ice cap is uniformly $2\mathrm{km}$ deep, has roughly constant density, and is at $-76^{\circ}\mathrm{C}$ everywhere. 
# Properties of the Asteroid

# Location, Angle, and Velocity of Impact

We investigate the relative probability of impacting at the South Pole vs. elsewhere. So we simulate the motions of the Sun, Earth, Moon, and asteroid, based on the Newtonian model of gravitation, in which

$$
F = \frac{G m_1 m_2}{d^2}
$$

describes the magnitude of the gravitational force $F$ between two masses, $m_1$ and $m_2$, separated by a distance $d$. The direction of the force is along the straight line connecting the centers of mass of the two objects. The universal gravitational constant $G$ has the value $6.67259 \times 10^{-20} \, \mathrm{km}^3 \, \mathrm{s}^{-2} \, \mathrm{kg}^{-1}$. Gravitational force accelerates a body according to $\vec{a} = \vec{F} / m$. This acceleration changes the body's velocity $\vec{v}$, which in turn affects the body's position $\vec{x}$.

We use a time-discretized numerical simulation. The location $\vec{x}_{i,t+\Delta t}$ of a body $i$ at time $t + \Delta t$ is calculated using the locations and masses of the other planetary bodies in addition to the location, velocity, and mass of body $i$ at time $t$. In particular, we perform the following calculations on each body in the system:

$$
\vec{F} = \sum_{j,\, j \neq i} \frac{G m_i m_j}{|\vec{x}_{i,t} - \vec{x}_{j,t}|^2} \times \frac{\vec{x}_{j,t} - \vec{x}_{i,t}}{|\vec{x}_{j,t} - \vec{x}_{i,t}|},
$$

$$
\vec{a}_{i,t+\Delta t} = \frac{\vec{F}}{m_i},
$$

$$
\vec{v}_{i,t+\Delta t} = \vec{v}_{i,t} + \vec{a}_{i,t+\Delta t} \, \Delta t,
$$

$$
\vec{x}_{i,t+\Delta t} = \vec{x}_{i,t} + \vec{v}_{i,t+\Delta t} \, \Delta t + \frac{1}{2} \vec{a}_{i,t+\Delta t} \, (\Delta t)^2.
$$

The Sun, Earth, and Moon initially have the characteristics in Table 1 [Lide 1992, 14-26, 14-27].
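The update scheme above can be sketched in a few lines of code. The following is a minimal illustrative sketch, not the authors' original program: the body representation and the `step` function are our own, and it mirrors the equations above exactly (including the use of the updated velocity in the position update).

```python
import math

G = 6.67259e-20  # universal gravitational constant, km^3 s^-2 kg^-1

def step(bodies, dt):
    """Advance every body by one time step dt (seconds).
    Each body is a dict with mass "m" (kg), position "x" (km), velocity "v" (km/s)."""
    # Accelerations from Newtonian gravity, all evaluated at time t.
    acc = []
    for i, bi in enumerate(bodies):
        a = [0.0, 0.0, 0.0]
        for j, bj in enumerate(bodies):
            if j == i:
                continue
            d = [bj["x"][k] - bi["x"][k] for k in range(3)]  # points toward body j
            r = math.sqrt(d[0] ** 2 + d[1] ** 2 + d[2] ** 2)
            for k in range(3):
                a[k] += G * bj["m"] * d[k] / r ** 3          # G m_j / r^2 times unit vector
        acc.append(a)
    # Velocity and position updates, mirroring the paper's equations.
    for bi, a in zip(bodies, acc):
        bi["v"] = [v + ak * dt for v, ak in zip(bi["v"], a)]
        bi["x"] = [x + vk * dt + 0.5 * ak * dt ** 2
                   for x, vk, ak in zip(bi["x"], bi["v"], a)]
```

Iterating `step` with $\Delta t = 10$ s reproduces the simulation loop; the collision test (distance less than the sum of the radii) would be checked after each step.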
+ +We choose our coordinate system with the Sun at the origin and both the Earth and the Moon in the $xy$ -plane. By Assumption 1, the asteroid is spherical with diameter $1,000\mathrm{m}$ . Thus, it has volume of $V_{\mathrm{ast}} = \frac{4}{3}\pi (0.5\mathrm{km})^3$ , or $0.524\mathrm{km}^3$ . A typical asteroid has density $\rho_{\mathrm{ast}} = 2.5\times 10^{12}\mathrm{kg~km}^{-3}$ [Toon et al. 1997, 44]. Thus, the asteroid has mass of $m_{\mathrm{ast}} = 1.31\times 10^{12}\mathrm{kg}$ . + +We distinguish between asteroids that approach the Earth from within the solar system plane (such as ones from the asteroid belt of the solar system) and those that approach from outside that plane. Would an asteroid approaching + +Table 1. Mass, radius, position, and speed of the Sun, Earth, and Moon. + +
SunEarthMoon
mi1.99 × 1030 kg5.97 × 1024 kg7.35 × 1023 kg
ri6.96 × 105 km6.38 × 103 km1.74 × 103 km
(0,0,0) kmx̅Sun + (1.50 × 108,0,0) kmx̅Earth + (0,3.84 × 105,0) km
v(0,0,0) km s-1vSun + (0,29.8,0) km s-1vEarth + (-1.02,0,0) km s-1
from outside the plane be more likely to hit the South Pole? To find out, we simulate both kinds of asteroids.

We place the asteroid at a random location $1.54 \times 10^{6} \mathrm{~km}$ (about four lunar distances) from the Earth. We give the asteroid the same velocity that the Earth has relative to the Sun, as though the asteroid were falling through an orbit that coincides with that of the Earth. We put the asteroid on a collision course with the Earth by adding $10 \mathrm{~km} \mathrm{~s}^{-1}$ to the asteroid's velocity in a direction towards a random point no more than $9.57 \times 10^{3} \mathrm{~km}$ from the center of the Earth (i.e., the asteroid is approaching a point contained within a sphere centered on the Earth with a radius 1.5 times that of the Earth).

We ran the simulation with $\Delta t = 10$ s. A collision occurred if the distance between the asteroid and the Earth was less than the sum of their radii. The angle that the vector $\vec{x}_{\mathrm{ast}} - \vec{x}_{\mathrm{Earth}}$ makes with the $xy$-plane determines the latitude of the impact:

$$
\Delta \vec{x} = \vec{x}_{\mathrm{ast}} - \vec{x}_{\mathrm{Earth}},
$$

$$
\mathrm{latitude} = \arctan\left(\frac{\Delta \vec{x} \cdot (0,0,1)}{\sqrt{(\Delta \vec{x} \cdot (1,0,0))^{2} + (\Delta \vec{x} \cdot (0,1,0))^{2}}}\right).
$$

Similarly, we calculate the angle of incidence and the velocity of the impact from the vectors $\vec{x}_{\mathrm{ast}} - \vec{x}_{\mathrm{Earth}}$ and $\vec{v}_{\mathrm{ast}} - \vec{v}_{\mathrm{Earth}}$, since the angle that the velocity vector makes with the tangent plane of the Earth's surface at the point of impact is the angle of the impact, and the magnitude of this vector is the speed of impact:

$$
\Delta \vec{x} = \vec{x}_{\mathrm{ast}} - \vec{x}_{\mathrm{Earth}},
$$

$$
\Delta \vec{v} = \vec{v}_{\mathrm{ast}} - \vec{v}_{\mathrm{Earth}},
$$

$$
\mathrm{angle} = -\arcsin\left(\frac{\Delta \vec{x}}{|\Delta \vec{x}|} \cdot \frac{\Delta \vec{v}}{|\Delta \vec{v}|}\right),
$$

$$
\mathrm{speed} = |\Delta \vec{v}|.
$$

We ran 20,000 simulations of the asteroid, half for an asteroid approaching from within the solar system plane and half for an asteroid approaching from outside the plane. In both cases, slightly under one-fourth of the asteroids avoided colliding with the Earth. Figure 1 shows the distribution by latitude of those hitting the Earth. For either approach, there is about a $1\%$ chance of an asteroid impacting above $80^{\circ}$ S.

![](images/536125674551c8a75c065e7521a39fccc67a6c763e082ba90f851f4b9eb7472b.jpg)
Figure 1. Probability of impact for asteroid from within the solar system plane (left) and from outside (right).

![](images/54b564d44ed27e767cae9323a404e50c6ec8e0f58470463c28a2a80038b1fc1e.jpg)

![](images/3caa966d94df9e9b33b4bd3de9d3ce6b4b378eba155c373c9ba1e93d4515dc8c.jpg)
Figure 2. Angle of incidence for asteroid from within the solar system plane (left) and from outside (right).

![](images/a6f15db900d36093ae0facf6049a3cb66a424e94a7fa2f776883a1d6f8c00221.jpg)

However, an asteroid from within the solar system plane impacting in the highest latitude ranges is more likely to impact at a shallower angle: $18^{\circ} \pm 1^{\circ}$ vs.
$41^{\circ} \pm 5^{\circ}$ (see Figure 2).

For asteroids with radii exceeding $100\mathrm{~m}$, air resistance is negligible; such asteroids hit the ground with most of their original kinetic energy [Hills and Mader 1995]. Our simulations show impact at a relative speed of $15\mathrm{~km/s}$, consistent with the literature [Chapman and Morrison 1994, 34].

The calculations do not take into account the Earth's tilt relative to the $xy$-plane (about $22^{\circ}$), since we have no information about the time of year of the collision. Hence, the probability of impact at the South Pole cannot be taken simply by reading the height of the bar in Figure 1, nor can the expected angle of incidence be read directly from Figure 2.

# Dynamics of Collision

Using the calculated mass of $1.31 \times 10^{12} \mathrm{~kg}$ and the velocity of $15 \mathrm{~km/s}$, we find that the energy of the asteroid reaching the Earth's atmosphere is

$$
E = \frac{m v^2}{2} = \frac{(1.31 \times 10^{12} \mathrm{~kg})(15 \mathrm{~km/s})^2}{2} = 1.5 \times 10^{20} \mathrm{~Joules}.
$$

This is equivalent to $3.5 \times 10^{4}$ megatons (MT) of TNT. (If the collision were with a comet rather than an asteroid, the speed of impact would be $50~\mathrm{km/s}$; but because of the comet's lesser density of $1~\mathrm{g/cm^3}$, the energy of impact would be slightly less [Toon et al. 1997, 44].)

The ice cover over Antarctica makes predicting the size of the crater problematic. For impacts on land, Toon et al. [1997, 45] give the following two formulas for the expected value of the diameter of the crater (km):

$$
D = 0.64 \left(\frac{Y}{\rho_t}\right)^{1/3.4} \left(\frac{20000}{v_i}\right)^{0.1} (\cos\theta)^{0.5} \left(\frac{\rho_i}{\rho_t}\right)^{0.083},
$$

$$
D = 0.53\, c_f \left(\frac{Y}{\rho_t}\right)^{1/3.4} (\cos\theta)^{2/3},
$$

where

$Y$ is the energy in megatons,

$\rho_t$ is the density of the target,

$\rho_i$ is the density of the impactor,

$v_i$ is the speed of the impactor,

$c_f$ is a correction factor with value approximately 1.37, and

$\theta$ is the angle of impact.

The value $\theta = 18^{\circ}$ from our model for an asteroid from within the plane of the solar system is probably a little low, because asteroids are likely to have some perturbation from the plane. Hence we use $\theta = 30^{\circ}$. With a density of $0.9~\mathrm{g/cm^3}$ for ice, both formulas give a crater diameter of about $15\mathrm{~km}$.

Since a crater for a "typical" asteroid has a depth-to-diameter ratio of about 1:5 or 1:7 [Terrestrial Impact Craters 1999], a crater of this diameter would have a depth of roughly 2 to 3 km. However, a "typical" asteroid does not hit at as shallow an angle as the asteroid of our model, which plows through a large swath of the ice to create a crater that is wider but not as deep. If, despite its reduced downward momentum, the asteroid were to break through the ice, it would encounter a much denser bedrock 2 to $2.5\mathrm{~km}$ deep; so we do not expect the crater to be much deeper than $2\mathrm{~km}$.

Since ice melts more readily than rock, we could be underestimating the size of the crater. Ice around the South Pole has a depth of $2\mathrm{~km}$ and an average temperature of $-76^{\circ}\mathrm{C}$. A crater of diameter $15\mathrm{~km}$ and average depth just $1\mathrm{~km}$ would displace about $175\mathrm{~km}^3$ of ice. However, melting so much ice is unlikely. Also, the collision would send a large amount of ice/water, and some bedrock, into the atmosphere, but less rock than if the asteroid hit another continent.

# Effects on Antarctica

The effects of an asteroid hitting Antarctica would be profoundly different from those of a collision elsewhere.
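The impact-energy and crater-diameter figures above can be checked with a short script. This is an illustrative sketch, not part of the paper: the megaton conversion ($1$ MT of TNT $\approx 4.2 \times 10^{15}$ J) and the unit conventions assumed for the second crater formula ($Y$ in megatons, $\rho_t$ in g/cm³, $D$ in km) are our assumptions.

```python
import math

# Numbers from the text: asteroid mass 1.31e12 kg, impact speed 15 km/s.
m = 1.31e12              # kg
v = 15e3                 # m/s
E = 0.5 * m * v ** 2     # impact energy in joules (about 1.5e20 J)
megatons = E / 4.2e15    # assumed conversion: 1 MT of TNT ~ 4.2e15 J

def crater_diameter(Y, rho_t, theta_deg, c_f=1.37):
    """Second crater formula: D = 0.53 c_f (Y / rho_t)^(1/3.4) (cos theta)^(2/3).
    Assumed units: Y in megatons, rho_t in g/cm^3; result in km."""
    return (0.53 * c_f * (Y / rho_t) ** (1 / 3.4)
            * math.cos(math.radians(theta_deg)) ** (2 / 3))
```

With $Y \approx 3.5 \times 10^4$ MT, $\rho_t = 0.9$ g/cm³ for ice, and $\theta = 30^{\circ}$, `crater_diameter` returns roughly 15 km, consistent with the text.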
Although Antarctica is far from most centers of population, the melting of the ice cap is a concern. Our calculations use the data of Table 2. + +Table 2. Data on the Antarctic ice cap. + +
FeatureSizeSource
Volume30 × 106km3Virtual Antarctica [1999]
Area14 × 106km2World Factbook [1998]
Avg. thickness2 kmComputerworld Antarctica [1999]
Avg. temperature-76°CAssumption 4
+ +In theory, a large-enough collision could melt the entire ice cap, raising the water level of the world's oceans by $70\mathrm{m}$ [Computerworld Antarctica 1999]. Assume for the moment that all of the energy produced in the collision were converted to heat. To calculate the volume of ice that could be melted by the collision, we need some thermal properties of water (see Table 3). + +Table 3. Thermal properties of water, from Lide [1992, 6-172, 6-174]. + +
PhaseConductivity (Wm-1K-1)Specific Heat c (J K-1kg-1)k/cρ (m2s-1)
Ice1.8820309.26 × 10-7
Water0.6148101.27 × 10-7
Vapor0.02720201.34 × 10-8
Enthalpy of Fusion3.33 × 105J kg-1
Enthalpy of Vaporization2.26 × 106J kg-1
Let us suppose that all of the $1.5 \times 10^{20}$ J of collision energy raised the temperature of mass $M_{\mathrm{ice}}$ of ice to $0^{\circ}\mathrm{C}$ and melted the ice, without heating the resulting water. Then we have:

$$
1.5 \times 10^{20} \mathrm{~J} = (273.2 \mathrm{~K} - 197.2 \mathrm{~K}) \times 2030 \mathrm{~J~K}^{-1}\mathrm{~kg}^{-1} \times M_{\mathrm{ice}} + 3.33 \times 10^{5} \mathrm{~J~kg}^{-1} \times M_{\mathrm{ice}}.
$$

Solving gives $M_{\mathrm{ice}} = 3.1 \times 10^{14}$ kg of ice. Using a density of $0.9~\mathrm{g/cm^3}$ for ice, the impact would melt $340~\mathrm{km^3}$ of ice, slightly more than $1/100,000$ of the total volume of the Antarctic ice cap! If all of the water could reach the ocean, it would raise the levels of the oceans by less than $1~\mathrm{mm}$. However, since the South Pole is over $500~\mathrm{km}$ from the nearest Antarctic coast, it is unlikely that any of the water would reach the ocean.

For a more accurate model of the heat in the Antarctic ice cap, let us suppose that all $1.5 \times 10^{20}$ J of the energy raises the temperature of the ice beneath the asteroid, which we approximate by a cylinder of diameter $1\mathrm{~km}$. The mass of ice under a circle $1\mathrm{~km}$ in diameter would be $1.5 \times 10^{12}$ kg.

- The energy to heat the ice by $76^{\circ}$C (to its melting point) would be the mass times the temperature change times the specific heat of ice, giving $2.31 \times 10^{17}$ J.
- The enthalpy of fusion of ice is $3.33 \times 10^{5} \mathrm{~J/kg}$, so the energy to melt the ice would be $5.00 \times 10^{17} \mathrm{~J}$.
- Raising the resulting water another $100^{\circ}\mathrm{C}$, to its boiling point, would expend $7.2 \times 10^{17} \mathrm{~J}$.
- Vaporizing the water at the boiling point would expend $3.39 \times 10^{18} \mathrm{~J}$ (the enthalpy of vaporization is $2.26 \times 10^{6} \mathrm{~J/kg}$).
Subtracting these four numbers from the initial energy still would leave $1.46 \times 10^{20} \mathrm{~J}$. If we assume that all of the remaining energy would be used to heat the water vapor, we find (by applying the specific heat of water vapor) that the water vapor could be raised to $48,000^{\circ} \mathrm{C}$.

It is unlikely that all of the energy of the impact would go into heating the ice, and some of the water vapor would escape before passing on its heat to the surrounding ice, so this model gives a huge overestimate of the effects of the collision.

# The Heat Equation

We model the spread of the temperature distribution by using the heat equation. Since the ice sheet is an average of only $2\mathrm{~km}$ thick but more than $6,000\mathrm{~km}$ wide, we model it using just two dimensions. Let $u(x,y,t)$ represent the temperature $(^{\circ}\mathrm{C})$ of the ice sheet at position $(x,y)$ (meters) at time $t$ (seconds). Set the coordinate system so that $u(0,0,0)$ is the temperature of the center of the impact location at the time of impact. In the heat equation

$$
\frac{\partial u}{\partial t} = \frac{k}{c \rho} \left(\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2}\right),
$$

the constant $k$ is the thermal conductivity of the substance, $c$ is the specific heat, and $\rho$ is the density. Because all three values change as the ice turns to water and then to vapor, the only way to solve this equation is numerically. We use a method based on an algorithm given in Burden and Faires [1997]. Let the time step between successive iterations be $\Delta t$ and let the physical distances between temperature readings be $\Delta x$ and $\Delta y$.
To derive the method of finite differences, first consider the Taylor series approximations of $u$ in $t$ , $x$ , and $y$ : + +$$ +\frac {\partial}{\partial t} u (x, y, t) = \frac {u (x , y , t + \Delta t) - u (x , y , t)}{\Delta t} - \frac {\Delta t}{2} \frac {\partial^ {2}}{\partial t ^ {2}} u (x, y, \tau), +$$ + +$$ +\frac {\partial^ {2}}{\partial x ^ {2}} u (x, y, t) = \frac {u (x + \Delta x , y , t) - 2 u (x , y , t) + u (x - \Delta x , y , t)}{(\Delta x) ^ {2}} - \frac {(\Delta x) ^ {2}}{1 2} \frac {\partial^ {4}}{\partial x ^ {4}} u (\chi , y, t), +$$ + +$$ +\frac {\partial^ {2}}{\partial y ^ {2}} u (x, y, t) = \frac {u (x , y + \Delta y , t) - 2 u (x , y , t) + u (x , y - \Delta y , t)}{(\Delta y) ^ {2}} - \frac {(\Delta y) ^ {2}}{1 2} \frac {\partial^ {4}}{\partial y ^ {4}} u (x, \psi , t), +$$ + +for some $\tau \in (t,t + \Delta t),\chi \in (x - \Delta x,x + \Delta x)$ , and $\psi \in (y - \Delta y,y + \Delta y)$ . Let us assume that $\Delta x = \Delta y$ , so that we can combine more terms. Substituting these into the partial differential equation, and separating out the error terms as $E(x,y,t)$ , yields the following relations: + +$$ +\frac {u (x , y , t + \Delta t) - u (x , y , t)}{\Delta t} = +$$ + +$$ +\frac {k}{c \rho} \left(\frac {u (x + \Delta x , y , t) + u (x - \Delta x , y , t) + u (x , y + \Delta y , t) + u (x , y - \Delta y , t) - 4 u (x , y , t)}{(\Delta x) ^ {2}}\right), +$$ + +$$ +E (x, y, t) = \frac {\Delta t}{2} \frac {\partial^ {2}}{\partial t ^ {2}} u (x, y, \tau) - \frac {k}{c \rho} \frac {(\Delta x) ^ {2}}{1 2} \left(\frac {\partial^ {4}}{\partial x ^ {4}} u (\chi , y, t) + \frac {\partial^ {4}}{\partial y ^ {4}} u (x, \psi , t)\right). +$$ + +The constants can be grouped into a single term $K$ : + +$$ +K = \frac {k}{c \rho} \frac {\Delta t}{(\Delta x) ^ {2}}. 
$$

Then solving the first equation for $u(x,y,t + \Delta t)$ yields the following equation, which allows us to solve for the temperature distribution at time $t + \Delta t$ given the distribution at time $t$ :

$$
\begin{array}{l} u (x, y, t + \Delta t) = u (x, y, t) (1 - 4 K) + K (u (x + \Delta x, y, t) + u (x - \Delta x, y, t) \\ + u (x, y + \Delta y, t) + u (x, y - \Delta y, t)). \\ \end{array}
$$

# Simulation Results

We wrote a computer program in C to solve this equation for an initial temperature distribution of $-76^{\circ}$ C everywhere except for a circular region of diameter $1\mathrm{km}$ , to which we gave an initial temperature of $48,000^{\circ}$ C. We discovered that if all of the hot water vapor remained in place (instead of rising into the atmosphere, as we would expect), it could at most melt $5.7\times 10^{7}\mathrm{m}^{3}$ of ice (enough to raise the ocean level by $2\times 10^{-7}\mathrm{m}$ ), and melting this much would take more than 10 days! Long before then, one would expect everything to cool off. Even the water that would melt would have a difficult time reaching the ocean, because much of the surface of Antarctica has been pushed below sea level by the enormous weight of the ice [Virtual Antarctica 1999].

The results of the simulation are not too surprising once one considers the order-of-magnitude difference in the thermal conductivities of ice and water. The initial heat melts a large amount of ice fairly quickly, but the rate of temperature increase in the meltwater slows rapidly, reaching almost an equilibrium at less than $100^{\circ}$ C. The melted water forms a layer of insulation between the hot vapor that was supposed to transfer the heat and the surrounding ice, which is now adjacent only to very warm water. As that ice melts, it transmits little heat to the next layer of ice.

Conclusion: The last thing that we need to worry about is any significant amount of ice melting.
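The explicit update rule above is straightforward to implement. The following is a minimal sketch in Python (the program described in the text was written in C); the thermal constants for cold ice and the toy grid sizes are representative assumptions, far coarser than the kilometer-scale simulation described above, and the stability requirement $K \le 1/4$ for the explicit scheme is checked.

```python
import numpy as np

# Representative constants for cold ice (assumed values, not the paper's):
k_ice = 2.2        # thermal conductivity, W/(m K)
c_ice = 2050.0     # specific heat, J/(kg K)
rho_ice = 917.0    # density, kg/m^3
alpha = k_ice / (c_ice * rho_ice)   # diffusivity k/(c*rho), m^2/s

def step(u, dx, dt):
    """One explicit finite-difference update of the 2-D heat equation."""
    K = alpha * dt / dx**2
    assert K <= 0.25, "explicit scheme is unstable unless K <= 1/4"
    un = u.copy()
    # u(t+dt) = u(t)*(1 - 4K) + K*(sum of the four nearest neighbors)
    un[1:-1, 1:-1] = (u[1:-1, 1:-1] * (1 - 4 * K)
                      + K * (u[2:, 1:-1] + u[:-2, 1:-1]
                             + u[1:-1, 2:] + u[1:-1, :-2]))
    return un

# Toy demonstration: a hot region embedded in cold ice.
dx = 1.0                              # meters (illustrative)
dt = 0.9 * 0.25 * dx**2 / alpha       # just inside the stability bound
u = np.full((51, 51), -76.0)          # ambient ice temperature, deg C
u[23:28, 23:28] = 48000.0             # initially hot central region
for _ in range(100):
    u = step(u, dx, dt)
```

In the full problem, $k$, $c$, and $\rho$ would have to be updated cell by cell as ice becomes water and then vapor, which is exactly why the equation must be solved numerically.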
# Antarctic Ecosystem

The waters around Antarctica are inhabited by small crustaceans called krill, which are central to the food chain in this region. Small differences in water temperature, or any natural disaster that upsets the balance of nature, could affect their population, with global repercussions. However, since little ice would melt and there would be no other significant long-term effects of the collision, our best estimate is that nature would repair itself over time.

# Effects on a Global Scale

# Tsunami

One of the most significant things we need to worry about is the possibility that the collision would cause an earthquake large enough to start a tsunami (tidal wave), which tends to be more severe if caused by a disturbance near the surface. Tsunamis can measure 10 to $30\mathrm{m}$ in height but extend 4 to $5\mathrm{km}$ from front to back [Monastersky 1998b], and they typically travel at about $800\mathrm{km / hr}$ [Monastersky 1998a]. (Technically, the word "tsunami" refers to a large wave that has slowed down and increased in height as it hits a continental shelf before a coastline, but we use the term in a broader sense to refer to any large wave with the potential of becoming a tsunami.)

Tsunamis are extremely hard to forecast. It is not the magnitude of the earthquake that determines the height of the wave, but rather the frequency; in particular, the cause is long-period vibrations over time that drive the wave up higher and higher [Monastersky 1998a]. When the waves hit the coastline, they rise up even higher and flood the land with a wall of water, causing mass destruction. Between 1992 and 1997, 1,200 lives were lost as a result of tsunamis in the Pacific; and the July 1998 tsunami in Papua New Guinea claimed at least 2,500 more. A primary reason for loss of life is that there is little or no warning of the tsunami's approach.
A tsunami caused by an asteroid impact in Antarctica would take more than an hour to reach the southernmost tip of South America, and we would know far in advance of the approach of an asteroid of that size (within 10 years, $90\%$ of Earth-orbit-crossing asteroids of this size should be identified and their orbits should be plotted [Asteroid Comet Impact Hazards 1999]). Therefore, human casualties could be almost completely avoided by evacuating coastal areas.

The maximum distance that a tsunami travels inland can be determined from its height when it hits the shore, the depth of the water at the shoreline, the roughness of the terrain, and the slope of the shore away from the coast. As an example, for terrain corresponding to a typical developed area, a 40-m tsunami could travel inland about $9\mathrm{km}$ , and a 100-m one could travel inland about $100\mathrm{km}$ [Hills and Mader 1995].

An accurate simulation of a tsunami on a global scale would require complicated fluid-dynamics equations, but these simulations would be meaningless without extremely good initial data. Instead, we created a much simpler model. Assume that the shock wave caused by the asteroid collision would travel through the Antarctic continent quickly enough that it would reach the coastline at all places at approximately the same time. Then the initial wavefront would take on the shape of Antarctica and travel north. Consider a two-dimensional grid of lattice points representing the surface of the Earth. Label each point initially as either water or land. Points that represent water are given two variables: a height (a scalar) and a direction of wave motion (a vector). Initially, all points representing water are given zero height and no direction, except for a wavefront at the border of Antarctica directed away from the continent at all points. At each time step, the waves propagate in the direction of motion and interfere constructively or destructively with other parts of the wave.
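The lattice scheme just described can be sketched in a few lines of Python. This is a toy illustration under assumed parameters: the grid size, the two square landmasses, and the unit initial wave height are invented stand-ins, not the map data used in our actual simulation.

```python
import numpy as np

N = 61
antarctica = np.zeros((N, N), dtype=bool)
antarctica[28:33, 28:33] = True          # stand-in for the source continent
other_land = np.zeros((N, N), dtype=bool)
other_land[5:8, 5:8] = True              # stand-in for a distant coastline
land = antarctica | other_land
flooded = np.zeros((N, N), dtype=bool)   # shoreline cells reached by the wave

# Wavefront particles (row, col, drow, dcol, height), seeded on the water
# cells bordering the source continent and directed away from it.
front = []
for y, x in zip(*np.nonzero(antarctica)):
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if (dy, dx) == (0, 0):
                continue
            ny, nx = y + dy, x + dx
            if 0 <= ny < N and 0 <= nx < N and not land[ny, nx]:
                front.append((ny, nx, dy, dx, 1.0))

for _ in range(40):                      # propagate the front
    merged = {}
    for y, x, dy, dx, h in front:
        ny, nx = y + dy, x + dx
        if not (0 <= ny < N and 0 <= nx < N):
            continue                     # wave leaves the toy grid
        if land[ny, nx]:
            flooded[ny, nx] = True       # wavefront reaches a shoreline
            continue
        # coincident wavefronts interfere constructively: heights add
        key = (ny, nx, dy, dx)
        merged[key] = merged.get(key, 0.0) + h
    front = [(y, x, dy, dx, h) for (y, x, dy, dx), h in merged.items()]
```

The sketch keeps only forward propagation, constructive interference, and flooding bookkeeping; destructive interference between opposing fronts and the relaxation of raised water back to sea level are omitted.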
Unless acted upon by another wavefront, water above sea level falls back towards the ocean.

This model of a tsunami is limited, but it is about as much as can be expected without more information on the type of earthquake that might cause the tsunami to start. Figure 3 shows the output of our computer simulation of the tsunami at various moments in time.

While the exact locations of coastal flooding are unpredictable, some general trends can be observed:

- The west coast of South America is more likely to be flooded than the east coast, mostly because of the shape of the Antarctic Peninsula.
- The southern-facing borders of Africa, Madagascar, India, and Australia have nothing in their way, so they are likely to be flooded.
- Both the east coast of North America and most of Europe are shielded from the tsunami and need not evacuate.

![](images/cd6fa02b95a282c66c0050ecbd2c97325ee8bc7470b1c88d6990cf3729305d27.jpg)

One hour after impact. The shape of the Antarctic Peninsula has caused the wave to avoid hitting the southeastern coast of South America.

![](images/b48ac69e8886d2ef1216055c15e047fa5b08c4fc84f4ef4214235ff5188bd205.jpg)

One hour later. The tsunami has struck South America.

![](images/a57ebd66912d4838959954f642a4f5c7d375745450030ba24cbb177cf9f13cf4.jpg)

Hours later. The tsunami has flooded parts of Africa and Australia.

![](images/05406f47e25147ad58e1462ce3b7c3bc93a5279a4da69284a68ad697b821696f.jpg)

Final image, showing all flooded coastlines in black.

Figure 3. Computer-generated model of tsunami wavefront.

# Dust and Ice Loading

To the person living in Tibet, a tsunami doesn't much matter. But if huge amounts of dust were released into the atmosphere, Tibetans would find the weather unusually cold, and the plants that they depend on for food would stop photosynthesizing and die. Worldwide, this effect could destroy all traces of civilization.

We can see from volcanic activity what dust can do. On 15 June 1991, Mt.
Pinatubo in the Philippines erupted. It had the largest effect on the particulate levels in the atmosphere of any event this century [McCormick et al. 1995, 399]. An asteroid hitting land with energy of $10^{4}$ MT of TNT would send up about the same amount of dust; with energy of $10^{5}$ MT of TNT, it would produce results similar to the eruption of Tambora in 1815 [Toon et al. 1997, 59]. Pinatubo lowered global temperatures by about $0.5^{\circ}$ C [McCormick et al. 1995] and Tambora by $0.75^{\circ}$ C [Tambora, Indonesia, 1815 1999].

This gives us a rough bracketing of what to expect from our asteroid with energy $3.4 \times 10^{4}$ MT of TNT, though hitting the thick ice sheet at a shallow angle should produce less dust than striking another continent.

The dust put into the atmosphere would not have sufficient energy to go into orbit and so would spread around through the atmosphere [Toon et al. 1997, 57]. We expect the spread of dust to be restricted by the prevailing winds, which create bands blowing in opposing directions. To cross from the South Pole to the Northern Hemisphere by following prevailing winds, the dust would have to change altitude significantly several times. It took 2 to 3 months for the dust from the Pinatubo eruption to spread from the equator to the Northern and Southern Hemispheres. We would expect a similar (if not greater) time lag for transporting dust from the South Pole to the Northern Hemisphere.

While the dust from the impact would have consequences for the global climate, it is unlikely to be of a magnitude to damage civilization significantly. It would have moderate and temporary effects on agriculture, particularly in the Southern Hemisphere.

In addition to the dust, a significant amount of ice would be put into the atmosphere. A substantial part of this would become rain or snow and fall back to Earth, possibly removing some of the dust from the atmosphere.
Increased water vapor would lower the temperature in the upper atmosphere, because water is a strong infrared radiator, causing more water vapor to condense and precipitate out [Toon et al. 1997, 68-69].

An impact in the ocean with energy of $10^{4}$ MT of TNT would about double the amount of water in the ambient upper atmosphere. This would have a minor greenhouse effect that would be somewhat canceled by ice clouds blocking the sun. It would not have any significant impact unless it lasted longer than the 10-year response time of the oceans to temperature change [Toon et al. 1997, 69].

# Conclusions

An asteroid $1,000\mathrm{m}$ in diameter could cause a serious global disaster. An impact near a heavily populated area could cause mass destruction and loss of life, and an ocean collision could create a tremendous tsunami. If an asteroid were to strike a continent not near the poles, it could send up enough dust into the atmosphere to cause long-term environmental damage. Compared to any of these scenarios, an Antarctic collision would be far less disastrous.

The angle of incidence would likely be small for an asteroid originating in our solar system, creating a wider, shallower crater than if it hit perpendicularly. Because the ice cap is $2\mathrm{km}$ thick at the South Pole, most of what would get thrown into the atmosphere would be shards of ice, not dust. Escaping dust would not travel north to more populated areas, because of the prevailing wind currents. The ice that would reach the atmosphere could cause a greenhouse effect, but only if it remained for many years; it is likely that much of it would fall in the form of rain.

The possibility of a tsunami is real but impossible to predict. Our model predicts the possible flood locations, but these simulations would need to be done in greater detail with a more accurate model of the world and a more sophisticated model of a tsunami.
In any event, because of advance warning, coastal areas could be evacuated. Serious flooding could damage millions of square kilometers of food-production regions, but these effects would be short-term.

One would hope for enough warning to evacuate the 4,000 people on Antarctica doing exploration and scientific research.

# References

Asteroid Comet Impact Hazards. 1999. http://impact.arc.nasa.gov/index.html.
Burden, Richard L., and J. Douglas Faires. 1997. Numerical Analysis. New York: Brooks/Cole.
Chapman, C.R., and D. Morrison. 1994. Impacts on the Earth by asteroids and comets: Assessing the hazard. Nature 367: 33-40.
Computerworld Antarctica. 1999. http://antarctica.com/world/.
Hills, Jack G., and Charles L. Mader. 1995. Tsunami produced by the impacts of small asteroids. Proceedings of the Planetary Defense Workshop. Livermore, CA. Available at http://www.llnl.gov/planetary.
Lide, David R., editor. 1992. CRC Handbook of Chemistry and Physics. 73rd ed. Boca Raton, FL: Chemical Rubber Company Press. Thermal properties of water: 6-172, 6-174.
McCormick, M.P., L.W. Thomason, and C.R. Trepte. 1995. Atmospheric effects of the Mt. Pinatubo eruption. Nature 373: 399-404.
Monastersky, Richard. 1998a. How a middling quake made a giant tsunami. Science News 154 (1 August 1998): 69.
_____. 1998b. Waves of death: Why the New Guinea tsunami carries bad news for North America. Science News 154 (3 October 1998): 221-223.
Montgomery, Carla W. 1995. Environmental Geology. 4th ed. Dubuque, IA: Wm. C. Brown Communications, Inc.
Tambora, Indonesia, 1815. 1999. http://volcano.und.nodak.edu/vwdocs/Gases/tambora.html.
Terrestrial Impact Craters. http://www.cpk.lv/hata/solarsys/solar/tercrate.htm.
Toon, O.B., R.P. Turco, and C. Covey. 1997. Environmental perturbations caused by the impacts of asteroids and comets. Reviews of Geophysics 35: 41-78.
Transcript—Plane of the Solar System (October 20, 1996). 1996. http://www.earthsky.com/1996/es961020.html.
Virtual Antarctica. http://www.terraquest.com/va/science/snow/snow.html.
World Factbook. 1998. http://www.odci.gov/cia/publications/factbook/ay.html.

# Asteroid Impact at the South Pole: A Model-Based Risk Assessment

Michael Rust

Paul Sangiorgio

Ian Weiner

Harvey Mudd College

Claremont, CA 91711

Advisor: Michael Moody

# Introduction

We consider approximate upper bounds for the magnitude of various environmental consequences of a spherical iron asteroid with a diameter of $1,000\mathrm{m}$ impacting at the South Pole.

The increase in worldwide ocean levels would be on the order of a millimeter, except for the possibility of an unstable ice sheet being dislodged into the ocean. There would be global warming effects, though they would not be much greater than those caused by human-based industrial emissions. Significant amounts of acidic water vapor would likely be produced, and the subsequent precipitation of this acid rain in nearby fishing areas would disrupt ecosystems and lead to decreased fish harvests.

# Simplifying Assumptions and Modeling

# A Worst-Case Scenario

The possible consequences of the impact of a comparatively large asteroid on the Earth are immense in number and in potential impact. A practical model would treat quantitatively only those effects that can be characterized by well-understood physical processes. For each effect, we consider the upper limit of potential environmental impact. This method estimates the "worst-case scenario."

# The Size and Composition of the Asteroid

We assume that the asteroid is spherical and made of iron. Although many near-Earth asteroids are composed of other materials [Kieffer 1980], an iron asteroid would pose the gravest threat, due to its high density, which implies a higher kinetic energy and hence a greater capacity to induce seismic shocks.
# Velocities and Energies

We offer a celestial mechanics model to estimate the incident velocity of an asteroid on the Earth's outer atmosphere. To consider the descent of the asteroid, we present a detailed dynamical model to calculate impact velocities as well as to estimate the energy transferred to the atmosphere.

# Ice Melting, Vaporization, and Ejection

The asteroid would impact in the heart of the largest reservoir of frozen water on the planet. How much ice would be liberated from the continent? We consider three likely routes by which ice may be affected directly:

- direct heating from the impact kinetic energy,
- melting and vaporization from the pressure wave released, and
- ice fragments breaking off the continent and entering the ocean.

To place upper bounds on these effects, we consider the cases of maximum possible energy transfer to each of these reservoirs. For the pressure wave and the ice fracturing, the bounds are less exact, since empirical estimates must be made of quantities such as the seismic efficiency.

# Seismic Shocks and Ice

There is potential for the unstable western ice sheet to become dislodged, partially slip into the ocean, and melt, possibly having dramatic consequences for ocean levels. The calculation is accomplished using a simple wave-equation model for the shock wave and is sensitive to the empirically determined proportion of kinetic energy that goes into the pressure wave.

# Climatic Changes from Atmospheric Water Vapor

A large asteroid impact into the ice would transfer much water vapor into the atmosphere, possibly creating an effect similar to the increased greenhouse gases that have raised the mean surface temperature of the Earth. We analyze these potentialities, treating the Earth as a blackbody radiator with a varying albedo.
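The blackbody treatment mentioned above reduces, in its simplest equilibrium form, to balancing absorbed sunlight against radiated heat, giving $T = \left[(1-a)S/4\sigma\right]^{1/4}$ for albedo $a$. The following sketch is our illustration, not the paper's code; the solar constant and the albedo values are standard assumed figures.

```python
# Equilibrium temperature of the Earth as a blackbody with albedo a:
# absorbed power (1-a)*S*pi*R^2 balances radiated power 4*pi*R^2*sigma*T^4,
# so T = ((1-a)*S / (4*sigma)) ** 0.25, independent of the radius R.
SIGMA = 5.670e-8      # Stefan-Boltzmann constant, W/(m^2 K^4)
S = 1361.0            # solar constant, W/m^2

def equilibrium_temp(albedo):
    return ((1.0 - albedo) * S / (4.0 * SIGMA)) ** 0.25

T_now = equilibrium_temp(0.30)   # a commonly assumed present-day mean albedo
T_icy = equilibrium_temp(0.35)   # a slightly brighter Earth with more ice cloud
```

Raising the albedo with ice clouds lowers the equilibrium temperature, while added water vapor absorbs infrared and pushes the other way; this competition is the one the analysis above weighs.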
# Chemical Effects and Acid Rain

A final consideration is the production of chemicals in the atmosphere such as nitrogen oxides, which can undergo later reactions to produce nitric acid and result in acid rain. Using empirical data for the rate of production of these compounds and the modeled atmospheric energy transfer, we estimate upper bounds on the acidity of rain in the region and on its effects.

# Celestial Dynamics of the Asteroid

# Gravity

By Newton's Law of Gravitation, the force of attraction between the asteroid and any body is

$$
F = - \frac {G m _ {a} M}{r ^ {2}} \hat {r},
$$

where $G$ is the gravitational constant, $m_{a}$ is the mass of the asteroid, $M$ is the mass of the other body, and $r$ is the vector directed from the other body to the asteroid. Being a central force, $F$ may be written as the negative gradient of a scalar function $\phi$ :

$$
\boldsymbol {F} = - \nabla \phi = - \nabla \left(- \frac {G m _ {a} M}{r}\right).
$$

An integral of $F$ over a path depends only on the endpoints of integration and not on the path itself. Because of this feature, $\phi$ is known as the gravitational potential energy. The work-energy theorem of classical mechanics implies the following relationship between the change in the kinetic energy $T$ of the asteroid and the changes in gravitational potential energy for any physical process [Marion and Thornton 1995]:

$$
\Delta T = \Delta \left(\frac {1}{2} m _ {a} v ^ {2}\right) = - \sum_ {i} \Delta \phi_ {i},
$$

where the sum is over all bodies with which the asteroid is interacting.

# A Lower Estimate of Asteroid Velocity

To obtain a lower estimate of the asteroid's impact velocity, we follow the approach of Melosh in using Earth's escape velocity [Melosh 1989]. The escape velocity for a planet is the velocity required for a body to escape completely from the planet's gravitational field.
That is, the escape velocity corresponds to the change in gravitational potential energy in bringing an object from infinity to the surface of the planet. We have

$$
\frac {1}{2} m _ {a} v ^ {2} = \lim _ {R \to \infty} G m _ {a} m _ {\mathrm {e}} \left(\frac {1}{r _ {\mathrm {e}}} - \frac {1}{R}\right),
$$

where $m_{\mathrm{e}}$ and $r_{\mathrm{e}}$ are the mass and radius of the Earth. The asteroid does not come from infinity but more plausibly from the asteroid belt, perhaps 3 au from the Sun (1 au ["astronomical unit"] = mean radius of the orbit of the Earth).

This is a lower bound because we neglect both the previous kinetic energy of the asteroid in its orbit and its gravitational interaction with the Sun. We obtain $11.2\mathrm{km / s}$ (the escape velocity) for the speed of the asteroid upon impacting the outer atmosphere, using $32,000\mathrm{m}$ as the height of the atmosphere. This result is independent of the mass of the asteroid.

# A More Realistic Estimate of Asteroid Velocity

A more realistic model of the asteroid's incident velocity would include its interaction with the Sun and the kinetic energy of its orbit. We assume that the asteroid occupies a pre-collision orbit that is totally determined by its interaction with the Sun.

# The Virial Theorem

The total energy of the asteroid's orbit may be determined from the geometry of its orbit and the classical virial theorem. A result of the general equations of motion, the virial theorem states [Marion and Thornton 1995]:

$$
T = - \frac {1}{2} \left\langle F \cdot r \right\rangle .
$$

The right-hand side of the equation, called the virial, reduces in the case of a circular orbit in a central gravitational field to

$$
- \frac {1}{2} \left\langle F \cdot r \right\rangle = - \frac {1}{2} \left(- \frac {G m _ {a} m _ {\mathrm {Sun}}}{r ^ {2}} \hat {r} \cdot r\right) = \frac {1}{2} \frac {G m _ {a} m _ {\mathrm {Sun}}}{r} = - \frac {1}{2} \phi .
$$

Though the asteroid's orbit is almost certainly not circular if it is to collide with Earth, we make the approximation of circularity to obtain a simple velocity estimate.

# Total Energy of the Asteroid's Orbit

Using the virial, we can write the total energy as

$$
E = T + \phi = \frac {1}{2} \phi .
$$

When the asteroid reaches the Earth, it is at a distance 1 au from the Sun. In this model, conservation of energy requires that the change in potential must be absorbed as kinetic energy of the asteroid:

$$
\frac {1}{2} m _ {a} v ^ {2} = \frac {1}{2} \phi - \phi_ {\mathrm {at\ Earth}}.
$$

Again, the mass of the asteroid cancels out, since it appears in both $\phi$ and $\phi_{\mathrm{at\ Earth}}$ . If the asteroid is from the midst of the asteroid belt, so that the average radius in the circular-orbit approximation is, say, 2.6 au [Gehrels 1979], we get a velocity of $v = 23.5~\mathrm{km / s}$ .

We have neglected the interaction with the gravitational field of the Earth. To obtain a total estimate, we add on the energy from the escape-velocity calculation for the Earth. Our best estimate for the incident velocity is $v = 26 \, \mathrm{km/s}$ , in good agreement with impact data [Gehrels 1994; Kieffer 1980].

# From Atmosphere to Impact

What would happen to the asteroid as it descended through the atmosphere? This question is of interest in determining both the kinetic energy on impact and the energy transferred to the atmosphere during the descent. This latter aspect is important because energy transfer plays a significant role in atmospheric chemistry [Melosh 1989].

# Dynamical Equations

In describing the dynamics of the asteroid descent, we follow the approach of Melosh [1989]. By considering the physics of the descent, we acquire a simplified model consisting of four coupled nonlinear differential equations.
We begin by introducing coordinates to specify the state of the system: + +$v(t)$ is the speed of the asteroid; + +$\theta (t)$ is the instantaneous angle that the trajectory makes with the horizon; + +$m(t)$ is the mass of the asteroid, which changes in time due to ablation caused by heat generated by friction with the atmosphere; and + +$Z(t)$ is the vertical altitude of the asteroid. + +We also consider the following functions of the state variables: + +$\rho_{g}(Z)$ is the density of air at the altitude of the asteroid, and + +$A(m)$ is the cross-sectional area of the asteroid. + +# The $v(t)$ Equation + +The only forces that change the speed of the asteroid act along the line of motion. There is a component of gravity $g \sin \theta$ acting in the direction of motion, as well as a drag force retarding the motion. To calculate the drag force, we assume with Melosh that the air immediately behind the asteroid is at effectively zero pressure and that the air immediately in front is at the stagnation pressure $\rho_g v^2$ [Melosh 1989]. This pressure differential produces a force equal to $A \rho_g v^2$ , where $A$ is the cross-sectional area of the asteroid. Combining these forces with Newton's Second Law, we obtain the differential equation for $v(t)$ : + +$$ +v ^ {\prime} = - \frac {A \rho_ {g} v ^ {2}}{m} + g \sin \theta . +$$ + +# The $\theta(t)$ Equation + +We approximate the surface of the Earth as flat. This assumption does not have significant consequences as long as $\theta$ is not particularly small, since the horizontal displacement is thus smaller than the curvature scale of the Earth's surface. + +We assume that the only force acting perpendicular to the direction of motion is gravity. This neglects possible lift forces, but Melosh's analysis suggests that lift is small compared to the force of gravity [1989]. Suppose that the velocity vector of the asteroid undergoes a small perpendicular displacement denoted $\Delta v_{\perp}$ . 
Then the change in the angle made with the horizontal $\Delta \theta$ is

$$
\Delta \theta = \arctan \left(\frac {\Delta v _ {\perp}}{v}\right).
$$

As we allow the size of the displacements to become infinitesimally small, we can neglect all but the first term in the Taylor series expansion of this function:

$$
\Delta \theta \approx \frac {\Delta v _ {\perp}}{v}.
$$

But $\Delta v_{\perp}$ for small changes is just the acceleration of the asteroid perpendicular to its motion times $\Delta t$ . Since we assume that gravity is the only significant component of this acceleration, we obtain

$$
\frac {\Delta \theta}{\Delta t} \approx \frac {g \cos \theta}{v}.
$$

In the limit, we obtain the differential equation for $\theta$ :

$$
\theta^ {\prime} = \frac {g \cos \theta}{v}.
$$

# The $m(t)$ Equation

The issue of ablation due to heating is less straightforward. Our approach follows that of Melosh [1989]. A calculation of the available energy due to the pressurized heating of the atmosphere tells us how much ablation could occur. Two dimensionless empirical parameters are involved: $C_h$ , an ablation efficiency, and a term $1 - (v_{\mathrm{cr}} / v)^2$ , which accounts for a critical speed $v_{\mathrm{cr}}$ below which ablation does not occur. The resulting differential equation is

$$
m ^ {\prime} = - \frac {C _ {h} \rho_ {g} A v ^ {3}}{2 \zeta} \left(1 - \left(\frac {v _ {\mathrm {cr}}}{v}\right) ^ {2}\right),
$$

where $\zeta$ is the heat of ablation for the material. For $C_h$ and $v_{\mathrm{cr}}$ , Melosh gives empirically determined values of about 0.02 and $3,000~\mathrm{m / s}$ , respectively.

# The $Z(t)$ Equation

The equation for $Z(t)$ is particularly simple, resulting from purely kinematic considerations. We need only project the total speed $v$ into its velocity component in the vertical direction.
Since we approximate the surface of the Earth as a plane, this component is simply

$$
Z ^ {\prime} = - v \sin \theta .
$$

# Air Density and Cross-Sectional Area

To obtain $A(m)$ , the cross-sectional area, we must presume a shape for the asteroid. Though many asteroids have large eccentricities [Gehrels 1979], modeling the impactor as a solid uniform ball has the advantage of being mathematically tractable as well as not too far from reality. From the geometric formula for the volume of a ball, the expression for the asteroid radius $R$ in terms of its mass $m$ and its density $\rho$ is

$$
R = \left(\frac {3}{4 \pi} \frac {m}{\rho}\right) ^ {1 / 3}.
$$

From this, the cross-sectional area $A = \pi R^2$ becomes

$$
A = \left(\frac {9 \pi}{1 6}\right) ^ {1 / 3} \left(\frac {m}{\rho}\right) ^ {2 / 3}.
$$

To find an expression for $\rho_g(Z)$ , we use a simple model in which the density decreases exponentially as altitude increases. A good value for the characteristic height scale of the atmosphere is $10\mathrm{km}$ [Melosh 1989]. Thus, we have

$$
\rho_ {g} = \rho_ {0} e ^ {- Z / 1 0},
$$

where $\rho_0$ is the atmospheric density at sea level and $Z$ is measured in kilometers.

# Numerical Solution

As promised, we have obtained a coupled set of nonlinear ordinary differential equations. Substituting the algebraic relations from the previous section removes dependence on $A$ and $\rho_{g}$ . With a given set of initial values for the height, angle, speed, and mass, we have an initial-value problem that completely specifies the physics of the asteroid descent.

Though some of the equations have simple cascade relationships to the other state variables, the nonlinearities make an analytic approach intractable. We solved the system using the fourth-order Runge-Kutta integration method in the software package ODE Architect, published by Intellipro.
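For readers without ODE Architect, the same integration can be sketched with a hand-coded fourth-order Runge-Kutta step. The parameter values below follow the text ($C_h = 0.02$, $v_{\mathrm{cr}} = 3{,}000$ m/s, a 10-km density scale height, mass $1.4 \times 10^{12}$ kg, $v = 26$ km/s starting from $Z = 32{,}000$ m at $45^{\circ}$); the heat of ablation $\zeta$ and the sea-level air density are assumed representative values not specified in the text.

```python
import numpy as np

g = 9.8                  # m/s^2
rho0 = 1.225             # sea-level air density, kg/m^3 (assumed)
C_h, v_cr = 0.02, 3000.0 # ablation efficiency and critical speed (Melosh)
zeta = 8.0e6             # heat of ablation, J/kg (assumed representative)
rho_ast = 2600.0         # asteroid density used in the Results section

def rhs(state):
    v, theta, m, Z = state
    A = (9 * np.pi / 16) ** (1/3) * (m / rho_ast) ** (2/3)  # cross-section
    rho_g = rho0 * np.exp(-Z / 10000.0)    # Z in meters, 10-km scale height
    dv = -A * rho_g * v**2 / m + g * np.sin(theta)          # drag + gravity
    dtheta = g * np.cos(theta) / v
    dm = -(C_h * rho_g * A * v**3 / (2 * zeta)) * (1 - (v_cr / v) ** 2)
    dZ = -v * np.sin(theta)
    return np.array([dv, dtheta, dm, dZ])

def rk4(state, dt):
    k1 = rhs(state)
    k2 = rhs(state + dt / 2 * k1)
    k3 = rhs(state + dt / 2 * k2)
    k4 = rhs(state + dt * k3)
    return state + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Integrate the descent from the top of the atmosphere to the ground.
state = np.array([26000.0, np.radians(45), 1.4e12, 32000.0])
while state[3] > 0:
    state = rk4(state, 0.01)
v_f, m_f = state[0], state[2]
```

Because the asteroid is so massive, the drag and ablation terms barely dent the velocity and mass during the roughly two-second descent, which is consistent with the small losses reported in the Results section.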
# Results

To obtain numerical results, we must take another step away from the world of theory toward empiricism, imposing additional constraints on the composition of the asteroid. We presume that the asteroid is composed of iron with approximate density $\rho = 2,600\mathrm{kg / m^3}$ . Many asteroids are largely iron [Gehrels 1979]; impacting on ice, iron creates some of the most severe pressure effects of any common asteroid composition [Kieffer 1980].

A solid ball of this density with radius $R = 500 \, \text{m}$ has a mass of approximately $1.4 \times 10^{12} \, \text{kg}$ . We use the estimated velocity of $26 \, \text{km/s}$ for an asteroid at a distance of $Z = 32,000 \, \text{m}$ above the Earth's surface.

# Impact Velocity

Using the above parameter values, we consider the dynamics of an asteroid with initial angles of $30^{\circ}$ , $45^{\circ}$ , $60^{\circ}$ , and $90^{\circ}$ from the horizontal. Because the mass of the asteroid is comparatively large, we do not expect a large fraction of the asteroid's energy to be given up during the descent.

For all four trajectories, the final velocity decreases to $2.58 \times 10^{4} \mathrm{~m/s}$ and the mass to $1.38 \times 10^{12} \mathrm{~kg}$ , though, as shown in Figures 1 and 2, the incident angle has a significant effect on the flight time.

![](images/a4a073574478b2cc6176e2a99c22777560c89372fe1d1c3b2be1f79348a23e78.jpg)
Figure 1. Initial angle of $90^{\circ}$ .

![](images/7eb3db6dde66070596957b43279ed40998881aa21755fba397d9601a6d0b0194.jpg)
Figure 2. Initial angle of $45^{\circ}$ .

# Atmospheric Energy Transfer

To obtain an upper bound on the energy transfer to the atmosphere, we again appeal to the tools of energy conservation. Neglecting the energy retained by ablated asteroid material, we obtain

$$
E _ {\mathrm {to\ atm}} = - \Delta T + m g Z = \frac {1}{2} m v ^ {2} - \frac {1}{2} m _ {f} v _ {f} ^ {2} + m g Z.
$$

That is, whatever kinetic energy the asteroid loses must go into the atmosphere, once we account for the work done by gravity, the only conservative force in effect. Since Melosh finds $45^{\circ}$ to be the most probable incident angle [1989], we take our calculations for $45^{\circ}$ as representative.

With these data, we obtain an energy transfer of $E = 1.57 \times 10^{19}$ J, a figure that becomes relevant later in the treatment of environmental effects.

# Effects on the Earth's Oceans

# Introduction

Since the global-warming crisis came to the forefront of environmental thought in the mid-1980s, scientists have warned of the possibility of melting an Antarctic ice sheet and the inevitable catastrophic consequences. With regard to ice melting, the impact of a high-velocity asteroid with Antarctica could have two potentially disastrous results:

- the sheer amount of kinetic energy could vaporize a large amount of ice, which would eventually rain down into the ocean; or
- the collision could act as a pseudo-earthquake, generating seismic waves strong enough to break apart and move above-ground ice sheets, forcing large amounts of ice into the ocean.

Either of these events could potentially raise the water level by a significant amount. In addition, whenever there is a large seismic event, there is the possibility of a tsunami, which could have deadly consequences.

# Vaporization of Ice

If we assume that all of the asteroid's kinetic energy is converted directly into thermal energy that is used solely for melting and vaporizing the ice, we can estimate an upper bound — however unrealistic — on the amount of water deposited into the Earth's atmosphere.

For ease of calculation, we first calculate the number of moles of $\mathrm{H}_2\mathrm{O}$ vaporized by the impact. We assume that the ice is initially at $-40^{\circ}\mathrm{C}$ .
The amount of energy needed to vaporize the ice is

$$
\Delta H = \Delta H_{\mathrm{ice},\,-40^{\circ}\mathrm{C} \to 0^{\circ}\mathrm{C}} + \Delta H_{\mathrm{fusion}} + \Delta H_{\mathrm{water},\,0^{\circ}\mathrm{C} \to 100^{\circ}\mathrm{C}} + \Delta H_{\mathrm{vaporization}}.
$$

Using the values [Atkins and Jones 1997] specific heat of ice $= 2.03 \, \mathrm{J} \cdot {}^{\circ}\mathrm{C}^{-1} \cdot \mathrm{g}^{-1}$, latent heat of fusion $= 6.01 \, \mathrm{kJ} \cdot \mathrm{mol}^{-1}$, specific heat of water $= 4.18 \, \mathrm{J} \cdot {}^{\circ}\mathrm{C}^{-1} \cdot \mathrm{g}^{-1}$, and latent heat of vaporization $= 40.7 \, \mathrm{kJ} \cdot \mathrm{mol}^{-1}$, we calculate that at most $5.03 \times 10^{15}$ moles, or $9.05 \times 10^{13}$ kg, of water could be vaporized. If this water were spread around the water-covered surface of the Earth—given that the radius of the Earth is approximately $6.4 \times 10^{6}$ m and the surface area covered by water is approximately $3.6 \times 10^{14}$ m²—it would change the water level by only $\Delta h \approx 0.4$ mm. This amount is insignificant in comparison with seasonal changes, so we conclude that melting of the ice would have almost no effect on sea level.

# Breaking Up of Ice Sheets

Given the very complex nature of the ice sheets on Antarctica, it is beyond the scope of this paper to give a detailed model of the amount of damage that a seismic wave generated by the impact could bring about. Melosh claims that, based on geological evidence from previously studied asteroid impacts, the seismic efficiency, or fraction of impact energy converted into seismic energy, is on the order of $10^{-4}$ [1989].
Given an impact energy of $4 \times 10^{20} \mathrm{~J}$, the resulting seismic energy is about $4 \times 10^{16} \mathrm{~J}$, corresponding to a Richter magnitude of $M \approx 7.9$. This is a significant earthquake, but Melosh also points out that an impact would produce mainly P-waves, while S-waves—the transverse waves created by the slipping of plates in an earthquake—are far more destructive. Therefore, he suggests that the damage of the pseudo-earthquake would be comparable to that of an actual earthquake of an order of magnitude less, in this case, an earthquake of magnitude 6.9.

Although a magnitude 6.9 earthquake is very large, we must consider that the nearest floating ice sheet to the South Pole, the Ross Ice Shelf, is nearly $400\mathrm{~km}$ away. If we assume that all of the seismic energy radiates in a hemispherical pattern, then from the point of view of conservation of energy, we can say that at a distance $r$ from the South Pole, the energy density $J$ is

$$
J = \frac{E_{\mathrm{initial}}}{2\pi r^{2}}.
$$

For $r = 400 \, \text{km}$ and the initial seismic energy of a magnitude 6.9 earthquake, we find an energy density of $1.5 \, \text{kJ/m}^2$. Per square meter, this is roughly the energy of a large man falling five or six feet to the ground, which, considering the density of the ice, should not be that significant. This rough estimate is in agreement with observed impacts on the Moon. Melosh claims that "few surface features on the moon or other planets can be directly attributed to impact-induced seismic shaking" [1989]. Still, the possibility that large chunks of ice could fall into the ocean is not negligible but is beyond the scope of this model.

# Tsunami Generation

The generation of tsunamis by underground seismological events is poorly understood; their generation by land-based seismological events is even less understood.
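The seismic figures above can be reproduced numerically. The conversion between seismic energy and Richter magnitude below uses the Gutenberg-Richter energy relation $\log_{10} E = 1.5M + 4.8$ (with $E$ in joules); the paper does not say which relation it used, so this choice is our assumption. A sketch in Python:

```python
import math

SEISMIC_EFFICIENCY = 1e-4   # fraction of impact energy converted to seismic energy [Melosh 1989]
E_IMPACT = 4e20             # impact energy, J

def magnitude(energy_joules):
    # Gutenberg-Richter energy relation (assumed): log10(E) = 1.5*M + 4.8
    return (math.log10(energy_joules) - 4.8) / 1.5

def energy(m):
    # Inverse of the relation above
    return 10 ** (1.5 * m + 4.8)

M_raw = magnitude(SEISMIC_EFFICIENCY * E_IMPACT)   # ~7.87, i.e., M ~ 7.9
M_eff = round(M_raw, 1) - 1.0                      # mostly P-waves: treat as one magnitude lower (6.9)

def energy_density(E, r_m):
    # Hemispherical spreading: J = E / (2*pi*r^2)
    return E / (2 * math.pi * r_m ** 2)

J_ross = energy_density(energy(M_eff), 400e3)    # at the Ross Ice Shelf, ~1.4 kJ/m^2
J_far = energy_density(energy(M_eff), 1000e3)    # at the open ocean, ~225 J/m^2
```

With these assumptions the script gives $M \approx 7.87$, about $1.4\,\mathrm{kJ/m^2}$ at $400\,\mathrm{km}$, and about $225\,\mathrm{J/m^2}$ at $1{,}000\,\mathrm{km}$, consistent with the rounded figures in the text.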
Many people have attempted to calculate the size of the water wave that would be created by a near-shore nuclear blast, yet the strength of the tsunami is always overestimated [Murty 1977]. If we also consider that by the time the shock wave would reach the ocean, over $1{,}000\mathrm{~km}$ away, the energy density would be around $250\mathrm{~J/m}^2$, then the chances of a dangerous oceanic shock wave are negligible.

# Potential Impact on the Earth's Climate

# Introduction/General Modeling Concept

Many models of asteroid impact on land masses predict that a large dust cloud would be released into the atmosphere, causing massive global warming due to greenhouse effects. An asteroid's impact on Antarctica would be substantially different in this respect. Antarctica is covered by an average of $2\mathrm{~km}$ of ice, so little or no dust would be released. However, the rapid release of a large amount of water vapor into the Earth's atmosphere has the potential to significantly alter climate, much as the release of greenhouse gases has stimulated global warming. We present here a simple climatic model of the Earth based on the assumption that the Sun and the Earth behave as blackbody radiators. This model allows us to determine a reasonable upper bound on potential global warming due to vapor injection.

The two defining characteristics of a blackbody radiator are:

- All radiation incident upon it is absorbed.
- Energy is re-radiated over the entire spectrum of wavelengths.

Through quantization of the radiation field, quantum theory tells us that the radiancy, or power per area, radiated by a blackbody is given by Stefan's law:

$$
R_{T} = \sigma T^{4} \qquad \text{[Eisberg and Resnick 1985]},
$$

where $\sigma$ is experimentally determined to be $5.67 \times 10^{-8} \mathrm{~W \cdot m^{-2} \cdot K^{-4}}$.
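As a quick illustration of Stefan's law, the Earth's equilibrium temperature with no atmosphere ($K_{\mathrm{visible}} = K_{\mathrm{IR}} = 1$) can be computed from standard solar values (surface temperature $\approx 5{,}778$ K, solar radius $\approx 6.96 \times 10^{8}$ m, Earth-Sun distance $\approx 1.496 \times 10^{11}$ m; these constants are our own inputs, not values taken from the paper):

```python
import math

SIGMA = 5.67e-8      # Stefan-Boltzmann constant, W m^-2 K^-4
T_SUN = 5778.0       # solar surface temperature, K (assumed standard value)
R_SUN = 6.96e8       # solar radius, m (assumed standard value)
D = 1.496e11         # Earth-Sun distance, m (assumed standard value)

def radiancy(T):
    """Power per unit area radiated by a blackbody at temperature T (Stefan's law)."""
    return SIGMA * T ** 4

# Balance absorbed and emitted power for a blackbody Earth:
#   sigma*T_sun^4 * pi*R^2 * (R_sun/d)^2 = sigma*T_earth^4 * 4*pi*R^2.
# The Earth radius cancels, leaving:
T_earth = T_SUN * math.sqrt(R_SUN / (2 * D))   # ~279 K
```

The result, about 279 K, is close to the 287 K mean global blackbody temperature that Trewartha reports; in the paper's model the difference is absorbed into the $K_{\mathrm{visible}}/K_{\mathrm{IR}}$ ratio.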
We treat both the Sun and the Earth as ideal blackbody radiators; this treatment is commonly accepted as an accurate assumption for the purposes of global climate studies [Toon and Pollack 1980]. The majority of the light incident upon the Earth from the Sun is in the visible spectrum, while the light radiated from the Earth is mostly in the infrared. For this reason, we model the atmosphere as a shell of particles with two distinct albedos: one for visible light, the other for infrared light. We denote the percentage of the Sun's light that is transmitted through to the Earth by $K_{\mathrm{visible}}$ and the percentage of the Earth's light transmitted out into space by $K_{\mathrm{IR}}$. For the Earth to be in thermal equilibrium, the power entering the atmosphere must equal the power leaving, that is,

$$
P_{\mathrm{from\ Sun}} = P_{\mathrm{reflected\ sunlight}} + P_{\mathrm{transmitted\ from\ Earth}}.
$$

The power that reaches the atmosphere from the Sun is just

$$
\sigma T_{\mathrm{Sun}}^{4} \, \pi R_{\mathrm{atm}}^{2} \, R_{\mathrm{Sun}}^{2} / d^{2},
$$

where $d$ is the distance from the Sun to the Earth's atmosphere and $R_{\mathrm{atm}}$ represents the radius of the Earth's atmosphere. The $R_{\mathrm{Sun}}^2 / d^2$ term in the equation accounts for the fact that the power from the Sun that actually reaches the Earth is determined by the solid angle subtended by the Earth. The $\pi R_{\mathrm{atm}}^2$ term accounts for the effective area of the atmosphere that is exposed to the radiation, characterized by the cross section of the Earth. Using Stefan's law, and noting that the Earth radiates over its entire surface area $4\pi R_{\mathrm{atm}}^{2}$, we have the following relation for thermal equilibrium:

$$
\sigma K_{\mathrm{visible}} T_{\mathrm{Sun}}^{4} \, \pi R_{\mathrm{atm}}^{2} \, \frac{R_{\mathrm{Sun}}^{2}}{d^{2}} = \sigma K_{\mathrm{IR}} T_{\mathrm{Earth}}^{4} \cdot 4\pi R_{\mathrm{atm}}^{2}.
$$

# Worst-Case Heating of the Atmosphere

Upon injection of water vapor into the upper atmosphere, the values of $K_{\mathrm{visible}}$ and $K_{\mathrm{IR}}$ can be expected to change significantly. We first concern ourselves with the worst-case heating of the atmosphere due to water vapor. In this case, the vapor would act to reflect the Earth's infrared radiation back to the surface, causing a greenhouse effect corresponding to a decrease in $K_{\mathrm{IR}}$ and/or an increase in $K_{\mathrm{visible}}$. From our general equation for equilibrium, we see that

$$
T_{\mathrm{Earth}} \propto \left(\frac{K_{\mathrm{visible}}}{K_{\mathrm{IR}}}\right)^{1/4},
$$

which follows from the fact that all other quantities of interest would remain constant. To place an upper bound on potential global heating, we make the simplifying assumption that $K_{\mathrm{visible}} / K_{\mathrm{IR}}$ is directly proportional to the amount of water vapor in the atmosphere. This is not a completely unreasonable assumption; Toon and Pollack note that "the radiation budget of the Earth is dominated by water vapor and clouds" [1980]. For an upper bound on the effects, we take the vapor released to be the maximal amount, roughly $9 \times 10^{13} \mathrm{~kg}$, calculated in the previous section. According to Trewartha, the average moisture in the atmosphere is about $1.31 \times 10^{16} \mathrm{~kg}$, yielding an increase of about $0.69\%$ [1954]. Using Trewartha's value for the mean global blackbody temperature of $287 \mathrm{~K}$, we find that the equilibrium value for the temperature of the Earth after water injection is $287.50\mathrm{~K}$, an increase of half a degree.
Such climatic changes are roughly equivalent to the already observed increase in global mean temperature due to global warming [Oppenheimer 1998]; while this is not a totally insignificant consequence of impact, the effects would merely accelerate the global warming that is already under way.

# Worst-Case Cooling of the Atmosphere

Although it is less likely that the injection of water into the atmosphere could exhibit a cooling effect, it has been suggested by some sources that, below a certain threshold droplet size, and high enough in the atmosphere, water could act to reflect the Sun's light, thus causing a trend of overall cooling [Toon and Pollack 1980]. We can show, using our simple atmospheric model, that the worst-case cooling effect is negligible. Returning to our general equilibrium expression, we see that

$$
T_{\mathrm{Earth}} \propto \left(\frac{K_{\mathrm{visible}}}{K_{\mathrm{IR}}}\right)^{1/4}.
$$

Using the simplifying assumption, this time, that $K_{\mathrm{visible}} / K_{\mathrm{IR}}$ is inversely proportional to the amount of vapor in the atmosphere, we find once again a change of roughly half a degree in global mean temperature, this time a decrease. Not only would such an effect be negligible, but it would actually work to counteract the global warming process under way.

# Conclusions/Limitations of Climatic Model

The predictions of our model suggest that the worst-case climatic changes to the atmosphere would be a change in global mean temperature of roughly $\pm 1/2$ K. We state a few limitations of this simple model.

- First, the calculations are based on water that is vaporized by the energy of the asteroid. In actuality, there would likely be some ice ejecta mechanically propelled into the atmosphere by the impact.
However, this effect should fit within our upper bound: the energy per unit mass needed to lift water out of the Earth's gravity well, plus the energy dissipated in traversing the atmosphere (which would likely vaporize the ice particles anyway), is substantially higher than the energy per unit mass needed merely to vaporize the water. In addition, somewhat less than the maximum possible amount of water would be vaporized, since much of the asteroid's energy would be dissipated elsewhere.
- The model offers only an equilibrium value for the temperature of the Earth; the climatic changes are gradual, not sudden. Due to the large thermal mass of the Earth's oceans, temperature effects on the scale of global warming could take more than a decade to reach equilibrium [Toon and Pollack 1980].

- Finally, the model assumes that the percentages of light transmitted are proportional to the amount of water vapor in the atmosphere. Ignoring other particles in the atmosphere has the effect of making the predicted climatic changes worse than they might be in reality, which is acceptable, given the modest climatic changes predicted. The assumption that the percentages are merely proportional, however, is somewhat simplistic, given the complex nature of the atmosphere. This assumption is the most severe limitation on the model.

# Nitric Acid Contamination

# Generation of Nitric Oxide

As the asteroid passes through the atmosphere, the heat generated would likely cause nitric oxide to form. The nitric oxide would subsequently form nitric acid, possibly leading to severe acid rain in the vicinity of the impact.

A reaction that forms nitric oxide in the air is

$$
\mathrm{N}_2 + \mathrm{O}_2 \longrightarrow 2\,\mathrm{NO}.
$$

The energy required for this reaction is $173.1\mathrm{~kJ}$ per mole of NO generated [Atkins and Jones 1997].
The theoretical maximum amount of NO that can be generated is therefore about $1.73\times 10^{-7}\mathrm{kg / J}$ . However, according to Melosh, the actual amount generated is closer to $7\times 10^{-9}\mathrm{kg / J}$ [1989]. Using the amount of energy released into the atmosphere calculated previously, we find that $1.1\times$ $10^{11}\mathrm{kg},$ or $3.66\times 10^{13}$ moles, of NO would be produced. This NO would react in the atmosphere to produce nitric acid, $\mathrm{HNO}_3$ . + +# A Simple Lower Bound on Acid Rain Damage + +We can get an idea of the minimum amount of damage that this nitric oxide production could cause by assuming that each mole of NO produced becomes a mole of nitric acid and that the nitric acid is homogeneously distributed into five years' worth of rain throughout the globe. The estimate of five years is based on Toon's assertion that it would take about five years to remove the nitric acid generated by a large impact from the atmosphere [Gehrels 1994]. The average yearly rainfall over the Earth is $5 \times 10^{20}$ g of $\mathrm{H}_2\mathrm{O}$ /year [Gehrels 1994]. This corresponds to $5 \times 10^{17}$ L of water, yielding a molarity of 14.6 micromolar. Toon notes that many regions in Europe and the eastern United States receive acid rain at more than 100 micromolar [Toon and Pollack 1980]. Therefore, the effects of such a minimum damage scenario would be negligible. + +# Upper Bound on Acidification of the Earth's Oceans + +If we assume that the nitric acid would find its way into the surface layers of the Earth's oceans, we can predict what percentage of the world's oceans would be rendered corrosively acidic. We define "corrosively acidic" as a 600 micromolar solution of nitric acid in water; this is the nitric-acid concentration necessary to dissolve calcite [Gehrels 1994]. We take the depth of "surface layers" to be $75\mathrm{m}$ , as suggested by Toon [Gehrels 1994]. 
Using the surface area of the Earth covered by water, we find the total volume of surface-layer water to be $2.7\times 10^{16}\mathrm{~m}^3$. The molar density of water is $5.5\times 10^{4}\mathrm{~mol/m}^{3}$; so $0.23\%$ of the Earth's oceans could, in principle, be rendered corrosively acidic, corresponding to an area of about $8.2\times 10^{11}\mathrm{~m}^2$ of ocean, or a cylinder of water with a radius of about $510\mathrm{~km}$. For a relatively strong acid such as $\mathrm{HNO}_3$, we expect the pH of the water to be close to the negative log of the molarity, which yields a pH value of 3.2. The actual value, however, is likely to be higher because of the buffering effect of the salt in ocean water. According to Howells, a pH of 3.5 to 4.0 will kill almost any fish [1995].

# Conclusions

A large quantity of nitric acid would likely be released into the oceans surrounding Antarctica. This release would likely devastate the fish population of the region, extending to the waters off Australia and the southernmost parts of South America. It is difficult to determine the exact effects because of the lack of data regarding acid rain pollution of seawater environments.

# Conclusions and Limitations of the Models

The models that we developed are focused on placing an upper-bound estimate on the damage that could occur if a $1{,}000\mathrm{~m}$-diameter asteroid were to impact the South Pole. Using Newtonian mechanics, we estimated the probable impact velocity of the asteroid and the energy released from the asteroid onto the Earth. These estimates allow us to place an upper bound on many of the possible destructive consequences of the impact.

The primary concern associated with impact is the potential raising of the water levels of the Earth's oceans. A simple argument based on energy conservation demonstrates that the water vapor created by the impact would not substantially raise the level of the Earth's oceans.
Another argument demonstrates that the seismic shock waves generated would likely have little effect on the ice sheets near the Antarctic coast, although there is always the possibility of instabilities in the ice-sheet structure, which are not accounted for by the model.

Another significant cause for concern is the long-term climatic impact of the asteroid. While very little dust is expected to be released into the atmosphere, a large quantity of water vapor almost certainly would be. Calculations show, however, that the upper bound on climatic changes amounts to, at most, the same climatic changes of about $0.5\mathrm{~K}$ that are a consequence of current global warming. Therefore, while global warming would be accelerated somewhat, it is not a primary concern. The model implicitly assumes that the albedo of the Earth's atmosphere is simply proportional to the amount of water vapor it holds.

The major cause for concern, according to our model, is the large quantity of nitric acid that would be released into the oceans surrounding Antarctica. These waters would be contaminated with enough nitric acid to completely destroy their food-production capabilities.

# References

Atkins, P., and Loretta Jones. 1997. *Chemistry: Molecules, Matter, and Change*. New York, NY: W.H. Freeman and Co.
Eisberg, Robert, and Robert Resnick. 1985. *Quantum Physics of Atoms, Molecules, Solids, Nuclei, and Particles*. New York, NY: John Wiley and Sons.
Gehrels, Tom, editor. 1979. *Asteroids*. Tucson, AZ: University of Arizona Press.
___________, editor. 1994. *Hazards Due to Comets and Asteroids*. Tucson, AZ: University of Arizona Press.
Howells, G. 1995. *Acid Rain and Acid Waters*. Great Britain: Ellis Horwood Limited.
Kagan, Boris A. 1995. *Ocean-Atmosphere Interaction and Climate Modelling*. New York, NY: Cambridge University Press.
Kieffer, Susan Werner. 1980. The role of volatiles and lithology in the impact cratering process.
*Reviews of Geophysics and Space Physics* 18 (1): 143-181.
Marion, Jerry B., and Stephen T. Thornton. 1995. *Classical Dynamics of Particles and Systems*. Fort Worth, TX: Saunders College Publishing.
Melosh, H.J. 1989. *Impact Cratering*. New York, NY: Oxford University Press.
Murty, T.S. 1977. *Seismic Sea Waves: Tsunamis*. Canada: Ministry of Supply and Services.
Oppenheimer, Michael. 1998. Global warming and the stability of the West Antarctic ice sheet. *Nature* 393: 325-332.
Toon, Owen B., and James B. Pollack. 1980. Atmospheric aerosols. *American Scientist* 68: 268-277.
Trewartha, Glenn T. 1954. *An Introduction to Climate*. New York, NY: McGraw-Hill Book Company.

# Antarctic Asteroid Effects

Nicholas R. Baeth

Andrew M. Meyers

Jacob E. Nelson

Pacific Lutheran University

Tacoma, WA 98447

Advisor: Rachid Benkhalti

# Motivation for the Model

The only type of energy release comparable in scale to an asteroid impact is a nuclear explosion. The National Research Council (NRC) assessed the impacts of nuclear war on the atmosphere [Press et al. 1985], and its report details the potential for major atmospheric effects for blasts of various sizes. Our model extrapolates these effects to much higher yields.

We calculate the energy yield of the asteroid impact using Newtonian mechanics. We then use the NRC findings to estimate the impacts of such a yield. Finally, we assess human casualties from the impact in terms of food production, rising sea levels, and atmospheric fluctuations.

# Initial Assumptions

- The asteroid is $1\mathrm{~km}$ in diameter at impact.
- The asteroid is approximately spherical. A different shape would affect air drag and thus impact velocity but little else; hence, a nonspherical asteroid is equivalent to a spherical asteroid with a different velocity.
- The asteroid has a density of between 2 and $8\mathrm{~g/cm^3}$, typical of a stony meteorite or asteroid [Wasson 1974].
- The asteroid's velocity when striking the Earth is between 11 and $70 \mathrm{~km/s}$ [Wasson 1974].
- The Earth-asteroid collision is inelastic.
- The crater produced by the impact has a parabolic shape. This is consistent with models in King [1976].
- There is no great difference in effect due to the angle of impact. We assume that pieces of the asteroid burned away during descent are significantly less important than the impact itself; an asteroid reduced in this way should be equivalent to a spherical asteroid of a different velocity.
- The explosion of impact is similar to a nuclear blast.

Table 1. Symbols used in equations.
| Symbol | Description | Units |
|---|---|---|
| $\alpha$ | Nitric oxide constant, $0.8 \times 10^{32}$ | (molecules NO)/MT |
| $C$ | Initial amount of nitric oxide post-impact | mol |
| $D$ | Diameter of crater | m |
| $d$ | Density of asteroid | kg/m³ |
| $\Delta T$ | Change in temperature | °C/yr |
| $G$ | Greenhouse constant, $0.007$ | °C/yr |
| $k$ | Joule-kiloton proportion constant, $4.2 \times 10^{12}$ | J/kT |
| KE | Kinetic energy of asteroid at impact | Joules (= kg·m²/s²) |
| $M$ | Mass of asteroid | kg |
| $\omega$ | Percentage of ozone lost | |
| $\rho$ | Ratio of diameter to depth | |
| $V_{\text{asteroid}}$ | Volume of asteroid | m³ |
| $V_{\text{crater}}$ | Volume of crater | m³ |
| $v$ | Velocity of asteroid at impact | m/s |
| $w$ | Ejecta of H₂O into atmosphere | mol |
# Impact

# Diameter of the Crater

The diameter of the crater depends on the kinetic energy of the asteroid. A basic Newtonian formula gives the kinetic energy of the asteroid as

$$
\mathrm{KE} = \frac{1}{2} M v^{2}.
$$

The mass is simply the volume multiplied by the density, giving

$$
\mathrm{KE} = \frac{1}{2} V_{\mathrm{asteroid}} \, d \, v^{2}.
$$

For a spherical asteroid, the volume is

$$
V_{\mathrm{asteroid}} = \frac{4}{3} \pi r^{3}.
$$

The diameter of the crater, according to Wasson [1974], is

$$
D = 49 \, W^{0.294},
$$

where $W$ is the total work done by the asteroid on the Earth, in kilotons of TNT. Thus, to find the diameter, we substitute kinetic energy for total work:

$$
D = 49 \left(\frac{1}{2} V d v^{2} k\right)^{0.294},
$$

where $k$ is a proportionality constant that converts joules to kilotons of TNT:

$$
k = \frac{1}{4.2 \times 10^{12}} \, \frac{\mathrm{kT}}{\mathrm{J}}.
$$

Thus, the diameter depends on the volume of the asteroid, its density, and the velocity at which it impacts the South Pole. See Table 2 for different scenarios with varying densities (2, 3, 5, and $8\mathrm{~g/cm^3}$) and velocities.

Table 2. Scenarios.
| Scenario | Velocity $v$ (km/s) | Mass $M$ (×10¹² kg) | Energy (×10⁴ MT) | Crater diameter (×10³ m) | $\rho$ | Crater volume (×10¹⁰ m³) | Crater depth (×10³ m) |
|---|---|---|---|---|---|---|---|
| A | 11 | 1.05 | 1.51 | 8.27 | 5 | 4.44 | 1.67 |
|   | 11 | 2.62 | 3.76 | | | | |
|   | 11 | 4.14 | 6.02 | | | | |
| B | 20 | 1.57 | 7.48 | 10.17 | | 9.22 | 1.44 |
|   | 20 | 2.62 | 12.5 | | | | |
|   | 30 | 1.57 | 16.8 | | | | |
| C | 30 | 2.62 | 28.1 | 11.87 | | 18.6 | 1.69 |
|   | 50 | 1.57 | 46.7 | | | | |
| D | 50 | 2.62 | 78.1 | 20.28 | | 12.9 | 2.53 |
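The energy and diameter columns of Table 2 can be reproduced from the formulas above. A sketch in Python for a scenario-B-like case (density $3\mathrm{~g/cm^3}$, impact at $20\mathrm{~km/s}$; the range $5 \leq \rho \leq 8$ is King's, and the inputs here are our own choices for illustration):

```python
import math

R_ASTEROID = 500.0    # asteroid radius, m (1-km diameter)
J_PER_KT = 4.2e12     # joules per kiloton of TNT (the constant k is its reciprocal)

def kinetic_energy(density_kg_m3, velocity_m_s):
    """KE = (1/2) * V * d * v^2 for a spherical asteroid."""
    volume = (4.0 / 3.0) * math.pi * R_ASTEROID ** 3
    return 0.5 * volume * density_kg_m3 * velocity_m_s ** 2

def crater_diameter(ke_joules):
    """Wasson's scaling D = 49 * W^0.294, with W in kilotons of TNT; D in meters."""
    return 49.0 * (ke_joules / J_PER_KT) ** 0.294

def crater_volume(D, rho):
    """Paraboloidal crater volume V = pi * D^3 / (8 * rho), with rho = diameter/depth."""
    return math.pi * D ** 3 / (8.0 * rho)

ke = kinetic_energy(3000.0, 20e3)    # ~3.1e20 J, i.e., ~7.5e4 MT
D = crater_diameter(ke)              # ~1.0e4 m, close to Table 2's 10.17e3 m
vol_bounds = (crater_volume(D, 8), crater_volume(D, 5))   # volume range over King's rho
```

For these inputs the kinetic energy is about $3.1 \times 10^{20}$ J ($\approx 7.5 \times 10^{4}$ MT) and the crater diameter about $10.1$ km, matching the tabulated scenario-B values to within rounding.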
# Volume of the Crater

Let the crater have a diameter-to-depth ratio of $\rho$; according to King [1976], $5 \leq \rho \leq 8$. Thus the crater has radius $D/2$ and depth $D/\rho$.

We assume that the crater is a paraboloidal cap with cross-sectional parabola

$$
y = A x^{2}.
$$

Algebraic manipulation gives

$$
y = \frac{4}{\rho D} x^{2}.
$$

We rotate the parabola around the $y$-axis and use circular disks to find the volume of the paraboloidal cap:

$$
V_{\mathrm{crater}} = \int_{0}^{D/\rho} \pi x^{2} \, dy = \int_{0}^{D/\rho} \frac{\pi \rho D}{4} y \, dy = \frac{\pi D^{3}}{8 \rho}.
$$

For a density of $3.0\mathrm{~g/cm^3}$ and an impact velocity of $20\mathrm{~km/s}$, we find that the crater has volume $9.22 \times 10^{10}\mathrm{~m^3}$.

# Effects of Impact

We first look at how much water would be ejected by the impact and then consider the effects of the shock wave created by impact.

# Ejecta

We assume that due to the depth of the ice at the South Pole, the amount of dust ejected from the crater would be minimal. However, the amount of ice ejected is a different matter. For this, we look at the effects of nuclear blasts and ejections of dust with diameter less than one micrometer.

In nuclear blasts, about $1\%$ of the volume of the crater is ejected into the stratosphere [Press et al. 1985]. We assume that ice would eject at a higher rate than dust because instead of rock, this asteroid would vaporize ice—a much easier task. So we use $5\%$ of the volume of the crater as the total amount of ejecta (water as vapor, liquid water, and ice) into the stratosphere.

The amount of dust vapor from a nuclear blast of 1 MT or less is between 0.2 and $0.5 \times 10^{12} \, \mathrm{g/MT}$ [Press et al. 1985]. For Scenario B (an average case), with energy $7.5 \times 10^{4}$ megatons, a yield of $5\%$ of volume gives $0.06 \times 10^{12} \, \mathrm{g/MT}$.
Some dust would be ejected into the atmosphere: some would come from the asteroid itself, and some could even come from Antarctic soil if the depth of the crater were to exceed the depth of the ice $(2{,}800\mathrm{~m})$, but this does not occur in any of our scenarios (see the last column of Table 2).

# Shock Waves

At impact, a spherical shock wave would be emitted, affecting both land and air but at different rates. Most effects of the impact would come from the shock wave rather than from the ejecta. We discuss the effects of the shock wave in terms of overpressurization, a thermal radiation contour, and the making of nitric oxide.

# Overpressurization

According to Press et al. [1985], an area is "overpressurized" if the shock wave creates a 5 psi (pounds per square inch) increase over normal atmospheric pressure.

Table 3 provides sample radii of overpressurization, given certain energy levels. The Ross Ice Shelf is $600\mathrm{~km}$, and the West Ice Shelf is $2{,}600\mathrm{~km}$, from the impact site. Overpressurization past these points would overstress the respective ice shelves, possibly causing massive volumes of ice to break off and float into the ocean.

In addition to the ice shelves, there are more massive ice sheets that hold the vast majority of water on the continent. The Western Ice Sheet may be unstable [Glacier ... 1999; Is Global Warming ... 1999], and overpressurization of the sheet could cause much of it to break off from the continent.

Table 3. Shock-wave effect radii, from regression on the yield $x$ in kilotons.
| Scenario | Overpressure radius $r_o = 8.62010 + 0.132189x$ (×10³ km) | Incineration radius $r_i = 2.50300 + 0.247767x$ (×10³ km) |
|---|---|---|
| A | 1.26 | 1.72 |
| B | 1.77 | 2.43 |
| C | 3.44 | 4.71 |
| D | 5.73 | 7.85 |
# Thermal Radiation Contour

The shock wave also would create a $30~\mathrm{cal/cm^2}$ thermal radiation contour. This is essentially a heat wave; there is little on the Antarctic plain that would incinerate, but the heat would instantaneously melt the surface ice. Much of the water would refreeze as quickly as it melted, but water that does not refreeze quickly could account for additional water vapor in the air.

At higher energy levels, the incineration radius encompasses large portions of South America and could lead to forest fires (see Table 3).

# Nitric Oxide Emission

From Press et al. [1985], we know that large blasts create and release large volumes of nitric oxide (NO) into the stratosphere. The chemical reaction is

$$
\mathrm{O}_2 + \mathrm{N}_2 \rightarrow 2\,\mathrm{NO},
$$

and NO is produced at a rate of

$$
\alpha = 0.8 \times 10^{32}\ \text{molecules NO/MT}.
$$

Table 4 uses this rate to give the amounts of NO that would result from explosions of various megatonnage.

Table 4. Quantities of $\mathrm{H}_2\mathrm{O}$ and NO (in moles) lofted by explosion.
| Scenario | H₂O (mol) | NO (mol) |
|---|---|---|
| A | $1.22 \times 10^{14}$ | $5.0 \times 10^{12}$ |
| B | $2.56 \times 10^{14}$ | $9.93 \times 10^{12}$ |
| C | $5.16 \times 10^{14}$ | $3.74 \times 10^{13}$ |
| D | $3.58 \times 10^{14}$ | $1.04 \times 10^{14}$ |
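Both columns of Table 4 can be reproduced from Table 2: the lofted H₂O is 5% of the crater volume converted to moles (taking water at $1{,}000\mathrm{~kg/m^3}$ and $18\mathrm{~g/mol}$, our assumption), and the NO column is $\alpha$ times the scenario yield, converted from molecules to moles with Avogadro's number. A check in Python:

```python
AVOGADRO = 6.022e23
ALPHA = 0.8e32                     # molecules of NO per megaton [Press et al. 1985]
MOL_H2O_PER_M3 = 1000.0 / 0.018    # ~5.56e4 mol/m^3, assuming liquid-water density

# Representative crater volumes (m^3) and yields (MT) consistent with Table 2
crater_volume = {"A": 4.44e10, "B": 9.22e10, "C": 18.6e10, "D": 12.9e10}
energy_mt = {"A": 3.76e4, "B": 7.48e4, "C": 28.1e4, "D": 78.1e4}

def h2o_lofted(scenario):
    """Moles of H2O lofted: 5% of the crater volume reaches the stratosphere."""
    return 0.05 * crater_volume[scenario] * MOL_H2O_PER_M3

def no_lofted(scenario):
    """Moles of NO produced by the blast: alpha * yield, converted to moles."""
    return ALPHA * energy_mt[scenario] / AVOGADRO
```

This reproduces Table 4 to within rounding; for example, scenario B gives about $2.56 \times 10^{14}$ mol of H₂O and $9.9 \times 10^{12}$ mol of NO.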
# Atmospheric Effects

The impact would have major consequences in the upper levels of the atmosphere, including a significant decline in stratospheric ozone. Also, some nitric oxide would convert into nitric acid and cause acid rain.

# Ozone

The atmosphere currently contains $3.3 \times 10^{15} \mathrm{~g}$ of ozone. With the emission of nitric oxide caused by the heat created by the impact, a significant amount of stratospheric ozone would decompose into nitrogen dioxide and oxygen, according to the reaction

$$
\mathrm{NO} + \mathrm{O}_3 \rightarrow \mathrm{NO}_2 + \mathrm{O}_2.
$$

From how much nitric oxide is put into the air and the rate at which it reacts with ozone, we can find how much ozone would decompose and how fast.

The amount of nitric oxide emitted by the shock wave is $\alpha E$, where $E$ is the energy of the explosion given in megatons.

Normally in the atmosphere, $99\%$ of the nitric oxide reacts with ozone to form nitrogen dioxide and oxygen. However, given the substantial amount of water displaced into the stratosphere by the asteroid, we predict that only $97\%$ of the NO would react with $\mathrm{O}_3$. Thus, the rate at which NO is lost over time (with $t$ in days) is given by

$$
\frac{d(\mathrm{NO})}{dt} = -0.03\,(\mathrm{NO}). \tag{1}
$$

The nitrogen dioxide produced by the reaction is reconverted into nitric oxide (NO) according to

$$
\mathrm{NO}_2 + \mathrm{O} \rightarrow \mathrm{NO} + \mathrm{O}_2.
$$

Thus, nitric oxide is replenished naturally after reacting with ozone.

Solving (1), we get

$$
\mathrm{NO} = Ce^{-0.03t}, \tag{2}
$$

where $C$ is the initial amount of nitric oxide after the explosion. The normal amount of NO in the atmosphere is $10^{10}$ moles. To judge the effects of additional NO in the atmosphere, we need the time for the NO levels to return to normal (see Table 5).

Table 5. NO normalization time.
| Scenario | Time (days) |
|---|---|
| A | 207 |
| B | 230 |
| C | 274 |
| D | 308 |
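The normalization times in Table 5 follow directly from (2): set $Ce^{-0.03t} = 10^{10}$ and solve for $t$ in days, with $C$ taken as the lofted NO from Table 4. A short check in Python:

```python
import math

NO_BACKGROUND = 1e10   # normal atmospheric NO, mol
DECAY_RATE = 0.03      # fraction of NO lost per day, from equation (1)

# Post-impact NO (mol) for each scenario, from Table 4
no_lofted = {"A": 5.0e12, "B": 9.93e12, "C": 3.74e13, "D": 1.04e14}

def normalization_time(C):
    """Days until NO = C*exp(-0.03*t) decays back to the background level."""
    return math.log(C / NO_BACKGROUND) / DECAY_RATE

times = {s: round(normalization_time(C)) for s, C in no_lofted.items()}
# times == {"A": 207, "B": 230, "C": 274, "D": 308}, matching Table 5
```

The computed times agree with Table 5 exactly after rounding to whole days.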
Since ozone reacts with NO in a one-to-one molecular ratio, the area under the curve of (2) gives the amount of ozone (in moles) that is decomposed:

$$
\int_{0}^{t} Ce^{-0.03\tau}\, d\tau.
$$

Solving, we find that all of the ozone would be depleted long before NO levels return to normal. Table 6 shows estimates of ozone depletion from nuclear blasts. Regression on Table 6 gives a fairly linear fit, which produces results similar to our own.

Table 6. Nuclear-war ozone-depletion estimates, from Press et al. [1985].
| Scenario | Yield (MT) | Maximum ozone depletion (%) |
|---|---|---|
| Baseline | 6,500 | 17 |
| Excursion | 8,500 | 43 |
| Chang Case A | 10,600 | 51 |
| Chang Case B | 5,300 | 32 |
| Chang Case C | 5,670 | 42 |
| Chang Case D | 4,930 | 16 |
| Chang Case E | 6,720 | 39 |
| Chang Case F | 3,890 | 20 |
| Ambio excursion | 10,000 | 65 |
| Turco et al. (1983) | 10,000 | 50 |
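The depletion claim above can be made concrete. Evaluating the paper's integral of (2) in closed form gives cumulative ozone destroyed by day $t$ as $(C/0.03)\left(1 - e^{-0.03t}\right)$; comparing this with the $3.3 \times 10^{15}$ g (about $6.9 \times 10^{13}$ mol) of stratospheric ozone shows complete depletion well before NO returns to normal. A sketch:

```python
import math

OZONE_MOL = 3.3e15 / 48.0   # total atmospheric ozone: 3.3e15 g at 48 g/mol for O3
RATE = 0.03                 # NO reaction rate per day, from equation (1)

def ozone_destroyed(C, t):
    """Cumulative O3 (mol) destroyed by day t, per the paper's integral of (2)."""
    return (C / RATE) * (1.0 - math.exp(-RATE * t))

def depletion_day(C):
    """First day on which all ozone is gone (None if the total C/RATE never covers it)."""
    frac = OZONE_MOL * RATE / C
    if frac >= 1.0:
        return None
    return -math.log(1.0 - frac) / RATE

# Scenario A (C = 5.0e12 mol from Table 4): full depletion after ~18 days,
# far sooner than the ~207 days NO needs to return to normal.
t_A = depletion_day(5.0e12)
```

Even in the mildest scenario, the model predicts the entire ozone inventory gone within weeks, which is what the text asserts.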
# Acid Rain

The other $3\%$ of the NO reacts in the stratosphere, first forming $\mathrm{NO}_2$ and then reacting with water to form nitric acid:

$$
3\,\mathrm{NO}_2 + \mathrm{H}_2\mathrm{O} \rightarrow 2\,\mathrm{HNO}_3 + \mathrm{NO}. \tag{3}
$$

Only $3\%$ of the NO converts to $\mathrm{HNO}_3$, and only $3.8\%$ of the $\mathrm{HNO}_3$ turns into a cloud form [Walker 1977]. Therefore, only $0.114\%$ of the NO turns into acid rain.

However, this concentration is much higher than that of normal acid rain. The usual concentration of $\mathrm{HNO}_3$ in water vapor is 30 ppb [Walker 1977]. Table 7 shows the concentrations of $\mathrm{HNO}_3$ in water vapor under the various scenarios.

Table 7. Acid rain; the usual $\mathrm{HNO_3/H_2O}$ ratio is 30 parts per billion.
ScenarioHNO3/H2O (ppm)
A46
B44
C83
D330
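The $0.114\%$ figure is just the product of the two conversion fractions; as a quick arithmetic check:

```python
no_to_hno3 = 0.03      # fraction of NO converted to HNO3
hno3_to_cloud = 0.038  # fraction of HNO3 reaching cloud form [Walker 1977]

acid_rain_fraction = no_to_hno3 * hno3_to_cloud
print(f"{acid_rain_fraction:.3%}")  # 0.114%
```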
# Environmental Impacts

# Global Warming

With the complete loss of the ozone, it is safe to say that catastrophic events would occur, especially with respect to global climate. Let $\omega$ stand for the fractional loss of ozone. Let $G$ stand for the "greenhouse constant," the rate of temperature increase given no ozone depletion. Thus, our model equation is

$$
\Delta T = G + \zeta \omega,
$$

where $\zeta$ is a proportionality constant. From the experiments of others, we find that $G \approx 0.007^{\circ}\mathrm{C/yr}$. Earth's mean temperature has increased approximately $4^{\circ}\mathrm{C}$ over a twenty-year period, that is, $\Delta T \approx 0.2^{\circ}\mathrm{C/yr}$. In that time, the ozone level decreased by $4\%$, giving $\omega \approx 0.04$. Solving $0.2 = 0.007 + \zeta(0.04)$ gives $\zeta \approx 4.83$. Thus, our equation for the change in temperature in $^{\circ}\mathrm{C}$/yr given $\omega$ is

$$
\Delta T = 0.007 + 4.83\,\omega.
$$

This model works nicely, but only for small values of $\omega$. When $\omega \geq 0.2$, as in our scenarios, the model loses most of its usefulness. Nevertheless, the temperature increase due to a lack of ozone would be large and would pose a threat to human existence.

# Sea Levels

One of the major concerns of global warming theorists is the effect of polar melting on sea level. A rise of 1 cm in sea level would salinize coastal rivers up to 1 km inland [Glacier ... 1999]. The melting of the Western Ice Sheet of Antarctica would cause a 6 m rise. Having no ozone would eventually lead to these events. Besides the Western Ice Sheet, there are many other ice formations in Antarctica that could be affected by global warming. If all of Antarctica's ice melted, sea level would rise 60 m. Other possible causes of a rise in sea level are the overpressurization of Antarctic ice and the thermal radiation contour created by the blast.
The thermal radiation contour might weaken the ice, and the overpressurization would then break the ice off the continent.

Our model predicts that the oceans would rise 7 to 10 m within 10 years. This would make most small island countries uninhabitable, and coastal seaports would be flooded all over the world. With no ozone in the atmosphere, this rise would surely continue over the following years.

# Food Supplies

The impact would significantly decrease crop yields, because of the rise in sea level, the rise in temperature, and the overabundance of ultraviolet radiation.

The rise in sea level would wipe out all crops in coastal regions, especially in Brazil, southeastern China, the Mediterranean, and India. Salinization of coastal rivers would significantly reduce the amount of irrigation possible, thus affecting the midwestern United States along with interior Africa.

The desalinization of the ocean due to the melting of the polar ice caps would significantly affect the South Atlantic, South Pacific, and Indian Oceans. This desalinization would pose serious health risks to shallow-water fish and other sea life that rely on salt water. Thus, fishing would significantly decrease off the coasts of South America, Africa, the Indian Peninsula, and Southeast Asia.

The rise in temperature and the overabundance of UV would also greatly affect all plant and animal life. Since this change would be so rapid, the threat of extinction of multiple species would be imminent.

# Impacts to Humans

The crops most affected would be those requiring large amounts of fresh water, such as rice, corn, grain, bananas, coffee, sugar, and other staples. The regions most affected would be those that grow just enough to support their own nations, not those that export these staples. Nations such as Brazil, China, India, and much of the non-industrialized world would face a severe lack of food.
Countries like the United States, which have enough grain and corn to export, would barely be able to sustain themselves, much less export food to other countries. Much of North Africa, the Middle East, Central Asia, and central South America, which have little natural arable land, would indubitably starve. This alone could cause the population in those areas to suffer tremendous losses.

For example, "soybean yield may drop one percent for each one percent drop in ozone" [Hidore 1996]. By extrapolation, a complete loss of the ozone layer would result in the complete loss of soybean crops. The same holds true for many other crops. Even if the loss were not this drastic, most of the world's crop production would be lost.

The severe increase in ultraviolet radiation would significantly affect the entire planet. According to Hidore [1996], "models show that a $16\%$ reduction in ozone will result in a $44\%$ increase in UVB radiation. A $30\%$ global reduction in ozone will produce a doubling of surface UVB radiation." A $100\%$ loss of ozone would more than sextuple the amount of radiation, given a linear model. "The EPA forecasts that for every $1\%$ decrease in the ozone layer, there will be a $3\%$ increase in non-melanoma skin cancer." Hence non-melanoma skin cancer cases would triple under our scenario.

A rise in sea level of 7 to 10 m would displace the half of the world's population that resides near coastlines. Most of this displacement would occur in India and Southeast Asia. Other major seaports, such as New York, Tampa, Rio, Bangladesh, Cape Town, and Cairo, would also be affected. This major relocation of people would lead to massive overcrowding in the rest of the world and would encroach on land designated for growing crops and raising livestock.

Although less likely, another serious threat to the populations of southern Chile and Argentina is a large-scale earthquake.
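The fitted constant in the Global Warming model above can be reproduced numerically. Given $G = 0.007\,^{\circ}$C/yr, the stated $4^{\circ}$C rise over twenty years, and $\omega = 0.04$, a short sketch:

```python
G = 0.007                 # greenhouse constant (deg C / yr, no ozone loss)
dT_observed = 4.0 / 20.0  # 4 deg C over twenty years -> 0.2 deg C / yr
omega_obs = 0.04          # 4% ozone loss over the same period

zeta = (dT_observed - G) / omega_obs
print(zeta)  # approx 4.825; the paper rounds to 4.83

def delta_T(omega):
    """Temperature change (deg C / yr); valid only for small omega."""
    return G + zeta * omega
```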
The shock wave, if large enough (Scenario D), would overpressurize Tierra del Fuego and surrounding areas. It could also create tsunamis that would ravage the coasts in this region.

# Other Effects

Our model concentrates mostly on the atmospheric effects of the asteroid's impact and gives little attention to two areas of possible importance: severe cloud cover and tsunamis. Severe cloud cover, caused by the tremendous amount of water vapor and/or dust ejected into the stratosphere, could cover the earth for several months. However, since the average water cloud lasts for only an hour, the probability of long-term cloud cover is minimal. The force of the impact itself and the shockwaves from the explosion, or even ice shelves falling into the ocean, might cause enough seismic disturbance to create a tsunami. Such a tsunami could immediately threaten coastal areas in the Southern Hemisphere, resulting in severe flooding along the coasts of South America, Southern Africa, India, and Southeast Asia.

# Strengths and Weaknesses of the Analysis

# Strengths

The greatest strength of the model is the ease with which the equations are derived and can be recalculated under different scenarios, such as a smaller or larger asteroid. The model also incorporates the significant parameters of the impact, including the velocity and density of the asteroid.

# Weaknesses

The greatest weakness is the model's simplicity. Our model does not take into account the contribution of carbon dioxide to global warming, the desalinization of ocean currents, or the change in pH due to the significant amount of $\mathrm{HNO}_3$ deposited as acid rain.

In basing our modeling on smaller-scale detonations, we extrapolate into uncharted territory, which could easily lead us to overestimate or underestimate the impacts of the asteroid.
# Conclusions

Our model suggests that the environment would be ravaged by such an impact: sea levels would rise due to global warming and the breaking off of Antarctic ice sheets, and flooding and the salinization of previously freshwater coastal rivers would cause a severe loss of food. The probability of a substantial number of deaths is therefore quite high.

Our model shows significant damage to staple crops (such as soybeans) and a significant increase in ocean temperature, along with desalinization due to melting ice and rain. In the short term, we predict a 7 to 10 m rise in sea level, with a long-term forecast of a 60 m rise upon the complete melting of the Antarctic ice sheets.

Human deaths would result from multiple factors:

- Flooding would destroy much of the coastal regions of the earth. Half of the Earth's population would be displaced to higher and less arable ground.
- The tripling of UV-B radiation would certainly shorten the life expectancy of humankind.
- There would be an intense increase in the world's mean temperature, of a kind that has not happened since the dawn of man.

It is hard to picture the drastic conclusions that we have reached, because we have no experience with events of this nature. We certainly hope that science can find a way to prevent such an event.

# References

Asteroid and Comet Impact Hazards: Spaceguard Survey. 1991. http://impact.arc.nasa.gov/reports/spaceguard/sg_2.html.
Congressional Testimony. 1993. http://impact.arc.nasa.gov/congress/1993-mar/dmorison.html.
Evans, Daniel J., et al. 1992. *Policy Implications of Greenhouse Warming*. Washington, DC: National Academy Press.
Glacier: The Ice Sheet Today. 1999. http://glacier.rice.edu/land/5_antarcticicesheetparts.html.
Hidore, John J. 1996. *Global Environmental Change: Its Nature and Impact*. Upper Saddle River, NJ: Prentice Hall.
Hills, Jack G., and Charles L. Mader. 1995.
Tsunami produced by the impacts of small asteroids. In *Proceedings of the Planetary Defense Workshop*. Livermore, CA. Available at http://llnl.gov/planetary.
Is Global Warming Melting Antarctica? 1999. http://livingearth.com/Home/Antarctica.html.
Isobe, Syuzo, and Makoto Yoshikawa. 1995. Colliding asteroids with very short warning time. In *Proceedings of the Planetary Defense Workshop*. Livermore, CA. Available at http://llnl.gov/planetary.
King, Elbert A. 1976. *Space Geology: An Introduction*. New York: John Wiley & Sons.
Litvinov, B.V., et al. 1995. Investigation of cavities and craters formed as the result of nuclear explosions for the purpose of solving the problem of Earth protection against near-Earth objects. In *Proceedings of the Planetary Defense Workshop*. Livermore, CA. Available at http://llnl.gov/planetary.
Nemtchinov, I.V., et al. 1995. Historical evidence of recent impacts on the Earth. In *Proceedings of the Planetary Defense Workshop*. Livermore, CA. Available at http://llnl.gov/planetary.
Press, Frank, et al. 1985. *The Effects on the Atmosphere of a Major Nuclear Exchange*. Washington, DC: National Academy Press.
Sea Level Rise. 1999. http://fao.org/waicent/faoinfo/sustdev/Eldirect/Elre0046.htm.
Shapley, Deborah. 1985. *The Seventh Continent: Antarctica in a Resource Age*. Washington, DC: Resources for the Future.
Toon, Owen B., et al. 1995. Environmental perturbations caused by the impacts of asteroids and comets. In *Proceedings of the Planetary Defense Workshop*. Livermore, CA. Available at http://llnl.gov/planetary.
Walker, James C.G. 1977. *Evolution of the Atmosphere*. New York: Macmillan.
Ward, Peter D. 1995. Impacts and mass extinctions. In *Proceedings of the Planetary Defense Workshop*. Livermore, CA. Available at http://llnl.gov/planetary.
Wasson, J.T. 1974. *Meteorites: Classification and Properties*. New York: Springer-Verlag.
# Not an Armageddon

Mikhail Khlystov

Ilya Shpitser

Seth Sulivant

University of California-Berkeley

Berkeley, CA 94720

Advisor: Rainer K. Sachs

# Abstract

We separate the effects of the impact into three periods.

- Pre-impact:
  - Loss of kinetic energy due to air resistance as the asteroid travels through the atmosphere is less than $0.15\%$, which is negligible.
- Short-term:
  - The impact could produce at most $2,940~\mathrm{km}^3$ of liquid water, which is not sufficient to affect sea level.
  - The maximum volume of ice turned to water vapor would be $383~\mathrm{km}^3$, insufficient to cause long-term weather changes.
  - We anticipate global seismic effects on the order of 4 to 6 on the Richter scale, depending on the velocity and composition of the asteroid and the distance from the South Pole.
- Long-term:
  - Even in the worst case, the water vapor introduced into the atmosphere would condense and precipitate quickly, due to the low dewpoint of the polar air and the presence of iron particles to serve as condensation nuclei. We are uncertain as to the effects of iron fallout on the ecology of the Southern Hemisphere.
  - There would be moderate loss of life and property damage in the Southern Hemisphere. We remain uncertain as to the ecological effects of such an impact but suspect that they would be negligible. We expect no threat of coastal flooding due to the asteroid impact.

# Assumptions

- The asteroid is spherical in shape (this shape maximizes the asteroid's mass, and therefore its kinetic energy and the impact effects).
- The asteroid has an approximate diameter of $1~\mathrm{km}$.
- The asteroid strikes at the South Pole.
- Since an asteroid that "strikes the earth" does not explode in the atmosphere or rebound off the atmosphere (like a stone skipping on water), we assume that the angle of entry must be greater than $10^{\circ}$.
- The asteroid is primarily composed of iron and nickel.
While this assumption is slightly inaccurate (other constituents being varieties of minerals, such as silica and magnesia), it facilitates calculations; because iron and nickel are the densest materials in an asteroid, this composition results in the greatest mass and thus the greatest energy of impact. The average density of such an asteroid is $5,000~\mathrm{kg/m}^3$. Moreover, an asteroid composed primarily of iron is significantly less likely to explode before impact.
- The energy of impact of an asteroid is divided between heating the target, heating the asteroid, deformation, and kinetic energy of ejecta. This assumption comes from the 1978 NASA Conference Proceedings on Asteroids [Morrison and Wells 1978, 148].
- The energy of an earthquake is inversely proportional to the square of the distance from the epicenter.
- Seismic waves propagate mainly along the surface of the earth, rather than in a straight line between two points on the surface.
- The depth of ice at the South Pole is approximately $2.5~\mathrm{km}$.

# Model Inputs

- Density of the asteroid (depending upon its composition), $\rho_{\mathrm{ast}}$
- Velocity of the asteroid at impact, $V_0$
- Angle of entry, $\alpha$
- Initial average temperature of the ice at the South Pole, $T_{\mathrm{in}}$

Other inputs are the percentages of kinetic energy that go into:

- Heating the asteroid, $H_{\mathrm{ast}}$
- Heating the planet (ice), $H_{\mathrm{ice}}$
- Energy of deformation, $E_{\mathrm{def}}$
- Kinetic energy of the ejecta, $E_{\mathrm{eje}}$

# Atmospheric Entrance Model

We calculate how much energy is transferred from the kinetic energy of the asteroid to heating the asteroid via air resistance.
Mass: $M_{\mathrm{ast}} = \rho_{\mathrm{ast}} \pi d_{\mathrm{ast}}^3 / 6$.

Kinetic energy of impact: $\mathrm{KE} = M_{\mathrm{ast}} V_0^2 / 2$.

The kinetic energy converted into heating the asteroid by air resistance is

$$
W = \frac{10^5 C A V^2}{6 \ln 10 \sin \alpha},
$$

where $C$ is a physical constant based on the viscosity and density of air, $V$ is the velocity, $A$ is the cross-sectional area of the asteroid, and the remaining factor accounts for the changing density of air as the asteroid enters the atmosphere. (We derive this formula in the Appendix.)

# Entrance Results

The fraction of kinetic energy turned into heat energy by air resistance would be only $0.02\%$ to $0.15\%$; therefore, the effect of air resistance would not be significant.

# Impact Model

We compute

- the mass of the evaporated water, $M_{\mathrm{vap}}$;
- the size of the impact crater, $d_{\mathrm{cra}}$;
- the mass of ejected debris from the impact, $M_{\mathrm{eje}}$; and
- the approximate Richter value of the shock wave generated by the impact, along with the earthquake intensities felt at certain Southern Hemisphere cities.

We proceed in the following fashion:

Mass: $M_{\mathrm{ast}} = \rho_{\mathrm{ast}} \pi d_{\mathrm{ast}}^3 / 6$.

Kinetic energy of impact: $\mathrm{KE} = M_{\mathrm{ast}} V_0^2 / 2$.

Crater diameter: $d_{\mathrm{cra}} = (\mathrm{KE}/k)^{2/7}$ [Davies 1986, 103], with $k \approx 10^{15}$. (We estimated this proportionality constant from experimental data in Davies on crater size versus the kinetic energy involved in crater formation.)
Volume of crater ejecta: Since crater depth is approximately one-tenth of the diameter [Verschuur 1996, 17], we have $\mathrm{Vol}_{\mathrm{cra}} = \pi d_{\mathrm{cra}}^3 / 40$.

Approximate earthquake forces: These calculations are based on a linear regression (of data gathered from the Cascades Volcanoes Observatory Homepage [1999]) of energy in joules against Richter-scale value. For cities distant from the South Pole, we use an inverse-square law to determine how much energy would reach the city and then calculate the magnitude of an earthquake with its epicenter there, giving an approximate "Richter" value. The distance $d$ is the distance from the South Pole to the city, measured in units of the radius of the epicenter region of a quake (about 1 km). This "Richter" value is an extreme exaggeration and should be taken as an absolute upper bound on the vibration and damage done to a given city.

Epicenter Richter value: $R_{\mathrm{epi}} = \log_{10}(E_{\mathrm{def}}\,\mathrm{KE})/1.4995 - 3.2035$.

Distance "Richter" value: $R_d = \log_{10}(E_{\mathrm{def}}\,\mathrm{KE}/d^2)/1.4995 - 3.2035$.

Mass and volume of water vapor: We assume that all of the heat that goes toward heating the planet and ice evaporates ice. This assumption gives the worst case in terms of the amount of new water vapor introduced into the atmosphere.

Mass vaporized:

$$
M_{\mathrm{vap}} = \frac{H_{\mathrm{ice}}\,\mathrm{KE}}{T_{\mathrm{in}} S_{\mathrm{ice}} + \mathrm{HF}_{\mathrm{ice}} + 100^{\circ}\mathrm{C} \times S_{\mathrm{H_2O}} + \mathrm{HV}_{\mathrm{H_2O}}},
$$

where $S_x$ is the specific heat, HF is the heat of fusion, and HV is the heat of vaporization.

Volume vaporized: $\mathrm{Vol}_{\mathrm{vap}} = M_{\mathrm{vap}} / \rho_{\mathrm{ice}}$.

Mass and volume of liquid water: We suppose that all of the kinetic energy of the asteroid goes into melting the ice.
Mass melted: $M_{\mathrm{mel}} = \dfrac{H_{\mathrm{ice}}\,\mathrm{KE}}{T_{\mathrm{in}} S_{\mathrm{ice}} + \mathrm{HF}_{\mathrm{ice}}}$.

Volume melted: $\mathrm{Vol}_{\mathrm{mel}} = M_{\mathrm{mel}} / \rho_{\mathrm{ice}}$.

# Scenarios

We ran a number of scenarios to determine the worst possible global damage, based on unfavorable assumptions about the energy distribution upon impact, asteroid densities, and asteroid velocities. We also ran a scenario with more realistic assumptions about the distribution of kinetic energy and with density and velocity closer to average for an asteroid of the given size.

First we supposed that the impact velocity of the asteroid would be $30~\mathrm{km/s}$ (an approximate upper bound for asteroid velocity), with a density of $5,000~\mathrm{kg/m}^3$, and that all of the energy would go into vaporizing ice near the Pole. Our model calculates that approximately $3.81 \times 10^{14}~\mathrm{kg}$ of water vapor would be ejected into the atmosphere. This corresponds to $413~\mathrm{km}^3$, or $0.0013\%$ of the frozen ice on the Antarctic continent. Even in this exaggerated scenario, we anticipate that there would be no threat to coastal populations from flooding. On the other hand, this amount of water vapor corresponds to a relatively large increase in atmospheric water vapor ($0.74\%$), but mixing with the very cold polar air would lead almost immediately to condensation and then precipitation [Ahrens 1994, 139-141].

If we suppose instead that all the kinetic energy would merely melt the ice (and leave it at $0^{\circ}\mathrm{C}$), we find that $2.94 \times 10^{15}~\mathrm{kg}$ of ice would melt, equivalent to $2,940~\mathrm{km}^3$ of water. But this is only $0.0093\%$ of the ice on Antarctica. It corresponds to an average rise in sea level of $7~\mathrm{mm}$ and would not cause any coastal flooding.
Moreover, most of this water would remain in the crater, and other water is unlikely to travel the more than $1,200 \mathrm{~km}$ to the ocean. + +Another extreme scenario is that the kinetic energy would all be converted to deformation energy (and thus earthquakes), that the asteroid has a density of $10,000\mathrm{kg} / \mathrm{m}^3$ , and that the asteroid is traveling at $35\mathrm{km} / \mathrm{s}$ . This seismic worst-case gives an epicenter magnitude of 11.1 on the Richter scale. This is about 230 times(!) as powerful as any recorded earthquake. Vostok Station (about $1,333\mathrm{km}$ from the South Pole) would feel the shock of a magnitude-7 earthquake, and major Southern Hemisphere cities would feel a shock of magnitude 6. There might be a considerable number of casualties in those cities. + +A more reasonable scenario is the energy distribution given by Chapman [Morrison and Wells 1978, 145-160]: An asteroid striking the moon at $5\mathrm{km} / \mathrm{s}$ would contribute $20\%$ of its energy to heating the target, $20\%$ to heating the asteroid, $50\%$ to deformation, and $10\%$ to the kinetic energy of the ejecta. + +The $20\%$ of the kinetic energy that would go into heating the asteroid would have the effect of turning part of the asteroid into a super-heated gas. The $20\%$ into heating the ice surrounding the impact site would evaporate $36.8~\mathrm{km}^3$ of ice and send the resulting water vapor up into the atmosphere. The $10\%$ to ejecta would eject a portion of still-solid ice into the atmosphere and onto Antarctica. A crater $35~\mathrm{km}$ in diameter would be created. + +The remaining $50\%$ of the asteroid's kinetic energy would be converted into energy of deformation and produce shock waves, which in turn would cause earthquakes. The earthquake energy at the epicenter would be equivalent to a 10.4-magnitude earthquake, about 20 times as large as any recorded. At Vostok Station, this would feel like a magnitude 6.4 earthquake. 
Major Southern Hemisphere cities would feel the equivalent of a 5.5-magnitude earthquake. + +In any scenario, the net effect of these shocks to Antarctica would probably be severe cracking of ice and the creation of new icebergs all along the coast of Antarctica. Earthquakes would also cause fatalities in polar research stations. + +Because the epicenter of the earthquake is at least $1,200\mathrm{km}$ from the nearest coast, there is no need to worry about the possibility of tsunamis. + +# Ecology + +Since the depth of ice at the south pole is $2.5\mathrm{km}$ , very little dust from the Earth would be thrown into the atmosphere. Most of the heating would occur in the upper layers of ice, so the rock under the ice would experience fracturing but not vaporize. The air near the impact would be saturated with water vapor [Ahrens 1994, 139-141] and would mix with the very dry and cold polar air. This would produce rapid condensation and formation of clouds. The iron particles thrown into the atmosphere would serve as condensation nuclei, facilitating rain and snow. We can expect the precipitate to settle over much of Antarctica and the Southern Ocean. + +Our model does not incorporate the effects of adding up to $2.6 \times 10^{12} \mathrm{~kg}$ of iron to the world oceans. Coincidentally, Monastersky [1995] writes that addition of iron to the ocean could be a possible solution to global warming by encouraging the growth of phytoplankton that use carbon dioxide in photosynthesis. But a dramatic increase in phytoplankton could lead to the production of methane. At any rate, the theory is uncertain, as is our knowledge of net ecological effects of the fallout. + +# Error Analysis + +There is no way to verify the accuracy of the model results. 
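The scenario figures above can be reproduced from the Impact Model formulas. A sketch in Python; the thermodynamic constants (specific heats, latent heats, and an assumed initial ice temperature of about 35 degrees below freezing) are standard handbook values that the paper does not list, so small discrepancies with the quoted numbers are expected:

```python
import math

def kinetic_energy(density, diameter, velocity):
    """KE = (1/2)(rho * pi d^3 / 6) v^2, per the Impact Model (SI units)."""
    mass = density * math.pi * diameter**3 / 6.0
    return 0.5 * mass * velocity**2

def richter(energy):
    """The paper's regression: R = log10(E)/1.4995 - 3.2035."""
    return math.log10(energy) / 1.4995 - 3.2035

# Seismic worst case: density 10,000 kg/m^3, 35 km/s, all energy to deformation.
KE = kinetic_energy(10_000, 1_000, 35_000)
R_epicenter = richter(KE)             # approx 11.1
R_vostok = richter(KE / 1_333**2)     # approx 7, Vostok Station 1,333 km away

# Vaporization worst case: density 5,000 kg/m^3, 30 km/s, all energy to heat.
S_ICE, HF_ICE = 2_100.0, 334_000.0       # J/(kg K), J/kg (handbook values)
S_WATER, HV_WATER = 4_186.0, 2_260_000.0 # J/(kg K), J/kg (handbook values)
T_IN = 35.0                              # assumed degrees below freezing
per_kg = T_IN * S_ICE + HF_ICE + 100.0 * S_WATER + HV_WATER
M_vap = kinetic_energy(5_000, 1_000, 30_000) / per_kg
print(f"{R_epicenter:.1f}  {R_vostok:.1f}  {M_vap:.2e}")
# approx 11.1, 7.0, and 3.8e14 kg, matching the text's figures
```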
While our sources give similar values for the energy imparted to the Earth by asteroid collision, as well as values for the asteroid temperature after impact, the values are not based on observed energy emission at impact but are estimates of the energy released after impact of asteroids millions of years ago. These sources are likely working upon the same assumptions that we are, but we cannot realistically say how accurate such assumptions are. + +# General Results + +Testing our model for various energy distributions leads us to believe that: + +- There would be no significant melting of the polar ice cap. +- The seismic effects would be very intense on the Antarctic continent. Nearby major cities should anticipate, at worst, the equivalent shock force of a magnitude-5 earthquake. The force of collision near the impact might cause the ice crust to crack and shift, and ice near the edge of the ice shelves could break off and form icebergs. + +- There would be casualties of some polar research scientists, with minimal other casualties elsewhere. +- Water evaporated in the impact would quickly condense and fall out as rain and snow on the Antarctic continent and Southern Ocean. Such condensation would occur relatively quickly, so we do not anticipate a net warming effect such as might occur if another greenhouse gas of the same volume were released into the atmosphere. +- We are uncertain about the effect of iron and nickel fallout on the ecology of the Southern Ocean. + +# Strengths and Weaknesses + +Strengths of the model include a good estimate of an upper bound on physical damage to Earth. It is highly improbable that shock waves or atmospheric effects could be any more severe than we have calculated. The analytic simplicity of the model is another positive feature. 
Weaknesses of the model are the lack of long-term weather and ecological analysis, though the first two portions of the model indicate that the net effect in these areas would most likely be relatively small.

# Appendix: Derivation of Formula for Air Resistance

The formula for the air-resistance force is

$$
R = c \rho A V^2,
$$

where $V$ is the speed of the object, $A$ is the area of its cross-section, $c$ is a dimensionless constant that depends on the form and the surface structure of the object, and $\rho$ is the density of air at sea level. We denote $c\rho$ by $C$; at sea level, $C$ should be about 0.2 to $0.4~\mathrm{kg/m}^3$ for a spherical asteroid.

We seek an upper bound for the kinetic energy converted into heat by air resistance. During the fall from a height of $100~\mathrm{km}$, the speed of the asteroid would increase by only about $1~\mathrm{km/s}$, which is only 4 to $7\%$ of its speed at impact; this small increase would not much affect the trajectory. So we can assume that the asteroid falls in a straight line and that the speed at impact is the greatest speed reached.

The surface level at the Pole is approximately $3~\mathrm{km}$ above sea level, so the air pressure there is somewhat less than at sea level. Hence the work done by the air-resistance force would be even less than we calculate below.

The density of air changes with height, but according to Davies [1986], we need consider air resistance only in the lowest $100~\mathrm{km}$ of the Earth's atmosphere.

For $0$-$100~\mathrm{km}$, the logarithm of the density of air as a fraction of its sea-level value [Gamow and Cleveland 1976] is approximated well by a linear function, so we take the density to be $10^{ad+b}$ times the density at sea level, where $d$ is distance traveled above sea level.

We identify $a$ and $b$.
The 100-km height of the atmosphere is small compared to the Earth's radius of 6,370 km, so we may disregard the Earth's curvature and consider the Earth's surface to be horizontal. Suppose the asteroid enters the atmosphere at an angle $\alpha$ with the horizontal. Then the asteroid must travel $(10^5/\sin\alpha)$ m from the height of 100 km to sea level, so:

- At sea level: $10^0 = 10^{a \cdot 0 + b}$, so $0 = a \cdot 0 + b$, that is, $b = 0$.
- At the top of the atmosphere, a distance of $(10^5/\sin\alpha)$ m from impact along the trajectory, the density of air is $10^{-6}$ of that at sea level, so $10^{-6} = 10^{a \cdot 10^5/\sin\alpha}$; hence $-6 = a \cdot 10^5/\sin\alpha$, and thus $a = -6\sin\alpha/10^5$.

Thus, the air density at a distance $h$ from the point of impact is a fraction $10^{-6h\sin\alpha/10^5}$ of the density at sea level. The air resistance there is

$$
R = 10^{-6h\sin\alpha/10^5}\, C A V^2.
$$

The work done by the air-resistance force along this straight-line trajectory is

$$
\begin{aligned}
W &= -\int_{10^5/\sin\alpha}^{0} R\, dh \\
&= -\int_{10^5/\sin\alpha}^{0} 10^{-6h\sin\alpha/10^5}\, C A V^2\, dh \\
&= \frac{10^5 C A V^2}{6 \ln 10 \sin\alpha}\, 10^{-6h\sin\alpha/10^5} \Bigg|_{10^5/\sin\alpha}^{0} \\
&= \frac{10^5 C A V^2}{6 \ln 10 \sin\alpha} \left(10^0 - 10^{-6}\right) \\
&\approx \frac{10^5 C A V^2}{6 \ln 10 \sin\alpha}.
\end{aligned}
$$

We estimate $0.2 < C < 0.4$ and $A \leq \pi r^2$, where $r = 500~\mathrm{m}$ (since the diameter of the asteroid is $1~\mathrm{km}$). Since both the kinetic energy of the asteroid and the work of the air-resistance force are proportional to $V^2$, the percentage of the kinetic energy that is turned into heat by the resistance force does not depend on the speed of the object.
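The closed form for $W$ can be checked by direct numerical integration of the air-resistance work along the trajectory. A minimal sketch (the values of $C$, $r$, $\alpha$, and $V$ are chosen only for illustration; the ratio being verified is independent of $V$):

```python
import math

C = 0.4                     # kg/m^3, c*rho at sea level
A = math.pi * 500.0**2      # cross-sectional area for r = 500 m
V = 20_000.0                # m/s (any value; W scales with V^2)
alpha = math.radians(30.0)  # entry angle

L = 1e5 / math.sin(alpha)   # path length from 100 km altitude to impact

def resistance(h):
    """Air-resistance force at distance h from the impact point."""
    return 10.0 ** (-6.0 * h * math.sin(alpha) / 1e5) * C * A * V**2

# Midpoint-rule integration of R over the trajectory.
n = 100_000
dh = L / n
W_numeric = sum(resistance((i + 0.5) * dh) for i in range(n)) * dh

W_closed = 1e5 * C * A * V**2 / (6.0 * math.log(10.0) * math.sin(alpha))
rel = abs(W_numeric - W_closed) / W_closed
print(rel)  # tiny: the closed form drops only the negligible 10**-6 term
```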
The angle $\alpha$ cannot be very small, or the asteroid would bounce off the atmosphere like a stone off the surface of water. Table 1 gives the percentage of the kinetic energy of the asteroid that is yielded to air resistance for various values of $\alpha$; the percentage is less than $1\%$ for any entry angle.

Table 1. Percentage $P$ of kinetic energy changed into heat by air resistance, for $C = 0.4$, $r = 500~\mathrm{m}$, and different values of the trajectory angle $\alpha$ (in degrees).
| $\alpha$ | $P$ |
|----------|-------|
| 10 | 0.163 |
| 20 | 0.083 |
| 30 | 0.057 |
| 40 | 0.044 |
| 50 | 0.037 |
| 60 | 0.033 |
| 70 | 0.030 |
| 80 | 0.029 |
| 90 | 0.028 |
# References

Ahrens, Donald. 1994. *Meteorology Today*. Los Angeles, CA: West Publishing Company.
Alaska Seismic Studies Homepage. 1999. http://giseis.alaska.edu/Seis/Input/lahr/lahr.html.
Barnes-Svarney, Patricia. 1996. *Asteroid*. New York: Plenum Press.
Benest, Daniel, and Claude Froeshle, editors. 1998. *Impacts on Earth*. New York: Springer.
Cascades Volcanoes Observatory Homepage. 1999. http://vulcan.wr.usgs.gov/home.html.
Davies, John. 1986. *Cosmic Impact*. New York: St. Martin's Press.
Espenshade, Edward, editor. 1995. *Goode's World Atlas*. New York: Rand McNally.
Gamow, George, and John Cleveland. 1976. *Physics: Foundations and Frontiers*. Englewood Cliffs, NJ: Prentice Hall.
Hansom, James, and John Gordon. 1998. *Antarctic Environments and Resources*. New York: Addison Wesley Longman.
Harte, John. 1993. *The Green Fuse*. Berkeley, CA: University of California Press.
Harte, John, and Allen Goldstein, editors. 1997. *The Biosphere*. Berkeley, CA: University of California Publishing.
Kalinin, Yu.D. 1993. *Udary Gigantskih Asteroidov i Geomagnitnye Yavleniya* [Impacts of Giant Asteroids and Geomagnetic Phenomena]. Krasnoyarsk.
Monastersky, Richard. 1995. Iron versus the greenhouse. *Science News* (30 September 1995): 220-222.
Morrison, David, and William C. Wells, editors. 1978. *Asteroids: An Exploration Assessment: A Workshop*. NASA Conference Publication 2053. Washington, DC: NASA, Science and Technology Information Office.
Resnick, Robert, David Halliday, and Kenneth S. Krane. 1992. *Physics*. New York: John Wiley & Sons.
Verschuur, Gerrit. 1996. *Impact!* New York: Oxford University Press.

# The Sky is Falling!

Daniel Forrest

Garrett Aufdemberg

Murray Johnson

University of Puget Sound

Tacoma, WA 98416

Advisor: Perry Fizzano

# Assumptions

1. The diameter $D$ of the asteroid at impact is $1,000~\mathrm{m}$. Heat and stress while traveling through the Earth's atmosphere would cause some portion to vaporize or burn before impact.
However, for an object this large traveling at speeds typical of cosmic objects impacting the Earth, one can ignore atmospheric deceleration and ablation (loss of mass from the surface of an object due to frictional forces) [Steel 1995, 178].
+2. The asteroid strikes the Earth at the geographic South Pole.
+3. The asteroid is spherical.
+4. The asteroid is homogeneous with uniform density $\rho = 2.5\ \mathrm{g/cm^3}$ ; uniform density allows for simple estimates of the mass. The value of $\rho$ is typical of C-type (carbonaceous) asteroids, which make up the majority of the asteroids in the solar system and are therefore the most likely type to strike the Earth; it is also within the typical range of densities of S-type (stony) asteroids, which make up a majority of the asteroids with orbits that cross the Earth's orbit [Morrison and Owen 1996, 103-132].
+
+# Preliminary Calculations
+
+# Mass of the Asteroid
+
+The mass of the asteroid $(M_{a})$ is its density $(\rho)$ multiplied by its volume $(V)$ . For a spherical asteroid, the mass is given by
+
+$$
+M_{a} = V\rho = \frac{4}{3}\pi \left(\frac{D}{2}\right)^{3}\rho .
+$$
+
+For our asteroid, $D = 1,000\mathrm{m}$ and $\rho = 2.5\ \mathrm{g/cm^3}$ ; thus
+
+$$
+M_{a} = 1.3 \times 10^{12}\ \mathrm{kg}.
+$$
+
+# Upper and Lower Bounds on Impact Speed
+
+A planet's escape velocity $(v_{\mathrm{esc}})$ is the minimum speed that an object must have to escape the planet. It is calculated by determining the change in potential energy caused by moving an object from the planet's surface to "infinity." To escape the planet, the object's initial kinetic energy must be greater than or equal to the change in potential energy. By symmetry, the escape velocity is also the minimum velocity that an object from beyond the planet can have when it reaches the planet's surface. 
Thus, the Earth's escape velocity, $v_{\mathrm{esc}} = 11.2 \, \mathrm{km/s}$ , is a lower bound on the asteroid's impact speed $(v_{\mathrm{imp}})$ .
+
+There is also an upper bound on the impact velocity, "a combination of escape velocity, heliocentric orbital velocity, and the velocity of an object just barely bound to the sun at the planet's orbital position." For Earth, this maximum is $72.8\ \mathrm{km/s}$ [Melosh 1989, 205]. Thus, the impact velocity is bounded by
+
+$$
+11.2\ \mathrm{km/s} \leq v_{\mathrm{imp}} \leq 72.8\ \mathrm{km/s}. \tag{1}
+$$
+
+# Energy Released on Impact
+
+The energy of the collision $(E_{\mathrm{imp}})$ , drawn from the kinetic energy of the asteroid, is
+
+$$
+E_{\mathrm{imp}} = \frac{1}{2} M_{a} v_{\mathrm{imp}}^{2}.
+$$
+
+The impact velocity is bounded and the asteroid's mass is fixed. Applying (1), we have
+
+$$
+8.2 \times 10^{19}\ \mathrm{J} \leq E_{\mathrm{imp}} \leq 3.4 \times 10^{21}\ \mathrm{J}. \tag{2}
+$$
+
+# Effects of Impact
+
+# Crater Size
+
+The crater from the impact would be roughly parabolic in shape, with a diameter of approximately $10\mathrm{km}$ and a depth of approximately $1\mathrm{km}$ [Koeberl and Sharpton 1998]. The pressure in impacts of this sort is so great that the crater forms partially from the vaporization of the target material. At the South Pole, the asteroid would be impacting in ice about $2,600\mathrm{m}$ thick. It takes considerably less energy to vaporize ice than rock or soil, so we expect the impact crater to be larger than similar impact craters in other locations.
+
+# Melting and Vaporization of Antarctic Polar Ice Cap
+
+Could an asteroid impact at the South Pole melt the Antarctic polar ice cap and drastically change global sea levels? The ice cap covers $1.32 \times 10^{13} \mathrm{~m}^2$ with average thickness $2,440 \mathrm{~m}$ [Ronne 1997]. 
Thus, there is $3.2 \times 10^{16} \mathrm{~m}^3$ of ice, with mass $2.9 \times 10^{19} \mathrm{~kg}$ .
+
+At most, the asteroid impact could release $3.4 \times 10^{21}$ J. If all of this energy went into melting ice, how much ice could be melted?
+
+Assuming that the ice is at $0^{\circ}\mathrm{C}$ , it takes $3.33 \times 10^{5} \mathrm{~J}$ to melt $1 \mathrm{~kg}$ of ice [Wilson and Buffa 1997]. So, at most
+
+$$
+\frac{3.4 \times 10^{21}}{3.33 \times 10^{5}} \approx 1 \times 10^{16}\ \mathrm{kg}
+$$
+
+of ice could be melted. This translates to $1 \times 10^{4} \mathrm{~km}^{3}$ of liquid water. The area of the world's oceans is approximately $3.61 \times 10^{8} \mathrm{~km}^{2}$ ; so if the melted water were evenly distributed across the world's oceans, sea level would rise less than $3 \mathrm{~cm}$ . This is not enough to endanger human lives or displace human settlements.
+
+This estimate is an upper bound, since some energy goes into destroying the asteroid on impact; vaporizing part of the asteroid; vaporizing ice; excavating the crater; creating sound, shock, and seismic waves; and heating the air around the impact site. The impact would probably vaporize much of the ice from the impact crater. Assuming that the volume of ice vaporized equals the volume of ice in the largest cone that fits in the roughly parabolic crater, the impact would vaporize $2.6 \times 10^{10} \mathrm{~m}^3$ of ice, or $2.4 \times 10^{13} \mathrm{~kg}$ of ice. The energy required to melt a kilogram of ice, heat the resulting kilogram of water to $100^{\circ} \mathrm{C}$ , and vaporize it is $3 \times 10^{6} \mathrm{~J}$ . Vaporizing so much ice would require $7.2 \times 10^{19} \mathrm{~J}$ , a value within the bounds on the impact energy in (2).
+
+# Earthquakes and the Risk of Tsunami
+
+We can estimate the magnitude $(Q)$ of the seismic disturbance (as measured on the Richter scale) from the formula [Melosh 1989, 67]
+
+$$
+Q = 0.67 \log_{10}\left(E_{\mathrm{imp}}\right) - 4.87. \tag{3}
+$$
+
+The seismic disturbance due to a cosmic impact is not the same as one due to normal seismic activity: impact-generated seismic waves are estimated to do damage equivalent to an earthquake about one magnitude lower than the value that (3) gives for the impact [Melosh 1989, 67].
+
+For our asteroid, equation (3), using the energy range from (2), tells us that the impact would generate a seismic disturbance ranging in magnitude from 8.5 to 9.6 on the Richter scale (Figure 1). Even if the effects are discounted by one magnitude, such an earthquake would cause many human casualties if located in a more-populated part of the world than the South Pole. Here, however, human casualties are negligible, because the continent is mostly uninhabited and because Antarctica is large enough that any damage would be limited to Antarctica.
+
+![](images/3d788628620d95f9604a17affe3c5e66e302bcbff28b98a445d347b80d287137.jpg)
+Figure 1. Seismic magnitude vs. impact speed.
+
+Because the impact is at least $500\mathrm{km}$ from the closest shoreline and $1,500\mathrm{km}$ from most of the shoreline, the risk of a catastrophic tsunami being generated directly is negligible. However, a large percentage of the coast of Antarctica is lined with sheer walls of ice (on the order of $30\mathrm{m}$ in height), and there is a very real danger that the seismic disturbance could cause large fragments to break off, fall into the water, and cause tsunamis. Landslide-generated tsunamis can be large; the 1936 tsunami in Lituya Bay, Alaska, reached a height of $150\mathrm{m}$ [Hamilton 1998a]. However, such waves dissipate quickly and cannot cross the great transoceanic distances associated with earthquake-generated tsunamis. The greatest risk would be to coastal areas on the southern tip of South America.
+
+# Atmospheric Effects
+
+Upon impact, the asteroid would disintegrate. 
Approximately $10\%$ of the mass, $1.3 \times 10^{11} \mathrm{~kg}$ , would be vaporized into submicron particles that would rise to the stratosphere (an altitude of 16 to $48 \mathrm{~km}$ ) and would remain there for months [Steel 1995, 67]. If dust made up of 1-micron particles were spread evenly in a 1-micron-thick spherical layer at height $H$ above the surface of the Earth, it would cover approximately $10\%$ of the surface area of the imaginary sphere and would block $10\%$ of incoming solar radiation. On a very cloudy day, the intensity of light reaching the surface of the Earth is roughly $10\%$ of the intensity on a clear day [Steel 1995, 66]. Even with a $10\%$ drop in intensity, nine times as much light would reach the surface as on a very cloudy day; but over a period of months, such a drop would be significant enough to cause global temperature change.
+
+The ice vaporized on impact would rise into the atmosphere and form clouds. The water vapor in these clouds would eventually fall to earth as rain, increasing the amount of liquid water on the Earth by $8.1 \times 10^{10} \mathrm{~m}^3$ . If it all ended up in the world's oceans, the global sea level would rise only about $0.02 \mathrm{~cm}$ .
+
+# Conclusions
+
+Fear that the ice cap would melt and cause global flooding is unfounded.
+
+Because the asteroid would impact at the South Pole, the dust levels are far less than if the same asteroid impacted in soil and/or rock. Still, enough dust is lifted into the stratosphere to block up to $10\%$ of the sunlight—enough to impact global temperature but far from the threshold where photosynthesis becomes impossible. Reduced light levels and temperature would affect agricultural production, but the impact on the world's food supply would be small; food surpluses in industrialized countries should be able to make up for agricultural losses in other nations. 
+ +The ice vaporized from the crater would form clouds and eventually fall to earth in liquid form. But the volume of the water is not large enough to cause large-scale coastal flooding, unless it all falls in a limited area in a limited amount of time. The dust that is larger than a micron and does not reach the stratosphere could still have detrimental effects, such as acid rain. But our model has no way of estimating the amount, location, or effects of possible acid rain. + +Because the asteroid hits in Antarctica, the death toll directly due to impact is limited to the few hundred researchers stationed there. These casualties could be eliminated by evacuation if there is enough advance warning. + +# Strengths and Weaknesses + +Our model is successful in that we have quantitative estimates of many of the effects associated with the impact, such as the range of possible impact velocities, the range of possible impact energies, the size of the impact crater, the effect of dust raised by impact in the atmosphere, and the magnitude of seismic disturbance generated by impact. Our model is simple enough that all calculations were performed without resorting to a computer. + +The simplicity of our model also brings about some weaknesses. We have no accurate method to estimate how the total impact energy is distributed. We are also unable to determine long-term environmental consequences. Because of the unpredictable nature of atmospheric dynamics, we are unable to develop a model that would show specific locations and amounts of crops affected by dust raised from the impact. Our model predicts no direct loss of human life, but we are unable to take into account human life lost due to effects on food production. + +Our model, while not sophisticated, offers intuitive results. Our estimates of crater size, impact energy, and magnitude of seismic disturbance correlate nicely to other models' predictions, such as those of Hamilton [1998b]. 
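For readers who want to check the arithmetic, the paper's headline numbers follow from a few lines of Python. This is our sketch, not the team's code; every constant is taken from the text above.

```python
import math

# Parameters from the paper
D = 1_000.0                  # asteroid diameter, m
rho = 2_500.0                # asteroid density, kg/m^3 (2.5 g/cm^3)
v_lo, v_hi = 11.2e3, 72.8e3  # impact-speed bounds, m/s
L_FUSION = 3.33e5            # latent heat of fusion of ice, J/kg
OCEAN_AREA = 3.61e14         # area of the world's oceans, m^2

# Mass of a homogeneous spherical asteroid
M = (4.0 / 3.0) * math.pi * (D / 2.0) ** 3 * rho   # ~1.3e12 kg

# Kinetic-energy bounds at impact
E_lo = 0.5 * M * v_lo**2                           # ~8.2e19 J
E_hi = 0.5 * M * v_hi**2                           # ~3.4e21 J

def richter(E):
    """Seismic magnitude of an impact of energy E [Melosh 1989, 67]."""
    return 0.67 * math.log10(E) - 4.87

# Upper bound on melted ice and the resulting sea-level rise
ice_melted = E_hi / L_FUSION                       # kg, if all energy melts ice
rise = ice_melted / 1_000.0 / OCEAN_AREA           # m (water density 1000 kg/m^3)

print(f"mass {M:.2e} kg, Q in [{richter(E_lo):.1f}, {richter(E_hi):.1f}], "
      f"rise {100 * rise:.1f} cm")
```

Running this reproduces the paper's mass of about $1.3 \times 10^{12}$ kg, seismic magnitudes of 8.5 to 9.6, and a sea-level rise of under 3 cm.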
+
+# References
+
+Fishbane, Paul M., Stephen Gasiorowicz, and Stephen T. Thornton. 1993. *Physics for Scientists and Engineers*. Englewood Cliffs, NJ: Prentice Hall.
+Hamilton, Douglas P. 1998a. Notable Tidal Waves of this Century. http://janus.astro.umd.edu/astro/Wave.htm.
+___________. 1998b. Solar System Collisions. http://janus.astro.umd.edu/astro/impact.html.
+Koeberl, Christian, and Virgil L. Sharpton. 1998. Terrestrial Impact Craters. http://cass.jsc.nasa.gov/publications/slidesets/impacts.html.
+Melosh, H.J. 1989. *Impact Cratering: A Geological Process*. New York: Oxford University Press.
+Morrison, David, and Tobias Owen. 1996. *The Planetary System*. Reading, MA: Addison-Wesley.
+Ronne, Finn. 1997. Antarctica. *Encyclopedia Americana*.
+Steel, Duncan. 1995. *Rogue Asteroids and Doomsday Comets*. New York: John Wiley & Sons.
+Wilson, Jerry D., and Anthony J. Buffa. 1997. *College Physics*. Upper Saddle River, NJ: Prentice Hall.
+
+# Judge's Commentary: The Outstanding Asteroid Impact Papers
+
+Patrick J. Driscoll
+
+Department of Mathematical Sciences
+
+U.S. Military Academy
+
+West Point, NY 10996
+
+ap5543@exmail.usma.army.mil
+
+# Introduction
+
+Over the years, the A problem has acquired, upon first reading by contestants, a near-mythological association with challenging mathematical analysis. Upon further discussion and examination by team members, however, the problem typically succumbs to rather straightforward mathematics combined with innovative thinking. The Asteroid Impact problem continued this trend, providing contestants with an opportunity to wrestle with a sophisticated and challenging real-world problem. Despite the wealth of reference material accessible to contestants in reference libraries and on the Internet, the task of clearly identifying the short- and long-term effects of an asteroid impact in Antarctica left plenty of ideas for contestants to explore. 
+
+In past years, the diverse backgrounds of the undergraduate contestants provided teams with an ability to bring more than one discipline's perspective to bear on the problem. This typically resulted in an interesting array of hybrid modeling approaches. This year, however, there seemed to be a convergence to only a handful of approaches despite team demographics. Our speculation is that this effect is in direct response to the astronomical (no pun intended) increase in network connectivity via the Internet.
+
+As many teams discovered during their weekend effort, using the Internet as a source of information in support of their analyses proved to be a two-edged sword. Sites such as those at Sandia Laboratories or the Jet Propulsion Laboratory provided interesting and in some cases accurate and relevant information dealing with asteroid impacts with Earth. Unfortunately, as evidenced in many papers, teams extracting information from these sites without first thinking about and discussing the problem soon found themselves under the spell of the siren, lulled into a mathematical approach that they were not able to bring to successful closure in the time allotted for the competition. Moreover, to judge from their lack of direct supporting documentation and reasoning, in the end they apparently found themselves unprepared to explain clearly and sufficiently the underlying assumptions and reasoning of the mathematics presented in the sites. As in most modeling efforts, this proved a fatal flaw in their papers.
+
+A second general note worth mentioning concerns team strategies for coping with the impending competition deadline. The vast majority of good papers represented a team strategy that, when faced with a decision either to present complicated mathematical analysis on only a portion of the problem or else to attempt a complete modeling effort, chose the latter. 
The exceptional papers contained a judicious amount of both elements woven together to answer the questions posed by the problem. Exactly how this balance was struck varied from paper to paper. However, it was clear that each team had chosen one or two appropriate mathematical techniques (e.g., partial differential equations, kinetic energy modeling, etc.) to develop within the context of a complete modeling effort.
+
+By and large, the exceptional papers provided conclusive evidence that their teams had dedicated a substantial amount of time thinking about the problem prior to starting their quest for supporting information. This choice seemingly enabled them to weigh the cost and benefit of identifying exact modeling parameters versus making reasonable assumptions and working with approximate, in-range values. The many facts directly associated with the problem—such as the geological composition of both the asteroid and Antarctica, the typical source of Earth-bound asteroids, the angle of incidence upon impact, the human population distribution, and atmospheric currents and circulation—mandated adopting such a strategy. Those papers failing to provide evidence of having considered important problem characteristics, whether implicit or explicit, were eliminated from further consideration. At a minimum, it would have been better to identify and explain the impact of a particular feature (e.g., upper atmospheric wind currents) and then to choose explicitly not to include this factor for reasons of mathematical tractability.
+
+Modeling assumptions fall into two broad categories: physical assumptions, which require justification with discussion, and numerical parameter assumptions, which may be justified simply by citation. 
The plausibility and applicability of either type directly depended on how well teams linked a particular assumption to the problem as stated in the MCM, rather than to some problem stated in the reference source document. Regardless of a paper's calculations, a 10-meter instantaneous rise in all of the Earth's oceans is a bit too far-fetched a result for the problem presented, even for the most devout of science fiction followers to accept.
+
+As in past competitions, the need for precise supporting documentation in the body of the report cannot be stressed enough. The exceptional papers all conveyed a clear link to verifiably credible information sources within the body of their paper. Lesser-quality papers showed a reliance on Internet sites for supporting information that failed to include necessary explanations of why certain parameter values were valid and what assumptions their methods were based on. Although the temptation to "cut-and-paste" directly from Internet sources is recognizably strong, doing so most often resulted in a paper that was predominantly statements of unsupported "facts" rather than one showing that the team had a clear understanding of the model. Additionally, dedicating an inordinate amount of time to display the derivation of known relationships (e.g., Newton's law of gravitational attraction) added little value to a paper.
+
+Lastly, the finer papers presented complete summaries, contained few or no grammatical errors, and presented well-designed tables and graphics that illuminated their team's underlying analytical reasoning.
+
+# About the Author
+
+Pat Driscoll is an Academy Professor in the Department of Mathematical Sciences at USMA. He received his M.S. in both Operations Research and Engineering Economic Systems from Stanford University, and a Ph.D. in Industrial and Systems Engineering from Virginia Tech. He is currently the program director for math electives at USMA. 
His research focuses on reformulation-linearization techniques in the context of linear and nonlinear optimization. Pat was the Head Judge for the Asteroid Impact Problem.
+
+# Determining the People Capacity of a Structure
+
+Samuel W. Malone
+
+W. Garrett Mitchener
+
+John Alexander Thacker
+
+Duke University
+
+Durham, NC
+
+Advisor: David P. Kraines
+
+# Summary
+
+Many public facilities are assigned a "maximum legal occupancy," the number of people who may be in the facility at one time. For typical facilities, we consider personal space, evacuation time, and ventilation to determine this number. We present several models of evacuation and flow of people to determine how quickly a given number of people can leave a room or complex of rooms in case of an emergency. We estimate the time for a room to become dangerous when toxins are leaking into the atmosphere, including the carbon dioxide produced by human respiration and by fire. In addition to an emergency situation, we investigate how the ventilation through a room might limit its maximum occupancy.
+
+We expect each person to need 0.5 to $1\mathrm{m}^2$ of personal space. For an elevator or a concert, in which close contact is not considered uncomfortable, smaller values may be used. For a swimming pool, where people need more room to maneuver, we recommend more.
+
+We use three models of flow of people out of a room with a door. One assumes that the flow rate is constant, the second bounds it by a linear function of people-density (people per unit area) in the room, and the third bounds it by a concave-down quadratic function of people-density. In each case, the rate at which people exit is roughly proportional to the combined flow rates of all the doors. A room with a lot of small furniture is similar to an empty room, since people are not heavily restricted in direction of travel; but a room with large furniture that restricts motion is better considered as a complex of connected rooms. 
The space taken up by furniture must be subtracted from the whole when calculating capacity.
+
+A series of rooms can be represented as a graph with nodes for rooms and edges (marked with a flow rate) for doors. For constant flow rate, the Ford-Fulkerson algorithm gives the maximum flow through the room and hence an estimate of the time for any given number of people to evacuate.
+
+For the constant and quadratic bound models, a computer simulation gives consistent results for a complicated cafeteria on campus. Unless there is a bottleneck somewhere inside, the limiting factor on the evacuation rate seems to be the flow rates of the doors.
+
+Once we know how long it takes to evacuate $N$ people, we can back-solve to determine the maximum number of people who can evacuate in time $T$ . The problem is determining how much time to allow for evacuation. Based on the combustion of wood, we estimate that the sample cafeteria would take 2.5 min to evacuate but 2.5 h to fill with carbon dioxide.
+
+Our evacuation models are flexible, in good agreement with each other for the sample buildings we used, and give reasonable times for evacuation. The ventilation model is likewise reasonable and flexible. Although we had to guess many of the parameter values used in the models, we designed experiments to determine some of these parameters. In particular, the estimate of time until fatality for a fire was extremely rough and should be refined.
+
+We recommend that personal space be used as a first estimate of capacity. The evacuation models should then be applied to be sure that there are no bottlenecks. The ventilation system should be examined to ensure that enough fresh air comes in and that the room dissipates heat quickly enough.
+
+# Introduction
+
+Two important factors affect capacity:
+
+- The Emergency Problem: What should the maximum capacity be, in terms of minimizing the time for every occupant to exit without sustaining injury? 
+- The Comfort Problem: How many people can fit in a room, for a given interval, before the room becomes overheated or the carbon dioxide level rises significantly above normal?
+
+We present two models for the emergency problem, both of which give a method for determining the minimum time for a specified number of people to exit a specified structure. Conversely, we use these methods to determine the maximum number of people who can exit a structure in a given period of time.
+
+For the comfort problem, we estimate the maximum number $N$ of people who can comfortably occupy a given space for a period of time $T$ .
+
+To avoid ambiguity, we use the following definitions:
+
+- A structure is an assortment of interconnected spaces, each of which leads to at least one other space or an exit.
+- An emergency is a situation that poses sufficient potential or actual harm to the well-being of the group within a structure to require its complete evacuation.
+- The assumption of orderly movement states that no personal injuries or other accidents occur that affect the minimum time to evacuate the structure.
+- A panic is a situation in which orderly movement does not hold.
+- A room is comfortable if the quality of its air is acceptable and its temperature falls within a specified range.
+
+# Further Considerations
+
+One difficulty in developing a model for the emergency problem is deciding how different types of emergencies affect the rate at which people can exit a given structure. A bomb threat and a fire are both pressing reasons to evacuate a building. Imminent danger of smoke inhalation is more serious than the knowledge that five hours later a bomb may or may not explode; but a bomb threat called in five minutes before detonation could cause a panic that might leave many people injured in the rush to exit, whether or not the threat is real. The dynamics of the exiting processes for each of these situations present distinctly different modeling problems. 
+ +In addressing the emergency problem, we first consider orderly movement and then extend our analysis to what might happen in a panic. + +# Assumptions and Hypotheses + +- The people in our models are adults weighing between 100 and 300 lbs. +- There are no "security guards" or individuals responsible for regulating evacuation. That is, every individual desires to exit the structure as quickly as possible and employs the same process for deciding on the best route. +- The ceilings are of normal height, and the uppermost floor is not extremely distant from ground level (i.e., the rooms are not crawl spaces nor are they penthouses of skyscrapers). +- The time for a person to move from one room to another is negligible compared to the time to evacuate all people from a room. +- The room is in a modern building in a town or city. We do not expect our results to apply to submarines, space stations, or other unusual structures. + +# Personal Space Constraints + +The simplest constraint on the capacity of any room is space. Each person requires about $1\mathrm{m}^2$ ( $9\mathrm{ft}^2$ ) to stand and move around comfortably. So if a room is designed for standing or sitting in an upright chair, an upper bound on the room's capacity is given by its area (less any area occupied by furniture) in square meters. + +In special cases, such as a rock concert or an elevator, in which people are willing to stand closer together, the maximum capacity may allow for only 0.75 or $0.5\mathrm{m}^2$ per person. + +# Evacuation Models + +- How long would it take all the people in a full room to exit? +- What is the risk that someone would be injured during the evacuation? (by being trampled, left in the building, etc.) +- In an emergency, how long do people have to get out of the room? + +To answer these questions, we develop several models of evacuation based on assumptions about kinds of emergencies and how people move through doors. 
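Before turning to the evacuation models, note that the personal-space bound above reduces to one line of arithmetic. The following is our illustrative sketch, not the authors' code; the function name and sample numbers are invented for the example.

```python
import math

def capacity_by_space(area_m2, furniture_m2=0.0, m2_per_person=1.0):
    """Personal-space upper bound on occupancy: usable floor area
    (total minus furniture) divided by the per-person allowance.
    Use 1.0 m^2/person for ordinary rooms, down to 0.5 m^2/person
    for dense settings such as elevators or rock concerts."""
    usable = area_m2 - furniture_m2
    return math.floor(usable / m2_per_person)

# A 120 m^2 dining room with 20 m^2 of tables holds at most 100 people;
# the same floor area packed concert-tight holds twice that.
print(capacity_by_space(120, furniture_m2=20))                     # 100
print(capacity_by_space(120, furniture_m2=20, m2_per_person=0.5))  # 200
```

This bound is only a first estimate; the evacuation and ventilation models below may lower it.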
+ +# The Constant Rate Model + +The constant rate model is based on the following assumptions: + +- A door lets people through at a constant flow rate. +- The time for a person to get in line at a door is negligible compared to the time to evacuate the room. +- Doors do not become blocked during the evacuation. +- People are crowded around each door. Until the room is almost empty, there are enough people standing close to the door to use it to full capacity. When someone exits, the crowd pushes forward to fill the gap. +- People tend to go either to a nearest door or to a door that will allow them to exit the fastest. + +First we analyze a room containing only people; later we add furniture. Similarly, we initially ignore the possibility of a panic. + +# Single Room with One Door + +For a single room with one door, we assume that there are always enough people to use the door to capacity. If the door allows people through at rate $r$ and there are $n$ people in the room, it takes $t = n / r$ time for the room to empty. + +# Single Room with Multiple Doors + +If the room has multiple doors, each person initially goes toward the nearest door. If it becomes clear that one crowd is moving faster than the others, people at the end of slow lines move to the end of the fast line. In this way, all doors are crowded until the room is empty. Suppose that there are $k$ doors with flow rates $r_1, \ldots, r_k$ and that $n_1, \ldots, n_k$ people exit through the doors, respectively. All lines finish at the same time, yielding + +$$ +t = \frac {n _ {1}}{r _ {1}} = \frac {n _ {2}}{r _ {2}} = \dots = \frac {n _ {k}}{r _ {k}}. +$$ + +If we let $n$ be the sum of the $n_i$ , we have + +$$ +n = t r _ {1} + t r _ {2} + \dots + t r _ {k}. +$$ + +Defining $r$ to be the total number of people divided by the total time of evacuation and substituting yields + +$$ +r = \frac {n}{t} = r _ {1} + r _ {2} + \dots + r _ {k}. 
\tag {1} +$$ + +That is, a room with many doors is equivalent to a room with a single larger door whose flow rate is the sum of the rates of all the smaller doors. + +# Subroom and Corridor Decomposition + +Now we consider furniture and other obstacles. First, imagine a dining room with a large number of tables and chairs (see Figure 1). + +![](images/a242e5309669f767d68810b5410b40e1353e7315a67a04c05c49083f3669a351.jpg) +Figure 1. A dining room, view from above. + +The furniture restricts people to certain paths, but the assumptions of the open-room model still hold. People can generally move in whatever direction they want, there is always a crowd at each door, and each door flows at maximum capacity. It is the combined flow rate of all the doors that determines the evacuation time, as in (1). + +Alternatively, obstacles can divide a room into smaller rooms and corridors, a situation that requires a significantly different model. For example, consider a small lecture hall with rows of seats, a table, and several doors (see Figure 2). People would likely walk between the chairs rather than leap over them. So, the single room is broken up by the furniture into smaller "subrooms" and "corridors," as shown in Figure 3. This situation is different from the dining hall because the furniture of a lecture hall more severely restricts the directions people can move in. A person in the hall must first exit a row of seats, then go down one of the outside aisles. If one end of an aisle is blocked, it takes longer for the last person on that aisle to exit the room. In the dining hall, a blocked passageway is less critical because there are so many other passages. + +![](images/3eb351686b4ba6b38313b18d5a6f301dc968c314de72033a0219e5c44d65bcd0.jpg) +Figure 2. View of a lecture hall from above. + +![](images/ebab5aa643b9eea8f21db7799835b69b373303f4ff73ed7100752e2e6171d14f.jpg) +Figure 3. Corridors of movement (in gray) in the lecture hall. 
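The equal-finishing-time argument behind equation (1) is easy to turn into code. The sketch below is ours, not part of the paper; the door rates in the example are hypothetical.

```python
def evacuation_time(n_people, door_rates):
    """Constant-rate model, multiple doors: crowds keep every door
    saturated and all queues finish together, so the room behaves
    like one door whose rate is the sum of the individual rates."""
    r_total = sum(door_rates)                # people per second, all doors
    t = n_people / r_total                   # time for the room to empty
    n_by_door = [t * r for r in door_rates]  # head count exiting each door
    return t, n_by_door

# 300 people, three doors passing 1.5, 1.0, and 0.5 people per second:
t, split = evacuation_time(300, [1.5, 1.0, 0.5])
print(t, split)  # 100.0 seconds; 150, 100, and 50 people respectively
```

Note that the per-door head counts are proportional to the door rates, exactly as the equal-time condition requires.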
+ +Once a room has been broken up into subrooms and corridors, it is useful to think of them each as being separate rooms with doors connecting them, and the evacuation problem becomes one of evacuating a whole complex of rooms (see Figure 4). The diagram can be simplified somewhat by combining doors that lead to the same place as in (1). In this case, the exit doors can operate at maximum capacity the whole time, so the time for evacuation is determined entirely by their combined flow rate. + +For a more complex example, consider the cafeteria floor plan shown in Figure 5 (this is based on an actual building on campus). Most rooms are connected by open arches that function as doors with large flow rates. The + +![](images/0cf83d4a6af9585d7e9627fd22f97c5953631b187fe51e1ad333cec00a874bfb.jpg) +Figure 4. Schematic diagram of subrooms of the lecture hall. Circles represent subrooms, lines represent passage from one subroom to the next, and ground symbols represent doors leading to the outside. Each subroom is marked with how many people are in it and each connection is marked with how many people per second can flow through it. + +cafeteria reduces to the schematic diagram shown in Figure 6. Here it is not so clear that the flow rate of the four exit doors determines the evacuation time, although our simulations and a method that we will describe show that this is in fact the case. If we had a large room connected to a lobby by a single small door, and a large door connecting the lobby to the outside, the evacuation time would be more dependent on the flow of people into the lobby. In other words, sometimes a small interior door is a bottleneck, but sometimes it is not. For a complicated network like the cafeteria, whether or not there is an interior bottleneck is not immediately apparent. + +# Maximum Flow Model + +Curiously, the evacuation problem for a complex of rooms can be solved by ignoring the numbers of people in the rooms. 
Suppose that people constantly flow out of the complex and other people emerge inside at the same rate (think of people falling out of the ceiling as fast as other people exit). The rooms will have constant numbers of people, since people are replaced as fast as they leave. The problem is to find the flow rate of people through a complex. + +The Ford-Fulkerson algorithm finds the maximum flow through a graph. Suppose that in a directed graph each connection has a known maximum capacity (e.g., people per second who can pass through a crowded door). One of the + +![](images/702df082c6937bc0321f9766728b65e81a65bcf3652a868e435a6182dce07f98.jpg) +Figure 5. A large cafeteria viewed from above. + +![](images/a089b52d0fb2fbcc7de6afffebd2a54d9ba72898906e0942b63cc080041423d0.jpg) +Figure 6. Schematic diagram of the cafeteria. + +nodes is designated the "source" (people falling from the ceiling) and another is designated the "sink" (the outside). We assign to each connection the actual flow through it. Such an assignment can be improved if there is a path from source to sink in which the flow through every connection can be increased. An assignment is maximal if there is no such path. The Ford-Fulkerson algorithm looks at all possible paths until no improvements can be made. + +The time for $n$ people to leave the building can be estimated by dividing $n$ by the maximum flow. To use the Ford-Fulkerson algorithm on a room graph, we must add two nodes: a source is connected to all rooms with lines of infinite capacity; and a sink node representing the outside is connected to all exits from the complex, with connection capacities equal to those of the exit doors. + +For a continuation of the cafeteria example, see Figure 7. This graph is marked with a maximum flow. The flow cannot be improved because all the connections leading to the sink are at their maximum. 
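The search for augmenting paths can be sketched compactly; the version below finds each path by breadth-first search (the Edmonds-Karp variant of Ford-Fulkerson), and the two-room complex at the bottom is a hypothetical illustration, not one of the buildings discussed in the paper:

```python
from collections import deque, defaultdict

def max_flow(capacity, source, sink):
    """Ford-Fulkerson with BFS path selection: push flow along augmenting
    paths in the residual graph until none remains."""
    residual = defaultdict(lambda: defaultdict(float))
    for u in capacity:
        for v, c in capacity[u].items():
            residual[u][v] += c
    total = 0.0
    while True:
        # breadth-first search for any source-to-sink path with leftover capacity
        parent = {source: None}
        queue = deque([source])
        while queue and sink not in parent:
            u = queue.popleft()
            for v, c in list(residual[u].items()):
                if c > 1e-9 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            return total  # no augmenting path: the assignment is maximal
        # walk back from the sink, find the bottleneck, and update residuals
        path, v = [], sink
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        push = min(residual[u][v] for u, v in path)
        for u, v in path:
            residual[u][v] -= push
            residual[v][u] += push
        total += push

# Hypothetical two-room complex; capacities are door rates in people/s.
# The super-source feeds every room over a line of infinite capacity.
INF = float("inf")
rooms = {
    "source": {"A": INF, "B": INF},
    "A": {"B": 1.0, "outside": 2.0},   # interior door and an exit
    "B": {"outside": 1.5},             # a second exit
}
print(max_flow(rooms, "source", "outside"))  # -> 3.5, the summed exit rates
```

Because every room is fed by an infinite-capacity line from the source, the minimum cut in this small example consists exactly of the exit doors, which is why the flow equals their combined rate.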
The figure confirms that the rate of evacuation is determined by the flow rate of the exit doors; in other words, there are no internal bottlenecks. The same technique can be applied to any room graph.

# Quadratic Rate Model

# Motivation for the Negative Quadratic Model

A linear rate model proposes that the rate at which people can exit a room, $f(t)$, is bounded by a linear function of the number of people in the room. The evacuation problem can be stated as:

$$
\text{maximize} \quad \int_{0}^{T} f(s)\, ds,
$$

![](images/5bc5b18946060da4bb2cbf16f7dad454dfdfe8f732fa5ad712f83f5286b7823d.jpg)
Figure 7. The graph for the Ford-Fulkerson algorithm for the dining hall. The $+$ node represents the source and the bull's eye represents the sink.

that is, maximize the number of people who can evacuate in time $T$,

$$
\text{subject to} \quad 0 < f(t) < a \int_{t}^{T} f(s)\, ds + b, \qquad \text{for } 0 < t < T.
$$

The integral gives the total number of people evacuated by time $T$ minus the number of people who evacuated up to time $t$; in other words, it is the number of people in the room at time $t$.

The linear model represents the situation where the number of people in the room has a "forcing" effect on the flow rate through the exits; the greater the constant $a$, the greater the forcing effect. The constant $b$ represents the normal rate of flow when the forcing effect is negligible. Assuming that people exit the room in an efficient and orderly manner, the linear model hypothesizes that the maximum flow rate out of the room increases linearly with the number of people in the room.

However, this model does not take into account that for sufficiently large flows, the capacity function representing the upper bound of the flow rate should decrease to zero.
The evacuation dynamics of an emergency require an upper bound model that takes large flow values into consideration: When, other than an emergency or a panic, would such large flow values occur and evacuation time be more crucial?

# Developing the Negative Quadratic Model

We pose a model that assumes that the upper bound of the flow rate is a negative quadratic function of the number of people in the room at time $t$. The evacuation problem becomes:

$$
\text{maximize} \quad \int_{0}^{T} f(s)\, ds
$$

$$
\text{subject to} \quad 0 < f(t) < q - r \left(\int_{t}^{T} f(s)\, ds - p\right)^{2}, \qquad \text{for } 0 < t < T. \tag{2}
$$

The maximum flow rate $q$ occurs when the room is occupied by an optimal capacity $p$ of people. The motivation for the negative quadratic rests on two assumptions:

- The upper bound decreases when the number of people in the room is substantially less than $p$, because the time for people to walk to and through the exit becomes nonnegligible compared to the total time required to evacuate all people from the room.
- Conversely, when the number of people in the room noticeably exceeds $p$, the jostling, discomfort, and limitation of movement that occurs reduces the flow rate through the exits.

The value of $p$ for a room depends on its floor space $A$ and a critical density $d$ (the number of people per area beyond which impediment to motion increases and flow efficiency decreases), with $p = Ad$. We assume that $d = 0.75$ people/ft².

To solve the evacuation problem using the quadratic model, we assume that maximum flow occurs. The constraint (2) becomes

$$
f(t) = q - r \left(\int_{0}^{T} f(s)\, ds - \int_{0}^{t} f(s)\, ds - p\right)^{2}, \qquad 0 < t < T.
$$

Differentiating both sides twice with respect to $t$ leads to

$$
f''(t)\, f(t) - f'(t)^{2} + 2 r f(t)^{3} = 0.
$$

Using the initial values $f(T) = q - rp^2$ and $f'(T) = 0$ and the package Maple, we get the following solution for the flow rate out of the room at time $t$:

$$
f(t) = \frac{q - rp^{2}}{\cos^{2}\big((t - T)\sqrt{-qr + r^{2}p^{2}}\big)}.
$$

From this result, we compute the maximum number of people $N$ who can exit the room in a time interval $T$:

$$
N(T) = \int_{0}^{T} f(t)\, dt = \frac{-\tan\left(T\sqrt{r(-q + rp^{2})}\right)(-q + rp^{2})}{\sqrt{r(-q + rp^{2})}}.
$$

Solving for $T$, we have

$$
T(N) = \frac{\arctan\left(\dfrac{-N\sqrt{r(-q + rp^{2})}}{-q + rp^{2}}\right)}{\sqrt{r(-q + rp^{2})}}.
$$

# The Relevance of the Negative Quadratic Model

In a panic, some people may sustain injury, fall down, or disrupt the flow of the crowd. Our justification for the quadratic model assumes something similar: People packed together at a density greater than the critical density slow each other down in their attempt to evacuate a room. The difference between the impediments to flow caused by crowding and the impediments caused by panic is one of degree.

To illustrate the predictions of the negative quadratic model, consider a room of size $A = 1,000$ square feet and suppose that the optimum flow rate is $q = 90$ people/min, that optimum flow occurs with $p = Ad = (1000)(0.75) = 750$ people, and that we have $T = 6$ min to evacuate. We take the value for $r$ to be $a/p^2 = .01/(750^2) = 1.8 \times 10^{-8}$. Doing so yields $N(6) = 540$ for the quadratic model and $N(6) = 557$ for the linear model. It makes sense that these numbers are not too far apart, since we are not dealing with an extreme case where the number of people evacuated greatly exceeds or undercuts the critical value $p$. When $p$ does not deviate significantly from $Ad$, this will usually be the case.
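The base-case figure $N(6) = 540$ can be reproduced from the closed form for $N(T)$. For these parameter values $r(-q + rp^2) < 0$, so the square root is imaginary and the tangent turns into a hyperbolic tangent; complex arithmetic handles this automatically. A sketch, not part of the original solution:

```python
import cmath

q, p = 90.0, 750.0                    # peak rate (people/min), optimal occupancy
r = 0.01 / p**2                       # forcing constant, as chosen in the text
w = cmath.sqrt(r * (-q + r * p**2))   # imaginary for these parameters

def N(T):
    """Maximum number of people who can exit in T minutes."""
    return (-cmath.tan(T * w) * (-q + r * p**2) / w).real

print(round(N(6)))  # -> 540
```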
However, if we set $p$ , for example, to 10,000 and calculate as above with all else held constant, we get $N(6) = 501$ for the quadratic model; if we set $p = 100,000$ , we get $N(6) = 195$ . The negative quadratic model suggests that efforts of a packed crowd to evacuate may actually decrease the number of people evacuated, by causing injuries and inefficient flow. + +# Limitations of the Negative Quadratic Model + +The negative quadratic model is designed to model the evacuation of a space, not of an entire structure. Applying it to a cafeteria on our campus gave results that agree with the constant rate model. An extension of our project would be to simulate a variety of panic situations using the negative quadratic model, the linear model, and the constant rate model and compare the results. + +Our simulation works by computing the probability that a person leaves a room at a given time step. The quadratic model breaks down by giving zero or negative probability when the number of people inside is small, so the program switches to a linear model when there are 10 or fewer people in a room. + +We estimated $p$ and $d$ . Since the results from the model depend heavily on the values of these parameters, it is important to estimate them accurately. + +# Ventilation Models + +Comfort level is another consideration for maximum capacity: + +- The temperature should be between $65^{\circ}$ and $90^{\circ}$ F. In particular, the ventilation system should be able to dissipate the heat produced by the bodies of the people inside. +- Toxins in the air should be kept to harmless levels. The only one likely to apply to all situations is carbon dioxide $\left(\mathrm{CO}_{2}\right)$ , produced naturally by human respiration. Jones [1973] recommends that the $\mathrm{CO}_{2}$ level should be below $0.1\%$ ; at $8\%$ , it can be fatal. +- If smoking is allowed, additional circulation must be allowed for. 

Human bodies produce heat at a rate from $60\mathrm{W}$ (asleep) to $600\mathrm{W}$ (strenuous activity), with $100\mathrm{W}$ for moderate activity [Jones 1973]. Heat dissipation from a room depends upon its insulation, windows, and any air conditioning. Rooms that are used for several hours at a time should be able to dissipate $100\mathrm{W}$/person so that the temperature remains roughly constant.

Jones [1973] recommends at least $0.2\ \mathrm{l/s}$ per person of fresh air, to dilute the $\mathrm{CO}_{2}$ concentration and unpleasant odors, and $25\ \mathrm{l/s}$ if smoking is allowed.

The fraction of oxygen in the air can decrease to $13\%$ before it becomes dangerous, so the presence of toxins is the limiting factor [Jones 1973]. In a tightly enclosed space, the $\mathrm{CO}_{2}$ produced naturally by human respiration becomes important. A normal human breath is about $500~\mathrm{cc}$, $4.1\%$ of which is $\mathrm{CO}_{2}$, and the breath takes $4\mathrm{s}$ [Hughes 1963]. Thus, humans produce $\mathrm{CO}_{2}$ at a rate of $5\times 10^{-3}\ \mathrm{mol/s}$.

Given a room of volume $V$, the number $N$ of moles of air is given by the gas law $PV = NRT$, where $P$ is pressure, $T$ is the room temperature in kelvins, and $R$ is the gas constant. Denote by $r$ the constant rate (in moles per second) of creation of a toxin, by $q$ the fraction of the air that is toxic, and by $t$ the elapsed time. Then we have

$$
qN = rt \quad \text{or} \quad t = \left(\frac{qV}{r}\right)\left(\frac{P}{RT}\right). \tag{3}
$$

At room temperature and a pressure of 1 atmosphere, $P/RT = 41.4\ \mathrm{mol/m^3}$. Substituting for $q$ the lethal concentration of the toxin yields as $t$ the time for the toxin to reach it.

Consider, for example, an elevator $3\mathrm{m}$ by $3\mathrm{m}$ by $3\mathrm{m}$ carrying 12 people that becomes stuck and is somehow completely air-tight. The people take up about half its volume.
Using (3), we find that in $2.5\mathrm{h}$ the $\mathrm{CO}_{2}$ level reaches $8\%$. Hence, we might limit the capacity of the elevator by the time that it takes to get a rescue crew in to open it up. However, elevators are usually well vented, so $\mathrm{CO}_{2}$ buildup will normally not be a significant constraint.

# Swimming Pools

For an outdoor swimming pool, evacuation is not much of a consideration. For an indoor swimming pool, evacuation is basically the same as for an open room. People can exit the pool itself on all sides, except for weaker swimmers who may have to use a ladder, and then flow through the exit doors.

Personal space is the important safety issue. In the water, people must move their arms and legs over a greater range of motion to maneuver than in walking on land. Many swimming strokes limit a swimmer's vision and make collisions more likely. Some swimmers wear floats, which take up additional space.

We recommend $3\mathrm{m}^2$ per swimmer, giving each swimmer $1\mathrm{m}$ in all directions to move. A large space should be left open around diving boards and slides, perhaps a circle of $4\mathrm{m}$.

# Capacities for Elevators

Elevators usually have very wide doors and hold only a few people. Thus, evacuation time is negligible in case of an emergency. (The real time constraint will be getting the people down the stairs and out of the building, a problem that is similar to the room problem.)

We already considered the limitations imposed by possible lack of fresh air. More important factors would seem to be weight and space. Elevators have a weight limit supplied by the manufacturer, and a simple elbow-room constraint of $0.5\mathrm{m}^2$ per person should provide sufficient personal space.

# Strengths and Weaknesses

Our models are fairly robust, with the negative quadratic model being a more realistic tool than the linear model, since the former more accurately simulates panic.
However, the negative quadratic model yields questionable results for large values of room occupancy.

# Recommendations

For the negative quadratic model, we could extend an analysis of how to determine the value of $p$ to a more comprehensive understanding of how the "forcing effect" operates to slow the evacuation of a panicked crowd. Also, we could develop techniques for measuring the value of the critical density $d$, such as observing how many people can evacuate a building in different time intervals $T$ and using those data to estimate the critical value at which maximum flow occurs.

To improve our analysis of the comfort problem, we could develop better ways to estimate how long it takes a room to become overheated or stuffy.

# Appendix: Computer Simulation

To test the evacuation times of complexes of rooms, we wrote a simulation engine in Python. Object-oriented programming techniques allow us to use different kinds of doors (always open, sometimes blocked, variable flow rate, etc.) and different strategies of selecting a path out of the building with the same structural models. Each door has a queue of people waiting to get through. At each time step, all the doors "warp" some number of people into the next room. Then everyone in line is given the opportunity to move to a different queue, based on their perception of the room. A special room object is designated the "outside" and throws an exception to halt the simulation when a specified number of people have arrived outside. A class diagram for the simulation is given in Figure A1.

![](images/0a08ed7b794bc539f66d0e2efc4ffe66cdb28d69350d707c04f03cf8819ce924.jpg)
Figure A1. Class diagram of the simulation in abbreviated UML. Triangles indicate inheritance, hairline arrows indicate "creates."

# References

Francis, R.L. 1984. A negative exponential solution to an evacuation problem. Research Report No. 84-36. Gaithersburg, MD: U.S. Dept.
of Commerce National Bureau of Standards Center for Fire Research. +Hughes, G. M. 1963. Comparative Physiology of Vertebrate Respiration. Cambridge, MA: Harvard University Press. +Jones, W.P. 1973. Air Conditioning Engineering. London: Edward Arnold. + +# Hexagonal Unpacking + +David Rudel + +Joshua Greene + +Cameron McLeman + +Harvey Mudd College + +Claremont, CA 91711 + +Advisor: Ran Libeskind-Hadas + +# Abstract + +We present a model for movement within crowded structures, tessellating a room with hexagons and using a waiting-time function based on the harmonic mean of closest neighbors. We determine the maximum time required for all persons to exit, comparing this time to a target time based on the size of the structure. Our model is very general and its parameters can be modified for several types of buildings. We consider various specific cases, giving the maximum occupancy for each. + +# Assumptions + +- When several people jockey for a vacated position, the probability that one of them occupies it is independent of how long each has spent in his current location. A person who has been waiting longer would seem to have an easier time, but this effect is compensated by the tendency, even in groups of people gathered around an exit, to move in lines. The forward momentum of moving into a new position gives an extra advantage, as the person may well be drafting behind someone else, forming a miniature line weaving through the crowd. Whichever advantage prevails, we posit that it is small enough to ignore. +- People exiting a building generally move so as to decrease their overall expected exit time. Years of selecting optimum shopping lines and struggling to get out of crowded theaters, along with a human's natural ability to see where holes are forming in the crowd, constitute a natural tendency to select paths that minimize the time to exit. 
Even if humans can't see instantly what course + +will have the least resistance, they certainly can ascertain whether any given step will ultimately shorten their expected wait. + +- The average person can quickly accelerate to a speed of least $6\mathrm{ft/s}$ . Normal walking speed is $4\mathrm{ft/s}$ , so a quick acceleration to $6\mathrm{ft/s}$ is feasible. The value of this parameter can be changed for kindergarten auditoriums, retirement homes, and other structures housing those likely to have less robust locomotion. +- When people clump together in attempting to leave a structure, they are packed loosely enough to assign each a cell 1.4 ft in diameter. While one could theoretically line people up in columns and pack them, standing still, into cells a bit smaller, greater space must be allowed for moving chaotic masses. The mechanics of the model depend very little on the size of the cells. +- Movable furniture does not block an exit, though it may be in the immediate vicinity of the exit and thus affect the rate of egress. We allow in the model for tables and other objects to be very close to doors. The safety code provides that doors cannot be blocked by such items, as time for their removal is so prohibitively high as to seriously depreciate the maximum occupancy. We do treat the possibility that one exit (in a multi-exit facility) can become blocked. + +# Practical Considerations + +It is unclear what the target exit time should be. The bulk of our modeling determines exit times based on the parameters of a structure. We then give the maximum occupancy for various exit times. + +# Points That Must Be Considered + +- The number of people exiting a facility during a crowding action is not necessarily the same as the maximum number of people who can leave through doors in orderly lines. To arrive at the total time for evacuation, one cannot simply divide the number of people in a room by how many can go through a door in a given time. 
+- The movements of individuals leaving a building are made individually, based on the position of the person and openings available. + +# Definitions + +We tessellate the room with regular hexagons, each 1.5 ft along the diagonal (making them 1.299 ft from side to side, with side length 0.75 ft). These represent cells that people occupy while they are in a mass attempting to leave a building. + +Our model has a single exit but is easily extendable to more exits. For simplicity, we assume a rectangular room with the door on the north wall (the top wall in all figures). + +The orientation of the tessellation makes very little difference in the time calculation, as it is only an abstraction allowing for algorithmic-based movement toward positions of greater desirability. + +We define several terms: + +- The neighbors of a hexagon are those six hexes (or fewer in the case of border hexes) with which it shares sides. +- An allowed movement is a movement from a hex to any of its neighbors. +- The radius of a hex is the minimal number of allowed movements to take a person at the hex to the door. +- A level curve for a radius $R$ is the collection of all hexagons with radius $R$ . +- Two hexes are isoradial if they have the same radius. +- A good neighbor for a given hex is a neighbor with a smaller radius. +- The good-neighbor number of a hex is its number of good neighbors. +- A desirable neighbor for a hex is either a neighbor with a smaller radius (a good neighbor) or a neighbor on the same level curve with more good neighbors. The inherent geometry makes some hexes on the same level curve worse than others in terms of waiting time. If we design our model so that people want to go only to hexes of smaller radius, then they will not move toward these better hexes of the same radius. Giving each hex a radius and good-neighbor number accommodates this kind of move. +- The inherent waiting time of a hex is how long it takes to traverse it. 
This is independent of the vacancy or occupancy of neighboring cells. This parameter can be varied to model situations where the terrain makes moving difficult or has a tendency to cause accidents.
- The actual waiting time of a hex is the expected amount of time one spends in the hex given the inherent waiting time and the waiting times and competition for neighboring hexes.
- The equivalent waiting time of one hex with respect to another is the actual waiting time multiplied by the number of people competing for the hex.
- The expected exit time of a hex is the sum of the waiting times of the hexes that form the minimal path to the exit.
- A click is the basic unit of time for people fleeing a room. It is based on the type of door being used. The click is the time it takes for a single person to leave a single hex next to a door. Thus, if a door were 3 hexes wide and could let out 6 people/s, a click would equal $0.5 \mathrm{~s}$.

# Constructing the Model

We assign each hex an inherent waiting time based on the expected time to traverse it. When a position becomes available, the time to fill the opening is certainly nonnegligible. The base inherent waiting time of a hex that is otherwise free of obstacles and of danger of accident is set to $0.25\mathrm{s}$, the time it takes to move 1.5 ft (the width of the hex) at the standard pace of 6 ft/s. Tables and other obstacles can be modeled as hexagons with higher waiting times. (One can jump over a row of seats in a theater, but it takes longer, there is an increased chance of tripping, and so on.)

To illustrate this step in the model, see Figure 1, our representation of a theater. The chairs are represented as hexes with a waiting time of 1 s. Figure 2 illustrates level curves and good-neighbor numbers for a set of hexes.

![](images/8f42aeaefdbb50fbaeafa95e65db913fa6af6a95f7092dbe8c4a7b8c91f65820.jpg)
Figure 1. Intrinsic waiting times (in seconds) for cells in a hypothetical theater environment.
Cells with value 0.25 represent free space; those with value 1.0 (shaded) correspond to the (fixed) seating.

After inherent waiting times are assigned, we determine how much time to assign to one click. A standard $7\ \mathrm{ft} \times 2.5\ \mathrm{ft}$ door takes up two hexes and can exit 3 people/s, so its click time would be $0.67\mathrm{s}$. In general,

$$
\text{click time} = \frac{\text{width of door}}{\text{total outflux}},
$$

where the width is in hexes and the outflux in people/s. This click time is the maximum speed of egress.

Hexes adjacent to the doors (those with radius 1) are assigned an actual waiting time of 1 click; this represents the expected actual waiting time of someone right next to the door. Some hexes with radius 2 have only one hex

![](images/38170ff873e60a376921151f51a93c82aa74c856659f386690c9e48270688d0a.jpg)
Figure 2. Cells numbered by their level. Cells with one good neighbor are shaded; all other cells are either adjacent to the exit or have two good neighbors.

next to them with a smaller radius, while others have two; the hexes with smaller radii are exit hexes. Hexes with only one neighbor of smaller radius have a good-neighbor number of 1 and are less preferred than those with two neighbors of smaller radius (they have a good-neighbor number of 2). The general rules of movement can be summarized as follows:

- Of two hexes with different radii, the hex of smaller radius is preferred.
- Among isoradial hexes, those with more good neighbors are preferred.

We consider all the hexes of greatest preference and determine the actual waiting time of these. Then we compute the actual waiting time of the next most preferred set of hexes. We thus work our way out from the door (since radius takes precedence over good-neighbor number).
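The radii and good-neighbor numbers that drive these preference rules can be computed by a breadth-first search outward from the door. The sketch below uses axial hex coordinates; the rectangular patch of hexes and the one-hex door position are hypothetical:

```python
from collections import deque

# The six neighbors of hex (q, r) in axial coordinates.
OFFSETS = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, -1), (-1, 1)]

def radii(hexes, door_hexes):
    """Radius = fewest allowed movements to the door; hexes in front of the
    door start at radius 1, and a BFS works outward from there."""
    radius = {h: 1 for h in door_hexes}
    queue = deque(door_hexes)
    while queue:
        q, r = queue.popleft()
        for dq, dr in OFFSETS:
            n = (q + dq, r + dr)
            if n in hexes and n not in radius:
                radius[n] = radius[(q, r)] + 1
                queue.append(n)
    return radius

def good_neighbor_numbers(hexes, radius):
    """A good neighbor is a neighbor with strictly smaller radius."""
    return {
        (q, r): sum(
            1
            for dq, dr in OFFSETS
            if (q + dq, r + dr) in radius
            and radius[(q + dq, r + dr)] < radius[(q, r)]
        )
        for (q, r) in hexes
    }

# Hypothetical 5-by-4 patch of hexes with a one-hex door at (2, 0).
room = {(q, r) for q in range(5) for r in range(4)}
R = radii(room, [(2, 0)])
G = good_neighbor_numbers(room, R)
print(R[(2, 3)], G[(2, 3)])  # -> 4 1
```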
+ +Since the actual waiting time of a hex is based only on the actual waiting time of its desirable neighbors, the competition for these hexes, and its own inherent waiting time, we never need to know the actual waiting time of a hex less desirable than the one that we are working on. + +# Computing the Actual Waiting Time + +# Significant Factors + +For each desirable neighbor, we compute an equivalent waiting time by multiplying the actual waiting time of the neighbor by the number of hexes vying for the desirable neighbor. If three people are all attempting to get a certain hex, then the equivalent waiting time for all three is three times the actual waiting time of the hex, since each of the vying hexes has a one-third + +chance of gaining the desirable hex each time it empties. Thus, competition for hexes tends to slow down a person's progress. However, many hexes have more than one desirable neighbor, just as there are typically more than one near position that someone in a crowd would occupy if given the opportunity. We model the effect of multiple desirable neighbors via the reduced harmonic mean (i.e., the harmonic mean divided by the number of desirable neighbors): + +$$ +\mathrm {R H M} (A, B) = \frac {A B}{A + B} = \frac {1}{\frac {1}{A} + \frac {1}{B}}; +$$ + +$$ +\mathrm {R H M} (A, B, C) = \frac {A B C}{A B + B C + A C} = \frac {1}{\frac {1}{A} + \frac {1}{B} + \frac {1}{C}}. +$$ + +# Justifying Use of the Harmonic Mean + +- We can model the actual waiting times of the desirable neighbors as resistors. The higher the actual waiting time, the longer it takes to shove the same current through a wire. When we have two (or more) desirable neighbors to use as conduit, they combine as resistors in parallel. This is precisely the same function as the reduced harmonic mean. +- All that concerns us is the amount of time that we expect to stay in the given hex. 
If one hex is open to us once every $A$ clicks, and another is open once every $B$ clicks, then every $AB$ clicks there are $B$ openings from the first and $A$ from the second; on average, an opening appears once every $AB/(A+B)$ clicks, which is just the reduced harmonic mean.

# The Final Factor

After the reduced harmonic mean of the equivalent waiting times of the various desirable neighbors is computed, the inherent waiting time assigned to the hex is added. This final value is the actual waiting time for the hex. Figures 3 and 4 illustrate this computation applied to a simple model.

# The Time Factor

For our model to determine maximum occupancy, it must have a target time. We decided to trust the actual posted occupancy maximums on simple structures with very few obstacles. A curve that fits these numbers well is the power curve $T = 0.4A^{3/4}$, where $T$ is the target time and $A$ is the area of the building in square feet.

![](images/b9360d858d5beab84b3f2398c47c75e8f86f460f36618510aeae3a274a1d4f08.jpg)
Figure 3. Illustration of the algorithm to compute the waiting time of cell (*) in a rectangular room with a single exit at the top, three cells wide. We first identify its good neighbors (cells (1) and (2)) and any neighbors on its level curve with a greater number of good neighbors (none in this case). For each of the cells for which (*) vies, we determine the total number of cells vying for that same spot. In the case of (1), this number is 3 (due to (a), (b), and (*)); for (2), it is 2 ((*) and (b)). The waiting time of (1) is then multiplied by the number of cells vying for it, and similarly for (2). The harmonic mean of these equivalent waiting times is divided by the total number of spots for which (*) vies; this reduced harmonic mean, plus the inherent waiting time of (*), gives the actual waiting time for (*).

![](images/8ab4b2d4b0515b5917a43b7e79e93541144f2889ce56ed9087cea26cee94e2a4.jpg)
Figure 4.
Waiting times for the room of Figure 3.

# Testing the Model

Consider a bare room, say a gymnasium, with only one exit. If our model is accurate, the time prescribed by the model should be a bit higher than the ratio of the number of occupants to the greatest number able to leave in a given time, higher because of time lost due to people competing for positions, etc.

A standard gymnasium is built as a full-size basketball court (94 ft long) with one full-size volleyball court turned sideways per half (60 ft wide). With buffer space, the size is $80\ \mathrm{ft} \times 110\ \mathrm{ft}$, corresponding to a tiling of $84 \times 53$ hexagons. At the center of a longer side we place the exit: two sets of double doors, 110 in. in total width, or approximately 6 hexes. At the standard rate of 2 people/s per door hex, we get a maximum egress rate of 12 people/s.

Assume that there are 875 people in the building (fewer than the maximum capacity of about 1350 listed for such gymnasiums). The formation of a clump of people takes a while. We use a very elementary dynamical-systems approach to finding how long it takes. Consider a person as far as possible away from the door, $\sqrt{55^2 + 80^2} = 97.1$ ft away. Let $T$ be the time for the aggregate to form when there are initially $P$ people in the gymnasium; this is how long it takes the farthest person to walk freely to the back of the accumulated people. The number of hexes within radius $R$ of the door is a quadratic expression in $R$ with leading coefficient 1.5, so the radius of the farthest one out is approximately $\sqrt{P/1.5}$. Prior to aggregation, people leave at the maximum rate of 12 people/s, so $P(t) = 875 - 12t$. Thus, noting that each hex has a diameter of 1.4 ft (the average of its diagonal and its side-to-side lengths), we have

$$
T = \frac{97.1 - 1.4\sqrt{P(T)/1.5}}{6}.
$$

Substituting and solving gives $T = 10.9$ s.
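The equation defines $T$ only implicitly, but its right-hand side varies slowly with $T$, so simple fixed-point iteration converges quickly. A sketch, using the gymnasium figures from the text and the crowd radius $\sqrt{P/1.5}$ that follows from the stated leading coefficient of 1.5:

```python
def aggregation_time(people=875, exit_rate=12.0, far_dist=97.1,
                     hex_diam=1.4, walk_speed=6.0):
    """Iterate T <- (far_dist - hex_diam*sqrt(P(T)/1.5)) / walk_speed,
    where P(T) = people - exit_rate*T is the crowd size once it forms."""
    T = 0.0
    for _ in range(50):
        P = people - exit_rate * T
        T = (far_dist - hex_diam * (P / 1.5) ** 0.5) / walk_speed
    return T

print(aggregation_time())  # converges to about 11 s, essentially the 10.9 s in the text
```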
Thus, after $10.9$ s, 131 people have left, leaving 744 clumped around the exit.

We tessellate the gymnasium and compute the actual waiting times for the various tiles. We then compute for each hex the shortest expected exit time by finding the shortest path (where shortest here means the path that minimizes the sum of the actual waiting times). We sort these times and find the 744th. This is the time that it should take 744 people to leave the gymnasium. Our model gives approximately $221\mathrm{s}$. This added to the approximately $11\mathrm{s}$ gives a total time of $232\mathrm{s}$ for leaving the building. This total time is three times as long as the $73\mathrm{s}$ predicted by simply dividing the number of people in the room (875) by the number of people able to go through the doors per second (12). We feel that the longer estimate from our model is much more realistic.

# Results of the Model

Table 1 gives the intrinsic waiting times that we assigned to various entities in rooms, and Table 2 gives our results.

We ran the model on several structures, giving an occupancy for various times. Each building was created by modifying the appropriate intrinsic wait times. We give a value for the maximum capacity and an estimated time for the evacuation of the building. Some special cases that we feel could not be accurately modeled include structures that do not have discrete doors: swimming pools, open fields, and so on. A similar algorithm described later can deal with most of these. Since there are reasons other than emergencies for wanting to monitor the number of people in a room, for each situation we have provided two other times and their expected total numbers of exitable people.

One added use of this model for leaving time is to estimate how long it takes a group of people to get out of a maze or a hall of mirrors at a funhouse. This can be simulated by setting the intrinsic waiting time of the hexes to higher levels.
As can be seen in the chart, raising these intrinsic waiting times greatly reduces the number of people who can exit in a given time.

This model also gives a nice way to compare various furniture orientations. We give a model of a theater with its door in the center of the back, along with a model of the same theater with its door in the back corner. Similarly, large and small classrooms are modeled with two different desk configurations: one uses long tables, while the other uses individual desks. As one would expect, the long tables produce a longer expected exit time for a given number of people; consequently, the maximum allowed occupancy is less.

Table 1. Inherent waiting times for various objects.
| Object | Time (sec) |
| --- | --- |
| Free air | 0.25 |
| Theater chair | 1 |
| Maze in hall of mirrors | 3 |
| Table | 0.7 |
| Desk | 1.2 |
| Stall | 20 |
| Sink | 20 |
# Strengths and Weaknesses

The strength of this model lies in its general utility to model a broad range of structures by simply varying a few parameters. It demonstrates well how seemingly minor changes, such as door position or furniture configuration, can change the overall expected exit time.

Another strength of the model is that it gives more than simply the maximum safe occupancy level: It gives a specific time for any occupancy level, so that a user can estimate how long an evacuation should take.

Table 2. Results from the model.
| Room Description | Area (sq ft) | Time (sec) | Max. cap. | Time | No. | Time | No. |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Theater, 15 rows, door at corner | 900 | 63 | 98 | 30 | 54 | 120 | 153 |
| Theater, 15 rows, door at back center | 900 | 63 | 98 | 30 | 54 | 120 | 174 |
| Dance room | 375 | 34 | 83 | 10 | 27 | 60 | 125 |
| Elevator | 60 | 6 | 21 | 3 | 10 | 10 | 26 |
| Large classroom, 4 long tables across | 750 | 57 | 99 | 30 | 57 | 120 | 187 |
| Large classroom, 7 rows of 7 desks | 750 | 57 | 96 | 30 | 55 | 120 | 186 |
| Small classroom, 3 long tables across | 300 | 25 | 51 | 15 | 27 | 45 | 75 |
| Small classroom, 5 rows of 4 desks | 300 | 25 | 44 | 15 | 27 | 45 | 67 |
| Bathroom, 5 stalls and 4 sinks | 200 | 21 | 25 | 10 | 15 | 30 | 33 |
| Hall of mirrors | 1200 | 82 | 32 | | | | |
+ +Implementation requires only a modest microcomputer, and run-time of our program is polynomial in the area of the tessellated region. The code we used took less than one minute of run-time on our computer for each building. + +Limitations of the model include restriction to rooms with only one door, but we address this limitation below. Our implementation is confined to rectangular structures, but this is not a limitation of the model itself. The model does not account for individuals who wish to keep up with certain others in a crowd (family members, etc.). The model also does not use in any way the height of the ceiling; for certain emergencies, the volume of a room may well be more important than the area. + +Possibly the greatest limitation of the model is basing the target exit-time function on existing codes for certain structures. If the simplest buildings cannot be trusted to have an accurate maximum capacity figure, then the target time function must be changed. + +# An Improved Implementation + +A better implementation of the model, though more computationally complex, allows any number of doors (thus allowing for structures such as swimming pools, where the entirety of the border is a door). A hex has several different radii, one for each door, and the smallest radius determines which door to assign it to. The assignment of actual waiting times to hexes starts at the various doors and flows outward from each equally. + +This implementation can be used to model such situations as doors becoming blocked. To model a door becoming blocked at time $t_0$ , we count how many hexes have total exit times less than $t_0$ , subtract this number from the total number, and then remodel the room without the door, using the new (reduced) number. + +# Don't Panic! 
Timothy Jones

Jeremy Katz

Allison Master

North Carolina School of Science and Mathematics

Durham, NC 27705

Advisor: Dot Doyle

# Assumptions and Hypotheses

- People evacuating in a fire always move towards the nearest exit, regardless of which path to an exit is least crowded and what obstacles are in their way. Thus, a room with multiple exits can be treated as several smaller rooms, each feeding one exit.
- People become crowded only at a finite number of "bottlenecks"—points at which a line develops and evacuating people must wait. In all other areas, people can move freely. However, the line at these points occupies a minimal amount of space.
- Individuals all move through open areas (where there are no bottlenecks) at the same constant rate. This rate depends on the type of occupants in the room and the presence or absence of inanimate obstacles.
- We disregard building construction, panic hardware (such as push-bar doors), and alarm systems.
- The time for an individual to move through the line at a bottleneck follows an exponential probability density function. However, the variation among people is relatively small.

# Analysis and Model Development

All of the different reasons to limit the capacity of a building space are more or less independent of one another. For example, during a fire, the sanitation of a room and the amount of weight that a structure can carry are not immediately important; likewise, when the day-to-day health of a room's occupants is considered, the fact that there might one day be a fire is irrelevant. Thus, we calculate maximum occupancies considering each concern independently and then choose the lowest value.

# Simple Rooms in a Fire

In the event of a fire, two elements contribute to the speed with which a room can be evacuated:

- the maximum speed at which the occupants can safely move, and
- the extent of crowding in the room.
+ +We assume that people are free to move except in certain critical areas ("bottlenecks"). At these places, a line builds up, and any individual who reaches the line must wait before moving on to the exit. Thus, in the simplest case—a room with no inanimate obstacles and one exit—the problem can be broken up into two steps: describing how occupants move to the queue at the door, and describing the dynamics of the queue itself. To find how much time is required to evacuate the room, we find the time for the length of the exit queue to drop below 1. + +We define a probability density function $P(x, y)$ for the likelihood that an individual is located within a certain region of the room. The coordinate system has the center of the door at the origin and the wall containing the door along the $y$ -axis (Figure 1). + +![](images/a78f9cc17a7bc69fafe6b3eacb8d2b196d54454ee3035c5c45a32113c7a2a3a8.jpg) +Figure 1. Coordinate system for a room. + +We would like to find a second pdf, $q(t)$ , to describe the probability that any given individual reaches the door within a certain period of time. Then the probability that an individual reaches the door queue within an interval of time $\Delta t$ is + +$$ +\int_ {t} ^ {t + \Delta t} q (t) d t, +$$ + +where $t$ is the time since the alarm first sounded and people began to move to the exits. For simplicity, we call this integral $Q|_t^{t + \Delta t}$ . If there are initially $n$ people in the room, the number of people entering the exit queue over the interval $(t,t + \Delta t)$ is $n\cdot Q|_t^{t + \Delta t}$ and the rate at which people are entering the queue is + +$$ +\frac {n \cdot Q \mid_ {t} ^ {t + \Delta t}}{\Delta t}. \tag {1} +$$ + +Next, we assume that the time that each individual takes to move through the door queue is described by an exponential probability density function of the form $p(t) = \lambda e^{-\lambda t}$ . 
The expected value of the time required for one person to move through the line is $1 / \lambda$ and the average rate at which people leave the queue is $\lambda$. Because we assume that most people take the same amount of time to move through the doorway, the rate of a steady stream of people moving through the doorway is never much more or less than $\lambda$. However, the queue at the door may have so few people that it can empty completely in time $\Delta t$, so that the rate at which it empties may be less than $\lambda$. In other words, if there are fewer than $\lambda \Delta t$ people in the queue at time $t$, then by time $t + \Delta t$ they will all have left; if there are more than $\lambda \Delta t$ people, only some can leave. We therefore express the rate at which people leave a queue as

$$
\rho = \begin{cases} \dfrac{L_{n-1}}{\Delta t}, & \text{for } L_{n-1} < \lambda \Delta t; \\ \lambda, & \text{otherwise}, \end{cases} \tag{2}
$$

where $L_{n-1}$ is the number of people waiting in the queue at time $t$ and $\rho$ is the rate at which people are leaving the bottleneck.

Combining (1) and (2) gives the rate at which the length of the exit queue is changing in the situation of a room with one exit:

$$
\frac{n \cdot Q |_{t}^{t + \Delta t}}{\Delta t} - \rho.
$$

From this, we can write a system of recursive equations (Euler's method) to approximate the length of the line $(L_{n})$ versus time:

$$
t_{n} = t_{n-1} + \Delta t,
$$

$$
L_{n} = L_{n-1} + \left( \frac{n \cdot Q |_{t}^{t + \Delta t}}{\Delta t} - \rho \right) \cdot \Delta t, \tag{3}
$$

where $L_0 = 0$ is the case for a room in which the people are initially distributed throughout the room.

If these equations are iterated until the length of the exit queue is less than 1, we will learn the time required to evacuate the room in an emergency.
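As a sketch, the recursion (3) with the capped departure rate (2) takes only a few lines of Python. Here the per-interval arrival counts $n \cdot Q|_t^{t+\Delta t}$ are supplied externally, and the example numbers are toys, not the paper's measurements:

```python
def queue_length_series(arrival_counts, lam, dt):
    """Iterate L_n = L_{n-1} + (arrivals_n / dt - rho) * dt, where the
    departure rate rho is L_{n-1}/dt while the queue is short, else lam."""
    L, series = 0.0, [0.0]
    for a in arrival_counts:                       # a = n * Q over one interval
        rho = L / dt if L < lam * dt else lam      # eq. (2)
        L += a - rho * dt                          # eq. (3)
        series.append(L)
    while L >= 1.0:                                # arrivals over; queue drains
        rho = L / dt if L < lam * dt else lam
        L -= rho * dt
        series.append(L)
    return series

# Toy example: 10 people all reach the door during the first second.
series = queue_length_series([10.0], lam=2.8, dt=1.0)
```

With $\lambda = 2.8$ and $\Delta t = 1$ s, the queue drains at 2.8 people/s once it has formed; the evacuation time is $\Delta t$ times the index of the first entry below 1.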
However, we still must find a general form for $Q|_{t}^{t + \Delta t}$ in terms of $P(x,y)$. To accomplish this, we divide the room into a set of concentric circles centered at the door, such that any person located within a ring-shaped region of the room defined by two of these circles requires between $t$ and $t + \Delta t$ seconds to reach the door. We then divide each of these regions into $k$ smaller segments (Figure 2), each of which can be defined in polar coordinates (with the door at the origin) (Figure 3). Since each ring corresponds to a certain interval of time, finding the probability that an individual is located within each ring gives the probability that they arrive at the door within that period of time. To approximate this probability, $P(x,y)$ is evaluated at the center of each small wedge-shaped segment and multiplied by the area $\Delta A$ of the segment; all of these probabilities are then summed to give an approximate value of $Q|_{t}^{t + \Delta t}$ for each particular $t$:

$$
Q|_{t}^{t + \Delta t} \approx \sum_{i = 1}^{k} P(x_{i}, y_{i}) \cdot \Delta A. \tag{4}
$$

![](images/31c6dcecfecb04bcc36332a4d444a9583c40c4ddbaa7b62b9d17feb42739ad06.jpg)
Figure 2. Ring-shaped region divided into $k$ segments.

To compute the sum, we need to express $x$ and $y$ in terms of $r$ (which is constant over the summation) and $\theta$ (which varies). From Figure 3, we see that

$$
x = \frac{r_{1} + r_{2}}{2} \cos\left(\frac{\theta_{1} + \theta_{2}}{2}\right), \quad y = \frac{r_{1} + r_{2}}{2} \sin\left(\frac{\theta_{1} + \theta_{2}}{2}\right).
$$

Since the speed $s$ at which a person can move is known, we can express $r_1$ and $r_2$ in terms of $t$:

$$
r_{1} = st, \quad r_{2} = s(t + \Delta t).
+$$ + +Rewriting $\Delta A$ in terms of $r_1, r_2,$ and $\Delta \theta$ gives: + +$$ +\Delta A = \frac {\pi (r _ {2}) ^ {2} \Delta \theta}{2 \pi} - \frac {\pi (r _ {1}) ^ {2} \Delta \theta}{2 \pi} = \frac {(r _ {2}) ^ {2} \Delta \theta}{2} - \frac {(r _ {1}) ^ {2} \Delta \theta}{2}. +$$ + +![](images/e3e038b4c80ed09b15e0173344dd573a90704752de55cf051d293f3a934617a9.jpg) +Figure 3. Coordinates of center point. + +Thus, (3) can be expressed in terms of $t$ , $\Delta t$ , $\Delta \theta$ , and $s$ . (Note that since the wall containing the door is always along the $y$ -axis, $\theta_i$ need only be incremented from $-\pi / 2$ to $\pi / 2$ .) Given these parameters, along with $P(x, y)$ , we find the value of $Q |_t^{t + \Delta t}$ for each value of $t$ . With these values in hand, we can use (2) to find the length of the exit queue versus time. Once the length of the queue has dropped below one, the room has been successfully evacuated. + +# More Complex Evacuation Patterns + +Because most situations involve multiple bottlenecks, we need to expand the above model for it to be practical. During normal use, occupants of a room are often distributed throughout one or more "reservoirs"—for instance, aisles in an auditorium, regions served by ladders in a swimming pool, sections of seating at a stadium, or tiers of bleachers in a gymnasium—that are separated from the rest of the room by bottlenecks. As soon as the alarm sounds, people begin to move into the first set of bottlenecks and queue at each one (a "feeder queue" of the main exit). As the first set of feeder queues clears, people move to the second bottleneck and enter the queue there. For instance, in an auditorium people must first leave their own aisles (the first level of feeder queues), proceed to the main exit, and wait in line there. 
Since a given group of people head only for one exit, each bottleneck has its own independent reservoir of people; the output rates from all the feeder queues can therefore be added to give the input rate of the final exit queue. The rate at which each feeder queue releases people is given by (2), but since it depends on the value of $L_{n-1}$, the entire set of iterations ((2) and (3)) used to calculate $L_n$ above must be evaluated for each feeder queue in the more complicated situation. Furthermore, the time for a person to move from the output point of a feeder queue to the main exit queue creates a delay; thus, the final Euler's method equation (which approximates $E_n$, the length of the exit queue at time $t_n$) will not depend on the output rate of each feeder queue at time $t_n$. Rather, for each feeder queue we use the rate at time $t_{n - \beta}$, where $\beta$ represents the number of Euler's method iterations that pass between the time a person leaves a feeder queue and the time she reaches the final exit queue. The value of $\beta$ for feeder queue $j$ is therefore given by

$$
\beta_{j} = \left\lceil \frac{\sqrt{x_{j}^{2} + y_{j}^{2}}}{s \Delta t} \right\rceil,
$$

where $(x_{j}, y_{j})$ is the location of the output point of feeder queue $j$, $\sqrt{x_j^2 + y_j^2}$ is the distance between the output point and the door, $s\Delta t$ is the distance that can be traveled in one iteration, and we take the ceiling (nearest integer at least as large) of the resulting value.

Since people leave the exit queue in the same way as they leave any bottleneck, we can define $\rho_{E}$, the rate at which people leave the final exit queue:

$$
\rho_{E} = \begin{cases} \dfrac{E_{n-1}}{\Delta t}, & \text{for } E_{n-1} < \lambda \Delta t; \\ \lambda, & \text{otherwise}. \end{cases}
+$$ + +Thus, the Euler's method approximation for $E_{n}$ , the length of the final exit queue at time $t_{n}$ with $m$ feeder queues, becomes + +$$ +E _ {n} = E _ {n - 1} + \left[ \left(\sum_ {j = 1} ^ {m} \rho_ {j - \beta}\right) - \rho_ {E} \right] \cdot \Delta t. \tag {5} +$$ + +As in the previous example, all of the iterative equations (2) to (4) must be evaluated in parallel to calculate values of $E_{n}$ vs. time, and the room is considered to be successfully evacuated after the line length drops below 1. + +Using a hypothetical square room $(10\mathrm{m}\times 10\mathrm{m})$ with $P(x,y) = 0.01$ everywhere within the room and $P(x,y) = 0$ everywhere else, $n = 100$ , $s = 2$ , and $\lambda = 2.8$ , we generated a graph of length of exit queue vs. time (Figure 4). We measured $s$ and $\lambda$ empirically for an average person and an average single-width door. Although we have no experimental or theoretical basis for assigning a maximum allowable exit time, we arbitrarily choose $30~\mathrm{s}$ . Since the line has almost disappeared after $30~\mathrm{s}$ , 100 people is just over the maximum capacity for this hypothetical room. + +# Applications of the Model + +# Case 1: A Lecture Hall + +Our model describes each row of seating in a lecture hall as a bottleneck, the aisle as an open space, and the exit as the final bottleneck. Since the queue at each row forms immediately after the fire alarm sounds and no people enter + +![](images/8a78fa0ddf65e9b04667d0e548058c83948a3467e7d8c6494cf6580e6f06d200.jpg) +Figure 4. Typical graph of line length vs. time. + +the queue thereafter, the value of $Q \mid_{t}^{t + \Delta t}$ is zero for every feeder queue, and the number of people in the room is given by the summation of $L_{0}$ for each feeder queue (in addition to any people on the stage). 
Lecture halls also typically have a plane of symmetry bisecting the seating and the stage, with each half served by its own exit; thus, only half of the room needs to be considered.

# Case 2: A Cafeteria

With the presence of many small obstacles (chairs, tables, etc.), occupants cannot move as fast through the "free" spaces. In other words, the value of $s$ is smaller in a room with movable furniture than in a room without such obstacles. A more thorough discussion of the effects of $s$ on the evacuation time is given in Sensitivity Analysis.

# Case 3: A Swimming Pool

For a swimming pool, we can assume that the only way out of the pool itself is up the ladders. In this case, the pool represents one feeder queue for each ladder, with each ladder serving a specific region of the pool. To account for people distributed outside of the pool in the event of an alarm, we simply add a $Q|_{t}^{t + \Delta t}$ term to the final exit Euler's method equation (5) to represent the rate at which these people enter the exit queue. Thus,

$$
E_{n} = E_{n-1} + \left( \sum_{j = 1}^{m} \rho_{j - \beta} + \frac{n_{\mathrm{outside}} \cdot Q|_{t}^{t + \Delta t}}{\Delta t} - \rho_{E} \right) \cdot \Delta t,
$$

where $n_{\mathrm{outside}}$ is the number of people outside the pool when the alarm sounds. Another complication is that people move much more slowly in the water; thus, a smaller $s$-value must be used inside the pool (in computing the value of $\rho_j$) than outside (in computing $\beta$-values and $Q|_t^{t + \Delta t}$ for the people outside). To incorporate people exiting the pool at places other than the ladders, we could add another term to the rate portion of (5).

# Case 4: An Indoor Arena

Since sports arenas usually exhibit some approximate radial symmetry, only one segment of the building (served by one final exit gate) must be considered.
Within each segment, there are usually several sets of seating sections, each served by one stairwell and one smaller gate leading to an aisle. These are the first set of bottlenecks. Within each section of seating, the aisle serves a number of different rows—these represent the second set of bottlenecks. Thus, a stadium presents a three-stage evacuation system: rows of seating lead to aisles, aisles pass through a small gate to an open stairwell, and several stairwells feed a main exit gate. A similar method could be used to evaluate the fire risks of any large building; however, the number of computations required rapidly becomes cumbersome. + +# Testing the Model: A Lecture Hall + +Although the lack of a definite maximum allowable exit time prevents us from properly testing our model, we applied it to a lecture hall at our school for which the established occupancy limit is 117. Only half of the lecture hall has to be considered, as each half of the room has an exit. From the blueprints, we measured the distance from the output point of each of the aisle lines to the exit (from which the program calculates each $\beta$ ). Then, using $\lambda = 1.4$ for the queue at each row (arbitrarily chosen to be half the value for a standard door), $\lambda = 2.8$ for the exit, $s = 2$ , and $\Delta t = 1$ s, we found that 117 people could be evacuated in about 30 s, confirming our arbitrary choice of 30 s for the maximum allowable time. + +# Sensitivity Analysis + +The major factors that needed to be accounted for are the rate $\lambda$ at which people can move through a queue, the speed $s$ at which they can walk, the + +size of the room, and the pdf $P(x,y)$ used to describe the occupants' initial locations. + +First, we looked at the effect of changes in $\lambda$ on the time required for various numbers of people to exit the particular room ( $4.5\mathrm{m} \times 6.5\mathrm{m}$ ). 
We iterated the functions until the length of the queue returned to zero, varying both the number of people and the value of $\lambda$ . (We found $\lambda \approx 2.8$ in our own trials.) Our results show that $\lambda$ has a greater effect when the number of people in the room is large (see Table 1). Therefore, we recommend installing double, or very wide, doors in facilities intended to be used by large numbers of people. + +Table 1. Evacuation time as a function of queue distribution $\lambda$ and number of people. + +
| n\λ | 1.00 | 1.25 | 2.00 | 2.80 | 4.00 |
| --- | --- | --- | --- | --- | --- |
| 10 | 3 | 3 | 3 | 3 | 3 |
| 20 | 4 | 4 | 3 | 3 | 3 |
| 30 | 7 | 4 | 4 | 4 | 4 |
| 40 | 9 | 6 | 5 | 4 | 4 |
| 50 | 11 | 7 | 6 | 5 | 4 |
| 60 | 14 | 8 | 8 | 6 | 5 |
| 70 | 16 | 9 | 9 | 6 | 5 |
| 80 | 18 | 10 | 10 | 7 | 6 |
| 90 | 21 | 11 | 11 | 8 | 6 |
| 100 | 23 | 13 | 12 | 9 | 7 |
Second, we looked at the effects of changing the speed of people's movement within the room. For a point of reference, in our experiments $s$ ranged from $1~\mathrm{m/s}$ (with obstacles) to $2~\mathrm{m/s}$ (a fast walk in a room with no obstacles). Increased speed does lead to a decreased evacuation time (see Table 2). Even so, our model does not incorporate the negative effects of haste, such as tripping over obstacles or being trampled by others. Therefore, we feel that a swift but moderate pace should be kept.

We also investigated the effects of increasing area on the evacuation time of a room. Not surprisingly, an increased area (and thus a longer distance to the exit) increases the evacuation time (see Table 3).

Finally, we considered the effects of different distributions of people throughout the room. Although we used a uniform distribution for all of our simulations so far, we decided that a normal distribution would make more sense in some cases. Redefining $P(x,y)$ as a bivariate normal pdf, with the hump of the curve in the center of the room, and truncating and renormalizing it over the domain of a $10~\mathrm{m}\times 10~\mathrm{m}$ room, we generated graphs of the length of the exit queue versus time for each distribution (Figure 5).

Using a distribution in which the majority of people are closer to the door (like the normal pdf used in Figure 5) decreases the time to evacuate.

Table 2. Evacuation time as a function of speed $(\mathrm{m/s})$ and number of people.
| n\s | 0.50 | 1.00 | 2.00 | 3.50 |
| --- | --- | --- | --- | --- |
| 10 | 3 | 2 | | |
| 20 | 5 | 3 | 2 | |
| 30 | 7 | 4 | 3 | |
| 40 | 10 | 7 | 4 | 3 |
| 50 | 12 | 7 | 5 | 4 |
| 60 | 14 | 7 | 6 | 4 |
| 70 | 14 | 8 | 6 | 5 |
| 80 | 14 | 8 | 7 | 5 |
| 90 | 14 | 9 | 8 | 6 |
| 100 | 14 | 10 | 9 | 7 |
+ +Table 3. Evacuation time as a function of (square) room size $(\mathrm{m}^2)$ and number of people. + +
| n\A | 25 | 100 | 400 | 2,500 | 10,000 |
| --- | --- | --- | --- | --- | --- |
| 10 | 4 | 6 | 11 | | |
| 20 | 8 | 7 | 12 | 25 | |
| 30 | 11 | 10 | 13 | 27 | |
| 40 | 14 | 13 | 15 | 29 | 51 |
| 50 | 17 | 16 | 18 | 30 | 53 |
| 60 | 20 | 19 | 20 | 31 | 54 |
| 70 | 24 | 22 | 24 | 31 | 55 |
| 80 | 27 | 25 | 26 | 32 | 57 |
| 90 | 30 | 29 | 29 | 34 | 59 |
| 100 | 33 | 32 | 32 | 36 | 59 |
+ +A noteworthy feature of all the graphs is that once the line grows to its maximum length, it decreases linearly thereafter. This implies that the rate at which the room can be evacuated depends primarily on the rate at which people can leave the room, since for the entire linear section (from $t = 5$ to $t = 30$ in Figure 5) the rate at which people are entering the queue at the door is close to zero and the rate at which they are leaving is close to $\lambda$ . This confirms our recommendation that rooms intended to hold large numbers of people should use doors as wide as possible. + +# Strengths and Weaknesses + +The greatest strength of our model lies in its flexibility. Without any fundamental changes, our model can be used on any of a number of different kinds of spaces in which maximum occupancy may be an issue, including auditoriums, pools, lecture halls, board rooms, classrooms, cafeterias, and gymnasiums. In addition, the model can be extended to circumstances in which the exits from the room lead only to an intermediary location, such as a hallway, which then + +![](images/1edaa1607d1524974b70f9613708c230f0836d31479773e8006e2ffd306861e9.jpg) +Figure 5. Effect of different pdfs on line length vs. time. + +leads into another exit out of the building. This flexibility also leads to one of the larger weaknesses of our model, that it can get overly complex. But, for the majority of situations, our model remains reasonably simple. Furthermore, because the value of $Q \mid_{t}^{t + \Delta t}$ can be computed numerically for any given $P(x,y)$ , a room of any size or shape and with any distribution of occupants can be considered. + +Another strength of our model is that the initial parameters and constants are flexible. Though we used values for $\lambda$ and $s$ that we determined experimentally, repeated experiments could be done with different audiences to determine more accurate values for these parameters. 
In addition, $\lambda$ and $s$ can be modified to include many other variables, such as the type of occupant a room usually holds (for instance, small children and adults have different maximum speeds) and door construction (which dictates $\lambda$).

A final strength of our model is its consideration of other factors. By picking the minimum of several maxima, we could determine the maximum occupancy of a room considering other factors such as room size, amount of personal space required by the people in the room, sanitation concerns, and weight capacity.

Our model also has several weaknesses. A sensitivity analysis is difficult to perform. As we were unable to find much data on many of the constants that were needed, we had to determine just a few values experimentally, which may not be representative.

A major weakness concerns our assumptions. In many cases, these may be large oversimplifications of the actual circumstances of evacuating a room or building (e.g., the assumption of a constant rate of leaving a queue). Furthermore, we have no reasonable basis for determining the maximum permissible time to evacuate a room; without this information, we cannot firmly establish the maximum occupancy of a room or building.

Another weakness of our method is the way in which we determine when everyone has left a room. In general, we do this by seeing when the length of the exit queue returns to zero; unfortunately, some people may still be reaching the door and immediately leaving the room, maintaining the length of the queue at zero but still requiring extra time to evacuate. However, this does not invalidate the overall model, and more sophisticated ways of determining when the room is empty could easily be applied.

# References

Allen, Arnold O. 1978. Probability, Statistics, and Queueing Theory. Orlando, FL: Academic Press.

Bartkovich, Kevin G., et al. 1996. Contemporary Calculus Through Applications. Dedham, MA: Janson Publications.
Mendenhall, William, Richard L. Scheaffer, and Dennis D. Wackerly. 1986. Mathematical Statistics with Applications. Boston, MA: Duxbury Press.

O'Brien-Atkins Architecture. 1995. Education Technology Complex Blueprints. March 24, 1995.

Press, William H., Brian P. Flannery, Saul A. Teukolsky, and William T. Vetterling. 1988. Numerical Recipes in C: The Art of Scientific Computing. New York: Cambridge University Press.

# Appendix A: Newspaper Article

A guy walks into a bar and the bartender says, "Hey, man, you can't come in here. We're at maximum capacity already." The man replies, "But there's plenty of room—what's the problem?"

This illustrates one of the most threatening evils plaguing our society today—apathy about maximum occupancy. People just don't realize the importance of being able to evacuate quickly from a facility in the event of an emergency. Sure, it may seem fun to see how many people can squeeze into a phone booth together, but what if the phone booth catches on fire? We won't even consider the possible catastrophes in a clown car.

Seriously, public safety is an important concern, especially when considering evacuation speed from a facility. During emergency situations, people are likely to panic or not think clearly, so extensive planning for the event of an emergency could save lives. Since businesses like restaurants may be more concerned with making money than with safety, a standard method of determining maximum safe occupancy would be helpful in enforcing this issue.

Obviously, one of the most important considerations during an evacuation procedure is that people tend to get backed up at places like doors and other types of exits. We were able to mathematically incorporate this buildup of people into a model to determine the maximum occupancy for many different public facilities. Using queueing theory and Euler's method, we looked at how long various numbers of people took to vacate a certain room.
Given a maximum evacuation time (say, thirty seconds or a minute), we can calculate the maximum safe number of people for that room.

However, additional factors must be considered in addition to evacuation speed. For instance, people need a certain amount of personal space. When people are crowded together for long periods of time, certain health hazards might result, especially in a restaurant or cafeteria. Also, many elevators have a maximum weight capacity, which has more to do with the strength of the cables than with evacuation. Looking at all these different factors enables us to find a reasonable and justifiable maximum capacity. Our model, in accurately representing reality, would be an invaluable tool in predicting the likely outcome of an emergency evacuation. Its ability to handle complex situations and its extreme flexibility make it ideal for practical use.

So, the next time you see a sign stating maximum safe occupancy, take the time to consider it carefully. If the number of people in the room exceeds the limit, you may want to consider immediately hurrying towards an exit. Just don't panic.

# Standing Room Only

Frederick D. Franzwa

Jonathan L. Matthews

James I. Meyer

Rose-Hulman Institute of Technology

Terre Haute, IN 47803

Advisor: Aaron D. Klebanoff

# Letter to the Editor

To the Editor:

Recently, the city has decided to review the current public safety ordinances pertaining to the capacity limits on buildings and public areas. Our team has been asked to reconsider the current ordinances and suggest modifications. Our recommendation will be discussed and voted upon at an upcoming city council meeting, which has stimulated public interest and discussion.

We began our review by defining the purpose of such regulations. We felt that the primary purpose is to preserve public safety. Limiting capacity for other reasons is unnecessary and is likely to promote disagreement and repeated requests for exceptions.
Threats to public safety take two forms: emergencies that require evacuation, and incidents within the venue that require access by police or rescue personnel.

Our analysis is based on statistical data on crowd motion in public areas. These data were then incorporated into a computer model that allowed us to investigate crowd behavior in many different situations, including general-purpose assemblies of various sizes, classrooms, lecture halls, cafes, and banquet halls, as well as outdoor events from small rallies and demonstrations to parades. This investigation was broad enough to encompass nearly every type of public event.

Only two simple regulations are needed to ensure public safety. First, there must be no more than 40 people for each exit in the facility. This rule assures that any room in a facility can be evacuated in one minute or less.

The second regulation is that each person must have at least 5 sq ft of floor space. This limitation ensures that small groups of individuals can move through a crowd in a reasonable time. Even if there are unlimited exits, a high density of people can be very dangerous; in the event of a personal emergency (a heart attack or a severe allergic reaction), rescue workers must have sufficient access to the area. These problems are compounded in the event of a riot or mob situation. We found that if the amount of space per person is much less than 5 sq ft, the difficulty of moving through the crowd increases drastically.

Overall, we feel this change to the regulations will be beneficial to everyone. It will maintain a high standard for public safety and is not unduly complicated or cumbersome. We urge anyone with questions or concerns to contact us. We would be happy to share any additional information or details of our analysis.
Sincerely,

The Room Capacity Assessment Team

# Reasons for a Legal Capacity Limit

Capacity limits might be considered from the standpoint of convenience and comfort. For example, an overcrowded concert hall can diminish the elegance of the facility, and likewise it is very frustrating to be stuck in line to leave a stadium with fewer exits than optimal. However, we feel that the only justification for a capacity limit is public safety, which falls into two basic categories:

- Emergency evacuation time: In the event of a fire or other emergency, it must be possible to empty the room completely and quickly.
- Mobility of small groups of individuals: Any time a large number of people are packed closely together, there is potential for a mob or riot situation. There is also the possibility of medical emergencies occurring in the crowd. Police, security personnel, and rescue workers must be able to reach any location in a timely manner.

We discuss special cases with unique considerations in the Appendix.

# Qualities of a Good Capacity Limit

A good capacity limit preserves public safety and is fair and defensible. The model must be easy to understand and implement, to reduce confusion and promote fairness. It is very likely that at times an organization will petition for special permission to hold an event, which may require that the regulation be defended in a courtroom, so the model must be well supported.

# Overview of the Approach

We created a model of the motion of individuals in a crowd and implemented it on a computer. This model was then used to observe the behavior of the crowd and test various capacity-limiting criteria. Test rooms of various sizes and layouts were constructed, and for each room several trials were run, each time increasing the density of individuals in the room.

We made two measurements on each room/crowd-density trial.
The first measured the time to empty the room completely, while the second counted the number of individuals whom a security guard is likely to encounter when moving from one side of the room to the other.

# Assumptions

- We are concerned only with safety factors. Any other factors need to be addressed by the owner but should not be taken into account in deciding whether the capacity is "unlawful." Customers of the facility will hold the venue owner responsible for comfort issues.
- The people have perfect knowledge of the state of the room at all times. This is reasonable because people do have a good sense of the state of the room.
- People try to get out of the room in the shortest possible time.
- People act in their own best interests, not necessarily in the best interests of the group.

# Crowd Escape Model

# Description

We simulated people leaving a room. We created rooms in an Excel spreadsheet and then imported the data into a program written in Visual C++. We represented the room as a grid of 1-ft squares, where every spot on the grid was an open space, an immovable object, or an exit. A specified number of people were placed into the room in random locations, with each person occupying a single 1-ft square.

In the simulation, people exit the room according to the Personal Movement Algorithm described below. The algorithm is executed for each person in the room and then repeated. Each iteration allows a person to move 1 ft. We measured average walking speed to be approximately $3\mathrm{ft/s}$, so each iteration represents $1/3$ s.

# Personal Movement Algorithm

- Find the best exit and move toward it, taking into account the size of the crowd waiting to use that exit and the distance to the exit.
- If your path is blocked, move in the next best direction. If no moves get you closer to the door, stay in place.

This algorithm is an appropriate model of people exiting a room.
People in the model initially head for an exit close to them; but if that exit becomes too congested and another exit clears up, they may head for the less busy exit (much like switching checkout lines at the grocery store). The second rule is reasonable because this is the best way for a person to minimize the time in line. There are two additional restrictions on moves:

- No person can move onto a square that was occupied by another person in the previous time step.
- Flow through each exit is limited to one person every 1.33 seconds.

To gain a qualitative sense of crowd behavior, we collected data by observing people leaving a movie at a local theater. We recorded how many people traveled through a double door per second and the density of the crowd passing through the door, as well as the speed and density in an open area. This information motivated the additional restrictions.

Without the first restriction, the model is unrealistic because people get much too close to one another. Our observations indicate that the density is much less than 1 person/ft² when people are moving; otherwise people would be walking on the heels of the person in front of them.

At the theater, we also noticed that people tend to move more slowly going through a doorway than in an open area. This necessitated the introduction of the second restriction to ensure a realistic flow rate through each exit.

# Description of Test Rooms

After defining how people leave the room, we constructed a set of test rooms. The characteristics of the room that could influence the exit time can be grouped into four basic categories (Table 1), plus special cases.

For each category, we constructed rooms of various sizes, composed of immovable objects (walls, desks, etc.), open space, and exits (Table 2).

# Exits

One exit represents an opening large enough for one person to walk through at a time. Single doors are represented by one exit.
Though three people can walk abreast through a set of double doors, our research at the movie theater shows that a crowd will move through such a doorway two at a time; so we model double doors as two adjacent single doors. + +Table 1. +Room classification. + +
| Category | Example |
|---|---|
| Limited exits | Parties, conventions, concerts, many indoor assemblies |
| Rows and aisles | Auditoriums, lecture halls, movie theaters, stadiums, inside sporting events |
| Unlimited exits | Outside sporting events, rallies, demonstrations, parades, air shows, some outdoor concerts |
| Movable objects | Banquets, cafeterias, restaurants |
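The grid encoding described earlier under the Crowd Escape Model (1-ft squares marked open, object, or exit) and the best-exit rule can be sketched as follows. This is an illustrative reading, not the team's Visual C++ code; in particular, the paper does not give the exact weighting of exit distance against queue size, so `queue_weight` and the 3-ft queue radius below are our assumptions.

```python
# Sketch of the room encoding and best-exit choice in the crowd escape
# model. The room is a grid of 1-ft squares: '.' open, '#' immovable
# object, 'E' exit. The distance-vs-congestion weighting is illustrative.

def parse_room(rows):
    """Return exit squares and walkable squares from a list of strings."""
    exits, open_sq = [], set()
    for y, row in enumerate(rows):
        for x, c in enumerate(row):
            if c == 'E':
                exits.append((x, y))
            if c in '.E':
                open_sq.add((x, y))
    return exits, open_sq

def best_exit(person, exits, people, queue_weight=2.0):
    """Pick the exit minimizing distance plus a congestion penalty.

    The penalty counts people within 3 ft of each exit, standing in for
    'the size of the crowd waiting to use that exit'."""
    def score(ex):
        dist = abs(ex[0] - person[0]) + abs(ex[1] - person[1])
        queue = sum(1 for p in people
                    if abs(p[0] - ex[0]) + abs(p[1] - ex[1]) <= 3)
        return dist + queue_weight * queue
    return min(exits, key=score)

room = ["#######",
        "E.....#",
        "#.....E",
        "#######"]
exits, open_sq = parse_room(room)
# With no crowd, the person at (5, 1) simply heads for the nearer exit.
print(best_exit((5, 1), exits, people=[]))
```

Crowding three people around the right-hand exit flips the choice to the farther but emptier left exit, which is the "switching checkout lines" behavior described above.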
+ +Table 2. +Test Rooms—Escape Model. + +
| Category | Name | Size (ft²) | Description |
|---|---|---|---|
| Limited exits | Small room | 144 | 1 exit |
| | Medium room | 800 | 3 exits |
| | Large room | 3,000 | 9 exits |
| | End exit | 800 | 3 exits, all at one end |
| Rows and aisles | G220 (classroom) | 380 | 2 exits with rows of tables |
| | E104 (lecture hall) | 1,182 | 4 exits with rows of seats |
| Unlimited exits | Soapbox speech | 229 | Small outside assembly |
| | Parade | 931 | Large outside assembly |
| Movable objects | Small cafe | 411 | 1 exit with tables and chairs |
| | Banquet hall | 1,498 | 3 exits with tables and chairs |
| Special cases | Airplane (normal deboarding) | 286 | 1 exit with rows and aisle |
| | Airplane (crash) | 286 | 6 exits with rows and aisle |
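The Personal Movement Algorithm and its two movement restrictions can be sketched as a single update step; at 1/3 s per tick, the 1.33-s exit limit becomes a four-tick cooldown. This is a minimal sketch under stated assumptions, not the original Visual C++ implementation; the data layout and tie-breaking order are our choices.

```python
# One time step (1/3 s, i.e. 1 ft of movement) of the escape simulation.
# Restrictions modeled: nobody may enter a square occupied in the previous
# step, and each exit passes at most one person every 1.33 s (4 ticks).

EXIT_COOLDOWN = 4  # ticks: 1.33 s at 1/3 s per tick

def dist(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def step(people, exit_pos, open_squares, exit_timer):
    """One tick: each person moves 1 ft toward the exit if possible.

    `people` is a list of (x, y) squares; `open_squares` holds the
    walkable squares including the exit. Returns (remaining, timer)."""
    prev = set(people)          # nobody may enter these squares this tick
    occupied = set(people)
    remaining = []
    exit_timer = max(0, exit_timer - 1)
    for p in people:
        occupied.discard(p)
        moves = [(p[0] + dx, p[1] + dy)
                 for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))]
        moves = [m for m in moves
                 if m in open_squares and m not in prev and m not in occupied]
        moves = [m for m in moves if dist(m, exit_pos) < dist(p, exit_pos)]
        if moves:
            target = min(moves, key=lambda m: dist(m, exit_pos))
            if target == exit_pos:
                if exit_timer == 0:   # exit throttled to 1 person / 1.33 s
                    exit_timer = EXIT_COOLDOWN
                    continue          # person leaves the room
                target = p            # exit busy: wait in place
            occupied.add(target)
            remaining.append(target)
        else:
            occupied.add(p)
            remaining.append(p)       # no closer move: stay in place
    return remaining, exit_timer

# Demo: two people in a 6-ft corridor with the exit at (0, 0).
open_sq = {(x, 0) for x in range(6)}
people, timer, ticks = [(2, 0), (4, 0)], 0, 0
while people:
    people, timer = step(people, (0, 0), open_sq, timer)
    ticks += 1
print(ticks, "ticks =", ticks / 3, "s")   # each tick is 1/3 s
```

The second person's exit is delayed by the cooldown, not by travel distance, which previews the overall finding that persons per exit dominates escape time.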
# Movable Objects

An object small enough to be stepped over should not be considered in the model. Other objects that can impede flow can be treated as immovable. For example, if a chair is in your way in a very crowded room, there is no place to move the chair because of the high density of people. On the other hand, if the density of people is low enough to allow the chair to be moved, you could instead simply use that space to move around the chair. Either way, you are delayed, whether by changing your path to go around the chair or by pushing it out of the way. Therefore, our model assumes that people move around movable objects.

# Test Room Results

Each test room was simulated with 10 trials, each trial increasing the number of individuals in the room, from $20\mathrm{ft}^2$/person to $2\mathrm{ft}^2$/person:

- Limited Exits: As expected, increasing the number of people in the room also increased the escape time. There was not a noticeable change when the exits were all placed at one end.
- Rows and Aisles: Rows and aisles did not seem to have a significant impact on the escape time. Like the open-room tests, people simply crowded up around the exits.
- Unlimited Exits: Escape time seemed to be limited by how far the individuals had to move.
- Movable Objects: Movable objects did not seem to have a significant impact.
- Special Cases: When exiting an aircraft, the use of emergency exit doors greatly expedites the evacuation process.

Overall, we found that

The ratio of persons to exits determines escape time.

Variations in the data can be explained by several factors. First, the size of the room does play a role. In large rooms, the exit time is slightly longer than for a small room, mainly because even if there are few people in the room, it is large enough that the exits are underutilized while people initially start moving towards them. This effect is best shown in the large room at low densities.
Also, the presence of additional objects does seem to slow progress slightly, as in the airplane example, where the number of available paths is very limited.

# Weaknesses

- Parameter values are debatable: We based our packing and flow-rate constants on a restricted data set. Our observations from the movie theater give us a handle on the situation, but more information is desirable.
- People don't slow for obstacles: When a person has to change path to avoid a chair, person, or other obstruction, the person does not slow down. This would become a major concern only when the limiting factor for room escape is the time to reach the exit, as in a large, sparsely populated area.
- People move too much: In real life, people realize that moving left and right will not get them any closer to the exit. In the model, people always try to get closer to the exit, making extraneous movements. These extra movements help to keep people spread out so that the crowd density doesn't get too high, but they do not seem to affect exit utilization.
- Abnormal rooms: There are certain room setups in which the Personal Movement Algorithm will not lead all persons to exits. For example, people will never move away from an exit to navigate around large obstacles. This can result in people becoming stuck within the room. This problem can be avoided by creating submodels: if a room contains an area in which people can become stuck, that portion can be modeled separately from the rest of the room. In this way, we can handle any possible room configuration.

# Strengths

- Movement models observed data: Our model is grounded in data from a situation very similar to the ones we are modeling. This gives higher confidence in the conclusions and promotes better applicability to actual situations.
- Movement looks realistic: The path of a single individual in the simulation follows the path that we would expect of a real person.
- Adaptable and robust: Our algorithm works on a large number of rooms and venues. Nearly any given room can be modeled.
- Expandable: It would be easy to incorporate more data into the model.

# Group Mobility Model

# Description

The group mobility model is used to determine the critical factors that predict the ability of a person (e.g., a security guard) to get to a specific location in a crowd. This model is a modified version of the crowd escape model. Five guards start on one side of a room filled with people who are moving in random directions, and the guards move across the room to the other side.

Instead of measuring the time required to cross the room, we count the number of people whom the guards encounter along the trip. An encounter occurs when any person occupies a grid location directly adjacent to the guard's path. "Encounters per foot traveled" measures how much the crowd impedes travel through the room.

# Description of Mobility Test Rooms

With these modifications to the computer simulation, we tried to correlate mobility with the size, shape, number of people, number of obstacles, and density of people in the room. This gave rise to the test rooms of Table 3.

Table 3. Test rooms—Group Mobility Model.
| Room name | Description |
|---|---|
| Small room | 144 ft² |
| Medium room | 800 ft² |
| Large room | 3,000 ft² |
| Cafe | 800 ft² with tables and chairs |
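The encounter measurement underlying the group mobility model reduces to walking a guard across the grid in a straight line and counting the distinct people occupying squares adjacent to the path. A minimal sketch; the 8-neighbor adjacency test and the one-count-per-person rule are our reading of "encounter":

```python
# Sketch of the group mobility measurement: a guard crosses the room in a
# straight line, and we count people occupying squares adjacent to the
# path. Each person is counted at most once.

def encounters_per_foot(path, people):
    """path: list of (x, y) grid squares along the guard's straight walk."""
    met = set()
    for (x, y) in path:
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                if (x + dx, y + dy) in people:
                    met.add((x + dx, y + dy))
    feet = len(path) - 1          # as-the-crow-flies distance in ft
    return len(met) / feet

# Guard walks straight across a 10-ft room at y = 2.
path = [(x, 2) for x in range(11)]
sparse = {(3, 4), (8, 0)}                          # far from the path
dense = {(2, 2), (3, 1), (5, 3), (7, 2), (9, 1)}   # strung along the path
print(encounters_per_foot(path, sparse), encounters_per_foot(path, dense))
```

Since the people adjacent to a fixed-width path grow in proportion to how tightly the room is packed, this measurement rises with crowd density, matching the trend reported in the next section.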
# Results of Mobility Test Rooms

For each room, we ran the simulation 50 times, 5 times each for 10 different densities. We counted the total number of encounters that the 5 guards made while crossing the room, then divided it by the distance that the guards traveled (as the crow flies) to obtain encounters per foot.

The data suggest that

The number of encounters per foot is directly proportional to the density.

None of the other four factors is a significant contributor. The number of people and size of room cannot be factors, since the four rooms (when they have the same density) all show the same flow impedance even though they are different sizes and have different numbers of people in them. The four rooms are not the same shape yet exhibit the same behavior under similar densities. Finally, the café has many small obstacles but follows the same trend. (Note: The density in the café was calculated as the number of people divided by the amount of open space.)

# Weaknesses

This model exhibits many of the same strengths and weaknesses as the previous model, with some additional characteristics.

- People move for guards: Usually, when people see a security guard coming, they tend to move out of the way. Our simulation assumes that people never move out of the way—a worst-case scenario. However, the simulation does represent how easy it is for an ordinary person to get out of the room in case of an allergic reaction or other pressing issue.
- Densities vary throughout crowds: Chances are that people will not be evenly spread out. They could be clumped around the place the guard needs to go, creating further difficulty. Our model does not attempt to simulate this sort of behavior.

# Strengths

The strengths of this model that pertain solely to the changes we made to the basic model are:

- Multiple runs: We ran each simulation 5 times with randomized starting positions for the people and came up with data that were highly correlated.
This implies that our model is not sensitive to small changes.
- Stability: Our model is not sensitive to relatively large changes in any aspect except the density of people in the room.

# Proposal and Conclusion

We feel two limitations should be placed on the number of people in a particular room:

- Persons per exit: The overwhelming limitation to the escape time is the number of people per exit. If the maximum escape time is to be $1\mathrm{min}$, then there should be no more than 40 people per exit. This requirement will ensure that the room can be evacuated in a reasonable amount of time.
- Area per person: To maintain good accessibility by security personnel, each person should have at least $5\mathrm{ft}^2$ of floor space; tables, chairs, and other obstructions must be subtracted from the gross floor space of the room.

Based on our conclusions, we present this formal proposal:

The Maximum Occupancy for said room shall hereby be determined as the lesser of the two following quantities: the number of existing exits multiplied by a factor of 40, and the entire square footage of the room that is deemed usable for walking divided by a factor of 5.

This requirement possesses all the qualities of a good model as specified in the introduction. It ensures public safety by being based on both a maximum time to clear the room and a maximum resistance to movement through the crowd. It also is simple, requiring only two easy calculations, and defensible, since we have shown above that any other consideration for deciding maximum capacity is negligible.

The specific values for persons per exit and area per person might need to be adjusted depending on the particular situation. For example, if a room is deep within a building, it may need to be evacuated in 30 sec to ensure that the occupants get outside quickly enough.
Also, if there is a known hazard in the room, the occupants may need to be able to evacuate the room even faster. The safe density of people could be different for a room depending on whether it is used for rock concerts or for basketball games.

An additional desirable restriction would be that every spot in a room should be within a certain number of feet of an exit. This restriction is especially relevant in large rooms where the capacity is low, since neither of the limitations we recommend takes this situation into account.

# Appendix: Special Cases

# Elevators

There would be few evacuation problems with a well-functioning elevator, due to the small size of an elevator and the close proximity of all spaces in the elevator to the door. The primary capacity-related safety consideration for an elevator would be exceeding its weight limit. Access by emergency personnel wouldn't present a problem, either, because the entire elevator could be quickly evacuated to make room for the personnel.

Because of the safety measures in modern elevators, a broken elevator is rather stable. When an elevator is stuck between floors, the primary time concern would be accessing the elevator compartment rather than removing individuals once the elevator has been accessed. There would probably be very little difference between removing one person from a stopped elevator and removing many.

# Pools

The two models can be made to apply to pools by applying them first to the pool itself, using different speed data than for walking individuals. The standard model would then be applied to the outer rim area, with the actual pool area treated as an immovable object.

A pool has other considerations that should be regulated. The maximum occupancy of a pool should be limited in proportion to the number of lifeguards on duty.
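The capacity rule in the formal proposal above reduces to two arithmetic checks. A direct transcription; the function name is ours, and the optional evacuation-time argument is our way of expressing the stricter 30-second case discussed earlier (it simply scales the per-exit allowance proportionally):

```python
# The proposed occupancy rule: the lesser of 40 people per exit and one
# person per 5 sq ft of usable (walkable) floor space. The
# evacuation_time parameter, in seconds, is our extension for rooms that
# must clear faster than the baseline 60 s.

def max_occupancy(num_exits, usable_area_sqft, evacuation_time=60.0):
    per_exit = int(num_exits * 40 * (evacuation_time / 60.0))
    per_area = int(usable_area_sqft / 5)
    return min(per_exit, per_area)

# The 800-sq-ft medium test room with 3 exits: exits bind (120 < 160).
print(max_occupancy(3, 800))          # 120
# The same room forced to evacuate in 30 s: per-exit allowance halves.
print(max_occupancy(3, 800, 30.0))    # 60
```

Both calculations can be done by an inspector with a tape measure and a head count, which is the simplicity the proposal claims for itself.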
# Acknowledgment

Special thanks to the General Manager of the local Keresotes Theaters for permitting access to theaters to collect data on the motion of patrons.

# Room Capacity Analysis Using a Pair of Evacuation Models

Gregg A. Christopher

Orion Lawler

Jason Tedor

University of Alaska Fairbanks

Fairbanks, AK 99775-6660

Advisor: Chris Hartman

# Introduction

We present two models for determining the amount of time for a given number of people to evacuate a given room. A room's maximum capacity can be derived from this by imposing a maximum evacuation time, which must take into account factors such as the fire resistance of the room.

We develop a graph-based network flow simulation. People are modeled as a compressible fluid that flows toward and out of the exits. In this model, people's interaction properties are assumed directly, based on industry research.

We also develop a discrete particle simulation. People are modeled as disks that attempt to reach the exits. In this model, people's interaction properties emerge from local, per-person assumptions.

We compare and evaluate the models' outputs and analyze the capacity of several local rooms.

# Graph Flow Model

# Overview

The graph flow model is a pool-flux model that operates on a graph representing areas of open space within a room. The graph consists of a set of nodes $N$ and a set of directed edges $E$. Each node's value is the number of people in a square patch of floor. Each edge represents the direction of traffic flow from one node to another node.

The ability of occupants to exit a node is constrained by the congestion in the node and the bandwidth of the edge leading to another node. Bandwidth represents the rate at which people can move between nodes. A higher bandwidth is used when there are no obstacles or doors between nodes, and a lower bandwidth is used when an exit constricts flow.
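The node-and-edge structure just described can be sketched as follows; the congestion-dependent walking speed and per-edge bandwidth are defined formally in the mathematical-structure section. The speed constants below are illustrative placeholders, not the paper's calibrated values; only the 0.541 persons/ft bandwidth is taken from the paper.

```python
# Sketch of the graph flow structure: nodes hold people on a patch of
# floor; directed edges carry flow limited by walking speed (a function
# of congestion) and edge bandwidth. Speed constants are illustrative
# placeholders, not the paper's calibrated parameter values.
import math

class Node:
    def __init__(self, area, people):
        self.area = area        # ft^2
        self.people = people    # persons (fractional values allowed)
        self.edges = []         # nodes this node may exit to

def walking_speed(node, s_alpha=2.0, s_beta=0.5, s_min=0.5):
    """Speed falls as the node grows more congested (less area/person)."""
    if node.people == 0:
        return s_alpha
    return max(s_alpha + s_beta * math.log(node.area / node.people), s_min)

def outflow(src, bandwidth, dt=1.0):
    """Desired outflow over one time step: dt * speed * bandwidth."""
    return dt * walking_speed(src) * bandwidth

sparse = Node(area=100.0, people=5.0)    # 20 ft^2 per person
packed = Node(area=100.0, people=80.0)   # 1.25 ft^2 per person
print(outflow(sparse, bandwidth=0.541) > outflow(packed, bandwidth=0.541))
```

The logarithmic speed law and the floor at `s_min` mirror the flux capacity equation for $S_i$ given later; congested nodes pass fewer people per step, which is what makes exits the bottleneck.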
+ +The number of occupants who enter a node is constrained by the number of people leaving the node and the tendency of a node to pack tighter. This tendency is referred to as fill rate. + +Because there are interdependent relations, each time step of the model is calculated in a cascading pattern from the exits. After the flow rate out of a node is calculated, it becomes possible to calculate flow into the node. By this method the flow rate calculations can be determined for the entire graph. + +# Assumptions + +1. All people are aware of the emergency and attempt to exit. +2. People move only toward a single exit. +3. People are safe, and removed from the simulation, when they reach an exit node. +4. People in crowds move at a speed determined by the density of the crowd. +5. People's movement is restricted by the width of the area that they are trying to move over. +6. People will move to the exit as quickly as possible, without regard to the effect on crowd density. +7. The increase in crowd density over time is limited. +8. People are treated as continuous populations, allowing for fractions. + +# Weaknesses of the Assumptions + +Assumption 1 is not completely supported by the literature—not everyone is aware of or willing to leave during a real emergency. + +Assumption 2 implies that people pick a single exit and head for it. In reality, people might observe that an exit is less congested and choose that exit as their new target. This assumption precludes the existence of barriers directing traffic flow or preset fire escape routes within a room. + +Assumption 3 ignores the exit discharge capacity. In reality, the number of people leaving by an exit will affect the total evacuation time for a building. With this assumption, the model is limited to a single room. + +Assumptions 4 and 5 are based on literature describing pedestrian movement in a transportation terminal [Benz 1986]. 
Movement in a transportation + +terminal is not an escape or evacuation situation, so it could involve different dynamics. + +Assumption 6 does not take into account human intelligence or the possible presence of authorities regulating traffic flow. + +Assumption 7 was included to reduce the tendency of nodes to fill from empty to maximum capacity in a very short period of time. + +Assumption 8 can create situations that are contrary to reality. A person cannot split into fractional parts and flow in two different directions. This assumption is loosely justified by the fact that people can be partially across the boundary of two nodes. This assumption is required by the model mechanics when using small time steps. + +# Mathematical Structure of the Graph Flow Model + +# Graph Structure: + +$$ +\begin{array}{l} N _ {i} = \mathrm {g r a p h n o d e} i, \mathrm {r e p r e s e n t i n g a p a t c h o f f l o o r s p a c e} \\ E _ {i} = \text {e x i t s :} N _ {i} \text {m a y e x i t t o} \\ I _ {i} = \text {i n p u t s :} \text {t h e s e t o f a l l n o d e s t h a t e x i t t o N} _ {i} \\ \end{array} +$$ + +# Spatial Values: + +$$ +\begin{array}{l} P _ {i} = \text {n u m b e r o f p e o l e a t N} _ {i} [ \text {p e r s o n s} ] \\ A _ {i} = \mathrm {a r e a o f} N _ {i} [ \mathrm {f t} ^ {2} ] \\ \end{array} +$$ + +# Constants: + +$$ +\begin{array}{l} W _ {i j} = \text {b a n d w i d t h : f l o w r a t e f r o m N} _ {i} \text {t o N} _ {j} [ \text {p e r s o n s / f t} ] (\text {p a r a m e t e r i z e d}) \\ W _ {i j} = 0. 5 4 1 \text {b e t w e e n t w o n o d e s [ p e r s o n s / f t ]} \\ W _ {i j} = 0. 3 2 5 \mathrm {t o a n e x i t n o d e [ p e r s o n s / f t ]} \\ s ^ {\alpha} = \text {b a s e m o v e m e n t c o n s t a n t f o r} S _ {i} [ 5 8. 6 7 8, \text {d i m e n s i o n l e s s} ] \\ s ^ {\beta} = \text {m o v e m e n t m u l t i p l i e r c o n s t a n t f o r} S _ {i} [ 5 8. 
6 6 9, \text {d i m e n s i o n l e s s} ] \\ s _ {\min } = \text {m i n i m u m} \\ T = \text {m a x i m u m (t e r n i n a l) c o m p r e s s i o n o f a n a r e a} [ 3 \text {p e o l e / f t} ^ {2} ] \\ r ^ {\alpha} = \text {f i l l r a t e c o n s t a n t [ 4 . 3 3 3 f t / s ] (p a r a m e t r i z e d)} \\ t = \text {t h e t i m e s t e p o f t h e m o d e l} [ 1 \mathrm {s} ] (\text {p a r a m e t r i z e d}) \\ \end{array} +$$ + +Parameter values are derived from industry research. We omit the derivations for brevity. + +# Derived Constants: + +$$ +A _ {i} T = \text {m a x i m u m o c c u p a n c y o f n o d e [ p e r s o n s ]} +$$ + +# Flux Capacity Equations + +Let $S_{i}$ denote the walking speed inside a node due to congestion [ft/sec]: + +$$ +S _ {i} = S (P _ {i}, A _ {i}) = \max \left[ s ^ {\alpha} + s ^ {\beta} \ln \left(\frac {A _ {i}}{P _ {i}}\right), S _ {\mathrm {m i n}} \right]. +$$ + +Let $\mathrm{FR}_i$ denote the fill rate: the maximum number of people who can be added to $N_{i}$ over time $t$ [persons] + +$$ +\mathrm {F R} _ {i} = \mathrm {F R} (N _ {i}, t) = r ^ {\alpha} t \frac {A _ {i} T - P _ {i}}{A _ {i} T}. +$$ + +Let $\mathrm{OF}_{ij}$ be the desired (maximal) outflow: the number of people capable of moving out of $N_{i}$ into $N_{j}$ [persons] + +$$ +\mathrm {O F} _ {i j} = \mathrm {O F} (N _ {i}, N _ {j}, t) = t S _ {i} W _ {i j}. +$$ + +Let $\mathrm{IF}_i$ be the maximum inflow: the number of people who can enter a node from any direction in $t$ [persons]. Note that $\mathrm{IF}_i$ cannot be calculated until $\mathrm{FFA}_{iEj}$ is calculated for all $N_j$ in $E_i$ . + +$$ +\mathrm {I F} _ {i} = \mathrm {I F} (N _ {i}, t) = \sum_ {E _ {i}} \mathrm {F F A} _ {i E _ {i}} + \mathrm {F R} _ {i}. +$$ + +Let $\mathrm{FF}_{ij}$ be the final flow: the number of people capable of moving from $N_{i}$ to $N_{j}$ that $N_{j}$ is capable of accepting [persons]. Note that $\mathrm{FF}_{ij}$ cannot be calculated until $\mathrm{IF}_j$ is known. 
$$
\mathrm{FF}_{ij} = \mathrm{FF}(N_i, N_j, t) = \left\{ \begin{array}{ll} \mathrm{OF}_{ij}, & \text{if } N_j \text{ is an exit;} \\ \mathrm{IF}_j \dfrac{\mathrm{OF}_{ij}}{\sum_{N_k \in I_j} \mathrm{OF}_{kj}}, & \text{if } \mathrm{IF}_j < \sum_{N_k \in I_j} \mathrm{OF}_{kj}; \\ \mathrm{OF}_{ij}, & \text{if } \mathrm{IF}_j \geq \sum_{N_k \in I_j} \mathrm{OF}_{kj}. \end{array} \right.
$$

Let $\mathrm{FFA}_{ij}$ denote the actual number of people who move from $N_i$ to $N_j$. Note that $\mathrm{FFA}_{ij}$ cannot be calculated until $\mathrm{FF}_{ik}$ is calculated for all $N_k$ in $E_i$:

$$
\mathrm{FFA}_{ij} = \mathrm{FFA}(N_i, N_j, t) = \left\{ \begin{array}{ll} \mathrm{FF}_{ij}, & \text{if } P_i \geq \sum_{N_k \in E_i} \mathrm{FF}_{ik}; \\ P_i \dfrac{\mathrm{FF}_{ij}}{\sum_{N_k \in E_i} \mathrm{FF}_{ik}}, & \text{otherwise}. \end{array} \right.
$$

This is a pool-flux model. The $P_i$ are pools, and $\mathrm{FFA}_{ij}$ is the only flux that is ever applied to a pool.

# Development of the Flow Functions

For constant $t$ and $W_{ij}$, $\mathrm{OF}_{ij}$ is linearly proportional to the walking speed due to congestion ($S_i$). For constant $t$ and $S_i$, $\mathrm{OF}_{ij}$ is linearly proportional to the bandwidth ($W_{ij}$).

Because $\mathrm{IF}_i$ is a function of the actual final flows out of a node, it cannot be calculated until those flows have been calculated first; they in turn are a function of $\mathrm{IF}$ for the nodes that $N_i$ flows into. Because of this dependency, the node graph must be acyclic. If the graph contained a cycle, $\mathrm{IF}_i$ could not be calculated for any node in the cycle, because each would depend on $\mathrm{IF}$ for another node in the cycle.
$\mathrm{IF}_i$ is then the total number of people who flow out of a node plus the fill rate for that node.

The final flow $\mathrm{FF}_{ij}$ (the number of people capable of moving from $N_i$ to $N_j$ that $N_j$ is capable of accepting [persons]) is a function of $\mathrm{IF}_j$, unless $N_j$ is an exit.

The relationship between final flow and actual final flow is straightforward. Final flow calculates the number of people who can flow out of a node. If the total final flow exceeds the population of the node, that population is divided among the available final flows in proportion to their size; otherwise, actual final flow equals final flow.

# Particle Simulation Model

# Overview

The particle simulation models humans one at a time as discrete, independent entities, instead of treating a flow of people as an undifferentiated group.

The simulation begins with a single, 2-D room at the start of an emergency. People in the room each choose a visible nearby exit and walk toward it. People navigate obstacles such as furniture and, if crowded together, interact with one another. The simulation continues until everyone has reached an exit.

Individual humans (especially during an emergency) are concerned primarily with getting to an exit, greedily maximizing their own chance of survival. This model thus operates on a local level, allowing the overall global properties (such as total exit time and walking speed vs. congestion) to emerge.

# Assumptions

Although human behavior is in general very complex, the modeling task is substantially simplified in a crowd during an emergency. Still, the primary weaknesses of this model lie in its restrictive and somewhat arbitrary assumptions.

1. All humans are aware of the emergency, and all attempt to exit.
2. People pick exits based on congestion (number of people near that exit), distance, and visibility—people cannot see through walls. Occasionally, people check for a better exit.

3.
People are safe, and removed from the simulation, when they reach an exit.
4. People walk at 4 ft/s.
5. People may change direction and speed instantly.
6. If a person's intended path would pass through another person, that person stops and tries to go in some other direction.
7. People cannot walk through walls or furniture. For these purposes, people are treated as disks.
8. People plan a path around furniture to reach an exit.

# Weaknesses of the Assumptions

Assumption 1 is not completely supported by the literature—not everyone is aware of or willing to leave during a real emergency.

Assumption 2 is more restrictive than reality—humans remember the location of out-of-view exits, and often "follow the crowd" to an exit that they can't see.

Assumption 3 neglects the finite person-handling capacity of many exits (e.g., narrow stairwells).

Assumption 4 neglects the very young, old, or handicapped, who may move more slowly, as well as the panic-stricken, who move more quickly.

Assumption 5 is contrary to basic physical principles, but significantly simplifies interactions.

Assumption 6 neglects people's sophisticated path planning, which allows us (usually!) to avoid walking into each other without stopping.

Assumption 7 treats people as hard, inelastic 2-D disks.

Assumption 8 neglects the panic-stricken, who may in fact run directly into furniture.

# Example

Despite their disadvantages, these assumptions produce behavior that is remarkably crowd-like and consistent with research data.

We simulated 400 people leaving a $110\mathrm{ft} \times 120\mathrm{ft}$ gymnasium, with the people initially distributed uniformly across the room. Moving independently, people quickly form groups near the exit. As people near the exit flow out, the groups shuffle around to bring more people to the exit.
The model produces loose clumping around the exits, a natural result of people's desire to go towards the exit but with aversion to running into one another. + +# Human Interaction in Crowds + +The overall result emerges only from our single assumption about how people interact: If your intended path will intersect another person, stop and try another (random) direction. We analyzed other potential ways for people to interact, but they produced decidedly non-human behavior. + +We would have preferred to pick a deterministic interaction, because we would rather not have the results of our model change with each execution of the model. For each deterministic interaction we considered (e.g. if your path will intersect someone, go around them to your right), we could always find cases that created a circular-wait condition. This situation, in which object A waits for object B, who in turn waits on object A, is known to computer scientists as deadlock. + +We can make the model give us the same results each time by using a pseudorandom (deterministic, but uniformly distributed and statistically uncorrelated) number generator to pick directions. Thus our randomized interaction scheme runs the same way each time, yet produces behavior that is reasonably similar to that of actual people—for example, they don't deadlock. + +# Test Scenarios and Model Validation + +We used both the particle model and the graph flow model to evaluate exit time from test rooms. Each test room has one exit. One test room is $15\mathrm{ft} \times 15\mathrm{ft}$ and has a 3-ft-wide exit in the center of the left wall. Each model was run repeatedly, using a different occupancy for each run. + +The graph flow model was applied to a space that was equal in size to the particle flow model space. + +After both models were executed repeatedly for different room occupancies, we obtained the results of Figure 1. 
The results of both models (for this room and for others not shown here) appear to be very nearly linear for the rooms tested. However, the slope of the nearest linear approximation of each model differs. Since both models are driven by arbitrary parameters, specifically bandwidth for the graph-flow model and person radius for the particle model, it is not surprising that this difference exists.

We consider it significant that both models display similar trends. Each model was derived from an independent set of driving assumptions and data, but the behavior trends of the models are strongly correlated. This reflects positively on both models.

![](images/1ded7d32fcea49c6278fff163ab1fc04c4a4dd49f3baa2ecb0d8fe559d3cd46d.jpg)
Figure 1. Results of simulations of the two models, for a $15\mathrm{ft}\times 15\mathrm{ft}$ room.

# Strengths/Weaknesses

The graph-flow model has several weaknesses. It treats people not as indivisible entities but as a fluid. Its results depend on an arbitrary choice of bandwidth, as well as on the source graph, and we did not address the problem of building this graph.

Human behavior in the graph-flow model is deterministic, but much of the mathematical structure of the graph-flow model is driven by actual research.

The particle simulator model has several weaknesses. Its results are a function of an arbitrary choice of radius. Its decisions are nondeterministic, so they can vary significantly for tiny input changes. The model is also occasionally subject to pathological, non-human behavior—for example, people occasionally lose sight of a nearby exit and travel a long distance to a visible exit.

The particle simulator model, however, also has several advantages. It models people as individual, indivisible entities. People can move independently of their neighbors. No assumptions need be made about the global flow in the room.

# Conclusion

We have presented two models for determining how long it takes to evacuate a room.
Despite their very different approaches and assumptions, both models substantially agree on our test cases.

Based on our test cases and the analysis of several local rooms, we find that a graph of time-to-exit vs. initial population is nearly linear. The actual slope of the line depends on the layout of the room and the size and number of exits.

We expected the exit rate to decrease as more people tried to pack into the exits, but the actual exit rate (for both models) remains constant. We attribute this to the fact that exits become congested very quickly, even if only a few dozen people are attempting to exit. This is in agreement with our experience—it doesn't take many people (under a dozen) to block an exit.

The posted maximum occupancies of the local buildings that we simulated were adequate. At maximum occupancy, everyone evacuated in under $3\mathrm{min}$, an acceptable time [Life Safety Code 1997, Section A-21-1.3].

To determine the maximum occupancy of a room, we suggest first consulting a fire marshal to determine the maximum acceptable time for evacuation. Then use the simulator to find the largest number of people who can escape in less than the maximum time. This is easy because the function relating the number of occupants to evacuation time is nearly linear.

# References

Benz, Gregory P. 1986. Application of the time-space concept to a transportation terminal waiting and circulation area. In Transit Terminals: Planning and Design Elements, Transportation Research Record 1054. Washington, DC: Transportation Research Board, National Research Council.

Egan, M. David. 1978. Concepts in Building Fire Safety. New York: John Wiley and Sons.

Life Safety Code. 1997. Quincy, MA: National Fire Protection Association.

The Uniform Fire Code. 1991. International Conference of Building Officials, Western Fire Chiefs Association. International Fire Code Institute.
# Judge's Commentary: The Outstanding Lawful Capacity Papers

Jerrold R. Griggs

Department of Mathematics

University of South Carolina

Columbia, SC 29208

griggs@math.sc.edu

homepage: http://www.math.sc.edu/~griggs/

# Introduction

Judging the Lawful Capacity Problem in this year's contest was an enjoyable experience because of the diverse approaches taken, and the Outstanding papers published here display a truly wide range of modeling approaches. We leave it to the reader to decide which is best!

One nice approach is to use a graphical/network flow model. One paper employs a series of queues to handle the bottlenecks. Another model tiles the room with one-person-sized hexagons and calculates the expected waiting times for each. There is a sophisticated motion simulation model that represents people by disks that naturally flow around obstacles towards exits.

We judges had a tough job selecting the Outstanding papers—one of my favorites didn't get selected! Here are some of the things that we looked for.

Many teams took an overly simplified approach to determine appropriate room capacity restrictions. This basic approach works as follows. Determine

- an exit flow rate of $r$ people per second per exit,
- the number of exits $n$ in the room, and
- the number of seconds $s$ to clear the room safely;

then obtain a room capacity of $rns$ people.

However, the best papers, including the Outstanding ones published here, allow for a range of significant factors to be included in the model. Among these, they consider the flow of people through the room—not just at the exit—as well as crowd congestion due to bottlenecks created by the room shape and furniture placement. A strong model allows a variable initial distribution of people in the room. Judges were impressed by models that permit people to react to crowding at a nearby exit by switching towards a less crowded, though more distant, exit.
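The simplistic rule described above amounts to a single multiplication. A sketch with invented numbers (the rate, exit count, and clearance time below are purely illustrative, not taken from any paper) shows the kind of answer it produces:

```python
def naive_capacity(rate_per_exit, num_exits, seconds_to_clear):
    """Simplistic room capacity: r people/s per exit * n exits * s seconds."""
    return rate_per_exit * num_exits * seconds_to_clear

# Illustrative values only: r = 1 person/s per exit,
# n = 2 exits, s = 180 s (3 min) to clear the room.
print(naive_capacity(1, 2, 180))  # prints 360
```

The one-line calculation is exactly why the judges considered this approach oversimplified: it ignores room shape, furniture, congestion, and where people start.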
Many entries omit several of the elements requested in the problem; while very few papers manage to cover all of these points, the best ones all come close. These include:

- considering different room arrangements and environments,
- comparing models to posted requirements or codes,
- discussing criteria other than safety, and
- writing an explanatory article suitable for the newspaper.

Research on the problem impressed the judges, such as consulting existing codes or gathering data directly from crowd observations. One paper even considers capacity reductions mandated by the Americans with Disabilities Act!

Some papers present easily understood graphs of exit time as a function of the number of people in the room; such displays make it easy for the decision maker.

It was nice to see analysis of model run-time complexity, which is important in dealing with very large or complicated arrangements, along with improvements made by simplifying calculations.

Papers stood out that consider factors that could be included in a more elaborate model, such as crowd panic, accessibility of emergency personnel, ventilation, and crowd flow out of the entire building.

# Advice

We conclude by giving advice to future entrants by listing some general tips that the judges feel are applicable to any contest problem.

- Teams should attempt to address all major issues in the problem. Projects missing several elements are eliminated quickly.
- A thorough, informative summary is essential. Papers that are strong otherwise are often eliminated in early judging rounds due to weak summaries. Don't merely restate the problem in the summary, but indicate how it is being modeled and what was learned from the model. The summary should not be overly technical.
- Develop a model that people can use! The model should be easy to follow.
While an occasional "snow job" makes it through the judges, we generally abhor a morass of variables and equations that can't be fathomed. Well-chosen examples enhance the readability of a paper. It is best to walk the reader through any algorithm that is presented; too often papers include only computer code or pseudocode for an algorithm without sufficient explanation of why and how it works.
- Supporting information is important. Figures, tables, and illustrations are very helpful in selling your model. A complete list of references is essential—document where your ideas come from.

# About the Author

Jerry Griggs is a graduate of Pomona College and MIT, where he earned his Ph.D. in 1977. Since 1981, he has been at the University of South Carolina, where he is Professor of Mathematics and a member of the Industrial Mathematics Institute. He received the university's 1999 award for research in science and engineering.

His research area is combinatorics and graph theory, both fundamental theory and applications to database security, communications, and biology. He has published more than 60 papers and supervised 11 doctoral and 9 master's students. He serves on the Board of the Mathematics Foundation of America, which oversees the Canada/USA Mathcamp. He has been an MCM judge since 1988.

# Practitioner's Commentary: The Outstanding Lawful Capacity Papers: The Answer Is Not the Solution

Richard Hewitt, Ph.D.

U S WEST Communications Performance Systems

1801 California St., Suite 4595

Denver, CO 80202

rhewitt@uswest.com

# Introduction

I would first like to thank The UMAP Journal for inviting me to write the practitioner's article for this year's Mathematical Contest in Modeling. The topic is of interest to me in that I have some direct experience solving problems and implementing solutions in the public safety arena.
My hands-on training in this area includes spending time in burning buildings, cutting people out of wrecked cars, and, unfortunately, putting a few people in body bags.

As a result of these experiences, I developed sufficient knowledge and credibility to develop a method that the Denver Fire Department uses for selecting sites for new fire stations. The station-siting problem that I addressed is similar to your contest problem in that my mathematical training enabled me to find the correct answer, but focusing exclusively on a mathematical solution moved me further away from a solution that could be implemented. As the practitioner commentator for this year's Contest in Modeling, let me suggest that in pursuing a mathematical solution to the maximum occupancy problem, you may have moved away from a solution that could be implemented.

Let me outline what I plan to discuss:

- I want to congratulate not only the five winning teams whose papers are published in this issue but also the 202 other teams whose papers were not selected for publication.
- I would like to discuss some of the common assumptions made in solving the maximum occupancy problem.
- I would like to challenge the contest participants and readers to elevate their problem-solving skills by focusing on what's required to get a solution implemented. Hence my title, "The Answer Is Not the Solution."
- Finally, I would like to offer some suggestions regarding how to effectively communicate technical information to the public.

# Background (Mine)

To accomplish these objectives, let me provide a bit of personal background, which should help the reader interpret my comments. I hold three graduate degrees in quantitative/analytical fields: a Ph.D. in operations research, a master's degree in mathematics, and a master's degree in economics.
I mention these academic credentials not to impress you but to impress upon you that my academic training only helped me begin to solve "real-world problems"—and in the initial stages of my career, it actually got in the way.

I might have successfully convinced you that the training required to complete three graduate degrees would enable me to solve virtually any quantitative or analytic problem. I used to believe that myself, until I left the safety of academia. What I quickly found out is that in the real world, all bets are off. There are no pure analytic problems. There are no standalone quantitative problems. There are quantitative problems embedded in political problems. There are analytic problems embedded in economic problems. There are technical problems encrusted in hidden agendas, with stakeholders you don't even know exist, much less their agendas, interrelationships, or side agreements. I'm not saying these things to discourage you, but rather to let you know up front that they do exist—and to let you know that these problems are much more fun to solve than the textbook problems that you encounter in school. I also want to let you know that if you can solve these types of problems (and based on what I've seen, you can), you can solve just about any problem.

# Congratulations

I want to congratulate all those who participated in this year's contest. To dedicate an entire weekend to a single problem, particularly one foreign to your prior problem-solving experience, is commendable. Further, to produce solutions of the quality evidenced by the published articles is extraordinary. You truly exhibit the problem-solving skills needed in the world today.

I was particularly impressed by your abilities to outline a solution approach (including your assumptions) and decision criteria. The communication of these components will be critical to your future success.
Frequently, when one presents a solution to senior management, the assumptions and methodology are more important than the answer. That may sound strange at this point in your academic training; but trust me, after you graduate and present a few solutions, you will understand what I'm saying. It is your assumptions and modeling approach that determine, to a great degree, the answer that you get. It is in developing the approach and building the model that you gain a rich understanding of the problem. Your ability to truly understand a problem not only enables you to solve the problem but, more important, empowers you to implement the solution—and implementation is where you provide value to an employer and prove value to yourself.

I was impressed as I read the five solution approaches published in this journal. The level of understanding and variety of mathematical tools employed to solve the problem were far beyond anything I encountered at the undergraduate level. Of course, back then we scratched our solutions on cave walls and held up torches so that our professors could read them. Well, I'm exaggerating a bit; but your understanding and sophistication are well beyond anything I possessed or encountered at the undergraduate level. I was also impressed with your written communication skills, your ability to put thoughts on paper and communicate them clearly and succinctly. Frequently, the people who master problem-solving are among the weakest at communicating the results. I encourage you to continue to refine your communication skills. They will serve you well and are perhaps even more important than your quantitative problem-solving skills.

# Assumptions

Enough congratulations; I was truly impressed. Let's discuss some assumptions. The common assumption that I saw in the models is the one I will paraphrase as, "when exiting a room or building during an emergency, people will exit via the nearest exit." Unfortunately, this is not the case.
According to Denver Fire Chief Richard Gonzales, "studies have shown that in the case of fire, people exit via the door they entered, regardless of the nearest exit. During a fire people do not always act rationally. They generally remember the way they came in and retrace that path even if another exit is much closer." This finding, based on actual experience, further complicates the maximum occupancy problem and creates a need to understand how people enter a room or building in order to determine how they will exit. This adds a level of complexity to determining how quickly people will exit and therefore the maximum number to let in.

Let's examine another common assumption. I will paraphrase this one as, "the average person can exit at a rate of $x$ feet per second." Having entered and exited burning buildings on more than a few occasions, I can challenge this assumption based on first-hand experience. I have been in several burning buildings where the smoke was so thick that I couldn't see my hand in front of my face. It's also not unusual for the power to go out because of the fire or the water used to fight it. Unless the room has emergency lighting (or it's daytime and the room has exterior windows), you're moving around in darkness. In smaller fires, or in the early stages of a fire, it can be difficult to see across a well-lit room; and many public places, such as restaurants, bars, dance halls, and theaters, are not well lit. The point is that visibility is a critical variable that impacts the speed at which people can exit or even find the exit they remember entering.

In addition, most people have never exited a room or building during a fire, and this lack of experience impacts the way they react. Our common evacuation experiences occur during fire drills, and fire drills do not accurately represent emergency conditions. (As an aside, when was the last time you participated in a fire drill at a restaurant, bar, dance hall, or theater?)
The reason why people practice fire drills is so that they know where the exits are and which one to use in the event of a fire. But knowing what to do and doing it are two different things. If they were the same, we would ace every exam, never get in a car wreck, and always say the right thing.

The next assumption that I would like to address was actually implied in each of the published articles. I will paraphrase the assumption as "people behave rationally in an emergency." In a private conversation, Chief Gonzales cited several examples of just how irrationally some people behave. The Chief's examples are best summarized by the comments of a restaurant patron who refused to leave a burning building, even as the room was filling with smoke. The man argued with fire department personnel, "I paid for this steak and I'm going to eat it." This may be an extreme example; but extreme or not, it highlights a point: You have to account for human behavior, whether it's logical or not, because that behavior represents reality. Failure to do so leaves your model, and therefore your solution, open to attack.

The message that I hope you're hearing is this: Your model or solution approach must account for critical real-world conditions. Failure to do so will impact your credibility and therefore acceptance of your solution.

# Will They Use Your Solution?

Let's move on to implementation requirements. To maximize the probability of a successful implementation, your solution, model, or method must address the issues of each stakeholder. This implies that you must first figure out who the stakeholders are. In the case of the maximum occupancy problem, Chubb and Williamson [1998] provide a fairly complete list of construction project stakeholders, each of which has a stake in the maximum occupancy decision:

Construction projects require an owner or developer who defines a specific need.
The owner must usually obtain or arrange for the acquisition or transfer and expenditure of capital to finance the project. This capital will be used to procure skilled designers, builders, and the materials they need to execute the project. To protect this investment, insurers will be retained to underwrite the performance of the contracted designers and builders, and insurers will ultimately assume a portion of the risk exposure once the project is completed. Regulators will insist on reviewing the project throughout design and construction as well as throughout the period of occupancy to ensure compliance with regulatory mandates. Besides preserving public confidence in building safety and safeguarding the public from involuntary exposure to fire risk, their activities also help ensure a secure tax base. Finally, the occupants or tenants themselves will often participate to see that their individual needs are met. ... The complexity of fire safety decisions is amplified by the individual agendas these actors bring with them.

After the relevant stakeholders have been identified, you have to identify the needs and concerns of each stakeholder group. This is best done by talking to them, in person, on their turf and in their terms. During the discussion, ask lots of "why" questions ("Why is that important?" and "Why do you feel that way?"). The answers to these questions enable you to understand what each stakeholder values. You can then define a solution space that incorporates what the stakeholders told you was important. You then begin to weigh priorities and make tradeoffs based on politics, economic impact, risk, importance to the decision makers, and/or the ability of a stakeholder group to block the implementation of your solution. And yes, you can now incorporate your mathematical findings.

For what it's worth, I have never (outside of academia) seen a mathematical solution dominate the other decision criteria.
In really good solutions, the mathematical findings complement the solution, but they never dictate it. Regarding the use of mathematical models to solve real-world problems, my point is best captured in the words of Chief Gonzales: "These models work in an ideal world, but that world doesn't exist."

# Telling the Story

Let me address the articles written for local newspapers defending your analysis. It has been my experience that when communicating to the general public, one's message is best received when it is presented in simple, clear, and succinct terms that address the audience's hopes, fears, and dreams as they pertain to the topic at hand. To that end, let me suggest that an article defending your method should focus on its ease of use, its grounding in common sense, and the amount by which the results you generate exceed what they already have. A quote from a highly visible and respected official never hurts either.

As an example of what not to do, I submit the following.

Dear Mr. and Mrs. Public,

Concerning our award-winning method to determine the maximum occupancy level of your child's elementary school classroom, we used a polyhedral approach to approximate a statistically unbiased estimator that incorporated Euler's formula to model crowd movement in small rooms. This model incorporated Chebyshev's inequality as it applies to elementary-school traffic patterns.

We then fed our results into a simulation model utilizing software that we built ourselves based on tools that we downloaded from the Internet.

While this method is probably way over your head, we have full confidence in its ability to forecast the probability of an emergency during school hours.

This information enables us to set the maximum occupancy of your child's classroom at 183 plus or minus $7\%$.
Yours truly,

Contest Winners

The correct approach would be to convince the public that your method yields a solution that increases their safety and improves the likelihood of the survival of their loved ones. As an old farmer once told me, "No one wants to know how we make sausage; they just want to know how good it tastes."

# Summary

Let me close by summarizing my key points:

- I commend you on your ability to frame the problem and communicate your assumptions and solution approach.
- Always test your assumptions to make sure they are grounded in reality.
- Identify the relevant stakeholders, elicit their concerns, and address those. You don't have to give everyone what they want, but you do have to demonstrate that you listened and considered each request.
- When communicating to the public (in writing or orally), use simple, clear, and succinct language that addresses the audience's hopes, fears, and dreams as they pertain to the topic at hand. Focus on the benefits that they will receive and how your solution represents an improvement over what they now have.

If you do these four things, your successes will outnumber your failures and people will respect your work.

Good luck and keep up the good work.

# Reference

Chubb, M.D., and R.B. Williamson. 1998. Value-based fire safety: A new regulatory model for mitigating human error. In Human Behaviour in Fire—Proceedings of the First International Symposium, August 30–September 2, 1998, edited by T.J. Shields, 105–114. Belfast, Northern Ireland: University of Ulster.

# About the Author

Richard Hewitt has solved problems and implemented solutions in a variety of industries, including oil and gas exploration, public safety, and telecommunications. His work includes two trips to Antarctica on behalf of the National Science Foundation to realign support operations for the NSF Antarctic Research Program. Dr.
Hewitt's work has directly resulted in the generation of over \$500 million in new revenue and annual cost savings in excess of \$65 million. Dr. Hewitt is currently developing a performance feedback system for U S WEST, a Fortune 100 corporation in the telecommunications industry.

# Pollution Detection: Modeling an Underground Spill through Hydro-Chemical Analysis

James R. Garlick

Savannah N. Crites

Earlham College

Richmond, IN 47374

Advisors: Mic Jackson and Tekla Lewin

# Summary

Data from ten monitoring wells in a region of suspected underground pollution are used to assess the source, time, and amount of pollutant released into the ground. The chemicals are sorted based on changes recorded in their concentrations over time to determine which were active pollutants during the data collection period and to account for discrepancies caused by an incomplete data set. Those chemicals found to be active during this time period change concentration simultaneously, indicating that each chemical is a component of a single leaking liquid involved in two major spills. The concentrations of selected active chemicals are combined to form a composite indicator whose concentration value is found at each well on each date. The composite indicator reveals that two spills occurred, the first between July 1991 and March 1993, and the second between January 1995 and April 1997, possibly continuing until the end of the data collection period. The primary chemical constituents of the leaking liquid are identified.

A Delaunay triangulation is used to interpolate a gradient of concentration for the composite indicator at each date between the monitoring wells. Given that the general flow of groundwater in this region is directed toward well 9, the time and location of the pollution source can be approximated based on changes in the concentration gradient over time. This spill is estimated to have originated in the region surrounding the point (8000, 4500).
Following the initial triangulation, Voronoi polygons are used to construct a convex hull representing the total volume and position of the spill (the volume of the contaminated area). This polygon is composed of smaller segments, each of a specific uniform concentration. The program Geomview is used to generate graphics of these polygons and convex hulls. A volume can be calculated at each concentration, and ultimately the total volume of polluting liquid can be found, if the concentration of the composite indicator in the original polluting liquid is known.

Finally, various testing and interpretation methods are explored and incorporated into a procedure for evaluating underground pollution. Each method is discussed in terms of its application to the scenario in Problem Two and uses information given in the data set to test the validity of the method.

# Introduction

Given the location and elevation of eight groundwater monitoring wells (two more wells exist at unknown locations), a complete chemical analysis taken periodically at each well between 1990 and 1997, and the general direction of groundwater flow, it is possible to accurately estimate the location, source, time of origin, and total volume of pollutants seeping underground. In the case of a suspected leak in a chemical storage facility built over homogeneous soil, cost and safety prohibited collection of analytical data directly below the suspected site of the spill. Data from monitoring wells on the periphery of, but not necessarily directly in, the suspected polluted region are used in a mathematical model to determine whether a leak has occurred, the time and location when the leak occurred, and the amount of liquid that has leaked during the data collection period.

# Assumptions

- All monitoring wells are located below ground and are contained within an aquifer (a geological unit capable of storing and transmitting substantial volumes of water).
This aquifer has an unobstructed constant flow rate, which is inversely proportional to the porosity of the soil medium. The monitoring wells are permanent, allow free flow through their measuring devices, have no effect on the chemical or geological composition of the region, and provide an accurate reflection of the surrounding area. This ensures that the wells themselves do not contaminate or pollute the region to be assessed [Soliman et al. 1997, 32].
- The volume of fluid is constant in each well, and all wells have the same volume. Assuming a consistent volume between wells allows direct ratios to be assessed comparing concentrations of solutes in each well.
- Different chemicals may travel through the aquifer at different rates. Chemical substances have a constant and specific ability to move in aqueous solutions depending on the polarity of the molecules, hydrophobicity, and the initial concentration of each compound.
- Some chemicals present in the data set occur naturally in the groundwater and are not products of pollution. Any chemical that exhibits no significant change in concentration at any monitoring well over the course of the data collection period can be removed from consideration in the data. In addition, certain naturally appearing chemical components of groundwater can be expected to fluctuate between standard levels.
- Concentrations of pollutants are highest near their source, and concentrations decrease as time and distance from the source increase.
- The given data set is incomplete. Some trends may be misrepresented or missed entirely due to lack of available data. Also, the values that are given must be appropriately evaluated so as not to treat the N/A values as zero.
- Discrepancies in the data can be attributed to variations in the equipment used or in sampling and analyzing techniques over the course of the study and should not always be interpreted as changes in the environment, especially those occurring on the same date in every sample tested.
- Pollution is defined as a contaminant that is harmful to an organism, while contamination refers to a greater concentration of a substance than would occur naturally, without necessarily causing harm [Blatt 1997, 76]. In this problem, we assume that both terms refer to the artificial contamination of an underground region, regardless of the effect that the contaminants may have on organisms.

# Dealing with the Data

To use or interpret such a large and varied data set effectively, specific criteria must be employed to organize and sort the known information. We converted the data from its original spreadsheet form into a database so that we could set up queries and selectively access any portion of the information.

Several components of the data were not chemical concentrations but other factors necessary for a thorough chemical analysis, such as specific conductivity and total dissolved solids. These were separated and stored in another spreadsheet. Although some methods of modeling pollution use these measurements, our models do not, because we could not detect a significant pattern in these values to indicate the presence or absence of pollution.

Using line graphs mapping the concentration of a given chemical at all dates and at each well, we identified chemicals that exhibited a negligible change in concentration. These were removed and stored in a separate spreadsheet. This left 23 chemicals from an original set of 106 measurement categories.

# Determining the Presence of Pollution

From the rapid increases shown in the line graphs of chemical concentrations over time, it was apparent that new pollution had occurred in this region over the testing period.
Those chemicals detected as new pollutants include: acetone, ammonia, arsenic, barium, bicarbonate, calcium, chloride, iron, lead, magnesium, manganese, nickel, nitrate/nitrite, potassium, sodium, TDS, sulfate, vanadium, and zinc.

The concentrations of the majority of the chemicals in the active data set rise and fall together, indicating that each is a constituent of a single liquid involved in the spill. Although the concentrations of all active chemicals in the data set follow obvious trends, the changes in concentration are much more amplified for some than for others. We chose these amplified chemicals as indicator chemicals to track the movement of the spill. To further simplify spill detection, we added the concentrations of these indicator chemicals (chloride, sulfate, and nitrate/nitrite) together to form a composite indicator chemical, the concentration of which indicates the presence of pollution at each test site on a given date. We chose these chemicals also because they are common components of pollutants and are often used to monitor pollution [B.C. Ministry of Environment, Land, and Parks 1999].

In choosing chemicals to serve as indicators for a spill, it is essential to find chemicals that were measured consistently on the same dates and at all wells throughout the data collection period. Three chemicals in this data set that fit this criterion are chloride, sulfate, and nitrate/nitrite, and we used those in the composite indicator. Because the data set is not complete and the measurements were not taken consistently for all chemicals at all points or on all dates, it is important to ensure that the concentration of this composite indicator does not misrepresent trends in the movement of the spill due to a lack or abundance of data for a given well or on a given date.
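Forming the composite indicator can be sketched as follows, assuming a simple record layout (the sample records are hypothetical, not the contest data). Note that N/A measurements are skipped rather than treated as zero, as the assumptions above require:

```python
# Sketch: build the composite indicator (chloride + sulfate + nitrate/nitrite)
# per well and date. N/A values (None) are skipped, never treated as zero,
# and a (well, date) pair is kept only if all three indicators were measured.

def composite_indicator(records):
    """records: iterable of (well, date, chemical, concentration-or-None).
    Returns {(well, date): summed indicator concentration}."""
    INDICATORS = {"chloride", "sulfate", "nitrate/nitrite"}
    by_key = {}
    for well, date, chem, conc in records:
        if chem in INDICATORS and conc is not None:  # skip N/A, don't zero it
            by_key.setdefault((well, date), {})[chem] = conc
    # keep only (well, date) pairs with a value for every indicator chemical
    return {k: sum(v.values()) for k, v in by_key.items() if len(v) == 3}

records = [  # hypothetical measurements
    ("MW-9", "1995-03-01", "chloride", 120.0),
    ("MW-9", "1995-03-01", "sulfate", 80.0),
    ("MW-9", "1995-03-01", "nitrate/nitrite", 5.0),
    ("MW-3", "1995-03-01", "chloride", 40.0),
    ("MW-3", "1995-03-01", "sulfate", None),  # N/A: this pair is dropped
    ("MW-3", "1995-03-01", "nitrate/nitrite", 2.0),
]
print(composite_indicator(records))  # {('MW-9', '1995-03-01'): 205.0}
```

Dropping incomplete (well, date) pairs, rather than summing whatever is present, is one way to keep a sparse date from misrepresenting the indicator's level.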
We went through the data set and eliminated dates that were recorded twice (taking an average of the concentrations listed at each well) and corrected other abnormalities in the data until each of the three chemicals had exactly one value at each test location on all dates needed. Exceptions to this include those wells for which values are not available at the beginning of the testing period; these wells were added to the analysis as their data became available.

# The Time of the Spill

A series of line graphs showing the concentration of the composite indicator at a given well over time can be used to estimate the time of the spill. When plotted together so that each line represents a monitoring well, these graphs of concentration over time show when concentrations first start to increase and at which well(s) this increase is first recorded. This record of which wells show the first rise in concentration provides a rough estimation of the location of the source as well. [EDITOR'S NOTE: We cannot effectively reproduce the authors' graphs here in black and white.]

Two spills probably occurred, the first between July 1991 and March 1993. During this period, the concentrations in wells believed to be closest to the spill increased dramatically, then receded back toward normal levels. The second probably began in January 1995 and continued at least until January 1997. At that time, concentrations were starting to descend, but this could result from a decrease in the rate of the spill and may not indicate that the leak stopped.

# Locating the Source

The line graphs generated by queries from the database are extremely useful in determining the presence of a spill, the time at which it occurred, and the chemicals involved. However, finding the source of the spill is more effectively accomplished with a visual interpolation showing the concentration of the composite indicator at each well over time.
This way, we can determine where the concentrations rose first and the general direction in which the spill moved. Knowing the general direction of the spill, we can develop bounds within which the source of the spill must lie. This can be done in three dimensions by creating a Voronoi polygon. This method of interpolation organizes data points into triangles with their natural neighbors and partitions the area around each known point into a polygon such that an arbitrary point placed in the polygon is closer to that data point than to any other. The triangulation of a map is unique and effectively weights the value of any point in the region as a function of its distance from three natural neighbors.

While the line graphs show approximate dates when a spill might have occurred and at which wells the changes in concentration were detected, the Voronoi polygon method interpolates between the known data points to show more precisely the location of the spill source. From a series of diagrams of the concentration of the composite indicator chemical at each well over a selection of dates, the progress of the spill is very apparent, and the location of the source can be found by following the flow patterns in the underground system backward from the point where the spill first occurred. [EDITOR'S NOTE: We do not reproduce the authors' maps.]

# A Procedure for Evaluating Underground Contamination

The problem of detecting the presence of underground liquids is an old one, and due to its applications in locating water sources, petroleum reserves, and mineral deposits, an abundance of information about techniques and methods is available. Drilling sampling or monitoring wells is clearly necessary at some point to determine the exact properties of an underground region. However, such sampling and the analysis that follows are time-consuming, dangerous, and expensive, and have the potential to contaminate or destroy the flow of groundwater in the region.
There are numerous surface or superficial measurements that help determine the most effective placement of such wells. In addition, data gained from existing wells, when properly applied, can help determine the need for and placement of additional monitoring wells. Several useful measurements can be gained from a surface geophysical survey before drilling a well, including gravitational, electrical, and magnetic conductance readings. These involve passing electrical current or magnetic fields through surface soil and measuring the drop in voltage or magnetic potential, as well as the density at a given location. By comparing the conductance of surface soils at various locations in the region, the presence of sand or gravel beds can often be detected below the surface [Walton 1970, 61]. This is useful because sand and gravel beds have a high porosity, or ability to contain free-flowing fluids in the form of groundwater tables known as aquifers. These aquifers are the mechanism by which underground pollutants are contained and spread, so an understanding of the flow and direction of the aquifer is crucial to accurately predicting the location of the source of contaminants [Soliman et al. 1997, 32].

# Drilling and Monitoring a Well

Once the location of an initial well has been decided (surface measurements should indicate the presence of an aquifer), several types of wells are available. Because drilling the initial bore hole for the well is the most dangerous and expensive part of the process, permanent monitoring wells such as those used in collecting the data set for this problem are the most economical in the long term. Such a well should be capable of detecting the direction and rate of flow of fluids in the aquifer, determining the level of the water table, and providing core samples to be chemically analyzed in a laboratory.
These wells must be permeable to water in the region and cannot disrupt flow in the aquifer or introduce new contaminants due to the drilling process or corrosion of the well itself over time.

# How Many Wells Are Needed?

The number of wells needed to determine the source, time, and volume of an underground chemical spill can vary widely based on the circumstances of the spill. For the models described here, a minimum of three wells is necessary. From the initial well, the direction and flow of the water system can be determined, along with the concentration values of chemicals dissolved in the groundwater. Additional wells should be drilled along the path of the water table, considering the general location of a chemical storage facility or other suspected source of contamination, if known. If contaminants are detected by the initial well, others should be drilled "downstream" of the spill, and if no contaminants are detected, wells should be drilled "upstream," or possibly along a different aquifer, depending on the geological constitution of the region.

When at least three wells are available to detect the flow and concentration of contaminants, the following models can be used to estimate the location of the source of any contaminants found to be present, as well as the time of a spill and the total volume of liquid spilled. In each case, the more wells used for data collection, the more accurate the predictions that can be made about the spill.

# Model 1: A Graphical Approach

This model requires the assessment of chemical concentrations at a minimum of three different locations on at least three dates per location. The more wells or locations of data collection, the more accurate the model.

The first step is to enter the concentration values into a database so that they can be accessed by date, collection location, chemical, or concentration.
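This first step can be sketched with an in-memory SQLite database; the table layout, column names, and sample rows are illustrative, not the authors' schema or the contest data:

```python
# Minimal sketch: load concentration measurements into a database so they can
# be queried by date, location, or chemical, as described above.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE sample (
    well TEXT, date TEXT, chemical TEXT, concentration REAL)""")
rows = [  # hypothetical measurements
    ("MW-9", "1993-07-01", "chloride", 150.0),
    ("MW-9", "1991-01-01", "chloride", 20.0),
    ("MW-3", "1993-07-01", "chloride", 35.0),
]
conn.executemany("INSERT INTO sample VALUES (?, ?, ?, ?)", rows)

# Query: chloride history at one well, in date order (feeds a line graph).
history = conn.execute(
    """SELECT date, concentration FROM sample
       WHERE well = ? AND chemical = ? ORDER BY date""",
    ("MW-9", "chloride")).fetchall()
print(history)  # [('1991-01-01', 20.0), ('1993-07-01', 150.0)]
```

Once the measurements are in a table like this, every later step (per-well line graphs, composite indicators, date cleanup) reduces to a query.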
Using the database, line graphs showing concentrations of all chemicals by date can be generated for each well or collection location. From these graphs, the presence of pollution can be determined, as well as the dates of significant changes in concentration of measured chemicals. Dramatic increases in concentration indicate the introduction of a pollutant in this model. In many cases, as in this problem, many of the chemicals detected in a chemical analysis rise and fall simultaneously, indicating that they are components of a common pollutant. It is possible that the chemicals fall into two apparent groups, indicating that two liquids are leaking. In either case, one chemical, or preferably a group of detected chemicals, is consolidated to form an indicator chemical. This simplifies future graphs by allowing only one concentration value to be monitored.

In creating a consolidated indicator, it is imperative that the data be consistent. The same type of data must be available at all sites and on all dates, or adjustments must be incorporated to prevent anomalies in the data set from drastically misrepresenting the concentration of chemicals detected in the groundwater.

Having developed a consolidated indicator by adding the concentration values of representative chemicals at each site on a given date, we can generate new graphs to determine the time and source of a spill. Overlaying the graphs of concentration over time at each well is an effective way to determine the time of a spill. It is useful to collect enough data to develop a baseline concentration for the chemicals being measured. Tables published by the EPA, the British Columbia Ministry of Environment, Land, and Parks, and other regulatory agencies list normal ranges of concentrations of various chemicals in groundwater and are also useful in distinguishing those chemicals present naturally in a system from those caused by pollution.
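A baseline test of this kind can be sketched as follows; the baseline window (first three measurements) and the exceedance factor of 3 are illustrative choices, not the authors':

```python
# Sketch: flag the dates at which a well's indicator concentration rises well
# past a baseline established from its earliest measurements.

def flag_exceedances(series, n_baseline=3, factor=3.0):
    """series: indicator concentrations in date order. The baseline is the
    mean of the first n_baseline values; return the indices of measurements
    exceeding factor * baseline."""
    baseline = sum(series[:n_baseline]) / n_baseline
    return [i for i, c in enumerate(series) if c > factor * baseline]

well_9 = [20, 22, 21, 25, 90, 160, 140, 60]  # hypothetical indicator values
print(flag_exceedances(well_9))              # [4, 5, 6]
```

Comparing the flagged index ranges across wells then shows which well the spill reached first.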
Interpreting these graphs is relatively simple. On a chart showing the location of each well or test location, identify those wells (if any) that never show a definite increase in concentration of the indicator. Then identify the well or wells reached first by high concentration values and the wells showing the highest concentration overall. Using this information, as well as the direction and rate of flow in the system (measured at each well or given in the problem statement), the contaminants can be traced back to an estimated source location.

The results of this process for each of the two spills indicated are included in Figure 1. Both spills reached wells 9 and 12 first, with wells 3, 11, and 7 showing an increase later. This indicates that both spills began somewhere in the region surrounding the point (8000, 4500).

![](images/99850e095ce60dc73b833b24a492d83f7637b757f682de48f31136605e7466f6.jpg)
Figure 1. Location of the spills, indicated by a bull's eye, with directions of flow in the aquifer.

The delay of the spill reaching well 7 may be due to a pattern in the flow direction of the aquifer. It may at first seem strange that well 3 shows a lower concentration than well 7, which is clearly farther from the spill. This is probably due to the lower elevation of well 3. The measurements taken at the top, bottom, and middle of the monitoring wells indicate that the spill is seeping in a downward direction and that it never reaches the bottom of the wells. This also explains why well 13, directly in the path of the spill, never shows an increase in chemical concentration—its only samples were taken from the bottom of the well.

# Strengths and Weaknesses

The greatest strength of this method is its efficiency in interpreting a large and complex data set.
Most desktop computers are able to build and utilize such a database from a spreadsheet, and once the data are organized, the time needed to compute and interpret the results is minimal. Because it bases the placement of new wells on information gathered by existing ones and provides rough approximations with as few as three wells, the model is very efficient in terms of drilling and well maintenance. However, it provides only a very general approximation of the time and location of the source and can be greatly affected by irregularities in the positioning of the wells or uneven flow patterns in the groundwater system. In situations where samples are unavailable directly below a suspected pollution source, leaks cannot be detected until they have already penetrated the groundwater supply, precluding attempts to stop the leak before it poses a problem to the surrounding community. This method provides no way to accurately determine the volume of polluting liquid spilled.

# Model 2: Interpolation with Triangulation in Three Dimensions

Using the same methods described above, this model requires the creation of a database and line graphs to determine the presence of a pollutant and the chemicals involved. It also uses changes in the concentration of a composite indicator of chemicals to monitor the flow of the pollutant.

Computational geometry describes a method known as natural-neighbor interpolation by which sets of highly irregular data can be organized and represented visually. Using Delaunay triangulation, a unique set of triangles can be constructed from an arbitrary set of points. The value of an arbitrary point is defined entirely locally, based on the values of the three nearest known points, the vertices of the triangle in which the point lies.
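The local step described above can be sketched directly: inside a single triangle of the triangulation, the value at an arbitrary point is the barycentric (area-weighted) combination of the three vertex values, so every known data point is recovered exactly. A minimal sketch with hypothetical well coordinates and concentrations:

```python
# Sketch: linear (barycentric) interpolation inside one Delaunay triangle.
# Coordinates and concentrations are hypothetical.

def barycentric_value(p, tri_pts, tri_vals):
    """Interpolate tri_vals linearly at point p inside triangle tri_pts."""
    (x1, y1), (x2, y2), (x3, y3) = tri_pts
    px, py = p
    det = (y2 - y3) * (x1 - x3) + (x3 - x2) * (y1 - y3)
    w1 = ((y2 - y3) * (px - x3) + (x3 - x2) * (py - y3)) / det
    w2 = ((y3 - y1) * (px - x3) + (x1 - x3) * (py - y3)) / det
    w3 = 1.0 - w1 - w2  # barycentric weights sum to one
    return w1 * tri_vals[0] + w2 * tri_vals[1] + w3 * tri_vals[2]

# Three neighboring wells and their indicator concentrations:
pts = [(0.0, 0.0), (4.0, 0.0), (0.0, 4.0)]
vals = [10.0, 50.0, 30.0]
print(barycentric_value((0.0, 0.0), pts, vals))  # 10.0: data point recovered
print(barycentric_value((2.0, 0.0), pts, vals))  # 30.0: midpoint of an edge
```

This locality is what the natural-neighbor principle buys: an outlier well only distorts the interpolation inside the triangles it touches.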
Delaunay triangulation and Voronoi polygons are extremely useful for interpolation in this type of system for two primary reasons:

- They provide a linear system by which the value of any arbitrary point can be determined, and the original data points are exactly recovered if solved for using this system.
- The interpolation of every point is influenced only by its natural neighbors, so that irregularities in the data set are reflected in the model but do not distort the accuracy of the model at other points [Sambridge et al. 1995, 3].

The computer program Geomview takes concentrations stored in the database and the location coordinates of the monitoring wells and generates a convex hull that represents the spill as a whole. The convex hull is the outermost surface of a Voronoi polygon, composed of smaller tetrahedra, each called a datum, representing the space between three "natural neighbors." Each datum has a specific and constant concentration based on the known concentrations at the three points that define it [Watson 1992, 108].

The data entered into this program are divided such that only concentrations above an established baseline appear in the visual model representing the spill. The program weights each datum by volume at a specific concentration and, by adding each of these weighted concentrations, calculates the total volume of liquid in the spill. The convex hull can be used in this model to represent visually the location and volume of the entire contaminated area of the spill. We used Geomview to generate maps showing the spill defined by this data set at six dates during the data collection period and the volume of the contaminated area at each time. [EDITOR'S NOTE: We do not reproduce the authors' maps.]

This model is also useful in determining the source of the chemical spill. It produces a new diagram on every date requested, based on the concentrations entered from the data set for that date.
At the start of the data collection period, no pollution is visible. As time progresses, the diagram clearly indicates which wells experienced higher than normal concentrations of the indicator chemical. The approximate source of contamination, as well as the direction of flow in the groundwater table, is apparent when several successive convex hulls are viewed together.

# Error Analysis

To test the error of this model and the linear model, a data set can be computed for which the source location of the contaminant, the date on which the leak occurred, and the total volume of liquid spilled are known. The difference between the predicted values and the actual location, time, and volume, divided by the actual values, determines the percentage error of the interpolation. The amount of error depends on numerous factors specific to the individual test, including how far the source is from the nearest monitoring well, the flow rates in the underground system, the size of the spill, and the number of monitoring wells.

# Strengths and Weaknesses

This model is inherently stronger than the graphical method because it allows a three-dimensional visualization of the data and because it uses a unique and algorithmic interpolation to evaluate the presence of contaminants between known points. Delaunay triangulations, and the Voronoi polygons and convex hulls that can be derived from them, are extremely accurate when used to interpolate in highly irregular data sets, because of the natural-neighbor principle. The presence of irregular data points or wide variations in the distribution of points is reflected in the resulting projection but does not result in a misrepresentation of the data or inaccurately skew the values of known points [Sambridge et al. 1995]. The more data points available, the more accurate this interpolation is, because—like any interpolation—the model is most accurate nearest the known points.
Because the data set given with this problem is highly irregular and contains very few sampling locations, the predictions of this method may not be entirely accurate, but they are superior to those of most interpolation methods.

As with the previous model, this method cannot detect a spill until it has already entered the water table.

Also, the only obvious method of error analysis is to test more points. The values given are still approximations, and the process of sorting such a large data set is still rather tedious. If such a method were employed from the start of a project and data were collected in a specific and consistent manner, analysis using this model would be relatively simple using the database and the processes described here.

This model can determine whether or not a leak has occurred, approximate the time of the spill, and give an estimated location of the source of the spill. Its greatest advantage is its ability to determine the volume of the contaminated area and to model its location underground. When the initial concentrations of chemicals in the leaking liquid are known, this model can also determine the volume of liquid that has leaked.

This model is extremely cost-effective, generating approximate boundaries of a spill based on known information. Successive wells should be drilled at these boundaries to ensure that the spill is in fact accurately predicted by the current information. If new wells detect additional areas of the spill, new boundaries will be generated and the process can be repeated. In either case, unnecessary wells are never drilled once a contaminated area is identified.

# Results and Recommendations

New pollution occurred in this region during the testing period between 1990 and 1997.
Two separate spills of the same liquid occurred, the first beginning in July 1991 and ending approximately in March 1993, and the second beginning in January 1995 and tapering off about February 1997, although possibly continuing for the remainder of the collection period. These spills originated from the region marked in Figure 1.

The spill was composed of the following chemicals, which were released into the ground as pollutants: acetone, ammonia, arsenic, barium, bicarbonate, calcium, chloride, iron, lead, magnesium, manganese, nickel, nitrate/nitrite, potassium, sodium, TDS, sulfate, vanadium, and zinc. The total volume of the polluted area by 1997 was approximately 32 million cubic feet. (When the initial concentration of the composite indicator in the liquid that spilled is available, it is possible to determine the total volume of liquid leaked by weighting each datum of a Voronoi polygon and finding the sum of the concentrations, multiplied by the total volume of contaminated area.)

Recommendations for future testing include:

- Identify surface properties to most effectively place the initial monitoring well.
- Test the same chemicals on the same dates at all wells to ensure complete and accurate data.
- Use the information generated by previous wells to predict the borders of the spill and place additional wells along this boundary to minimize the number of test locations needed to accurately determine the size of the spill.
- Determine the rate and direction of flow of the aquifer in which monitoring wells are located to accurately predict the time and location of the pollution source.

# References

Blatt, Harvey. 1997. *Our Geologic Environment*. Upper Saddle River, NJ: Prentice Hall.
British Columbia Ministry of Environment, Land, and Parks. 1999. http://www.env.gov.bc.ca/wat/gws/gwbc/.
Flanagan, David. 1996. *Java in a Nutshell*. Sebastopol, CA: O'Reilly and Associates.
Levy, Stewart, Tamara Munzer, and Mark Phillips. 1996.
Geomview. http://www.geom.umn.edu/software/download/geomview.html.
Sambridge, Malcolm, Jean Braun, and Herbert McQueen. 1995. Geophysical parameterization and interpolation of irregular data using natural neighbours. *Geophysical Journal International* 122: 837-857. http://rses.anu.edu.au/geodynamics/nn/SBM95/SBM.html.
Sedgewick, Robert. 1988. *Algorithms*. 2nd ed. Reading, MA: Addison-Wesley.
Soliman, Mostafa M., Philip E. LaMoreaux, Bashir A. Memon, Fakhry A. Assaad, and James W. LaMoreaux. 1997. *Environmental Hydrogeology*. New York: Lewis Publishers.
Todd, David Keith. 1959. *Groundwater Hydrology*. 2nd ed. New York: John Wiley and Sons.
Walton, William C. 1970. *Groundwater Resource Evaluation*. New York: McGraw Hill.
Watson, David F. 1992. *Contouring: A Guide to the Analysis and Display of Spatial Data*. New York: Pergamon Press.

# Locate the Pollution Source

Shen Quan
Yang Zhenyu
He Xiaofei
Zhejiang University
Hangzhou, China

Advisor: Zhang Chong

# Summary

We develop a model for a strategy to detect new pollution. Three processes govern the movements of pollutants in groundwater: advection, dispersion, and retardation. Information from the wells is used to

- determine the rate and direction of groundwater movement,
- determine the horizontal and vertical extent of the pollutants, and
- analyze the underground structure and characteristics.

Given the diversity and complexity of the data, we employ a two-step data selection to determine the pollutants most likely to have caused new pollution during this period. We refine the data to choose those chemicals that best represent the variation over the period. Then, by using a grid-search algorithm, we write a computer program to simulate the movement process and identify the location and start time of the pollution source.
The program is written in C and runs on a PC. Four new pollution sources are located. The graph resulting from our model is in good agreement with the given data. Finally, we test parameter sensitivity.

# Assumptions

- All soil and aquifer properties are homogeneous and isotropic throughout both the saturated zone and the unsaturated zone.
- The aquifer consists of sand and gravel.
- Steady, uniform water flow occurs only in the vertical direction throughout the unsaturated zone, and only in the horizontal (longitudinal) plane in the saturated zone in the direction of groundwater velocity.
- Physical processes play the greatest role, while the chemical processes are negligible.
- All the parameters describing the characteristics of both zones are constant throughout the monitoring period.
- All the sources of the pollutants are point sources.

# Problem One

This problem is to estimate the location and start time of the source, so we consider the movement process of the pollution and the underground structure.

# Data Analysis and Processing

We assume that there is no interaction between pollutants, so that we can process each pollutant separately. Using the given coordinates and water levels of the wells, we plot the water-level map by linear interpolation on the elevations of the monitoring wells, as in Figure 1. For simplicity of computation, we assume that all the underground water flows in the same direction.

![](images/61efd1cecedb136f89a2ce053bbd46dc373a43968baf220cd28fbc7a2787611c.jpg)
Figure 1. Water-level map.

# Data Selection

Because we have thousands of data points for concentrations of various pollutants, we must select data carefully. We do this in the following steps:

- Because pollutants are strongly influenced by layers of different permeability, measurements of critical parameters and pollutant concentrations need to be made at intervals over the depth of the aquifer.
We need a method for sampling at different depths in an aquifer. By analyzing the data set, we find that almost every pollutant affects only one part of a well (top, middle, or bottom). Thus, for each pollutant we need to consider only the effect on one layer of the well. Furthermore, the data from the bottom of each well (if any) remain constant or nearly so; hence we can neglect such data.
- We delete the data for some pollutants, such as tetrachloroethane, acrolein, benzene, bromomethane, chlorobenzene, cobalt, and so on, because there are hardly any changes in the concentrations of these pollutants in each well.
- We think that pulse fluctuations in a pollutant's concentration during a relatively stable period, such as for manganese, are caused by random factors. Thus, we eliminate these pollutants from the data set.
- One particular constituent, CarbonTotalOrganic, has a concentration that decreases significantly, from more than 1000 to less than 1.5. Thus, we eliminate it.
- Now only four pollutants remain: calcium, chloride, magnesium, and TDS.

# Reselection

For each remaining pollutant, to accurately reflect the tendency of the concentration to change, we reselect its data as follows:

- For each well, we choose two concentration values for each year, one from the first half of the year and the other from the second half.
- Because we do not know the locations of MW-27 and MW-33 and, moreover, the concentration changes in these two wells are small, we do not consider their data.
- According to the groundwater flow direction, the average concentration value of MW-9 should not be higher than that of MW-3 and MW-12, which contradicts the given data for calcium, chloride, and so on. This is also true for barium. (In 1997, concentrations in MW3M and MW12M vary from 50 to 85, whereas they vary from 80 to 95 in MW9M.) Therefore, we think that MW-9 is a pumping well (Figure 2). Thus, we do not use the data from MW-9 in our analysis.
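The first screening step, dropping pollutants whose concentrations hardly change at every well, can be sketched as follows; the relative-range threshold of 10% and the sample series are illustrative choices, not the authors':

```python
# Sketch: keep a pollutant only if at least one well shows a non-negligible
# change in its concentration series.

def changes_little(series, rel_tol=0.10):
    """True if the series' range is within rel_tol of its mean value."""
    mean = sum(series) / len(series)
    return (max(series) - min(series)) <= rel_tol * mean

def select_pollutants(data):
    """data: {pollutant: {well: [concentrations in date order]}}."""
    return [p for p, wells in data.items()
            if not all(changes_little(s) for s in wells.values())]

data = {  # hypothetical concentration series
    "benzene":  {"MW-3": [1.0, 1.02, 0.99], "MW-7": [0.5, 0.51, 0.5]},
    "chloride": {"MW-3": [41, 42, 62.4],    "MW-7": [50, 50, 125]},
}
print(select_pollutants(data))  # ['chloride']
```

The same loop, with a different predicate, covers the later steps (dropping pulse-like series or monotonically vanishing constituents).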
Finally, we list in Table 1 the data for calcium that we use to calculate the source location.

![](images/14e3c6ed37c3ecf8f75579e727da0175d4fd1b58fee960556250709b780380e5.jpg)
Figure 2. Groundwater movement near a pumping well.

Table 1. Data for calcium used in the model.
| Date | MW-3M | MW-7M | MW-11T | MW-12M |
| --- | --- | --- | --- | --- |
| 12/7/93 | 41 | 50 | 39 | 42 |
| 3/7/94 | 42 | 50 | 43 | 47 |
| 9/19/94 | 42 | 45 | 41 | 41 |
| 7/10/95 | 36.5 | 54.3 | 44.7 | 59.5 |
| 10/10/95 | 19.2 | 53 | 43.2 | 54.7 |
| 3/6/96 | 62.4 | 65.1 | 50.7 | 82.4 |
| 10/9/96 | 60.2 | 61.9 | 53.3 | 87.6 |
| 3/18/97 | 63.8 | 125 | 53.2 | 87.6 |
| 12/15/97 | 61.4 | 115 | 63.8 | 88.4 |
- According to the data, some pollutant concentrations are detected in an early year such as 1990; we name this the background concentration $(C_b)$. We think that the later pollutant concentrations consist of the background concentration plus the newly injected concentration. According to Figure 1, MW-1 must be at the headwater level. Moreover, the data from its bottom hardly change during this period according to the data set. Thus, we estimate $C_b$ using data from MW-1B as

$$
C_b = \text{arithmetic mean of the concentration values from MW-1B}.
$$

In Table 2 we collect the symbols used in this paper and their definitions.

Table 2. Symbols used.
| Symbol | Definition |
| --- | --- |
| $\alpha_L$ | horizontal dispersion coefficient (m) |
| $\alpha_T$ | vertical dispersion coefficient (m) |
| $C$ | pollutant concentration (mg/liter) |
| $C_b$ | background concentration (described above) |
| $C_0$ | concentration in the pollutant source (mg/liter) |
| $D$ | diffusion coefficient (m$^2$/s) |
| $H$ | water level (ft) |
| $I$ | hydraulic gradient |
| $K$ | hydraulic conductivity (gal/day/ft$^2$) |
| $L$ | horizontal distance in the direction of water flow (ft) |
| $m$ | discharge rate of the pollutant (mg/day) |
| $n$ | effective porosity |
| $q$ | discharge rate of the pollutant (liter/day) |
| $R_d$ | retardation factor |
| $S$ | compound parameter |
| $t_0$ | start time of the pollution (yr) |
| $\theta$ | angle between the direction of underground water flow and the $x$-axis |
| $V_d$ | groundwater velocity (ft/day) |
| $W$ | Hantush function |
| $(x_0, y_0)$ | coordinates of the pollution source |
# Model Design

# Model Formulation

The movement of pollutants consists of advection, dispersion, and retardation. Furthermore, given the large scale of the area, vertical movement is negligible. Thus, the movement of a pollutant in the soil (saturated and unsaturated) can be described by the following two-dimensional equation:

$$
R_{d} \frac{\partial C}{\partial t} = V_{d} \alpha_{L} \frac{\partial^{2} C}{\partial x^{2}} + V_{d} \alpha_{T} \frac{\partial^{2} C}{\partial y^{2}} - V_{d} \frac{\partial C}{\partial x}. \tag{1}
$$

# Model Explanation

The model equation applies to steady uniform flow. An analytical solution to the equation can be developed for both continuous (step-function) and pulsed inputs of pollutants as boundary conditions. A step function implies the input of a constant-concentration pollutant for an infinite amount of time, while a pulse load is a constant-concentration input for a finite amount of time. The terms "infinite" and "finite" are relative to the time frame of the analysis.

We assume that the pollution source is applied as a step function (continuously) with the following boundary conditions:

$$
\begin{array}{l} C (x, y, 0) = 0, \qquad (x, y) \neq (0, 0); \\ C (0, 0, t) = C_{0}; \\ C (\pm \infty, y, t) = C (x, \pm \infty, t) = 0, \qquad t \geq 0. \end{array}
$$

# Model Solution

Equation (1) is a second-order partial differential equation. Equations of this form apply to a wide variety of problems, including mass transport, fluid dynamics, and heat transfer.
For a continuous point source starting at time $t = 0$ , there is an analytical solution of the form

$$
C (x, y, t) = S \exp \left(\frac {x}{2 \alpha_ {L}}\right) [ W (0, b) - W (t, b) ], \tag {2}
$$

where

$$
m = C _ {0} q, \qquad \qquad S = \frac {m}{4 \pi V _ {d} (\alpha_ {L} \alpha_ {T}) ^ {1 / 2}},
$$

and $W(u, b)$ is the Hantush function

$$
W (u, b) = \int_ {u} ^ {\infty} \frac {\exp \left[ - y - \frac {b ^ {2}}{2 y} \right]}{y} \, d y \qquad \text{with} \qquad b = \sqrt {\frac {x ^ {2}}{4 \alpha_ {L} ^ {2}} + \frac {y ^ {2}}{4 \alpha_ {L} \alpha_ {T}}}.
$$

Before computing, we classify the parameters according to our assumptions above:

- During the data processing, the coordinates and start time of the pollution source are unknown, and so is the value of $m$ . Thus, $x_0, y_0, t_0$ , and $S$ are treated as variables.
- The parameters $\alpha_{L}, \alpha_{T}, \theta$ , and $V_{d}$ are treated as constants.

The main task is to find the location and the start time of the pollutants. Hence, we develop a grid-search optimization routine to get an optimized solution:

- We estimate the location of the pollutant source and transform coordinates as follows:

- Set the point of the pollutant source to be the new origin.
- Set the new $x$ -axis to be parallel to the direction of the underground water flow.
- Set the new $y$ -axis to be perpendicular to the new $x$ -axis.

- We construct an equation to calculate the movement of the pollutant under the ground. We calculate the concentration changes in each well and compare them with the changes in the data set. We repeatedly adjust the location of the pollution source, the value $S$ , and the value $t_0$ (detailed below) until there is satisfactory agreement. The criterion for convergence is the sum of the squares of the residuals between the data and the model predictions.
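Solution (2) is straightforward to evaluate numerically once $W(u,b)$ is. The following stdlib-only sketch (hypothetical; the function names and all demonstration values are placeholders, and this is not the team's program) uses the identity $\int f(y)\,dy/y = \int f(y)\,d(\ln y)$ with a midpoint rule on a logarithmic grid, and takes the arguments of (2) literally as printed:

```python
import math

# Hypothetical sketch: numerical evaluation of the Hantush-type function
# W(u, b) as defined above, and of the concentration solution (2).

def hantush_W(u, b, n=20000, y_max=60.0):
    """Midpoint rule for W(u,b) = ∫_u^∞ exp(-y - b²/(2y)) / y dy.

    Integrates on a logarithmic grid, using dy/y = d(ln y); the tail
    beyond y_max is negligible because of the exp(-y) factor, and the
    integral converges at the lower end when b > 0.
    """
    lo = max(u, 1e-12)
    log_lo, log_hi = math.log(lo), math.log(y_max)
    h = (log_hi - log_lo) / n
    total = 0.0
    for k in range(n):
        y = math.exp(log_lo + (k + 0.5) * h)
        total += math.exp(-y - b * b / (2.0 * y))
    return total * h

def concentration(x, y, t, S, aL, aT):
    """Equation (2), with its arguments taken literally as printed."""
    b = math.sqrt(x * x / (4 * aL * aL) + y * y / (4 * aL * aT))
    return S * math.exp(x / (2 * aL)) * (hantush_W(0.0, b) - hantush_W(t, b))
```

Because $W(t,b)$ decreases in $t$ , the bracket $W(0,b) - W(t,b)$ grows monotonically toward $W(0,b)$ , so predicted concentrations rise toward a steady value, matching the step-function source assumption.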
The objective function to be minimized is

$$
\sum_ {i} \left[ (C _ {i} - C _ {b}) - C _ {i} ^ {\prime} \right] ^ {2},
$$

where $C_i$ is the pollutant concentration data value for well $i$ , $C_i'$ is the model prediction, and $C_b$ is the background pollution level.

# Parameter Estimation

We estimate the parameters for the saturated zone as follows:

- Hydraulic Conductivity $K$ : We consider hydraulic conductivity, measured in gallons per day per square foot, only in the horizontal direction. According to the literature, $K = 265 \, \mathrm{gpd} / \mathrm{ft}^2$ ( $1 \, \mathrm{gpd} / \mathrm{ft}^2 = 4.72 \times 10^{-5} \, \mathrm{cm} / \mathrm{sec}$ ).
- Hydraulic Gradient: According to Figure 1, made by interpolation, we assume that the direction of the underground water flow is one-dimensional.
- Ground-Water (Interstitial Pore Water) Velocity $V_{d}$ : According to Darcy's Law, $V_{d}$ is defined as

$$
V _ {d} = - K I / n,
$$

where $I$ is the hydraulic gradient, $K$ is the hydraulic conductivity, and $n$ is the effective porosity. We assume that the soil type of the saturated zone is sand with porosity $20\%$ , so we estimate $V_{d} = 1.5$ ft/day.

- Dispersion Coefficient $\alpha$ : This coefficient incorporates two forms of dispersive process: dynamic dispersion and molecular diffusion. According to the literature, the longitudinal dispersion coefficient $\alpha_L$ and the transverse dispersion coefficient $\alpha_T$ are approximately equal; both have the estimated value 25 ft.
- Retardation Factor: Retardation depends on pollutant characteristics and aquifer composition. Since its effect is not very significant, we estimate $R_{d} = 1$ .
- Concentration in the Pollution Source: According to the literature, when the water table is sufficiently high that the pollutant directly enters the groundwater, the $C_0$ value is the estimate of the source concentration.

# Results

There are four new pollutants: calcium, chloride, magnesium, and TDS.
The location and the start time for the pollution sources, as predicted by our model, are in Table 3.

Table 3. Source and start time for pollutants.

| Pollutant | x-coordinate (ft) | y-coordinate (ft) | Start time (m/d/y) |
|---|---|---|---|
| TDS | 7077 | 6538 | 8/12/91 |
| Magnesium | 6423 | 7461 | 1/1/94 |
| Chloride | 6931 | 5823 | 5/18/91 |
| Calcium | 7750 | 6040 | 9/1/93 |
Finally, we mimic the movement process of the pollutant in reverse and compare with the given data set (Figure 3). From the graphs, we conclude:

![](images/e3bdd23d8aa1bb6af8064db6822a997ec31e98a2980cb65053807cb77a1728b6.jpg)

![](images/36938c8de37c99a277827f7b6e64bc34838d42c198adeebe68c34587963d670f.jpg)

![](images/7b1e56c42f72d102df949c2393ccaae970bd54f1758b1d7add7c09a8b631050a.jpg)

![](images/42c1b5c2fcde48b2635e31f9609e7c94fc3f862d4bcc1e50b77e36b187540ce0.jpg)
Figure 3. Calcium concentrations at four wells. The thick curves are data, the thin curves are model predictions.

- For near-ideal conditions, the model is suitable; for regular use, a more robust model is desired.
- Even though the two curves do not fit very well, they show a similar trend.

# Sensitivity Analysis

We conduct a rudimentary sensitivity analysis to examine the stability of our model. We separately vary the values of the constants $\alpha_{L},\alpha_{T},\theta ,$ and $V_{d}$ by $10\%$ and compute the corresponding changes in the values of the location and time of the pollution source (Table 4).

The model demonstrates good stability, but $\theta$ has a relatively significant influence on the result of the model. Thus, it is reasonable to consider the parameter $\theta$ as a variable and repeat our grid-search algorithm in a five-dimensional space of $\theta$ , $x_0$ , $y_0$ , $t_0$ , and $S$ . For calcium, we get the comparative results shown in Table 5.

In the expanded model, the value for $\theta$ is $7\%$ larger. We think that there is some deflection of the direction of the groundwater flow, as shown in Figure 4.

Table 4. Effects of perturbations of the parameter values.
| Parameter | Change in location (ft) | Change in time (yr) |
|---|---|---|
| $\theta$ | 70 | 0.2 |
| $\alpha_L$ | <10 | <0.1 |
| $\alpha_T$ | 10 | <0.1 |
| $V_d$ | <10 | <0.1 |
+ +Table 5. Comparison of 4- and 5-dimensional models. + +
| Dimension | $\theta$ | $x_0$ (ft) | $y_0$ (ft) | $t_0$ (yr) | $S \times 10^6$ |
|---|---|---|---|---|---|
| 4 | 0.785 | 7750 | 6040 | 93.75 | 2.1 |
| 5 | 0.84 | 7750 | 6100 | 93.60 | 2.2 |
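The one-at-a-time perturbation experiment behind Tables 4 and 5 can be organized as a simple loop: perturb one constant by $\pm 10\%$, re-run the source search, and record how far the recovered source moves. The sketch below is a deliberately simplified hypothetical stand-in (its toy forward model, well layout, and values are not the team's), but the structure of the loop is the same:

```python
import itertools, math

# Schematic one-at-a-time sensitivity check in the spirit of Table 4.
# The forward model, wells, and parameter values are hypothetical
# stand-ins for the team's advection-dispersion model and C program.

def forward(src, wells, vd):
    """Toy plume model: readings decay with distance to the drifted source."""
    sx, sy = src
    return [math.exp(-math.hypot(wx - sx - vd, wy - sy) / 50.0)
            for wx, wy in wells]

def locate_source(readings, wells, vd, grid):
    """Grid search minimizing the sum of squared residuals."""
    best, best_err = None, float("inf")
    for src in grid:
        err = sum((p - r) ** 2
                  for p, r in zip(forward(src, wells, vd), readings))
        if err < best_err:
            best, best_err = src, err
    return best

wells = [(0, 0), (100, 0), (0, 100), (100, 100), (50, 50)]
grid = list(itertools.product(range(0, 101, 5), range(0, 101, 5)))
true_src, vd0 = (40, 60), 60.0
data = forward(true_src, wells, vd0)        # synthetic "observations"

baseline = locate_source(data, wells, vd0, grid)
shifts = {}
for factor in (0.9, 1.1):                   # vary the drift parameter by ±10%
    est = locate_source(data, wells, vd0 * factor, grid)
    shifts[factor] = math.hypot(est[0] - baseline[0], est[1] - baseline[1])
```

In this toy setup a $\pm 10\%$ perturbation of the drift parameter moves the recovered source by about one grid cell, which is the kind of displacement Table 4 tabulates parameter by parameter.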
+ +![](images/74dd37c413429ca9eeb78c37d56ae396eed0d4fb8ca3f2dbae69e69ae9e222d1.jpg) +Figure 4. Suspected deflection of groundwater flow. + +# Problem Two + +# Local Assumptions + +- The storage tanks are located underground in the saturated zone. +- The direction of the groundwater flow remains the same. +- The saturated zone is semi-infinite. +- The leak process is continuous, since the primary cause of leaks in steel underground storage systems is corrosion. + +# Model Design + +To detect the pollutant rapidly and accurately, we develop a three-step method. + +1. According to the shape and size of the storage and the direction of the groundwater flow, we determine the number and location of the first group of wells. Provided that the storage is a square $S$ m on a side, the number of the first group of wells is $N = S / 20$ . That is, we drill a well every $20$ m in a line perpendicular to the direction of the groundwater, as shown in Figure 5. We monitor the data from the wells. + +![](images/62e255c775125aaa4a6a4863148a080c12e548065798f25f761053798a90d68a.jpg) +Figure 5. Locations of monitoring wells. Empty circles represent the initial monitoring wells; filled circles are the wells drilled after pollution is detected and found to affect well 3 most of all. + +2. Once there is some evidence of pollution, we determine which well is most affected by the pollutant. Near this well, we drill a series of wells (perhaps five or more) along the direction of the groundwater flow. Thus, we can construct a three-dimensional formulation to calculate the fluctuation of the pollutant concentration. Here the area occupied by the storage facility may not be very large (with side less than 1000 ft), so we cannot use (1). 
We employ the three-dimensional equation

$$
R _ {d} \frac {\partial C}{\partial t} + V _ {d} \frac {\partial C}{\partial x} = D \left(\frac {\partial^ {2} C}{\partial x ^ {2}} + \frac {\partial^ {2} C}{\partial y ^ {2}} + \frac {\partial^ {2} C}{\partial z ^ {2}}\right) + \frac {m}{n}.
$$

Because the leaking is a continuous process, we assume that the pollution source is applied as a step function (continuously) with boundary conditions

$$
C (x, y, z, 0) = 0, \quad (x, y, z) \neq (0, 0, 0),
$$

$$
m (x, y, z, t) = q C _ {0} \delta (x, y, z),
$$

$$
C (\pm \infty , y, z, t) = C (x, \pm \infty , z, t) = C (x, y, \pm \infty , t) = 0, \quad t \geq 0.
$$

For a continuous point source starting at time $t = 0$ , this equation possesses an analytical solution of the form

$$
\begin{aligned} C (x, y, z, t) = \frac {R _ {d} q C _ {0}}{8 \pi n D r} \exp \left(\frac {V _ {d} x}{2 D}\right) \times \Bigg\{ & \exp \left(- \frac {V _ {d} r}{2 D}\right) \operatorname{erfc} \left[ \frac {r - V _ {d} t}{2} \left(\frac {R _ {d}}{D t}\right) ^ {1 / 2} \right] \\ & + \exp \left(\frac {V _ {d} r}{2 D}\right) \operatorname{erfc} \left[ \frac {r + V _ {d} t}{2} \left(\frac {R _ {d}}{D t}\right) ^ {1 / 2} \right] \Bigg\}, \end{aligned}
$$

where $r = (x^{2} + y^{2} + z^{2})^{1 / 2}$ .

When $t \to \infty$ , a steady-state equation results:

$$
C (x, y, z, t) = \frac {R _ {d} q C _ {0}}{4 \pi n D r} \exp \left(- \frac {V _ {d} (r - x)}{2 D}\right). \tag {3}
$$

For convenience, we employ the symbol $C_m(x,y,z,t)$ to represent the right side of (3).

For constant $V_{d}, R_{d}, n, q,$ and $D$ , we can draw an equal-concentration surface with the concentration value $0.01C_{0}$ , as in Figure 6.

![](images/233f6227637f3b94c73486daed6f2e1086d21047c443c1dba01f98ecb1d656aa.jpg)
Figure 6. Large dose.

Let Height be the maximum height of this equal-concentration surface. We transform the Cartesian coordinates in the same way as in Problem One.
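The extent of the $0.01\,C_0$ equal-concentration surface is easy to probe numerically. The sketch below is hypothetical (all parameter values are placeholders, not estimates from the data); note that it evaluates the steady-state point-source solution with the exponent in its decaying form, $\exp(-V_d (r - x)/(2D))$, so concentration falls off away from the plume axis:

```python
import math

# Hypothetical sketch of the steady-state point-source solution of (3).
# Parameter defaults are placeholders; the exponent is taken in its
# decaying form exp(-V_d (r - x) / (2 D)).

def C_m(x, y, z, Rd=1.0, q=10.0, C0=1.0, n=0.2, D=5.0, Vd=1.5):
    r = math.sqrt(x * x + y * y + z * z)
    if r == 0.0:
        return math.inf                      # singular at the source itself
    return (Rd * q * C0 / (4.0 * math.pi * n * D * r)
            * math.exp(-Vd * (r - x) / (2.0 * D)))

# Downstream extent of the 0.01*C0 equal-concentration surface on the x-axis:
target = 0.01 * 1.0                          # 0.01 * C0
x = 1.0
while C_m(x, 0.0, 0.0) > target:
    x += 1.0
# x now holds the first axial distance at which C drops to 0.01*C0 or below
```

On the plume axis ($y = z = 0$, $x > 0$) the exponential factor is 1 and $C$ decays like $1/x$, so the marching loop terminates; off the axis the exponential makes the surface close up, which is what bounds Height.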
For a monitoring well at $(x,y)$ and aquifer thickness $b$ , we consider the concentration in the well for three situations:

- If $b \ll$ Height or $b \ll$ the size of the storage facility, we can transform (3) into a two-dimensional equation like (1).
- If $b \geq \text{Height} / 2$ , it is reasonable to consider $b = \infty$ , and the problem can be simplified. We assume that the substance in the aquifer cannot enter the unsaturated zone except at the source point, so $\partial C / \partial z|_{z=0} = 0$ .

Moreover, for every point $(x,y,z)$ under the water table, the concentration is double that given by (3) for semi-infinite space:

$$
C (x, y, z, t) = 2 C _ {m} (x, y, z, t).
$$

- Otherwise, for $(x,y,z)$ on the upper or lower surface of the aquifer (see Figure 7), we have

$$
\left. \frac {\partial C}{\partial z} \right| _ {z = 0} = \left. \frac {\partial C}{\partial z} \right| _ {z = - b} = 0.
$$

![](images/44f60603b6182691e12ddd5e1891c92d7bc92c43632267be9c3a777f4a5fca6e.jpg)
Figure 7. Pollution source on the aquifer.

Draw virtual source 1 symmetric to the pollutant source about $A'$ (the lower surface). The condition $\left. \frac{\partial C}{\partial z} \right|_{z = -b} = 0$ is then satisfied, while the condition $\left. \frac{\partial C}{\partial z} \right|_{z = 0} = 0$ is not. In the same way, we draw virtual source 2 symmetric to virtual source 1 about $A$ (the upper surface). Repeating this process, we get virtual source 3, and so on.

The concentration on the upper or lower surface can be treated as the superposition of the contributions of all the sources (including the virtual sources). That is,

$$
C _ {t} (x, y, z) = 2 \sum_ {i = 0} ^ {\infty} C _ {m} \big (x, y, z + 2 (- 1) ^ {i + 1} \left\lfloor \frac {i + 1}{2} \right\rfloor b, t \big).
$$

Actually, we need to consider only the first three sources (the real source and the two nearest virtual sources), for the following reasons:

- The distances from these three sources to $A$ (or $A'$ ) are the smallest, so they have the greatest effect on $C_t$ . The other virtual sources are very far from $A$ (or $A'$ ); generally their distance exceeds the value Height, so we neglect them.
- The pollutant discharged from the virtual sources far from the surfaces of the aquifer needs a long time to reach the aquifer.

Finally, we transform (3) into

$$
C _ {t} (x, y, z) = 2 \left[ C _ {m} (x, y, z, t) + C _ {m} (x, y, z + 2 b, t) + C _ {m} (x, y, z - 2 b, t) \right].
$$

Thus, we get the final analytical solution for $C(x,y,z,t)$ . Then we use the same computer-based method as in Problem One to calculate the approximate location and the time of the pollutant source.

3. In the last step, we draw a circle with center at the approximate source point $Q$ and diameter $25\mathrm{m}$ (or more). Inside this circle, we sample soil from the surface and analyze its chemical constituents to find the maximum concentration. Thus, we can accurately identify the location of the pollutant source.

# Numerical Integration Scheme

To calculate leakage, it is necessary to integrate the values of the dependent variable $C$ over space. Unfortunately, the integral of (3) does not possess an analytical representation and must therefore be computed numerically. We employ a three-dimensional integration scheme for this model. The total mass $M$ of leaked pollutant is computed as

$$
M = \int \int \int C (x, y, z, t) \, d x \, d y \, d z \approx \sum_ {i, j, k} C _ {i j k} \, \delta x \, \delta y \, \delta z,
$$

where $C_{ijk}$ refers to the computed concentration in "differential" element $(i,j,k)$ . We employ uniform spatial steps of $\delta x = \delta y = \delta z = 1 \, \text{m}$ .
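The Riemann-sum scheme can be sketched directly. The concentration field and integration box below are hypothetical placeholders (a stand-in steady-state point-source form with the exponent taken in its decaying form, not the team's calibrated model); halving the step size changes the estimate only slightly, as expected of a midpoint rule:

```python
import math

# Hypothetical sketch of the three-dimensional Riemann-sum mass estimate
# M ≈ Σ C_ijk δx δy δz. The concentration field and all parameter values
# are stand-ins, not the team's calibrated model.

def conc(x, y, z, Rd=1.0, q=10.0, C0=1.0, n=0.2, D=5.0, Vd=1.5):
    r = math.sqrt(x * x + y * y + z * z)
    return (Rd * q * C0 / (4.0 * math.pi * n * D * r)
            * math.exp(-Vd * (r - x) / (2.0 * D)))

def mass_in_box(delta):
    """Midpoint Riemann sum over the box [1,41] x [-10,10] x [-10,10] (m)."""
    nx, nyz = int(40 / delta), int(20 / delta)
    total = 0.0
    for i in range(nx):
        x = 1.0 + (i + 0.5) * delta
        for j in range(nyz):
            y = -10.0 + (j + 0.5) * delta
            for k in range(nyz):
                z = -10.0 + (k + 0.5) * delta
                total += conc(x, y, z) * delta ** 3   # C_ijk * δx δy δz
    return total

m_coarse = mass_in_box(2.0)   # δ = 2 m
m_fine = mass_in_box(1.0)     # δ = 1 m, the step used in the text
```

The box is placed just downstream of the source so that the $1/r$ singularity at the origin is excluded; in practice the box would be chosen to cover the whole affected region.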
# A Better Method for Mass Estimation

While processing the data by computer program, we minimize the variance to get a quasi-optimal solution. In doing so, we have already estimated the value of $m$ , so we can compute the total mass of leaked pollutant more conveniently and efficiently as

$$
M = m t.
$$

# Strengths and Weaknesses of the Model

# Strengths

- The model is quite practical, and the algorithm has low time complexity. For the given problem size, our C program for the grid-search algorithm runs in less than 2 min on a Pentium-166 computer.
- The model gives good agreement between predicted values and data. It is fast, efficient, and stable.
- When the given data are refined to simplify the computation, the accuracy does not decrease. For illustration, we list data for calcium in Table 6.

Table 6. Effect of refining to simplify computation.
| Number of data points | $x_0$ (ft) | $y_0$ (ft) | $t_0$ (years since 1900) |
|---|---|---|---|
| 60 (primitive) | 7750 | 6060 | 93.70 |
| 36 (after refining) | 7750 | 6040 | 93.75 |
# Weaknesses

- If the detected area is not large enough, there is some error. As the distance between the pollution source and the monitoring well grows larger, computation accuracy increases, measurement accuracy decreases, and response time increases.
- To decrease the complexity of the computation, we simplify the groundwater flow net, which may affect the accuracy of the results.
- Because our model does not account for statistical factors, its results do not fit the raw data exactly.

# References

Abriola, Linda M., editor. 1989. Groundwater Contamination. Wallingford, U.K.: International Association of Hydrological Sciences.

Barcelona, Michael, Allen Wehrhann, Joseph F. Keely, and Wayne A. Pettyjohn. 1990. Contamination of Ground Water: Prevention, Assessment, Restoration. Park Ridge, NJ: Noyes Data Corp.

Guswa, J.H., W.J. Lyman, A.S. Donigian, T.Y.R. Lo, and E.W. Shanahan. 1984. Groundwater Contamination and Emergency Response Guide. Park Ridge, NJ: Noyes.

Ward, C.H., W. Giger, and P.L. McCarty, editors. 1985. Ground Water Quality. New York: Wiley.

# Judge's Commentary: The Ground Pollution Papers

David L. Elliott

Visiting Senior Research Scientist

Institute for Systems Research

University of Maryland

College Park, MD 20742

delliot@usr.umd.edu

Teams tended to expend more effort on Problem One, and these comments concern that problem. The top papers handled both problems well.

The papers that I saw broke Problem One into several subproblems; assumptions beyond the problem description were needed to attack these, and the best papers made these very explicit with as much justification as possible. The subproblems included:

1. list of "pollutant" species,
2. mathematical model of pollutant transport,
3. detection of time and number of spills, and
4. location of spill sources (using 1-3).
The answers varied greatly, even among the best papers, depending on the assumptions and on the interpretation of the spreadsheet data. The winners

- showed evidence of careful search and interpretation of relevant literature;
- posed the subproblems well, and found mathematical models capable of producing usable answers;
- presented their results in clear, convincing ways; and
- avoided major errors (these seemed often to be due to poor communication among team members!).

The problem statement might well have provided, at the Web site, a description of the site (dump? storage?), a description of soil/aquifer types, and other qualitative information that a professional in this field would be given.

The spreadsheet columns are labeled according to assayed chemical species and contain their concentrations in the form of separate time series from several wells and depths. There was little agreement among the contestants as to which species were "pollutants": concentrations for most species (e.g., organochlorides) were negligible, others were non-increasing with time or likely to occur naturally (in rainfall or in soil), and some columns seemed to use more than one unit of measurement.

The Outstanding entries are good examples of how different the models could be. The entry from Zhejiang University fits the data to solutions of a simple partial differential equation. Note that the "diffusion" mentioned here is mostly of dynamic origin (percolation, although I did not see that term in any paper I read). Some contestants seem to have considered thermal (Brownian) diffusion important, but it is far too small to be observable in most fluids. The team from Earlham College neglected diffusion but assumed that different species might travel at different rates; this team used time-series graphs to good effect in selecting species to look at and to estimate times.
Other papers had trouble in finding the direction of flow, in putting together diffusion and advection, or in finding a rationale for data selection. Some teams did not find relevant scientific literature that would help in modeling.

# About the Author

David L. Elliott is Professor Emeritus of Mathematical Systems at Washington University in St. Louis, and since 1992 has been Visiting Senior Research Scientist at the Institute for Systems Research of the University of Maryland, College Park.

He took his B.A. (Pomona College, 1953) and M.A. (USC, 1959) in Mathematics, and his Ph.D. (UCLA, 1969) in Engineering. After working in control systems and oceanic acoustics at the U.S. Naval Ocean Systems Center, Prof. Elliott taught at UCLA, at Washington University, and as a visitor at Brown University and once more at UCLA. He also served as Program Director for System Theory at the National Science Foundation, 1987-1989. His research has been in nonlinear control theory and applied mathematics (including the kinetics of blood coagulation—he has hemophilia).

He is an IEEE Fellow and member of SIAM, AMS, MAA, and Sigma Xi. He was associate editor for several mathematical journals and edited Neural Systems for Control (Academic Press, 1997). His previous association with MCM was as faculty advisor in 1985 and 1986 for Outstanding MCM teams from Washington University.

# Author's Commentary: The Outstanding Ground Pollution Papers

Yves Nievergelt

Department of Mathematics, MS 32

Eastern Washington University

526 5th Street

Cheney, WA 99004-2431

yves.nievergelt@mail.ewu.edu

Late on a November night, in the brew pub of an old logging and mining town somewhere in the Wild West, I happened to be sitting near a hydrogeologist.
Besides the usual gossip typically heard in such a pub—about the taste of the local brew, the lack of fish in the river, cougars stalking deer in your back yard, and a bear trashing your garbage can in the front yard—the conversation turned to a private or public agency that monitors the geographic area from which the data come.

The data consist of a real, original, unaltered electronic spreadsheet listing measurements of pollutants in wells drilled through the aquifer in an area used by a private or public firm. The data are not only real, they are also significant, which means that someone could suffer or benefit from their analysis and interpretation.

After some discussion, the hydrogeologist granted permission to use the data in the MCM, provided no statement be made containing information that would identify the parties involved.

The two Outstanding teams used two fundamentally different approaches, but each demonstrated an effective understanding of the situation and a strong command of mathematical concepts. Both teams' understanding of the situation enabled them

- not to be overwhelmed by the size of the data file;
- not to be stopped by some of the data file's blank fields or repeated fields;
- not to be hampered by names of unfamiliar chemicals;
- to distinguish naturally occurring from potentially polluting chemicals;
- to sort out chemicals with concentrations that seemed nearly constant, nearly random, or potentially revealing of a trend; and
- to locate and use references from the literature and the World Wide Web.

Beyond such an understanding of the problem, the two teams adopted very different mathematical methods of solution.
+ +The team from Zhejiang University used the method of least squares (minimum variance) to fit the parameters of a partial differential equation modeling the physics of advection, dispersion, and retardation: dispersion coefficients, ground water velocity, ground porosity, and the time and location of potential sources of pollution. Their result suggests four spills, near the points with coordinates (7077, 6538) and (6931, 5823) in 1991, and (7750, 6040) and (6423, 7461) near the end of 1993. + +The team from Earlham College used Delaunay triangulations and Voronoi polytopes to interpolate concentration gradients and ultimately to detect sudden increases in pollutant concentrations and trace them back to a putative source. Their result suggests two spills, first about 1992 and again about 1996, near the point with coordinates (8000, 4500). + +Though the two results differ from each other, they suggest an increase in pollution in the area corresponding to the upper right hand corner of the map $[0,10000] \times [0,7000]$ . The two teams' papers also contain presentations of their assumptions, models, methods, and results that are quite impressive for a weekend's work. + +# About the Author + +Yves Nievergelt graduated in mathematics from the École Polytechnique Fédérale de Lausanne (Switzerland) in 1976, with concentrations in functional and numerical analysis of PDEs. He obtained a Ph.D. from the University of Washington in 1984, with a dissertation in several complex variables under the guidance of James R. King. He now teaches complex and numerical analysis at Eastern Washington University. + +Prof. Nievergelt is an associate editor of The UMAP Journal. He is the author of several UMAP Modules, a bibliography of case studies of applications of lower-division mathematics (The UMAP Journal 6 (2) (1985): 37-56), and Mathematics in Business Administration (Irwin, 1989). His new book is Wavelets Made Easy (Birkhäuser, 1999). 
\ No newline at end of file diff --git a/MCM/1995-2008/2000MCM&ICM/2000MCM&ICM.md b/MCM/1995-2008/2000MCM&ICM/2000MCM&ICM.md new file mode 100644 index 0000000000000000000000000000000000000000..9ef1122b34ab8d00b4ef1bcb4ff677dfc6ba991e --- /dev/null +++ b/MCM/1995-2008/2000MCM&ICM/2000MCM&ICM.md @@ -0,0 +1,5045 @@ +# The U + +# M + +Publisher + +COMAP, Inc. + +Executive Publisher + +Solomon A. Garfunkel + +Editor + +Paul J. Campbell + +Campus Box 194 + +Beloit College + +700 College St. + +Beloit, WI 53511-5595 + +campbell@beloit.edu + +On Jargon Editor + +Yves Nievergelt + +Department of Mathematics + +Eastern Washington University + +Cheney, WA 99004 + +ynievergelt@ewu.edu + +Reviews Editor + +James M. Cargal + +P.O.Box 210667 + +Montgomery, AL 36121-0667 + +JMCargal@sprintmail.com + +Development Director + +Laurie W. Aragon + +Production Manager + +George W. Ward + +Project Manager + +Roland Cheyne + +Copy Editors + +Seth A. Maislin + +Pauline Wright + +Distribution Manager + +Kevin Darcy + +Production Secretary + +Gail Wessell + +Graphic Designer + +Daiva Kiliulis + +# AP Journal + +Vol. 21, No. 3 + +# Associate Editors + +Don Adolphson + +Ron Barnes + +Arthur Benjamin + +James M. Cargal + +Murray K. Clayton + +Courtney S. Coleman + +Linda L. Deneen + +James P. Fink + +Solomon A. Garfunkel + +William B. Gearhart + +William C. Giauque + +Richard Haberman + +Charles E. Lienert + +Walter Meyer + +Yves Nievergelt + +John S. Robertson + +Garry H. Rodrigue + +Ned W. Schillow + +Philip D. Straffin + +J.T. Sutcliffe + +Donna M. Szott + +Gerald D. Taylor + +Maynard Thompson + +Ken Travers + +Robert E.D. ("Gene") Woolsey + +Brigham Young University + +University of Houston-Downtown + +Harvey Mudd College + +Troy State University Montgomery + +University of Wisconsin—Madison + +Harvey Mudd College + +University of Minnesota, Duluth + +Gettysburg College + +COMAP, Inc. 
California State University, Fullerton

Brigham Young University

Southern Methodist University

Metropolitan State College

Adelphi University

Eastern Washington University

Georgia College and State University

Lawrence Livermore Laboratory

Lehigh Carbon Community College

Beloit College

St. Mark's School, Dallas

Comm. College of Allegheny County

Colorado State University

Indiana University

University of Illinois

Colorado School of Mines

# MEMBERSHIP PLUS FOR INDIVIDUAL SUBSCRIBERS

Individuals subscribe to The UMAP Journal through COMAP's Membership Plus. This subscription includes print copies of quarterly issues of The UMAP Journal, our annual collection UMAP Modules: Tools for Teaching, our organizational newsletter Consortium, on-line membership that allows members to search our on-line catalog, download COMAP print materials, and reproduce for use in their classes, and a $10\%$ discount on all COMAP materials.

(Domestic) #2020 $69

(Outside U.S.) #2021 $79

# INSTITUTIONAL PLUS MEMBERSHIP SUBSCRIBERS

Institutions can subscribe to the Journal through either Institutional Plus Membership, Regular Institutional Membership, or a Library Subscription. Institutional Plus Members receive two print copies of each of the quarterly issues of The UMAP Journal, our annual collection UMAP Modules: Tools for Teaching, our organizational newsletter Consortium, on-line membership that allows members to search our on-line catalog, download COMAP print materials, and reproduce for use in any class taught in the institution, and a $10\%$ discount on all COMAP materials.

(Domestic) #2070 $395

(Outside U.S.) #2071 $415

# INSTITUTIONAL MEMBERSHIP SUBSCRIBERS

Regular Institutional members receive only print copies of The UMAP Journal, our annual collection UMAP Modules: Tools for Teaching, our organizational newsletter Consortium, and a $10\%$ discount on all COMAP materials.
+ +(Domestic) #2040 $165 + +(Outside U.S.) #2041 $185 + +# LIBRARY SUBSCRIPTIONS + +The Library Subscription includes quarterly issues of The UMAP Journal and our annual collection UMAP Modules: Tools for Teaching and our organizational newsletter Consortium. + +(Domestic) #2030 $140 + +(Outside U.S.) #2031 $160 + +To order, send a check or money order to COMAP, or call toll-free 1-800-77-COMAP (1-800-772-6627). + +The UMAP Journal is published quarterly by the Consortium for Mathematics and Its Applications (COMAP), Inc., Suite 210, 57 Bedford Street, Lexington, MA, 02420, in cooperation with the American Mathematical Association of Two-Year Colleges (AMATYC), the Mathematical Association of America (MAA), the National Council of Teachers of Mathematics (NCTM), the American Statistical Association (ASA), the Society for Industrial and Applied Mathematics (SIAM), and The Institute for Operations Research and the Management Sciences (INFORMS). The Journal acquaints readers with a wide variety of professional applications of the mathematical sciences and provides a forum for the discussion of new directions in mathematical education (ISSN 0197-3622). + +Second-class postage paid at Boston, MA + +and at additional mailing offices. + +Send address changes to: + +The UMAP Journal + +COMAP, Inc. + +57 Bedford Street, Suite 210, Lexington, MA 02420 + +Copyright 2000 by COMAP, Inc. All rights reserved. + +# Table of Contents + +# Publisher's Editorial + +COMAP on the Web + +Solomon A. Garfunkel 211 + +In Memoriam: Ross L. Finney + +Solomon A. Garfunkel 212 + +# Modeling Forum + +Results of the 2000 Mathematical Contest in Modeling and + +Interdisciplinary Contest in Modeling + +Frank Giordano, Chris Arney, and John H. 
Grubbs 213

# The Air Traffic Control Problem

Air Traffic Control
Samuel Westmoreland Malone, Jeffrey Abraham Mermin, and Daniel Bertrand Neill 241

The Safe Distance Between Airplanes and the Complexity of an Airspace Sector
Finale Doshi, Rebecca Lessem, and David Mooney 257

The Iron Laws of Air Traffic Control
Kevin Arnett, Jonathan S. Gibbs, and John J. Horton 269

You Make the Call: Feasibility of Computerized Aircraft Control
Richard D. Younger, Martin B. Linck, and William P. Woesner 285

Judge's Commentary: The Outstanding Air Traffic Control Papers
Patrick J. Driscoll 301

Practitioner's Commentary: The Outstanding Air Traffic Control Papers
Jack Clemons 305

# The Channel Assignment Problem

Channel Assignment Model: The Span Without a Face
Jeffrey Mintz, Aaron Newcomer, and James C. Price

"We're Sorry, You're Outside the Coverage Area"
Robert E. Broadhurst, William J. Shanahan, and Michael D. Steffen 327

Utilize the Limited Frequency Resources Efficiently
Chu Rui, Xiu Baoxin, and Zong Ruidi 343

Grovin' with the Big Band(width)
Daniel J. Durand, Jacob M. Kline, and Kevin M. Woods 357

Radio Channel Assignments
Justin Goodwin, Dan Johnston, and Adam Marcus 369

Author/Judge's Commentary: The Outstanding Channel Assignment Papers
Jerrold R. Griggs 379

Space Aliens Land, Threaten Global Destruction
Christopher R.H. Hanusa, Anand Patil, and Otto Cortez 387

# Interdisciplinary Contest in Modeling (ICM)

# The Elephant Population Problem

Elephant Population: A Linear Model
Nathan Cappallo, Daniel Osborn, and Timothy Prescott 389

A Computational Solution for Elephant Overpopulation
Jesse Crossen, Aaron Hertz, and Danny Morano 403

EigenElephants: When Is Enough, Enough?
David Marks, Jim Sukha, and Anand Thakker 417

Judge's Commentary: The Outstanding Elephant Population Papers
Gary Krahn
431 + +About the Problem Author + +Chris Arney and Gary Krahn. 435 + +# Publisher's Editorial COMAP on the Web + +Solomon A. Garfunkel + +Executive Director + +COMAP, Inc. + +57 Bedford St., Suite 210 + +Lexington, MA 02420 + +s.garfunkel@mail.comap.com + +As you may be aware, COMAP has over the past year updated our Web site. Not only are there clearer and more detailed descriptions of our products and projects, but we have initiated a Web membership and made our major publications available online. In addition, this year we will move (almost) exclusively to Web registration and problem delivery for the Mathematical Contest in Modeling (MCM) and the High School Contest in Modeling (HiMCM). All of our new supplementary materials are being designed with the Web clearly in mind. We understand that the Internet is increasingly the delivery method of choice for educational materials. + +Moreover, we are planning a number of new projects that use the Web as a vehicle for the delivery of distance learning. It has been a while since we created television courses as a way of promoting lifelong learning. As we have developed new curricula at the secondary level, we have come to understand the importance of teacher preparation and enhancement. New teachers need to be prepared to face the challenges of the content and pedagogy of Standards-based curricula, as well as new developments in technology. Web-based courses have become increasingly important in the continuing education of teachers and students. We see this as an important area for COMAP's future. + +This year has seen the publication of our fourth course in the Mathematics: Modeling Our World series. We are currently working on college level texts in Precalculus and in College Algebra, all with W.H. Freeman as publisher. And we look forward to the publication next year, with Brooks-Cole, of a new text in mathematical methods for secondary school teachers, embracing a modeling-and applications-based approach. 
We expect and intend to be in the business of textbook development for many, many years to come. But we are aware that a large part of that future will be in electronic publishing—and we are investing in that future.

# In Memoriam: Ross L. Finney

We write here to mourn the passing of Ross Finney this August.

I am sure that most of the readers of The UMAP Journal know Ross from his work on the last several editions of the Thomas calculus texts. As important as that work has been, Ross Finney was also a pioneer in mathematics education in the pre-COMAP era. In fact, Ross was a co-principal investigator on the original UMAP grant from NSF that led directly to the founding of COMAP.

We remember Ross as the founding editor of The UMAP Journal; he continued that effort through the first five years of its existence. It is fair to say that the Journal that you hold in your hands would likely never have existed without his efforts. We here at COMAP, and readers of the Journal throughout the world, owe him a large debt of gratitude. He will be sorely missed.

# Obituaries

Ross Lee Finney III; math teacher wrote calculus textbooks. 2000. Los Angeles Times (18 August 2000): B-6. http://www.latimes.com/print/metro/20000818/t000077771.html.

Saxon, Wolfgang. 2000. Ross Lee Finney III, 67, author of widely used math textbooks. New York Times (16 August 2000). http://www10.nytimes.com/yr/mo/day/news/national/obit-r-finney.html.

# About the Author

Sol Garfunkel received his Ph.D. in mathematical logic from the University of Wisconsin in 1967. He was at Cornell University and at the University of Connecticut at Storrs for eleven years and has dedicated the last 20 years to research and development efforts in mathematics education. He has been the Executive Director of COMAP since its inception in 1980.
He has directed a wide variety of projects, including UMAP (Undergraduate Mathematics and Its Applications Project), which led to the founding of this Journal, and HiMAP (High School Mathematics and Its Applications Project), both funded by the NSF. For Annenberg/CPB, he directed three telecourse projects: For All Practical Purposes (in which he appeared as the on-camera host), Against All Odds: Inside Statistics, and In Simplest Terms: College Algebra. He is currently co-director of the Applications Reform in Secondary Education (ARISE) project, a comprehensive curriculum development project for secondary school mathematics.

# Modeling Forum

# Results of the 2000 Mathematical Contest in Modeling and Interdisciplinary Contest in Modeling

Frank Giordano, MCM Director

COMAP, Inc.

57 Bedford St., Suite 210

Lexington, MA 02420

f.giordano@mail.comap.com

Chris Arney, ICM Co-Director

U.S. Military Academy

West Point, NY

ad6819@exmail.usma.army.mil

John H. Grubbs, ICM Co-Director

Tulane University

New Orleans, LA

# Introduction

COMAP is pleased to announce the results of the 16th annual Mathematical Contest in Modeling (MCM) and the 2nd annual Interdisciplinary Contest in Modeling (ICM). This year 495 teams representing 231 institutions from 9 countries spent the first weekend in February working on applied mathematics and interdisciplinary problems.

The 2000 MCM/ICM began at 12:01 A.M. on Friday, Feb. 4 and officially ended at 5:00 P.M. on Monday, Feb. 7, 2000 (local time). Teams of two or three undergraduates were to research and submit an optimal solution for one of three open-ended modeling problems. After a weekend of hard work, typed solution papers were mailed to COMAP. Twelve of the top papers appear in this issue of The UMAP Journal.

This year's Problem A was dedicated to the memory of Dr. Robert Machol, former chief scientist of the Federal Aviation Agency. Dr.
Machol posed the problem when the FAA was considering adding software to the air traffic control system that would alert controllers to potential problems and thus improve safety and reduce workload. In addition to solving the problem, participants were asked to write a summary that could be presented to the FAA Administrator, Ms. Jane Garvey. + +Problem B this year sought to model the assignment of radio channels to a symmetric network of transmitter locations so as to avoid interference. Many groups came to the conclusion that the pure mathematical solution needed some practical applications and included those as well. + +This year's ICM Problem C offered information regarding the need to keep the elephant population in a national park in South Africa down to 11,000 while avoiding having to destroy any of the animals. A contraceptive dart has been developed that prevents conception for two years. Participants were to investigate a strategy for how to use the dart successfully. Six specific questions were posed and data were offered about emigration patterns, gender ratios, elephant conception patterns, and new calf survival rates. + +Results and winning papers from the first fifteen contests were published in special issues of Mathematical Modeling (1985-1987) and The UMAP Journal (1985-1999). The 1994 volume of Tools for Teaching, commemorating the tenth anniversary of the contest, contains the 20 problems used in the first ten years of the contest and a winning paper for each. Limited quantities of that volume and of the special MCM issues of the Journal for the last few years are available from COMAP. + +# Problem A: Air Traffic Control + +Dedicated to the memory of Dr. 
Robert Machol, former chief scientist of the Federal Aviation Agency

To improve safety and reduce air traffic controller workload, the Federal Aviation Agency (FAA) is considering adding software to the air traffic control system that would automatically detect potential aircraft flight path conflicts and alert the controller. To that end, an analyst at the FAA has posed the following problems.

- Requirement A: Given two airplanes flying in space, when should the air traffic controller consider the objects to be too close and to require intervention?
- Requirement B: An airspace sector is the section of three-dimensional airspace that one air traffic controller controls. Given any airspace sector, how do we measure how complex it is from an air traffic workload perspective? To what extent is complexity determined by the number of aircraft simultaneously passing through that sector
    - at any one instant?
    - during any given interval of time?
    - during a particular time of day?

How does the number of potential conflicts arising during those periods affect complexity? Does the presence of additional software tools to automatically predict conflicts and alert the controller reduce or add to this complexity?

In addition to the guidelines for your report, write a summary (no more than two pages) that the FAA analyst can present to Jane Garvey, the FAA Administrator, to defend your conclusions.

# Problem B: Radio Channel Assignments

We seek to model the assignment of radio channels to a symmetric network of transmitter locations over a large planar area, so as to avoid interference. One basic approach is to partition the region into regular hexagons in a grid (honeycomb-style), as shown in Figure 1, where a transmitter is located at the center of each hexagon.

![](images/f8149e15a99109f74cded5b8b38b64725f97cd9d4e499372bd04c72aa2a63bc7.jpg)
Figure 1. Honeycomb grid of hexagons.
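The two-level interference constraints of Requirement A below (channels of transmitters within distance $2s$ must differ by at least 2; channels within $4s$ must differ) invite quick numerical experiments on a lattice like that of Figure 1. The following sketch is an illustration only, not drawn from any contest paper; the patch radius, the pointy-top coordinate convention, and the greedy scan order are arbitrary choices. It assigns each transmitter the smallest feasible channel and reports the resulting largest channel, which upper-bounds the span for that patch.

```python
import math
from itertools import count

S = 1.0  # hexagon side length s; all distances below scale linearly with s

def hex_centers(radius, s=S):
    """Centers of a hexagonal patch of cells with side s, via axial coordinates."""
    pts = []
    for q in range(-radius, radius + 1):
        for r in range(-radius, radius + 1):
            if abs(q + r) <= radius:
                # pointy-top layout: adjacent centers are s*sqrt(3) apart (< 2s)
                pts.append((s * math.sqrt(3) * (q + r / 2), s * 1.5 * r))
    return pts

def greedy_channels(centers, s=S):
    """Give each transmitter the smallest channel consistent with the
    two-level constraints: difference >= 2 within 2s, difference >= 1 within 4s."""
    assigned = {}
    for i, p in enumerate(centers):
        for ch in count(1):
            feasible = True
            for j, other in assigned.items():
                d = math.dist(p, centers[j])
                if (d <= 2 * s and abs(ch - other) < 2) or (d <= 4 * s and ch == other):
                    feasible = False
                    break
            if feasible:
                assigned[i] = ch
                break
    return assigned

centers = hex_centers(radius=3)   # 37 transmitters
channels = greedy_channels(centers)
span = max(channels.values())
print(f"{len(centers)} transmitters, greedy upper bound on span = {span}")
```

A greedy order is not guaranteed to attain the minimum span; certifying optimality for a patch would require an exhaustive or integer-programming search over all feasible assignments.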
An interval of the frequency spectrum is to be allotted for transmitter frequencies. The interval will be divided into regularly spaced channels, which we represent by integers 1, 2, 3, .... Each transmitter will be assigned one positive integer channel. The same channel can be used at many locations, provided that interference from nearby transmitters is avoided.

Our goal is to minimize the width of the interval in the frequency spectrum that is needed to assign channels subject to some constraints. This is achieved with the concept of a span. The span is the minimum, over all assignments satisfying the constraints, of the largest channel used at any location. It is not required that every channel smaller than the span be used in an assignment that attains the span.

Let $s$ be the length of a side of one of the hexagons. We concentrate on the case that there are two levels of interference.

- Requirement A: There are several constraints on frequency assignments:

    - No two transmitters within distance $4s$ of each other can be given the same channel.
    - Due to spectral spreading, transmitters within distance $2s$ of each other must not be given the same or adjacent channels: Their channels must differ by at least 2.

  Under these constraints, what can we say about the span in Figure 1?

- Requirement B: Repeat Requirement A, assuming the grid in the example spreads arbitrarily far in all directions.
- Requirement C: Repeat Requirements A and B, except assume now more generally that channels for transmitters within distance $2s$ differ by at least some given integer $k$, while those at distance at most $4s$ must still differ by at least one. What can we say about the span and about efficient strategies for designing assignments, as a function of $k$?
- Requirement D: Consider generalizations of the problem, such as several levels of interference or irregular transmitter placements. What other factors may be important to consider?
+- Requirement E: Write an article (no more than 2 pages) for the local newspaper explaining your findings. + +# ICM Problem: + +# Elephants: When is Enough, Enough? + +"Ultimately, if a habitat is undesirably changed by elephants, then their removal should be considered—even by culling." + +National Geographic (Earth Almanac) (December 1999) + +A large national park in South Africa contains approximately 11,000 elephants. Management policy requires a healthy environment that can maintain a stable herd of 11,000 elephants. Each year, park rangers count the elephant population. During the past 20 years whole herds have been removed to keep the population as close to 11,000 as possible. This process involved shooting (for the most part) and occasionally relocating approximately 600 to 800 elephants per year. + +Recently, there has been a public outcry against the shooting of these elephants. In addition, it is no longer feasible to relocate even a small population of elephants each year. A contraceptive dart, however, has been developed that can prevent a mature elephant cow from conceiving for a period of two years. + +Here is some information about the elephants in the park: + +- There is very little emigration or immigration of elephants. +- The gender ratio is very close to 1:1 and control measures have endeavored to maintain parity. +- The gender ratio of newborn calves is also about 1:1. Twins are born about $1.35\%$ of the time. +- Cows first conceive between the ages of 10 and 12 and produce, on average, a calf every 3.5 years until they reach an age of about 60. Gestation is approximately 22 months. +- The contraceptive dart causes an elephant cow to come into oestrus every month (but not conceiving). Elephants usually have courtship only once in 3.5 years, so the monthly cycle can cause additional stress. +- A cow can be darted every year without additional detrimental effects. 
A mature elephant cow will not be able to conceive for 2 years after the last darting.
- Between $70\%$ and $80\%$ of newborn calves survive to age 1 year. Thereafter, the survival rate is uniform across all ages and is very high (over $95\%$), until about age 60; it is a good assumption that elephants die before reaching age 70.
- There is no hunting and negligible poaching in the park.

The park management has a rough data file of the approximate ages and gender of the elephants that they have transported out of the region during the past two years. These data are available on the Web: www.comap.com/icm/icm2000data.xls. Unfortunately, no data are available for the elephants that were shot or that remain in the park.

Your overall task is to develop and use models to investigate how the contraceptive dart might be used for population control. Specifically:

- Task 1: Develop and use a model to speculate about the likely survival rate for elephants aged 2 to 60. Also speculate about the current age structure of the elephant population.
- Task 2: Estimate how many cows would need to be darted each year to keep the population fixed at approximately 11,000 elephants. Show how the uncertainty in the data at your disposal affects your estimate. Comment on any changes in the age structure of the population and how this might affect tourists. (You may want to look ahead about 30–60 years.)
- Task 3: If it were feasible to relocate between 50 and 300 elephants per year, how would this reduce the number of elephants to be darted? Comment on the trade-off between darting and relocation.
- Task 4: Some opponents of darting argue that if there were a sudden loss of a large number of elephants (due to disease or uncontrolled poaching), even if darting stopped immediately, the ability of the population to grow again would be seriously impeded. Investigate and respond to this concern.
- Task 5: The management in the park is skeptical about modeling.
In particular, they argue that a lack of complete data makes a mockery of any attempt to use models to guide their decisions. In addition to your technical report, include a carefully crafted report (3-page maximum) written explicitly for the park management that responds to their concerns and provides advice. Also, suggest ways to increase the park managers' confidence in your model and in your conclusions. +- Task 6: If your model works, other elephant parks in Africa would be interested in using it. Prepare a darting plan for parks of various sizes (300–25,000 elephants), with slightly different survival rates and transportation possibilities. + +# The Results + +The solution papers were coded at COMAP headquarters so that names and affiliations of the authors would be unknown to the judges. Each paper was read preliminarily by two "triage" judges at Southern Connecticut State University (Problem A), Carroll College (Montana) (Problem B), or University of New Hampshire (Problem C). At the triage stage, the summary and overall organization were important. If the judges' scores diverged for a paper, the judges conferred; if they still did not agree, a third judge evaluated the paper. + +Final judging took place at Harvey Mudd College, Claremont, California. The judges classified the papers as follows: + +The twelve papers that the judges designated as Outstanding appear in this special issue of The UMAP Journal, together with commentaries. We list + +
| Problem | Outstanding | Meritorious | Honorable Mention | Successful Participation | Total |
|---|---|---|---|---|---|
| Air Traffic Control | 4 | 21 | 45 | 84 | 154 |
| Channel Assignment | 5 | 43 | 72 | 151 | 271 |
| Elephant Population | 3 | 12 | 18 | 37 | 70 |
| Total | 12 | 76 | 135 | 272 | 495 |
+ +those teams and the Meritorious teams (and advisors) below; the list of all participating schools, advisors, and results is in the Appendix. + +# Outstanding Teams + +# Institution and Advisor + +# Team Members + +# Air Traffic Control Papers + +"Air Traffic Control" + +Duke University + +Durham, NC + +David P. Kraines + +Samuel Westmoreland Malone + +Jeffrey Abraham Mermin + +Daniel Bertrand Neill + +"The Safe Distance Between Airplanes and the Complexity of an Airspace" + +Governor's School + +Richmond, VA + +Crista Hamilton + +Finale Doshi + +Rebecca Lessem + +David Mooney + +"The Iron Laws of Air Traffic Control" + +U.S. Military Academy + +West Point, NY + +David Bailey + +Kevin Arnett + +Jonathan S. Gibbs + +John J. Horton + +"You Make the Call: Feasibility of Computerized Aircraft Control" + +University of Colorado + +Boulder, CO + +Anne M. Dougherty + +Richard D. Younger + +Martin B. Linck + +William P. Woesner + +# Channel Assignment Papers + +"A Channel Assignment Model: + +The Span Without a Face" + +California Polytechnic State University + +San Luis Obispo, CA + +Thomas O'Neil + +Jeffrey Mintz + +Aaron Newcomer + +James Price + +"We're Sorry, You're Outside the Coverage Area" + +Lewis and Clark College + +Portland, OR + +Robert W. Owens + +Robert E. Broadhurst + +William J. Shanahan + +Michael D. Steffen + +"Utilize the Limited Frequency Resources Efficiently" + +National University of Defence Technology + +Changsha, Hunan, China + +Wu Meng Da + +Chu Rui + +Xiu Baoxin + +Zong Ruidi + +"Groovin' with the Big Band(width)" + +Wake Forest University + +Winston-Salem, NC + +Edward Allen + +Daniel J. Durand + +Jacob M. Kline + +Kevin M. Woods + +"Radio Channel Assignments" + +Washington University + +St. 
Louis, MO + +Hiro Mukai + +Justin Goodwin + +Dan Johnston + +Adam Marcus + +# Elephant Population Papers + +"Elephant Population: A Linear Model" + +Harvey Mudd College + +Claremont, CA + +Michael Moody + +Nathan Cappallo + +Daniel Osborn + +Timothy Prescott + +"A Computational Solution for Elephant + +Overpopulation" + +North Carolina School of Science and Mathematics + +Durham, NC + +Dot Doyle and Dan Teague + +Jesse Crossen + +Aaron Hertz + +Danny Morano + +"EigenElephants: When Is Enough, Enough?" + +North Carolina School of Science and Mathematics + +Durham, NC + +Dot Doyle and Dan Teague + +David Marks + +Jim Sukha + +Anand Thakker + +# Meritorious Teams + +Air Traffic Control Papers (21 teams) + +Chongqing University, Chongqing, China (Chen Yihua) + +Drake University, Des Moines, IA (Alexander F. Kleiner) + +East China Univ. of Science & Technology, Shanghai, China (Lu Yuanhong) + +First Middle School of Jiading, Shanghai, China (Fang Yunping) + +Harvey Mudd College, Claremont, CA (Zachary Dodds) + +Lafayette College, Easton, PA (Thomas Hill) + +Peking University, Beijing, China (Deng Minghua) + +Rose-Hulman Institute of Technology, Terre Haute, IN (Frank Young) + +Science School of Xi'an Jiaotong University, Xi'an, Shaanxi, China (He Xiaoliang) + +Simpson College, Indianola, IA (M.E. Waggoner) + +Stetson University, Deland, FL (Lisa O. Coulter) + +Trinity University, San Antonio, TX (Tarynn Witten) + +University College Cork, Cork, Ireland (Michael Quinlan) + +University of Alaska Fairbanks, Fairbanks, AK (Chris Hartman) + +University of Cincinnati, Cincinnati, OH (Charles Groetsch) + +Univ. 
of Colorado at Colorado Springs, Colorado Springs, CO (Jon Epperson) + +University of Saskatchewan, Saskatoon, SK, Canada (Raj Srinivasan) + +University of Science & Technology of China, Hefei, Anhui, China (Sun Liang) + +Worcester Polytechnic Institute, Worcester, MA (Bogdan Vernescu) + +Youngstown State University, Youngstown, OH (Steve Hanzely) + +Youngstown State University, Youngstown, OH (Thomas Smotter) + +Zhejiang University, Hangzhou, Zhejiang, China (Yang Qifan) + +Zhejiang University, Hangzhou, Zhejiang, China (He Yong) + +Channel Assignment Papers (43 teams) + +Anhui University, Hefei, Anhui, China (Chen Junsheng) + +Asbury College, Wilmore, KY (Kenneth P. Rietz) + +Beijing University of Post & Telecom, Beijing, Beijing, China (He Zuguo) + +Beijing University of Post & Telecomm, Beijing, Beijing, China + +(Sun Hongxiang) + +Calvin College, Grand Rapids, MI (Dorothea Pronk) + +China University of Mining & Technology, Xuzhou, Jiangsu, China (Wu Zongxiang) + +East China Univ. of Science & Technology, Shanghai, China (Shi Jinsong) + +East China Univ. of Science & Technology, Shanghai, China (Xiwen Lu) + +Fudan University, Shanghai, China (Cai Zhijie) + +Gettysburg College, Gettysburg, PA (James P. Fink) + +Grinnell College, Grinnell, IA (Marc Chamberland) + +Harbin Institute of Technology, Harbin, Heilongjiang, China (Wang Xuefeng) + +Harvey Mudd College, Claremont, CA (Michael Moody) + +Institution of Information Science & Engineering, Shenyang, Liaoning, China (Xiao Wendong) + +Luther College, Decorah, IA (Reginald D. Laursen) + +MIT, Cambridge, MA (Michael Brenner) + +Mt. Mercy College, Cedar Rapids, IA (Kent R. Knopp) + +National U. 
of Defence Technology, Chang Sha, Hunan, China (Cheng Lizhi)
Northwestern Polytechnical University, Xi'an, Shaanxi, China (Liu Xiaodong)
Northwestern Polytechnical University, Xi'an, Shaanxi, China (Peng Guohua)
Pacific Lutheran University, Tacoma, WA (Rachid Benkhalti)
Paivola College, Tarttila, Finland (Esa Lappi)
Peking University, Beijing, China (Lei Gongyan)
Rose-Hulman Institute of Technology, Terre Haute, IN (David Rader)
Shanghai Foreign Languages School, Shanghai, China (Wan Baihe)
Shanghai Jiao Tong University, Shanghai, China (Zhou Gang)
South China Univ. of Technology, Guangzhou, Guangdong, China (Fu Hongzhuo)
Southeast University, Nanjing, China (Huang Jun) (two teams)
Trinity University, San Antonio, TX (Allen Holder)
Tsinghua University, Beijing, China (Hu Zhiming)
U.S. Military Academy, West Point, NY (Greg Parnell)
University of Alaska Fairbanks, Fairbanks, AK (Chris Hartman)
Univ. of Michigan—Dearborn, Dearborn, MI (David James)
University of New South Wales, Sydney, Australia (James Franklin)
University of Richmond, Richmond, VA (Kathy W. Hoke)
University of Toronto, Toronto, Ontario, Canada (Nicholas A. Derzko) (two teams)
Wuhan Univ. of Hydraulic & Engineering, Wuhan, Hubei, China (Peng Zuzeng)
Yale University, New Haven, CT (Steven Orszag)
Zhejiang University, Hangzhou, Zhejiang, China (Yang Qifan)
Zhejiang University, Hangzhou, Zhejiang, China (He Yong)

# Elephant Population Papers (12 teams)

Bloomsburg University, Bloomsburg, PA (Kevin Ferland & Scott Inch)
California Academy of Math & Science, Carson, CA (Brian R. Lawler)
China University of Mining & Tech., Xuzhou, Jiangsu, China (Zhou Shengwu)
MIT, Cambridge, MA (Michael P. Brenner & Lakshminarayanan Mahadevan)
Northwestern Polytechnical University, Xi'an, Shaanxi, China
(Xu Wei & Wang Mingyu)
Peking University, Beijing, Beijing, China (Shao Min & Zhang Tao)
Southeast University, Nanjing, China (Chen Enshui)
U.S.
Military Academy, West Point, NY (Michael Jaye & Greg Fleming)
Univ. of Science & Tech. of China, Hefei, Anhui, China (Wan Qian)
Youngstown State University, Youngstown, OH (Scott Martin)
Zhejiang University, Hangzhou, Zhejiang, China
(Cong Zhang, Chu Jaiowu, & He Yong)
Zhejiang University, Hangzhou, Zhejiang, China
(Cong Zhang, Chu Jaiowu, & Yang Qifan)

# Awards and Contributions

Each participating MCM advisor and team member received a certificate signed by the Contest Director and the appropriate Head Judge.

INFORMS, the Institute for Operations Research and the Management Sciences, gave a cash award and a three-year membership to each member of the teams from the University of Colorado (Air Traffic Control Problem), Washington University (Channel Assignment Problem), and North Carolina School of Science and Mathematics (the team of Jesse Crossen, Aaron Hertz, and Danny Morano) (Elephant Population Problem). Moreover, INFORMS gave free one-year memberships to all members of Meritorious and Honorable Mention teams.

The Society for Industrial and Applied Mathematics (SIAM) designated as SIAM Winners the teams from the U.S. Military Academy, West Point, NY (Air Traffic Control Problem) and Wake Forest University (Channel Assignment Problem). Each of the team members was awarded a $300 cash prize. Their schools were given framed certificates hand-lettered in gold leaf. Both teams presented their results at a special Minisymposium of the SIAM Annual Meeting in Puerto Rico in July.

The Mathematical Association of America (MAA) designated as MAA Winners the teams from Duke University (Air Traffic Control Problem) and California Polytechnic State University (Channel Assignment Problem). The team from California Polytechnic State University presented their solution at a special session of the MAA Mathfest in Los Angeles in August. Each team member was presented a certificate by MAA President Thomas Banchoff.

# Judging

# MCM

Director

Frank R.
Giordano, COMAP, Lexington, MA

Associate Directors

Robert L. Borrelli, Mathematics Dept., Harvey Mudd College, Claremont, CA

William Fox, Chair, Dept. of Mathematics, Francis Marion University, Florence, SC

# Air Traffic Control Problem

Head Judge

Marvin Keener, Executive Vice President, Oklahoma State University, Stillwater, OK

Associate Judges

Ron Barnes, University of Houston—Downtown, Houston, TX (MAA)

Patrick Driscoll, Dept. of Mathematical Sciences, U.S. Military Academy, West Point, NY (INFORMS)

David L. Elliott, Institute for Systems Research, University of Maryland, College Park, MD (SIAM)
Gordon Erlebacher, Dept. of Computer Science and Information Technology, Florida State University, Tallahassee, FL
Richard Haberman, Mathematics Dept., Southern Methodist University, Dallas, TX (SIAM)
Mark Levinson, Edmonds, WA (SIAM)
Theresa M. Sandifer, Southern Connecticut State University, New Haven, CT (Triage)

Triage Judges

Head Triage Judge

Theresa M. Sandifer, Southern Connecticut State University, New Haven, CT

Associate Triage Judges

Ross Gingrich, Southern Connecticut State University

Cynthia B. Gubitose, Western Connecticut State University, Danbury, CT

Ronald E. Kutz, Western Connecticut State University, Danbury, CT

C. Edward Sandifer, Western Connecticut State University, Danbury, CT

Jim Wohlever, Western Connecticut State University, Danbury, CT

# Channel Assignment Problem

Head Judge

Maynard Thompson, Mathematics Dept., Indiana University, Bloomington, IN

Associate Judges

Paul Boisen, Defense Dept., Ft. Meade, MD

James Case, Baltimore, MD

Lisette de Pillis, Mathematics Dept., Harvey Mudd College, Claremont, CA

Doug Faires, Dept. of Mathematics and Statistics, Youngstown State University, Youngstown, OH

Jerry Griggs, Dept. of Mathematics, University of South Carolina, Columbia, SC (SIAM)

Jeff Hartzler, Dept.
of Mathematics, Penn State University, Middletown, PA (MAA)

Mario Juncosa, RAND Corporation, Santa Monica, CA

Deborah Levinson, Dept. of Mathematics, Colorado College, Colorado Springs, CO

Veena Mendiratta, Lucent Technologies, Naperville, IL

Don Miller, Dept. of Mathematics, St. Mary's College, Notre Dame, IN

Mark Parker, Dept. of Mathematical Sciences, U.S. Air Force Academy, CO (SIAM)

John L. Scharf, Carroll College, Helena, MT

Lee Seitelman, Glastonbury, CT

Kathleen M. Shannon, Salisbury State University, Salisbury, MD (MAA)

Jonathan Shapiro, Dept. of Mathematics, California Polytechnic State University, San Luis Obispo, CA
Robert M. Tardiff, Dept. of Mathematical Sciences, Salisbury State University, Salisbury, MD
Michael Tortorella, Lucent Technologies, Holmdel, NJ
Marie Vanisko, Carroll College, Helena, MT (Triage)
Martin Wildberger, Electric Power Research Institute, Palo Alto, CA (SIAM)

Triage Judges
(all from Mathematics Dept., Carroll College, Helena, MT)

Head Triage Judge

Marie Vanisko

Associate Triage Judges

Mark Keefe, Terence J. Mullen, Phil Rose, and Jack Oberweiser

# ICM

Contest Director

David C. Arney, Dept. of Mathematical Sciences, U.S. Military Academy

# Elephant Population Problem

Head Judge

Gary W. Krahn, U.S. Military Academy, West Point, NY

Associate Judges

Kelly Black, Mathematics Dept., University of New Hampshire, Durham, NH (Triage)
John Boland, Center for Industrial and Applied Mathematics (CIAM), University of South Australia, Australia
Karen Bolinger, Dept. of Mathematics, Clarion University of Pennsylvania, Clarion, PA
Ben Fusaro, Mathematics Dept., Florida State University, Tallahassee, FL (MAA)

Triage Judges
(all from Mathematics Dept., University of New Hampshire, Durham, NH)

Head Triage Judge

Kelly Black

Associate Judges

John B.
Geddes, Gertrud Kraut, Dave Mecker, Jason Owen, Phil Ramsey, and +Kevin Short + +# Sources of the Problems + +Contributors of the problems were as follows: + +- Air Traffic Control Problem: Robert Rovinsky, Federal Aviation Agency, Washington, DC +- Channel Assignment Problem: Jerrold R. Griggs, Dept. of Mathematics, University of South Carolina, Columbia, SC +- Elephant Population Problem: Anthony M. Starfield, Dept. of Ecology, Evolution, and Behavior, University of Minnesota, Minneapolis, MN + +# Acknowledgments + +The MCM was funded this year by the National Security Agency, whose support we deeply appreciate. The ICM received major funding from the National Science Foundation. We thank Dr. Gene Berg of NSA for his coordinating efforts. The MCM is also indebted to INFORMS, SIAM, and the MAA, which provided judges and prizes. + +I thank the MCM judges and MCM Board members for their valuable and unflagging efforts. Harvey Mudd College, its Mathematics Dept. staff, and Prof. Borrelli were gracious hosts to the judges. + +# Cautions + +To the reader of research journals: + +Usually a published paper has been presented to an audience, shown to colleagues, rewritten, checked by referees, revised, and edited by a journal editor. Each of the student papers here is the result of undergraduates working on a problem over a weekend; allowing substantial revision by the authors could give a false impression of accomplishment. So these papers are essentially au naturel. Light editing has taken place: minor errors have been corrected, wording has been altered for clarity or economy, and style has been adjusted to that of The UMAP Journal. Please peruse these student efforts in that context. + +To the potential MCM Advisor: + +It might be overpowering to encounter such output from a weekend of work by a small team of undergraduates, but these solution papers are highly atypical. 
A team that prepares and participates will have an enriching learning experience, independent of what any other team does. + +# Appendix: Successful Participants + +KEY: + +
- P = Successful Participation
- H = Honorable Mention
- M = Meritorious
- O = Outstanding (published in this special issue)
- A = Air Traffic Control Problem
- B = Channel Assignment Problem
- I = Elephant Population Problem
+ +
| Institution | City | Advisor | Results |
|---|---|---|---|
| ALABAMA | | | |
| Huntingdon College | Montgomery | Bob Robertson | P |
| ALASKA | | | |
| Univ. of Alaska Fairbanks | Fairbanks | Chris Hartman | MM |
| CALIFORNIA | | | |
| Calif. Acad. of Math & Sci. | Carson | Brian R. Lawler | M |
| Calif. Lutheran University | Thousand Oaks | Cindy Wyels | P |
| Calif. Poly. State Univ. | San Luis Obispo | Thomas O'Neil | O,H |
| Calif. State U. | Bakersfield | Joseph R. Fiedler | PP |
| Calif. State U. | Northridge | Gholam-Ali Zakeri | P |
| Calif. State U. Fullerton | Fullerton | Mario Martelli | PH,P |
| Calif. State U. Monterey Bay | Seaside | Dan Fernandez and Michael Dalton | CP |
| Harvey Mudd College | Claremont | Michael Moody | HMO,P |
| | | Zachary Dodds | MH |
| Humboldt State Univ. | Arcata | Roland Lamberson | P |
| Sonoma State University | Rohnert Park | Sunil K. Tiwari | P |
| Univ. of Calif. - Berkeley | Berkeley | Rainer K. Sachs | HP |
| COLORADO | | | |
| Colorado College | Colorado Springs | Jane McDougall | H |
| | | Jennifer Courter | H |
| Mesa State College | Grand Junction | Bill Tiernan | P,P |
| | | Edward Bonan-Hamada | H,H |
| Regis University | Denver | Linda Duchrow | PP |
| U.S. Air Force Academy | USAF Academy | Dawn Stewart | P,P |
| Univ. of Colorado | Colorado Springs | Jon Epperson | MH |
| | Boulder | Anne M. Dougherty | O |
| | | Anne Dougherty and Bengt Fornberg | H |
| Univ. of Southern Colorado | Pueblo | James Louisell | P |
| CONNECTICUT | | | |
| Sacred Heart Univ. | Fairfield | Antonio A. Magliaro | P |
| Southern Conn. State Univ. | New Haven | Ross B. Gingrich | P |
| | | Theresa Bennett | P |
| Univ. of Bridgeport | Bridgeport | Dr. Natalia Romalis | P |
| U.S. Coast Guard Academy | New London | John Freda | P |
| Western Conn. State Univ. | Danbury | C. Edward Sandifer | H,P |
| Yale University | New Haven | Steven Orszag | M |
| DISTRICT OF COLUMBIA | | | |
| Georgetown University | Washington | Andrew Vogt | PP |
| FLORIDA | | | |
| Florida A&M University | Tallahassee | Bruno Guerrieri | P |
| Florida Inst. of Tech. | Melbourne | Gary W. Howell | P |
| Jacksonville Univ. | Jacksonville | Robert A. Hollister | HP |
| Stetson University | Deland | Lisa O. Coulter | M,H |
| Univ. of North Florida | Jacksonville | Peter A. Braza | P |
| GEORGIA | | | |
| Agnes Scott College | Decatur | Robert A. Leslie | P |
| Georgia Southern Univ. | Statesboro | Eric Funasaki | P |
| | | Dr. Goran Lesaja | P |
| | | Gary Hubard | H |
| State Univ. of West Georgia | Carrollton | Scott Gordon | H |
| HAWAII | | | |
| Kapi'olani Community | Honolulu | Susan Moore | P |
| IDAHO | | | |
| Boise State University | Boise | Stephen H. Brill | P |
| ILLINOIS | | | |
| Greenville College | Greenville | Galen R. Peters | PP |
| Northern Illinois University | DeKalb | Hamid Bellout | P |
| Wheaton College | Wheaton | Paul Isihara | P,P |
| INDIANA | | | |
| Goshen College | Goshen | David Housman | PP |
| | | David Housman and Patricia Oakley | P |
| Indiana University | Bloomington | John Brothers | P,P |
| Rose-Hulman Inst. of Tech. | Terre Haute | Dr. Frank Young | M |
| | | Frank Young | P |
| | | David Rader | M,P |
| | | Sharon A. Jones and Robert J. Houghtalen | H,P |
| Saint Mary's College | Notre Dame | Peter D. Smith | H,P |
| IOWA | | | |
| Drake University | Des Moines | Alexander F. Kleiner | MP |
| Graceland College | Lamoni | Ronald K. Smith | H |
| | | Steve K. Murdock | PP |
| Grand View College | Des Moines | Timothy L. Hardy | P |
| Grinnell College | Grinnell | Marc Chamberland | M,H |
| | | Mark Montgomery | P,P |
| Iowa State University | Ames | Stephen J. Willson | H |
| Luther College | Decorah | Reginald D. Laursen | M,H |
| Mt. Mercy College | Cedar Rapids | Kent R. Knopp | M,P |
| Simpson College | Indianola | M.E. Waggoner | M,P |
| | | Randy Bower | PP |
| Univ. of Northern Iowa | Cedar Falls | Gregory M. Dotseth | P |
| Wartburg College | Waverly | Mariah Birgen | P |
| KANSAS | | | |
| Benedictine College | Atchison | Jo Ann Fellin, OSB | P |
| KENTUCKY | | | |
| Asbury College | Wilmore | Kenneth P. Rietz | M |
| Bellarmine College | Louisville | Bill Hardin | H |
| LOUISIANA | | | |
| Northwestern State U. of La. | Natchitoches | Richard DeVault | P |
| MAINE | | | |
| Bowdoin College | Brunswick | Adam B. Levy | P |
| Colby College | Waterville | Jan Holly | PH |
| MARYLAND | | | |
| Goucher College | Baltimore | Robert E. Lewand | P |
| Mt. St. Mary's College | Emmitsburg | Bill O'Toole | P |
| Salisbury State Univ. | Salisbury | Michael Bardzell | H |
| | | Steven M. Hetzler | H |
| MASSACHUSETTS | | | |
| MIT | Cambridge | Michael P. Brenner | M |
| | | Michael P. Brenner and L. Mahadevan | M,H |
| Simon's Rock College | Great Barrington | Allen B. Altman | PP |
| | | Michael Bergman | P |
| Smith College | Northampton | Ruth Haas | PP |
| Univ. of Massachusetts | Lowell | James K. Graham-Eagle | P |
| | | Lou Rossi | P |
| | Amherst | Robert B. Kusner | P |
| Western New England College | Springfield | Lorna Hanes | P |
| Worcester Poly. Inst. | Worcester | Bogdan Vernescu | M |
| | | Richard Jordan | H |
| MICHIGAN | | | |
| Calvin College | Grand Rapids | Dorothea Pronk | M |
| Eastern Michigan University | Ypsilanti | Christopher E. Hee | H |
| Hillsdale College | Hillsdale | John P. Boardman | PH |
Lawrence Tech. Univ.SouthfieldHoward WhitstonP
Ruth G. FavroP
Scott SchneiderH
Michigan State Univ.E. LansingCharles R. MacCluerH
Univ. of MichiganDearbornDavid JamesM
MINNESOTA
Macalester CollegeSt. PaulDaniel A. SchwalbeP
Daniel KaplanPP
MISSOURI
Crowder Coll.NeoshoCheryl IngramP
Patrick CassensP
Northwest Missouri State Univ.MaryvilleRussell EulerP
St. Louis UniversitySt. LouisDavid A. JacksonH
Truman State UniversityKirksvilleSteve SmithP,P
Washington UniversitySt. LouisHiro MukaiO
Wentworth Military AcademyLexingtonJacque MaxwellH
MONTANA
Carroll CollegeHelenaJack OberweiserP
Mary KeeffeP
Phil RoseP
Terry MullenP
NEBRASKA
Hastings CollegeHastingsDavid B. CookeP,P
NEVADA
Sierra Nevada CollegeIncline VillageSteve EllsworthP
University of NevadaRenoMark M. MeerschaertP
NEW JERSEY
Rowan UniversityGlassboroHieu NguyenP
Paul J. LaumakisP
Sam LoflandP
NEW MEXICO
New Mexico Inst. of Mining & Tech.SocorroWilliam D. StoneP
New Mexico State UniversityLas CrucesMarcus S. CohenP
NEW YORK
Colgate UniversityHamiltonThomas W. TuckerH,P
Ithaca CollegeIthacaJames E. ConklinP
John C. MaceliH
Nazareth CollegeRochesterKelly M. FullerH
Rensselaer Poly. Inst.TroyBruce PiperP
Donald A. DrewP
Roberts Wesleyan CollegeRochesterCarlos A. PereiraP
St. Bonaventure Univ.St. BonaventureAlbert G. WhiteP
Maureen CoxP,P
SUNY CortlandCortlandR. Bruce Mattingly and R. Lawrence KlotzP
U.S. Military AcademyWest PointDavid BaileyO
Diane NelsonH
Greg ParnellM
Michael Jaye and
Greg FlemingM
Michael Phillips and
Michael DarrowP
James S. RolfH
Wells CollegeAuroraCarol C. ShilepskyP
Westchester Comm. CollegeValhallaSheela WhelanPP
NORTH CAROLINA
Brevard CollegeBrevardTheresa A. BrightP
Duke UniversityDurhamDavid P. KrainesO
Elon CollegeElon CollegeTodd LeeP
N.C. School of Sci. & Math.DurhamDot Doyle and Dan TeagueO,O
North Carolina State Univ.RaleighRanji S. RanjithanH
Univ. of North CarolinaChapel HillJon W. TolleP
WilmingtonRussell L. HermanPH
Wake Forest UniversityWinston-SalemEdward AllenO
Western Carolina Univ.CullowheeJeff GrahamH
OHIO
Capital UniversityColumbusDr. Ignatios VakalisP
Hiram CollegeHiramAngela SpalsburyH,P
Marietta CollegeMariettaTom LaFramboiseP
Miami UniversityOxfordDouglas E. WardP
Ohio UniversityAthensDavid N. KeckH
The College of WoosterWoosterPamela PierceH
University of CincinnatiCincinnatiCharles GroetschM
Xavier UniversityCincinnatiRichard J. PulskampP,P
Youngstown State Univ.YoungstownBob KramerP
Scott MartinM
Steve HanzelyM
Thomas SmotterMPP
OKLAHOMA
Univ. of Central OklahomaEdmondCharles L. CooperH
Daniel J. EndresP
Charlotte Simmons and Jesse ByrneP
OREGON
Eastern Oregon State CollegeLaGrandeDavid AllenP
John ThurberP
Anthony Tovar and Jeffrey PutnamP
Richard Hermens and Tom HerrmannP
Lewis & Clark CollegePortlandRobert W. OwensO,P
Southern Oregon State CollegeAshlandKemble R. YatesP
PENNSYLVANIA
Bloomsburg UniversityBloomsburgKevin FerlandP
Kevin Ferland and Scott InchM
Bucknell UniversityLewisburgSally KoutsoliotasP
Chatham CollegePittsburghEric RawdonP
Larry ViehlandP
Clarion UniversityClarionAndrew Turner and Sharon ChallenerP
John W. HeardH
Jon BealP
William D. KrughP
Gettysburg CollegeGettysburgJames P. FinkPM
John JaromaH
Sharon StephensonH
Haverford CollegeHaverfordRobert ManningP
Lafayette CollegeEastonThomas HillM
Messiah CollegeGranthamDouglas C. PhillippyP
James MakowskiH
RHODE ISLAND
Rhode Island CollegeProvidenceDavid L. AbrahamsonP
SOUTH CAROLINA
Central Carolina Tech. Coll.SumterMichael L. SalaisP
Charleston Southern Univ.CharlestonStan PerrineP,P
Univ. of South CarolinaAikenLaurene FausettP
SOUTH DAKOTA
Northern State Univ.AberdeenA.S. ElkhaderH
TENNESSEE
Lipscomb UniversityNashvilleGary HallH
Mark MillerP
TEXAS
Abilene Christian UniversityAbileneDavid HendricksP
Angelo State Univ.San AngeloJohn C. (Trey) SmithH
Austin Community College/RiverAustinAllison SuttonP
Southwestern Univ.GeorgetownTherese SheltonP
Stephen F. Austin State Univ.NacogdochesColin StarrP
Trinity UniversitySan AntonioAllen HolderM
Jeffrey OldhamP,P
Tarynn WittenM
University of DallasIrvingPete McGillP
University of North TexasDentonJohn QuintanillaH
University of Texas at DallasRichardsonTiberiu ConstantinescuP
UTAH
University of UtahSalt Lake CityDon H. TuckerH
Weber State UniversityOgdenRichard MillerH
VERMONT
Johnson State CollegeJohnsonGlenn SproulP,P
VIRGINIA
Governor's SchoolRichmondCrista HamiltonOH
John BarnesH
James Madison UniversityHarrisonburgCaroline SmithH
James S. SochackiP
Randolph-Macon CollegeAshlandEve TorrenceP
University of RichmondRichmondKathy W. HokeM
Virginia Western Comm. CollegeRoanokeRuth ShermanP
WASHINGTON
Pacific Lutheran UniversityTacomaRachid BenkhaltiM,P
University of Puget SoundTacomaRobert A. BeezerH,P
University of WashingtonSeattlePeter SchmidP
Randall J. LeVequeP
Western Washington UniversityBellinghamIgor AverbakhH
WISCONSIN
Beloit CollegeBeloitPaul J. CampbellHP
Cardinal Stritch UniversityMilwaukeeSr. Barbara ReynoldsP
Ripon CollegeRiponDavid ScottPP
St. Norbert CollegeDe PereJohn A. FrohlingerP
Univ. of WisconsinPlattevilleSherrie NicolP
Stevens PointNathan WetzelP
Wisconsin Lutheran CollegeMilwaukeeMarvin C. PapenfussP
AUSTRALIA
University of New South WalesSydneyJames FranklinM,H
Bruce HenryH,P
University of Southern QueenslandToowoombaTony RobertsP
CANADA
Brandon UniversityBrandon, MBDoug PickeringPPH
Memorial Univ. of NewfoundlandSt. John's, NFLDAndy FosterPH
Okanagan University CollegeKelowna BCDr. Heinz BauschkeH
University of OttawaOttawa, ONLuc DemersH,P
University of SaskatchewanSaskatoon, SKJames A. BrookeP
Raj SrinivasanM
Tom SteeleP
University of TorontoToronto ONNicholas A. DerzkoM,MP
York UniversityToronto ONJuris StepransP
Neal MadrasH
University of Western OntarioLondon, ONPeter H. PoolePP
CHINA
Anhui UniversityHefei, AnhuiChen JunshengM
Wang DapengH
Wang HaiXi'anH
Beijing Institute of TechnologyBeijingXiao DicuiHP
Beijing Union UniversityBeijingRen KailongP
Xing ChunfengH
Zeng QingliH
Beijing Univ. of Chemical Tech.BeijingLiu DaminH
Zhao BaoyuanH
Beijing U. of Post & Telecom.BeijingHe ZuguoM
Sun HongxiangM,P
Central-South Univ. of Tech.Changsha, HunanZheng ZhoushunP
Zhang HongyanH,P
China U. of Mining & Tech.Xuzhou, JiangsuZhou ShengwuM
Wu ZongxiangM
Xue XiuqianH
Zhu KaiyongH
Chongqing UniversityChongqingChen YihuaM
Gong QuH
He Renbin and Zhang DanH
Li ChuandongP
Liu QionsunH
Zhao PingyongH
Coll. of Sciences, Jiamusi Univ.Jiamusi, HeilongjiangShan BaifengP,P
Gu LizhiPP
Dept. of System ScienceChangsha, HunanDuan XiaolongP
East China U. of Sci. & Tech.ShanghaiShi JinsongM
Shao NianciH
Lu XiwenM
Lu YuanhongM
Lu Xiwen and Shao NianciP
Lu Yuanhong and Dong LimingH
First Middle School of JiadingShanghaiChen GanH
Fang YunpingM
Fudan UniversityShanghaiCao Yuan and Cai ZhijieH
Xu QinfengP
Gong XueqingP,P
Cai ZhijieM
Cai Zhijie and Cao YuanP
Guangdong Commercial Coll.Guangzhou, GuangdongChen GuanghuiP
Harbin Engineering Univ.Harbin, HeilongjiangLuo YueshengH
Shen JihongP
Zhang XiaoweiH
Shang Shouting and Shang ShoutingP
Zheng TongH
Harbin Institute of Tech.Harbin, HeilongjiangShang Shouting and Yu XiujuanP
Wang XuefengM
Zhang ChipingH
Hefei University of Tech.Hefei, AnhuiBao JieP
Du XueqiaoP
Huang YouduP
Su HuamingH
Yang YouqingP
Zhou YongwuP
Huazhong UniversityWuhan, HubeiWang YizhiP,P
Inst. of Info. Sci. & Eng.Shenyang, LiaoningCui JianjiangH
Hao PeifengH
Xiao WendongM
Inst. of Operations ResearchQufuShi ZhenjunP
Jilin UniversityChangchun, JilinHuang QingdaoP
Lu Xian RuiP
Jinan UniversityGuangzhou, GuangdongFan SuohaiP
Ye ShiqiP
Nanjing Normal UniversityNanjing, JiangsuFu ShitaiP,P
Yao TianxingH,H
Nankai UniversityTianjinRuan JishouHP
Chen QiushuangH
Xingwei ZhouP
Chen ZengqiangH
National U. of Defence Tech.Changsha, HunanCheng LizhiM,H
Wu MengdaO
Northwest Textile Inst.Xi'an, ShaanxiHe XingshiP
Northwest UniversityXi'an, ShaanxiHe RuichanH
Northwestern Polytech. U.Xi'an, ShaanxiLiu XiaodongM
Liu Xiaodong and Peng GuohuaP
Peng GuohuaM
Wang MingyuH
Xu WeiH
Xu Wei and Wang MingyuM
Peking UniversityBeijingDeng MinghuaMP
Lei GongyanM,H
Shao Min and Zhang TaoM,H
Science School of Xi'an Jiaotong U.Xi'an, ShaanxiZhou YicangH
He XiaoliangM
He XiaoliangP
Shandong UniversityJinan, ShandongWang DongshengP
Shanghai Jiao Tong Univ.ShanghaiGong Peimin and Liu XiaojunP,P
Huang JianguoH
Lei ZhishuH
Song BaoruiP
Zhou GangM
Shanghai Foreign Languages SchoolShanghaiPan LiounP
Wan BaiheM
Shanghai Normal Univ.ShanghaiZhu DetongP,P
Guo ShenghuanP
Sichuan UniversityChengdu, SichuanZou ShuchaoP
Zhou JieP
Yang ZhihuoP
South China Univ. of Tech.Guangzhou, GuangdongHe Chunxiong and He DehuaH
Hao ZhifengP
Xu Shuwen and Hong YiP
Fu HongzhuoM
Xie LejunH
He ChunxiongP
Southeast UniversityNanjingChen EnshuiM
Huang JunHM,M,P
Tianjin UniversityTianjinRong XiminP
Liu ZeyiH
Hu ZhimingM,H
Hu Zhiming and Mei HeluP
Tsinghua UniversityBeijingYe JunHP
Univ. of Elec. Sci. & Tech.ChengduWang JianguoP
Xu QuanzhiHH,P
Zhong ErjieH
Univ. of Sci. & Tech. of ChinaHefei, AnhuiWan QianM
Xu JizhengH
Hong QuanH
Jin LetianP
Sun LiangM
Wei LiuH
Wuhan U. of Hydraulics & Eng.Wuhan, HubeiPeng ZuzengM
Cheng GuixingH
Huang ChongchaoH
Li YuhongH
Xi'an Jiaotong UniversityXi'an, ShaanxiDai YonghongP
Zhou YicangP,P
Xi'an Univ. of Tech.Xi'an, ShaanxiDu ZhanrongP
Tang PingP
Xidian School of ComputersXi'an, ShaanxiZhou Shuisheng and Liu HongyingP
Xidian UniversityXi'an, ShaanxiLiu HongweiH
Zhang ZhuokuiH
Mao YongcaiP
Zhejiang UniversityHangzhou, ZhejiangYang QifanMM
He YongMM
Cong Zhang, Chu Jiaowu, and He YongM
Cong Zhang, Chu Jiaowu, and Yang QifanM
Zhong Shan UniversityGuangzhou, GuangdongChen ZepengH
Yin ZhaoyangP
Li CaiweiP
Li ChaoruiH
Tang MengxiH
Yuan ZhuojianH
FINLAND
Paivola CollegeTarttilaEsa LappiPM
HONG KONG
Hong Kong Baptist Univ.Kowloon TongChong Sze TongH
Wai Chees ShiuP
IRELAND
University College CorkCorkJames J. GrannellP
Michael QuinlanM
Patrick FitzpatrickP
Martin StynesH
University College DublinBelfield, DublinE.A. CoxP
Peter DuffyH
Fergus GainesH
Niamh O'SullivanP
University College GalwayGalwayMartin MeereH
Michael P. TuiteH
University of LimerickLimerickGordon S. LessellsP
LITHUANIA
Vilnius UniversityVilniusAlgirdas ZabulionisP
Antanas MitasiunasP
Ricardas KudzmaP
Tadas MeskauskasP
SOUTH AFRICA
University of StellenboschMatielandJan H. van VuurenPP
# Acknowledgment

The editor thanks Chia Tzun Goh '01 of Beloit College for his help in identifying the family names of team advisors from China, for whom the family name is listed first.

# Air Traffic Control

Samuel Westmoreland Malone
Jeffrey Abraham Mermin
Daniel Bertrand Neill
Duke University
Durham, NC

Advisor: David P. Kraines

# Introduction

We propose five models. Each gives a metric for measuring eventual danger and a threshold beyond which controller intervention becomes necessary. An immediate-danger metric prioritizes problems for a controller.

We test the models on several sample cases. The Close-Approach Model matches our intuitive understanding of the situations most closely.

We present an algorithm that models the decision process of a controller detecting and resolving conflicts. The time complexity of scanning for potential conflicts grows quadratically with the number of airplanes, though it could be reduced by clustering the airplanes by proximity. We argue that conflict resolution for a cluster of $n$ airplanes is an NP-complete problem with worst-case complexity $2^{n(n+1)/2}$.

# Assumptions and Hypotheses

- A near mid-air collision (sometimes called a near miss) is defined by the FAA as an incident in which two airplanes pass within 500 ft of each other [Federal Aviation Administration 2000].
- The airspace can be represented by a convex subset of $R^3$.
- Air-traffic controllers have established protocols to prevent airplanes from colliding when crossing airspace boundaries in opposite directions.
- We know the position and velocity of every airplane in the airspace, with negligible error.
- Every airplane has sufficiently negligible acceleration that a linear model for its movement makes sense over at least the next 2 min, unless it is:
  - accelerating under the direction of a controller (so the controller has already determined potential conflicts),
  - taking off (so it is not in the airspace used by cruising airplanes), or
  - attempting to land (so it is not in the airspace used by cruising airplanes).
- Airplanes can accelerate through a 2-minute turn, either clockwise or counterclockwise, parallel to the $xy$-plane [Denker 2000].
- Airplanes travel within vertically well-separated planes, called "cruising altitudes" [Mahalingam 1999, 27]. Airplanes at different cruising altitudes pose no danger to each other.
- Airplanes can accelerate at a given maximum rate in the direction of their velocity. (In particular, they do not cruise at their maximum speed.)
- Two airplanes present an eventual danger when, if their velocities are allowed to go unchanged indefinitely,
  - they will collide,
  - they will pass "near" each other at some time, or
  - they will pass through "nearby" points in space at "nearby" times.

We later discuss appropriate values of "near."

# Possible Solutions to the Danger Problem

Two considerations dominate in determining the danger that two airplanes pose to each other: the proximity that they will attain and the time until it occurs. We present several approaches:

- The Trivial Model provides a yes-or-no answer to the question "Will the airplanes collide?"
- The Probabilistic Model determines risk based on the probabilities of collisions and near misses.
- The Close-Approach Model calculates the closest approach of two airplanes and the time until it occurs.
- The Space-Time Model considers the closest approach in four-dimensional space-time and the time until it occurs.
- The Logarithmic Derivative Model approximates a human observer's intuition about how fast the airplanes are approaching each other.
# The Trivial Model

# How It Works

The Trivial Model ignores effects such as wind, measurement uncertainty, and piloting imperfection, which could make an airplane's course deviate from the linear projection of its current position and velocity.

Large commercial aircraft have lengths and wingspans of about 200 ft. Thus, a collision occurs only if the centers of the airplanes pass within 200 ft of each other, and a near miss occurs only if they pass within 700 ft.

Suppose airplanes $A$ and $B$ have position and velocity vectors $p_A, v_A$ and $p_B, v_B$. Set $p = p_B - p_A$ and $v = v_B - v_A$, the position and velocity of $B$ relative to $A$. The distance of closest approach is the altitude from $A$ to the line through $B$ in the direction of $v$ (Figure 1). Its length is

$$
d = |p| \sin\theta = \frac{|p \times v|}{|v|}.
$$

![](images/a753a4598ba6a59d1f33dbc90176082f46ea21f8eaa7f5c9aecd7f6dfae1e3df.jpg)
Figure 1. The position and velocity vectors of airplane $B$ relative to airplane $A$.

There will be a collision if $d$ is less than 200 ft, and a near miss if it is less than 700 ft. A measure of the eventual danger takes on three discrete values, $a \ (\gg 1)$, $1$, or $0$, corresponding to a collision, a near miss, and no danger. The value of $a$ is best determined empirically.

# Strengths and Weaknesses

This model is simple and efficient. However, it assumes that airplanes always travel at constant speed in a straight line; in fact, airplanes are buffeted by changing winds, and their actual trajectories vary significantly and chaotically from those predicted by a linear model. Additionally, this model considers only eventual danger, not how soon the danger will be present, though it could be extended to rank collisions and near misses by immediate danger (time to collision or near miss).
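As a concrete sketch (our code, not the authors'), the Trivial Model's test reduces to one cross product. The 200-ft and 700-ft thresholds are the collision and near-miss radii above; the value $a = 1000$ is an arbitrary stand-in for the empirically determined constant, and the clamp for already-receding aircraft is our addition.

```python
import numpy as np

COLLISION_FT = 200.0   # centers within 200 ft: collision
NEAR_MISS_FT = 700.0   # centers within 700 ft: near miss

def trivial_danger(p_a, v_a, p_b, v_b, a=1000.0):
    """Return the Trivial Model's eventual-danger value: a (>> 1), 1, or 0."""
    p = np.asarray(p_b, float) - np.asarray(p_a, float)   # position of B relative to A
    v = np.asarray(v_b, float) - np.asarray(v_a, float)   # velocity of B relative to A
    if np.allclose(v, 0):                 # equal velocities: separation is constant
        d = np.linalg.norm(p)
    elif np.dot(p, v) >= 0:               # already receding (our clamp, not the paper's)
        d = np.linalg.norm(p)
    else:                                 # closest approach d = |p x v| / |v|
        d = np.linalg.norm(np.cross(p, v)) / np.linalg.norm(v)
    if d < COLLISION_FT:
        return a
    if d < NEAR_MISS_FT:
        return 1.0
    return 0.0
```

For example, a head-on pair (A at the origin heading $+x$, B at (6000, 0, 0) heading $-x$, both at 811 ft/s) gets danger $a$, while two parallel airplanes 18,000 ft apart get danger 0.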
# Probabilistic Simulation Model

# How It Works

The Probabilistic Simulation Model uses a Monte Carlo simulation to estimate the probability that a given situation results in a collision or near miss. It performs a large number of random trials (each of which may result in a collision, a near miss, or neither) and computes a measure of eventual danger: danger $= c_1 x + c_2 y$, where $x$ and $y$ are the probabilities of collision and near miss; as in the Trivial Model, we set $c_2 = 1$ and $c_1 \gg 1$.

We assume that each airplane's speed and direction are normally distributed, with mean the measured value and standard deviation specified by the user. For each trial, a normally distributed random value of each quantity is chosen, and both airplanes' paths are extrapolated linearly; we use the minimum-distance formula of the Trivial Model to determine whether a collision, a near miss, or neither occurs.

# Strengths and Weaknesses

Though a minimum time to collision or near miss is computed, this time is not taken into account in the measure of danger. Hence, we add an optional user-specified maximum time horizon: if two airplanes do not reach their minimum distance by then, their distance at that time is considered instead of the (later) minimum distance. Thus, we ignore conflicts that occur far in the future, focusing on more immediate dangers. According to one source [MAICA: MET improvement . . . , 124], conflict analysis tools that extrapolate from current aircraft trajectories "operate over a short time horizon, generally less than two minutes."
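A minimal sketch of such a trial loop, assuming 2-D flight at a fixed altitude; the standard deviations, the restriction to future time ($t \geq 0$), and the weight $c_1 = 20$ (mirroring the 20:1 collision-to-near-miss weighting used later in testing) are our choices, not the paper's.

```python
import math
import random

COLLISION_FT, NEAR_MISS_FT = 200.0, 700.0

def closest_distance(p, v):
    """Closest future approach of relative position p under constant relative velocity v."""
    vv = v[0] * v[0] + v[1] * v[1]
    if vv == 0:
        return math.hypot(*p)
    t = max(0.0, -(p[0] * v[0] + p[1] * v[1]) / vv)   # time of closest approach, clamped to t >= 0
    return math.hypot(p[0] + t * v[0], p[1] + t * v[1])

def monte_carlo_danger(speed_a, hdg_a, speed_b, hdg_b, p_b,
                       sd_speed=10.0, sd_hdg=0.02, trials=10000, c1=20.0):
    """Estimate danger = c1 * P(collision) + P(near miss) by random perturbation."""
    collisions = near_misses = 0
    for _ in range(trials):
        sa = random.gauss(speed_a, sd_speed); ha = random.gauss(hdg_a, sd_hdg)
        sb = random.gauss(speed_b, sd_speed); hb = random.gauss(hdg_b, sd_hdg)
        v = (sb * math.cos(hb) - sa * math.cos(ha),   # relative velocity of B w.r.t. A
             sb * math.sin(hb) - sa * math.sin(ha))
        d = closest_distance(p_b, v)
        if d < COLLISION_FT:
            collisions += 1
        elif d < NEAR_MISS_FT:
            near_misses += 1
    return (c1 * collisions + near_misses) / trials
```

A head-on pair 6000 ft apart yields a danger near $c_1$ (almost every perturbed trial still collides), while a receding pair yields 0.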
Thus, we have, as a first approximation,

$$
\text{Eventual danger} \approx \frac{1}{(\text{distance of closest approach})^{\alpha}},
$$

$$
\text{Immediate danger} \approx \frac{1}{(\text{distance of closest approach})^{\alpha}\,(\text{time until closest approach})^{\beta}}.
$$

Since danger could be averted by accelerating the airplanes away from each other, the extra separation achieved should be proportional to the square of the time of acceleration. Since the time during which they can accelerate is bounded by the time until projected closest approach, it seems reasonable to set $\beta = 2\alpha$; and since raising to a positive power does not affect ordering, we set $\alpha = 1$, $\beta = 2$.

Such a simple formula runs into trouble in boundary situations:

- No matter how far away the airplanes will be at their closest approach, immediate danger goes to infinity as they come to closest approach.
- If the two airplanes are on a collision course, the formula gives infinite immediate danger, no matter how much time remains until collision.
- If the airplanes have nearly equal velocities, the formula rates immediate danger as near zero (unless the aircraft are practically on top of each other), when it should intuitively be inversely proportional to current separation.

We fix the formula as follows:

$$
\text{Immediate danger} = \frac{1}{(\text{distance of closest approach} + c_1)\,(\text{time until closest approach} + c_2)^2} + \frac{c_3}{\text{current separation}},
$$

where $c_1$, $c_2$, and $c_3$ are positive constants, probably best determined empirically.

Now we calculate the ingredients of the formula.
If the distance of closest approach $d$ of a pair of airplanes is sufficiently large (e.g., $d > 5$ nautical mi [Mahalingam 1999, 26-27]), they pose no danger to each other. If they will pass closer, we rate the level of eventual danger as $1/d$.

The time until closest approach is the time until airplane $B$ reaches the closest-approach point $C$:

$$
\frac{\left|\overrightarrow{BC}\right|}{|v|} = \frac{|p|\cos\theta}{|v|} = \frac{p\cdot v/|v|}{|v|} = \frac{p\cdot v}{|v|^2}.
$$

Plugging in, we get

$$
\text{Immediate danger} = \frac{|v|^5}{\left(|p\times v| + |v|\,c_1\right)\left(p\cdot v + |v|^2 c_2\right)^2} + \frac{c_3}{|p|}.
$$

Since the first summand gives $0/0$ when $v = 0$, that is, when the two airplanes have identical velocities, in this case we set the immediate danger equal to $c_3/|p|$.

# Strengths and Weaknesses

The danger can be computed with just over 50 basic numeric operations when there is eventual danger, and with about half as many to conclude that there is none; thus, a personal computer could handle this computation every second for several thousand airplane-pairs, or about 500 airplanes every 2 min.

This model does not worry about airplanes that pass near each other in space but not in time. For example, it cannot distinguish between two airplanes flying the exact same route through space with a time separation of 15 s and two airplanes flying parallel with a physical separation of 2 mi in a direction orthogonal to their velocity; the first situation appears to us much more dangerous. The next model attempts to differentiate such situations.
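A sketch of the repaired formula (our code, not the authors'): we compute the time of closest approach as $-p\cdot v/|v|^2$, positive when the aircraft are approaching, and clamp it at zero for receding aircraft; the constants $c_1 = 50$ ft, $c_2 = 5$ s, and $c_3 = 0.05$ are the values used later in testing.

```python
import numpy as np

def immediate_danger(p_a, v_a, p_b, v_b, c1=50.0, c2=5.0, c3=0.05):
    """Close-Approach Model immediate danger; positions in ft, velocities in ft/s."""
    p = np.asarray(p_b, float) - np.asarray(p_a, float)   # position of B relative to A
    v = np.asarray(v_b, float) - np.asarray(v_a, float)   # velocity of B relative to A
    sep = np.linalg.norm(p)                               # current separation (ft)
    speed = np.linalg.norm(v)
    if speed == 0.0:                      # identical velocities: separation term only
        return c3 / sep
    t_close = max(0.0, -np.dot(p, v) / speed**2)          # time until closest approach (s)
    d_close = np.linalg.norm(p + t_close * v)             # distance at that time (ft)
    return 1.0 / ((d_close + c1) * (t_close + c2) ** 2) + c3 / sep
```

On the head-on situation used later in testing (B at 6000 ft, closing at 1622 ft/s), the first term dominates; for two airplanes with identical velocities, only the $c_3/|p|$ term remains.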
Thus, intuitively, we have:

$$
\text{Eventual danger} \approx \frac{1}{\text{closest approach in space-time}},
$$

$$
\text{Immediate danger} \approx \frac{1}{(\text{closest approach in space-time})\,(\text{time until closest approach})^2}.
$$

The same corrections for boundary conditions apply, leaving

$$
\text{Immediate danger} = \frac{1}{(\text{closest approach in space-time} + \gamma_1)\,(\text{time until closest approach} + \gamma_2)^2} + \frac{\gamma_3}{\text{current space-time separation}}.
$$

These quantities are harder to compute than in the Close-Approach Model. We can represent the futures of airplane $A$ (initially at the origin) and airplane $B$ by rays in $R^4$, parametrized by the vectors $(v_{a_x}t_a, v_{a_y}t_a, v_{a_z}t_a, kt_a)$ and $(v_{b_x}t_b + p_x, v_{b_y}t_b + p_y, v_{b_z}t_b + p_z, kt_b)$ for $t_a, t_b > 0$, where $k$ is a constant chosen so that one unit of time is as dangerous as one unit of distance. Mahalingam [1999, 26-27] equates a 15-min separation to a 5-nautical-mile separation; we assume that this equivalence scales. Then $k$ equals 5 nautical mi per 15 min, or about 34 ft/s.

For any $t_a, t_b$, the space-time distance between airplane $A$ at time $t_a$ and airplane $B$ at time $t_b$ is

$$
\delta(t_a, t_b) = \left|\left(v_{a_x}t_a, v_{a_y}t_a, v_{a_z}t_a, kt_a\right) - \left(v_{b_x}t_b + p_x, v_{b_y}t_b + p_y, v_{b_z}t_b + p_z, kt_b\right)\right|.
$$

This yields

$$
\left(\delta(t_a, t_b)\right)^2 = At_a^2 + Bt_at_b + Ct_b^2 + Dt_a + Et_b + |p|^2,
$$

where

$$
A = k^2 + |v_a|^2, \quad B = -2k^2 - 2v_a\cdot v_b, \quad C = k^2 + |v_b|^2, \quad D = -2p\cdot v_a, \quad E = 2p\cdot v_b.
$$

The minimum space-time distance occurs at the $t_a$ and $t_b$ that minimize this expression, that is, where $\nabla\delta^2 = 0$:

$$
t_\alpha = \frac{2CD - BE}{B^2 - 4AC}, \qquad t_\beta = \frac{2AE - BD}{B^2 - 4AC},
$$

which are well defined whenever the velocities are not equal, since

$$
B^2 - 4AC = 4\left[(v_a\cdot v_b)^2 - |v_a|^2|v_b|^2 + 2k^2 v_a\cdot v_b - k^2|v_a|^2 - k^2|v_b|^2\right].
$$

The first two terms add to less than zero by Cauchy-Schwarz, and the last three by AM-GM, with equality in both cases only when the velocities are equal. [The case of equal velocities is handled more simply: For every $t_\alpha$, there is a unique $t_\beta$ satisfying $Bt_\alpha + 2Ct_\beta + E = 0$ (since $C$ is always positive) that yields the minimum space-time separation.] The minimum space-time separation is $\delta(t_\alpha, t_\beta)$, and the time until this separation is $\min\{t_\alpha, t_\beta\}$.

The current space-time separation is the minimum of the space-time separations between the current position of airplane $A$ and the future of airplane $B$, and between the current position of $B$ and the future of $A$.

Danger is determined as in the Close-Approach Model, except that the times associated with the closest approach must be computed first. If either is negative, then any danger posed by this airplane-pair has already been avoided, and so the eventual and immediate dangers are set to zero.
Then the space-time separation at closest approach is computed; if it is sufficiently large (e.g., more than 5 nautical mi), the airplanes pose no danger to each other.

# Strengths and Weaknesses

Every airplane-pair receives immediate- and eventual-danger measures from the Space-Time Model at least as high as from the Close-Approach Model, and the Space-Time Model recognizes as dangerous some cases that the Close-Approach Model does not. The Space-Time Model is not significantly slower in operation.

On the other hand, this model is much more opaque to any human who must work with it; human beings are not equipped to think in terms of extra dimensions.

# The Logarithmic Derivative Model

# How It Works

This model arises from the observation that, if the velocities of airplanes $A$ and $B$ remain constant, the time derivative $dd/dt$ of the distance $d$ between the airplanes is monotonically increasing with time (unless the airplanes travel along the same line, in which case it is constant) and is bounded both above and below. Thus, the negative derivative is monotonically decreasing, and

$$
\left.-\frac{d - \ell}{\dfrac{dd}{dt}}\right|_{t=t_0}
$$

gives a lower bound on the time between $t_0$ and any future time $t$ at which the airplanes are separated by a distance less than some danger threshold $\ell$. Thus, the reciprocal of this quantity,

$$
-\frac{\dfrac{dd}{dt}}{d - \ell}
$$

(the negative derivative of the logarithm of $d - \ell$), might serve as a measure of immediate danger.

We investigate the behavior of this function. Suppose that airplane $B$ has position and velocity vectors $p$ and $v$ in a frame of reference in which $A$ is stationary at the origin, as in Figure 2.

![](images/1a6aed9d614431abcb7d2ed9b34bd4a531b78cad1983eb090e02827fba59c1bb.jpg)
Figure 2. The logarithmic derivative.
Now

$$
\frac{d}{dt}\left(|p|^2\right) = \lim_{h\to 0}\frac{|p + hv|^2 - |p|^2}{h} = \lim_{h\to 0}\frac{2hp\cdot v + h^2|v|^2}{h} = 2p\cdot v.
$$

But

$$
\frac{d}{dt}\left(|p|^2\right) = 2|p|\frac{d}{dt}|p|,
$$

so

$$
\frac{d}{dt}|p| = \frac{p\cdot v}{|p|} = |v|\cos\theta.
$$

This quantity is represented in Figure 2 by $\mu$, the projection of one time-unit of velocity onto $p$. Dividing by $|p| - \ell$ gives the number of time units until the projection onto $p$ intersects the circle of radius $\ell$ about $A$.

Consider also the behavior of this function as time passes (Figure 3):

- If $B$ will not pass within $\ell$ of $A$, the target point goes to infinity as $B$ approaches the point of least separation from $A$; the measure simultaneously goes to zero, then becomes negative as that point is crossed. This is fine; all danger is past once the point of least separation is reached.
- If $B$ will pass through the circle of radius $\ell$, the target point converges to the intersection of $B$'s trajectory with the circle; the measure goes to infinity as $B$ approaches the circle, then becomes negative once it passes inside. If $\ell$ is sufficiently small (e.g., 700 ft, the threshold for a near miss), this too is fine: Once the two airplanes are within $\ell$ of each other, it is already too late.

![](images/8d9737c38780b07c74f71e01b7e6c7edb3fa1432c8023b595e8b7da1125e18c5.jpg)
Figure 3. Motion of the target point over time; two cases.

# Strengths and Weaknesses

The immediate-danger measure can be computed with only about 10 basic operations, even faster than the Close-Approach Model. Furthermore, it offers other useful information: The reciprocal of the measure is how much time remains until the airplanes play out whatever danger they face from each other.
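The 10-operation claim is easy to check: the measure needs only a dot product, a norm, and a few scalar operations. A sketch of the computation, with $\ell = 700$ ft (the near-miss radius) as our default threshold:

```python
import numpy as np

def log_derivative_danger(p_a, v_a, p_b, v_b, ell=700.0):
    """Logarithmic-derivative immediate danger, -(dd/dt) / (d - ell).

    Positive while the aircraft are closing; its reciprocal is a lower
    bound on the time (s) before their separation can drop below ell.
    """
    p = np.asarray(p_b, float) - np.asarray(p_a, float)   # position of B relative to A
    v = np.asarray(v_b, float) - np.asarray(v_a, float)   # velocity of B relative to A
    d = np.linalg.norm(p)                 # current separation
    dd_dt = np.dot(p, v) / d              # d|p|/dt = (p . v) / |p|
    return -dd_dt / (d - ell)
```

For the head-on pair (separation 6000 ft, closing at 1622 ft/s), the measure is $1622/5300 \approx 0.31$, i.e., a lower bound of about 3.3 s before the near-miss threshold can be breached.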
This measure behaves correctly in two of the situations where the previous two measures required ugly fixes:

- If the two aircraft are on or very near a collision course, it acts as a countdown.
- If the airplanes are near in time to their closest approach, it gets large only if they are actually close to each other and remains small otherwise.

A flaw is that this measure always gives an immediate danger near zero when the airplanes have nearly identical velocities. As before, we would like the danger in this case to be roughly inversely proportional to their separation. We can solve this problem by adding a term $c/|p|$ to the measure; unfortunately, doing so eliminates the nice relationship between the measure and the time left for the controller to act.

A potentially more serious problem is the inability of this algorithm to project far into the future. It detects almost no difference between, for example, two pairs of airplanes with the same relative velocities, the first of which is on course to collide in 5 min and the second to pass with 1 mi of separation.

# Testing the Models

In the situations of Table 1, all airplanes move at 480 knots (811 ft/s). Airplane $A$'s initial position is at the origin.

Table 1.
Test situations.
| Situation | A heading | B heading | B location (in ft) |
|---|---|---|---|
| 1. Impending head-on collision | 0° | 180° | (6000, 0) |
| 2. Impending oblique collision | 60° | 120° | (3000, 0) |
| 3. Tailgating | 0° | 0° | (2400, 0) |
| 4. Flying alongside | 0° | 0° | (0, 2400) |
| 5. Same point, nearby time | 0° | 90° | (2400, -3200) |
| 6. Same point, nearby time | 0° | 120° | (4400, -2100) |
| 7. Passing at a distance | 0° | 180° | (18000, -6000) |
| 8. Far-future head-on collision | 0° | 180° | (600000, 0) |
| 9. Flying parallel | 0° | 0° | (0, 18000) |
| 10. Right angles | 0° | 270° | (18000, 0) |
| 11. Receding | 0° | 180° | (0, 6000) |
| 12. Receding | 120° | 60° | (3000, 0) |
| 13. Receding | 180° | 0° | (6000, 0) |
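Each situation reduces to the relative position and velocity on which the danger measures operate. The helper below is our own sketch; it assumes headings are measured counterclockwise from the positive x-axis (a convention consistent with the situations above but not stated explicitly in the paper):

```python
import math

SPEED = 811.0  # ft/s (480 knots, as in the text)

def velocity(heading_deg):
    # Velocity vector for a heading at 811 ft/s (assumed convention:
    # degrees counterclockwise from the positive x-axis).
    rad = math.radians(heading_deg)
    return (SPEED * math.cos(rad), SPEED * math.sin(rad))

def relative_state(a_heading, b_heading, b_location):
    # Position and velocity of B relative to A; A starts at the origin.
    ax, ay = velocity(a_heading)
    bx, by = velocity(b_heading)
    return b_location, (bx - ax, by - ay)

# Situation 1: head-on, B at (6000, 0) ft.
p, v = relative_state(0.0, 180.0, (6000.0, 0.0))
print(p, v)  # relative velocity is about (-1622, 0) ft/s: closing at twice the airspeed
```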

We use our intuition and all of our models (except the Space-Time Model, for which we could not find appropriate constants in the allotted time) to rank the immediate dangers presented by each situation. With the Probabilistic Model, we use the metric

$$
\mathrm{danger} = \frac{\mathrm{collisions}}{10000\ \mathrm{trials}} + \frac{1}{20} \cdot \frac{\mathrm{near\ misses}}{10000\ \mathrm{trials}}.
$$

The Close-Approach Model uses $c_{1} = 50$ ft, $c_{2} = 5$ s, and $c_{3} = 0.05$ Hz².

Table 2. Rankings of dangers of situations.
| Situation | Intuition | Triv | Prob | Close-App | Logarithmic |
|---|---|---|---|---|---|
| 1 | 1 | 2 | 1 | 2 | 3.5 |
| 2 | 2 | 2 | 2 | 1 | 3.5 |
| 3 | 3 | 9.5 | 4 | 3.5 | 10.5 |
| 4 | 4 | 9.5 | 7 | 3.5 | 10.5 |
| 5 | 5.5 | 4.5 | 5 | 5 | 2 |
| 6 | 5.5 | 4.5 | 3 | 6 | 1 |
| 7 | 8.5 | 9.5 | 11 | 9 | 5 |
| 8 | 7 | 2 | 6 | 10 | 7 |
| 9 | 8.5 | 9.5 | 8 | 8 | 10.5 |
| 10 | 10 | 9.5 | 11 | 7 | 6 |
| 11 | 12 | 9.5 | 11 | 12 | 10.5 |
| 12 | 12 | 9.5 | 11 | 12 | 10.5 |
| 13 | 12 | 9.5 | 11 | 12 | 10.5 |
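The fractional entries in Table 2 are average ranks: tied danger scores share the mean of the rank positions they occupy. A small routine (our illustration, not the authors' code) makes the convention explicit:

```python
def average_ranks(scores):
    # Rank scores from most to least dangerous, giving tied scores the
    # average of the ranks they occupy -- the convention behind the
    # fractional entries (3.5, 9.5, ...) in Table 2.
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    ranks = [0.0] * len(scores)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and scores[order[j + 1]] == scores[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # ranks are 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

# Four situations, two sharing the top danger score:
print(average_ranks([0.9, 0.9, 0.4, 0.1]))  # [1.5, 1.5, 3.0, 4.0]
```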

# Recommendations

# Which Danger Model Should We Use?

The Probabilistic and the Close-Approach Models match our intuitive rankings well, though not particularly each other's. The Trivial Model and the Logarithmic Derivative Model both compare much less favorably.

The Close-Approach Model agrees with our intuition almost exactly, except for switching the rankings of situations 8 and 10. As we expect very little eventual danger from situation 10, and very little immediate danger from situation 8, this is perhaps not so bad. The Space-Time Model would probably do even better: It would certainly rank situation 3 as more dangerous than situation 4 (agreeing with our intuition), and, as it is closely related to the Close-Approach Model, might well rank all the other situations identically.

We recommend the Close-Approach Model.

# How Close Is Too Close?

In the Close-Approach Model, there is a natural map from the eventual danger to distance (by taking the reciprocal). Mahalingam [1999, 26-27] argues that airplanes should be horizontally separated by 3 nautical miles. But is she right?

Assume that each airplane has velocity $v$ ft/s and that each can turn $z$ radians/s. We find the distance $d$ (in ft) at which each airplane must start turning to ensure that the aircraft do not pass within $x$ feet of each other at any time.

Each turn (assuming constant turning rate) forms an arc of a circle of radius $r$. Since the length of an arc subtended by an angle $\theta$ is given by $s = r\theta$, we take the derivative to obtain

$$
r = \frac{ds/dt}{d\theta/dt} = v/z.
$$

Next, we note that the distance $k$ between the centers of the two circles is the sum of $x$ (the shortest distance between the two circles) and the two radii: $k = x + 2v/z$ (Figure 4).

![](images/264d42862a356908219ffa0ed55ea89bcd4bbe8bd5fd1f3cd3ae749f3aa14caf.jpg)
Figure 4. Analysis of avoidance of head-on collision.
+ +Finally, we observe that the initial line of flight of the two airplanes is tangent to both circles and hence two right triangles are formed. In each triangle, the lengths of the sides are $r = v / z$ , $d / 2$ , and hypotenuse $k / 2 = v / z + x / 2$ . We apply the Pythagorean theorem to obtain $(d / 2)^{2} = (v / z + x / 2)^{2} - (v / z)^{2}$ , so that $d = \sqrt{x^2 + 4xv / z}$ . + +We consider two airplanes with velocity 480 knots (811 ft/s) and turning rate $3^{\circ} = \pi /60$ radians/s (a standard "two-minute turn"). To avoid a near miss, the centers of the airplanes must be 700 ft apart. We calculate $d = 6,621$ ft $= 1.09$ nautical miles. So, both pilots must start a turn at a distance of at least 1.09 nautical miles apart, and the controller must identify the problem and communicate to the pilots before this point. Assuming a maximum delay of 15 s between the controller's discovery of the problem and the pilots' response gives a "safety distance" of about 5 nautical miles, or 19 s. + +# Measuring Complexity + +To measure the complexity of the workload faced by an air traffic controller (ATC), we need a basic understanding of the tasks performed by the ATC and how ability to perform these tasks is affected by the number of airplanes in the airspace sector. We present the following algorithm as a model for the decision process of an ATC in detecting and solving conflicts. The order of the steps is drawn from a synthesis by Endsley and Rodgers [1994] of reports on the factors identified by experienced air traffic controllers as relevant to conflict prevention. + +# ATC Decision Algorithm + +1. Scan the radar screen (and other sources of information) for airplanes located close to each other or currently at a safe distance but whose projected paths cross. +2. 
If a pair/group of airplanes at a given time instant appear close to each other, evaluate velocity and heading information to determine whether the airplanes will move to within a minimum separation distance of each other within the "near future" (2 min?).
3. If a potential conflict is detected, scan for other more pressing conflicts.
4. If there are no more-urgent conflicts, alert the pilots of the airplanes detected in Step 2 and formulate alternative routes for them.
5. Assess whether the alternative routes will cause conflicts with projected routes of nearby aircraft.
6. If the alternative routes will cause conflicts, reformulate other alternatives.
7. If there are no impending conflicts, or if the most recent conflicts have been resolved successfully, then take care of other tasks.
8. When the items in Step 7 have been adequately dealt with, return to Step 1.

# Complexity of Step 1

For $n$ airplanes in the airspace, there are $\binom{n}{2} = n(n-1)/2$ pairs, so this operation has complexity of order $O(n^2)$. More realistically, we could divide airplanes into clusters and analyze the complexity of each cluster, though clusters are not completely independent of one another.

# Complexity of Steps 2-6

We incorporate the danger presented by each aircraft pair into a single danger metric $D$ for the airspace by summing the dangers of the individual pairs. It is useful first to assess each danger against a threshold level for ATC intervention; then the measure of danger $D$ for the airspace becomes the number of interventions that must be made.

For each pair of endangered airplanes, the ATC must resolve the situation while making sure that the solution does not conflict with the constraints of any previously solved conflicting pair.
Thus, each conflict constrains the choices for resolving each of the other conflicts; in some cases, after the first $k$ conflicts are solved, no solution for conflict $k + 1$ may exist under the constraints, hence backtracking may be necessary. In other words, this problem is a form of the general constraint satisfaction problem, which is known to be NP-complete [Vardi 1999]. We guess that the worst-case complexity of our problem varies exponentially with the number of pairs: $O(k^{n(n - 1) / 2})$, for some constant $k$.

If the altitude of the airplanes cannot be changed, the ATC can either tell both pilots to bank to the right (from their point of view) or to the left. So, for every pair of aircraft in conflict, there are two possible solutions, hence $2^{D}$ possible solutions for a system with $D$ conflicts.

We must also take into account the decisions of the ATC in Steps 4 through 6, estimating how many operations are involved. For an airspace divided into $C$ clusters, with $n_i$ airplanes in cluster $i$, there are $n_i - 2$ other airplanes to consider in resolving a conflict in cluster $i$. Thus there are (approximately) $2(n_i - 2)D(i)$ interactions that the ATC must consider in a given cluster $i$. The number of interactions added by this measure does not change the complexity of Step 2, $O(k^{n(n-1)/2})$.

# Additional Factors in Workload Complexity

Complexity is also affected by factors such as the rate of airplanes entering and exiting the airspace, the volume of the airspace, and the presence of additional software tools.

The complexity of airplanes entering and exiting is linear in the total number.

Many of the operational errors by ATCs result from ignoring secondary conflicts for too long [Endsley and Rodgers 1997], so the potential for accidents is higher for more airplanes per unit volume.
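To make these orders of magnitude concrete, here is a toy sketch (ours, not the authors' implementation) of the quadratic Step 1 pair scan and the exponential search over bank-left/bank-right assignments in Steps 2-6:

```python
from itertools import product

def pair_count(n):
    # Number of aircraft pairs scanned in Step 1: n(n-1)/2, i.e. O(n^2).
    return n * (n - 1) // 2

def resolve(conflicts, compatible):
    # Brute-force sketch of Steps 2-6: each conflict is resolved by banking
    # both aircraft left or right, and compatible(assignment) checks the
    # constraints imposed by previously resolved conflicts.  Trying all 2^D
    # assignments is what makes the worst case exponential in D.
    for choice in product(("left", "right"), repeat=len(conflicts)):
        assignment = dict(zip(conflicts, choice))
        if compatible(assignment):
            return assignment
    return None  # search exhausted: no feasible resolution

print(pair_count(10))  # 45 pairs to scan
# Toy constraint: two specific conflicts must not both bank left.
sol = resolve(["c1", "c2"], lambda a: not (a["c1"] == "left" and a["c2"] == "left"))
print(sol)
```

The `compatible` predicate stands in for the route-conflict checks of Steps 5-6; in the worst case every assignment must be examined before concluding that no resolution exists.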

Software could identify conflicts and order them by danger level, thereby reducing the complexity in Step 1; nevertheless, the primary complexity comes from solving conflicts once they arise ($O(n^{2})$ for Step 1 vs. $O(2^{n(n - 1) / 2})$ for Step 2). So programs to detect conflicts do not combat the primary complexity faced by an ATC, and they could cause an ATC to take a more passive attitude in searching for potential conflicts not identified by the software. In other words, software could worsen the problem of ignoring secondary conflicts for too long. Programs designed to aid an ATC in identifying conflicts should be designed as guides to the ATC's judgment rather than as automation of ATC functions.

# References

Airbus Industries. 2000. http://www.airbus.com/imperial.html.
Denker, John S. 2000. See How It Flies. http://www.monmouth.com/~jsd/how/htm/mannuver.html.
Endsley, M., and M. Rodgers. 1994. Situation awareness information requirements for en route air traffic control. DOT/FAA/AM-94/27.
__________. 1997. Distribution of attention, situation awareness, and workload in a passive air traffic control task. DOT/FAA/AM-97/13.
Eppstein, David, and Jeff Erickson. 1999. Raising roofs, crashing cycles, and playing pool: Applications of a data structure for finding pairwise interactions. Extended abstract in Proceedings of the 14th Annual ACM Symposium on Computational Geometry (1998), 58-67. Discrete and Computational Geometry 22 (4): 569-592. http://www.uiuc.edu/ph/www/jeffe/pubs/cycles.html.
Federal Aviation Administration. 2000. Welcome to the Office of System Safety. http://www.asy.faa.gov/asy_internet/safety_data/default.htm.
Kremer, H., et al. 1997. Probabilistic vs. geometric conflict probing. National Aerospace Laboratory NLR, Amsterdam.
Laudeman, I.V., et al. 1998. Dynamic density: An air traffic management metric. NASA/TM-1998-112226.
Mahalingam, Sandra. 1999. Air Traffic Control. New Delhi: Kaveri Books.
+ +MAICA: Modeling and analysis of the impact of changes in ATM. Transport Research #71. Belgium. +MICA: MET improvement for controller aids. Transport Research #72. Belgium. +Rodgers, M., et al. 1998. The relationship of sector characteristics to operational errors. DOT/FAA/AM-98/14. +Vardi, M. 1999. The descriptive complexity of constraint satisfaction. Implicit Computational Complexity 1999. http://www.cs.indiana.edu/icc99/vardi.html. +Wiener, E., and D. Nagel. 1988. Human Factors in Aviation. San Diego: Academic Press. + +# The Safe Distance Between Airplanes and the Complexity of an Airspace Sector + +Finale Doshi +Rebecca Lessem +David Mooney +Governor's School +Richmond, VA + +Advisor: Crista Hamilton + +# Summary + +We determine the minimum safe spacing between aircraft and also the complexity of the air traffic control system. + +Taking into account the vortex that a leading plane leaves in its wake, the distance between the tail of one plane and the nose of the next plane should be at least $5.5\mathrm{km}$ or $3.4\mathrm{mi}$ . The minimum spacing between adjacent planes either to the side, above, or below should be at least $730\mathrm{m}$ or 0.45 miles. These distances were calculated using Bernoulli's principle, which states that the internal pressure of a fluid (such as air) decreases when its speed increases. Because the speed of a plane is very high, the pressure around the wings is low. The change in pressure associated with the Bernoulli factor, applied over the facing surface area, results in a force pushing the planes together; the force may alter the plane's flight pattern. + +Finally, if two planes are heading towards each other, there must be enough space between them to perform evasive maneuvers. We find that $12\mathrm{~s}$ is required; at normal flight speed, this translates to $2.9\mathrm{~km}$ or $1.8\mathrm{~mi}$ . + +We define complexity of an airspace sector as the probability of a conflict occurring during a given period of time. 
To determine complexity, we assume that sectors are rectangular solids and that planes fly either in parallel or antiparallel directions. We calculate the probability that a plane enters a sector either too soon after another plane, or that two planes enter the same airway going in antiparallel directions. + +The UMAP Journal 21 (3) (2000) 257-267. ©Copyright 2000 by COMAP, Inc. All rights reserved. Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice. Abstracting with credit is permitted, but copyrights for components of this work owned by others than COMAP must be honored. To copy otherwise, to republish, to post on servers, or to redistribute to lists requires prior permission from COMAP. + +The weaknesses of this model include that all planes are assumed to be Boeing 767s. This model also does not take into account weather changes and multiple conflicts. + +The strengths of this modeling include allowing for passenger safety while slightly shortening the Federal Aviation Administration (FAA) minimum distances, thereby increasing airspace capacity. The complexity model accounts for stress; stability analysis shows that a small change in the environment does not drastically change the model. + +# Background + +According to current FAA separation guidelines, an aircraft must maintain a separation of 5 mi behind the plane and 2 mi adjacent to the plane [Gilbert 1973, 36-37]. + +Numerous benefits could come from reducing separation standards. Primarily, air space capacity would increase. Delays would be reduced because planes would not have to wait for an open airway. Finally, fuel costs would decrease because planes would be rerouted less frequently, with fewer delays. + +Questions remain, however, as to whether other bottlenecks would mitigate these benefits. 
Potential conflicts can occur in one of two areas:

- Over $75\%$ of collisions occur within the terminal area—the space within 30 mi of an airport—because traffic is dense and constantly changing.
- Other conflicts occur en route; most en-route airplanes have a constant altitude and velocity [Gilbert 1973, 91].

Our model concentrates on en-route traffic.

The purpose of the air traffic control system is to avoid collisions. Avoidance depends on two basic sources of air traffic information:

- Air-derived collision avoidance involves either the visual detection of a conflict by the pilot or the radar detection of air disturbances around the plane.
- Ground-derived collision avoidance uses ground-based radar antennae. Controllers monitor radar to detect potential conflicts and contact the pilots involved to give them new courses.

Ground-derived avoidance is the primary tool for maintaining minimum spacing between planes. The controller of an airspace sector gives the pilot continuous and detailed instructions on the flight parameters to use in that airspace, including heading and altitude. When all aircraft operate under the Ground Collision Avoidance System (GCAS), there is an extremely low rate of mid-air collisions [Collins 1977, 123].

The type of radar most commonly used by GCAS for en-route traffic monitoring is Air-Route Surveillance Radar (ARSR), long-range radar with a range of 200 mi and an altitude range of 40,000 ft. It has a slower rotation (3-6 rpm) than short-range radar (10-30 rpm); because the rotation is slower, accuracy and resolution are not as high [Gilbert 1973, 40].

Other disadvantages of the current radar system are [Federal Aviation Administration 1997, 4.2]:

- ARSR lacks sufficient low-altitude coverage because traffic is concentrated at higher altitudes.
- Radar equipment can be unreliable or malfunction.
+- Blind spots exist in the radar pattern, behind large objects and other planes. +- Radar cannot differentiate between targets within $3^{\circ}$ of each other from the radar antenna; these objects blur together on the screen. + +The capacity of the airspace is limited by the minimum spacing between planes and also by the size of the workload placed on the controller team. + +Airspace is broken down into sectors; one controller team manages each sector. The controllers must maintain radio contact with each aircraft located in the sector, and they must identify each aircraft on the radar screen. Each aircraft must be assigned a "travel plan" with a vector, heading, and altitude. Controllers must maintain constant surveillance on each aircraft flight pattern to identify potential conflicts. If the number of aircraft in a sector increases, more work is required by the controllers and more separation between planes is necessary to ensure that controllers spot potential conflicts. Controllers must also "transfer" planes that are exiting to a neighboring sector [Federal Aviation Administration 1997, 4.2]. + +Dividing the airspace into a greater number of smaller sectors would ease the workload of controllers, but the increased work in "sector transfers" and the increased cost in added controller teams and radar would reduce efficiency. + +# Assumptions in the Model + +- All planes behave like Boeing 767s. +- All planes are considered cylindrical rings for physics calculations. +- Weather is not a factor in the safe distance between planes. +- All planes are en route; they are not landing or taking off. +- All planes are flying at about 35,000 ft. +- All planes have a constant speed of $857 \mathrm{~km} /$ hour. +- There is no energy loss due to friction. + +Table 1. Symbols in the model (with standard SI-MKS units). + +
| Symbol | Meaning |
|---|---|
| a | Acceleration |
| α | Angular acceleration |
| C(t) | Number of planes in crucial beginning area of sector at time t |
| d | Distance between planes |
| Δθ | Angular rotation |
| h | Height of sector |
| I | Rotational inertia |
| L | Length of sector |
| ω | Angular velocity |
| P | Pressure of air |
| P(t) | Probability of conflicts at time t |
| R(t) | Number of planes entering sector at time t |
| ρ | Density of air |
| t | Time |
| v | Velocity |
| w | Width of sector |

# Model Development

The model examines the distances that are required between planes from the front, back, above, below, and laterally; different forces and factors account for the different distances required.

When two planes traveling in opposite directions approach each other, we assume that one of the planes descends to avoid a collision. Knowing that it takes 12 s to avoid a collision, we can find the minimum distances. Each plane creates a pair of vortices, areas of strong turbulence, that extend outward and around from the wingtips. A vortex affects the safe distance behind the plane (point $c$ in Figure 1); the vortex from a large plane is large enough to seriously damage another plane that follows too closely.

The other important factor is Bernoulli's Principle, which states that the internal pressure of a fluid (liquid or gas) decreases at points where the speed of the fluid increases. The moving air around the wing reduces the air pressure and would cause a nearby plane to accelerate towards the first plane. The force affecting planes above, below, and to the sides (points $d$, $b$, $e$, and $f$) comes from a combination of the Bernoulli and vortex forces. Enough distance must be allowed between planes to overcome it.

# Distance Required in Front of Plane

According to 241 Air Traffic Control Squadron (ATCS) [1999], 12 s is needed to steer clear of another object: 6 s for the controller to radio the pilot, 4 s for the pilot to start the maneuver, and 2 s to gain enough space to clear. At a speed of 238 m/s, the corresponding distance is 2.9 km.

![](images/608f826531f6ef9459eea571f0e26932e2fd3b076ded191640590ce46627d05d.jpg)
Figure 1. Plane and associated forces.

# Distance Required Behind Plane

The minimum safe distance below and behind the plane is determined by the size of the vortex behind the plane.
Each wing develops a vortex of air approximately 15.0 m in diameter, spinning at 42.7 m/s. The vortex sinks at 2.03 m/s until it is approximately 244 m below the level of the plane; it is usually 9,250 m long. This vortex is a great danger to following planes because it can cause them to roll [241 ATCS 1999].

Because the Boeing 767 is 48.5 m long, and the vortex has a diameter of 15.0 m, a column of air with volume 8,850 m³ acts upon a plane flying in a vortex from another plane. The air density at 10.7 km above the ground (cruising altitude for Boeing 767s) is 0.380 kg/m³. Hence, the mass of air acting on the plane is 3,360 kg if the plane is flying into the vortex.

We assume that all of the angular momentum of the air is transferred to the plane:

$$
I_{\mathrm{air}} \omega_{\mathrm{air}} = I_{\mathrm{plane}} \omega_{\mathrm{plane}}, \tag{1}
$$

where $I$ is rotational inertia and $\omega$ is angular velocity.

The air is a spinning disk, whose rotational inertia is given by

$$
I_{\mathrm{air}} = \frac{mr^{2}}{2} = 0.5 \times 3360 \times (7.50)^{2} = 9.77 \times 10^{4}\ \mathrm{kg \cdot m^{2}}.
$$

The angular velocity of the air at a given distance $d$ from the plane can be found by using the properties of the vortex. As it leaves the plane, winds near the edge of the vortex measure 45.72 m/s. The circumference of the vortex is 47.1 m; it therefore takes 1.05 s for the air to travel through one rotation, which means that the initial angular velocity is 6.00 rad/s. The vortex disappears in 9,250 m; because the plane flies at 238 m/s, this is 38.8 s after the vortex was created.
The angular deceleration, then, is found using the formula

$$
\alpha = \frac{\omega_{f} - \omega_{i}}{t} = \frac{0 - 6.00}{38.8} = -0.155\ \mathrm{rad/s^{2}}.
$$

The air decelerates at this rate. The plane moves at 238 m/s, so it will take $d/238$ s to travel distance $d$. The equation for angular velocity at distance $d$ becomes:

$$
\alpha = \frac{\omega_{f} - \omega_{i}}{t} = \frac{\omega_{f} - 6.00}{d/238} = -0.155\ \mathrm{rad/s^{2}}.
$$

We solve to find angular velocity $\omega_{f}$ in terms of $d$:

$$
\omega_{f} = (-6.49 \times 10^{-4})d + 6.00\ \mathrm{rad/s}.
$$

Next, we consider the angular velocity and rotational inertia. We assume that the plane spins around its central axis. Because most of the plane's mass is located to the outside, we assume that the plane is a rotating ring. Using the mass and radius of the Boeing 767, we find the rotational inertia:

$$
I = mr^{2} = (156500)(2.85)^{2} = 1.27 \times 10^{6}\ \mathrm{kg \cdot m^{2}}.
$$

This calculation assumes that the plane is flying into the vortex, not across it. Across the vortex, its rotational inertia would be that of a pivoting rod, given by:

$$
I = \frac{ml^{2}}{3} = \frac{(156500)(48.5)^{2}}{3} = 1.23 \times 10^{8}\ \mathrm{kg \cdot m^{2}}.
$$

This is far greater inertia than for the plane flying into the vortex. Less inertia means that it takes less force to turn the plane, meaning that the situation is more dangerous. Therefore, to determine the safe distance, we consider further only the approach from behind.

The plane starts with zero angular velocity. We assume that it cannot turn more than $5^{\circ}$ (0.0873 radians) in the crossing period (0.938 s) without discomfort and loss of control. Further, we assume that the plane is climbing at an angle of $5^{\circ}$ or steeper as it goes through the vortex.
At this angle, the plane, traveling at 238 m/s, would be in the vortex for no more than 0.938 s. The final angular velocity of the plane is found using these data and the equations

$$
\Delta\theta = \omega_{i}t + \alpha t^{2}/2, \quad 0.0873 = 0 + \alpha \times (0.938)^{2}/2, \quad \alpha = 0.198\ \mathrm{rad/s^{2}};
$$

$$
\omega_{f}^{2} = \omega_{i}^{2} + 2\alpha\,\Delta\theta = 0 + 2 \times 0.198 \times 0.0873, \quad \omega_{f} = 0.186\ \mathrm{rad/s}.
$$

Substituting these values into (1) gives

$$
(9.77 \times 10^{4})\left[(-6.49 \times 10^{-4})d + 6.00\right] = (1.27 \times 10^{6})(0.186),
$$

$$
d = 5510\ \mathrm{m}.
$$

Adding 152 m for radar uncertainty, we get an unsafe zone 5.7 km long behind a plane.

# Distance Required Vertically and Laterally

We use Bernoulli's Principle to find the minimum distance on the sides and up and down between planes. The distance between the two planes is $d$, the initial velocity of the vortex is 45.7 m/s, and the acceleration of the air is the same as the acceleration of the vortex, $-1.766\ \mathrm{m/s^{2}}$.

We apply the equation

$$
v_{f}^{2} = v_{i}^{2} + 2ad = (45.7)^{2} + 2(-1.77)d = 2.09 \times 10^{3} - 3.53d.
$$

Next, to determine the change in pressure, we apply Bernoulli's equation governing fluids:

$$
P_{i} + \frac{\rho_{i} v_{i}^{2}}{2} = P_{f} + \frac{\rho_{f} v_{f}^{2}}{2},
$$

where $\rho$ is air density. The initial velocity is zero, and at 35,000 ft the density is 0.380 kg/m³ and the atmospheric pressure is 2,340 Pa:

$$
2.34 \times 10^{3} = P_{f} + \frac{(0.380)(2.09 \times 10^{3} - 3.53d)}{2}, \qquad P_{f} = 1.95 \times 10^{3} + 0.670d\ \mathrm{Pa}.
$$

The change in pressure is

$$
2.34 \times 10^{3} - (1.
95 \times 10^{3} + 0.670d) = 397 - 0.670d.
$$

Since pressure equals force divided by area, the force exerted can be found by using the surface area of the plane, which varies depending on whether the sides or the top and bottom are being considered.

# Sides

The length of a Boeing 767 is 48.5 m and the width is 5.7 m. Assuming that the plane is a cylinder and half of it is facing the side, the surface area affected is $(48.5)(5.7)\pi/2 = 434\ \mathrm{m}^2$. (This is an overestimate, since we neglect the curvature of the body of the plane.) Since pressure $=$ force/area, we have $397 - 0.670d = \mathrm{force}/434$, and the force is $1.72 \times 10^{5} - 291d$. We assume that the maximum lateral acceleration, before either the ride becomes too turbulent or the plane loses control, is $0.1\ \mathrm{m/s}^2$. Newton's second law (force $=$ mass $\times$ acceleration) becomes $1.72 \times 10^{5} - 291d = (1.57 \times 10^{5})(0.1)$, leading to $d = 538\ \mathrm{m}$.

The total distance on the sides now needs to be calculated:

Distance $= 538 +$ Vortex Width $+ 0.5$ (wingspan) $+$ Uncertainty factor for radar

$$
= 538 + 15 + 47.6/2 + 152 = 729\ \mathrm{m}.
$$

Therefore, 729 m must be allowed on each side of the plane.

# Above and Below

For the vertical viewpoint, we have

Surface area = half cylinder + wing area = $(48.5)(5.7)\pi/2 + 283 = 717\ \mathrm{m}^2$.

The pressure difference, calculated previously, is $397 - 0.670d$. We have

$$
\text{Pressure} = \text{Force}/\text{Area},
$$

$$
397 - 0.670d = \text{Force}/717,
$$

$$
\text{Force} = 2.85 \times 10^{5} - 481d.
$$

Using Newton's second law as before, we have

$$
F = ma, \quad 2.85 \times 10^{5} - 481d = (1.57 \times 10^{5})(0.1), \quad d = 559\ \mathrm{m}.
$$

The total vertical distance is

Distance $= 559 +$ Vortex Width $+$ Uncertainty factor $= 559 + 15 + 152 = 727\ \mathrm{m}$.

Therefore, for safety, a plane needs 729 m on each side and 727 m above and below.

# Complexity

We define complexity, from a workload perspective, to be the probability of a conflict—and therefore of the need for an air traffic controller (ATC) to intervene—in a certain period of time. A high likelihood of conflict in a short period of time means a lot of potential stress for the ATC. In addition, to accommodate time to recover from stress, the 5 min before the time being considered is also included in the definition of complexity.

We first find the probability of a conflict at a certain point in time. We assume that sectors are rectangular solids ($L \times w \times h$, in m). We divide a sector into blocks 727 m tall and 729 m wide, the minimum spacing for Boeing 767 planes to be apart.

We assume that planes fly in either parallel or antiparallel directions. There are two possibilities for conflict:

- A plane enters a block too soon after another plane. The safe distance between planes following each other—the sum of the minimum distances in front of and behind a plane, 8,380 m—should not be violated; so planes entering a block should be at least 35.2 s apart.
- Two planes enter the same block from opposite ends, in antiparallel directions.

We assume that planes enter a block according to a random process, so that entries to the block are independent.

Let $m(t)$ be the probability density function for entry of a plane into a block, divided equally between the two directions.

Suppose that a plane enters the block at time $t$.

- The probability that another plane enters the block in the same direction as the first plane but too close behind it is

$$
P_{1}(t) = \frac{1}{2} \int_{t}^{t + 35.2} m(s)\, ds.
$$

- The probability that another plane enters the block from the opposite direction while the first plane is in the block is

$$
P_{2}(t) = \frac{1}{2} \int_{t}^{t + L/238} m(s)\, ds.
$$

Hence, the conditional probability of conflict in a block, given entry of a plane into the block at time $t$, is the sum of the two probabilities:

$$
P(\text{conflict} \mid \text{plane enters at } t) = \frac{1}{2} \left[ \int_{t}^{t + 35.2} m(s)\, ds + \int_{t}^{t + L/238} m(s)\, ds \right].
$$

The unconditional probability density of conflict, assuming independent arrivals of planes in the block, is

$$
P(t) = P(\text{conflict} \mid \text{plane enters at } t)\, P(\text{plane enters at } t) = P(\text{conflict} \mid \text{plane enters at } t)\, m(t).
$$

The most important component of complexity is the probability of a conflict; the most complex situation is a high likelihood of conflict over a sustained period of time.

However, a situation is more complex if the ATC is already stressed from previous problems, so in our definition of complexity for a block over an interval ending at time $t_{1}$ we add, weighted at $10\%$, the probability of a conflict during the 5 min (300 s) preceding:

$$
\text{Complexity} = P(t_{1}) + 0.1 \int_{t_{1} - 300}^{t_{1}} P(t)\, dt.
$$

While software to alert controllers of potential conflicts would add to the safety of flight, it would also add to the complexity. Most models and distance estimates, including this model, tend to overestimate the safe distance between planes. Even if this were remedied, there is also uncertainty in the exact position of the plane due to the imprecision of the radar, which would cause the software to alert the ATC even when no conflict was likely to occur.
This would add to the complexity, because the controller would have more potential conflicts through which to sift. However, this added complexity would add to the safety because most possible conflicts would receive ample warning. + +# Stability + +We did a stability analysis in regard to changes in the mass of the planes, the velocity of the vortex, and the pilot's reaction time. The mass of the planes or the velocity of the vortex would be different for a different type of plane, and pilots' reaction times may vary. + +With the lateral Bernoulli forces, if the initial velocity of the vortex is changed by $2.2\%$ , the distance is changed by $4.8\%$ . If the mass is changed by $6.4\%$ , the distance is changed by $0.5\%$ . With the vertical Bernoulli forces, if the initial velocity of the vortex is changed by $2.2\%$ , the distance is changed by $4.5\%$ . + +The effect of the changes in the pilot's reaction time was analyzed in regard to the distance in front of the plane. If the time is changed by $8.3\%$ , the distance also changes by $8.3\%$ . Therefore, this model is stable with regard to all of the variables tested. + +# Strengths and Weaknesses + +The model provides for safety. Parameters, such as the mass of the plane, can be changed as appropriate. However, the model does not accommodate two planes of different models. The complexity section accounts for controller stress. The model's minimum spacings for aircraft, including taking into account 500 ft of radar uncertainty, reflect FAA guidelines reasonably well. + +# References + +241 Air Traffic Control Squadron (ATCS). 1999. 241 ATCS Home Page. Modified December 1999. Accessed February 2000. http://aw139.ang.af.mil/241atcs. +Association for the Advancement of Amateur Aeronautics and Astronautics. 1999. Altitude Range: General. Modified November 1999. Accessed February 2000. http://www.tinkertech.com/a5/outline/altrange.gen.html. +Boeing Corporation. 2000. Boeing Home Page. Modified February 2000. 
Accessed February 2000. http://www.boeing.com.
Collins, Richard L. 1977. Flying Safely. New York: Delacorte Press.
Federal Aviation Administration. 1997. Pilot's Handbook of Aeronautical Knowledge. Washington, DC: Government Printing Office.
Giancoli, Douglas C. 1991. Physics: Principles with Applications. 3rd ed. Englewood Cliffs, NJ: Prentice Hall.
Gilbert, Glen A. 1973. Air Traffic Control: The Uncrowded Sky. Washington, DC: Smithsonian Institution Press.
Machol, Robert E. 1975. Navigation standards over the North Atlantic. Interfaces 5 (No. 2, Pt. 2) (February 1975): 62-71.
_______. 1995. Thirty years of modeling midair collisions. Interfaces 25 (5) (September-October 1995): 151-172.
Marks, Robert W. 1969. The New Dictionary and Handbook of Aerospace. New York: Frederick A. Praeger Publishing.
Shevell, Richard S. 1983. Fundamentals of Flight. Englewood Cliffs, NJ: Prentice Hall.
Trivedi, Kishor S. 1982. Probability and Statistics with Reliability, Queueing, and Computer Science Applications. Englewood Cliffs, NJ: Prentice Hall.
_____. 1998. A Concept Paper for Separation Safety Modeling: An FAA/Eurocontrol Cooperative Effort on Air Traffic Modeling for Separation Standards. Washington, DC: GPO.
Wesson, Robert, et al. 1981. Scenarios for Evolution of Air Traffic Control. Santa Monica, CA: Rand Corporation.

# The Iron Laws of Air Traffic Control

Kevin Arnett

Jonathan S. Gibbs

John J. Horton

U.S. Military Academy

West Point, NY

Advisor: David Bailey

# Introduction

We focus our analysis on two key system design specifications:

- how the Air Traffic Control System accomplishes its requirements (the safe routing of aircraft through a sector of airspace), and
- what computational and time demands are generated by the traffic load.

We develop a solution that takes into account factors such as knowledge of proposed flight paths, orientation of aircraft, and acceptable probability of an accident.
We develop two separate models to analyze situations where in-flight conflicts arise. + +- The first examines the position of aircraft through three-dimensional normal probability distributions to develop the likelihood of a collision. +- The second uses vector calculus and dynamics to develop real-time data on the likely trajectory of an aircraft, making no assumptions that the aircraft are flying along a predetermined path. + +Two other models use analogies to other fields to provide metrics of complexity of the workload of the air traffic controller (ATC), one focusing on the inherent complexity of an airspace (analogous to fluid flow) and the other on number of aircraft. + +Of our four models, we are able to validate only two, the probability distribution model for likelihood of collision and the airspace complexity model. The implementation and validation are straightforward and we omit them due to time considerations. + +# Assumptions + +- All aircraft flight paths are filed with the FAA prior to departure and are available to all ATCs. +- We treat an aircraft as a sphere with a set radius, whose orientation is insignificant. +- Two distinct types of errors cause flight path deviation: + +- systematic error (pilot error, emergencies, failed equipment, etc.) and +- random error (differences in weather, GPS signals, aircraft characteristics). + +- Relativistic effects are negligible. +- The curvature of the Earth need not be accounted for explicitly (the ease with which transformation matrices can be designed precludes this from being of great import). + +# Requirement A + +All aircraft are required by law to file flight plans with the Federal Aviation Administration. ATCs generally assume that an aircraft will follow its filed flight plan but are attentive to deviations. There is a duality in the ATC job: on one hand, ATCs plan as if everything will perform in a predictable manner, while they simultaneously must be vigilant in case things do not. 
The two different roles of the ATC are reflected in our model. Based on how an ATC monitors a sector, we break our model into two separate submodels: + +- The Random Effects Model predicts potential conflicts between two aircraft based on flight path data and characteristics of the aircraft, weather, etc. +- The Contingency Model addresses routine real-time monitoring of aircraft. It makes no assumptions about aircraft following a given flight path. The model serves as an alert system for an ATC, warning of conflicts caused by systemic errors as they arise. + +Both models run on real-time data, but the Contingency Model has a higher priority in terms of computing resources. The Random Effects Model is re-evaluated as fluctuations in velocity affect the flight path. The Contingency Model is more relevant in situations where airplanes are not well spaced and where there is a high probability for unplanned deviations in flight path. In contrast, the Random Effects Model is more suited to the operational planning of flight paths and the monitoring of major air corridors. + +While certainly different, these two models both answer the question of what constitutes "too close," by examining situations that make a collision likely and advising the ATC of the need for intervention. + +It may seem that we should determine a safe separation between two planes, but this question is so related to orientation as to be meaningless. The better approach is to determine what flight paths and velocities will cause a collision. + +# Random Effects Model + +We represent an expected path with a vector-valued function $r_p(t)$ such that + +$$ +r _ {p} (t) = x (t) \vec {i} + y (t) \vec {j} + z (t) \vec {k}, +$$ + +where $t$ is time and $x, y, z$ are functions that describe the aircraft's position. However, due to a variety of factors such as weather and instrumentation inaccuracies, the actual position is not fixed but rather is dependent on three random variables. 
We define the actual position vector as

$$
r_{p}^{\prime}(t) = x^{\prime}(t)\vec{i} + y^{\prime}(t)\vec{j} + z^{\prime}(t)\vec{k} = [x(t) + \epsilon_{x}]\vec{i} + [y(t) + \epsilon_{y}]\vec{j} + [z(t) + \epsilon_{z}]\vec{k},
$$

where each of the three error terms has a normal distribution centered at 0, $\epsilon_{i} \sim N(0, \sigma_{i}^{2})$ for dimension $i$; each dimension can have a different variance. We assume independence of the error terms. We identify values for the variances based on data from the FAA (see Appendix A). Essentially, this model describes a probability shell surrounding the aircraft (Figure 1).

![](images/ea0284930d1f4bf1430e25375cfb288d613348f50f4cc3efb119525c507087d8.jpg)
Figure 1. Probability shell for an aircraft in flight (not to scale).

The probability of being within $\Delta_{i}$ of the predicted location in dimension $i$ is

$$
P_{i}(\Delta_{i}) = \Phi(\Delta_{i}/\sigma_{i}) - \Phi(-\Delta_{i}/\sigma_{i}) = 2\Phi(\Delta_{i}/\sigma_{i}) - 1,
$$

where $\Phi$ is the standard normal cumulative distribution function. Assuming independence of the error terms in the three directions, the probability that the plane will be within $(\Delta_x, \Delta_y, \Delta_z)$ of its projected position is

$$
g(t, x^{\prime}, y^{\prime}, z^{\prime}) = P_{x}(\Delta_{x})\, P_{y}(\Delta_{y})\, P_{z}(\Delta_{z}).
$$

Consider two aircraft with probability functions $g$ and $h$, assumed independent. The probability of a collision at a given point is the probability that both planes occupy that point at the same time, namely,

$$
c(t, x^{\prime}, y^{\prime}, z^{\prime}) = g(t, x^{\prime}, y^{\prime}, z^{\prime}) \cdot h(t, x^{\prime}, y^{\prime}, z^{\prime}).
$$

Thus, we determine the viability of two flight paths by examining the probabilities associated with each point at every instant.
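As a concrete sketch of these formulas (the function names are ours, not the authors'; the $\sigma_i$ would come from Appendix A):

```python
from math import erf, sqrt

def phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def p_within(delta, sigma):
    """P_i(Delta_i): probability of being within +-delta of the
    predicted coordinate in one dimension, 2*Phi(delta/sigma) - 1."""
    return 2.0 * phi(delta / sigma) - 1.0

def g(deltas, sigmas):
    """Product over the three independent dimensions."""
    p = 1.0
    for d, s in zip(deltas, sigmas):
        p *= p_within(d, s)
    return p

def collision(deltas_a, sigmas_a, deltas_b, sigmas_b):
    """c = g * h: both (independent) aircraft occupy the same point."""
    return g(deltas_a, sigmas_a) * g(deltas_b, sigmas_b)
```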
This is a very computationally intensive prospect, but there are methods that provide solutions in reasonable amounts of time. + +# Implementation and Validation + +We wrote a program in Visual Basic for Applications on top of a Microsoft Excel spreadsheet. Flight data are placed in the worksheet, which the program uses as input. We used the program to validate the model for two scenarios: + +- The two planes collide. +- The two planes cross paths but at different times. + +The results of these simulations are presented and discussed in Appendix B. + +# Contingency Aircraft Tracking System + +For aircraft deviating grossly from their flight plans, the Random Effects Model is not useful. The Contingency Aircraft Tracking System is designed to alert the ATC to any aircraft that could be on a collision course. + +Using data collected on the positions of the aircraft, by either GPS or some other monitoring system, the system uses the path that the aircraft has been on to predict where it will be in the future. Those future positions are used to designate a sector of air space as off limits to other aircraft. When two or more aircraft are predicted to pass through the same sector, the ATC is alerted. This tracking system is based on several assumptions: + +- Aircraft do not accelerate in the direction of travel while in the airspace. +- Aircraft turn at a constant normal acceleration or move in a straight path. +- The position locating systems are accurate and provide continuous updating. + +Most major airports have the tracking ability described in the third assumption; continuous updating compensates for the other assumptions of zero tangential acceleration and constant normal acceleration. + +To predict possible future positions of the aircraft, we use three past positions. We calculate the vectors from the first to the second and from the second to the third and determine the angle between them. 
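The three-fix turn test can be sketched as follows (the tolerance value is a placeholder of ours, not a figure from the paper):

```python
import math

def turn_angle(p1, p2, p3):
    """Angle (radians) between the leg p1->p2 and the leg p2->p3,
    computed from three successive (x, y) position fixes."""
    u = (p2[0] - p1[0], p2[1] - p1[1])
    v = (p3[0] - p2[0], p3[1] - p2[1])
    dot = u[0] * v[0] + u[1] * v[1]
    nu, nv = math.hypot(*u), math.hypot(*v)
    # Clamp against floating-point drift before acos.
    c = max(-1.0, min(1.0, dot / (nu * nv)))
    return math.acos(c)

STRAIGHT_TOL = math.radians(1.0)  # hypothetical tolerance

def is_straight(p1, p2, p3):
    """True if the track is close enough to a straight line."""
    return turn_angle(p1, p2, p3) < STRAIGHT_TOL
```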
If the angle is below a certain tolerance, we approximate the path of the aircraft by a straight line; otherwise, we approximate it by the arc of the circle defined by the three points. In the latter case, since the aircraft could also stop turning or turn less sharply, we generate a line for the path of the aircraft if it keeps its current velocity vector, tangent to the circle. Anywhere between the arc of the circle and the line (a planar, fin-shaped area) is a possible future position. This system approximates the fin with a triangle defined by the current aircraft position, the position the aircraft would occupy if it continued along the arc of the circle for the time step, and the position if it flies straight for the rest of the time step. [EDITOR'S NOTE: We omit the vector calculus details of the calculations involved.]

# Requirement B

Applying vector calculus and multivariable analysis, we relate the characteristics of the flow of traffic in a sector to the complexity of the ATC's job in controlling the sector. By examining the influence of aircraft entering and exiting the sector in three different respects (instantaneous, over a time interval, and over a particular time of day), we refine the model.

# Determining the Complexity Inherent in a Sector

The sector is defined by its size (boundaries) and by the objects that impact the flow of traffic through it. We assume that the sector extends from the ground upward through all space (no ceiling) and is bounded by cylindrical walls following the shape of the base of the sector (ground projection).

We model the airport as a vector field with the following properties:

- Aircraft are drawn equally from all points toward the location of the airport.
- The size of the airport determines the magnitude of its attraction on aircraft traffic.
The simplest field satisfying these requirements is

$$
A_{i}(x, y) = k_{i} \cdot \frac{(x_{i} - x)\vec{i} + (y_{i} - y)\vec{j}}{|(x_{i} - x)\vec{i} + (y_{i} - y)\vec{j}|}.
$$

The field $A_{i}$ for airport $i$ points from any point $(x, y)$ in the plane toward the airport location $(x_{i}, y_{i})$. Furthermore, we assign each vector a magnitude $k_{i}$, representing the impact of the airport on the traffic in the sector.

The properties for obstacle fields in the sector are:

- An obstacle's influence on a point is limited by the distance from the point to the center of the obstacle (obstacles create local effects).
- Physically larger obstacles impact aircraft farther from their centers than smaller obstacles do (Dallas/Ft. Worth has a greater influence on traffic than a municipal landing strip).
- The impact of the obstacle on traffic is related to properties like permanence, physical height, and the way that it impacts air traffic (a small town should have a lesser impact than a similarly sized downtown of skyscrapers).

We use the following function for obstacle intensity:

$$
O_{i}(x, y) = -h_{i} \exp\left[-\frac{(x - q_{i})^{2} + (y - w_{i})^{2}}{l_{i}}\right].
$$

Points more distant from the center of the obstacle $(q_{i}, w_{i})$ get values closer to zero, and the parameter $l_{i}$ reflects how large (geographically) the obstacle is. Our final property is upheld by the use of $h_{i}$, which determines the relative impact of the obstacle. The negative sign ensures that the obstacle repulses traffic.

The impact of a number of obstacles is just the sum $O_{\mathrm{TOT}}$ of their impacts, and we get the vector field for the obstacles by taking the gradient $B(x, y) = \nabla O_{\mathrm{TOT}}$ of the total impact. This creates a field in which traffic tends to flow away from obstacles in the radial direction.
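A minimal numerical sketch of the obstacle potential and its gradient field (the function names and the finite-difference gradient are ours):

```python
import math

def obstacle_potential(x, y, obstacles):
    """O_TOT: sum of -h * exp(-((x - q)^2 + (y - w)^2) / l) over
    obstacles, each given as a tuple (q, w, h, l)."""
    return sum(-h * math.exp(-((x - q) ** 2 + (y - w) ** 2) / l)
               for q, w, h, l in obstacles)

def obstacle_field(x, y, obstacles, eps=1e-6):
    """B = grad O_TOT, approximated by central differences."""
    bx = (obstacle_potential(x + eps, y, obstacles)
          - obstacle_potential(x - eps, y, obstacles)) / (2 * eps)
    by = (obstacle_potential(x, y + eps, obstacles)
          - obstacle_potential(x, y - eps, obstacles)) / (2 * eps)
    return bx, by
```

Because each obstacle sits at the bottom of a potential well, the gradient points uphill, i.e., radially away from the obstacle center.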
We then combine the obstacle and airport fields into a single flow field through simple addition.

# Total Complexity

We characterize the complexity of the flow field that we have created. The flow of aircraft through a sector is analogous to the flow of fluid particles during bulk flow. In fluid mechanics, a laminar flow is marked by smoothness and predictability and is irrotational (no eddies). Turbulent flow, far more difficult to analyze, is choppier, less predictable than laminar flow, and rotational. Turbulent flow through the sector serves as our model for high complexity, while laminar flow is analogous to a sector with very low complexity.

The curl of a vector field measures the level of rotationality of a flow. We evaluate the magnitude of the curl of the vector field at every point and use this as the measurement of the total complexity of a sector.

# Complexity by Number of Aircraft

We examine the impact of traffic volume on complexity of the workload for ATCs. We delineate three separate components:

- instantaneous complexity,
- complexity over a time interval, and
- complexity over a particular time of day.

From most demanding to least demanding, the tasks of an ATC are:

- adjust a plane's trajectory to avoid a potential conflict or collision,
- create a minimum spanning tree that highlights the critical relationships among aircraft,
- calculate the distance between aircraft,
- receive and record data from each aircraft, and
- communicate with ATCs in adjacent sectors (hand off aircraft to one another).

# Definition of Complexity

We propose that the complexity of monitoring a sector can be defined similarly to the time complexity of an algorithm, in terms of the number of reference functions required to produce the correct output.
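The curl-based metric described under Total Complexity can be sketched numerically (function names and the grid sampling are ours):

```python
def curl_z(field, x, y, eps=1e-5):
    """z-component of the curl of a planar field F(x, y) -> (u, v):
    dv/dx - du/dy, approximated by central differences."""
    dv_dx = (field(x + eps, y)[1] - field(x - eps, y)[1]) / (2 * eps)
    du_dy = (field(x, y + eps)[0] - field(x, y - eps)[0]) / (2 * eps)
    return dv_dx - du_dy

def sector_complexity(field, xs, ys):
    """Total complexity: sum of |curl| over a grid of sample points."""
    return sum(abs(curl_z(field, x, y)) for x in xs for y in ys)
```

A purely radial (irrotational, "laminar") field scores zero; a rotating ("turbulent") field scores high.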
# Instantaneous Complexity

We assume that something similar to our flight plan validation model is used by the ATC, screening potential conflicts long before the concerned aircraft enter the sector. The real-time complexity of the ATC's workload is then related solely to the need for corrections and the management of aircraft in the sector. The instantaneous case handles deciding if corrections are necessary, which requires examining the relationship between every pair of aircraft. But checking all of the $\binom{n}{2} \sim \mathcal{O}(n^2)$ distances between pairs is inefficient; a human operator visually inspecting graphical output should be able to catch dangerous interactions between aircraft, at least in simple or routine situations. To do so requires the ATC to look at only $n-1$ interactions, examining only the distance between an aircraft and its nearest neighbor. In more complex situations, we would need a better method for determining interactions of concern.

An improved process for extremely vexing scenarios would be to employ a minimum spanning tree algorithm to determine the "edges" of interest, namely, the distances between an aircraft and its closest neighbors. By the definition of a minimum spanning tree, all airplanes that are very close together are connected by an edge, whereas those that are relatively far away from each other are not. Minimum spanning tree algorithms, such as Prim's or Kruskal's, have a complexity of $\mathcal{O}\left(n^{2}\right)$. At first glance, this would not appear to have an advantage over checking the distances between all pairs. But the minimum spanning tree would not have to be determined at every iteration; the tree could be reused for some number of time steps without significant loss of accuracy.
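The spanning-tree idea can be sketched with Prim's algorithm on the complete graph of pairwise aircraft distances, an $\mathcal{O}(n^2)$ implementation (function names are ours):

```python
import math

def mst_edges(positions):
    """Prim's algorithm on the complete graph whose weights are the
    3-D distances between aircraft; returns (i, j) index pairs."""
    n = len(positions)
    in_tree = [False] * n
    best = [math.inf] * n     # cheapest known edge into the tree
    parent = [-1] * n
    best[0] = 0.0
    edges = []
    for _ in range(n):
        # Pick the cheapest aircraft not yet in the tree.
        u = min((i for i in range(n) if not in_tree[i]), key=lambda i: best[i])
        in_tree[u] = True
        if parent[u] >= 0:
            edges.append((parent[u], u))
        # Relax edges from u to every aircraft still outside the tree.
        for v in range(n):
            if not in_tree[v]:
                d = math.dist(positions[u], positions[v])
                if d < best[v]:
                    best[v] = d
                    parent[v] = u
    return edges
```

Each aircraft ends up linked to a close neighbor, so only the $n-1$ tree edges need monitoring between rebuilds.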
Hence, we conjecture that the instantaneous time complexity of monitoring $n$ aircraft falls between $\mathcal{O}(n)$ and $\mathcal{O}\left(n^{2}\right)$.

# Time Interval Complexity

We examine how complexity is related to the number of aircraft passing through the sector over a given interval of time. The difference between this scenario and the instantaneous example is that there are now additional tasks beyond simply monitoring positions:

- receive and record updated aircraft positions,
- deal with aircraft entering the sector,
- hand off aircraft leaving the sector, and
- issue flight adjustments.

As before, we are not interested in how long these tasks take, or even how many are performed; our concern is how the number of these operations that must be performed is related to the number $n$ of planes passing through the sector.

- Updates: To update the positions of the airplanes in the sector, $n$ transmissions from the aircraft must be received and recorded.
- Flight Adjustments: We assume that any given aircraft has a probability $p$ of requiring a path readjustment at a given instant. It is reasonable to conjecture that this probability increases by (at least) some constant amount for each aircraft added to the system, so we postulate that $p = cn$, where $c$ is some constant. The probability that a given plane will require an adjustment, multiplied by the total number of planes in the system, gives an estimated readjustment workload for the ATC of $W = cn^2$.
- Handoffs and Receptions: For a plane entering the sector, its relationship with other aircraft already in the sector must be evaluated; thus, entering aircraft require distance calculations. Aircraft leaving the sector require that the ATC send a radio transmission to the adjacent sector's ATC. An exiting aircraft is no longer tracked, so complexity decreases. We ignore the cost of handing an aircraft off, as it is constant.
+- Total Analysis: The complexity of a time interval is considerably greater than before, as there is the monitoring requirement (conjectured to be between $\mathcal{O}(n)$ and $\mathcal{O}(n^2)$ ), the flight adjustment requirement ( $\mathcal{O}(n^2)$ ), the reception requirement ( $\mathcal{O}(n)$ ), and the handoff requirement (conjectured to be $\mathcal{O}(n^2)$ ). + +# Complexity During a Given Time of Day + +We define the flux of a sector over an interval as $\Delta n = n_{\mathrm{final}} - n_{\mathrm{initial}}$ . We interpreted an interval to be relatively short—10 to 15 min. + +Entering planes have a high complexity cost, while departing planes reduce complexity. Therefore, the times of greatest complexity are not just when $n$ is at a maximum, but rather when $d\Delta n / dt$ is at a maximum. + +# Potential Conflicts + +The most critical aspect of the ATC's job is to reroute aircraft when a potential conflict arises. Once a trajectory is corrected, we expect that the complexity from that problem returns to 0 but the complexity of correcting the next aircraft increases, because of more limited options. We therefore conjecture that the complexity of the addition of another needed correction is related to both the total number of aircraft and the number $q$ of previously corrected aircraft that have not departed the sector. If $k$ is the number of aircraft still requiring course corrections, each additional potential conflict increases total complexity by $\mathcal{O}\left(aq + bk^2\right)$ , with $a, b$ constants. We expect that increasing $k$ has a greater impact than increasing $q$ , because of the added demand of each correction. + +# Effect of Software Tools + +The automated tracking of aircraft would safely allow more aircraft to operate in a given sector. The motivation would primarily be economic. Air traffic would become more complex as automated software increased the ability of the system to handle a complex situation. 
For an ATC, the situation would be no more complex, since it could be handled with the same effort as before.

# Discussion

# Part A: Random Effects Model

We develop a numerical method and implement a software program to analyze simulated flight paths. The results convincingly show the strength of this approach. In all cases, the model correctly predicts what would occur.

This model has three main strengths:

- It can easily accommodate the addition of aircraft to the system.
- It allows the ATC to be confident that the sector is devoid of conflicts before aircraft even enter.
- It is general enough that we can make refinements as new data become available for different types of aircraft and situations.

There are, however, a number of important weaknesses in this model:

- Although the theoretical approach seems straightforward and simple, actually finding the largest value across numerous three-dimensional arrays is computationally forbidding. Since this program would ideally run in real time, the solutions must be achieved quickly (under 30 s).
- The model is not based on any actual data except for FAA guidelines. Ideally, the nature of the random effects could be determined and realistic standard deviations could be used.
- We lack data as to what constitutes a dangerous probability.

# Part A: The Contingency Model

This model is used when aircraft are not following specified flight plans. The model has some very strong points:

- It allows the ATC to keep track of many unplanned paths at once.
- Using an array of blocks of space facilitates the addition of multiple aircraft into the tracking system.
- Continuous updating maintains a current view of where all aircraft are projected to be.
- The model accounts for aircraft turning, when pilots are more likely to miss seeing another aircraft.
The weaknesses of this model are due to some of the constraints on its operation and some of the approximations it makes:

- Because the model approximates possible future positions with a triangle, some space is considered that the aircraft could not possibly enter. We also do not account for space that the aircraft could reach by turning less sharply.
- After a certain amount of time, the position of an aircraft along a circle begins to return to its original position, making the triangle of future positions a bad approximation. Therefore, the model is limited in how far into the future it is useful.
- The model does not account well for progressively sharper turns.

# Part B: Inherent Complexity of a Sector

The only validation possible is to ensure that the model is consistent with intuition. We expect that sectors containing more objects (airports and obstacles) are more complex, and the model supports this by suggesting that these features add to the rotationality of the sector flow. In our implementation, the magnitude of the impact of a particular object (storm, mountain, airport, etc.) is based purely on conjecture. With experimental data and the input of actual ATCs, the model could be calibrated.

The strength of this model is that it provides a metric for complexity that is completely general. Obviously, calibrating the model would make it more useful, but it seems unlikely that new considerations would appear that would invalidate the basic approach and the related assumptions. The weakness of this model is that, although analytically sound, it is extremely difficult to implement numerically.

# Part B: Aircraft-Based Complexity of a Sector

The main strength lies in providing a method for analyzing how the role of the ATC would be affected by automated tracking software.
The flaws are that

- the model relies on a loose analogy between algorithmic and air traffic handling complexities,
- the model has not been calibrated, and
- the mathematics of complexity for the operations that we have defined are not well understood (that is, our operations are not numerical calculations but procedures for which the numbers of calculations are not known).

# Conclusions

DEPARTMENT OF TRANSPORTATION FEDERAL AVIATION AGENCY AIR TRAFFIC CONTROL DIVISION ANALYSIS SUBDIVISION WASHINGTON, DISTRICT OF COLUMBIA

DOT-FAA

7 FEBRUARY 2000

MEMORANDUM THRU: Dr. W. Roland Hamilton, Chief of Analysis, Air Traffic Control Division

FOR: Jane Garvey, Administrator, Federal Aviation Agency

SUBJECT: Summary of report conclusions from Project Star-Chaser

1. The purpose of this memorandum is to outline the findings of Project Star-Chaser and the ramifications that they hold for operations and personnel of the Federal Aviation Agency and the Air Traffic Control System.
2. Project Star-Chaser has arrived at two computational methods for alerting ATCs to instances where two aircraft come dangerously close to one another.

(a) Random Effects Model: This model, if applied in ATC hardware, will allow operators to determine when the flight paths of two aircraft come dangerously close. This should give ATCs confidence that, provided aircraft stick to their flight plans, there is no chance that a multi-aircraft accident can occur.
(b) Contingency Model: This model, if implemented in ATC software/hardware, will provide a real-time analysis of the movement of aircraft in an ATC's sector. If an aircraft deviates from its flight plan (having to circle above an airport during severe delays, for example), the ATC has a useful tool to help determine how the real-time behavior of aircraft leads directly to possible proximity conflicts.

3.
Perhaps more important than developing these computational tools, however, Project Star-Chaser has been largely focused on the systemic complexity associated with Air Traffic Control. We have broken the analysis of this complexity into several portions. + +(a) Inherent Complexity of a Sector: By examining the impact that objects have on a sector of airspace (namely, airports and obstacles), we have developed a means to analyze the complexity of a sector. This is the key to evaluating our current sector arrangement and determining if geographical boundaries should necessarily determine how the National Airspace is subdivided. +(b) Instantaneous, Time Interval, and Time of Day Complexity: These three areas are iterative refinements of how the aircraft in the sector lead to increased complexity. Even with employing a minimum spanning tree as a best case scenario, the complexity of an ATC's workload is still $\mathcal{O}\left(n^{2}\right)$ . This means that, under the present circumstance, increasing the number of aircraft leads to a much higher complexity for the ATC. +(c) Impact of Collision Corrections: The Project further examined the complexity model to account for how complexity is affected by the number of course corrections needed. We find that this complexity is $\mathcal{O}\left(q + k^2\right)$ , where $q$ is the number of corrections already made and $k$ is the number of corrections outstanding. This result suggests that a backlog of corrections greatly increases the complexity of the ATC's job. +(d) Impact of Advanced Information Systems: We predict that the use of more advanced and autonomous hardware/software packages will reduce the job complexity for the ATC. + +4. Advances will reduce the workload placed on ATCs and will increase efficiency and the volume of air travel. + +Unfortunately for ATCs, the end result of system improvements is no change. 
Though advances in guidance, tracking, and other technological areas seem to offer hope to reduce the stress and worry of the job, market and other economic equilibrium forces will quickly return the system to its maximum safe carrying capacity.

5. The Point of Contact for this memorandum is the undersigned at J_Gibbs@Star_Chase.org.

# Appendix A: Identification of the $\sigma$'s

We assume that the net effect of variations in an aircraft's actual position compared to its predicted position is zero, since the variations are nonsystemic. In addition, the additive combination of many random variations strongly suggests that the error term is normally distributed.

We do not have a good way of measuring the standard deviations in the three coordinate directions. Instead, we "reverse engineer" the standard deviations based on FAA guidelines for the minimum separation of aircraft in flight.

Table A1. FAA guidelines and corresponding standard deviations.
| Orientation | Altitude | Minimum separation | Standard deviation (km) |
|---|---|---|---|
| Laterally | N/A | 5 mi | 1.563 |
| Vertically | <29,000 ft | 1,000 ft | 0.304 |
| Vertically | >29,000 ft | 2,000 ft | 0.430 |
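The last column comes from solving a normal-coverage equation for $\sigma$; a stdlib sketch of that inversion (the function name and the bisection in place of a CAS are ours):

```python
from math import erf, sqrt

def phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def sigma_for(min_dist, coverage=0.999):
    """Solve 2*Phi(min_dist / sigma) - 1 = coverage for sigma by
    bisecting for the standard-normal quantile z with Phi(z) = target."""
    target = 0.5 * (1.0 + coverage)
    lo, hi = 0.0, 10.0
    for _ in range(60):
        mid = (lo + hi) / 2.0
        if phi(mid) < target:
            lo = mid
        else:
            hi = mid
    z = (lo + hi) / 2.0
    return min_dist / z
```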
We determine the values for the standard deviations in Table A1 by assuming that the FAA set guidelines so that there would be a .999 chance that the actual position of the plane would be within the accepted range; hence we use for each dimension the equation

$$
\frac{1}{\sigma \sqrt{2\pi}} \int_{-\text{min dist}}^{\text{min dist}} \exp\left(\frac{-s^{2}}{2\sigma^{2}}\right) ds = .999
$$

and solve for $\sigma$ using MathCad.

# Appendix B: Results of Simulations

Figure B1 (next page) shows the result of the collision probabilities procedure on data for planes whose paths intersect at the same point at the same time. Figure B2 shows the result for a near-miss: paths that intersect in space but not at the same time. The maximum probability in the near-miss case differs from the maximum probability in the collision case by 11 orders of magnitude.

![](images/ae6ade440054fff416b9b470873037adb5fcc8f0c1644eddf4326e5063e1e880.jpg)
Figure B1. Collision probabilities for the "collision" data set.

![](images/b1a6df3749b432cfb19d0c8be2baefbbca1d94b93f3f41507d735b696f05f06d.jpg)
Figure B2. Collision probabilities for the "near-miss" data set.

# References

Baird, D.C. 1995. Experimentation: An Introduction to Measurement Theory and Experimental Design. 3rd ed. Englewood Cliffs, NJ: Prentice Hall.
Denver Air Route Traffic Control Center (ARTCC) Homepage. 2000. http://www.tc.faa.gov/ZDV/.
Devore, Jay L. 1995. Probability and Statistics for Engineering and the Sciences. 4th ed. Boston, MA: Duxbury Press.
Eppstein, David, and Jeff Erickson. 1999. Raising roofs, crashing cycles, and playing pool: Applications of a data structure for finding pairwise interactions. Extended abstract in Proceedings of the 14th Annual ACM Symposium on Computational Geometry (1998), 58-67. Discrete and Computational Geometry 22 (4): 569-592. http://www.uiuc.edu/ph/www/jeffe/pubs/cycles.html.
Hibbeler, R.C. 1998.
Engineering Mechanics: Dynamics. 8th ed. Upper Saddle River, NJ: Prentice Hall. +Horowitz, Ellis, Sartaj Sahni, and Sanguthevar Rajasekaran. 1997. Computer Algorithms / C++. New York: Computer Science Press. + +Rosen, Kenneth H. 1999. Discrete Mathematics and Its Applications. Boston, MA: WCB McGraw-Hill. +Schey, H.M. 1997. Div, Grad, Curl, and All That: An Informal Text on Vector Calculus. 3rd ed. New York: W.W. Norton. +Spiegel, Murray, and John Liu. 1999. Mathematical Handbook of Formulas and Tables. 2nd ed. New York: McGraw-Hill. +Stewart, James. 1998. *Calculus II*. Special Edition for the United States Military Academy, Fall/Spring 1998/1999. Boston, MA: Brooks/Cole Publishing Company. +Ugural, Ansel, and Saul Fenster. 1995. Advanced Strength and Applied Elasticity. 3rd ed. Upper Saddle River, NJ: Prentice Hall. + +# You Make the Call: Feasibility of Computerized Aircraft Control + +Richard D. Younger + +Martin B. Linck + +William P. Woesner + +University of Colorado + +Boulder, CO + +Advisor: Anne M. Dougherty + +# Introduction + +We investigate whether some of the work done by air traffic controllers (ATCs) could be handled by computers. Automated systems could act as watchdogs, heading off crises before they become catastrophes. Specifically, we investigate at what point an ATC must take charge of a situation to avoid catastrophe, what sort of decision must be made to remedy the situation, and how much stress is involved. + +# Objectives + +- Define a minimum safe distance between aircraft. +- Develop a numerical model of air traffic around a busy airport. +- Assess system complexity and corresponding ATC workload under a variety of circumstances. +- Develop aircraft guidance algorithms that minimize controller stress. 
# The System

An ATC has three main tasks as an aircraft approaches an airport, all of which must be carried out as quickly as possible [Wood 1983]:

- Ensure that the aircraft does not collide with another aircraft or any other obstacle.
- Ensure that the approaching craft is inserted smoothly into the traffic around the airport, with a minimum of disruption to the flight paths of other aircraft.
- Guide the aircraft onto a runway, again with a minimum of disruption to the rest of the traffic around the airport.

Special cases, such as aircraft experiencing mechanical malfunctions, medical crises, or fuel shortages, must be dealt with, and changing weather conditions must be taken into account.

To make the simulation concrete, we model Denver International Airport (DIA); our methods could be extended to nearly any air traffic control center.

Each ATC is assigned a specific type of task. Thus, one set of controllers assigns flight paths to incoming aircraft, another guides those aircraft to holding patterns or landing approaches, and yet another guides planes to a safe landing. As an aircraft passes from one controller to another, the pilots switch radio frequencies. Each frequency belongs to a specific controller, and each ATC watches a radar screen on which icons represent flights for which that ATC is responsible. The tower controllers, who are in charge of landings, can see the aircraft and so are not as dependent on radar information [FAA 1999].

At DIA, it is common practice to route all incoming flights on north-south and east-west vectors, since prevailing wind conditions usually favor these approaches [Wood 1983]. Departing flights must use the same runways as incoming flights but, once airborne, are routed out of DIA airspace on the diagonal northeast, southeast, northwest, and southwest vectors, to minimize conflicts between incoming and outbound aircraft. Figure 1 shows a diagram of the general approach and departure vectors.
To make its final approach and land on a runway, each aircraft has to pass a point in space approximately $5\mathrm{mi}$ from the end of the landing runway and approximately $2\mathrm{mi}$ (10,000 ft) above ground level.

![](images/9c8aa2281a904788e2c803cb05c7192d417444dc578854e088d840145dec06d6.jpg)
Figure 1. Airspace approach and departure diagram.

# Assessing Safety

We use real data wherever possible; we consult the Federal Aviation Regulations (FAR) [FAA 1999] whenever technical questions arise.

Federally regulated Instrument Flight Rules (IFR) state that the minimum safe distance between adjacent aircraft is 1,000 vertical feet and $3\mathrm{mi}$ of horizontal distance, when aircraft are moving at landing speeds in close proximity to each other and to the airport. Since the runways at DIA are 4,330 ft apart, this minimum safe distance must be ignored on final approach and on runways.

# Assessing Complexity, ATC Workload, and Stress

Goode and Machol [1957], writing about large-scale queueing systems, describe complexity as "the extent to which any given attribute of a system will affect all the others if it is changed." In a complex system, all the variables are tightly linked; one could not, for example, change the position of an aircraft without immediately having to change some characteristic of most of the other aircraft in the system. Unfortunately, measuring this type of complexity is difficult, since there is no obvious set of measurable system variables for assessing how closely each variable depends on all the others.

Further, we are interested not merely in the complexity of the system but in the stress and fatigue that the system is likely to cause its ATCs. Surprisingly, there is no accepted set of factors that cause ATC stress, fatigue, and error. Some researchers, such as Redding [1992], have concluded that the number of incidents was highest when ATC workload was actually moderate to intermediate.
On the other hand, Morrison and Wright [1989], reviewing NASA data, report that ATCs make more mistakes when the workload is at its highest. One innovative study of the phenomenon of ATC fatigue is that by Brookings et al. [1996]. Their test subjects were Air Force ATCs who were asked to play an air traffic control simulation and were exposed to scenarios of varying difficulty. One scenario was an "overload scenario," in which they were asked to coordinate the movements of 15 aircraft at once. As the ATCs attempted to deal with each scenario, their heart rates, blink rates, and brain activity were monitored. Brookings et al. concluded that there was a correlation between workload and operator stress and that the likelihood of error—in the form of separation errors (not enough room between planes), fumbled approaches, and botched handoffs—was directly linked to ATC stress. As a result, we assume for our simulation that ATC stress should be minimized and that there is no "optimal stress level" [Redding 1992] at which ATCs should operate.

Based on our literature survey, we conclude that there are four measurable factors that influence ATC stress levels:

- $n$, the total number of planes in the airspace around the airport.
- $f$, the number of separation errors currently occurring in the airspace around the airport. We assume that a single separation error causes as much stress as 20 extra aircraft in the airspace.
- $d$, the smallest distance between planes currently on the map.
- $a$, the average distance between aircraft. If all the aircraft are well distributed throughout the airspace of the airport, the ATCs are likely to experience less stress than they would if the aircraft were all clustered together.

Our formula for the stress-causing complexity experienced by ATCs is

$$
C = n + 20f + \frac{100}{d} + \frac{100}{a}.
$$

# Queueing Theory, Stochastic Input, and Algorithm Design

The airport is a queueing system.
A queueing system has servers that handle input, perform some function on the input, and then pass the input to some other part of the system. Input is not created or destroyed in the system. The servers are usually referred to as channels. The five active runways of DIA are the channels of the system. + +When the number of servers is inadequate to the amount of input they are called upon to handle, a queue develops. In the case of an airport, the holding patterns in which aircraft wait for clearance to land are the queues of the system. + +When the amount of traffic is stochastic (determined by a probabilistic distribution) and the input is discrete, the input density is often described by a Poisson distribution. The amount of time for each channel to process an input need not be constant. The standard numerical approach to modeling would involve setting up an input generator, which would send us airplanes according to a density function. We would then set up our channels, and the amount of time to process each plane would vary, probably according to a normal distribution. We would also set up holding patterns, to which our planes could be sent when there are no runways available. We could then let the program run and see how queues develop as time passes. + +Unfortunately, this simple approach doesn't allow us to investigate all the questions posed by the problem statement. The objective of the simulation should be to develop and test algorithms to monitor the position and speed of aircraft and to alter flight paths to minimize the workload and stress of ATCs. If we treat airplanes simply as inputs, without them becoming objects maneuvering in space, we cannot adequately test those algorithms. So, a more ambitious approach is called for, one that allows us to "create airplanes" stochastically, maneuver them, send them to holding patterns, land them, and assess the value of stress-causing complexity as the simulation runs. 
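The standard numerical approach described above can be sketched as a small discrete-event loop: Poisson arrivals (exponential interarrival times), normally distributed runway-occupancy times, and five channels. All numeric parameters below are illustrative placeholders rather than values from the paper:

```python
import heapq
import random

def simulate(sim_time=3600.0, mean_interarrival=120.0,
             service_mean=60.0, service_sd=10.0, n_runways=5, seed=1):
    """Toy FIFO queueing model of an airport: Poisson arrivals,
    normally distributed runway-occupancy times, n_runways channels.
    Returns the time each plane spends holding for a free runway."""
    rng = random.Random(seed)
    # Times at which each runway next becomes free (a min-heap).
    free_at = [0.0] * n_runways
    heapq.heapify(free_at)
    t, waits = 0.0, []
    while True:
        t += rng.expovariate(1.0 / mean_interarrival)  # Poisson process
        if t > sim_time:
            break
        runway_free = heapq.heappop(free_at)   # earliest-available channel
        start = max(t, runway_free)            # hold if no runway is free
        waits.append(start - t)
        service = max(0.0, rng.gauss(service_mean, service_sd))
        heapq.heappush(free_at, start + service)
    return waits

waits = simulate()
print(len(waits), max(waits))
```

With the uncongested defaults most waits are zero; cutting `mean_interarrival` or `n_runways` makes a queue develop, which is exactly the behavior the text describes.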
+ +# The Model + +An airport is a continuous system; each airplane continuously changes position and velocity. However, there are discrete events that characterize the system, such as takeoffs, landings, and handoffs. Both continuous and discrete characteristics of the system can be described by a discrete model provided the time resolution is fine enough. Recognizing this, we develop three numerical simulations of aircraft behavior, which test: + +- the minimum-safe-distance assumption, +- guidance algorithms on aircraft landing, and +- guidance algorithms for aircraft entering and maintaining a holding pattern. + +# Minimum-Safe-Distance Simulation + +Consider two aircraft traveling on parallel flight paths at an airspeed of $300\mathrm{mph}$ , well below the cruising speed of commercial airliners. Let the vertical axis be the $z$ -axis, the axis parallel to the flight path be the $x$ -axis, and the axis perpendicular to both be the $y$ -axis. Wind and other factors perturb the velocity vectors of the planes according to (we assume) a normal distribution with mean zero. A large standard deviation might correspond to the planes flying in heavy weather, a small one to calm weather. + +![](images/4d0f0ec2bdb0c11487d465a3d47959245f201ef55eadfe8ec0330fdc8314bd27.jpg) +Figure 2. Aircraft separation simulation. + +If we did not correct the aircraft's course, the plane would diverge from its flight path and move in a random walk as time passes. Our program determines the distance between the aircraft and its flight path and changes the velocity vector to bring the plane back on course. Airplanes have maximum rates of acceleration in any direction, and we assume that the aircraft can change velocity in any direction by no more than $10\mathrm{mph / s}$ . The result is a corrected random walk, with the step size normally distributed and a finite correction to each step. 
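A one-component version of this corrected random walk might look like the following sketch. Only the 10 mph/s correction cap is taken from the text; the disturbance spread, the proportional steering gain, and the unit bookkeeping are our own illustrative choices:

```python
import random

def corrected_walk(steps=300, sigma=4.0, max_correction=10.0, seed=0):
    """Track one velocity/position component of an aircraft relative to
    its ideal flight path. Each second the velocity is perturbed by a
    zero-mean normal deviate (weather) and then steered back toward the
    path, with the change capped at max_correction mph per second."""
    rng = random.Random(seed)
    offset, velocity = 0.0, 0.0          # offset in miles, velocity in mph
    path = []
    for _ in range(steps):               # one step per second
        velocity += rng.gauss(0.0, sigma)        # random disturbance
        # Desired velocity cancels the offset (gain chosen arbitrarily).
        desired = -offset * 60.0
        change = max(-max_correction, min(max_correction, desired - velocity))
        velocity += change                       # bounded correction
        offset += velocity / 3600.0              # 1-s step, mph -> miles
        path.append(offset)
    return path

drift = corrected_walk()
print(max(abs(x) for x in drift))
```

The offset stays tightly bounded: the disturbance pushes the plane off its path, and the capped correction pulls it back, which is the "corrected random walk" of the text.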
Since the wingspan of an airliner is approximately 200 ft and turbulence effects surround the aircraft, we assume that if the airliners come within 250 ft of one another, they collide. We assume for the sake of simplicity that this holds true in the vertical direction as well. To test our minimum-safe-distance assumption, we fly planes next to each other for $100\mathrm{h}$; if the likelihood of collision is less than $0.05\%$, we consider the distance safe for the given weather conditions.

# Developing Aircraft Guidance Algorithms

A flow diagram for our airport is shown in Figure 3.

![](images/4827f39fb1ea7d986e939f92786f9a40c2e4b23a3256ac033f7bee7a4b853182.jpg)
Figure 3. Flow diagram for an airport.

Airplanes enter the airspace according to a stochastic distribution. If a landing approach is free, they proceed toward it. If no runway is clear or if the airspace is too crowded, they are sent to a holding pattern (our queue). The amount of time in the holding pattern depends on the rates at which planes land and planes enter the airspace.

We assume that the scheme used to select the next aircraft cleared for approach to landing is FIFO (first in, first out). If we wished to take into consideration factors such as fuel or other flight emergencies, each plane would have to be assigned a priority, and planes would be pulled from the queue according to priority.

Once an aircraft has been cleared for approach, it must select a runway. If the nearest runway is not free, the aircraft must circle until one is. Once it finds a clear runway, it may land. The runway is then occupied for some (perhaps stochastic) amount of time. The aircraft then spends some (perhaps stochastic) amount of time on the ground and takes off again.

The process can be divided into boxes, or areas. How much stress airplanes in a given area cause to the ATCs depends on the number of airplanes and the extent to which guidance algorithms manage to control the aircraft.
Aircraft just entering or just leaving the airspace are unstressful, since they are presumably spaced out along a very large circumference. Aircraft within 5 mi of a runway that are coming in to land are admitted only when a runway is clear; they are handled by ATCs who monitor those flights visually as well as via radar. The consequences of a mistake in this function are so high that we can safely assume that humans will handle this task for the foreseeable future.

There remain two areas in which our algorithms might ease controller stress:

- aircraft that have received clearance and must line up for a landing. Figure 4 shows runway checkpoints, through one of which an aircraft must pass to land.
- aircraft that have not been cleared to land and must be sent to a queue. Figure 5 shows the set of checkpoints that comprise the holding pattern. An aircraft can enter the holding pattern at any checkpoint and, if properly guided, proceeds to the next checkpoint in the sequence. It cycles through the sequence until given clearance to approach. All aircraft move through the sequence in the same direction, to avoid head-on collisions.

We design algorithms to maintain aircraft spacing while guiding those aircraft either toward a runway checkpoint or through the circular set of checkpoints that form the queue.

![](images/1737185fab83751f425b63b1d6fdb7a45e4fab766d3290bfc36b98a3e52d7134.jpg)
Figure 4. Runway checkpoints.

![](images/d8c439cbc3e5bb98d19ba1d36986ce80f5f5e8a5d292834c22697863ee369df1.jpg)
Figure 5. Queue checkpoints. Arrow indicates flight direction.

# The Algorithms

# Single-Avoidance

This algorithm determines the distance between each aircraft and that aircraft's next checkpoint and orients the aircraft toward the checkpoint.
It also evaluates the distance between that aircraft and all other aircraft; if any distance is equal to or smaller than the minimum distance (3 mi horizontal, 1,000 ft vertical), it orients the aircraft directly away from the dangerously close airplane (the aircraft that has been in the airspace longer is the one that changes course) without changing speed. Once the distance again exceeds the minimum safe distance, the aircraft that had to change its course looks for its nearest objective (which need not be the same objective that it was originally approaching) and changes course toward that objective. An example of aircraft being guided toward a landing checkpoint by the Single-Avoidance Algorithm is shown in Figure 6.

![](images/384fc9ceec6543bfd54bb215d66b061298668a8812336af32e2ff47c21e918c.jpg)
Figure 6. Aircraft converging on runway checkpoints while guided by the Single-Avoidance Algorithm.

![](images/799c8c94387c8bb4bb3b921b92390f6e34a03d3996fcab9047ff93a75a3a5858.jpg)
Figure 7. Aircraft moving toward landing checkpoints under the Double-Avoidance Algorithm.

# Double Avoidance

If the distance between two aircraft drops below the minimum safe distance, both aircraft head away from each other, changing courses equally and in opposite directions without changing speeds. Figure 7 shows an aircraft being guided toward a landing checkpoint by the Double-Avoidance Algorithm.

# Single-Vector Repulsion

Unlike the first two algorithms, this algorithm constantly measures the distance between the aircraft under its control and that aircraft's nearest neighbor. It alters the course of that aircraft before a separation error can occur. The scheme used to correct the aircraft's flight path is as follows.

In each time step, the algorithm determines the direction in which the aircraft must fly to reach the nearest checkpoint. It thus establishes a velocity vector $\vec{V}$ for the aircraft, whose magnitude cannot change but whose direction is corrected in every time step.
It then finds the vector $\vec{D}$ that connects the craft under guidance with its nearest neighbor and calculates a correction vector $\vec{A}$ whose direction is the same as that of $\vec{D}$ but whose magnitude is

$$
\|\vec{A}\| = \frac{b}{\|\vec{D}\|^2},
$$

where $b$ is a scaling constant. If $b$ is large, the correction is severe, even at large distances; if $b$ is very small, we run the risk that flight paths will not be adjusted quickly enough and the aircraft will come too close to each other.

The velocity vector $\vec{V}$ is corrected to $\vec{V}_c = \vec{V} - \vec{A}$. The result is that every aircraft is repelled by the aircraft nearest to it, with increasing intensity as the distance between the adjacent aircraft becomes smaller. At the same time, the aircraft remains attracted to its objective.

If there are only two aircraft in the simulation, and both are headed for the same objective, they behave like a pair of negatively charged ions approaching a large positively charged sphere (this analogy breaks down when there are more than two aircraft). Figure 8 shows aircraft maneuvering toward an objective while guided by the Single-Vector Repulsion Algorithm. Note that most of the aircraft maintain much better spacing than with the previous algorithms.

![](images/f44342a12d19394e11c020e35bd09e54d7f57d9ed3e593babeec2c1a16564cfb.jpg)
Figure 8. Aircraft maneuvering toward landing checkpoints under the guidance of the Single-Vector Repulsion Algorithm.

![](images/fbf16d843dd6ecb5ee8a8644b6ba85922512eb003078fa53c6276712856028ac.jpg)
Figure 9. The Multiple-Vector Repulsion Algorithm.

# Multiple-Vector Repulsion

The Multiple-Vector Repulsion Algorithm has the same vector mechanics as the Single-Vector Repulsion Algorithm, but each airplane is repelled not just by its nearest neighbor but by every other airplane in the airspace.
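The correction step shared by the two repulsion algorithms can be sketched as follows (function and variable names are ours; passing a one-element neighbor list gives the Single-Vector version, passing all other aircraft gives the Multiple-Vector version):

```python
import math

def unit(v):
    """Return v scaled to unit length (3-D tuples)."""
    n = math.sqrt(sum(c * c for c in v)) or 1.0
    return tuple(c / n for c in v)

def corrected_velocity(pos, checkpoint, neighbors, speed, b):
    """One time step of vector repulsion: head toward the checkpoint,
    subtract a correction A of magnitude b / ||D||^2 along each
    neighbor vector D, then rescale so the airspeed never changes."""
    # Attraction: velocity V points at the nearest checkpoint.
    v = tuple(speed * c for c in
              unit(tuple(c2 - c1 for c1, c2 in zip(pos, checkpoint))))
    for n_pos in neighbors:
        d = tuple(c2 - c1 for c1, c2 in zip(pos, n_pos))  # toward neighbor
        dist2 = sum(c * c for c in d) or 1e-9
        a = tuple((b / dist2) * c for c in unit(d))       # correction A
        v = tuple(vc - ac for vc, ac in zip(v, a))        # V_c = V - A
    return tuple(speed * c for c in unit(v))  # magnitude held fixed
```

For example, a plane at the origin heading for a checkpoint at (10, 0, 0) with a neighbor at (1, 1, 0) is deflected to a heading with a negative $y$-component, i.e., away from the neighbor, while its airspeed is unchanged.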
The behavior of a number of aircraft headed toward a single objective would be roughly analogous to a group of negatively charged ions heading toward a large positively charged sphere. With more than one objective, however, the analogy is less apt, since each "ion" is attracted only to the nearest sphere. Figure 9 shows 10 aircraft approaching a checkpoint under the guidance of the Multiple-Vector Repulsion Algorithm.

# Testing the Algorithms

# The Landing Approach Test

In this test, aircraft enter the airspace with interarrival times drawn from a normal distribution with mean 120 s and standard deviation 60 s. Entry points are distributed uniformly at random along the circumference of the airspace. The airspace is centered on the airport and has a radius of 50 mi. Each aircraft enters the airspace at an altitude of 6 mi (close to cruising altitude) and descends to one of 10 runway checkpoints (each of the 5 runways can be approached from either end), each located 5 mi from the end of a runway and at an elevation of 2 mi. Once an airplane reaches a runway checkpoint, its velocity instantly becomes zero. This ensures that no further aircraft can reach that checkpoint while the plane remains in place. After a set period of time (1 min in our simulation, based on the FAR [FAA 1999]), the airplane disappears and can be described as having landed. The runway is then clear, and a new aircraft can occupy that checkpoint. The value for ATC stress is calculated at each time step.

# The Queueing Test

Aircraft enter the airspace just as they do in the landing approach test. They descend to a set of queueing checkpoints and maneuver around those checkpoints in a holding pattern. Each holding pattern has a saturation level of airplanes, equal to the perimeter of the pattern divided by twice the minimum allowable distance between aircraft. Aircraft are added to the holding pattern stochastically until saturation is reached.
We calculate the amount of ATC stress that the holding pattern generates. Figure 10 shows 10 aircraft descending toward and entering the holding pattern under the control of the Single-Vector Repulsion Algorithm.

![](images/7fb74b40f189c2b8020789832fc34c24c8342fdc5adc88e512ec78d89c6c5772.jpg)
Figure 10. Aircraft being maneuvered into a holding pattern by the Single-Vector Repulsion Algorithm.

# Simplifying Assumptions in Testing the Algorithms

- All aircraft behave like commercial air carrier aircraft.
- The velocity of all aircraft is $300 \mathrm{mph}$.
- The minimum vertical separation between aircraft is 1,000 ft.
- The minimum horizontal separation between aircraft is $3\mathrm{mi}$.
- The turning radius of all aircraft is negligibly small compared to the overall airspace.
- An aircraft has reached a checkpoint when it has passed within $1\mathrm{mi}$ of that checkpoint.
- Weather conditions do not affect the behavior of aircraft and are not considered.
- No aircraft is given any special priority over any other; fuel and emergency considerations are therefore ignored.
- There is no coordinating intelligence at work. The human ATCs can watch the simulation (and be stressed by it), but all actions of the aircraft are determined by the algorithm being tested.
- Aircraft are incapable of acting without instructions from the control algorithms. In essence, the pilot of each aircraft blindly and unquestioningly follows the orders given by the algorithm.

The following assumptions apply only to the Landing Approach test for all algorithms:

- A runway approach checkpoint remains occupied for $1\mathrm{min}$.
- Any aircraft that reaches an approach checkpoint lands; there are no failed landing attempts.
- Once an aircraft has landed, it ceases to interact with any other aircraft and disappears from the simulation.
- Outbound aircraft do not interact with inbound aircraft and hence do not appear in the simulation.
Figure 11 shows that with our current checkpoint system, corridors develop in which there is no inbound traffic. So we assume that outbound traffic passes through these corridors and need not be accounted for.

![](images/f9450f384abe79730abc40dce4b9903546a6af1913f02fee1ec099d264da1d64.jpg)
Figure 11. Typical traffic pattern when aircraft head toward the checkpoints from the periphery. Note the open corridors, along which outbound traffic can be routed.

# Results and Commentary

# Minimum-Safe-Distance Simulation

Figure 2 shows the flight paths of two aircraft as they travel next to each other for $5\mathrm{min}$, together with ideal flight paths that are separated by $3\mathrm{mi}$. The results of many such simulations are given in Figure 12. Each data point represents the average of 150 runs, with each run lasting $100\mathrm{h}$. The components of each aircraft's velocity are each disturbed by perturbations with mean zero and standard deviations as noted in the legend. The disturbances correspond to moderate, heavy, and severe weather conditions.

![](images/e504a845799a6869fa2e0525613a76b102f6c69927e70d08f22bb5600b0c45a1.jpg)
Figure 12. Probability of collision in a $100\mathrm{h}$ run as a function of initial separation.

The likelihood of collision is well under our $0.05\%$ criterion when the separation distance is $3\mathrm{mi}$. In fact, in the course of all 450 runs of $100\mathrm{h}$, over all three disturbance levels, no collisions occur. The likelihood of collision increases exponentially as the horizontal separation distance decreases and becomes appreciable at $1.8\mathrm{mi}$, where the first collision occurs. We conclude that FAA regulations give a secure safety margin and so abide by them for the rest of the simulation.
# Holding Pattern and Landing Approach Simulations

There are no statistically significant differences among the stress levels for our four test algorithms, for either the holding-pattern or the landing-approach simulations. The holding-pattern simulation achieves equilibrium with little to no stress, and then the stress levels out. Since the landing simulation is much more complicated, one would expect sharp spikes and peaks in the stress, and this is what we see. Figures 13 (generated using the Single-Vector Repulsion Algorithm) and 14 (generated using the Multiple-Vector Repulsion Algorithm) show sample graphs of stress vs. time for the queueing and landing simulations, respectively.

In Figure 13, the stress rises as planes are added; when the pattern is saturated, the stress level becomes steady. There are few collision warnings; the rise in stress is caused by the decreasing separation of the planes as they approach the holding pattern.

Figure 14, however, is fascinating. For $0 < t < 5{,}000$, the graph is piecewise continuous, but the rest is totally discontinuous. In our definition of stress, there are two continuous terms and two discrete terms. Over the first $5{,}000\mathrm{s}$ of the simulation, the continuous terms play an important role in the stress; but during the latter $5{,}000\mathrm{s}$, the continuous terms die off, leaving the discrete terms to determine the stress.

![](images/0dce42b1909146f988e2c22e1f98996f6cc3a924429ab0a1ec1e2b8da1ff5eca.jpg)
Figure 13. Stress vs. time for a queueing simulation.

![](images/2c820a9f8bc2b7faf85a5d58847ec206825b9d493c0d9c1b5101ec7463ca49b5.jpg)
Figure 14. Stress vs. time for a landing simulation.

Table 1 gives summary statistics for both simulations run with each of the controlling algorithms. Each scenario was repeated 100 times. The seed for the pseudorandom number generator was held constant across the controlling algorithms, so that each algorithm received the same input.

Table 1. Descriptive statistics from the simulations.

| Algorithm | Landing Mean | Landing SD | Queueing Mean | Queueing SD |
|---|---|---|---|---|
| Single Avoidance | 12.8 | 0.8 | 4.1 | 1.9 |
| Double Avoidance | 14.2 | 1.7 | 4.1 | 1.9 |
| Single-Vector Repulsion | 14.2 | 1.7 | 4.1 | 1.9 |
| Multiple-Vector Repulsion | 13.2 | 1.6 | 4.8 | 2.2 |
The most interesting result is that the Single Avoidance Algorithm performs better than the other, more clever algorithms. Here we can draw a parallel to the phenomenon from computer science known as deadlock. Deadlock occurs when two or more processes cannot continue executing because each process is requesting resources owned by the other process [Nutt 1999]. Despite much research and many clever algorithms, the standard way to handle deadlock in a computing environment is to just pick a winner, usually the process that has been waiting the longest. The Single Avoidance Algorithm is analogous: When two planes request the same airspace, only one can be awarded it; the algorithm picks a winner and vectors the loser away.

To make the Multiple-Vector Repulsion Algorithm more efficient, we increase the repulsion factor between planes; if planes are kept farther apart, they will not incur collision warnings. The idea is good, but the results are not encouraging. Figure 15 is a plot of stress vs. time for the landing simulation using the Multiple-Vector Repulsion Algorithm with increased repulsion.

![](images/a3488f6531af925f41a60ec0640af1c4db864914d0c7c8f4f1d26d9c0a959840.jpg)
Figure 15. Stress vs. time for the landing simulation using the Multiple-Vector Repulsion Algorithm with increased repulsion.

We see a monotonically increasing stress level—certainly not the desired result. We can explain this phenomenon too in terms of deadlock. Imagine that two airplanes approach a checkpoint on opposite approach vectors. When they get close enough to each other, the repulsion factor makes them turn around. When they again get far enough away from each other, they turn around and fly back toward the checkpoint; and the cycle repeats. The planes become deadlocked, so very few planes can land. Hence, the number of planes in the airspace increases, leading to increasing stress.

The queueing simulations generate very little stress.
Hence, software should be able to take control of a plane, vector it into a holding pattern, and keep it there.

# Strengths and Weaknesses

More factors contribute to this system than we are able to consider in a weekend. However, our model is modularized to the point where modules can be run independently of each other, making it easy to focus on specific parts of the model.

It took our workstation approximately two hours to generate the data for our limited model, but parallel processors could each handle a set of planes.

Our model covers all of the major elements of an airport simulation; while it is based on DIA, it can easily be generalized.

# Conclusions and Recommendations

We conclude from our simulation that FAA guidelines for aircraft separation give a generous margin for navigational, technical, and pilot error. Perhaps they could be relaxed, especially for well-controlled parallel landing situations in good to moderate weather.

The air traffic control problem exhibits many traits common to NP-complete problems:

- There is no deterministic way to pick the best solution.
- Humans appear to control air traffic much more effectively than computers could.
- Changes in the control algorithm for our simulation do not seem to impact the quality of the solution at all.

These features lead us to speculate that this problem is indeed NP-complete.

The computer does, however, handle the queueing problem fairly well; planes are vectored into the queue and held there with minimal stress. These results are promising; they suggest that though computers should not replace humans as the primary means of air traffic control, computers might be capable of handling the more mundane tasks.

# References

Airports Data Base—Denver International Airport. 2000. http://www.boeing.com/assocproducts/noise/denver.html.
Brookings, J.B., G.F. Wilson, and C.R. Swain. 1996.
Psychophysiological responses to changes in workload during simulated air traffic control. *Biological Psychology* 42: 361-377.
Federal Aviation Administration. 1999. Federal Aviation Regulations. Washington, DC: Government Printing Office.
Goode, Harry H., and Robert E. Machol. 1957. System Engineering. New York: McGraw-Hill.
Morrison, R., and R.H. Wright. 1989. ATC control and communication problems: An overview of recent ASRS data. In Proceedings of the Fifth International Symposium on Aviation Psychology, 902-907.
Nutt, Gary. 1999. Operating Systems: A Modern Perspective. Reading, MA: Addison-Wesley.
Redding, R.E. 1992. Analysis of operational errors and workload in air traffic control. In Proceedings of the 36th Annual Meeting of the Human Factors Society, 1321-1325.
Wood, Kenneth Ray. 1983. Airport activity simulation. Master of Science thesis, University of Colorado, Boulder, CO.

# Judge's Commentary: The Outstanding Air Traffic Control Papers

Patrick J. Driscoll

Department of Mathematical Sciences

U.S. Military Academy

West Point, NY 10996

ap5543@exmail.usma.army.mil

# Introduction

The Federal Aviation Administration (FAA) Air Traffic Control problem was an interesting mix of quantitative and qualitative inquiry. Teams used a variety of modeling approaches to resolve the qualitative issue of how to model the complexity of an air traffic controller's job; they apparently discovered that this type of question is often the most challenging, the most interesting, and, frankly, the most fun to tackle.

# The Approaches of the Top Papers

The top papers in this year's contest tackled the problem with a flair of creativity and obvious thoughtful consideration for all dimensions of the modeling process. Many recognized that the number of aircraft in a controller's sector must ultimately be a significant contributing factor to this complexity.
A good number of papers further refined this notion to include dimensions such as workload, relative proximity of aircraft, and number of aircraft flight path adjustments required per unit time, among others.

Many papers recognized that aircraft conflict occurs in pairs and proceeded to assess the maximum number of possible conflicts for a given scenario. Some papers chose to divide the overall airspace into vertically separated layers and then developed conflict algorithms for the 2-D problem on a particular layer as opposed to using a 3-D model from the start. However, several papers fell back on a tacit 3-D model for complexity without realizing that their earlier results did not extend to this case. The most common approach to address the 3-D problem directly was to create an inner collision space (either a sphere or rectangle) around the aircraft and then use a larger alert space (containing this inner collision space) for early warning. The difference in radii between the two spaces was used as a measure for air traffic control conflict reaction time, so that a controller could adjust one of the aircraft's courses without causing excessive internal forces on the aircraft, its passengers, or its cargo.

There was a wide range of techniques used by teams to represent and identify potential conflicts along aircraft flight paths. Some reduced this problem to a time-parameterized vector-intersection problem in the 2-D plane, and others did the same for the 3-D case as well.

There were several more extensive approaches worth noting:

- One paper assumed that a drift error exists from wind effects, weather, and turbulence along an aircraft flight vector in three-space and applied a probability distribution to this error. This enabled the team to construct a stochastic simulation to test their model design.
- Still another paper incorporated both straight and curved parametric flight paths into its methodology.
In both cases, if the flight paths of two aircraft drifted sufficiently close to cause an alert space violation, a controller was presumed to take corrective action to prevent intrusion into the collision space. The number of these alerts occurring for a given aircraft density in traffic then became a component of complexity measurement.

# Other Papers

The papers that did not rise to the top possessed glaring omissions or oversights that typically manifest themselves when teams run out of time, fail to properly identify all of the questions being asked, or develop complex mathematical representations and then find themselves lacking the ability to solve them. There also seemed to be some uncertainty in some papers as to what exactly it means to analyze a model.

Teams that dismissed portions of the problem based on the claim that "FAA ATC conflict software already exists" missed the point: The FAA knows what they have in their repertoire of tools; they are looking for other ideas and approaches that might facilitate a better solution than they currently have. If the head of the FAA were completely satisfied with the status quo, there would be no problem to be solved.

# The Need for Verification

# The Model Must Produce Results...

Certain fundamental elements of modeling are consistently present in quality applications of the problem-solving process. Without these, technical reports have noticeable gaps in the information that they provide—gaps that call into question the validity, veracity, credibility, and applicability of the results being presented. This situation is similar to courtroom testimony; judges and juries have a difficult time believing a witness unless sufficient evidence is presented to support the witness's testimony.
Consequently, if an MCM team's principal effort is to construct computer code to simulate an air traffic scenario, they must present results that provide evidence that their code/model actually ran and yielded the information sought. Analyzing the output of a model provides a basis for determining if the modeling approach chosen was reasonable.

# ...Which Must Be Presented and Analyzed...

Simply put, after creating an acceptable mathematical representation (system of equations, simulation, differential equations, etc.) of a real-world event, this representation (model) must be tested to verify that the information it produces (solutions, simulation output, graphics, etc.) makes sense in the context of the questions being asked and the assumptions made to create the mathematical representation. It is insufficient to present such a representation without this additional evidence. Once a mathematical model is created, symbolic, graphical, and/or numerical methods must be used to produce evidence that the model works. Many of the best papers did so using a combination of these three approaches; some teams wrote C++ code or used spreadsheets, while others used a computer algebra system such as Maple or MathCad as their workbench.

# ...and Compared with Clearly Stated Assumptions

Far and away, papers reaching the final round of judging paid a good deal of attention to stating their assumptions clearly, explaining the impact of each assumption and why they felt it was necessary to include it in their model development. They were also very careful not to assume away the challenging and information-relevant portions of the problem posed. Teams' sensitivity to this aspect of the modeling process has improved consistently over the years, to the point that it is a hallmark of most modeling efforts today. From a judging perspective, it is far easier to follow the logical construction of these teams' models and to identify what they were attempting to do.
However, even a few of the best papers mistakenly placed key pieces of information in appendices rather than in the section of the paper where supporting evidence was desperately needed.

# Use of Existing Research

Teams are increasingly adept at using the Internet to find credible, reliable information sources to support their modeling efforts. There is a good deal of room for improvement in team papers as to how best to incorporate this information properly into a technical report, especially for a team that perceives that it has struck the motherlode of reference sources. Incorporating others' work without diluting one's own effort is challenging. However, parroting large portions of technical reports, thereby reducing a team's main contribution to simply interpreting someone else's research, is clearly not the solution.

Three uses of existing research are common to most technical reports:

- To chronicle the events leading to the approach taken in the current paper and to help the reader understand the context or domain of the problem. This action is typically accomplished in an Introduction or Background section.
- To identify and justify technical parameters needed for the new approach. For the FAA problem, some of these parameters could have been the average airspeed of a Boeing 747 or the typical work hours of an air traffic controller.
- To compare the graphical, symbolic, or numerical results generated by the new modeling approach with those previously identified, so as to examine the benefits or drawbacks of the new approach.

Credible existing research used in these ways does not replace or dilute the current effort but directly supports and strengthens it.

Given the time pressure of the MCM, one has to be very cautious not to get trapped into adopting a complicated modeling component from existing research without being able to explain clearly its development, its use and limitations, and its impact on the current model.
This remains the classic red herring of the MCM, luring teams into committing to an approach only to discover late in the process that they are ill-equipped to handle it properly. Ultimately, the evidence of this error appears in the MCM entry in such forms as miraculously appearing formulae, unexplained graphics, and tables of data still waiting to be analyzed. Just as in a court of law, the MCM judges consistently find the results of models built on such tenuous foundations difficult to believe.

# About the Author

Pat Driscoll is an Academy Professor of Operations Research in the Dept. of Mathematical Sciences at USMA. He received his M.S. in both Operations Research and Engineering Economic Systems from Stanford University, and a Ph.D. in Industrial & Systems Engineering from Virginia Tech. He is currently the program director for math electives at USMA. His research focuses on mathematical programming and optimization. Pat is the INFORMS Head Judge for MCM/ICM contests.

# Practitioner's Commentary: The Outstanding Air Traffic Control Papers

Jack Clemons

Senior Vice President

Lockheed Martin Air Traffic Management

Rockville, MD

Jack.Clemons@lmco.com

# Introduction

The problem of introducing computer-assisted aids to the air traffic controller community to improve safety and reduce workload is both relevant and immediate. The Federal Aviation Administration (FAA) in the United States, the National Air Traffic Services (NATS) agency in the United Kingdom, and numerous other civil aviation authorities around the world were engaged in evaluating, developing, testing, and deploying automated support tools even as these MCM papers were being written. All of the papers correctly identified the two factors that determine automation viability: passenger safety and controller workload.

The air traffic controller community is slow to adapt to and embrace new technology.
This is not due to some inherent technophobia by this population but rather is driven by professional concern for air traffic safety. Every controller must maintain regular job certification. A serious incident (e.g., a "near miss") that is traced to an error in their performance can cause them to be pulled from their station and recertified. Air traffic control (ATC) is a three-shift-a-day/seven-days-a-week operation. The entire focus of a controller is on the air traffic moving through the airspace as imaged on her display, and on the voices of the pilots in her headset. There is little time to have her attention pulled away from those concerns to learn a new, more complicated series of mouse clicks, or to adapt to a new graphic data format or automation aid. The distraction caused is analogous to that of talking on a cellular phone while driving at high speed on the Interstate—attention is diverted from an exclusive focus on safety. And this is already one of the most stressful professions on earth. The introduction of automation to support and expand the controller's focus must be done with great care.

# Criteria: Light from the Real World

Each of the four selected MCM papers addressed the problem assigned in a different way. As a practitioner, I chose not to evaluate each on the elegance of the mathematical modeling employed; the MCM judges can concentrate on that. Rather, I examined each for its relevance to the problems facing real air traffic controllers. This is not to criticize any of the teams for not having the insight and experience of practitioners; of course, that is not realistic. Instead, my intent is to shed some light from the real world on each approach that might be useful to both the team members and to other readers. I assessed each paper on three qualities:

- Thoughtfulness: How well did they think through the problem statement before addressing it?
- Realism: How close does the proposed solution come to addressing a real-world problem?
- Usefulness: Is the proposed solution itself applicable to address the world of the air traffic controller?

On this basis, there was substantial variation in the approaches taken by the four teams.

# The Best of the Four: University of Colorado Entry

The entry from the University of Colorado team fared best by these measures. It was clear that this team spent appreciable time understanding the FAA. The selection of the Denver International Airport as the system model and obtaining the relevant airport, airspace, and ATC parameters from this airport were done extremely well. The team accurately represented the details of the air traffic and correctly asserted that conclusions drawn should translate into other regions. Second, the team used the Federal Aviation Regulations (FARs) as the guidance for resolving technical assumptions. Once again, this shows a real understanding of FAA procedures. FARs are indeed the governing standard for assessing safety.

The use of a "corrected random walk" to validate the FAA's minimum aircraft separation standard was clever. It sets aside standard approaches such as the computation of minimum maneuver time required for two aircraft approaching head on. However, the model used by this team provides an insight for parallel flight that is novel and confirms the FAA separation standard as well.

The approach taken to quantify stress as a parameter was good. Once again, they researched the available literature to develop a baseline for ATC stress and built the model from there. The model used is both simple and probably applicable. Though arbitrary, the measure of complexity should demonstrate a logical relationship between traffic patterns and workload stress that can provide insight into underlying causes.
Of course, the motivation was to allow the team to transform the automation problem into a problem of queueing, thus greatly simplifying the model and still providing insights. The simple queueing model that the team proposed overlooked a significant factor in runway access, which is gate availability (i.e., even though all runways are clear, traffic may still have to hold because there are no available passenger gates). However, their model can easily accommodate this factor.

The results of this model are intuitive and appear to be correct. Further, for the problem of airport terminal approach and landing at least, they correctly identify which elements of ATC need remain under the control of humans and which have potential for automation.

# The Other Outstanding Papers

Each of the other three papers, while providing useful insights, misses this mark by several degrees.

# The Duke University Entry

The team from Duke University scored well on my (admittedly subjective) scale of "thoughtfulness." They paid attention to the problem description and showed understanding in the model setup and the trades they performed. However, the paper seems to rely heavily on an assumption that pilots control the aircraft and controllers principally intervene to avoid collisions. This ignores the active control of all aircraft at all times by the controller team as each craft navigates through the airspace.

Commercial airline pilots (as contrasted with general aviation or "private" pilots) do not dynamically choose their routes, their airspeed, or their altitudes. All this is under the direct control of the air traffic controller. Each aircraft files a flight plan, which must be coordinated and cleared with the FAA prior to takeoff. This flight plan sets the cross-country route that the aircraft will follow, and must explicitly follow regulated "highways" leading from one ATC center to another to ensure continuous ATC coverage.
The aircraft's flight is under $100\%$ control of the air traffic controllers from the moment of gate departure through gate arrival.

While the Duke University team's approach was thorough in defining the various "automation" programs that might be employed for collision detection and avoidance, it did not address or encompass the information available to every controller about the planned trajectory of the aircraft obtained from its flight plan. Collision detection schemes currently in use, and more advanced techniques now being evaluated by the FAA, combine information from the radar "track" history for the aircraft with the projected trajectory derived from the flight plan. In addition, the team did not adequately explore the cascade effects of ATC actions. In other words, once a controller decides to have an aircraft change altitude, speed, or direction to avoid a potential conflict, that action must be evaluated to see if new conflicts with other craft have been created. Collision avoidance schemes must have a "what-if?" capability embedded to allow controllers to evaluate a number of potential maneuvers for conflict avoidance before giving final direction to the pilot. Such techniques are currently under evaluation by NATS in the United Kingdom for implementation in their newest En Route Control Center. On balance, however, I thought this paper had a good approach to realism and usefulness.

# The U.S. Military Academy Entry

The approach of the team from the U.S. Military Academy addressed the availability of flight data but appeared to overlook radar. While the mathematical modeling employed seemed sophisticated, the effort was directed toward determining the likely deviation of an aircraft from its flight plan by statistical methods alone. This is both complex and unnecessary, since every aircraft under ATC control can be uniquely tracked by radar.
Not only does a controller have a positive radar image (or "track") on his screen, but that track is identified graphically with the aircraft identifier available from the aircraft's transponder. Coupled with the flight plan data available for the aircraft, the controller has explicit knowledge of position, history, and intent. The statistical model employed by this team is largely irrelevant in that case.

Also, the team assumes that the curvature of the earth need not be accounted for, which is not true for high-speed aircraft unless conflicts are projected over short time periods. A conflict model currently being evaluated by the FAA explicitly includes earth curvature for that reason. A similar set of comments applies to the team's complexity model; it is an interesting mathematical exercise but not useful. A controller simply viewing his radar display can readily determine complexity of the airspace. An approach to quantify this complexity for evaluation of automation should be direct, as the simplified approach by the University of Colorado team demonstrates by example.

# Virginia Governor's School Entry

Finally, the team from the Virginia Governor's School takes an approach that I believe is the least "real world":

- The approach to validating minimum safe separation was unduly complex.

The team's intent was to attempt to model the aerodynamics of a commercial airliner and, from that, to determine how close another craft could approach before being adversely affected by the air flow. This work has already been done by aircraft manufacturers themselves (and by countless generations of Aeronautical Engineering students, myself among them) and could be readily accessed rather than derived from first principles.

- Once the team set out on this endeavor, they were then forced to make a number of oversimplifying assumptions (e.g., modeling the aircraft as a cylindrical ring, defining the fluid dynamics using vortices and Bernoulli's equation).
These assumptions are quite wrong for the model and therefore cause the derived conclusions to be suspect.

- Similar sets of simplifying and unrealistic assumptions were used to model potential conflict. For example, the model takes no account of the actual sectors of airspace and the active control from ATC under which every aircraft operates.

I believe this team's paper would need substantial rework before it could be credibly presented to the FAA Administrator.

# Summary

Each team took a very different approach from the others in addressing the stated problem, with significant variation in practicality from the viewpoint of a practitioner. Of the four papers, that from the University of Colorado would be the one I would recommend taking forward as is to Ms. Garvey and her staff for review and further evaluation.

# References

Galotti, Jr., Vincent P. 1998. The Future Air Traffic Navigation System (FANS): Communications Navigation Surveillance Air Traffic Management. Ashgate Publishing Company, Ltd.
Machol, Robert E. 1995. Thirty years of modeling midair collisions. *Interfaces* 25 (5): 51-72.

# About the Author

Jack Clemons is the Senior Vice President of Strategic Programs at Lockheed Martin's Air Traffic Company, located in Rockville, Maryland. Jack joined the Lockheed Martin Corporation in April 1996.

Jack began his career at General Electric Corporation's Reentry Systems Division in Valley Forge, Pennsylvania, now part of Lockheed Martin Management and Data Systems. He then worked on the NASA Apollo and Skylab programs for TRW Systems Group in Houston, Texas, and on the NASA Space Shuttle program for IBM Federal Systems Group, also located in Houston.
Following that, Jack spent eight years in new product market development and market support for the IBM Corporation in White Plains, New York, and he served a one-year chair assignment as instructor at IBM Corporation's New Management School in Armonk, New York.

Jack joined IBM Federal Systems Group's Air Traffic Control Company in 1992 as Functional Manager of Software Development. Following the acquisition of Federal Systems by Loral, Jack performed the roles of Director of EnRoute Programs and Vice President of Air Traffic Control Engineering. Following the acquisition of Loral by Lockheed Martin, Jack became Senior Vice President of Engineering, Technology and Operations before moving into his current position.

Jack graduated from the University of Florida with Bachelor of Science and Master of Science degrees in Aerospace Engineering.

# A Channel Assignment Model: The Span Without a Face

Jeffrey Mintz

Aaron Newcomer

James C. Price

California Polytechnic State University

San Luis Obispo, CA

Advisor: Thomas O'Neil

# Introduction

We were asked to design an efficient assignment of radio channels to a symmetric network of transmitter locations over a large planar area, so as to avoid interference. Efficiency is measured by the span: the minimum, over all valid assignments, of the largest channel assigned.

We derive properties implied by the first set of constraints and by the geometry of the given figure, which we use to construct what we call "span theory." We prove upper and lower bounds for the span of the given figure. With the aid of a computer program, we narrow the bounds and prove that the span is 9. This is also the span of a network generated by extending the figure arbitrarily far in all directions.

We then consider a slightly altered constraint: the channels of neighboring transmitters cannot differ by less than $k$.
We determine two distinct strategies for channel assignments and two associated formulas for the span; the span is $\min\{3k + 3, 2k + 7\}$ for both the figure and the generated plane.

Allowing a transmitter to be positioned irregularly in the hexagons changes the span by at most 1. Allowing all transmitters to be positioned irregularly—a worst-case scenario—gives a span of 18.

# Assumptions and Justifications

- Every hexagon in the field has a single transmitter at its center. This can be assumed for Requirements A, B, and C from statements in the problem.
- Every transmitter is an ideal transmitter, that is, it transmits radio signals equally well in all directions. No information is given suggesting any of the transmitters are less than ideal. According to our research, an ideal radio station would perform in this manner [Rorabaugh 1990, 134].
- Every transmitter in the grid is assigned a single channel (the problem so states).
- Every positive integer works equally well as a transmission channel.

# Terms and Definitions

- Neighbors: Two polygons with a common side.
- Network: A finite or infinite group of connected, non-overlapping, maximally packed, regular polygons.
- Span: The minimum, over all assignments satisfying the constraints, of the largest channel used at any location.
- Symmetric network: A network that is symmetric about some axis.
- Tessellate: To repeat a geometric pattern over an infinite or finite plane.
- Tessellation: An arrangement of polygons that will fit together without overlapping or creating any gaps and cover an infinite plane.
- Valid network: A network that satisfies all the constraints of the given requirement.

# Analysis

# Hexagonal Geometry

For convenience, we rotate the figure given in the problem to obtain rows instead of columns, as shown in Figure 1.

![](images/2cb0a65ffe517f9cafba50bc19b9c269b25d2847fc1ffe009e402191757a8c43.jpg)
Figure 1. Rotated horizontal layout of hexagons.
We begin by analyzing the geometry of a regular hexagon with side length $s$. Drawing the diagonals as shown in Figure 2 yields six equilateral triangles. The distance from the center to any corner is $s$, and the perpendicular distance from the center to any side is $s\sqrt{3}/2$.

![](images/61b7d64337a1b425d13367a199c62f915b50410b6b2aefe9b66a8513be707e09.jpg)
Figure 2. Detail of a regular hexagon.

We examine a small symmetric network to determine the effects of the spectral spreading constraint. Figure 3 illustrates a circle of radius $2s$ drawn from the center of the center hexagon. The centers of all six adjacent hexagons are within this circle, so no transmitter may neighbor a transmitter of an adjacent channel; but the circle does not spread beyond these six hexagons, so any transmitter beyond the six hexagons that surround the center may be assigned to an adjacent channel without interference.

![](images/74783bfa1be5baeebce58e42a7abd36bfaa1b9aea4cd5fae45e1d7ce0730a513.jpg)
Figure 3. A symmetric network with seven hexagons and radius $2s$ marked from the center of the center hexagon.

For example, consider Figure 4, with two networks in which channels are assigned to each hexagon. The network on the left violates the constraint above, because channel 1 and channel 2 are neighbors; the network on the right is a valid network.

![](images/faf5fb81a1b43662c96406da1a2c0c0db5088b670437a31908c55f70118dccf0.jpg)

![](images/b25a7466a9bf3cf73ab729f0b09887f90d321356a2e86f21396e0dc8a46d9254.jpg)
Figure 4. Two symmetric networks with seven hexagons and assigned channels.

We summarize the effect of spectral spreading with the following rule:

Adjacent Channel Principle: No neighboring hexagons may be assigned adjacent channels.

We now consider the requirement that no two transmitters with the same channel may be within $4s$ of each other.
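Both constraints come down to distances between hexagon centers, which can be checked numerically. The sketch below is our own (it uses axial coordinates for the hexagon centers, a device the paper does not introduce); it computes the minimum center-to-center distance for hexagons that are 1, 2, or 3 grid steps apart:

```python
import math

S = 1.0  # hexagon side length s

def grid_steps(q, r):
    # Number of lattice steps from the origin to the hexagon at axial (q, r).
    return (abs(q) + abs(r) + abs(q + r)) // 2

def center_distance(q, r):
    # Euclidean distance from the origin to the center of hexagon (q, r);
    # adjacent centers are S*sqrt(3) apart (pointy-top axial layout).
    x = S * math.sqrt(3) * (q + r / 2)
    y = S * 1.5 * r
    return math.hypot(x, y)

# Minimum center-to-center distance within each "ring" of hexagons.
ring_min = {}
for q in range(-4, 5):
    for r in range(-4, 5):
        if (q, r) != (0, 0):
            n = grid_steps(q, r)
            ring_min[n] = min(ring_min.get(n, float("inf")), center_distance(q, r))

print(ring_min[1])  # s*sqrt(3) ~ 1.73s: the six neighbors, inside the 2s circle
print(ring_min[2])  # 3s: outside the 2s circle but inside 4s
print(ring_min[3])  # s*sqrt(21) ~ 4.58s: beyond 4s
```

The minima are $s\sqrt{3} \approx 1.73s$, $3s$, and $s\sqrt{21} \approx 4.58s$: only the six neighbors lie inside the $2s$ circle of Figure 3, while hexagons two steps away can come as close as $3s < 4s$ and hexagons three steps away are always farther than $4s$ apart.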
We define two hexagons to be $n$ hexagons away from each other if and only if one can construct a path of $n$ straight line segments, and no fewer, with each segment having length $s\sqrt{3}$ and having both endpoints at centers of hexagons. For example, in Figure 5, hexagon $A$ is 3 hexagons away from hexagon $B$.

![](images/25eb6104309db2aac697e14a05e52d793a9d1d0d594d151b1396e9e5a66c5297.jpg)
Figure 5. Hexagon $A$ is 3 hexagons away from hexagon $B$.

From Figure 5, we observe that no two transmitters of the same channel can be 2 hexagons away from each other, but they can be 3 hexagons away from each other:

Same Channel Principle: No two transmitters of the same channel may be less than three hexagons away from each other.

# Embedded Subgraph Method

We divide the hexagons into sets as indicated in Figure 6. The dotted lines indicate the embedded subgraph within the network. The vertices of the subgraph are indicated in the hexagons that are included in the subgraph, and those hexagons are marked as well. The vertices are intentionally drawn off-center so that the subgraph's edges do not coincide with any hexagon sides. The shaded hexagons alternate blue and green.$^{1}$

![](images/20b72652bc778cc7e32fc21b3c8e7d1b65b6e28028cf0597a3086da9fda1cdc3.jpg)
Figure 6. Embedded Subgraph Method setup.

The model works as follows. Any blue hexagon is at least 3 hexagons away from any other blue hexagon; so by the Same Channel Principle, all blue hexagons may be assigned the same channel. Similarly, all green hexagons may be assigned the same channel. No blue hexagon neighbors any green hexagon; by the Adjacent Channel Principle, we may assign blue and green hexagons adjacent channels.

There are four such embedded subgraphs in the network. The hexagons on the same rows as the blue and green hexagons, but between them, create another embedded subgraph, which can also be assigned two adjacent channels.
Next, we can place two more subgraphs to connect the rows that have not been assigned channels yet. The result is four embedded subgraphs that together contain all the hexagons in the network. Figure 7, in which the first and third rows alternate colors and the second and fourth rows alternate different colors, shows how every hexagon is assigned to exactly one subgraph. Hexagons of the same pattern belong to the same subgraph.

![](images/5ce804f9591bf7d46b56dbab37756513c7e5de7ce1097d12763dc2d1dc65af30.jpg)
Figure 7. Embedded subgraphs indicated by shading.

The final step is to assign channels to all four subgraphs with two channels per subgraph, as described above. For the first subgraph, we use channels 1 and 2. At this point every hexagon in the other three subgraphs neighbors a channel 2 hexagon; thus no hexagon may be assigned channel 3, by the Adjacent Channel Principle. So we skip 3 and assign 4 and 5 to the hexagons of a second subgraph (it does not matter which subgraph). As before, each remaining hexagon neighbors a 5; so we skip channel 6 and assign 7 and 8 to a third subgraph. Finally, we skip channel 9 and assign channels 10 and 11 to the remaining subgraph. This pattern leaves three channel values (3, 6, and 9) unassigned. The resulting pattern is shown in Figure 8.

![](images/4775d7ab1f38dedd56109f88d775f1eb6e6770cca4392cd5428bfc6b4c3bc09c.jpg)
Figure 8. Channels assigned using the Embedded Subgraphs model.

As for interference, the distance between transmitters of the same channel is at least $s\sqrt{21}$, and the distance between transmitters of adjacent channels is at least $3s$. Also, every channel has only one adjacent channel that appears in the network.

This pattern can be tessellated across any number of hexagons. Thus, we know that the span is at most 11 for Figure 1, as well as for a grid that spreads arbitrarily far in all directions.

# Diagonals Method

We attempt to fill the network using diagonals.
We want exactly three channels repeated along any diagonal. To avoid same-channel interference, each channel along the diagonal should be exactly three hexagons away from the closest hexagon with the same channel. This diagonal should then be repeated three diagonals away on either side. We call such repeated diagonals same diagonals. The result is shown in Figure 9. The channels $x$, $y$, and $z$ are chosen so that no two of them are consecutive channels and spectral spreading is avoided. For example, $x$, $y$, and $z$ may be assigned the values 1, 3, and 5.

![](images/7550556070a6d0fc04685534ad56341be1714201f597e5a48e544cff97e49d72.jpg)
Figure 9. Diagonals Method setup.

After assigning channels to the first diagonal, we attempt to assign values to its neighbor diagonals without violating any constraints. By trial and error, we obtain the assignments shown in Figure 10.

![](images/d201366d2c592ff6b7c7b1f3bbded1157973575bea4342072d8ecf5fb972543b.jpg)
Figure 10. Channel assignments made using the Diagonals Method.

This tessellation also has a maximum channel of 11, and like the Embedded Subgraphs Method, it can be expanded infinitely in all directions. Also, the distance to the closest adjacent channel is $3s$. Unlike the Embedded Subgraphs Method, this pattern contains nine unique channels, leaving two unused (channels 4 and 8). Furthermore, the distance to the closest transmitter with the same channel is $3s\sqrt{3}$. Another difference from the Embedded Subgraphs Method is that three of the channels have two adjacent channels appearing in the network, while six of them have only one.

We explain why the assignment in Figure 10 is valid. Consider every hexagon as belonging to one of three subgraphs, whose edges create a set of triangles. With the 1, 2, and 3 placed as they are, they violate no rules; but placing a 4 in any hexagon would violate the Adjacent Channel Principle.
So we skip 4 and assign channels 5, 6, and 7 to a triangular subgraph with the same properties as the first subgraph. Finally, we skip 8 and assign 9, 10, and 11 to the remaining subgraph.

![](images/9d5d738b1deccec2b1c0602a6e3ba2e5aa203e389c24377f6d7818a131932944.jpg)
Figure 11. Embedded subgraph model setup.

# Computer Program

We confirmed our results (maximum channel of 11) by writing a computer program to test every possible combination and determine the span. (Before writing the program, we knew that it would require 91 levels of recursion, but the speed and memory capacity of modern PCs make this a feasible option.)

The program begins with an empty network, that is, a network with no channels assigned to any transmitter. It starts with the leftmost hexagon in the top row of the network and assigns it channel 1. It moves across the row and then similarly processes the following rows. Each time a new hexagon is encountered, the program attempts to assign a channel to the hexagon, starting with 1 and ending at some user-specified upper bound. After assigning a channel, it checks that the assignment does not violate the Adjacent and Same Channel Principles listed above. If there is a violation, it tries the next higher channel. If the channel to be assigned would exceed the upper bound, the program moves back to the previous hexagon and performs a reassignment before continuing. If the program successfully assigns a value to the last hexagon in the last row, it has successfully filled the network and displays the entire network for the user. If a network is never displayed, the upper bound was too low.

The program produced various working configurations for an upper bound of 9 but no layouts for 8. The pattern that generates a span of 9 is shown in Figure 12. This pattern appears to be a variation of the Diagonals Method: the hexagons of any right diagonal consist of exactly three channels, which are repeated ad infinitum.
However, as highlighted in the figure, for any entry in a diagonal, the corresponding entries of the closest same diagonals are on different rows. This does not change the fact that the Same Channel Principle is satisfied; any pair of same-channel transmitters are still at least three hexagons away from each other.

![](images/b8a984f7123a88d67df17a1dc974c66aa54ec919190784b2d7e9df83f069f2fa.jpg)
Figure 12. Computer-generated span of 9.

To verify that this is a valid pattern, we must also show that the Adjacent Channel Principle is not violated. The channel of any hexagon differs from its left and right neighbors by at least two. Every row is identical to the one above it but shifted three diagonals to the left. Due to the shift, neighbors above and below differ by at least three. Thus, the Adjacent Channel Principle is satisfied.

We develop "span theory" to prove that the span of Figure 1, as well as the span of an arbitrarily large grid, is 9.

# Span Theory

Let $S$ be the set of all networks constructed with regular hexagons of side length $s$, and let $H_{n}$ be the symmetric network of regular hexagons with $n$ regular hexagons on any side of it (Figure 13). Note that Figure 1 is $H_{6}$; we denote by $H_{\infty}$ the symmetric network that spreads arbitrarily far in all directions.

![](images/400d76c8ad691c1cd2181f9a0dc2714740b31464ba4da985001f7aba14682990.jpg)
Figure 13. Symmetric networks $H_{1}$ and $H_{2}$.

Let $A \in S$. We define $\operatorname{Span}(A)$ as the minimum of the largest channel used at any location in $A$, over all channel assignments to the hexagons in $A$ satisfying both the Adjacent and Same Channel Principles. From our previous results, we have:

Maximum Span Property: For all $A \in S$, $\operatorname{Span}(A) \leq 9$.

Our goal is to show that $\operatorname{Span}(A)$ is also bounded below by 9 for $A = H_{6}$ and $A = H_{\infty}$.

Let $A, B \in S$.
We define $A \subset B$ if $A$ can be traced out entirely inside $B$, that is, $B$ can be truncated by removing hexagons so that it is congruent to $A$.

Proposition 1. If $A \subset B$, then $\operatorname{Span}(A) \leq \operatorname{Span}(B)$.

[EDITOR'S NOTE: We omit the proof.]

Lemma 1. If the channel of the center transmitter in an $H_{2}$ network is not 1 or 8, then there is a channel greater than or equal to 9 in that network.

Proof: Suppose not. Let $x$ be the channel of the center hexagon, $1 < x < 8$. Then each of the seven hexagons in the $H_{2}$ network uses one of at most 8 channels. Let $T$ be the set of possible channels for the six neighbors of the center hexagon. By the Same Channel Principle, $x \notin T$. Since $x > 1$, we have $x - 1 \geq 1$, so by the Adjacent Channel Principle, $x - 1 \notin T$. Similarly, since $x < 8$, we have $x + 1 \leq 8$, and by the Adjacent Channel Principle, $x + 1 \notin T$. So $T \subset \{1, 2, 3, 4, 5, 6, 7, 8\} - \{x - 1, x, x + 1\}$, so $|T| \leq 5$. Since the six hexagons must draw on at most five different channels, at least one channel is assigned to two hexagons. But any two of the six neighbors are within two hexagons of each other, so this violates the Same Channel Principle, and we arrive at a contradiction.

Theorem 1. $\operatorname{Span}(H_6) = 9$.

Proof: Let $A$ be as shown in Figure 14; $A \in S$.

![](images/e6b56ff335fc8fe43206a7f407ad37167e7258b593b3b7bcbf68c8d5f3736295.jpg)
Figure 14. Network $A$ of Theorem 1.

Note that $A$ has three hexagons (shaded in the figure) that are the centers of three corresponding $H_{2}$ networks. By the Same Channel Principle, since these three hexagons are neighbors, they must be assigned three distinct channels. Thus, at most one can be assigned channel 1, at most one can be assigned channel 8, and therefore at least one of the shaded hexagons must be assigned a number other than 1 or 8.
Since the shaded hexagon with a channel other than 1 or 8 is the center of an $H_{2}$ network, that network has a channel greater than or equal to 9 in it, by Lemma 1. Since $A$ contains this $H_{2}$ network, there must be a hexagon in $A$ with a channel greater than or equal to 9 in it. Thus, $\operatorname{Span}(A) \geq 9$ . By the Maximum Span Property, $\operatorname{Span}(H_{6}) \leq 9$ . By Proposition 1, since $A \subset H_{6}$ we have $9 \leq \operatorname{Span}(A) \leq \operatorname{Span}(H_{6}) \leq 9$ . Therefore, $\operatorname{Span}(H_{6}) = 9$ . + +Corollary 1. $\operatorname{Span}(H_{\infty}) = 9$ . + +# Requirement C + +We move to the case in which transmitters within distance $2s$ differ by at least some given integer $k$ . The result of this new constraint is a modified principle for adjacent channels: + +The $k$ -Adjacent Channel Principle: No neighboring hexagons may have channels that differ by less than $k$ . + +Requirement C states that the distance to same-channel transmitters remains at least $4s$ , so the Same Channel Principle applies as before. + +We must modify our definition of $\operatorname{Span}(A)$ : Let $A \in S$ and $k > 1$ . We redefine $\operatorname{Span}(A, k)$ as the minimum of the largest channel used at any location in $A$ , over all channel assignments to the hexagons in $A$ that satisfy the $k$ -Adjacent and Same Channel Principles. Thus, $\operatorname{Span}(A) = \operatorname{Span}(A, 2)$ . + +Note that the analogue of Proposition 1 holds, that is, if $A \subset B$ , then $\operatorname{Span}(A, k) \leq \operatorname{Span}(B, k)$ . + +We want to find $\operatorname{Span}(H_6, k)$ for general $k$ . 
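The backtracking procedure described in the Computer Program section, extended with the $k$-Adjacent Channel Principle, can be sketched in a few dozen lines. The sketch below is our editorial illustration, not the authors' original program: it encodes hexagons in axial coordinates, grows the upper bound until the network can be filled, and so computes the span of small symmetric networks $H_n$ directly. (For example, it reports a span of 8 for the seven-hexagon network $H_2$, which is why Theorem 1 needs the larger network $A$ of Figure 14 to force a 9.)

```python
from itertools import count

# Axial coordinates: cell (q, r), third cube coordinate is -(q + r).
# hex_cells(n): the symmetric network H_n, n hexagons on each side.
def hex_cells(n):
    radius = n - 1
    return [(q, r) for q in range(-radius, radius + 1)
            for r in range(-radius, radius + 1) if abs(q + r) <= radius]

def hex_dist(a, b):
    dq, dr = a[0] - b[0], a[1] - b[1]
    return (abs(dq) + abs(dr) + abs(dq + dr)) // 2

def allowed(cell, ch, labels, k):
    """k-Adjacent Channel Principle: neighbors differ by at least k.
    Same Channel Principle: no repeat within graph distance 2."""
    for other, oc in labels.items():
        d = hex_dist(cell, other)
        if d == 1 and abs(ch - oc) < k:
            return False
        if d <= 2 and ch == oc:
            return False
    return True

def fill(cells, i, labels, bound, k):
    """Backtracking: try channels 1..bound for cell i, recurse, undo."""
    if i == len(cells):
        return True
    for ch in range(1, bound + 1):
        if allowed(cells[i], ch, labels, k):
            labels[cells[i]] = ch
            if fill(cells, i + 1, labels, bound, k):
                return True
            del labels[cells[i]]
    return False

def span(n, k):
    """Smallest upper bound for which the whole network can be filled."""
    cells = hex_cells(n)
    for bound in count(1):
        if fill(cells, 0, {}, bound, k):
            return bound
```

For the full 91-hexagon grid of Figure 1 this naive ordering is slow; the authors' row-by-row traversal and the low upper bounds are what kept the original program feasible.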
Running our computer program to find networks satisfying the $k$-Adjacent Channel Principle with various values of $k$, we find:

| $k$ | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| $\operatorname{Span}(H_6, k)$ | 9 | 12 | 15 | 17 | 19 | 21 | 23 | 25 | 27 |

So we hypothesized that

$$
\operatorname{Span}(H_6, k) = \left\{ \begin{array}{ll} 3k + 3, & 2 \leq k \leq 4; \\ 2k + 7, & k \geq 4. \end{array} \right.
$$

We found two distinct patterns, one with maximum channel $3k + 3$ and the other with maximum channel $2k + 7$ (Figure 15).

![](images/7b9a5c45112bd05e193739077ec61467b550ade0593b9860e074ecd84e49137b.jpg)
Figure 15. Patterns for channel assignments.

![](images/cc952f858c7c4757a77adba871fc3225b0b6197f16d1c611365c2387457fd449.jpg)

Highlighted with shading in the upper left-hand corner of each pattern is the tessellation that generates the network. Also shaded is the maximum channel used by each tessellation. The $2k + 7$ pattern is actually an arrangement that fits the Diagonals Method; a sample triangular subgraph is highlighted in the lower right-hand corner.

The $2k + 7$ pattern uses channels $1, 2, 3, k + 3, k + 4, k + 5, 2k + 5, 2k + 6,$ and $2k + 7$. It is valid because it follows the same rules as the Diagonals Method:

- Specifically, any given channel is three hexagons away from a hexagon with the same channel on the same row. Also, the nearest row with the same channel is three rows away. Thus the Same Channel Principle is satisfied.
- For $k > 2$, no channel can be placed next to any other channel in its column. However, any channel can be placed next to every entry in the other columns, since the two will differ by at least $k$.
Since every entry has six unique neighbors, placing the six channels from the other two columns around the entry results in a valid arrangement. Examining the network confirms this deduction: Every 1, 2, or 3 is surrounded by the six channels listed in the other two columns; the same also holds for these six channels. Thus, the $k$-Adjacent Channel Principle is satisfied.

The verification of the $3k + 3$ pattern is similar.

Comparing the two patterns, the minimum distance to a transmitter of the same channel is $3s\sqrt{3}$ in both patterns, and the minimum distance to the nearest adjacent channel is $3s$ in both. From these results, we can state a modified maximum span property:

Second Maximum Span Property. If $A \in S$, then

$$
\operatorname{Span}(A, k) \leq \min \{3k + 3, 2k + 7\}.
$$

Theorem 2. If $A \in S$, then $2k + 4 \leq \operatorname{Span}(A, k) \leq \min \{3k + 3, 2k + 7\}$.

Lemma 2. If $k > 6$, then $\operatorname{Span}(H_4, k) = 2k + 7$.

Theorem 3. If $A \in S$, $H_4 \subset A$, and $k > 6$, then $\operatorname{Span}(A, k) = 2k + 7$.

[EDITOR'S NOTE: We omit the proofs of these results.]

There are a few cases that our mathematical results do not cover; but since our program verifies all results that we obtain mathematically, we are confident that it can find $\operatorname{Span}(A, k)$ for any $A \in S$, $k > 1$.

# Requirement D

We consider the case of irregular transmitter placements and analyze two cases.

# All Transmitters Except One Are in Hexagon Centers

The exception may be anywhere in its hexagon. How far from the center of a hexagon can a transmitter be and still be in the hexagon? Just $s$. We consider the constraints of Requirement A.

Adjacent-channel transmitters must be $2s$, and same-channel transmitters $4s$, away from each other.
If we give one transmitter freedom to move up to $s$ away from its center, then to avoid interference the distance between the center of the irregularly placed transmitter's hexagon and the other transmitters must not be less than $2s + s = 3s$ for adjacent-channel transmitters, and $4s + s = 5s$ for same-channel transmitters.

The change to $3s$ has no effect, since there are no transmitters between $2s$ and $3s$ away from the center of a hexagon. Thus, the Adjacent Channel Principle does not change. However, there are 12 hexagons whose centers are $s\sqrt{21}$ away, which means they could have been same-channel transmitters before (by the Same Channel Principle) but now we cannot be sure. Thus, the Same Channel Principle no longer holds, as we now have transmitters three hexagons away that cannot have the same channel. These 12 hexagons are the shaded ones in a ring in Figure 16. The center hexagon is the one that can be irregularly placed.

![](images/c6ef54d46693e3cf44a8df30c7cd0555611bd4a9067e9c09b6e0edbdbb92d3a0.jpg)
Figure 16. The channel of any shaded hexagon cannot be the same as that of any white hexagon.

Allowing one transmitter to be irregularly placed has minimal effect on the span, changing it by at most 1, a fact that we prove. Let $A \in S$, $k > 1$, and $n \geq 0$. We define $\operatorname{Span}_n(A, k)$ as the minimum of the largest channel used at any location in $A$, over all channel assignments to the hexagons in $A$ that satisfy the $k$-Adjacent and Same Channel Principles, with the additional allowance that up to $n$ transmitters in $A$ may appear anywhere within their respective hexagons. Note that $\operatorname{Span}_0(A, k) = \operatorname{Span}(A, k)$.

Theorem 4. $\operatorname{Span}_1(A, k) \leq \operatorname{Span}(A, k) + 1$.

Proof: Let $x = \operatorname{Span}(A, k)$. We can assign channels to all hexagons in $A$ so as to yield a span of $x$, with the irregularly placed hexagon assigned channel $x$.
By the Same Channel Principle, all transmitters with channel $x$ are at least three hexagons away from the irregularly placed transmitter. By the $k$-Adjacent Channel Principle, the irregularly placed hexagon has no neighbors with channels greater than $x - k$. Change the assignment of the irregularly placed transmitter from $x$ to $x + 1$. Since there are no other transmitters with channel $x + 1$, the Same Channel Principle is satisfied. Furthermore, since there are no hexagons neighboring the irregularly placed transmitter with channels greater than $x - k$, the $k$-Adjacent Channel Principle is satisfied. Thus, we have constructed a valid pattern for $A$ with highest channel $x + 1$ and a single irregularly placed transmitter. So $\operatorname{Span}_1(A, k) \leq x + 1 = \operatorname{Span}(A, k) + 1$.

# Any Transmitter Can Occupy Any Position within Its Hexagon

The farthest that a transmitter can be from the center of its hexagon is $s$. Suppose that two transmitters need to be at least $2s$ apart; to guarantee this no matter where they sit within their hexagons, we require that their hexagons' centers be $2s + 2s = 4s$ apart. To guarantee that transmitters are $4s$ apart, we can place their hexagons' centers at least $4s + 2s = 6s$ apart.

Encoding this scenario in our computer program, we find a pattern for the case of the large network (Figure 17).

![](images/6552b750210adad720c32fd23f30c38f97ae9d091278970d3091a27295f2f652.jpg)
Figure 17. A valid network in which any transmitter can be located anywhere in its hexagon.

This network has a maximum channel of 18. Every channel is the minimum distance from a same-channel transmitter on two sides, as illustrated by the shaded hexagons assigned channel 1. Every transmitter is the minimum distance from each adjacent channel on three sides; that is, there is a hexagon closer that could be assigned an adjacent channel and still satisfy both principles.
If a channel is not 1 or 18, it has two adjacent channels, so it is the minimum distance from six transmitters with adjacent channels; this is illustrated by the shaded hexagon assigned channel 10, which is the minimum distance from the six shaded hexagons assigned channels 9 and 11.

Since all 18 channels are used, each transmitter is minimally close to at least five other transmitters, and since our program cannot find a pattern with maximum channel 17, we conjecture that $\operatorname{Span}(H_6, 2) = \operatorname{Span}(H_\infty, 2) = 18$.

# Several Levels of Interference and Other Factors

We were asked to consider generalizations of the problem.

- A network with several levels of interference could imply a third level of interference, in addition to same-channel interference and spectral-spreading interference. It could also imply varying levels of spectral spreading, such as a rule that channels differing by 1 must be at least $3s$ apart, channels differing by 2 must be $2s$ apart, and same-channel interference remains unchanged.
- Interference levels may vary in different parts of the grid. For example, the top half of the network may satisfy the 2-Adjacent Channel Principle while the bottom half requires the 3-Adjacent Channel Principle. Our program could be modified to calculate the span of such a network.
- Transmitters could have non-repeated channels, or there may be certain channels that no transmitters may use.
- Perhaps a small amount of spectral spreading is acceptable; that is, a network might set $n$ as the limit to the number of transmitters allowed within distance $d \cdot s$ with channels differing by less than $k$, for some nonnegative integers $d$, $k$, and $n$. In Requirement A, spectral spreading would be described by $n = 0$, $d = 2$, $k = 2$. The Same Channel Principle would be defined by $n = 0$, $d = 4$, $k = 1$.

As the problem states, one basic approach is to partition the region into regular hexagons.
Squares and triangles could also be used; these are the only regular polygons besides hexagons that tessellate a plane [Firby and Gardiner 1982, 151]. Our program could easily be modified to handle such networks.

# Press Release

# Radio Channel Assignments Problem Solved:

# Robust Computer Program, Mathematical Theory Pave Path to Solution

SAN LUIS OBISPO, Feb. 30 — A team of three undergraduate students from Cal Poly cracked the case of the radio channel assignments. The team had to assign channels to radio transmitters on a hexagonal grid in such a way as to prevent several levels of frequency interference.

The team first determined that no more than 11 channels would be needed; then a computer program suggested that a solution with 9 channels might be possible. To prove that 9 channels—but no fewer—would work, the team developed "span theory," a new mathematical theory of channel assignment.

The team also solved several more general problems accounting for wider channel separation or allowing transmitters to be moved around.

# Strengths and Weaknesses

Through a series of models and some mathematical theory, we find and rigorously prove that the span of the given figure in Requirement A, and of an arbitrarily large figure in Requirement B, is 9.

Our computer program verifies all of our results and is an invaluable tool for determining patterns. It is very robust in its ability to calculate spans for networks of almost any size, subject to constraints that can easily be modified. Execution is almost instant for networks with fewer than 100 hexagons. It would be very difficult to prove that the code is correct, but we develop a rigorous span theory to prove the values of spans.

We also present some early heuristics, the Embedded Subgraphs Method and Diagonals Method, that provide near-span solutions and are easily shown to be valid without span theory.
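The $2k + 7$ strategy for Requirement C can also be written down in closed form. The sketch below is our editorial illustration (the axial-coordinate bookkeeping is ours, not the paper's): it places the three channel groups $\{1, 2, 3\}$, $\{k+3, k+4, k+5\}$, and $\{2k+5, 2k+6, 2k+7\}$ along diagonals and verifies both principles on a patch of grid.

```python
# Three-group diagonal pattern with maximum channel 2k + 7.
# Group choice (q - r) mod 3 guarantees all six neighbors lie in the
# other two groups; member choice (q + r) mod 3 forces same-channel
# cells to differ by multiples of 3 in both coordinates.
def label(q, r, k):
    groups = [[1, 2, 3],
              [k + 3, k + 4, k + 5],
              [2 * k + 5, 2 * k + 6, 2 * k + 7]]
    return groups[(q - r) % 3][(q + r) % 3]

def hex_dist(a, b):
    dq, dr = a[0] - b[0], a[1] - b[1]
    return (abs(dq) + abs(dr) + abs(dq + dr)) // 2

def check_pattern(k, radius=4):
    """Verify the k-Adjacent and Same Channel Principles on a patch."""
    cells = [(q, r) for q in range(-radius, radius + 1)
             for r in range(-radius, radius + 1)]
    for a in cells:
        for b in cells:
            if a == b:
                continue
            la, lb, d = label(*a, k), label(*b, k), hex_dist(a, b)
            if d == 1 and abs(la - lb) < k:
                return False
            if d <= 2 and la == lb:
                return False
    return True
```

For $k \geq 4$ this matches the $2k + 7$ span established above; for smaller $k$ the pattern is valid but not optimal.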
In some scenarios, these methods might be preferred for assigning channels, such as if certain channels are forbidden.

For Requirement C, aided by our computer program and span theory, we find two strategies and set upper and lower bounds on the span of a network. With span theory, we find the exact value of the span both for Figure 1 and for an arbitrarily large network, as well as for hexagonal networks with side length exceeding 3, provided $k > 6$. Though we cannot rigorously determine the spans for $3 \leq k \leq 6$, the computer program calculates them.

We consider only extreme cases of irregular placement in Requirement D. We prove a bound on the span if only one transmitter is allowed to be irregularly placed. Using our program, we also find a span, 18, for the case in which every transmitter is allowed to be irregularly placed. Channel assignments are valid no matter where a transmitter is moved, provided that it stays within its hexagon.

# References

Conway, John H., and Richard Guy. 1996. The Book of Numbers. New York: Springer-Verlag.
Firby, P.A., and C.F. Gardiner. 1982. Surface Topology. Chichester, UK: Ellis Horwood.
Johnsonbaugh, Richard. 1993. Discrete Mathematics. 3rd ed. New York: Macmillan.
Rorabaugh, C. Britton. 1990. Communications Formulas and Algorithms. New York: McGraw-Hill.
Saaty, Thomas, and Paul Kainen. 1977. The Four Color Problem: Assaults and Conquests. New York: McGraw-Hill.
Smith, Douglas, Maurice Eggen, and Richard St. Andre. 1997. A Transition to Advanced Mathematics. 4th ed. Monterey, CA: Brooks/Cole.

# "We're Sorry, You're Outside the Coverage Area"

Robert E. Broadhurst

William J. Shanahan

Michael D. Steffen

Lewis and Clark College

Portland, OR

Advisor: Robert W. Owens

# Our Approach

We assume that the physical transmission properties do not result in penetration or interference varying from channel to channel.
Since the channels occupy a continuous portion of the frequency spectrum, they can be numbered with integers from 1 up to the number $n$ of channels; $n$ represents the bandwidth of the portion of the spectrum. We call the minimum possible value of the bandwidth that achieves all the requirements for a given transmitter arrangement the span of that arrangement.

Our problem is to provide a method for arranging channels among the transmitters that achieves as low a bandwidth as possible. We analytically establish a lower limit for the span and we find the best solutions that we can to the given problem.

The bandwidth of a feasible solution is an upper limit on the span. We seek to raise (through further analysis) the lower limit and to lower (through further construction) the upper limit. If our upper limit meets our lower limit, then we have completely determined the span.

Requirements A and B are special instances of the more general problem posed in Requirement C; we treat A, B, and C as one main problem and solve them together. Requirement D asks us to consider generalizations of the problem; we examine how to treat some of the weaknesses in our main problem. Finally, Requirement E asks us to write an article for the local newspaper, which appears at the end.

# The Main Problem

At any point in the plane, a receiver can obtain a clear signal from transmitters within some maximum range, without interference from the signals broadcast by neighboring transmitters.

# Assumptions

- The region of interest is partitioned into adjacent regular hexagons of the same size.
- The length of the side of any hexagon is $s$.
- Each hexagon represents the area serviced by one transmitter, which is located in the center of its hexagon.
- Each transmitter broadcasts a single channel.
- To minimize interference, any two transmitters occupying the same channel must be at least $4s$ apart.
- To minimize sideband interference, transmitters less than $2s$ apart must use channels that differ by at least $k$ channels. (In Requirements A and B, $k = 2$.)
- The region of interest is a field of indefinite size and shape. A specific region is given for Requirement A and for a special case of Requirement C.

# Definitions

- Cell: The area serviced by a given tower—in this case, hexagons.
- Distance: The minimum number of edges that must be crossed to move from one cell to another. Any hexagon in an infinite field is surrounded by six cells of distance 1, twelve of distance 2, and so on.
- Separation: The absolute value of the difference between the integers assigned to two channels.

# The Model

Each hexagon is labeled with the integer channel of its transmitter. To satisfy the $2s$ requirement, adjacent hexagons must have a channel separation of at least $k$. To satisfy the $4s$ requirement, neither adjacent hexagons nor hexagons that are a distance of 2 away can be labeled with the same integer; they must have a separation of at least 1. In Figure 1, the inner ring of 6 unmarked hexagons cannot assume the same label as the center hexagon, nor any label with a separation of less than $k$ from it. The surrounding ring of 12 barred hexagons cannot be labeled with the same value as the center.

![](images/1ef312dfe4e8db26c05af816e7f0ea4144a60e8b095b064bc9e783dc9966df19.jpg)
Figure 1. The dashed circle has radius $2s$, the dash-dotted circle has radius $4s$.

We establish a minimal value for the span:

Theorem. The span of any solution is at least 7.

Proof: Given any hexagon $X$, there are 6 additional hexagons whose centers are within a distance of $2s$ from the center of $X$. Any two of these 7 hexagons have centers that are less than $4s$ apart. Therefore, no 2 of the 7 hexagons can have the same label.

Theorem. For $k = 1$, the span is 7.

Proof: The span is at least 7.
A solution with 7 as the largest label used is illustrated in Figure 2. A solution for the infinite region of interest can be constructed by tiling the plane with the center group of 7 cells. $\square$ + +![](images/948efb851361e85ea595281b4a6c5badcc77d85c5d4279efc7ed925147f93498.jpg) +Figure 2. Solution showing that the span is 7. + +Theorem. For any $k \geq 1$ , the span is at least $2k + 5$ . + +Proof: Select three cells such that each is adjacent to the others and no cell is on the boundary of the region of interest. No two of these three cells can assume the same label. Thus, there must be a maximum, a middle, and a minimum value: $A$ , $B$ , and $C$ respectively. Since two adjacent cells' labels cannot be separated by less than $k$ , we know that $A - B \geq k$ and $B - C \geq k$ . + +Consider the six hexagons adjacent to $B$ . The minimum separation between $B$ and any adjacent hexagon is $k$ , so the labels of the cells adjacent to $B$ cannot be any of $B, B \pm 1, B \pm 2, \ldots, B \pm (k - 1)$ . Therefore, there are $2(k - 1) + 1$ channels between $A$ and $C$ that cannot be adjacent to $B$ . There must be at least 6 distinct channels adjacent to $B$ ; thus, there must be at least $2(k - 1) + 7$ channels, so the span is at least $2k + 5$ . + +Theorem. For $k = 2$ , the span is 9. + +Proof: By the preceding theorem, the span is at least 9. A solution for Requirement A with 9 as the largest label used is illustrated in Figure 3. $\square$ + +![](images/dcc2e1e6c6dd16abd83764527d85004034c04c2ca0b5724d7a939ba6d4de24ef.jpg) +Figure 3. Solution to Requirement A with span 9. + +The solution contains a center repeating group of 9 cells that tiles the plane, hence providing a solution to the case in Requirement B. An additional method of generating this arrangement is to add 2 whenever you move from one cell to its neighbor up and to the left, add 3 when you move up and to the right, and add 4 when you move down; if the result is greater than 9, subtract 9. 
Label each cell with the result and continue in all directions. If you start at a cell labeled $x$ and move once in each direction, you will return to the starting cell. You will have added 2, 3, and 4 to $x$ and subtracted 9 once, giving a net label of $x$. Therefore, this labeling algorithm is consistent.

At this point, the span has been completely specified for the cases $k = 1, 2$.

Theorem. For an infinite region of interest and for $k \geq 2$, the span is no more than $3(k + 1)$. (This extends the spanning solution for $k = 2$.)

A solution for $k = 3$ is displayed in Figure 4; the group of 12 cells numbered in bold tiles the plane. We do not prove this theorem for general $k$, because we improve on it shortly.

We summarize in Table 1 what we know about the span as a function of $k$.

![](images/7b0545c5de2cb33083b35150cb43d0f111bf8819c9e40a449922a27abc99dfed.jpg)
Figure 4. Solution for $k = 3$, with span 12.

Table 1. Minimum and maximum values for the span by value of $k$.
| $k$ | Minimum | Maximum |
| --- | --- | --- |
| 1 | 7 | 7 |
| 2 | 9 | 9 |
| 3 | 11 | 12 |
| 4 | 13 | 15 |
| 5 | 15 | 18 |
| $k$ | $2k + 5$ | $3k + 3$ (not proven) |
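The span-9 labeling rule for $k = 2$ given earlier (add 2 moving up and to the left, add 3 moving up and to the right, add 4 moving down, subtracting 9 whenever the result exceeds 9) admits a closed form. The coordinate convention and the formula below are our editorial derivation of that walk, not the authors' own code, together with a brute-force check of both requirements on a patch of cells:

```python
# Closed form of the "+2 up-left, +3 up-right, +4 down (mod 9)" walk:
# with up-left = (-1, 0), up-right = (1, -1), down = (0, 1) in axial
# coordinates, the increments compound to 7q + 4r (mod 9).
def label(q, r):
    return (7 * q + 4 * r) % 9 + 1

def hex_dist(a, b):
    dq, dr = a[0] - b[0], a[1] - b[1]
    return (abs(dq) + abs(dr) + abs(dq + dr)) // 2

def check(radius=5, k=2):
    """Adjacent cells must differ by >= k; no repeats within distance 2."""
    cells = [(q, r) for q in range(-radius, radius + 1)
             for r in range(-radius, radius + 1)]
    for a in cells:
        for b in cells:
            if a == b:
                continue
            d = hex_dist(a, b)
            if d == 1 and abs(label(*a) - label(*b)) < k:
                return False
            if d <= 2 and label(*a) == label(*b):
                return False
    return True
```

All nine channels appear and the pattern tiles the plane, matching the $k = 2$ row of Table 1.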
+ +We develop some further theory to help us determine the span for $k = 3$ . + +Theorem (The Symmetry Argument). If a solution with span $s$ uses label $x$ , then there is a solution with the same span that uses label $s + 1 - x$ . + +Proof: The cells of the given solution are labeled with values between 1 and $s$ . Relabel each cell $y$ with the label $s + 1 - y$ . Since $1 \leq y \leq s$ , then $1 \leq s + 1 - y \leq s$ . The difference $|a - b|$ between the labels $a$ and $b$ of any two cells does not change, because $|(s + 1 - a) - (s + 1 - b)| = |a - b|$ . Therefore, the new labeling is also a + +solution, because solutions depend on only the differences between the labels of the cells. We know that at least one cell was originally labeled $x$ ; that cell is now labeled $s + 1 - x$ . + +Contrapositive of the Symmetry Argument: If there are no solutions of span $s$ that include label $x$ , then there are no solutions of span $s$ that include label $s + 1 - x$ . + +Theorem. For $k = 3$ , the span is 12. + +Proof: The span is at least 11. Suppose that there is a solution that uses label 3. Labels 1, 2, 3, 4, 5 cannot be adjacent to 3, so the six adjacent hexagons must be labeled 6, 7, 8, 9, 10, and 11. The only label that could be adjacent to both 3 and 9 would be 6; but we need at least 2 channels that are adjacent to both (Figure 5). Hence, 3 cannot be used in a solution with span 11. By the contrapositive of the symmetry argument, no solutions of span 11 include label 9, either. + +![](images/012a78a0b12200af3e325593b7a494f19e36052a843c25c921dd6b2a588dade9.jpg) +Figure 5. Situation that arises if label 3 is used in a span of 11. + +Now suppose that label 4 is used. Then labels 2, 3, 4, 5, 6, and 9 cannot be adjacent to 4, leaving only labels 1, 7, 8, 10, and 11. We need 6 distinct labels adjacent to 4 but only 5 remain. Thus, 4 cannot be used in a solution with span 11. Again by symmetry, no solutions of span 11 include label 8. 
An identical argument excludes labels 5 and 7. Only labels 1, 2, 6, 10, and 11 remain; but at least 7 labels are required for a solution. Thus, a solution for $k = 3$ with a span of 11 is not possible. We have already constructed a solution of bandwidth 12 for $k = 3$, so the span is 12.

We turn to $k \geq 4$.

Theorem. For $k \geq 4$, the span is no more than $2k + 7$.

Proof: A solution satisfying all constraints may be constructed as follows. Consider Figure 6. Let $(A_1, A_2, A_3) = (1, 2, 3)$; $(B_1, B_2, B_3) = (k + 3, k + 4, k + 5)$; and $(C_1, C_2, C_3) = (2k + 5, 2k + 6, 2k + 7)$.

Such an assignment guarantees that the difference between any two channels that are in different groups is at least $k$. Every channel $X$ is surrounded by channels that are in the other two groups, so there are no adjacent channels in the same group as $X$. Therefore, all channels surrounding $X$ have labels that differ from $X$'s label by at least $k$. In addition, no two cells with the same label are closer together than $4s$. These two properties establish this arrangement as a solution. The highest label used is $2k + 7$, so the bandwidth is $2k + 7$. This establishes a new upper limit on the span.

![](images/7c0c1509d189b16d35bcd8336d4bda7a3c016d2a90b92d6986610b26f9894f05.jpg)
Figure 6. Design of a solution.

So, for $k \geq 4$, the span is either $2k + 5$, $2k + 6$, or $2k + 7$. We need more building blocks to help us:

Lemma 1. If there is no solution with bandwidth $b$, then the span is greater than $b$.

Proof: Assume that there is a solution with bandwidth $m < b$. Replace the largest channel used with $b$. Since the relative spacing between labels either stays the same or grows, this also must be a solution, with bandwidth $b$, contradicting the assumption that no such solution exists.

Lemma 2. For any $k \geq 4$, given 6 consecutive labels, any selection of 5 of these cannot all be adjacent to a given cell $X$.
+ +Proof: Five hexagons all adjacent to a given hexagon include four edges between pairs (Figure 7). + +![](images/5afc3f29b0a3000f50b5fe1bfcc4a45b672221090f605e69513f4570041d7df8.jpg) +Figure 7. Situation of Lemma 2. + +But for the six labels $y, y + 1, y + 2, y + 3, y + 4$ , and $y + 5$ , there are at most three ways in which they can be adjacent ( $y$ with $y + 4$ , $y$ with $y + 5$ , or $y + 1$ with $y + 5$ , for $k = 4$ ). Since no label can be used more than once, there is no way to label all five hexagons. + +Lemma 3. Any 4 hexagons that are all adjacent to a given hexagon must include at least 2 whose labels differ by more than $k$ . + +Proof: Assume that there are 4 hexagons around a given hexagon such that the lowest label of the 4 is $x$ and the highest label is $y \leq x + k$ . At least 2 edges are shared among the four hexagons (Figure 8). But there is at most one way in which the labels between $x$ and $y$ can be adjacent: $x$ with $x + k$ . Since no labels can be used more than once, there is no way to label all 4 hexagons. + +![](images/9a2d93f15cae66cefc5e305d67dfb436bc6f8e2d668e85afccbc88c4ad20e5e1.jpg) +Figure 8. Situation of Lemma 3. + +Theorem. For $k \geq 4$ , the span is at least $2k + 7$ . + +Proof: Suppose that the span is $2k + 6$ and the label $k + 1$ is used. The possible labels adjacent to $k + 1$ are $1, 2k + 1, \ldots, 2k + 6$ , which include 6 consecutive channels. But a selection of 5 out of 6 consecutive labels cannot all be adjacent to $k + 1$ (Lemma 2). So any possible solution uses neither $k + 1$ nor, by the contrapositive of the symmetry argument, $k + 6$ . + +Next, suppose that the label $k + 2$ is used. The possible labels adjacent to $k + 2$ are $1, 2, 2k + 2, \ldots, 2k + 6$ , which include 5 consecutive channels. But a selection of 4 out of 5 consecutive labels cannot all be adjacent to $k + 2$ (Lemma 3). So any solution uses neither $k + 2$ nor $k + 5$ (by symmetry). 
There are now 3 groups of labels remaining: $\{1, \ldots, k\}$ (group $A$), $\{k + 3, k + 4\}$ (group $B$), and $\{k + 7, \ldots, 2k + 6\}$ (group $C$). No two channels in the same group can be adjacent, since they differ by less than $k$.

Suppose that a label $X$ from group $A$ is used. The possible adjacent labels must be from groups $B$ and $C$. Since group $B$ has only 2 channels, there must be at least 4 channels from $C$ adjacent to $X$. But the labels in $C$ differ pairwise by less than $k$, so by Lemma 3 this is impossible; hence no label from group $A$ is used. By symmetry, no label from group $C$ is used.

This leaves only the 2 channels in group $B$. Since every solution includes at least 7 distinct channels, there is no solution with a bandwidth of $2k + 6$, so by Lemma 1 the span is at least $2k + 7$.

We have specified the span for all $k$: 7 for $k = 1$, 9 for $k = 2$, 12 for $k = 3$, and $2k + 7$ for $k \geq 4$.

# Evaluation of the Model

Our model finds the span exactly for the constraints in Requirements A, B, and C. The model is still limited by the assumptions of regular placement of transmitters, equal coverage around transmitters, and equal channel size. These assumptions, however, may be too limiting or erroneous in real-world applications. We therefore examine three generalizations of the model.

# Multiple Levels of Interference

Typically, in wireless communication the strength of a signal from a transmitter declines with the distance between receiver and transmitter. So interference caused by transmitters occupying the same or close frequencies also decreases with distance. In our main model, transmitters on the same frequency must be at least $4s$ apart, while transmitters on close frequencies must be $2s$ apart. We consider a generalization.

The distance between any two cells is 1 more than the minimum number of cells you must go through to go from the first cell to the second (a distance of 1 corresponds to $2s$).
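This cell distance is the standard hexagonal-grid distance. As a quick sketch (our own illustration, not from the paper), it can be computed in axial coordinates, which the paper itself does not use:

```python
# Cell distance on a hexagonal grid, in axial coordinates (q, r): the value is
# 1 more than the number of cells passed through, so adjacent cells are at
# distance 1, and a distance of 1 corresponds to 2s between transmitters.
def hex_distance(c1, c2):
    dq = c1[0] - c2[0]
    dr = c1[1] - c2[1]
    return (abs(dq) + abs(dr) + abs(dq + dr)) // 2

print(hex_distance((0, 0), (1, 0)))   # an adjacent cell -> 1
print(hex_distance((0, 0), (2, -1)))  # a second-ring cell -> 2
```

The same function serves for checking both the $2s$ rule (distance 1) and the $4s$ rule (distance 2) of the main model.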
We consider a model where the amount of interference decreases linearly with distance. The needed separation $f$ between any two cells as a function of distance $d$ is $f = k(n + 1 - d)$, for $d \leq n$ (so cells at distance $n$ must differ by at least $k$, and there is no constraint beyond distance $n$).

Theorem. The span for $n = 1$ is $2k + 1$.

Proof: For any solution, there is a smallest label $z$ that is used. Select two cells that are adjacent to this cell and adjacent to each other; these two cells must have different labels, $x, y$, with $x < y$. We know that $x \geq k + z$. But then $y \geq x + k$, so $y \geq 2k + z$. Since $z \geq 1$, we have $y \geq 2k + 1$. This proves that the span is at least $2k + 1$.

By setting $z = 1$, $x = k + 1$, and $y = 2k + 1$, and arranging them as indicated in Figure 9, we achieve $2k + 1$ as the bandwidth, so the span is $2k + 1$.

Theorem. For $k = 1$ and $n \geq 2$, the span is no more than $n^3 + n^2 - n + 1$.

Proof: We demonstrate our construction first for the case $n = 3$ in Figure 10. The general construction is shown in Figure 11, where $X_{a,b} = 1 + (a - 1)(n - 1) + (b - 1)n^2$. The largest channel is $X_{n+1,\,n+1} = 1 + n(n - 1) + n \cdot n^2 = n^3 + n^2 - n + 1$.

Theorem. The span for any $k$ is at least

$$
\left\{ \begin{array}{ll}
3k \left( \dfrac{n^2}{4} + \dfrac{n}{2} \right), & \text{for even } n; \\[2ex]
\dfrac{3k(n^2 - 1)}{4} + 1, & \text{for odd } n.
\end{array} \right.
$$

![](images/a00b6b7c75b974b48f9b62a7a776c813dfa34493233a55e07e99a5a006e3ef96.jpg)
Figure 9. Solution with span $2k + 1$.

![](images/69cbb77b2d5947880f52d7364611ca69eda3a678a0c3e7ef688de387bfef9b75.jpg)
Figure 10. Construction for the case $n = 3$.

![](images/96dd0eeea73e485090405c6de554b5a6a8a38d91291280919f4c52406c473133.jpg)
Figure 11. Construction for general $n$.

Proof: Case $n$ even: Consider an arbitrary cell and the first $n/2$ rings of cells around it; all of the cells in this region are within a distance $n$ of each other.
Therefore, they must all be distinct, and they must all differ from each other by at least $k$. There are

$$
3 \left(\frac {n ^ {2}}{4} + \frac {n}{2}\right) + 1
$$

cells in this region; since $m$ labels that pairwise differ by at least $k$ force a largest label of at least $k(m - 1) + 1$, the span must be at least $3k\left(\frac{n^2}{4} + \frac{n}{2}\right) + 1$, which is at least the stated bound.

Case $n$ odd: Any solution for $n$ is also a solution for $n - 1$. Therefore the same formula applies with $n - 1$ in place of $n$, yielding the value stated.

Theorem. Given a solution for some $n$ with $k = 1$ and bandwidth $b$, a solution exists for the same $n$ and any $k$ with bandwidth $kb - k + 1$.

Proof: Multiply all labels by $k$; this multiplies the separation between any two labels by $k$. Subtract $k - 1$ from all channels. This returns the lowest label to 1 and does not affect separations.

We have established that the lower bound of the span for this modified model increases linearly with $k$ and with $n^2$; we have established an upper bound that increases linearly with $k$ and with $n^3$.

# Weaknesses

This modification still assumes a uniform arrangement of the transmitters. In addition, there is a discrepancy between distance as we have defined it for hexagons and actual linear distance; however, an increase in hexagonal distance occurs if and only if there is an increase in actual distance, as long as the hexagonal distance is less than 7.

# Freedom of Transmitter Placement

We consider how far from the center of its cell a transmitter may be placed without violating any of the original constraints.

We assume that all transmitters can be displaced from the centers of their cells by an equal amount. The two interference constraints still apply: Transmitters within $2s$ of each other must still have a separation of at least $k$, and transmitters within $4s$ of each other must have a separation of at least 1.
We consider two questions:

- What is the maximum freedom of displacement that can be allotted to transmitters before a solution developed by using our main model ceases to be a solution?
- What is the maximum freedom that can be allotted before a solution ceases to be a minimum solution?

Theorem. If transmitters are displaced by less than $0.29s$, a solution developed by assuming that they are in the centers of the cells is still a solution.

Proof: In the uniform arrangement of transmitters, the nearest transmitter that is more than $2s$ away from a given transmitter is at a distance of $s\sqrt{7} \approx 2.64s$. Their channel separation could potentially be less than $k$, so we assume that it is. If both are moved toward each other by equal amounts, the $2s$ rule is violated once each has moved $0.32s$.

The nearest transmitter that is more than $4s$ away is at a distance of $s\sqrt{21} \approx 4.58s$. The two could have the same label. If they both move toward each other by equal amounts, the $4s$ rule is violated once each has moved $0.29s$.

Therefore, as long as all transmitters move less than $0.29s$, neither rule will be violated.

Theorem. If transmitters are displaced by less than $0.13s$, a minimum solution developed by assuming that they are in the centers of the cells is still a minimum.

Proof: In the uniform arrangement of transmitters, the farthest transmitter that is less than $2s$ away from a given transmitter is at a distance of $s\sqrt{3} \approx 1.73s$. If both are moved away from each other by equal amounts, they are more than $2s$ apart once each has moved $0.13s$.

The farthest transmitter that is less than $4s$ away is at a distance of $2s\sqrt{3} \approx 3.46s$. If they both move away from each other by equal amounts, they become more than $4s$ apart once each has moved $0.27s$.

Therefore, as long as all transmitters move less than $0.13s$, none of the constraints is affected.
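The four thresholds in the two theorems come straight from the lattice distances quoted in the proofs; a quick numerical check (our own sketch, with $s$ normalized to 1):

```python
import math

# The four displacement bounds quoted in the theorems above, recomputed from
# the lattice distances given in the proofs (s normalized to 1).  Each bound
# is half the slack, since both transmitters move by equal amounts.
s = 1.0
toward_2s = (math.sqrt(7) * s - 2 * s) / 2      # nearest center beyond 2s
toward_4s = (math.sqrt(21) * s - 4 * s) / 2     # nearest center beyond 4s
away_2s = (2 * s - math.sqrt(3) * s) / 2        # farthest center within 2s
away_4s = (4 * s - 2 * math.sqrt(3) * s) / 2    # farthest center within 4s

print(round(toward_2s, 2), round(toward_4s, 2),
      round(away_2s, 2), round(away_4s, 2))
```

This prints 0.32, 0.29, 0.13, and 0.27, matching the values used in the proofs; the binding thresholds are the minima 0.29 and 0.13.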
# Weaknesses

If transmitters are displaced from the centers, regions of their cells are more than $s$ away from the transmitter. In addition, this approach does not consider the relative strength of signals as a function of the relative distance to the transmitters.

# Rectilinear Constraints on Transmitter Placement

Our main model assumes that transmitters are arranged in a honeycomb pattern. However, due to city streets and township and county lines, a cellular service provider may be constrained to arrange transmitters rectilinearly. Therefore, we attempt to find the span under the original $2s$ and $4s$ constraints when the infinite plane is tiled with squares, each containing a transmitter in the center.

So, we consider a generalization of the $k = 2$ case (as specified in Requirements A and B) with an infinite grid of squares instead of hexagons. This requires a new definition of $s$, which now becomes the distance from the center of a square to one of its corners, so that $s$ is still the maximum distance between a transmitter and a point in its cell. The cells considered within $2s$ and $4s$ of any given cell are shown in Figure 12. The inner four unmarked squares must have a separation of at least 2 from the center square. The outer ring of barred squares must not have the same label as the center (a separation of at least 1). The cells at a distance of exactly $2s$ or $4s$ are not included in the corresponding regions. We make this choice because when the cells are hexagons, each cell contains regions whose points are simultaneously within a distance of $s$ of two transmitters; with square cells, there is no region of nonzero area where a point in one cell is within $s$ of the transmitter in a diagonally adjacent cell.

![](images/44c27f31b83568009f33b096ae0bc16684053dc3ff201b9910a5800107a6ad0a.jpg)
Figure 12. Square lattice of transmitters, with radii of $2s$ and $4s$ indicated.

Theorem. The span is at least 8.
+ +Proof: In Figure 13, by the $4s$ condition, all of the boldfaced cells must be labeled distinctly. Therefore, there must be at least 7 distinct labels. Assume that there is a solution with a bandwidth of 7. Then all of the cells that are not in boldface must be labeled as indicated. However, all 7 of the labels are within $4s$ of the cell indicated with a question mark. Therefore, 7 labels are insufficient. + +![](images/1c97388bbd095276387bcfd6350a9e092d7dc97c68a4af6a751ae9136bd9ef72.jpg) +Figure 13. Seven labels are not enough ... + +Theorem. The span is 8. + +Proof: The construction of a solution with a bandwidth of 8 is shown in Figure 14. $\square$ + +![](images/32c0584750aa8f1f19fb7af4f5da14320150ec514195fb8bd429620d3f975efe.jpg) +Figure 14. . . . but eight are. + +The block of repeating channels in Figure 14 is reflected about a horizontal axis and then tiled to the right and left. In this fashion a solution for an infinite array of squares can be constructed. + +# Weaknesses + +This modification still assumes that transmitters are placed regularly, that channels with a separation of more than 2 never interfere with each other, and that there is no interference beyond a radius of $4s$ . + +# News Release + +Let's say that you work for a company that provides cellular telephone service. Your job is to decide how the company should go about expanding into a new area. You decide where transmission towers will be placed and what channels will be needed. + +You know that gaps in coverage must be avoided. There must not be any places in the new service area where a customer's cell telephone reads "We're sorry, you're outside the coverage area." + +One way to ensure that there are no "holes" or gaps in coverage is simply to put up lots and lots of towers. "Put up a tower every half-mile," you tell your boss. 
Your boss says that the idea is interesting, and would certainly guarantee complete coverage, but that another essential goal is to minimize the number of towers needed. Each tower costs tens of thousands of dollars. + +So you return to your cubicle and do some research. You learn that the signals from a transmission tower get weaker as a customer's phone gets farther and farther from the tower. You find out that there is a distance, say, five miles, beyond which clear reception can no longer be guaranteed. So you take a map of the region and start drawing circles whose radius is this distance (reduced to the scale of the map, of course). Right away you realize that the circles will have to overlap a bit. There are some quarters sitting on your desk, and you quickly see that one quarter can be surrounded by six other quarters, so that all the quarters are pushed together as close as possible. But there are little + +triangular holes in between them, so you have to overlap them a bit so that the area that they cover has no holes. + +This, in fact, turns out to be the most efficient way to cover a plane with circles—there is as little overlap as possible. So you cover your map with circles in this fashion and decide to place a transmission tower in the center of each circle. Since the two neighboring circles overlap a bit, to eliminate those pesky triangular holes you split the overlap area between them. Since every circle is surrounded by six other circles, you do this for all six overlapping areas. Now the region (cell) that is serviced by any transmission tower is a hexagon. In fact, the hexagons form a honeycomb pattern on your map, and you wonder why you didn't think of that before. The distance from the center to the corner of any hexagon is as far as you can get from the center without leaving the hexagon, so this distance must be the five miles that you determined earlier is as far as a customer can get from a tower and still be assured of coverage. 
Quite excited with your discovery, you run to your boss shouting, "Eureka! Hexagons!" Your boss smiles, a bit condescendingly, and informs you that they already knew that hexagons were the best way to cover a large area. What your boss really needs you to determine is how many channels the company will have to purchase from the FCC in order to have enough for all the towers. "That's easy," you respond, still not realizing that nothing is as easy as it seems, "just buy one channel for each tower!" Your boss sighs, silently wondering if you are the right person for this job. Then your boss reminds you that not spending large amounts of unnecessary money is also an essential part of doing business. The company needs to know: What is the smallest number of channels that it will have to buy? And how should the channels be assigned to the towers?

Having finally learned to be cautious, you suppress the urge to blurt out, "Just put the same channel on all the towers!" Instead, you begin asking questions. Your manager refers you to the engineering department. This is where the problem gets interesting.

First, you are told that if a transmitter in some cell uses a particular channel, then none of the cells in the first ring of six hexagons around it can share that channel. This is because when you move from one cell to another, the way that the cell phone stops talking to one tower and starts talking to the tower in the new cell is by changing channels. If two cells right next to each other used the same channel, a phone in between would not know which one to listen to!

Next, you learn that none of the cells in the second ring of 12 hexagons can use the same channel either. In fact, the closest that two cells that use the same channel can be is three hexagons apart. So the same channel can be used over and over again, but the cells that use it must be spread out so that they are all at least three cells apart from each other.
Finally, you find out that cells that are right next to each other cannot use channels that are too close together in frequency. When you ask why, the engineers begin enthusiastically telling you about something called "spectral spreading." You decide that it is better not to know why it is true, only that neighboring cells must use channels that are separated by some number of channels. For instance, suppose neighboring cells' channels must be separated by at least four. Then if one cell uses channel 10, all its neighbors must use channels that are at least four away; that is, none of the neighbors can use 7, 8, 9, 11, 12, or 13 (nor 10 itself). You are told that for this new region, it has not yet been determined just how far apart the channels in neighboring cells must be from each other. Perhaps it depends on the expected call volume and on atmospheric conditions in the new area. But as soon as this channel separation is determined, you will be expected to say immediately how many channels are needed and how they should be distributed among the towers.

When you return to your desk, you find a message on your chair. It seems that your boss neglected to mention an important detail: It does not matter how many channels the company actually uses; what is crucial is how big a block of consecutive channels in the frequency spectrum the company occupies. For example, if you use only channels 11 and 20, you still have to pay for the block of ten channels from 11 to 20.

This is the problem that our mathematical modeling team recently solved. We determined that if neighboring cells must be at least two channels apart, then a region of any size can be covered with a block of 9 consecutive channels. If a separation of three channels is required, any region can be covered with 12 channels. If a separation of four or more channels is required between neighboring cells, then the necessary number of channels can be found by doubling the separation and adding 7.
For example, if neighboring cells must be 5 channels apart, then 17 channels must be purchased from the FCC. We also determined that these are the best possible solutions. That is, there is no way to cover a large region with fewer channels without breaking some of the rules that the engineering department requires.

But what is interesting about our result is not the number of channels required but how the channels are distributed among the towers. An example will illustrate this point. Suppose again that neighboring cells must be five channels apart; then our model calls for 17 channels to be purchased, say channels 1 through 17. But the only channels that our solution uses are 1, 2, 3, 8, 9, 10, 15, 16, and 17! We spread out the first three channels among the cells so that none of them are adjacent to each other. This covers one-third of all the cells. We repeat this with channels 8, 9, and 10. Now two-thirds are covered. The rest are covered with 15, 16, and 17. We proved that this is the best possible solution. It is surprising that the best solution has 8 channels in the middle of its block that are not even used, but this is indeed the case. What the unused channels do is divide the remaining channels into three groups. The groups are far enough apart that any channel in one group can be surrounded by channels in the other two groups.

# Utilize the Limited Frequency Resources Efficiently

Chu Rui

Xiu Baoxin

Zong Ruidi

National University of Defence Technology

Chang Sha, Hunan, China

Advisor: Wu Meng Da

# Introduction

For Requirements A, B, and C, we find an optimal assignment. We find efficient strategies for assignment and the spans for $k = 2$ (9), $k = 3$ (12), and $k \geq 4$ ($2k + 7$) for the case of two levels of interference.

For Requirement D, checking all possible assignments is impractical. Instead, we devise a heuristic algorithm to produce a near-optimal assignment and span.
We also consider other important factors, such as cell-splitting and duopoly.

# Analysis of the Problem

We first define

- successful assignment: An assignment of channels that satisfies all constraints.
- span: The minimum, over all successful assignments, of the largest channel used.

Our goal is to find the span and a successful assignment under given constraints:

- Constraint 1: The channels of transmitters within distance $2s$ differ by at least $k$.
- Constraint 2: No transmitters within $4s$ of each other can use the same channel.

# Assumptions

- The region is partitioned into a grid of regular hexagons (called cells), and each transmitter is located at the center of a hexagon.
- The frequency spectrum is divided into regularly spaced channels represented by positive integers.
- A channel can be used by more than one transmitter, provided interference is avoided.
- Each transmitter is assigned a single channel.
- Each transmitter covers its entire cell, and the effect of landform can be ignored.

Table 1. Notation used.
| Symbol | Meaning |
|---|---|
| $s$ | length of a side of a hexagon |
| $A, B, \ldots$ | cells or transmitters |
| $a, b, \ldots$ | channels |
| $\omega_i s$ | distance constraint of the $i$th level of interference |
| $k_i$ | frequency constraint of the $i$th level of interference |
| $k$ | frequency constraint of the first level of interference in Requirement C |
| $p, q$ | shift parameters |
| $d(A, B)$ | distance between cells $A$ and $B$ |
# Model Design and Results

# Requirement A

We find an optimal result via a recursive backtracking algorithm. Our goal is to see whether we can use the integers up through $n$ to satisfy the constraints. We loop upward from $n = 1$; some value of $n$ must succeed, since in the worst case giving every cell a distinct channel, with consecutive channels spaced $k$ apart, satisfies both constraints. We order the cells and try each in turn to see if it can be numbered with an integer between 1 and $n$: if so, we proceed to the next cell; if not, we backtrack and renumber the previous cell. If all cells can be numbered, then we have a successful assignment with $n$ channels; otherwise, $n$ channels are not enough.

# Proposition 1. For Requirement $A$, the span is 9.

Proof: Every cell is adjacent to six others, and these seven cells are all within $4s$ of each other; so according to Constraint 2, they must be assigned different channels. Furthermore, if one cell has channel $m$, the six adjacent cells cannot have $m + 1$ or $m - 1$, according to Constraint 1. Hence, the span must be at least 9. Our algorithm finds a successful assignment with 9 channels (Figure 1), so 9 is the span.

![](images/9a2038dc21e02807f1aa83d8115a816b35fbf8a583a3582cda50afa0c1f1317f.jpg)
Figure 1. For $k = 2$, the span is 9.

# Requirement B

The shaded part in Figure 1 can be expanded arbitrarily far in all directions, so for Requirement B the span is still 9.

# Requirement C

For Requirement C, there are still two levels of interference, but the $k$ of Constraint 1 is allowed to vary.

Our algorithm finds the successful assignments of Figure 2 for $k = 3, 4, 5$, for which the given spans can be proved easily.

The shaded parts of Figure 2 show patterns that allow the same radio frequencies to be reused throughout. We systematize the reuse pattern. We begin with a pair of integers $p \geq q$, which we call shift parameters. In a hexagonal tiling there are six "chains" of hexagons emanating in different directions from each hexagon.
Starting at any cell, we proceed as follows: Move $p$ cells along any chain, turn CCW $60^{\circ}$, then move $q$ cells along the chain in that direction. The original cell and the cells so located in each of the six directions are co-channel cells (Figure 3).

![](images/47a70ff6334d570bae995756dbaef660fe0cd90760bf4fe9d88623d0bf0ff09c.jpg)
Figure 2. a. $k = 3$: the span is 12. b. $k = 4$: the span is 15. c. $k = 5$: the span is 17.

![](images/73f86135578e0747eb16117905b3350a1c0aab7441257ed0bb20e0d5f694cb52.jpg)
Figure 3. A cell and co-channel cells.

We repeat the procedure for a different starting cell until all cells in the region are assigned. The region can then be divided into clusters of cells, such that transmitters in the same cluster are assigned different channels. The form of the cluster is determined by the shift parameters $p$ and $q$, and it can be proved that the number of cells in a cluster is $p^2 + pq + q^2$.

In particular, we want to find a suitable cluster of cells for all $k \geq 4$. To minimize the width of the interval of the frequency spectrum, we adopt the cluster of 9 cells in Figure 2c; it can be proved that $2k + 7$ is the best result for the span. This reuse pattern is shown in Figure 4.

We define the three integer sets $N_1, N_2, N_3$:

$$
N_1 = \{1, 2, \ldots, k\}, \quad N_2 = \{k + 1, k + 2, \ldots, 2k\}, \quad N_3 = \{2k + 1, 2k + 2, \ldots, 2k + 6\}.
$$

Proposition 2. In a successful assignment with largest channel no more than $2k + 6$ ($k \geq 6$), the channels of two transmitters at a distance of $3s$ must belong to the same set $N_i$.

![](images/c92d69ca5fb6ae745d354afdcf6c68aac2182487da4409be064c051b8fe26e3b.jpg)
Figure 4. Cell reuse pattern using 9 channels.

Proof: Let $A, B$ be the transmitters, and let $C, D$ be transmitters adjacent to both (Figure 5); $a, b, c, d$ are the corresponding channels in a successful assignment.
Considering Constraint 1, $b, c, d$ must belong to three different sets $N_1, N_2, N_3$; similarly, $a, c, d$ must belong to three different sets. Hence $a, b$ must belong to the same set.

![](images/c5c5debfa9efc78ad1696033a20a6a34c5f8020d89f2076b0e6178086da1a6de.jpg)
Figure 5. Situation of Proposition 2.

Proposition 3. The span is $2k + 7$ for $k \geq 6$.

Proof: Suppose instead that there is a successful assignment with largest channel no more than $2k + 6$. Select any cell $A$ not at the edge of the given region. If its channel $a$ is in $N_2$, then the six channels of the cells adjacent to $A$ must belong to $N_1$ or $N_3$. By Proposition 2, three of the six channels belong to $N_1$, and they are different from each other; hence, we deduce that $a \geq k + 3$ by Constraint 1.

From the shaded part of Figure 6, select any cell $B$ whose channel $b$ is in $N_3$. Let the channels for the cells adjacent to $B$ be $c, d, e, f, g, h$. From Proposition 2 and Constraint 1, we may assume that $d, f, h \in N_1$ and $c, e, g \in N_2$; thus, we have

$$
\min \{c, e, g\} \geq k + 3 \Longrightarrow \max \{c, e, g\} \geq k + 5 \Longrightarrow b \geq \max \{c, e, g\} + k \geq 2k + 5.
$$

So, in $N_3$ there are only two integers ($2k + 5$ and $2k + 6$) that can be assigned to the shaded part of Figure 6, which is impossible.

Hence, the supposition that the span is no more than $2k + 6$ is not true. We already have a successful assignment with largest channel $2k + 7$, so the span is $2k + 7$.

![](images/3b7c3c5f45479990f5b57bc521194d684f5acef891ac95ece4e45d08d4f28e9b.jpg)
Figure 6. Situation of Proposition 3.

# Requirement D

# Several Levels of Interference

Our backtracking algorithm's workload increases rapidly with the number of interference levels, and we do not know what kind of clustered region could attain the span. So we turn to an approximate algorithm.
We notice from our results for Requirement C that

- There are several groups of channels; channels in the same group are close together and evenly distributed, while channels in different groups differ greatly.
- In a single cluster, the channels are all different.

Considering these observations, we guess that the distance between any two adjacent channel-reuse cells should be constant. We use this as the basis for the

# Heuristic Skip Algorithm (HSA)

Let there be $n$ levels of interference, and require the channels for transmitters within $\omega_i s$ of each other to differ by at least $k_i$. Let $\Omega$ be a "cell set" and let $\alpha \in \{1, \ldots, n\}$ be a control parameter.

Step 1. Choose the center cell $A$ of the region as the initial cell:

$$
j := \alpha, \quad l := 1.
$$

Step 2. If $j = 0$, stop;

else, pick out all cells $A_i$ that have not been numbered but satisfy

$$
\omega_{j-1} s \leq d(A_i, A) \leq \omega_j s.
$$

Step 3. If there are no such $A_i$, then set $j := j - 1$ and go to Step 2;

else, choose one cell nearest to $A$ from the $A_i$ and assign the minimal feasible channel to it. If $l = 1$, denote the selected cell by $B$ and denote the shift parameters from $A$ to $B$ by $p, q$.

Step 4. Add the selected cell to $\Omega$.

Step 5. Start with any cell in $\Omega$ as a reference, move $p$ cells along any chain of hexagons, turn CCW $60^{\circ}$, and move $q$ cells along the chain in the new direction. Assign the minimal feasible channel to this new cell and add it to $\Omega$.

Step 6. If there is no starting cell in Step 5, set $\Omega := \emptyset$ and $l := l + 1$ and return to Step 2;

else, repeat Step 5.

When the algorithm ends, every cell is numbered. The control parameter $\alpha$ determines the distance between any two adjacent channel-reuse cells, and we can execute the algorithm repeatedly with different values of $\alpha$ to get the best result.
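The core operation of the steps above is "assign the minimal feasible channel." A simplified sketch of that operation (ours, not the authors'; the skip/cluster machinery, shift parameters, and $\alpha$ are omitted, so this is a plain greedy pass rather than full HSA):

```python
# Greedy channel assignment on a hexagonal grid under several levels of
# interference: cells within distance w_i must have channels differing by at
# least k_i.  Distances are in cell units (1 corresponds to 2s).

def hex_distance(c1, c2):
    """Cell distance in axial coordinates: adjacent cells are at distance 1."""
    dq, dr = c1[0] - c2[0], c1[1] - c2[1]
    return (abs(dq) + abs(dr) + abs(dq + dr)) // 2

def greedy_assign(cells, levels):
    """levels: list of (w_i, k_i) pairs; returns {cell: channel}."""
    assignment = {}
    for cell in cells:
        channel = 1
        # Raise the channel until every interference constraint is met.
        while any(abs(channel - assignment[other]) < k
                  for w, k in levels
                  for other in assignment
                  if hex_distance(cell, other) <= w):
            channel += 1
        assignment[cell] = channel
    return assignment

# Two rings of cells around the origin, with the two-level constraints of
# Requirement A: within 2s (distance 1) differ by >= 2; within 4s (distance 2)
# differ by >= 1, i.e., no channel reuse.
cells = [(q, r) for q in range(-2, 3) for r in range(-2, 3) if abs(q + r) <= 2]
result = greedy_assign(cells, levels=[(1, 2), (2, 1)])
print("largest channel used:", max(result.values()))
```

Because the visiting order is naive, the largest channel this sketch uses may exceed the span; HSA's skip pattern and repeated runs over $\alpha$ are what push the result toward optimality.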
# Results

- 2 levels of interference (e.g., $\omega_1 = 2, \omega_2 = 4, k_1 = 5, k_2 = 1$): The largest channel assigned by HSA is 17, which agrees with the optimal result earlier.
- 3 levels of interference (e.g., $\omega_1 = 2, \omega_2 = 4, \omega_3 = 6, k_1 = 5, k_2 = 3, k_3 = 1$): The largest channel assigned by HSA is 33; the cluster pattern is shown in Figure 7a.
- 4 levels of interference (e.g., $\omega_1 = 2, \omega_2 = 3, \omega_3 = 4, \omega_4 = 6, k_1 = 4, k_2 = 3, k_3 = 2, k_4 = 1$): The largest channel assigned by HSA is 29; the cluster pattern is shown in Figure 7b.

# Irregular Transmitter Placement

Let $r$ be the largest distance between a transmitter and the center of its hexagon. For $r \leq 0.134s$ and two levels of interference, the results apply as before, and a similar analysis can be made for other cases and numbers of levels of interference.

For larger $r$, we still use HSA. But first, since the position of transmitters is irregular, some might be missed when we use HSA to assign channels. We

![](images/414543cf7851de2168ec5b06ab259d6cf530c60ce96eeb1022e2ee8123530cd3.jpg)
Figure 7. a. Assignment for 3 levels of interference. b. Assignment for 4 levels of interference.

![](images/dc457632a448a2448f92794943faabdf3e1ac0f7fae994f11af540691ca6d.jpg)

overcome this problem by improving HSA so that when it determines which transmitter is to be assigned a channel, it ignores shift parameters, but it does consider the position of the transmitter when assigning a channel.

# Result

- 2 levels of interference (e.g., $\omega_1 = 2, \omega_2 = 4, k_1 = 4, k_2 = 1$): To simulate reality, we randomly choose $80\%$ of the transmitters to move by $0.134s$ and the others to move by $0.3s$. The largest channel is 16, compared with 15 for regular placement.

# Analysis of Results

We use HSA to solve the problems under various conditions.
Attenuation of radio signals follows a lognormal distribution. Since the radius of a cell is several kilometers, we solve only the problem of $6s$ interference, which should suffice in reality.

# Two or More Levels

In Requirements A, B, and C, only two levels of interference are taken into account, with $\omega_1 = 2$ and $\omega_2 = 4$. For $k = 2$, HSA gives 11 channels, while the span is 9; for $k = 3$, HSA gives 13, while the span is 12; for $k = 4, \ldots, 10$, HSA gives the span.

For Requirement D, with three levels of interference ($\omega_1 = 2, \omega_2 = 3, \omega_3 = 4$), HSA gives a largest channel of $3k_1 + 6$, which we feel is very close to the span. Figure 8 shows the frequency reuse pattern of a cluster of 12 cells.

![](images/3f39841efc2f94f0dd0a96a58247587a3cf280328f1843f50a4f7457cc38bea2.jpg)
Figure 8. Cellular reuse pattern using a cluster of 12 channels.

Table 1 gives results for various combinations of the parameters for the case of 3 levels of interference; Table 2 gives results for 4 levels.

For three levels with $k_2$ small compared to $k_1$, and for four levels with $k_3 = 2$, the HSA results are smaller than in other cases, because the algorithm can adopt a more rational cluster structure and assign channels more economically. This fact indicates that the result of HSA is determined not by one or two parameters but by all of them, and that HSA makes full use of the information in the constraints; hence, HSA may give a comparatively good result.

# Irregular Transmitter Placement

Tables 3-4 give the results, which vary only slightly when the proportion of transmitters moved more than $0.3s$ is less than $10\%$.

Table 1. Results for 3 levels $(\omega_1 = 2, \omega_2 = 4, \omega_3 = 6)$.
| $k_1$ | $k_2$ | $k_3$ | HSA |
|---|---|---|---|
| 3 | 2 | 1 | 27 |
| 4 | 2 | 1 | 27 |
| 4 | 3 | 1 | 33 |
| 5 | 2 | 1 | 33 |
| 5 | 3 | 1 | 33 |
| 5 | 4 | 1 | 39 |
| 6 | 2 | 1 | 38 |
| 6 | 3 | 1 | 39 |
| 6 | 4 | 1 | 39 |
| 6 | 5 | 1 | 45 |
| 7 | 2 | 1 | 38 |
| 7 | 3 | 1 | 45 |
| 7 | 4 | 1 | 45 |
| 7 | 5 | 1 | 45 |
| 7 | 6 | 1 | 51 |
| 8 | 2 | 1 | 40 |
| 8 | 3 | 1 | 51 |
| 8 | 4 | 1 | 51 |
| 8 | 5 | 1 | 51 |
| 8 | 6 | 1 | 51 |
| 8 | 7 | 1 | 57 |
| 9 | 2 | 1 | 42 |
| 9 | 3 | 1 | 53 |
| 9 | 4 | 1 | 57 |
| 9 | 5 | 1 | 57 |
| 9 | 6 | 1 | 57 |
| 9 | 7 | 1 | 57 |
| 9 | 8 | 1 | 63 |
Table 2. Results for 4 levels $(\omega_{1} = 2, \omega_{2} = 3, \omega_{3} = 4, \omega_{4} = 6)$.

| $k_1$ | $k_2$ | $k_3$ | $k_4$ | HSA |
|---|---|---|---|---|
| 4 | 3 | 2 | 1 | 29 |
| 5 | 3 | 2 | 1 | 32 |
| 5 | 4 | 2 | 1 | 34 |
| 5 | 4 | 3 | 1 | 35 |
| 6 | 3 | 2 | 1 | 37 |
| 6 | 4 | 2 | 1 | 37 |
| 6 | 4 | 3 | 1 | 39 |
| 6 | 5 | 2 | 1 | 37 |
| 6 | 5 | 3 | 1 | 41 |
| 6 | 5 | 4 | 1 | 41 |
| 7 | 3 | 2 | 1 | 38 |
| 7 | 4 | 2 | 1 | 38 |
| 7 | 4 | 3 | 1 | 45 |
| 7 | 5 | 2 | 1 | 38 |
| 7 | 5 | 3 | 1 | 45 |
| 7 | 5 | 4 | 1 | 45 |
| 7 | 6 | 2 | 1 | 40 |
| 7 | 6 | 3 | 1 | 47 |
| 7 | 6 | 4 | 1 | 47 |
| 7 | 6 | 5 | 1 | 47 |
Table 3. Irregular transmitter placement, 2 levels of interference $(\omega_{1} = 2,\omega_{2} = 4)$.

| % moved > $0.3s$ | $k_1 = 4, k_2 = 1$ | $k_1 = 6, k_2 = 1$ |
|---|---|---|
| 0 | 15 | 19 |
| 5 | 15 | 19 |
| 10 | 15 | 19 |
| 20 | 16 | 27 |
| 25 | 20 | 29 |
| 30 | 21 | 33 |
| 35 | 21 | 31 |
| 40 | 21 | 31 |
| 45 | 15 | 29 |
| 50 | 21 | 26 |
Table 4. Irregular transmitter placement, 3 levels of interference $(\omega_{1} = 2,\omega_{2} = 4,\omega_{3} = 6)$.

| % moved > $0.3s$ | $k_1 = 4, k_2 = 3, k_3 = 1$ | $k_1 = 6, k_2 = 4, k_3 = 1$ |
|---|---|---|
| 0 | 33 | 50 |
| 5 | 33 | 41 |
| 10 | 33 | 42 |
| 20 | 38 | 44 |
| 25 | 38 | 46 |
| 30 | 38 | 46 |
| 35 | 34 | 43 |
| 40 | 34 | 54 |
| 45 | 36 | 51 |
| 50 | 39 | 57 |
# Further Discussion

# Cell Splitting

One advantage of cellular service is its ability to keep up with changing customer demands. If the customer base approaches full capacity, key cells can be divided into a number of smaller cells, each broadcasting at lower power, and channels can be reassigned to increase the volume of customers (Figure 9).

![](images/3d0c415440a5a795d2481768ead295691bb94dfa1cab90e7fe78878c923a11f4.jpg)
Figure 9. Cell-splitting.

# Assumptions

- The radius of a new cell is half that of an old one, and the power of the new transmitter is half that of the old one; as a result, the distance constraints between new cells become half those of the old.
- The new cells plus unsplit old ones cover the entire region.
- The number of levels of interference between any two old cells is unchanged.
- The number of levels of interference between an old cell and a new cell is the same as between the old cells.

# Strategies

- Strategy 1: Assign the new cells first, then the old ones.
- Strategy 2: Assign the old cells first, then the new ones.

# Results

We allot channels by HSA in the area with split cells shown in Figure 10. For two levels of interference $(\omega_{1} = 2, \omega_{2} = 4, k_{1} = 4, k_{2} = 1)$, Strategy 1 uses 28 channels and Strategy 2 uses 27.

![](images/c7b749d9e7ad9bce2084625f8568de2f8e03b352db1384556f3e80945beac691.jpg)
Figure 10. Test region for splitting strategies; result for Strategy 2 is shown (largest channel is 27).

# Duopoly

If there are two providers in a region, we must assign their channels at the same time to avoid cross interference between the two systems.

# Strengths

For two levels of interference, we determine the span as a function of $k$, prove optimality of the result, and give an efficient strategy for assigning channels.

The HSA algorithm is adaptable to large areas, irregular transmitter placement, and splitting cells.
It seems to give results close to the span, together with a cluster of cells that allows simple assignment of channels.

The HSA algorithm is polynomial-bounded (in class $\mathcal{P}$); for the test cases examined, it gave results (on a PC) within 5 sec.

# Weaknesses

The result of the HSA algorithm may not be optimal.

Although we conjecture that the span is $3k_{1} + 6$ for the situation with parameters $\omega_{1} = 2, \omega_{2} = 3, \omega_{3} = 4$, we cannot prove it.

# References

EL208 Radio Communications Engineering: Week 6: Introduction to Cellular Telephones. http://homepages.unl.ac.uk/~fusterg/el208/week6.htm.
Elliott, Scott D., and Daniel J. Dailey. 1995. Wireless Communications for Intelligent Transportation Systems. Boston, MA: Artech House.
Haykin, Simon. 1981. Communications Systems. New York: Wiley.
Lee, William C.Y. 1982. Mobile Communications Engineering. New York: McGraw-Hill.

# Grovin' with the Big Band(width)

Daniel J. Durand

Jacob M. Kline

Kevin M. Woods

Wake Forest University

Winston-Salem, NC

Advisor: Edward Allen

# Introduction

We have a planar surface divided into hexagonal cells with sides of length $s$. In Figure 1, all of the "first concentric" (striped) cells lie within $2s$ of the central cell, whereas all of the "second concentric" cells (dotted plus striped) lie within $4s$ of the central cell.

![](images/7173f422026d2fb5839736f40fa4e427c4ce6ea23f39da7cd41c08093f007a5a.jpg)
Figure 1. Diagram of interference regions.

# The First Case

We analyze the case where transmitters within $4s$ must differ by at least 1 channel and those within $2s$ must differ by at least 2 channels. We show that the span of the network is 9 for both the finite grid in the problem statement and for the infinite plane; the answers to Requirements A and B are identical.
We begin by considering the "first concentric," shown in Figure 1 by the central cell and the ring of striped cells surrounding it. Since any cell of the first concentric is within $4s$ of all the others, each cell in it must be assigned a distinct integer; so the span cannot be 7 or less.

A more careful examination reveals that the span cannot be 8. Suppose that it were. Considering three adjacent hexagons that share a common vertex, we find that only one cell can be assigned a 1 and only one cell can be assigned an 8. Thus, the remaining cell must be assigned some number $n$ between 2 and 7. Consider this last cell as the center of a first concentric. Then the $2s$ constraint dictates that the ring of 6 cells surrounding it cannot be assigned numbers $n - 1$, $n$, or $n + 1$. Their assignments must also be distinct from one another, since all cells within the first concentric are within $4s$ of each other. To make these cell assignments, we need six integers other than $n - 1$, $n$, and $n + 1$, or at least 9 numbers altogether; so the span must be at least 9.

Figure 2 shows a solution with 9, so the span is 9. The central column in gray is the sequence $1,3,5,7,9,2,4,6,8$ repeated over and over. The column to the right of it (dotted) is the same sequence but shifted down 3 cells; the striped column to the left of center is the same sequence shifted up 3 cells. Repeat this process of shifting up or down indefinitely to the left and right. Look at each 1 in the pattern (in black). The column to the left of each 1 is always shifted up by 3, and the column to the right is always shifted down by 3. Therefore, each 1 must have the same neighbors. The cells within $2s$ of each 1 differ from it by at least 2, and those within $4s$ by at least 1; so the pattern meets the constraints. Checking the neighbors of the other numbers 2 through 9 shows that they meet the constraints also.
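This neighbor check can be mechanized. The sketch below is our own encoding of the column pattern in axial hex coordinates (the shift direction may be mirrored relative to Figure 2); it brute-forces both constraints over a finite patch:

```python
# Verify the 9-channel repeating-column pattern on a patch of hexagons.
# Coordinates are axial (q, r); this coordinatization is our own and may be
# a mirror image of the figure, but it uses the same repeating sequence.
SEQ = [1, 3, 5, 7, 9, 2, 4, 6, 8]

def channel(q, r):
    # Each column q repeats SEQ; moving one column over shifts it by 3 cells.
    return SEQ[(r + 3 * q) % 9]

def hex_dist(q1, r1, q2, r2):
    # Graph distance between hexagons in axial coordinates.
    dq, dr = q1 - q2, r1 - r2
    return (abs(dq) + abs(dr) + abs(dq + dr)) // 2

cells = [(q, r) for q in range(-6, 7) for r in range(-6, 7)]
for a in cells:
    for b in cells:
        if a == b:
            continue
        d = hex_dist(*a, *b)
        diff = abs(channel(*a) - channel(*b))
        if d == 1:
            assert diff >= 2   # within 2s: channels differ by at least 2
        elif d == 2:
            assert diff >= 1   # within 4s: channels must be distinct

print("valid; channels used:", sorted({channel(q, r) for q, r in cells}))
```

The assertions pass, and the patch uses exactly the channels 1 through 9, confirming that 9 channels suffice.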
This pattern can fill the grid supplied in the problem, or it can be extended arbitrarily far left and right and also up and down to cover the plane. This pattern is unique up to rotations and reflections. [EDITOR'S NOTE: We omit the proof of this fact.]

![](images/85a4d665917e391f548752ce90c3538abcae2addef0e9f86d30397089ba1e1f8.jpg)
Figure 2. Solution with 9 channels.

# Generalization: Differing $k$

In this section, we maintain the constraint that transmitters within a distance $4s$ of one another cannot use the same channel but generalize the second constraint, so that transmitters within a distance $2s$ of one another must have channels whose assignment numbers differ by at least $k$. The previous section treated the case $k = 2$.

We show that for $k = 1$, the span is 7; for $k = 3$, the span is 12; and for $k > 3$, the span is $2k + 7$.

First, we show that for all $k$, $2k + 5$ is a lower bound for the span. Suppose that we have a channel configuration that uses only 1 through $2k + 4$; this will lead to a contradiction. Let $A = \{1, 2, \ldots, k\}$ and $B = \{k + 5, k + 6, \ldots, 2k + 4\}$. All numbers in $A$ are within $k$ of each other, as are all numbers in $B$. Consider three adjacent hexagons that share a common vertex. At most one of these three can be assigned an element of $A$ and at most one can be assigned an element of $B$, so the third must be assigned some channel $n$ between $k + 1$ and $k + 4$. Consider a first concentric in which the central cell has been assigned this integer $n$. The $2s$ constraint dictates that the 6 adjoining cells cannot be assigned numbers $n - k + 1, n - k + 2, \ldots, n + k - 2, n + k - 1$. Their assignments must also be distinct from one another, since all cells within the first concentric are within $4s$ of each other. To make these cell assignments, we need six integers other than $n - k + 1$ through $n + k - 1$. This means that we need $6 + (2k - 1) = 2k + 5$ integers.
Therefore, we cannot make proper channel assignments using only the integers 1 through $2k + 4$.

# $k = 1$

When $k = 1$, the $2s$ constraint is subordinate to the $4s$ constraint. The span must be at least $2k + 5 = 7$. In fact, we can complete the grid using a span of exactly 7. As in Figure 2, the central column is a sequence of numbers repeated over and over, in this case the sequence 1, 2, 3, 4, 5, 6, 7. Also as in Figure 2, the adjacent column on the right contains the same sequence shifted down 3 cells, and the adjacent column on the left contains the same sequence shifted up 3 cells. For example, the 1 in the column to the right is between the 3 and 4 of the central column. As in the $k = 2$ case, every occurrence of each integer has identical neighbors. Using this pattern, we can construct a satisfactory network. Moreover, since we have proved that the span must be greater than 6, our construction demonstrates that the span is exactly 7.

# $k = 3$

We show that no assignment exists that uses only 1 through 11 and provide an example that works for 12, thereby demonstrating that the span is 12.

# Assertion A: The span must be greater than 11.

Proof of A: By contradiction. Suppose that the span is 11. We show that several channel numbers cannot appear and use these facts for our final contradiction.

Case A1: Suppose that some transmitter is assigned channel 3. Consider a first concentric about a central cell assigned channel 3. No transmitters in the first concentric can use channels 1, 2, 3, 4, or 5, because they are all within $2s$ of the center transmitter operating on 3. We are left with six viable channels, 6, 7, 8, 9, 10, and 11, all of which must be used to provide distinct assignments to the cells surrounding the center cell. Clearly, channel 8 must be used somewhere in the first concentric.
We must then use two of the five remaining channels (6, 7, 9, 10, 11) in the two cells of the first concentric lying to either side of 8. However, this is not possible: placing 6, 7, 9, or 10 in either of these cells would violate the $2s$ requirement (each differs from 8 by less than 3), and 11 can fill only one of the two cells. It follows that no transmitter can be assigned channel 3.

Case A2: Suppose that some transmitter is assigned channel 9. Since channels $n$ and $m$ within a distance $2s$ of each other must differ by at least $k$, we have that $|n - m| \geq k$. If we flip all channel numbers $m$ to $(\mathrm{span} + 1 - m)$, then $|(\mathrm{span} + 1 - n) - (\mathrm{span} + 1 - m)| = |m - n| \geq k$. Thus, the set of new channels functions identically under the $2s$ and $4s$ constraints. The channel numbers remain between $12 - 11 = 1$ and $12 - 1 = 11$, so we have a correct channel assignment, and the span remains 11. So if some transmitter is assigned channel 9, a flip produces a configuration with a channel $12 - 9 = 3$, which Case A1 shows is impossible. Therefore, no transmitter can be assigned channel 9.

Case A3: Assume that some transmitter is assigned channel 10. Consider the first concentric around a channel 10 (in gray), as shown in Figure 3.

![](images/e62089a94cb2ed6a0657a30cb2daefe139b01efe14a18e5f0728ba8e4ebf4dae.jpg)
Figure 3. Case A3.

No transmitters in these cells can use the channels 8, 9, 10, 11, since the cells are within $2s$ of the center transmitter operating on 10; and none can be assigned channel 3, as we showed in Case A1. We are left with six usable channels, 1, 2, 4, 5, 6, 7, all of which we must use, since six distinct channels are required to fill the concentric. Channel 4 must be assigned to one of the cells, as in the dotted cell in Figure 3, and the striped cells neighboring it must contain channels 1 and 7.
The $2s$ constraint requires that the cell with channel 5 can be adjacent only to the cells using channels 1 and 2, so the 5 and 2 must be added as shown in the figure (in black). However, we cannot assign channel 6 to the remaining cell, because that would violate the $2s$ requirement (since $7 - 6 = 1 < k$). It follows that no transmitter may be assigned channel 10.

We interrupt the flow of the argument to establish a claim that we need.

Claim: Any network of transmitters can be renumbered so that some transmitter operates on channel 1.

Proof: Suppose that there is a set of channels where no transmitter operates on channel 1. Let $a$ be the smallest channel. As in Case A2, we renumber every channel $m$, this time as $m - a + 1$. This new numbering preserves differences between channel assignments, so it still satisfies the difference constraints. Moreover, it contains channel 1, and all numbers in it are positive integers.

So we can assume that some transmitter is assigned channel 1. Consider the first concentric around this transmitter. No transmitters in these cells can use channels 1, 2, or 3, because the cells are within $2s$ of the transmitter operating on 1, and none can be assigned channels 9 or 10, as we showed in Cases A2 and A3. We are left with six usable channels, 4, 5, 6, 7, 8, 11, all of which we must use, since six distinct channels are required to fill the first concentric. Channel 6 must be assigned to one of the cells, but there are not two numbers remaining in the list (4, 5, 7, 8, 11) that differ from 6 by at least $k = 3$. Therefore, it is impossible to complete the concentric in a way that satisfies the $2s$ constraint. This contradicts our supposition that we could assign channels using numbers between 1 and 11. Hence, when $k = 3$, the span must be greater than 11. $\square$

Assertion B: For $k = 3$, the span is 12.

Proof B: The span is at least 12; we construct a solution that realizes 12.
As was the case for $k = 1$ and $k = 2$, there exists a sequence of integers that, when applied in a series of adjacent, offset columns, produces a satisfactory network on an infinite plane (as in Figure 2 for $k = 2$). The central column for $k = 3$ is the sequence 1, 8, 3, 10, 5, 12, 7, 2, 9, 4, 11, 6 repeated over and over. The adjacent column to one side is the same sequence shifted down 4, and the adjacent column to the other side is the sequence shifted up 4. As before, this pattern can be repeated indefinitely, and since the neighborhood of each number is exactly the same, the conditions are met. Therefore, for $k = 3$, the span is 12.

# $k > 3$

We prove that for $k > 3$, the span is $2k + 7$. We must first prove that no assignment with the channels 1 through $2k + 6$ satisfies the constraints. This proof requires a detailed analysis. [EDITOR'S NOTE: We omit the details of this analysis.] We must also show that there is a configuration of channels with span $2k + 7$ satisfying the constraints; Figure 4 shows our solution. The same rhombus pattern is repeated over and over, tiling the plane (for example, the dotted, striped, and gray parallelograms are all identical copies). This rhombus consists of the numbers 1, 2, 3, $k + 3$, $k + 4$, $k + 5$, $2k + 5$, $2k + 6$, and $2k + 7$. As before, the neighborhood of each 1 is identical, and we can see that it satisfies the constraints, as do the neighborhoods of the other 8 cells. Therefore, $2k + 7$ is the span for all $k > 3$.

![](images/177c8379459a525b80009369a7403ebdf7aa15d6f84f3705fedbfa0ebb134a0e.jpg)
Figure 4. Configuration that tiles the plane to show that $2k + 7$ channels will work.

# More Generalizations

We generalize the $4s$ constraint to: Transmitters within $4s$ of one another must have channels that differ by at least $m$, where $m \leq k$. While we do not determine the span in this general case, we deduce bounds on it: it must lie between $1 + 2k + 4m$ and $1 + 2k + 6m$.
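Both constructions above (the twelve-channel column sequence for $k = 3$ and the nine-channel rhombus for $k > 3$) can be checked mechanically. In this sketch the axial-coordinate placements are our own and may be rotated or mirrored relative to the paper's figures, but they use the same channel sets:

```python
# Check of the k = 3 column sequence and the k > 3 rhombus tiling.
SEQ12 = [1, 8, 3, 10, 5, 12, 7, 2, 9, 4, 11, 6]

def hex_dist(q1, r1, q2, r2):
    # Graph distance between hexagons in axial coordinates (q, r).
    dq, dr = q1 - q2, r1 - r2
    return (abs(dq) + abs(dr) + abs(dq + dr)) // 2

def check(channel, k, max_chan):
    cells = [(q, r) for q in range(-8, 9) for r in range(-8, 9)]
    for (q1, r1) in cells:
        for (q2, r2) in cells:
            if (q1, r1) == (q2, r2):
                continue
            d = hex_dist(q1, r1, q2, r2)
            diff = abs(channel(q1, r1) - channel(q2, r2))
            if d == 1:
                assert diff >= k    # within 2s: differ by at least k
            elif d == 2:
                assert diff >= 1    # within 4s: must be distinct
    assert max(channel(q, r) for q, r in cells) == max_chan

# k = 3: the twelve-channel repeating-column pattern; span 12.
check(lambda q, r: SEQ12[(r + 4 * q) % 12], 3, 12)

# k > 3: the nine channels {1,2,3, k+3,k+4,k+5, 2k+5,2k+6,2k+7}; span 2k+7.
for k in range(4, 9):
    base = [0, k + 2, 2 * k + 4]
    check(lambda q, r, k=k, base=base: base[(q - r) % 3] + q % 3 + 1, k, 2 * k + 7)

print("k = 3 and k > 3 constructions verified")
```

The rhombus is encoded here as a pattern with period 3 in both axial directions, which is one way to realize a nine-hexagon rhombus tiling the plane.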
First, suppose that we can make correct assignments using only channels $1, \ldots, 2k + 4m$. Let $A = \{1, 2, \ldots, k\}$ and $B = \{k + 4m + 1, k + 4m + 2, \ldots, 2k + 4m\}$. All numbers in $A$ are within $k$ of each other, as are all numbers in $B$. Consider three cells that share a common vertex. At most one of these three can be assigned an element of $A$, and at most one can be assigned an element of $B$; so the third must be assigned a channel $n$ between $k + 1$ and $k + 4m$. Consider the first concentric about this central cell with channel $n$. We need 7 numbers to make enough assignments to fill this first concentric (including the central cell, $n$). We label these in increasing order: $x_1 < \cdots < x_7$.

Case 1: $n$ is one of $x_{2}, x_{3}, x_{4}, x_{5}, x_{6}$. Since all of these transmitters are within $4s$ of each other, each of the gaps between $x_{1}$ and $x_{2}$, between $x_{2}$ and $x_{3}$, and so on must contain at least $m - 1$ numbers; two of these six gaps (the two around $n$) must contain at least $k - 1$ numbers. Summing up the seven channels in the first concentric and the channels in the gaps, we need $7 + 4(m - 1) + 2(k - 1) = 1 + 2k + 4m$ channels, which contradicts our earlier assumption that we could make the assignments using only $2k + 4m$ channels.

Case 2: $n = x_{1}$. This means that $n$ is the smallest of the numbers. We still have one gap with at least $k - 1$ numbers (between $n$ and $x_{2}$), and the rest contain at least $m - 1$ each. Furthermore, since $n$ is chosen so that it is at least $k + 1$, there are at least $k$ channels below it. Therefore, we need

$$
k + 7 + (k - 1) + 5(m - 1) = 1 + 2k + 5m > 2k + 4m
$$

channels, which contradicts our assumption.

Case 3: $n = x_7$. This is the same as $n = x_1$, except that $n$ was chosen to be at most $k + 4m$. Therefore, we need at least a span of $1 + 2k + 4m$ channels to make correct assignments.
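The counting in Cases 1-3 can also be confirmed numerically for small parameters. The sketch below (our own, assuming $m \leq k$ as above) checks that if only channels $1, \ldots, 2k + 4m$ existed, no admissible center channel $n$ would leave room for six ring channels:

```python
# Numerical confirmation of the lower-bound counting: the six ring cells of a
# first concentric need channels in 1..2k+4m that are pairwise at least m
# apart and each at least k away from the center channel n, for some n in
# [k+1, k+4m]. A greedy scan (take the smallest usable channel, then the next
# one at least m higher) maximizes how many such channels fit.
def max_ring_channels(k, m, n):
    """Most channels in 1..2k+4m that are >= k from n and pairwise >= m apart."""
    top = 2 * k + 4 * m
    count, last = 0, None
    for c in range(1, top + 1):
        if abs(c - n) >= k and (last is None or c - last >= m):
            count, last = count + 1, c
    return count

for k in range(1, 8):
    for m in range(1, k + 1):          # the analysis assumes m <= k
        worst = max(max_ring_channels(k, m, n)
                    for n in range(k + 1, k + 4 * m + 1))
        assert worst <= 5, (k, m)      # never room for six ring channels

print("at most 5 ring channels fit in 1..2k+4m, so the span exceeds 2k+4m")
```

The greedy scan is optimal here by the usual exchange argument for picking the maximum number of points with a minimum spacing, so the assertion is a genuine check of the bound for these parameter values.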
![](images/cb9106d85f7b003b3448ca9671e027460de869f1775a5748f40bcdf08fb17f72.jpg)
Figure 5. A rhombus that tiles the plane and uses only integers between 1 and $1 + 2k + 6m$.

A generalized network using only the integers between 1 and $1 + 2k + 6m$ is shown in Figure 5. The situation is analogous to the $k > 3$ case discussed earlier. This time, we have a rhombus that tiles the plane, with channel assignments $1, 1 + m, 1 + 2m, 1 + k + 2m, 1 + k + 3m, 1 + k + 4m, 1 + 2k + 4m, 1 + 2k + 5m,$ and $1 + 2k + 6m$. As before, we can check the neighborhood of each channel to make sure that it satisfies the constraints across all values of $m$ and $k$.

To assess whether $1 + 2k + 6m$ is a good upper bound for the span, consider how much smaller the span could be. There is a lower bound of $1 + 2k + 4m$, so our upper bound is not more than $2m$ greater than the span. Furthermore, for $m = 1$, we have $1 + 2k + 6m = 2k + 7$, which is exactly the span for $k > 3$.

Most important, the pattern we offer in this section provides a surprisingly efficient way to generate assignments for a grid of any size, based on $k$ and $m$: One need simply construct a rhombus of nine hexagons and tile the grid. In summary, though we have not proven that the span is $1 + 2k + 6m$, that expression appears to be a close approximation.

# More Layers of Interference

We consider what happens if there are three levels of interference. We construct a method for deriving assignments that satisfy all the conditions:

- $2sk$ constraint: Channel assignments for transmitters within a distance $2s$ of each other must differ by $k$.
- $4sm$ constraint: As above, but within a distance $4s$ they must differ by $m$.
- $6sn$ constraint: As above, but within a distance $6s$ they must differ by $n$.

We require that $n \leq m \leq k$.

We build up this assignment from a 2-level interference assignment.
Figure 6 shows a triangular lattice that results from drawing lines between centers of adjacent hexagons, while Figure 7 shows a triangular lattice that connects only some of the cells. Figure 7 looks identical to Figure 6 but on a larger scale. In Figure 7, the dotted cells are within $4s$ of the central gray cell but more than $2s$ away, while the striped cells are within $6s$ of the central cell but more than $4s$ away.

![](images/6fa98f872097b45c4b967d0a09b3ae5e07a6301497b66ee83471091551de1eb9.jpg)
Figure 6. Triangular lattice formed by drawing lines between centers of adjacent hexagons.

![](images/e31c4ee828bfc3392003cc967b5bbac7954a8fa155669f87f65d3fe927117928.jpg)
Figure 7. Triangular lattice that connects only some of the cells.

Suppose that an assignment satisfies both the $2sm$ and the $4sn$ constraints. If we assign these channels to the vertices of the lattice of Figure 7, they will meet the $4sm$ and $6sn$ constraints. Figure 8 shows how we can overlap three lattices (light gray, dark gray, and black) such that every hexagon is centered on a vertex of one of the three lattices.

![](images/4d61db3085008290ef3720284c2e8f0f9e1dedf3ced91e94d4577f3479fc82d1.jpg)
Figure 8. How the three lattices can be overlapped so that every hexagon is centered on a vertex of one of the three lattices.

We give the details. Suppose that the assignments on the light gray lattice use 1 through $L$ and satisfy the $2s$ and $4s$ constraints. We label the cells on the dark gray lattice with $k + L$ to $k + 2L - 1$ (by simply adding $k + L - 1$ to each channel, following the same assignment). We label the cells on the black lattice with $2k + 2L - 1$ to $2k + 3L - 2$ (by adding $2k + 2L - 2$ to each channel).

Cells on different lattices have channels at least $k$ apart. Cells on the same lattice are more than $2s$ apart; if they are less than $4s$ apart, their channels differ by at least $m$; and if they are less than $6s$ apart, their channels differ by at least $n$.
Thus, the assignment meets all the constraints, with maximum channel $2k + 3L - 2$.

Figure 9 gives a practical example of this method. Suppose that we seek a configuration for which channel assignments for transmitters within $2s$ of one another must differ by at least 3, those within $4s$ of one another must differ by at least 2, and those within $6s$ of one another must differ by at least 1. We use the configuration derived earlier (using the $2s2$ and $4s1$ constraints), which uses 9 integers. The gray vertices in Figure 9 use the integers 1 through 9, the dotted vertices use 12 through 20, and the striped vertices use 23 through 31.

![](images/7e62429b6203cf4a849550ea0fda6d5cc5f838717faeb4ee378d077e1b09829a.jpg)
Figure 9. Example of labeling constructed to satisfy prescribed constraints.

We can apply this process to get an assignment configuration that satisfies the $2sk$, $4sm$, and $6sn$ constraints, for arbitrary $k$, $m$, and $n$. We have already shown that there exists an assignment configuration satisfying the $2sm$ and $4sn$ requirements whose maximum integer is $1 + 2m + 6n$. Using the above method, we obtain a configuration that satisfies the $2sk$, $4sm$, and $6sn$ requirements, and its largest integer (substituting $1 + 2m + 6n$ for $L$) is $2k + 3(1 + 2m + 6n) - 2 = 1 + 2k + 6m + 18n$.

Does this method produce efficient configurations? That is, is the maximum integer that it obtains close to the actual span? While we have no proof, we suggest why it is an efficient method. We used the method to move from 2-layer interference to 3-layer interference, but we could have used it to move from 1 layer to 2 layers. So let's use this method to generate an assignment configuration with $2sk$ and $4s1$ constraints. We begin by finding the span when the only constraint is the $2s1$ constraint (i.e., that adjacent cells must have different channels).
This is clearly accomplished by the sequence 1, 2, 3 repeated in a central column, shifted down two in the adjacent column to the right, and shifted up two in the adjacent column to the left.

If we use our method to construct a configuration with $2sk$ and $4s1$ constraints, its maximum channel assignment (substituting 3 for $L$) would be $2k + 3 \times 3 - 2 = 2k + 7$. This is the span for $k > 3$. Therefore, our method generates a 2-layer interference assignment from a 1-layer one efficiently. It is reasonable, therefore, to expect that it also generates 3 layers from 2 fairly efficiently.

It is possible to expand to even higher layers of interference using our method. For example, in Figure 10, the striped dots are all between $6s$ and $9s$ from the gray cell. A 3-layer interference assignment configuration on the lattice of gray, dotted, and striped cells can produce an assignment configuration on the whole grid, with constraints for $2s$, $4s$, $6s$, and $9s$.

![](images/3f3e9a6296809be419b743557efcf596f944c9190a72b8800e6e173324e0f016.jpg)
Figure 10. Example of labeling satisfying higher layers of interference.

# Students Clamor for Bandwidth Optimization

WINSTON-SALEM, Feb. 30 — A team of three college students testified today before the Congressional Subcommittee on Bandwidth Regulation, revealing key research findings that may unleash a flood of proposed legislation designed to boost the efficiency of the information economy.

Several months ago, the Subcommittee issued a challenge to the world's mathematicians to find a method by which the United States can conserve its bandwidth efficiently. Yesterday, the three students, all from Wake Forest University here in Winston-Salem, stunned the world with their solution to how to assign radio channel frequencies. They found patterns of frequency assignments that maximize efficiency while maintaining the quality of the signals.
The new discovery may be a breakthrough for frequency assignments for TV, radio, wireless modems, and cellular phones.

Furthermore, the students devised methods that could help the government determine the optimum number of channels for a given area, based on the likelihood of interference between channels. These models may have far-reaching implications, possibly affecting how the Federal Communications Commission assigns radio channels in the future.

Since its inception, the Congressional Subcommittee on Bandwidth Regulation has sought to make sure that all available parts of the spectrum are conserved for government, commercial, and private use. "We minimize interference by careful regulation of station licensing," commented Subcommittee Chairwoman Jane Doe (D-NY). "And this new discovery will help. Simply put, if we don't waste what we have, there will be more left over to sell, which could mean lower taxes."

In spite of the potential benefits of the discovery, there was some dissension about implementing policies based on it. "While I agree that the patterns that these kids have generated are quite beautiful from a mathematical standpoint," commented Sen. Laissez Fair (R-TX), "government regulation is not necessary in an industry that tends to regulate itself. After all, radio stations tend to space themselves out naturally."

Others at the hearing disagreed with Mr. Fair, noting that all popular stations seemed to have converged inexplicably to the high side of the FM band, leaving large portions of the FM spectrum unused ("except for that useless government-welfare National Public Radio down at the bottom of the band somewhere," retorted Mr. Fair). Ms. Doe responded that "The economic consequences are important. We are certainly going to recommend legislation to optimize channel assignments on portions of the bandwidth that are already in use. And future assignments should follow the patterns that these bright young mathematicians have discovered."
However, industry sources signaled that they are vehemently opposed to radio and TV stations (including satellite TV) being forced to change frequencies to comply with any new efficiency standards. To this industry reaction, one of the student researchers commented, "Our goal isn't to force our model of efficiency on a market that has been functioning for decades. We're just trying to help the government plan for future expansion."

The Subcommittee on Bandwidth Regulation was initially formed in response to the public's concerns surrounding the notorious HDTV "bandwidth heist," which became a popular issue for the Bob Dole campaign in the 1996 presidential election. With widespread support from his party, Dole promised to auction off the new HDTV broadcast spectrum rather than give it away to TV networks interested in converting to HDTV. Sen. John McCain (R-AZ), a front-runner in this year's Republican presidential primary and Chairman of the Senate Commerce, Science and Transportation Committee, has estimated that such an auction would bring in over $70 billion that could help to reduce taxes. "After all, the public airwaves are owned by the American people and managed by our government," said a McCain spokesperson.

# Radio Channel Assignments

Justin Goodwin

Dan Johnston

Adam Marcus

Washington University

St. Louis, MO

Advisor: Hiro Mukai

# Summary

We use mainly combinatorial methods to estimate and prove bounds for various cases, concentrating on two levels of interference. We use the concept of a span, the minimum largest channel among assignments that satisfy the constraints.

For Requirements A and B, the span is 9. For Requirement C, the span is 7 when $k = 1$, 9 when $k = 2$, 12 when $k = 3$, and $2k + 7$ for $k \geq 4$.

For Requirement D, we present the results in a table (Table 2). Some of our results improve on upper bounds in Shepherd [1998].
Only regular transmitter placement needs to be considered; irregular placement can be accommodated by making the hexagons so small that the transmitters are in regular placement, with the bounds adapted correspondingly.

For Requirement E, we discuss both the limitations of our model and its ability to produce an upper bound for any situation.

# Definitions

- Let $s$ denote the length of a side of a hexagon. Then the distance from the center of one hexagon to the center of an adjacent hexagon is $s\sqrt{3}$.
- A region is a collection of hexagons, finite or otherwise.
- For $u$ and $v$ hexagons in a region $\mathcal{X}$, let $D(u, v)$ be the minimum number of hexagons (including the first but not including the last) that one must pass through to move from $u$ to $v$ in region $\mathcal{X}$. Set $D(u, u) = 0$. So, for example, the stipulation in the problem that any two different transmitters (in hexagons $u$ and $v$) are within distance $2s$ is equivalent to $D(u, v) \leq 1$. Similarly, two hexagons are within distance $4s$ if and only if $D(u, v) \leq 2$.
- Let $T$ be the portion of a plane that includes a hexagon $u$ along with all hexagons $v$ such that $D(u, v) \leq 3$ (Figure 1).

![](images/8df0b08aaa6b62e72a6419ac02beaa8eb695df0ca498686988ef4ae9665dca61.jpg)
Figure 1. Region $T$.

- Let $R$ be any planar hexagonal grid that contains $T$.
- Let $k_{i}$ be the minimum allowed difference in channels of two hexagons $u$ and $v$ in a region $R$ that have $D(u, v) = i$. For example, if $k_{1} = 2$ and $k_{2} = 1$, then transmitters in hexagons $u$ and $v$ that are adjacent must have channels that differ by at least 2. If the transmitters in hexagons $u$ and $v$ are two hexagons apart (i.e., $D(u, v) = 2$), then their channels must not be the same.
- Let $C$ be a function from the hexagons in a region $R$ to the positive integers.
Given a set of constraints, call $C$ a channel assignment to $R$ under those constraints if $C$ maps the hexagons to an allowed set of frequencies.
- The width of the interval of the frequency spectrum in region $R$ is the largest channel used. The minimum width over all channel assignments of a region $R$ is the span.
- Let the function $S(l_{1}, l_{2}, \ldots, l_{n})$ of a region $R$ be the span under the restrictions that $k_{i} = l_{i}$ for all $i$ from 1 to $n$.
- For a given $k_{1} \geq 4$, define the set

$$
\mathcal{N}_k = \{1, 2, 3, k + 3, k + 4, k + 5, 2k + 5, 2k + 6, 2k + 7\}
$$

as the channel assignment set. That is, for a region $R$, $C(R) \subseteq \mathcal{N}_k$.

# Solution

We are concerned with planar regions that either extend infinitely in every direction or are finite. First, we prove some general results.

Lemma 1. Let $M$ be any positive integer. If $S(k_{1}, k_{2}, \ldots, k_{n}) = L$, then

$$
S\left(k_{1}, k_{2}, \ldots, k_{i-1}, k_{i} + M, k_{i+1}, \ldots, k_{n}\right) \leq L + M\left(\left\lceil \frac{L}{k_{i}} \right\rceil - 1\right).
$$

Proof: Let $C_1$ be an assignment of channels on the region $R$ with span $L$ and satisfying the given constraints. We construct an assignment that satisfies the new constraints, with the desired largest channel. Define a new channel arrangement $C_2$ as follows:

$$
C_{2}(u) = C_{1}(u) + M\left(\left\lceil \frac{C_{1}(u)}{k_{i}} \right\rceil - 1\right).
$$

To see that the new constraints are satisfied, notice that

$$
\left| C_{2}(u) - C_{2}(v) \right| \geq \left| C_{1}(u) - C_{1}(v) \right|;
$$

so all the constraints for $k_{j}, j \neq i$ are still satisfied. Furthermore, if

$$
\left| C_{1}(u) - C_{1}(v) \right| \geq k_{i},
$$

then

$$
\left| C_{2}(u) - C_{2}(v) \right| \geq \left| C_{1}(u) - C_{1}(v) \right| + M.
$$

This is because if $|C_1(u) - C_1(v)| \geq k_i$, then

$$
\left\lceil \frac{C_{1}(u)}{k_{i}} \right\rceil \neq \left\lceil \frac{C_{1}(v)}{k_{i}} \right\rceil,
$$

so the two ceilings differ by at least 1. This demonstrates that the constraint for the new value of $k_{i}$ is satisfied. Moreover, every channel used is of the form

$$
C_{1}(u) + M\left(\left\lceil \frac{C_{1}(u)}{k_{i}} \right\rceil - 1\right) \leq L + M\left(\left\lceil \frac{L}{k_{i}} \right\rceil - 1\right).
$$

Therefore, the channel assignment that we have constructed is valid, and we have shown the desired bound:

$$
S\left(k_{1}, k_{2}, \ldots, k_{i-1}, k_{i} + M, k_{i+1}, \ldots, k_{n}\right) \leq L + M\left(\left\lceil \frac{L}{k_{i}} \right\rceil - 1\right).
$$

Lemma 2. On any region $R$ containing $T$, $S(4,1) > 14$.

[EDITOR'S NOTE: We omit the proof.]

Lemma 3. On any region $R$ containing $T$, $S(3,1) > 11$ and $S(2,1) > 8$.

Proof: If $S(3,1) = L \leq 11$, then by Lemma 1 we know that

$$
S(4, 1) \leq L + \left\lceil \frac{L}{3} \right\rceil - 1 \leq 11 + \left\lceil \frac{11}{3} \right\rceil - 1 = 14,
$$

contradicting Lemma 2. Similarly, if $S(2,1) = L \leq 8$, then by Lemma 1 we have

$$
S(3, 1) \leq L + \left\lceil \frac{L}{2} \right\rceil - 1 \leq 8 + \left\lceil \frac{8}{2} \right\rceil - 1 = 11,
$$

which violates what we just proved.

Lemma 4. If $l > 4$, then for region $T$, $S(l,1) > 2l + 6$.

[EDITOR'S NOTE: We omit the proof.]

The proof works for any region, finite or infinite.

# $k_1 = k$ and $k_2 = 1$

For any two hexagons $u$ and $v$, if $D(u, v) = 1$, then their channels differ by at least $k$ ($k_1 = k$) for any positive integer $k$, and $k_2 = 1$. With this generalization, we would like to see how the span relates to $k_1$.

Lemma 5. For $k_{1} \geq 4$, a width of the interval of the frequency spectrum in region $R$ of at most $2k_{1} + 7$ suffices.
Proof: By induction, using the channel assignment set $\mathcal{N}_k$ defined above. First we show that Lemma 5 holds for $k_{1} = 4$. If $k_{1} = 4$, then for all $u, v$ such that $D(u, v) = 1$ we have $|C(u) - C(v)| \geq 4$, by definition. By Lemma 2, we have $S(4, 1) > 14$. To see that for $k_{1} = 4$ a frequency width of $2(4) + 7 = 15$ suffices, use the channel assignment set $\mathcal{N}_{4} = \{1, 2, 3, 7, 8, 9, 13, 14, 15\}$. As shown in Figure 2, this channel assignment set satisfies the constraints.

Also, the channel assignments tessellate, and the resulting pattern always meets the constraints. To see this, translate the channel assignments from $A$ to $B$. After translation, we have a repeated pattern with no gaps, and the constraints still hold. Now, instead translate from $A$ to $C$; again we have a repeated pattern with no gaps while keeping all constraints. Since these are the only two possible kinds of translation, the pattern is a tessellation. Since the maximum channel assigned is 15, the width of the frequency spectrum is $15 = 2(4) + 7$.

Next, let $k$ be any integer with $k \geq 4$, and assume that Lemma 5 holds for $k_{1} = k$ with channel assignment set $\mathcal{N}_k$. This generates a tessellation as illustrated in Figure 3; it is easy to see that this pattern tessellates and meets the constraints.

Now we prove that Lemma 5 holds for $k_{1} = k + 1$. To do so, we generate a tessellation pattern from $\mathcal{N}_{k + 1}$ in region $R$ that satisfies the constraints: in Figure 3, we replace each $k$ with $k + 1$. The result is the tessellation in Figure 4, which meets all the constraints. The maximum frequency used is $2k + 9 = 2(k + 1) + 7$; that is, for $k + 1$ the width of the interval of the frequency spectrum is $2(k + 1) + 7$. This completes the inductive step and proves the lemma.
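The arithmetic structure of $\mathcal{N}_k$ that the proof relies on can be checked mechanically. The sketch below (Python; an illustration only, not part of the proof) verifies that, as Figures 2-4 suggest, channels from different triples of $\mathcal{N}_k$ are at least $k$ apart (so they can serve adjacent hexagons when $k_1 = k$), channels within a triple are pairwise distinct (so they can serve hexagons at distance two when $k_2 = 1$), and the largest channel is $2k + 7$:

```python
def channel_set(k):
    """N_k as three triples: {1,2,3}, {k+3,k+4,k+5}, {2k+5,2k+6,2k+7}."""
    return [[1, 2, 3], [k + 3, k + 4, k + 5], [2 * k + 5, 2 * k + 6, 2 * k + 7]]

def check_gaps(k):
    triples = channel_set(k)
    # The largest channel is 2k + 7, the claimed spectrum width.
    assert max(triples[-1]) == 2 * k + 7
    # Channels drawn from different triples differ by at least k ...
    for i, a in enumerate(triples):
        for b in triples[i + 1:]:
            assert min(abs(x - y) for x in a for y in b) >= k
    # ... and channels within one triple are pairwise distinct.
    for t in triples:
        assert len(set(t)) == 3

for k in range(4, 25):
    check_gaps(k)
```

For $k = 4$ this reproduces $\mathcal{N}_4 = \{1, 2, 3, 7, 8, 9, 13, 14, 15\}$ exactly.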
![](images/8df0b08aaa6b62e72a6419ac02beaa8eb695df0ca498686988ef4ae9665dca61.jpg)
By inspection, $S(2,1) = 9$, the lowest possible value. For $k_{1} = 3$, we have a similar argument, only we use the channel assignment set $\mathcal{N}_{3}$. By inspection of Figure 6, $S(3,1) = 12$, the lowest possible value.

# $k_1 = k$ and $k_2 = 0$

The values in Figure 7 meet the constraints. Therefore, the span over a region $R$ for this case is $2k + 1$. To see this, try a width of $2(k - 1) + 1 = 2k - 1$; then the channel assignment set is $\{1, k, 2k - 1\}$, but the channels $1$ and $k$ must be at least $k$ apart. Hence, $2k + 1$ is the span.

# $k_1 = k_2 = k$

The values in Figure 8 meet the constraints. Hence, the span over a region $R$ is $6k + 1$. To see this, as above try $k - 1$ in place of $k$; then in Figure 8 the hexagons containing $1$ and $(k - 1) + 1 = k$ give a contradiction. Therefore, $6k + 1$ is the span.

![](images/7f7104190bc7e233663c494e86484ad151453f8194d8282c5c8e7de8c76ebdd0.jpg)
Figure 5. Channel assignment for $k_{1} = 2$.

![](images/c6f128ead46e6007bd9a783d7404d49ed848f68b459a6f0f6675b4fea6b39e59.jpg)
Figure 6. Channel assignment for $k_{1} = 3$.

![](images/a7f3584476b9990fbe5337a838ff9e92ae85eb150a174049efab5e751cc3066a.jpg)
Figure 7. Channel assignment for $k_{1} = k$ and $k_{2} = 0$.

![](images/3571e0a5a2bac54cccd37821e8ca9a397b9a56f7acf3c5b17cf118f07e43d85d.jpg)
Figure 8. Channel assignment for $k_{1} = k_{2} = k$.

# General Case

In this section, $k_{1}$ and $k_{2}$ can be any positive integers.

Theorem 3. Let $R$ be a region that contains region $T$ and let $k_{1} \geq 4k_{2}$. Then:

i) If $k_{2}$ divides $k_{1}$, then $S(k_{1}, k_{2}) = 2k_{1} + 6k_{2} + 1$.

ii) If $k_{1} > 6k_{2} + 1$, then $S(k_{1}, k_{2}) = 2k_{1} + 6k_{2} + 1$.

[EDITOR'S NOTE: We omit the proofs.]

Theorem 4. Let $3k_{2} \leq k_{1} \leq 4k_{2}$. Then $S(k_{1}, k_{2}) \leq 3k_{1} + 2k_{2} + 1$.

Proof: By construction. Consider the tiling in Figure 9.
As long as $3k_{2} \leq k_{1} \leq 4k_{2}$, the channel assignment is valid. Then by construction,

$$
S\left(k_{1}, k_{2}\right) \leq 3k_{1} + 2k_{2} + 1.
$$

As shown by the highlighted tiles, this tiling works only if $2k_{2} + 1$ and $k_{1} + 1$ differ by at least $k_{2}$ (by definition of $k_{2}$).

![](images/9ae134ad5315cafee0c64d27a56d387061e3d67e6f879ce7aba0d6d050b1880e.jpg)
Figure 9. Channel assignment for Theorem 4.

It follows that

$$
(k_{1} + 1) - (2k_{2} + 1) \geq k_{2},
$$

$$
k_{1} - 2k_{2} \geq k_{2},
$$

$$
k_{1} \geq 3k_{2}.
$$

Yet we know from Theorem 3 that for $k_{1} \geq 4k_{2}$ we have a strict lower bound; therefore, we must have a strict upper bound, that is,

$$
k_{1} \leq 4k_{2}.
$$

Hence, if $3k_{2} \leq k_{1} \leq 4k_{2}$, then $S(k_{1}, k_{2}) \leq 3k_{1} + 2k_{2} + 1$.

# Conclusion

We summarize specific proved results in Table 1. For the cases $k_{2} = 2$ and $k_{1} = 9, 11, 13$, we are unable to determine $S(k_{1}, k_{2})$; however, we find bounds for those values by Lemma 2.

Table 1. Compilation of spans for different values of $k_{1}$ and $k_{2}$.

| $k_1$ | $k_2$ | $S(k_1, k_2)$ |
|-------|-------|---------------|
| 1 | 1 | 7 |
| 2 | 1 | 9 |
| 3 | 1 | 12 |
| 4 | 1 | 15 |
| 5 | 1 | 17 |
| $l > 5$ | 1 | $2l + 7$ |
| 2 | 2 | 13 |
| 3 | 2 | 17 |
| 4 | 2 | 17 |
| 5 | 2 | 21 |
| 6 | 2 | 23 |
| 7 | 2 | 26 |
| 8 | 2 | 29 |
| 9 | 2 | 30, 31, or 32 |
| 10 | 2 | 33 |
| 11 | 2 | 34, 35, or 36 |
| 12 | 2 | 37 |
| 13 | 2 | 39 or 40 |
| $l > 13$ | 2 | $2l + 13$ |
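The $k_2 = 1$ rows of Table 1 follow a single closed form once $k_1 \geq 4$. A small sketch encoding them (values taken directly from the table):

```python
def span_k2_1(k1):
    """S(k1, 1) as tabulated: special cases for k1 <= 3, then 2*k1 + 7."""
    special = {1: 7, 2: 9, 3: 12}
    return special.get(k1, 2 * k1 + 7)

# The tabulated values for k1 = 4, 5 already agree with the general formula.
assert [span_k2_1(k) for k in range(1, 7)] == [7, 9, 12, 15, 17, 19]
```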
Table 2. General results for values of $k_{1}$ and $k_{2}$.

| Constraints | Span |
|-------------|------|
| any $k_1$, $k_2 = 0$ | $2k_1 + 1$ |
| $k_1 = k_2$ | $6k_1 + 1$ |
| $k_1 = 2$, $k_2 = 1$ | 9 |
| $k_1 = 3$, $k_2 = 1$ | 12 |
| $k_1 \geq 4$, $k_2 = 1$ | $2k_1 + 7$ |
| $k_1 \geq 4k_2$ | $\leq 2k_1 + 6k_2 + 1$ |
| $3k_2 \leq k_1 \leq 4k_2$ | $\leq 3k_1 + 2k_2 + 1$ |
| $k_1 > 4$, $k_2 = 1$ | $> 2k_1 + 6$ |
| $3k_2 \geq 2k_1$ | $4k_1 + 3k_2$ |

Table 2 collects our general results. The last row is due not to us but to Mark Shepherd [1998]. For selected values of $k_{1}$ and $k_{2}$, we establish the span of an arbitrary planar hexagonal region that includes $T$. For all combinations, we can find a pattern that repeats, that is, a tessellation of frequencies. This is a major result, because we know how to construct a frequency assignment based on the values of $k_{1}$ and $k_{2}$ through a simple formula, as shown in Figure 4 for $k_{1} \geq 4$ and $k_{2} = 1$.

# News Release: Mathematicians Help Clear Out Airwaves

Last week mathematical researchers at Washington University in St. Louis announced that they have improved the current method of assigning radio frequencies, such as the channel of your favorite station. With the increase in wireless communication, it has become more important than ever to assign frequencies efficiently while avoiding interference as much as possible.

The mathematicians made use of a certain type of pattern, made popular by the contemporary artist M.C. Escher, called a tessellation. These patterns are carefully constructed to cover an entire region without leaving any gaps. The new results show how to assign channel frequencies to regions in a tessellation so as to minimize several kinds of interference from nearby stations. The work extends efforts currently in progress at Oxford University in England.

"Our work is quite general," commented one of the researchers. "It applies regardless of geographical situations, such as differences in altitude or other natural phenomena."

Radio listeners have nothing to fear from these new developments; the frequency of your favorite station is unlikely to change. The new results will help long-term planning by engineers, operators of cell-phone services, and government regulators.
"In the future you won't have the kind of interference that causes someone to flip to Rush Limbaugh's channel but end up instead with Howard Stern. We're just making sure that listeners get to hear Rush say his whole two cents' worth."

# Reference

Shepherd, Mark. 1998. Radio channel assignment. Ph.D. thesis, Merton College, Oxford University. http://www.maths.ox.ac.uk/combinatorics/thesis.html.

# Author/Judge's Commentary: The Outstanding Channel Assignment Papers

Jerrold R. Griggs

Dept. of Mathematics

University of South Carolina

Columbia, SC 29208

griggs@math.sc.edu

homepage: http://www.math.sc.edu/~griggs/

# Background

This 2000 MCM Problem, which I wrote, is based on a subject of considerable current interest to mathematicians and communications engineers. The original "channel assignment problem" has a long history. The problem is to assign an integer channel to each transmitter in a network, with the condition that the absolute difference between channels for two nearby transmitters must not belong to a certain set $T$ that arises from interference considerations (see Hale [1980] for motivation). A feasible assignment can be obtained with channels far apart, but this is highly inefficient. Typically, a frequency band that spans the assigned channels is allocated to the network; the wider the band, the more it costs. The problem, then, is to minimize the "span" of the assignment, which is the difference between the maximum channel and the minimum channel.

This problem is modeled nicely with graph theory by letting each transmitter correspond to a vertex, with edges corresponding to pairs of nearby transmitters. The problem becomes a special vertex-coloring problem, owing to the set $T$ of forbidden differences [Cozzens and Roberts 1982]. Among the methods that come into play are number theory (in the case of complete graphs [Griggs and Liu 1994]) and the complexity of graph homomorphisms.
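The forbidden-difference condition described above is easy to state in code. The sketch below (Python; the triangle graph and the set $T$ are illustrative examples, and first-fit greedy generally does not achieve the minimum span) assigns each vertex the smallest channel compatible with its already-assigned neighbors:

```python
def first_fit_t_coloring(adj, forbidden):
    """Greedy first-fit: give each vertex the smallest channel whose
    difference with every already-assigned neighbor is not in the
    forbidden set of differences."""
    channels = {}
    for v in sorted(adj):
        c = 0
        while any(abs(c - channels[u]) in forbidden
                  for u in adj[v] if u in channels):
            c += 1
        channels[v] = c
    return channels

# Triangle with T = {0, 1}: adjacent channels may be neither equal
# nor consecutive.
triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
assignment = first_fit_t_coloring(triangle, {0, 1})
span = max(assignment.values()) - min(assignment.values())
```

On this example the greedy pass yields channels 0, 2, 4, for a span of 4.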
+ +# History of the Problem + +In 1987, I heard from Fred Roberts [1987] about a variation of the $T$ -coloring problem in which channels for two nearby transmitters, say within distance $s$ , must differ by at least two, while those within distance $2s$ must differ by at least one. The network may have thousands of transmitters. Again we seek to minimize the span of a feasible channel assignment. + +In considering this problem, I realized that it no longer translates directly into a graph theory problem by assigning vertices to transmitters and edges to pairs at distance at most $s$ : Although two transmitters at distance at most $s$ correspond to adjacent vertices, a pair of transmitters at distance between $s$ and $2s$ corresponds to a pair of vertices at distance two in the graph only when some third transmitter is within distance $s$ of both of them. (Note that the distance between two vertices in a graph is the number of edges in a shortest path between them.) In fact, the vertices for two transmitters at distance between $s$ and $2s$ may not even be connected in the graph. + +Nonetheless, it is clear that for the real problem it is useful to understand the natural graph analogue, which is to find the minimum span for the integer labelings of a graph such that labels for vertices at distance one (resp., two) differ by at least two (resp., one). For the transmitter networks in Parts A and B of the contest problem this year, the associated graph problem is precisely of this type with one change: The span in the contest problem is one more than in the labeling problems in the literature. + +Griggs and Yeh [1996] introduce this graph labeling problem and pose some fundamental questions about it. 
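The distance-two labeling condition just described is straightforward to verify mechanically, even for more than two distance levels. A minimal sketch (Python; the three-vertex path and the labelings are illustrative examples only):

```python
from collections import deque

def bfs_distances(adj, src):
    """Graph distance (number of edges) from src to every reachable vertex."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def is_l_labeling(adj, labels, d):
    """Check the L(d[0], ..., d[r-1]) condition: labels of vertices at
    graph distance i must differ by at least d[i-1]."""
    for u in adj:
        for v, i in bfs_distances(adj, u).items():
            if 1 <= i <= len(d) and abs(labels[u] - labels[v]) < d[i - 1]:
                return False
    return True

# A path on three vertices: 0 -- 1 -- 2.
path = {0: [1], 1: [0, 2], 2: [1]}
```

For example, on the path the labeling 0, 2, 4 satisfies the $L(2,1)$ condition, while 0, 1, 2 does not.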
Included is the natural generalization of the graph problem in which there are multiple levels of spectral spreading interference: Given integers $d_1, \ldots, d_r$ we seek minimum span labelings such that for all $i$ , the labels for any pair of vertices at distance $i$ differ by at least $d_i$ . Such a labeling is called an $L(d_1, \ldots, d_r)$ -labeling. + +# Applications + +While such problems are mathematically interesting, they have taken on greater importance in recent years due to their potential applicability to the design of mobile radio networks. Large areas are often covered by a network of regularly spaced transmitters such that the associated graph labeling problem exactly models the network problem. The most common design places the transmitters in a triangular lattice, so that the whole region can be tessellated by a honeycomb of hexagons, with each transmitter in the center of a hexagonal region that it covers. An early reference considering such a model is Gamst [1982]; and evidently MacDonald [1979], cited by contest teams, also does this. A group led by Robert Leese at Oxford has been prominent in this program in recent years [Leese 1999]. + +# The Outstanding Papers + +# Requirement A + +Part A of this year's contest problem is a basic instance of this application, a sizable array of transmitters with $d_{1} = 2$ , $d_{2} = 1$ . Feasible solutions are trivial to find, but working down to an optimal one requires some cleverness. We had expected many teams would enjoy working on this and that most would achieve the minimum span; this was indeed the case. + +# Requirement B + +Part B extends the network of Part A to the whole plane. While one can solve Part A by trial-and-error or by a computer search (to obtain an optimal assignment and rule out smaller ones), Part B requires a method to keep going forever labeling this infinite network. 
Successful teams for Part B usually found a pattern (a strip or a tile of numbers) that could be repeated indefinitely and achieve the same span as the bounded array in Part A. One way is to label a strip by an appropriate ordering, say + +$$ +1 3 5 7 9 2 4 6 8 1 3 \dots , +$$ + +and then use the same strip shifted appropriately for the next row, and so on. Another perspective is to construct an appropriate tile of nine hexagons and replicate it. Judges were pleased to see papers, such as that of the team from California Polytechnic State University, that test a variety of heuristics to assign channels in Parts A and B, since such methods are needed for more general arrays and distance parameters. At least one paper, by the team from Wake Forest University, makes the interesting observation (with proof) that the optimal labeling for Parts A and B is essentially unique! + +# Requirement C + +What is most remarkable is that several teams were able to solve Part C, in which the channel spread parameters for Parts A and B are extended to $d_{1} = k$ , $d_{2} = 1$ . One can give decent labelings for the bounded array in Part A and for the full plane in Part B that are not far off from the lower bounds that one can quickly derive. However, we had not completely solved the problem for general $k$ before the contest. It seems to be a new result. + +# Requirement D + +Part D is the open-ended generalization of the problem to general array configurations and multiple levels of interference. It has the most room for + +creativity and for model design. This part was expected to be the main point of differentiation among the entries. Judges were disappointed that most entries did not do much here—perhaps they ran out of time working on Parts A–C, where the assumptions and model are explicit. Weaker papers only considered, say, what happens if one transmitter is not at the center of its hexagon. But stronger papers gave this part considerable thought. 
Some considered general conditions with two levels of interference; the impressive results in the paper by the team from Washington University nearly solve this general case. (In part, they built on the thesis of Mark Shepherd [1998].) The paper from the Wake Forest University team analyzes an assignment method for multiple levels of interference. Judges wanted to see teams use the real problem of wireless communication as motivation, such as the choice of multiple-level distance parameters analyzed by the team from Lewis & Clark University. Some considered how to adapt their hexagonal lattice approach to other configurations of transmitters.

# Requirement E

For Part E, judges wanted to see an article that conveys to the public the sense of the problem and the team's ideas on how to attack it. A particularly amusing article was crafted by a team from Harvey Mudd College whose entry received Honorable Mention.

# General Remarks

Several teams located related results in the literature or on the Web, particularly for the problem of cyclic labelings, where integers $\{1,2,\dots,n\}$ are used but the distance between two labels is measured modulo $n$, that is, by the shortest path on the circle labelled 1 through $n$. This approach can be used when a large number of channels must be assigned to each location: When a vertex receives label $i$, it is given all channels congruent to $i$ modulo $n$. For two levels of interference $(L(d_{1},d_{2}))$, this cyclic problem is solved in van den Heuvel et al. [1998]. However, this does not immediately solve the contest problem. A solution for general $L(d_{1},d_{2})$ of the (linear) contest problem remains to be found.

Teams typically found good labelings for the bounded array in Part A by trial and error or by exhaustive computer search, for small values of $k$, and identified patterns or tiles that could be extended to general $k$ to yield good labelings for the bounded and unbounded arrays.
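The strip pattern mentioned under Requirement B can be checked by brute force. In axial hex coordinates, label the hexagon at $(q, r)$ with `strip[(q + t*r) mod 9]` for some row shift $t$; the sketch below (Python; an illustration, with the shift left as a search variable since the text only says "shifted appropriately") finds which shifts satisfy the $L(2,1)$-type constraints everywhere:

```python
def hex_dist(dq, dr):
    """Hex-grid distance between cells offset by (dq, dr), axial coordinates."""
    return (abs(dq) + abs(dr) + abs(dq + dr)) // 2

# All relative offsets at hex distance exactly 1 or 2.
OFFSETS = [(dq, dr) for dq in range(-2, 3) for dr in range(-2, 3)
           if 1 <= hex_dist(dq, dr) <= 2]

strip = [1, 3, 5, 7, 9, 2, 4, 6, 8]

def strip_is_valid(shift):
    """Label hexagon (q, r) with strip[(q + shift * r) % 9]; require that
    hexagons at distance 1 differ by >= 2 and at distance 2 by >= 1.
    By periodicity, only the index offset d matters for each (dq, dr)."""
    for dq, dr in OFFSETS:
        d = (dq + shift * dr) % 9
        need = 2 if hex_dist(dq, dr) == 1 else 1
        if min(abs(strip[i] - strip[(i + d) % 9]) for i in range(9)) < need:
            return False
    return True

valid_shifts = [t for t in range(9) if strip_is_valid(t)]
```

The search confirms that appropriate shifts exist, so the repeated strip does cover the plane with nine channels.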
One cannot be certain that a labeling is optimal without proving that there is no labeling of smaller span. Also, it is by no means clear that there exist optimal labelings using a repeating pattern, although many teams seemed to assume this. Thus, it is not sufficient to check just labelings from a repeating pattern. (In fact, it would be very interesting if one could show that for all sets of distance parameters $d_{i}$ there is an optimal labeling of the plane built from a repeating pattern. This seems to be an open question.)

Judges favored papers that provide a clear proof that their labelings are optimal for general $k$. The best proofs that we read were impressive, such as the one by the team from the National University of Defence Technology. That paper is among those that made the interesting observation that for general $k$ there is an optimal labeling for the arrays in Parts A and B that uses only nine different channels, which could be useful in some applications!

# Related Research

Chang and Kuo [1992] made noteworthy progress on the original graph labeling problems for $d_{1} = 2$, $d_{2} = 1$ posed by Griggs and Yeh. Griggs and Yeh conjectured that every graph of maximum degree $\Delta \geq 2$ has an $L(2,1)$-labeling of span at most $\Delta^{2}$; this bound is achieved by cycles. Their conjecture remains open, even for $\Delta = 3$. For the famous Petersen graph, in which every vertex has degree 3, the minimum span is 9, the conjectured maximum.

Georges and Mauro [1995] showed how the $L(2,1)$-labeling problem for general graphs $G$ is equivalent to a path covering problem for the complement of $G$. Such problems are known to be difficult (consider the problem of whether a graph has a Hamilton path, for instance), and, indeed, it has been recently shown [Fiala et al., to appear] that determining whether a graph has a labeling with span at most $k$ is NP-complete for all $k \geq 4$.
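The Petersen-graph observation above lends itself to a quick computational check. Since the Petersen graph has diameter 2, every pair of vertices is within distance two, so all ten labels must be distinct and the span is at least 9; the backtracking sketch below (illustrative) then looks for a span-9 labeling, i.e., a permutation of $\{0, \ldots, 9\}$ in which adjacent vertices differ by at least 2:

```python
# Petersen graph: outer 5-cycle on 0..4, inner pentagram on 5..9, spokes i--i+5.
EDGES = ([(i, (i + 1) % 5) for i in range(5)]
         + [(5 + i, 5 + (i + 2) % 5) for i in range(5)]
         + [(i, i + 5) for i in range(5)])
ADJ = {v: [] for v in range(10)}
for u, v in EDGES:
    ADJ[u].append(v)
    ADJ[v].append(u)

def extend(labels, used):
    """Backtracking search for a permutation of {0,...,9} (distinctness
    already covers the distance-2 condition) in which adjacent vertices
    carry labels differing by at least 2."""
    v = len(labels)
    if v == 10:
        return list(labels)
    for c in range(10):
        if c in used:
            continue
        if all(abs(c - labels[u]) >= 2 for u in ADJ[v] if u < v):
            labels.append(c)
            used.add(c)
            found = extend(labels, used)
            if found:
                return found
            labels.pop()
            used.discard(c)
    return None

labeling = extend([], set())  # a span-9 L(2,1)-labeling, if one exists
```

The search succeeds, confirming that the minimum span of 9 is attained.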
A good general upper bound on the span in the case $L(p,q)$ has been given recently [van den Heuvel and McGuinness 1999] for general planar graphs of maximum degree $\Delta$, by applying the methods of the proof of the Four Color Theorem.

Leese [1997] considers channel assignments for the hexagonal array of our problem that are obtained by tilings (periodic labelings). A wide range of applied references is provided in this paper. McDiarmid and Reed [1997] and Fitzpatrick et al. [2000] discuss algorithms for channel assignments for the hexagonal array of our problem in which each location $v$ must receive a specified number $w_v$ of channels. Many papers seem to be emerging that employ familiar methods of discrete optimization to produce channel assignments of low span (not necessarily optimal). Contest teams discovered work that we were not aware of, by Hurley [n.d.] and by Smith and Hurley [1997], that uses heuristics and search methods, including tabu search and genetic programming; Hurley developed software for these problems. A new project by Leese [2000] tests a linear programming method based on column generation.

# Conclusion

Returning to the contest problem, judges had hoped to see more entries employ such methods of discrete optimization on Part B. We also hoped that more effort would be spent considering the open-ended modeling Part D of the problem. It may simply be that teams found more direct analysis to be successful on the specific problem instances in Parts A, B, and C, and most of their energy was spent on tackling these parts. The Outstanding papers published here are among the very few that accomplished much with the extension to Part D.

Judges raised fewer concerns than in past years about specific missing elements in entries; but again this is likely because the model for Parts A, B, and C, which teams focused on, is clear-cut.
In general, what judges particularly sought in winning papers was clarity, both in explaining their approach and in proofs, especially for the lower bound in Part C. Since space permits, I reproduce below the discussion in my Judge's Commentary last year [Griggs 1999] of crucial elements in an outstanding contest entry. + +# Crucial Elements in an Outstanding Entry + +Here are some general tips that the judges feel apply to any contest problem. + +- Teams should attempt to address all major issues in the problem. Projects missing several elements are eliminated quickly. +- A thorough, informative summary is essential. Papers that are strong otherwise are often eliminated in early judging rounds due to weak summaries. Don't merely restate the problem in the summary, but indicate how it is being modeled and what was learned from the model. The summary should not be overly technical. +- Develop a model that people can use! The model should be easy to follow. While an occasional "snow job" makes it through the judges, we generally abhor a morass of variables and equations that can't be fathomed. Well-chosen examples enhance the readability of a paper. It is best to work the reader through any algorithm that is presented; too often papers include only computer code or pseudocode for an algorithm without sufficient explanation of why and how it works. +- Supporting information is important. Figures, tables, and illustrations are very helpful in selling your model. A complete list of references is essential—document where your ideas come from. + +# References + +Chang, G., and D. Kuo. 1992. Labelling graphs with a condition at distance 2. SIAM Journal of Discrete Mathematics 5: 586-595. + +Cozzens, M.B., and F.S. Roberts. 1982. $T$ -colorings of graphs and the channel assignment problem. Congressus Numerantium 35: 191-208. + +Fiala, J., T. Kloks, and J. Kratochvil. To appear. Fixed-parameter complexity of $\lambda$ -labellings. Discrete Applied Mathematics. 
+Fitzpatrick, S., J. Janssen, and R. Nowakowski. 2000. Distributive online channel assignment for hexagonal cellular networks with constraints. Preprint, January 2000. +Gamst, A. 1982. Homogeneous distribution of frequencies in a regular hexagonal cell system. IEEE Transactions on Vehicular Technology VT-31: 132-144. +Georges, J.P., and D.W. Mauro. 1995. Generalized vertex labelings with a condition at distance two. Congressus Numerantium 109: 141-159. +Griggs, Jerrold R. 1999. Judge's commentary: The outstanding lawful capacity papers. *The UMAP Journal* 20 (3): 331-333. +________, and D. D.-F. Liu. 1994. The channel assignment problem for mutually adjacent sites. Journal of Combinatorial Theory A 68: 169-183. +________, and R.K. Yeh. 1996. The $L(2,1)$ -labeling problem on graphs. SIAM Journal of Discrete Mathematics 9: 309-316. +Hale, W.K. 1980. Frequency assignment: Theory and applications. Proceedings of the IEEE 68: 1497-1514. +van den Heuvel, J., R.A. Leese, and M.A. Shepherd. 1998. Graph-labeling and radio channel assignment. Journal of Graph Theory 29: 263-283. +van den Heuvel, J., and S. McGuinness. 1999. Colouring the square of a planar graph. Preprint. +Hurley, S. n.d. Frequency assignment research. http://www.cs.cf.ac.uk/user/Steve.Hurley/freq.htm. Accessed August, 2000. Software description included. +Leese, R.A. 1997. A unified approach to the assignment of radio channels on a regular hexagonal grid. IEEE Transactions on Vehicular Technology 46: 968-980. +_____. 1999. The mathematics of radio spectrum management. Research program report at http://www-radio.gov.uk/busunit/research/mathsp/mathsp.htm. +________. 2000. A linear programming approach to radio channel assignment in heavily loaded, evolving networks. Preprint. +MacDonald, V.H. 1979. Advanced mobile phone service: The cellular concept. Bell System Technical Journal 58: 968-980. +McDiarmid, C., and B. Reed. 1997. Channel assignment and weighted coloring. Preprint. +Roberts, F.S. 1987. 
Personal communication. + +Shepherd, Mark. 1998. Radio channel assignment. Ph.D. thesis, Merton College, Oxford University. http://www.maths.ox.ac.uk/combinatorics/thesis.html. + +Smith, D.H., and S. Hurley. 1997. Bounds for the frequency assignment problem. Discrete Mathematics 167/168: 571-582. + +# Acknowledgment + +The author's research was supported in part by NSF grant DMS-0072187. + +# About the Author + +Jerry Griggs is a graduate of Pomona College and MIT, where he earned his Ph.D. in 1977. Since 1981, he has been at the University of South Carolina, where he is Professor of Mathematics and a member of the Industrial Mathematics Institute. He received the 1999 award at the University for research in science and engineering. + +His research area is combinatorics and graph theory, both fundamental theory and applications to database security, communications, and biology. He has published more than 60 papers and supervised 11 doctoral and 9 master's students. He serves on the Board of the Mathematics Foundation of America, which oversees the Canada/USA Mathcamp. He has been an MCM judge since 1988. + +# Space Aliens Land, Threaten Global Destruction + +Detroit: “Little green men” land and complain about noise + +Space aliens landed simultaneously in all of the world's major cities at about 3:25 EST this morning. Citing excessive radio noise on Earth's part, they proceeded to spraypaint a hexagonal lattice onto the cities from low-flying spaceships. A spokesbeing for the aliens then demanded that all of Earth's radio transmitters be relocated to the center of one of the hexagons and retuned by one month from today. If the transmitters are not relocated and retuned by then, the spokesbeing threatened to destroy the Earth. + +In response to widespread human protest, the spokesbeing, who said his name was "Jymyzzach," which loosely translates to "Jared the Terrible" in English, defended the aliens' actions. 
"Listen," he said, "You humans are using many times more bandwidth than you need, and you still manage to have interference and bad reception in some places. My people are astronomers: We survey the furthest reaches of the cosmos for clues as to the secrets of the universe. Whenever we look at the side of our sky that contains your planet, all we can see at all radio frequencies is this huge, brilliant ball of noise, noise, and more noise."

"And your taste in music is deplorable," added Jared, displaying a false-color radio image of "The Macarena."

Jared explained the need for assigning frequencies as follows: "Basically, what we decided was that to reduce the noise from your planet, you need to keep the range of frequencies of your channels to a minimum. With the hexagonal lattice proposed, you can cover your cities with signals that do not interfere. So you'll have clear, crisp signals, and you'll hear what you want to hear. We even accounted for your inferior technology in our calculations, because when your Earth-transmitters are too close together, an awful interference occurs that we cannot stand!"

"You Earthlings will get better radio reception than ever using no more than 15 channels, and my people will be able to continue our quest for knowledge. We have a win-win situation here," Jared proclaimed.

Jared also had plans for rural and other areas that the aliens have not divided into hexagons. "It's hard to make up a small set of rules for maximum bandwidth reduction when radio transmitters are randomly distributed, but with some thought and some computation you can drastically reduce the bandwidth used by these scattered transmitters."

![](images/709bdc9dd2377ac542bc0559b5c82b45fd05343387ccf870d62ad1de61bb4863.jpg)
The aliens' transmitter restructuring plan.

When pressed for details, Jared explained his scheme for the unpartitioned areas. "You start by assigning each transmitter a number. You set transmitter number one to channel one.
Then you set transmitter number two to channel one and see if it interferes with transmitter number one. If it does, you set transmitter two to channel two and check again for interference; but if it doesn't, you leave transmitter two at channel one and move on to transmitter three. By repeating this process until all transmitters have channel numbers assigned so that none of them interfere, you can get good coverage at pretty low bandwidth."

"When I say that two transmitters interfere, I mean that they can both be heard clearly on the same channel on a radio somewhere. If you draw circles around two transmitters set on the same channel to mark the places where they're just barely audible, you can tell whether they interfere or not by whether those circles intersect. Unfortunately," he continued, "transmitters don't broadcast just on the channel they're set on. They also broadcast a little bit on every other channel, so two transmitters can interfere even if they're set on different channels. You have to take that effect, called 'spectral spread,' into account when you're looking for interference."

Before departing, Jared ordered one of the alien spaceships to destroy the moon. "Just to let you know we're serious," he explained. President Clinton could not be reached for comment.

![](images/f547ca81048b50bbce2aac30247cfa5056252b28bddc7fe7a85522dd4db10f64.jpg)
Different transmission channels interfering.

— Christopher R.H. Hanusa, Anand Patil, and Otto Cortez, in Claremont, Calif.

# Elephant Population: A Linear Model

Nathan Cappallo

Daniel Osborn

Timothy Prescott

Harvey Mudd College

Claremont, CA

Advisor: Michael Moody

# Introduction

We use a matrix to model the effects of the darting on the population.
Assuming that the age distribution was stable at the inception of darting, we:

- drop the birth rate by a chosen factor to simulate a percentage of the elephant cows being darted;
- manipulate this factor to model the waning effectiveness over time of the contraceptive used, thus obtaining an accurate estimate of how many cows to dart each year;
- assume the cost of darting to be comparable to current elephant contraceptives and compare this to the cost of removal; and
- model the effect of darting on populations drastically reduced following a disaster.

The resulting algorithm is sufficiently simple and fast that it could be used by many different elephant parks.

Assuming that culling is not a viable alternative, removal appears to be a more effective solution, since darted elephants will need to be darted multiple times over their lifetime. However, this result does not take into account the increasing cost of removing elephants as humans encroach on their habitat. Also, since the relative number of older, bigger elephants will be greater, tourist revenue will increase. Thus, while removal is less expensive, it may not be the best alternative.

# Elephant Populations

# Growth Rates for Elephant Populations

We first find the rate at which elephants can produce female offspring. Because of the assumed parity between males and females, we ignore the males and take the male population to be equal to the female population. We feel that this is a safe assumption because the population growth is proportional to the number of females, not to the total number of animals.

Elephants produce offspring every 3.5 years, giving a starting rate of 1 elephant/3.5 years. Because $1.35\%$ of births are twins, the birth rate is 1.0135 elephants/3.5 years.
Finally, only half of the births are females, which gives the final birth rate of $\frac{1}{2} \times 1.0135$ female elephants/3.5 years, or an average of 0.1448 females born per cow per year. This is a good approximation because of the large number of elephants in the herd; any random variation tends to cancel itself out. + +Given the unusually long gestation period, a further formula is needed to find the average birth rate per female during her first two years of maturity. Because elephants first conceive between the ages of 10 and 12, we assume that half conceive in the first year and the other half conceive in the second. Since the gestation period is not an integral number of years, one-twelfth of the elephants give birth during the ages of 11 to 12 (half of the elephants were able to conceive, one-sixth conceived during the first two months, so that 22 months later one-twelfth gave birth). The remaining five-sixths of the elephants that conceived during age 10 give birth when they are 12. Also, another one-sixth conceive during the first two months of their eleventh year, so they give birth at the end of their twelfth year. This means that $\frac{1}{6} + \frac{1}{2} \times \frac{5}{6} = \frac{7}{12}$ give birth in their eleventh year. From then on, we assume that elephants give birth yearly, always to 0.1448 cows each birth. This noninteger value reflects that we are dealing with averages, not individual cows. + +It is a common practice in biological studies to view the age structure as a vector and the survival and birth rates as an appropriate square matrix. For the population vector, the $i$ th element represents the current population that is between $i - 1$ and $i$ years old. Multiplying the matrix by the vector gives an equal-size vector that represents the population in the next year. + +The example below shows the age structure of a species that lives to the age of 8 years. 
Note that there is a $75\%$ survival rate for the first year, after which the survival rate for any one year is $\beta$ , except for the next to last year of the animal's life, when the survival rate is $\beta / 2$ . In the second year of the animal's life, it produces an average of 0.097 offspring. In the third through sixth years of the animal's life, it produces an average of 0.290 offspring per year; during years seven and eight, no offspring are produced. Next to the matrix, the age structure vector gives the relative ratios of the animal ages.

The first row consists of the birth rates. The product of this matrix with a population vector represents the passage of one year.

$$
A = \left[ \begin{array}{ccccccccc} 0 & 0 & 0.097 & 0.290 & 0.290 & 0.290 & 0.290 & 0 & 0 \\ 0.75 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & \beta & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & \beta & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & \beta & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & \beta & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & \beta & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & \beta & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & \beta / 2 & 0 \end{array} \right]; \quad \vec {x} = \left[ \begin{array}{c} x_{0} \\ x_{1} \\ x_{2} \\ x_{3} \\ x_{4} \\ x_{5} \\ x_{6} \\ x_{7} \\ x_{8} \end{array} \right]
$$

Multiplying this matrix by a vector representing the population in one year gives a vector whose first entry is the sum of all the female elephants, each scaled by her number of births; this is the number of newborn elephants the following year.

By adding survival rates in the diagonal directly beneath the main diagonal, the entries other than the first become the population of the year before, scaled by this survival rate. All of the other entries are zero, since elephants can age only one year from the year before and can give birth only to newborns.
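The one-year update described above can be sketched in a few lines of code. This is an illustration of the 8-year example only, with the hypothetical value $\beta = 0.9$ filled in (the paper leaves $\beta$ symbolic) and a made-up flat starting population:

```python
import numpy as np

# Leslie matrix for the 8-year example above.  beta = 0.9 is an
# illustrative value; the paper leaves beta symbolic.
beta = 0.9
births = [0, 0, 0.097, 0.290, 0.290, 0.290, 0.290, 0, 0]
survival = [0.75] + [beta] * 6 + [beta / 2]   # subdiagonal entries

A = np.zeros((9, 9))
A[0, :] = births
for i, s in enumerate(survival):
    A[i + 1, i] = s

# One year's passage: multiply the matrix by an age vector.
x = np.full(9, 100.0)     # hypothetical 100 animals per age class
x_next = A @ x            # first entry = newborns, rest = survivors

# The dominant eigenvalue's magnitude is the long-run growth rate.
growth = float(max(abs(np.linalg.eigvals(A))))
```

With this starting vector, `x_next[1]` is $0.75 \times 100 = 75$ survivors of the first age class, and `x_next[0]` is the newborn count $(0.097 + 4 \times 0.290) \times 100 = 125.7$.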
We assume that the age ratios have reached an equilibrium; this is safe to say, provided that park management did not selectively hunt or relocate elephants based on age. Then the age vector of the next year is proportional to the age vector of the current year. In other words, $A\vec{x} = \lambda \vec{x}$ , in which $\lambda$ and $\vec{x}$ are an eigenvalue and an eigenvector. In this instance, $\lambda$ gives the growth rate and $\vec{x}$ gives the age distribution of the current population, which can be scaled appropriately to fit the known population size.

Given a survival rate of elephants for their first year of between $70\%$ and $80\%$ , we set the first-year survival rate at $75\%$ . The average elephant lives to 60, and the growth rate is $5\%$ yearly [Douglas-Hamilton 2000]. This length of life leads to a survival rate of approximately $99\%$ from one year to the next after the first year.

Because elephants do not live to 70, a lower survival rate is required during their last few years—a decrease from $99\%$ at 60 to $0\%$ at 70. Because these are average values, the survival function should be smooth; it should also be level during the first few years and then decrease more rapidly, much like a sine curve. We use

$$
S(a) = 0.99 \sin \left(\frac {(a - 48)\pi}{22}\right),
$$

where $S$ is the survival rate of an elephant $a$ years old. While this curve may be a theoretical construct, any realistic death rate would have to be similar to this curve and thus have a similar effect on the rest of our math.

# Age Structure and Growth Rate

We used a mathematical solver to investigate our $70 \times 70$ square matrix (which we do not display here); it has $\lambda = 1.043$ . This means that the elephant population grows $4.3\%$ each year. From the eigenvector, we found the age distribution of the elephants (Figure 1).

![](images/e2b228d395361c7c16b16ace8e670bd0d0c247473a92bb6e7e2d90611e291473.jpg)
Figure 1.
Current elephant age structure.

This growth rate does not agree with the data for removal of elephants from the park in the last two years (20% removed per year). The removal data do not take into account further elephants that may have been culled, so the numbers may have been even higher.

A second source [www.africalibrary.org 1999] confirms our estimates of birth and survival rates. Putting maximal values into the matrix, we could not obtain a growth rate of $20\%$ ; even with elephants giving birth every 22 months and a death rate of zero, the growth rate was never above $15\%$ . So we decided that the data on removal were erroneous.

# Darting Elephants—Now and Tomorrow

# How Many and Which Ones?

To maintain a zero-growth population, we need to reduce the birth rate to $27.1\%$ of its current value $\mu$ . This is a much more drastic change than the removal of the 600-800 animals that was required over the past 20 years. The main reason is that the birth rates are already low, so a much larger change is needed to affect the population. The survival rates, on the other hand, are high, so a much smaller reduction can effect a larger change.

We assume that the drug is $99\%$ effective immediately upon injection and is still over $90\%$ effective at the end of the first year; by the end of the second year, the drug drops to zero efficacy. A particular percentage of efficacy means that that percentage of the population is still under the effect of the drug and cannot conceive. Like the sine wave for survival rates, the drug efficacy should be concave downward, decreasing more rapidly as the second year ends. However, we feel that the sine wave decreases too rapidly for the first few months and too slowly in the end to model correctly the effects of the drug. We use instead a fourth-order polynomial:

$$
E(t) = -0.062 t^{4} + 0.99,
$$

where $E$ is the drug's effectiveness at time $t$ years after injection.
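Averaging $E(t)$ over each year of protection is a short exercise with the antiderivative; a minimal check in code (a sketch of the arithmetic, not the team's program):

```python
# Yearly average of E(t) = -0.062 t**4 + 0.99 via its antiderivative
# F(t) = -0.062 t**5 / 5 + 0.99 t; the average over [a, a+1] is F(a+1) - F(a).
def F(t):
    return -0.062 * t**5 / 5 + 0.99 * t

avg_year1 = F(1) - F(0)   # average efficacy during the first year
avg_year2 = F(2) - F(1)   # average efficacy during the second year
# E(2) = -0.062 * 16 + 0.99 = -0.002, so the drug is spent after two years.
```

This gives `avg_year1` $\approx 0.978$ and `avg_year2` $\approx 0.606$, the $97.8\%$ and $60.6\%$ figures used in the birth-rate reduction.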
This means that there is an average of $97.8\%$ efficacy over the first year (implying a $2.2\%$ chance of pregnancy) and $60.6\%$ efficacy over the second year. This also means that a cow darted both years would have a $0.9\%$ chance of pregnancy. Denote by $\gamma$ the percentage of elephants darted; then in two years' time, the percentage darted both years is $\gamma^2$ , the percentage darted only once is $2\gamma(1 - \gamma)$ , and the percentage not darted either year is $(1 - \gamma)^2$ . This means that the percentage able to get pregnant (and hence the factor by which the birth rate drops) is

$$
\mu = 0.00883 \gamma^{2} + (0.3944 + 0.0224)\gamma(1 - \gamma) + (1 - \gamma)^{2},
$$

which simplifies to

$$
\mu = 0.59203 \gamma^{2} - 1.5832 \gamma + 1. \tag {1}
$$

Setting this expression equal to the desired birth rate reduction (to $27.1\%$ of the current value) gives the desired darting rate $\gamma$ as $59\%$ of all reproductive female elephants per year.

We also need to find the targets for the contraceptive. Should the park staff seek to drug whole herds? a certain percentage of each herd? or every animal in a certain age group? While seeking whole herds would be most cost-effective, this practice would decimate the herd by not allowing it to reproduce. Seeking out specific ages would be too expensive because of the difficulty in determining the ages of specific elephants. Therefore, we decided that we would target a random percentage based on the value of $\gamma$ .

With the birth rates reduced by $\mu$ , the growth rate is $0.0\%$ , as desired, and the stable age structure (achieved some time in the future) is shown in Figure 2.

# Uncertainty in Derived Data

To find how the uncertainty in our given data affects our estimate, we could propagate the uncertainty through the functions involved.
But error propagation through the process of finding the eigenvalue of a $70 \times 70$ matrix is tedious, so we use another method. We put into the population matrix all of the values that would cause a higher-than-calculated birth rate and use this to find the resulting error in $\mu$ .

![](images/a984adccb2d29d0aacf7df201b60ab4d21b07d9a3e22c5b9a35aa201c1a36abd.jpg)
Figure 2. Equilibrium age structure with darting $59\%$ .

First, we take the maximal scenario: $100\%$ survival for adults, $80\%$ for juveniles. Then the birth rate must be reduced by $82\%$ , $9\%$ more than the calculated value of $73\%$ .

Taking the minimal scenario, we encounter a problem. We take the survival rate to be $99\%$ , because anything lower causes the percentage growth to become unrealistically small. We solve this by taking the error in the growth rate of $4.29\%$ to be $\pm 1\%$ . This allows values as high as $5.29\%$ , which is in agreement with the maximal scenario's pre-darting eigenvalue, and implies a growth rate of $3.29\%$ for the minimal scenario. Assuming the juvenile survival rate drops to $70\%$ , this implies a general survival rate of 0.984. Upon plugging these values into the matrix, we find that we need to reduce the birth rate $\mu$ only by $64\%$ , again $9\%$ away from the calculated $73\%$ .

So, our estimates point to an error no larger than $\pm 9\%$ .

Next, we want to find the error in how many elephant cows we need to dart. Our final goal is to estimate the error in the costs of darting and removing elephants.

Taking the derivative of both sides of (1), we get:

$$
\partial \mu = 0.59203 \cdot 2\gamma \, \partial \gamma - 1.5832 \, \partial \gamma .
$$

Solving for $\partial \gamma$ gives

$$
\partial \gamma = \frac {\partial \mu}{0.59203 \cdot 2 \gamma - 1.5832}.
+$$ + +Taking the calculated value for $\gamma$ to be $59\%$ yields a value for $\partial \gamma$ of 0.0595, giving + +us the second step in our process: the uncertainty in how much we need to dart is $0.59 \times 0.0595 = 0.035$ ; therefore we need to dart $59 \pm 3.5\%$ of the elephants. + +# Sensitivity and Stability + +The rather high uncertainties— $3.5\%$ for the percentage to dart and $15\%$ for the cost (derived later)—point to low general stability. + +Upon changing individual values in the population matrix, we find that the value for the survival rate is the most important. Changing that rate by only $1\%$ , the growth rate changes by up to $1\%$ and the number we need to dart by $9\%$ . Birth rate and juvenile survival rate are insignificant by comparison, as was the gestation time. + +# Age Structures of the Future + +What will the elephant population look like in 30 years? Assuming that the population was already at equilibrium without darting, we multiply the age vector by the new, adjusted matrix, yielding a vector that represents what the age structure would look like in the next year. Reiterating another 29 times, we find what the age structure would look like in 30 years (Figure 3). + +![](images/d2130158c6edfd3dc622b4d58b293f9ced83572b8cfae4c22e96b811d4fe2109.jpg) +Figure 3. Age structure in 30 years. + +The most notable aspect is the sharp peak that occurs at 30 years. At first glance, this may seem troubling because of the unexpected discontinuity; however, this is to be expected, because it represents the sharp drop in births when the contraceptive program began 30-31 years earlier. + +Another interesting detail is the spike in the zero-year-old range. This is a result of many of these elephants dying off before they reach age one, as reflected in the value of 0.75 for their survival rate. + +Repeating another 30 times allowed us to find the age structure 60 years after the darting process (Figure 4). 
+ +![](images/29629043a68eeb96b18edc33dc2db48ca07600a66439bce8065320a76964c3c9.jpg) +Figure 4. Age structure in 60 years. + +The aspect of this situation that is most troubling at first is the large jump in the population structure in the elderly age range. However, this is to be expected once again, as it points to the last year that the elephants were not darted. A more interesting aspect of the graph is the slight increase of the two- and three-year-olds as opposed to one-year-olds. The reason is that the elderly group that we were examining a moment ago contributed to this slight increase; then they were no longer able to contribute to the current one-year-old population because they had reached the age of 61, at which point elephants no longer bear young. The population dip at the one-year mark is a result of this effect. + +Given the age structure some number of years after the darting began, we find the difference between that year's expected age structure and the calculated equilibrium age structure. After weighting each age vector to have the same total population (their sum), we find the difference between the two and calculate the length of the difference vector. Plotting this length over time, we were able to find the difference from any age vector to the expected equilibrium value (Figure 5). + +A large hump occurs at about 60 years. This happens because up to this point the large spike that was prevalent in the age distributions at 30 and 60 years prevents the age vector from getting closer to the equilibrium vector. At about 60 years into the future, however, this large spike begins to die off as the elephants in that age group reach age 60 and begin to leave the population. + +![](images/034d32c44b431f542c6629b4d88400b7c437c3f5901975d26d2e3860ca5f8532.jpg) +Figure 5. Difference between projected age vector and projected equilibrium value. 
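The distance plotted in Figure 5 can be computed exactly as described: weight both age vectors to a common total population, then take the length of their difference. A minimal sketch (the three-entry vectors are hypothetical, for illustration only):

```python
import numpy as np

def age_structure_distance(v, w):
    """Scale both age vectors to the same total population and
    return the length (Euclidean norm) of their difference."""
    v = np.asarray(v, dtype=float)
    w = np.asarray(w, dtype=float)
    return np.linalg.norm(v / v.sum() - w / w.sum())

# Hypothetical three-age-class vectors:
projected = [120, 80, 40]
equilibrium = [100, 100, 40]
d = age_structure_distance(projected, equilibrium)
```

Normalizing each vector to proportions is equivalent, up to a constant factor, to the paper's weighting of both vectors to the same total population; the distance is zero exactly when the two age structures are proportional.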
# Effects of Darting on Tourism

Contraceptive darting is conducive to increased tourism, not simply because it means an end to killing but also because of the changes it creates in the age structure. By shifting the population distribution to favor older elephants and keeping the total population size the same, darting assures a higher number of older elephants in the future. Older elephants are not only bigger but also smarter; a herd led by an experienced elephant is much more stable and tourist-friendly than one led by an inexperienced elephant [Mullins 1997].

Removal of elephants, while cheaper, may not be best when one takes tourism into account. The increased tourism due to bigger, calmer elephants may bring in revenues that far exceed the extra spending. Here a model cannot help us; only time can tell which method is cost-effective.

# Relocation vs. Darting

Our model shows that it is possible to rely completely on darting female elephants to control the growth rate and keep a stable population, by darting roughly $59\%$ of the fertile cows yearly.

If we simply remove elephants each year, we are paying a flat cost to remove them from the population forever; removal can be modeled by increasing the death rate of the elephants. When we dart elephants, however, we are not removing them, so the population still increases unless we reduce the birth rate to be as low as the death rate. We also have to re-dart the females every year to keep this birth rate lower, meaning that we are darting more females than the total number of elephants that we would be transporting. Depending on the costs of darting, this could become a more expensive endeavor than simply removing several elephants every year.

It costs \$800 to move an elephant out of the park [Shaw 1999], and darting a single wild horse with the same contraceptive costs \$25 [Bama 1998].

How much would it cost to dart an elephant with this contraceptive?
The contraceptive works by stimulating the immune system of the mammal to produce antibodies that bind to the sperm receptor sites of the oocytes [www.wildnetAfrica.com]. This means that when the sperm attempt to bind to the eggs of the mammal, there are no places for them to bind to, and hence the sperm do not fertilize the egg. The immune system needs to be triggered by a sufficient concentration of the antigen, so the dosage should depend on the total mass of the animal in question. With this information, and the fact that the average mass of an elephant (3250 kg [Estes n.d.]) is 9.7 times the average mass of a wild horse (335 kg [www.agric.nsw.gov.au]), it should take 9.7 times as much contraceptive to have the same likelihood and strength of a result. At \$25 per horse, the cost for an elephant should be about \$242.53, or, after costs of helicopter fuel and maintenance, about \$250.

We want to determine the total costs for various levels of removal and darting. The difficulty is that the more elephants we remove from the park, the fewer elephants we need to dart, and we cannot easily tell how one changes with the other. Random darting can be simulated by scaling all the birth rates down by a fixed amount and removal by a change in the death rate. Both rates are measures of how the population, and more specifically the age groups, change from one year to the next. If we assume that the rangers remove animals regardless of age, then we can create a fixed value for the removal rate and multiply all of the death rates by it.

The two variables that we can control are the darting ratio $\gamma$ and the removal number $\rho$ . The proportion not removed in a given year is $\sigma = 1 - \rho / C$ , where $C$ is the (varying) total number of fertile cows in the population.

We test two extreme cases, $\rho = 0$ and $\rho = 300$ , to see how they affect $C$ . We test $\rho = 300$ under the assumption that $C$ does not change from year to year.
This results in a variation in $C$ of $2\%$ and the same in $\sigma$ . This tells us that both $C$ and $\sigma$ can be treated as constants.

We alter our original model population matrix by multiplying all of the survival values (that is, the values in the diagonal directly below the main diagonal) by $\sigma$ . We choose a value for $\rho$ and get a value for $\sigma$ from it. We place this in our matrix accordingly, and then test the positive real eigenvalue. If it is greater than one, then we try a value of $\mu$ and manipulate it until we achieve $\lambda = 1$ . At this point we have a stable equilibrium value. We repeat this process for several different values of $\rho$ , recording the results.

The last step is to find the costs. We solve for $\gamma$ and multiply by \$250 per cow to get the cost of darting. We get the cost of removal by multiplying the $\rho$ value by \$800 per cow. The sum of these values is the total cost, shown in Figure 6.

![](images/1f41b26e6a21f65dac445ac72443b61e54894b8509eeb93e14208915eb73dd82.jpg)
Figure 6. Cost of control as a function of number of elephants removed.

# Recovery from Drastic Loss

Our model takes advantage of the fact that for large populations, fluctuations tend to cancel each other out. That is why the birth and death rates are so constant. For example, if 50 cows die due to an epidemic of flaccid trunk disease, out of a population of 100 this is a catastrophic $50\%$ loss! However, if the population is 10,000, this is merely $0.5\%$ . Thus, a variation that is a disaster for a small population is negligible for a larger one.

Therefore, for modeling the effect on the population due to a catastrophic loss, a different model is needed, since the assumption that birth and death rates are constant is no longer valid. This model must be flexible enough to account for differing death rates in a given year.
Since the only thing we need to know is how the darting affects the outcome, we can use marginal analysis. Thus we can start with a loss of $90\%$ and see how high a death rate it would take for the population to die out.

A certain minimum number of a given species is needed for viability. For this minimum value, we choose 20 elephants, about the size of a single herd.

Applying this analysis to our specific case, if $90\%$ of the 11,000 elephants die off, we have only 1,100 left. To address the effect that darting might have on the situation, we consider two different cases: if the elephants are being darted, and if they are not.

# No Darting

For initial population $p_0$ , the population one year later is $p_1 = p_0 + p_0(b - d) = p_0(1 + b - d)$ , where $b$ is the birth rate and $d$ is the death rate. For the sake of simplicity, we set the birth rate to be the average value of 0.1448, assuming that any change will be reflected in an appropriate change in the death rate.

Solving the recurrence gives the general formula $p_n = p_0(1 + b - d)^n$ .

Solving for $d$ , given $b = 0.1448$ , $p_0 = 1,100$ , $p_n = 20$ , and $n = 10$ years, we find (after discarding the unrealistic solution) that for the elephants to be doomed in 10 years after a $90\%$ loss, $d$ would have to be $47\%$ on average! This is an extraordinarily high number, especially when one notices that this was the worst-case scenario.

# Darting

For a worst-case scenario, let the first 5 years have a birth rate of 0. After that, the remaining $n - 5$ years have a normal birth rate, taken again as 0.1448. Thus we get the general equation $p_n = p_0(1 + b - d)^{n - 5}(1 - d)^5$ .

Taking $n = 10$ again, $p_n = 20$ , and $p_0 = 1,100$ , we find that the 10-year "doom" value for $d$ becomes $40\%$ .

# Conclusion

We conclude that the effect on survival of darting $(d = .40$ vs. $d = .47)$ is minimal.
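Both "doom" rates above can be recovered in closed form from the two equations; a quick check (variable names are ours, not the paper's):

```python
import math

b, p0, pn, n = 0.1448, 1100.0, 20.0, 10

# No darting: p0 * (1 + b - d)**n = pn  =>  solve directly for d.
d_no_dart = 1 + b - (pn / p0) ** (1 / n)

# Darting: p0 * (1 + b - d)**(n - 5) * (1 - d)**5 = pn.  With n = 10 both
# exponents are 5, so (1 + b - d)(1 - d) = r with r = (pn / p0)**(1 / 5);
# expand and take the root of the quadratic that lies in [0, 1].
r = (pn / p0) ** (1 / 5)
d_dart = ((2 + b) - math.sqrt((2 + b) ** 2 - 4 * (1 + b - r))) / 2
```

This reproduces the paper's figures: `d_no_dart` is about 0.47 and `d_dart` about 0.40.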
# Other Park Populations

The matrix method that we have applied to the herd can be used with other populations, of elephants or nonelephants. Difficulties arise only if the park is too small, when small variations in data might be amplified and the model may lose stability and accuracy.

# Report to Park Management

While at first glance modeling may seem quite abstract, in reality it is an extremely useful and practical tool. The universe contains patterns that may be studied and used to make predictions. These patterns are everywhere: human beings need to eat and breathe, a thrown ball will eventually return to the ground, ice cubes never form spontaneously in a cup of coffee. It is this underlying order that a model takes advantage of to make predictions about the future.

For all the complexity of elephant populations, there are many patterns in them. Most of these are simply common sense. Take the elephant gestation period, for example; it can be used to estimate the maximum elephant fertility.

Other patterns are the nearly constant survival rate after age 1, the constant birth rate, and negligible migration. They allow us to make predictions about the behavior of the elephant population as a whole.

It is easier to model the behavior of the whole preserve than that of an individual elephant. For larger, more complex systems, patterns become even easier to see and understand, because random variations cancel each other out. There is no such thing as a statistical certainty; but the larger a group, the fewer fluctuations due to random chance, and the surer one can be.

We took advantage of these facts in making our model of the elephant populations in your park. Random chance effects tend to cancel each other out in so large a population. Thus, we are able to use uniform and constant values for the birth and death rates of the elephants.
Given these rates, we modeled the changes in the population over time by putting them into a matrix. This matrix, when multiplied with a list of elephant populations for every age in a given year, returns a list of how many elephants would be that age the next year.

We assume that the age distribution of the elephants has become constant over time. In essence, this says that as elephants get older, younger elephants also get older and replace them. Some of the younger ones die before they can get older, however, so the population numbers remain the same.

Without culling, removal, or darting, the population would tend to grow but the proportion of each age would remain the same.

Such details as twins, gestation, and gradually increasing death rates can also be accounted for. These details can be simulated by slightly altering a few of the numbers in the matrix.

Once we can model the population, simulating darting becomes simple. We make the rather safe assumption that darting the elephants drops the birth rate to a uniform degree. We find that the birth rate of the elephants needs to be reduced to $27\%$ of the natural reproductive rate. To accomplish all of the reduction through darting, $59\%$ of females would need to be darted each year.

But the darting will also affect the age structure of the elephants, and our model easily simulates this over any number of years.

However, for smaller systems, our approximations of constant, average values are no longer valid. This is not to say that we cannot model smaller systems, only that the larger the system, the more effective a model can be.

We modeled a much smaller elephant population subjected to catastrophic loss. Take the worst disaster you can think of—say $90\%$ of the elephants dying—and then suppose that somehow the death rate experiences a continual "random fluctuation" upward for 10 years.
Even with darting, it would take a constant death rate of $40\%$ per year to kill off the elephants. Hence, the use of contraceptive darts would have essentially no effect on whether the unlikely event of a sudden loss would cause an extinction of the park's elephant population.

The data for the last two years are startling: $20\%$ of the elephants were removed each year! This seems to imply that the population is growing at $20\%$ per year. However, such a rate is unrealistic.

Much of our confidence in our model comes from testable predictions that it makes. When we plot the age structure 30 years after darting, we see a graph that gradually dies down until it reaches the number of 30-year-olds, where there is a sudden spike, corresponding to elephants born before darting. Furthermore, the graph at 60 years shows not only this spike but another bump about 10 years later, reflecting the formerly larger number of elephants having given birth to a larger number of calves. These effects are predicted by the model without any manipulation by us.

With common sense, a little data, and some mathematics, an excellent predictive model can be made. Our model should be a useful tool in making well-informed decisions.

# References

Bama, Lynne. 1998. Wild horses: Do they belong in the West? *High Country News* (2 March 1998). http://www.hcn.org/1998/mar02/dir/Feature_Wild_horse.html.
Douglas-Hamilton, Iain. 2000. Discovery Online, Field Notebook: Living with Elephants — Q & A. http://izzy.online.discovery.com/area/nature/elephants/qanda.html.
Estes, Richard. n.d. Elephant. http://www.nature-wildlife.com/eletxt.htm.
Mullins, Lisa. 1997. Living on Earth Transcript: January 1, 1997. http://www.loe.org/archives/970103.htm#Delinquents.
Shaw, Angus. 1999. Zimbabwe moves elephants out of harm's way. http://www.canoe.ca/CNEWSFeatures9904/06_elephants.html.
http://www.africalibrary.org/course/papers99/yankoupematamala.html. 1999.
Page no longer available at this URL. +http://www.agric.nsw.gov.au/mdil/horses/ax24.htm. Pagenolonger available at this URL. +http://www.wildnetAfrica.com/wildlifeorgs/oprepro/#IMMUNOCONTRACEPTION. Page no longer available at this URL. + +# A Computational Solution for Elephant Overpopulation + +Jesse Crossen + +Aaron Hertz + +Danny Morano + +North Carolina School of Science and Mathematics + +Durham, NC + +Advisors: Dot Doyle and Dan Teague + +# Introduction + +We extrapolate longevity data and explore the long-term behavior of the population age distribution. We determine the number of dartings to fix the long-run stable population at 11,000; about 1,300 dartings are needed for an every-other-year strategy. We employ two simulations, one based on averages and the other tracking each elephant individually, whose results agree closely. + +Our modeled population recovers from sudden declines and is not overly sensitive to small changes in survivorship data. The model also allows estimating the number of dartings if up to 250 elephants are relocated each year. + +# Assumptions + +- We are told that emigration and immigration are rare, so in our model no elephants enter the park except those that are born. None leave except those that die or are relocated. +- Fifty percent of the elephants are female, as the problem suggests. +- It is beneficial to the population as a whole, as well as more economically feasible, to use as few contraceptive darts as possible. +- Cows first conceive when they are 11 years old, rather than some time between ages 10 and 12. +- Gestation always takes 22 months exactly, instead of approximately. + +- The darts work, so a cow hit by a dart will not conceive for two years. +- Otherwise, cows give birth every 3.5 years until they reach the age of 60. +- There is a $1.35\%$ chance that a given birth will result in twins. +- The survival rate for the first year is .75. +- The initial population is 11,000 individuals. 
+- The rangers can readily determine which females are not pregnant, so that no pregnant females are darted, as at Kruger National Park in South Africa, which uses a similar contraceptive program [Purdy 1998]. +- Cows normally mate once every 3.5 years. The cycle of a cow darted is not disrupted. If the effect of the dart wears off before she would normally mate and become pregnant, she conceives and gives birth on schedule. +- Previous methods of population control eliminated individuals randomly, so no age group was disproportionately depleted and the relocated elephants have an age distribution that is typical of the population as a whole. +- Since the methods of population control that have been used have no effect on the fertility of the cows, we assume that the initial birth rate is constant. + +# Analysis of the Problem + +We predict the long-term behavior of the elephant population as a function of the number of females. If we track each elephant individually, we must track 11,000 individuals; if instead we look at the population as a whole and take an average-case scenario, we must find formulas for birth and death, mating, aging, and the added effects of the contraceptive darts. + +We use both methods. First, we use a computer simulation to track each elephant through its lifespan: We use known probabilities to determine when each elephant is born, reaches maturity, gives birth, and dies. We can use this simulation to test any darting strategy. The results are far less smooth than for an average-case scenario. + +We also use another program based on recursive equations to predict the average-case behavior of the population, which we divide into groups of the same age. This method requires far less computer time. The replacement of random events with a deterministic average allows for ready investigation of long-term behavior without interference from individual unlikely events. 
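The average-case bookkeeping described above can be sketched in a few lines. This is a minimal illustration of the deterministic cohort iteration, not the paper's actual program: the survival and birth numbers below are placeholders, not the fitted values derived later.

```python
# Minimal sketch of the average-case (deterministic) cohort model described
# above. All rates here are illustrative placeholders, not fitted values.

MAX_AGE = 70  # elephants in this sketch live at most 70 years


def step(pop, survival, birth_rate):
    """Advance the age-cohort population vector by one year.

    pop[i] is the expected number of elephants of age i; survival[i] is
    the chance that an elephant of age i lives to age i + 1; birth_rate
    is the expected number of calves per reproducing female per year.
    """
    new_pop = [0.0] * MAX_AGE
    # Aging: the survivors of age i become next year's age-(i+1) cohort.
    for age in range(MAX_AGE - 1):
        new_pop[age + 1] = pop[age] * survival[age]
    # Births: reproducing females (ages 11-59; half the herd is female).
    females = 0.5 * sum(pop[11:60])
    new_pop[0] = females * birth_rate
    return new_pop


# Illustrative run: a flat 11,000-elephant age structure.
pop = [11_000 / MAX_AGE] * MAX_AGE
survival = [0.75] + [0.98] * (MAX_AGE - 1)  # .75 first year, ~2%/yr after
for _ in range(10):
    pop = step(pop, survival, birth_rate=1 / 3.5)
```

Repeated application of `step` is exactly the deterministic iteration described above; the individual-based simulator replaces each expected value with a random draw per elephant.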
+ +Using these two models, we find a mathematical expression for the dynamics of the population and then use these programs to forecast the results of our darting strategy and to demonstrate its stability and flexibility. + +# Task 1: Predicting Survivorship + +There are three distinct phases in the life of an elephant. + +- From birth until five years old, the young elephant is very susceptible to predators and accidents and cannot fend for itself while it still nurses from its mother [African Wildlife Foundation 1998]. +- After it is weaned, at five years of age, it lives most of the rest of its life in relative safety. Several things can kill an adult elephant, but none has a major effect on the population. There is a low rate of disease, accidents are very rare, and no natural predators can kill something as large as an adult elephant [Hanks 1979, 109]. Therefore, over this period the death rate of the elephant is fairly low, about $2\%$ per year. +- Over the course of its lifetime, the elephant grows six sets of molars; around age 50 the final set of teeth wears out, making it impossible for the elephant to properly chew its food, so that the animal eventually starves to death [Holloway 1994]. + +We construct a survivorship curve as a piecewise function, with each segment corresponding to one of these phases. Using our assumptions that the given data are a random sample of the elephant population, that the birth rate in the park has been essentially constant, and that the previous killing has been evenly distributed over the population of elephants, we conclude that the demographic shape of this population is typical for an elephant population. + +Survivorship $l_{x}$ is the fraction of the population alive after $x$ years. To compute the survivorship from the data, we sum the data from each year to get a larger sample size and divide the entire data set by the population at age zero. The final survivorship data looks like Figure 1. 

![](images/b77da7ee5b4244c78f9d06d5bf38b1c43791fa917ba83a964ab3d1ec83ca38d9.jpg)
Figure 1. Survivorship function.

The data divide roughly into three linear sections corresponding to the three stages of the elephant life cycle. These sections appear to be well approximated by lines, so we generate a piecewise function composed of three linear segments for the ages from 2 to 60, based on a least-squares fit. The two points of discontinuity between the pieces of the function cause little error.

![](images/c4554fa4da89a9ee05314118e32ad0f748b95ad96f3baa14de2730ada1f7b645.jpg)
Figure 2. Survivorship data and fitted function.

The survivorship function is

$$
l_{x} = \left\{ \begin{array}{ll} -0.038806x + 0.77512, & 2 \leq x \leq 5; \\ -0.007818x + 0.640015, & 5 \leq x \leq 50; \\ -0.013116x + 0.799663, & 50 \leq x \leq 60. \end{array} \right.
$$

We calculate the probability of death $p_d$ as the fractional change in $l_x$:

$$
p_{d} = 1 - \frac{l_{x+1}}{l_{x}}.
$$

The assumption of a constant birth rate is incorrect, as the data are clearly not monotonically decreasing. But given the assumption that previous population control methods (i.e., shooting) did not affect the age distribution, our model is presumably close to the actual profile.

# Task 2: Achieving Stability

Birthing cows are females older than 11 and younger than 60 who can give birth; we choose some number $D$ of nonpregnant cows to dart. Because of the additional stress that darting places on the population and the expense of darting, we should dart as few elephants as possible.

How often should we dart cows? Darts remain effective for two years. Because the darted elephants are not tagged, annual darting would lead to some elephants being darted two years in a row. Darting every two years uses fewer darts and simplifies our solution. 
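For concreteness, the fitted survivorship function and the death probability derived from it can be transcribed directly. The coefficients are those of the least-squares fit in Task 1; the fit covers ages 2 through 60 only.

```python
# Piecewise survivorship fit l_x and the yearly death probability
# p_d = 1 - l_{x+1}/l_x, transcribed from the Task 1 formulas.

def survivorship(x):
    """Fraction of a cohort still alive at age x (least-squares fit)."""
    if 2 <= x <= 5:
        return -0.038806 * x + 0.77512
    if 5 < x <= 50:
        return -0.007818 * x + 0.640015
    if 50 < x <= 60:
        return -0.013116 * x + 0.799663
    raise ValueError("fit covers ages 2 through 60 only")


def death_probability(x):
    """Chance that an elephant of age x dies before reaching age x + 1."""
    return 1 - survivorship(x + 1) / survivorship(x)
```

For a 20-year-old, for example, this gives a death probability of about 1.6% per year, consistent with the roughly 2% adult rate cited in Task 1. (Near the two discontinuities the piecewise fit gives slightly inconsistent values, as noted above.)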

In a population with a stable birth rate, the same distribution occurs among the age groups—each segment of the population has a characteristic percentage.

The only segment that we can directly affect is the fraction $f_{b}$ that are newborns; the goal is to stabilize the number $N$ of newborns. So, in an ideal setting, after the population stabilizes, we have $N = Ef_{b}$, where $E$ is the total number of elephants in the park.

The actual number of newborns is proportional to the number of cows that can give birth in the next year multiplied by the average number of elephants produced at the end of a successful pregnancy and the average chance of a pregnant female surviving long enough to give birth. The average number of elephants born after a pregnancy is one plus the chance $p_t$ of having twins. The average chance of survival $\bar{p}_s$ for up to one year is

$$
\bar{p}_{s} = \frac{\int_{0}^{1} (1 - p_{d})^{t} \, dt}{1 - 0} = \frac{(1 - p_{d})^{1} - (1 - p_{d})^{0}}{\ln(1 - p_{d})} = \frac{-p_{d}}{\ln(1 - p_{d})}.
$$

The number of pregnant cows that could give birth next year is the number of cows that were not darted two years ago, survived for two years, and are now at least 10 months pregnant. Because cows are distributed randomly throughout the mating cycle, the chance that a pregnant cow is within 12 months of giving birth is 12/42. The chance of a cow having survived for two years is simply $(1 - p_d)^2$. The chance that a cow was not darted two years ago is one minus the number darted two years ago divided by the number of cows that were not pregnant then. Let $C$ denote the number of cows and $P$ the number of pregnant cows. Substituting for the number of cows within a year of giving birth, we find

$$
N = (1 + p_{t}) \bar{p}_{s} \cdot \frac{12}{42} \cdot C (1 - p_{d})^{2} \left(1 - \frac{D}{C - P}\right). 
$$

We set the real number of newborns equal to the ideal number of newborns and solve for the number of dartings. This tells us the number of elephants that we should have darted two years ago. We base the number of darts to use this year on the effect that the darts had two years ago. Because the other terms are constant from year to year, we can apply the darting equation and find the number of cows to dart this year using this year's $C$ and $P$. In the case of an excess of newborns, darting increases; if too few births occur, then $D$ becomes negative, suggesting that more pregnancies are needed than the population can produce even if no cows are darted:

$$
D = (C - P) \left(1 - \frac{11{,}000\, f_{b}}{\frac{12}{42} \cdot C (1 + p_{t}) \bar{p}_{s} (1 - p_{d})^{2}}\right).
$$

For values of the parameters, we have $1 + p_t = 1.0135$ and $\bar{p}_s (1 - p_d)^2 = 0.94$. The equation gives the number of dartings needed to tend toward a steady number of newborn elephants. How does it behave? To find out, we wrote a program to trace the progress of the population over time. Each year, the number of elephants in one age group times their chance of survival becomes the number in the next age group. We replace the newborn age group with a new generation calculated as the number of pregnant elephants that gave birth in that year times the number of newborns at each birth, $1 + p_t$. Figure 3 shows the convergent, oscillating pattern that results.

![](images/6e9ac9b26c4536b28e863c90a59a8859c05c5ff8d524da1317739edefc366185.jpg)
Figure 3. Population over time.

For the first several years after we introduce the contraceptive, the population fluctuates as the model adjusts to compensate by stabilizing the birth rate. In the past, up to 800 elephants were killed every year; here the population never diverges from 11,000 by that much.

How many elephants are darted? 
While the number initially fluctuates between 0 and 2,000, it levels out to around 1,300 darts per biennial darting, or about $25\%$ of the female population.

![](images/c1ff01dffa5ef226ad31b1ab87bbc21a48c555cf7a62b2bb89ff4172ef8ca0fd.jpg)
Figure 4. Numbers of elephants darted.

We can simulate the population more accurately by keeping track of each elephant as it ages, gives birth, and dies. Instead of using average probabilities, we use random events to simulate the chaos of the real world. We also keep track of the population on a monthly instead of a yearly basis. Figure 5 shows a graph produced by our random-case simulator. The darting strategy still causes the population to converge to 11,000 after some time.

![](images/04418d6171d420cbd933c16f097309ab0947f35aefe1ba5f1f313205c2690180.jpg)
Figure 5. Population over time in the random simulation.

![](images/87a77ef781bd34ac3c6876338cf48616ea7946a9dc2e650d3f2fef8a2aaacfe0.jpg)
Histogram: Year 0

![](images/eea9185d101eb9caec18e6c01790cabb1f369b5a67810612546ea3602b277b73.jpg)
Histogram: Year 30

![](images/40fec8b29593005b7f67a2e71406f8f3ea298e039433c1c490d068eb6f2207bc.jpg)
Histogram: Year 60
Figure 6. Age distribution initially, after 30 years, and after 60 years of the darting strategy.

Over time, the age distribution tends to shift towards more very young elephants and newborns and fewer old elephants. Figure 6 shows the initial population distribution and the distributions after 30 years and after 60 years:

- Initially, there is a large number of animals between 25 and 45, and the number of newborn animals is not much larger than all the others.
- After 30 years, there are noticeable spikes in the population due to the large fluctuations that occur during the first several years of the model. 
There are large numbers of the slightly younger animals, which is good for tourism—tourists usually are attracted to cute animals; additionally, there are still large numbers of the large majestic elephants that everyone wants to see.
- After 60 years, the curve has become much more regular. The only large peak is at the baby elephants. This is the best possible situation for tourists—you can see a good representation of the whole spectrum of young and old, plus a large number of cute babies.

# Task 3: Relocation

Relocating elephants each year could make our method more successful, by reducing the number of cows that must be darted and hence the stress of monthly oestrus on females. Since we are darting every two years and relocation would remove pregnant and fertile elephants, the combination of darting and relocating has the potential for creating a population disaster; however, we can avoid such a problem by picking the right number of elephants to relocate.

A simulation of relocating 100 elephants per year gives a population graph much like Figure 7.

![](images/db03eafdf1c97c53773606ca8f08a40f67135d5361368e5f88ce3aaeba8be11a.jpg)
Figure 7. Simulation of removing 100 elephants per year, in addition to darting.

The population drops severely in the first few years but recovers. If this population drop of up to $8\%$ is acceptable, relocation seems to be a viable option. Besides looking at the effects of relocation on population over time, we can also track how many darts we save by relocating various numbers of elephants. The average-case results are summarized in Table 1.

Table 1. Darts saved by relocating.

<table>
<tr><th>Relocations per year</th><th>Darts used in 50 years</th><th>Average number of darts per darting</th><th>% of darts saved</th></tr>
<tr><td>0</td><td>29,900</td><td>1,196</td><td>0%</td></tr>
<tr><td>50</td><td>24,700</td><td>988</td><td>17%</td></tr>
<tr><td>100</td><td>20,200</td><td>808</td><td>32%</td></tr>
<tr><td>150</td><td>16,250</td><td>650</td><td>46%</td></tr>
<tr><td>200</td><td>12,750</td><td>510</td><td>58%</td></tr>
<tr><td>250</td><td>9,700</td><td>388</td><td>68%</td></tr>
</table>

Relocating more than 250 elephants a year could cause an uncontrollable population crash after only a few years.

# Task 4: Disaster Recovery

A possible worry is that darting might keep the population from recovering from a disaster, even if we immediately stop darting. We examine a number of disaster scenarios and see how our model responds to them.

- The first case is a major disaster, such as a rapidly spreading and very deadly disease that indiscriminately kills all segments of the elephant population.
- Next we consider a natural disaster, such as a drought or a famine. In such a disaster, the weakest elephants are most likely to die; these tend to be the youngest and oldest elephants in the population. To model this, we kill portions of the population that are under the age of 10, because they have not yet reached maturity, and portions that are over 50, because they are suffering from the effects of old age.
- Finally, we consider the effect of excessive hunting. Hunters seek elephants with large tusks, found on very mature elephants. Therefore, we remove parts of the population over the (arbitrary) age of 40.

In each case, we compared removing $10\%$ with removing $50\%$ of the selected population, to simulate moderate and severe disasters. In each scenario, the disaster occurs during year 10.

In the case where $10\%$ of every segment of the population dies, the population hits a minimum of 9,500 and increases fairly steadily thereafter; even for a $50\%$ kill-off, the population still recovers (Figure 8). While it might be possible to recover faster, doing so causes dangerously large oscillations once the population has returned to its normal levels. This way, the population makes a steady recovery and reaches normal levels while still remaining under control.

![](images/2d51a4b96276fe57df5b941617224e7fcf54d537870adb77bb101daacdfc76ae.jpg)

![](images/66afbfe826beb722960c404f72d53c72d0aef4861b4582b6561b44395fec006c.jpg)
Figure 8. 
Effects of moderate and severe major disasters (age groups affected equally).

After a natural disaster that kills $10\%$ or even $50\%$ of the very young and very old elephants, the recovery is faster, because the young and old are not heavily involved in reproduction (Figure 9).

If hunters kill $10\%$ or even $50\%$ of the population over the age of 40, a significant number of reproducing animals are killed, so the recovery is somewhat slower (Figure 10).

Our schedule of darting would allow the population to recover from major disasters. Assuming that such disasters occur only rarely, a park using our management policy should have no trouble with population crashes.

# Task 5: Justification to the Park Managers

You may well wonder why mathematics is useful in the task of regulating the elephant population in your park. It seems easier to follow a simple set of rules like the following:

- If there are more than 11,000 elephants, dart more than last time.
- If there are fewer than 11,000 elephants, dart fewer than last time.

![](images/b1990799030d948d243fd1949c8d08b79760590b8a51eca0160dd34e68ef0d4e.jpg)

![](images/2a78e8c2e18d563ed3edb1713b8442e3639a9e53f0d55d073c05b85f5ab4b882.jpg)
Figure 9. Effects of moderate and severe natural disasters (weakest elephants succumb).

Such a system is simple to understand but difficult to put into practice. For one thing, it is hard to decide by exactly how much to increase or decrease the number of darts. For another, changes in the number of dartings do not affect the population for another 22 months. These factors make such a system very problematic in the real world.

Suppose we tried a system of darting a certain percentage of the elephants every two years. 
If we picked precisely the right percentage, the population would appear to hold steady at 11,000 for a little while; but the fraction of the population that is pregnant would gradually change over time, and the population would go out of control faster than a fixed percentage could compensate. This result can be shown using a simple computer simulation of the population over time.

A better goal than keeping the population constant is keeping the number of elephants born each year constant. Since the rate at which elephants die does not change much, keeping the number of births constant should eventually give a constant number of elephants. Based on elephant birth and death statistics for a healthy herd, we can adjust a healthy population of around 11,000 elephants to a state of equilibrium. By counting the elephants that are less than one year old, we get a good idea of how many elephants were born last year. Dividing by the total number of elephants gives the fraction of the total population that must be born each year to keep the population stable.

![](images/5bf178c6303973c29fc7f5a659f570f96a46784196bd71705460c2432f38886e.jpg)

![](images/c51870bf1aba458f44f81b2eccab9d749aad2fa2816e9e600237bdded01b51c9.jpg)
Figure 10. Effects of moderate and severe hunting.

We constantly have to readjust the number of dartings based on the effect on the future population, for which we provide a formula. We have tried this formula in several simulations and found it extremely adaptable and effective. Its strength lies in the fact that it was derived using sound reasoning; any darting method that does not use mathematics is little better than a wild guess and will not produce satisfactory results. If you use a mathematical model to control your elephant population, you will be satisfied by the long-term behavior of the population. As always, there will be some random fluctuation, but this model provides an effective solution. 
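The darting formula derived in Task 2 can be transcribed directly. The parameter values $1 + p_t = 1.0135$ and $\bar{p}_s (1 - p_d)^2 = 0.94$ are those quoted above; the herd numbers in the example call are hypothetical, chosen only to show the formula's scale.

```python
# The darting rule D = (C - P) * (1 - 11000*f_b / ((12/42)*C*(1+p_t)*ps2)),
# transcribed from Task 2. ps2 stands for the combined survival factor
# \bar{p}_s * (1 - p_d)^2 = 0.94 quoted in the text.

TARGET = 11_000


def dartings(cows, pregnant, f_b, p_t=0.0135, ps2=0.94):
    """Number of non-pregnant cows to dart this cycle.

    A negative result means that even with no darting the herd cannot
    produce the target number of newborns.
    """
    births_per_cow = (12 / 42) * (1 + p_t) * ps2
    return (cows - pregnant) * (1 - TARGET * f_b / (births_per_cow * cows))


# Hypothetical example: 2,750 cows of reproducing age, 600 of them pregnant.
d = dartings(cows=2750, pregnant=600, f_b=0.025)
```

With these made-up herd numbers the rule prescribes on the order of 1,300 to 1,400 dartings, the same scale as the biennial figure reported in Task 2.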
+ +# Task 6: Generalization + +We show that in many cases we can use our model for other parks with different needs. + +A key aspect of making our model work is finding an acceptable $f_{b}$ (the fraction of the population that are newborns) for each target population and set of conditions; this fraction is derived from survivorship data for the individual + +park's population. Our method forces convergence to the target population. + +Suppose that a park has similar conditions but that the death rate among newborn elephants is .35 and the park aims for 25,000 elephants; we find $f_{b} = 0.046$ . This makes sense—the value of $f_{b}$ must be higher to compensate for the higher death rate, which means that a greater proportion of the population must be newborns in order to maintain the stability of the population. + +As a second example, consider a park with a target population of 300 and an infant death rate of $15\%$ . In this case, $f_{b} = 0.013$ —smaller, to compensate for a smaller infant death rate. + +Any park with reasonable values for death rates and ideal number of animals should be able to work under this system. + +# Sensitivity Analysis + +For a model incorporating as many parameters as this one does, it is vital to determine which introduce the greatest error. Given a $\pm 10\%$ deviation in the value of the parameter, we calculate the percentage change in the value that the final system converges to. Table 2 summarizes the parameters that have significant effects; the model is fairly insensitive to the values of other parameters. + +Table 2. Sensitivity of the model to changes in parameters. + +
<tr><th>Variable</th><th>From data</th><th>+10%</th><th>Equilib. herd size</th><th>% diff.</th><th>-10%</th><th>Equilib. herd size</th><th>% diff.</th></tr>
<tr><td>Newborn survival rate</td><td>.75</td><td>.825</td><td>16,200</td><td>47%</td><td>.675</td><td>7,200</td><td>-35%</td></tr>
<tr><td>f<sub>b</sub></td><td>.0255</td><td>.02805</td><td>12,100</td><td>10%</td><td>.02295</td><td>9,935</td><td>-10%</td></tr>
</table>
+ +It is vital to know accurately the newborn survival rate, since the final population is so dependent on this value. + +# Strengths + +- Our methods keep the elephant population under control, which is the main point. The population converges to the ideal number of elephants in a reasonable time. +- Our methods can incorporate various scenarios: contraceptive darting, relocation, compensation for disasters, and application to other similar parks. +- This model is simple enough for the park rangers to understand. +- Our method can produce accurate predictions with very little computer time. + +- Our method is robust, so that other variables or situations can be easily introduced. +- After the first five years, under normal conditions the population does not deviate more than 200 elephants from the target value. + +# Weaknesses + +- Our model is somewhat involved, and predictions cannot be generated without a computer. +- The population does not stabilize at exactly 11,000. +- The model responds slowly (though surely) to dramatic changes in the population. +- The method does not allow the relocation of more than 250 elephants per year, which might be possible with a more radical model. + +# Conclusion + +Keeping a dynamic system like an elephant population under control is a very old and difficult problem. It is made more difficult by the long life spans and steady reproductive rates of elephants. We have developed a system that is more humane and more adaptable than simply killing off excess elephants. + +# References + +African Wildlife Foundation. 1998. Wild lives—Elephant. http://www.awf.org/animals/eleph.html. Accessed 5 February 2000. +Hanks, John. 1979. A Struggle for Survival: The Elephant Problem. Cape Town, South Africa: C. Struik. +Holloway, Marguerite. 1994. On the trail of wild elephants. Scientific American 271 (October 1994) (6): 48-50. +Purdy, Judy Bolyard. 1998. Contraceptive Safari. 
http://www.ovpr.uga.edu/rcd/researchreporter/spring98/elhant.html. Accessed 4 February 2000. + +# EigenElephants: When Is Enough, Enough? + +David Marks + +Jim Sukha + +Anand Thakker + +North Carolina School of Science and Mathematics + +Durham, NC + +Advisors: Dot Doyle and Dan Teague + +# Statement to the Park Management + +We develop a system that uses contraceptive darts as the primary method for elephant population control. This method provides a practical alternative to expensive relocation and unpopular culling. Using a statistical model to simulate the changes in the elephant population from year to year, we determine a darting plan that effectively brings the elephant population down to a stable total population of about 11,000, the park's desired target. + +Theoretically, this model should accurately predict the structure and size of the elephant population based on the information provided to us about the elephants, such as birthrates, reproductive activity, and life span. Although we had to determine the elephants' survival rates from a rather small sample of data, the survival rates that we determined matched the general information provided. If more accurate survival rates can be found, the model can be adjusted easily by changing a few parameters. + +Additionally, we generalize our model to an adaptive darting method that accounts for random fluctuations due to varying survival rates and birth rates, as well as such external influences as immigration, emigration, and poaching. Thus, despite lack of conclusive data, the darting method will effectively control the population even with the random variations introduced by nature. + +This method involves the following basic procedure: + +- From a survey of the population, determine the approximate population size, age structure, and survival rates. We estimate these from the sample data provided. 

- Feed these data into the mathematical model and from it obtain the initial "dosage" (percentage of females to be darted).

- If relocation of elephants is not a viable option, the base dosage for your park is $57\%$.
- If it is possible to relocate about 50 to 300 elephants every year, then only $31\%$ of the reproducing female population needs to be darted.

The females to be darted can be chosen at random, but measures should be taken not to dart the same female twice, nor females too old or too young to reproduce, as this would reduce the effective proportion of females treated by the contraceptive. Once darting is complete, it is not necessary to track which individuals have been darted, as darting will not be done again until the current dosage wears off, two years later.

- Every two years, count the population and apply the simple formula given in the technical report. We also provide a separate formula for use if the removal of 50 to 300 elephants per year is anticipated.

Under ideal conditions, the park would continue to use the same initial darting plan. However, the population will naturally experience some deviations from the ideal. When surveys show fluctuations in the population, the provided formulas supply the new proportions needed to correct for the deviations.

Our model also tested the survivability of a population after the elimination of a large proportion of elephants. A large natural disaster or widespread disease might cause such a drop in population. Our tests show that when $80\%$ of the population is killed and survival rates are reduced by $30\%$ for the next 10 years, there is a statistically significant difference between how quickly the population recovers with and without contraception. However, the darted elephant population still rebounds if darting is stopped, though with a small lag time. 
+ +Concern expressed over the validity of the modeling process, especially when the initial data are not completely accurate, is reasonable given the levels of uncertainty that we are working with. However, no matter what method is used for population control, one must have a relatively good idea of the population structure. Our simulations show that our darting plan is flexible and can accommodate variability or inaccuracies in the initial data. This suggests that our model does not depend as heavily on the initial population structure as other methods of population control might. Of course, the best advice we have for increasing confidence in our model is to collect more data. This would provide the most conclusive evidence for the model's accuracy. + +# Assumptions + +- The number of elephants relocated in the past two years is representative of the actual age structure of the current elephant population. One common practice is to relocate entire family units of elephants at once, which would be generally consistent with this assumption. +- The population in the park never differs greatly from the stable state of the population. +- Elephants mate and give birth at a uniform rate throughout the year. +- The population is sufficiently large that we can compute all relevant quantities concerning the population probabilistically. +- The gestation period can be taken to be two years (as opposed to the given twenty-two months). +- The survival rate within ten-year-wide age groups is roughly uniform. + +# Analysis of the Problem + +The nature of this problem suggests that the population should be modeled by a system of difference equations. The data provided by the park are presented in terms of a discrete age distribution. Since the duration of the darts' effectiveness is given in terms of years rather than a fraction thereof, it is appropriate to approach the problem in terms of a discrete time step, namely $\Delta t = 1$ year. 
This time step sets iterations at one per year and also stratifies the population into cohorts (age groups) of elephants born in the same year. + +Given that all elephants die by the age of 70, the problem is reduced to 70 difference equations, one for each age cohort. Such a system is most naturally represented in terms of a matrix equation + +$$ +P _ {n + 1} = T P _ {n}, +$$ + +where $P_{n+1}$ and $P_n$ are column vectors with 70 rows, in which the $i$ th element represents the number of elephants of age $i$ . The matrix $T$ is $70 \times 70$ , and each of the elements in the $i$ th row is a coefficient in the $i$ th difference equation. The matrix representation has a powerful advantage over the system of difference equations: $T$ can be manipulated (e.g., by darting) so that it has an eigenvalue of 1, which corresponds to a stable population and age structure. + +A matrix $A$ with eigenvalue $\lambda$ (a scalar) and eigenvector $x$ has the property that $Ax = \lambda x$ . For a general population vector $P$ , as $n \to \infty$ , we have $A^n P$ approaches $x$ or some scalar multiple of $x$ . The convergence is especially fast if $P$ is initially somewhat similar to $x$ , although small variations of $P$ from $x$ can cause $P$ to converge to a scalar multiple of $x$ instead of to $x$ itself. This + +relationship suggests the solution to the dilemma of stabilizing the elephant population: If $T$ is manipulated through darting so that it has an eigenvalue 1, then as it is applied to the population of elephants over time, $P$ will converge to the eigenvector; that is, the population will stabilize. + +# Determining the Transition Matrix + +To determine the elements of the matrix equation, consider the structure of the difference equations. The first reduction in the magnitude of the problem is to consider only female elephants. 
Given that the sex ratio is "very close to 1:1" for adults as well as for newborns, we can consider only females, knowing that the full population can be determined simply by multiplying by two. Hence, the sum of the elements of the $P$ vector should be close to 5,500. The first element of the $P$ vector counts the newborn elephants, age 0. The size of this stratum at iteration $n+1$ depends only on the number of reproducing females. The difference equation for the newborn elephants is then

$$
\left(P_0\right)_{n+1} = \sum_{i=10}^{60} p_i \cdot \left(P_i\right)_n,
$$

where $p_i$ is the probability that an elephant in the $i$ th age group has a calf that year and $(P_i)_n$ is the $i$ th element of the $n$ th iteration of $P$ ; that is, $(P_i)_n$ is the number of elephants in the $i$ th age group in the $n$ th year of iteration. The value of each of the remaining elements in the $P$ vector is determined only by the number of elephants in the previous stratum that survive into that year. This can be written as

$$
\left(P_i\right)_{n+1} = s_{i-1} \cdot \left(P_{i-1}\right)_n,
$$

where $s_i$ is the probability that an elephant of age $i$ will survive until the next year. This suggests that $T$ is of the form

$$
T = \left( \begin{array}{llll} 0 & p_1 & p_2 & 0 \\ s_0 & 0 & 0 & 0 \\ 0 & s_1 & 0 & 0 \\ 0 & 0 & s_2 & 0 \end{array} \right),
$$

but much larger $(70 \times 70)$ . Further simplification is desirable.

If $P$ is "close" to an eigenvector, then with each iteration the same numbers of elephants grow into the next age level as they did the previous year; that is, if $P$ is nearly stable, then the population structure should remain relatively constant from year to year. This also means that a larger stratum, say 10 years, has a predictable age distribution. 
Namely, if a stratum has $c$ elephants growing into it every year with a constant survival rate $s$ over the stratum, then the total number of elephants in the stratum is

$$
N = c \left(1 + s + s^2 + \dots + s^n\right),
$$

where $n + 1$ is the width of the stratum in years. The proportion of elephants growing out of the stratum into the next stratum each year is given by

$$
\mathrm{Growth} = \frac{c s^n}{c (1 + s + s^2 + \cdots + s^n)} = \frac{s^n (1 - s)}{1 - s^{n+1}}.
$$

Thus, in the steady state, several years of elephants can be grouped together without any loss of information. For the purposes of further discussion, we assume (and verify later) that $P$ is indeed sufficiently close to the eigenvector, and thus we collapse the elephant population into 8 strata, the newborns plus one for each decade up to age 70. The $T$ matrix, now only $8 \times 8$ , is of a slightly different form, namely,

$$
\left( \begin{array}{cccccccc} 0 & 0 & p_2 & p_3 & p_4 & p_5 & p_6 & 0 \\ s_0 & s_1 (1 - g_1) & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & s_1 g_1 & s_2 (1 - g_2) & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & s_2 g_2 & s_3 (1 - g_3) & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & s_3 g_3 & s_4 (1 - g_4) & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & s_4 g_4 & s_5 (1 - g_5) & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & s_5 g_5 & s_6 (1 - g_6) & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & s_6 g_6 & s_7 \end{array} \right),
$$

where $g_{i} = \left[s_{i}^{n}(1 - s_{i})\right] / \left[1 - s_{i}^{n + 1}\right]$ is the proportion of elephants that move out of stratum $i$ and into stratum $i + 1$ each year. It remains to determine the survival rates $s$ , the birth probabilities $p$ , and the initial population vector $P$ .

# Current Age Structure

To determine the survival rate as a function of age, we look to the data provided by the park. 
From the past two years, we have the sex and approximate ages of the elephants transported out. Under the assumption that elephants were removed fairly uniformly, we take these data to be an accurate representation of the park's overall elephant population. We extrapolate from these data the age distribution of the elephants in the park. + +We assume that the elephant population is reasonably stable, in particular, that the overall age distribution of the population is the same for both years. Additionally, since the sex ratio is "very close" to 1:1, we can treat a distribution of one sex as representative of the population's age distribution. We have four samples, one from each sex for each year; we combine these four samples to obtain the relative frequency of elephants at each age (which we scale such that the total number of elephants is 11,000, the park's total population). This distribution is shown in Figure 1. + +![](images/f862eb3e918218e4ffdcf134d894c6945c960aafe8b131bd878a3e84cac810b0.jpg) +Figure 1. Projected age structure. + +# Survival Rate + +In addition to the age distribution, we are also interested in determining the survival rate of elephants in a given cohort. We begin by following a cohort over time, plotting the number of survivors each year. For a fairly stable population, this relationship is identical to the age distribution. That is, if a population is stable, the age structure is not changing significantly; so as a cohort ages, its size must change to fit the population's age structure. + +Since the age structure and cohort survivor data are nearly the same, we use the previously determined age structure to determine the elephants' survival rates as well. Survival rate is defined as the probability that an elephant at a given age survives to the next year. For example, if an elephant has a survival rate of $75\%$ at age 0, the probability that it survives for the next year is 0.75. 
For large mammals such as elephants, we expect the survival rate in the middle of an elephant's life cycle to remain relatively independent of age while being much lower in very young and very old elephants. Since the number of elephants surviving from one year to the next is proportional to the current cohort size, we expect the age-structure data to be exponential. The graph in Figure 1 verifies this prediction.

We plot the natural log of the number of elephants versus age (Figure 2) and fit lines through the four major sections of the data, from which we determine the survival rates for four major sections of the population (Table 1).

![](images/29dea442d0cd21df5e75a5c237b30f45b832d3338c2491483a91dc643095f516.jpg)
Figure 2. Natural log of elephant data vs. age.

Table 1.
Elephant survival rates.

| Age Group (years) | Survival Rate (% / year) |
|-------------------|--------------------------|
| 0–1               | 75                       |
| 1–50              | 98                       |
| 51–60             | 96                       |
| 61–70             | 82                       |
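The steady-state bookkeeping above can be checked numerically: simulate a single stratum with a constant yearly inflow and compare the equilibrium fraction in its oldest age class against the closed-form graduation fraction $g = s^n(1-s)/(1-s^{n+1})$. A minimal sketch, where the inflow $c$ and the particular $s$ and $n$ are illustrative values rather than data from the paper:

```python
def graduation_fraction(s, n):
    # closed form: fraction of the stratum occupying its oldest age class,
    # i.e., the proportion that grows out of the stratum each year
    return s**n * (1 - s) / (1 - s**(n + 1))

def simulate_stratum(s, n, c=100.0, years=400):
    # ages[k] = elephants that entered the stratum k years ago and survived;
    # the stratum spans relative ages 0 through n
    ages = [0.0] * (n + 1)
    for _ in range(years):
        for k in range(n, 0, -1):    # age everyone one year (oldest grow out)
            ages[k] = s * ages[k - 1]
        ages[0] = c                  # constant inflow of new members
    return ages[-1] / sum(ages)      # proportion about to grow out

# the simulated equilibrium matches the closed form
s, n = 0.98, 9   # e.g., a 10-year stratum at the 98%/year survival rate
assert abs(simulate_stratum(s, n) - graduation_fraction(s, n)) < 1e-9
```

Because the inflow is constant, the simulated stratum settles onto the profile $c, cs, \dots, cs^n$ after only $n$ iterations, so the agreement is essentially exact.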

# Probabilities of Birth

The information provided by the park suggests that the reproductive rate is constant over all reproducing age groups except the teenage group, where not all of the elephants are reproducing. On average, a cow produces a calf every 3.5 years, with twins occurring $1.35\%$ of the time, indicating that the probability of any given female producing a female calf in a given year is $0.5(1.0135) / 3.5 \approx 0.145$ . For the teenage group, we assume that about one-third begin conceiving when they are 10, another third when they are 11, and the remaining third when they are 12. However, since it takes about two years from conception to birth, these elephants do not actually have young until they are 12. Taking all this into account, the $p$ value for the teenage stratum is 0.7 times that of the other groups. This completes the matrix $T$ (before we start to consider darting).

# Introducing the Contraceptive Dart

If the park management darts female elephants at random, then we can assume that the same proportions of each reproducing stratum are sterilized for the two-year period. Hence, if the management keeps some proportion $1 - q$ of the population sterile in any given year, then the transition matrix is the same as $T$ above except that the first row has a factor of $q$ in front of the probabilities of birth. The parameter $q$ is the proportion of females reproducing, and this is the value that can be altered to allow $T$ to have an eigenvalue of 1. For an eigenvalue of 1, $T$ must have the property that the determinant $|T - I|$ is 0, where $I$ is the identity matrix; since $q$ enters only the first row of $T$ , this condition is a linear equation in $q$ . Once $q$ is known and the appropriate $T$ matrix is constructed, an eigenvector can be found quickly. This allows for speculation as to the desirable steady-state elephant population. 
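Concretely, the stability condition can be solved numerically: build the $8 \times 8$ matrix $T(q)$ from the Table 1 survival rates and the birth probabilities above, and find the $q$ at which the dominant eigenvalue equals 1. The sketch below is our reading of the model, not the authors' code; in particular, we take $n = 9$ in the graduation formula for the 10-year strata, so the root found here need not match the paper's reported value exactly:

```python
import numpy as np
from scipy.optimize import brentq

def g(s, n=9):
    # yearly graduation fraction for a stratum spanning relative ages 0..n
    return s**n * (1 - s) / (1 - s**(n + 1))

def T(q):
    # strata: 0 = newborns, 1..7 = decades 1-10, 11-20, ..., 61-70
    s = [0.75, 0.98, 0.98, 0.98, 0.98, 0.98, 0.96, 0.82]  # Table 1 rates
    p = 0.145                 # female calves per reproducing cow per year
    M = np.zeros((8, 8))
    M[0, 2] = q * 0.7 * p     # teenage stratum reproduces at 0.7 times p
    M[0, 3:7] = q * p         # strata 21-30 through 51-60
    M[1, 0] = s[0]            # newborns surviving into stratum 1
    for i in range(1, 7):
        M[i, i] = s[i] * (1 - g(s[i]))   # survive and stay in stratum i
        M[i + 1, i] = s[i] * g(s[i])     # survive and graduate to i + 1
    M[7, 7] = s[7]
    return M

def dominant_eigenvalue(q):
    return max(abs(np.linalg.eigvals(T(q))))

# proportion of females left reproducing that makes the population stable
q = brentq(lambda x: dominant_eigenvalue(x) - 1.0, 0.0, 1.0)
```

With this encoding of the parameters, $q$ lands broadly in the neighborhood of the paper's $43\%$; the exact value shifts with how the stratum widths and the teenage factor are read.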
+ +# The Ideal Solution + +Using the values for $s$ and $p$ determined above, the proportion $q$ of females that should be kept reproducing to keep the population stable is about $43\%$ , that is, $57\%$ of the females should be on contraceptives. The appropriate eigenvector associated with a population of 5,500, as well as the extrapolated population estimated from the given data, are shown in Table 2. + +Table 2. +Eigenvector and extrapolated population. + +
| Cohort                  | 0   | 1–10 | 11–20 | 21–30 | 31–40 | 41–50 | 51–60 | 61–70 |
|-------------------------|-----|------|-------|-------|-------|-------|-------|-------|
| Eigenvector             | 207 | 1422 | 1162  | 950   | 776   | 581   | 221   | 181   |
| Extrapolated population | 222 | 1396 | 1185  | 1077  | 833   | 615   | 144   | 27    |
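The agreement between the two rows of Table 2 can be quantified directly by treating them as vectors and computing the angle between them; a minimal sketch:

```python
import math

# rows of Table 2: stable eigenvector and extrapolated population
ev = [207, 1422, 1162, 950, 776, 581, 221, 181]
ep = [222, 1396, 1185, 1077, 833, 615, 144, 27]

dot = sum(a * b for a, b in zip(ev, ep))
cos_theta = dot / (math.hypot(*ev) * math.hypot(*ep))
theta_deg = math.degrees(math.acos(cos_theta))   # about 5.3 degrees
```

Note that the eigenvector row sums to exactly the 5,500 females assumed for the park.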
A measure of the difference of the estimated population from the eigenvector is the cosine of the angle between them. Using the dot product, the cosine of the angle between the eigenvector $EV$ and the extrapolated population $EP$ is

$$
\cos \theta = \frac {EV \cdot EP}{\sqrt {EV \cdot EV} \sqrt {EP \cdot EP}}.
$$

For the extrapolated initial population, we get $\theta = 5.3^{\circ}$ , so the initial population is indeed already close to the eigenvector, and our approximation using 8 strata instead of 70 is valid.

Having set $q$ so that $T$ has an eigenvalue of 1, we repeatedly left-multiply the population vector by $T$ . It takes two years for the contraceptive plan to begin to work, due to the two-year gestation period. Using the extrapolated population vector from Table 2 and assuming that the two-year lag period has already passed, Figure 3 shows the model's prediction of the population over 60 years.

![](images/708d7125b7c8e4641ae21041a80df46fff0c023b68f0e747e56234890699f63b.jpg)
Figure 3. Elephant population over a 60-year period.

The planned contraceptive rate does cause the population to converge, but it takes a long time. Additionally, the population converges to a higher multiple of the desired eigenvector, with the sum of all the female elephants at about 5,700.

The solution to this dilemma is not to use just a single darting plan. Rather, if a more aggressive contraceptive program is used for the first 15 years, the eigenvalue of the matrix decreases. If the eigenvalue is less than 1, the population eventually begins to drop as it converges to a population profile that shrinks with each iteration (although initially there may be an increase). Once the population has dropped sufficiently, switching to the original darting plan causes convergence to a population distribution with the total number of elephants closer to the desired number (Figure 4).

![](images/a1fc931ce5e0b0e572c2b6f7118a700a0bd1a0ab11ddffcdeb2f052a1effb236.jpg)
Figure 4. Elephant population over 60 years with the two-phase contraceptive plan.

One particular solution is initially to inoculate about $60\%$ of the population instead of $57\%$ . This small change in the initial 15 years leads to a less variable convergence, in which the total elephant population is never more than 100 elephants from the desired 11,000.

# The Adaptive Solution

The complication with the two-phase solution is that the exact inoculation values are very dependent on properties of the matrix itself, namely, the survival rates and birth probabilities. Also, more substantial random perturbations in the population, caused by such phenomena as immigration, emigration, and poaching, can make a static inoculation plan ineffective. Thus, it makes sense to develop a plan that depends on how the population is reacting. If the park can determine the amount of darting needed in the ideal case, then, using that value as a base, the park can adjust the actual number of inoculations as required by year-to-year changes in the population.

Given a population at year $n$ , the park would like to know what proportion of females to inoculate. However, changing the proportion of elephants inoculated in year $n$ does not have an effect on the birth rate until year $n + 2$ , because of the two-year gestation period. In addition, the exact proportion inoculated cannot be adjusted every year independently of the previous year, as the effect of the dart lasts two years.

Thus, there are two possible plans:

- Dart every year, allowing management to raise the levels whenever necessary. If the contraceptive rate ever needs to be substantially lower, there will be a one-year lag before a new darting regimen has an effect.
- Dart only every other year, meaning that the management refrains from raising the levels during the off years; but if the dosage needs to go down, there is probability one-half that it can occur immediately. 

Both plans have advantages, but the second plan uses substantially fewer darts and will in general be cheaper and require less work to implement.

As the female population changes from 5,500, either due to an eigenvalue not being 1 or to natural perturbations, the number under contraception must be adjusted. The desired proportion $p(N)$ sterile as a function of population $N$ should have the properties that when $N = N_0 = 5{,}500$ , $p(N_0) = p_0$ (the value necessary to give the projected $T$ matrix an eigenvalue of 1), and $p(N)$ decreases as $N$ decreases and increases as $N$ increases. To allow for ease of generalization, the amount that $p$ varies should depend on the percentage difference between $N$ and $N_0$ : The effect should be small for small differences in $N$ but should grow quickly enough to constrain $N$ if changes in $N$ are too great. A linear relationship grows too rapidly, suggesting a natural logarithm function of the form

$$
p(N) = p_0 + c\, \mathrm{sgn} \left(\frac {N - N_0}{N_0}\right) \cdot \ln \left(\left| \frac {N - N_0}{N_0} \right| + 1\right),
$$

where $c$ is a constant that determines how reactive the darting is to changes in the population. We find that values of $c$ ranging from 3 to 5 work well in keeping the population stable (see the Appendix). We also find from our simulations that constraining $p$ between some upper and lower bounds increases the stability of the population; a reasonable constraint for $p$ is $0.3 < p < 0.7$ .

# Contraception with Relocation

If the park managers have the option of removing some elephants in addition to darting, the nature of the problem changes a bit. Consider a population where some number $m$ of elephants are to be removed. If they are taken equally from the various strata, then the transition matrix $T$ should be constructed so that it has an eigenvalue $1 + m / N$ . 
This again will allow for a steady state, as each year there will be $m$ more elephants but that many will be relocated. If $m$ is constant, then the problem is solved, as the solution is exactly the same as before, with a different value for $p$ , namely $31\%$ , which is much lower than the amount of contraceptive use needed otherwise.

# A Disaster

A warranted concern regarding darting is what happens immediately following a natural disaster. We examine the effects by evolving a population for 25 years and then simulating a large natural disaster. This disaster kills $80\%$ of the population and is then followed by a $30\%$ reduction in the survival rates for 10 years. Figure 5 shows the rebound of a population that begins to reproduce immediately, while Figure 6 is for a population that must first go through a lag period due to the contraceptives. While the random effects cause some variations depending on the simulation, the overall trend is that the population without contraceptives bounces back faster. Based on 10 simulation runs, the mean population at year 120 with no contraceptive use is 1,246 (SD = 331), while it is only 1,009 (SD = 286) for a group on contraceptives.

![](images/9a30c9474f1d4cc102fdc6bb97f854177a87901235b58335cb8b738cea16aa53.jpg)
Figure 5. Effect of a disaster on an undarted population.

![](images/0825c4ccecf181d9475c29af1bf58f5283bb1a5d178a095523418cbec602dd9e.jpg)
Figure 6. Effect of a disaster on a darted population.

A one-sided $t$ -test of the difference, with $\mathrm{df} = 17.62$ , gives a $P$ -value of 0.052, on the border of significance at the $5\%$ level. The opponents of darting may be correct in their concern that darting impedes the elephants' ability to grow back. However, the elephant population will still return, if at a slightly slower rate. Controlling the population without culling elephants seems to justify the risk. 
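The adaptive rule $p(N)$ described in the Adaptive Solution section reduces to a few lines of code. A sketch using the paper's constants ($N_0 = 5{,}500$, bounds $0.3 < p < 0.7$) with illustrative choices $p_0 = 0.57$ and $c = 4$; the function name is ours:

```python
import math

def sterile_fraction(N, N0=5500.0, p0=0.57, c=4.0, lo=0.3, hi=0.7):
    # logarithmic response to the relative deviation from the target
    # population, clamped to the bounds suggested by the simulations
    x = (N - N0) / N0
    p = p0 + c * math.copysign(1.0, x) * math.log(abs(x) + 1.0)
    return min(hi, max(lo, p))
```

At the target population the rule returns $p_0$; with $c = 4$ even a deviation of ten percent or so drives $p$ to a bound, which is why the clamp matters in practice.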
+ +# Generalizations + +The key to stabilizing a different population at another park is to find the survival rates and birth probabilities, then determine the right value of $p$ that allows $T$ to have an eigenvalue of 1. This process is entirely independent of the actual size of the population, depending on only its age distribution. For small populations, the approximation used to simplify the matrix from $70 \times 70$ to $8 \times 8$ may begin to break down, but that can be fixed by simply expanding the matrix, which requires just more computer time. + +# Strengths and Weaknesses + +# Weaknesses + +- Our model for survival rates and age structure depends heavily on the elephant removal data. If these data are not representative of the overall age distribution, then the final population that the model predicts may deviate slightly from the actual value. A more meaningful conclusion on the current age structure cannot be obtained without additional data. +- Our transition matrix considers elephants in 10-year age groups rather than 1-year cohorts. This simplification greatly reduces the size of the transition matrix and allows for quicker calculations, but the approximation may introduce slight inaccuracies. The inaccuracies grow if the population distribution is drastically different from the ideal distribution. +- Elephant populations are discrete quantities, but we approximate them with continuous values. For extremely small populations, this approximation may no longer be valid, especially in the older age groups where there are already very few elephants. +- Our initial model, without adjusting the level of contraceptive darting, is sensitive to changes in survival rates; different values for those can cause the population to converge to a different final value. The modified model that makes adjustments to the level of darting is more capable of handling slight changes in survival rate, but a significant change can still alter the final results. 

# Strengths

- The final model handles small random fluctuations in the population quite well. These fluctuations add a reality check because they reflect possible error in the park managers' estimate of the population size. The population remains within a reasonable interval around the ideal population, which means the model is not very sensitive to variations in population size.
- The model considers the possibility of some elephants being relocated each year. Relocation, when feasible, is a preferred method of population control, but the model does not depend on this possibility.
- Our model can be modified easily to accommodate other parks with different populations and survival rates.

# Appendix: Simulation of Adaptive Darting

We determine useful values for $c$ . After each iteration of the simulation, each element in $P$ is multiplied by a random number $\epsilon \sim N(1, 0.01)$ . This introduces an effect on the order of 1–2% variation from the predicted value. These effects can be due to errors in the matrix, elephant movements, poaching, or other random effects. We find that effective values for $c$ range from 3 to 5, giving the parks much flexibility in estimating how much the female population deviates from the desired 5,500.

However, due to the lag involved in the contraceptive's effects, the population meanwhile may deviate farther from 5,500. Placing upper and lower limits on the contraceptive dosage minimizes this problem. Just as with the values for $c$ , the simulation does not change too greatly with different values for the upper and lower bounds on the contraceptive dosage $p$ ; a reasonable range seems to be $0.3 < p < 0.7$ . 
The graphs shown in Figure A1 are eight consecutive random trials, the first four with constraints, the next four without. With the exception of the last simulation with constraints, the constraints restrain variability a little but not a lot, suggesting that the park need not worry about exact calculations. Randomness does keep the model from converging exactly. The advantages are that this plan can be started immediately and does not rely on perfectly uniform natural conditions.

![](images/3901b90ce9a57ba187d029d015c8e10523aaead3710a5b5cebc7808d7bf2973c.jpg)

![](images/962b546a9b56646091beeedc664b103b2e857d06d1cbfe23a57e5b8b6da7ce05.jpg)

![](images/895d6eb4bcec2ae738d01f83fe50e97932d717a29d790486b537bfa7ea54064b.jpg)

![](images/c6498ffa0301e9c2f7fd3e2b87db8ac4583c0ae2ad0230622776f9c487babcf8.jpg)

![](images/de62ae94a6c8c56f99f5ddfaadaeee80c259da96b2efdffae66478dd11884b4d.jpg)

![](images/0d3e0714a45317ee0319794a8a6541dcd4f85d7d3b8ae503ddc9080b37a10d12.jpg)

![](images/845f7c782eb8b8c939fad5c67238e429cc3cadb0f173e97046b4d3cb9b7beeef.jpg)
Figure A1. Adaptive contraceptive use with random fluctuations.

![](images/af2c37b8a3a556891cbcdd65888c250a18441fdcd6f7f46749d95a306e8ea89d.jpg)

# Judge's Commentary: The Outstanding Elephant Population Papers

Gary Krahn

Department of Mathematics

U.S. Military Academy

West Point, NY

ag2609@exmail.usma.army.mil

# Introduction

The judges were very impressed by the breadth of insight revealed by the "modelers" in this, the second year of the ICM Contest. Each of the six required tasks in the problem was individually weighted; however, papers were ultimately evaluated on their overall effectiveness in formulating a policy that would solve the overpopulation problem and create a healthy environment for a herd of 11,000 elephants. 

# The Problem

At first glance, the information and data provided in the problem statement appear to be sufficient to construct a model to capture the population growth of the elephants under specific control measures. This problem, however, was not clear-cut. As the contestants formulated and refined their assumptions, they confronted the complexities typically associated with an open-ended problem. The initial task was to develop and use a model to investigate how the contraceptive dart might be used for population control. The modelers, however, did not know the initial age structure. Nor were they privy to the implementation procedures of culling, relocation, and darting, or to the cultural issues surrounding population control.

# The Science

Papers were evaluated based on an understanding and application of science, model development, and analysis. Knowledge of the elephant's life-cycle and survival characteristics was important in the design of an appropriate solution to this problem. Environmental science provided the modelers with the foundation to make "realistic" and appropriate assumptions. More important, it gave teams confidence in the data provided and in the results of their models. In essence, understanding the science of the biological and environmental data transformed this problem into a real-world application. Many teams, through research, verified that after 5 years of age, elephants live in relative safety—there is a low rate of terminal diseases, accidents are rare, and there are few natural predators. In addition, they discovered that deaths of elephants often occur after 60 years of age because of eating complications. Teams that gained an understanding of biological issues affecting elephants also achieved better "control" of the information in the problem statement. As a result, the top interdisciplinary teams were able to find insights into the significant parameters that influenced the elephant population growth. 

# The Model

Some teams constructed an analytic model, some used population models found in the literature, and others developed simulations to replicate the real-world behavior. All teams used some simplifying assumptions to reduce the scope of the problem. The judges thought it was important to keep the assumptions reasonable and to avoid making unnecessary assumptions. Several teams examined (or constructed) different population models simultaneously to verify their work and to gain perspectives on how to adapt established modeling techniques for this particular problem. A few teams effectively simplified the problem by modeling only the female population. More than a few teams used several solution techniques (the Leslie matrix and a simulation) and compared the results. Other teams constructed models that captured the dynamics of individual elephants and compared them to models that grouped elephants into categories. The judges were heartened by the number of teams that attempted to validate the models.

In Task 4, the modelers were asked to investigate the effect of disease and uncontrolled poaching after darting. Teams that used several modeling approaches almost always discovered that the population would oscillate for several years after a dramatic population change and then would recover. Simulations were effectively used to reveal this phenomenon, and graphical techniques were able to display this result very clearly.

# The Analysis

The mathematics needed to explore the dynamics of the population growth was not sophisticated. Difference equations, discrete dynamical systems, differential equations, transformation matrices (Leslie model), and computer simulations were applied with great success by many of the 69 teams in the contest. Some teams examined the long-term behavior of the system using the eigenvalues of the transition matrix—a very nice application of matrix algebra. 
The judges were delighted that the top teams discovered that their results were insensitive to assumptions about the initial age structure. Computation and calculation were not the most important features for successfully solving this problem. The reasoning process, modeling, and problem solving were of much greater importance.

# Interdisciplinary

Again, the characteristics of a strong paper were knowledge of environmental science and resource management, application of valid modeling concepts, and terminology that explained the analysis, outcomes, and recommendations. The top papers not only conducted a thorough analysis but also shared their method of reasoning in sufficient detail. The problem statement revealed that park officials were very skeptical about mathematical modeling. Therefore, it was essential to outline the modeling procedures and the implications of any assumptions. The analyst's credibility (essential for the eventual implementation of the model) could be enhanced by revealing knowledge about the elephant life-cycle, discussion of the advantages and disadvantages of models/simulations, and sharing with the park officials an appreciation of the complexity of the problem. Some National Parks in South Africa cover over 2 million acres, larger than the state of New Jersey (and no turnpike!). Counting elephants in these rugged areas is not simple—it necessitates historical evidence and statistical inference. Determining the age and sex distribution of elephants can be extremely difficult, especially during periods when external forces are changing the natural equilibrium of the herds. Discussing these considerations of the ecosystem with the park officials was an important element of Task 5—increasing the confidence of park managers. A team's lack of appreciation for the complex environment was often revealed in the manner in which the darting would be implemented. 
Some teams suggested counting elephants every year and darting only elephants of specific age groups—probably an impossible undertaking. Other teams realized that it would be impossible to "tag" darted elephants and suggested darting every two years to help eliminate the problem of darting the same elephant several times in a given year.

The interplay between darting and relocation was the theme of Task 3. It was interesting how some teams believed that relocation could be done very easily while other teams wrestled with the cost and complications of relocating hundreds of elephants—hundreds of big, heavy, cumbersome, stubborn elephants. The understanding of resource management was a critical ingredient in solving this problem. The top teams did this very well.

# Presentation

Clarity of presentation is essential to good research and analysis, and it provides the ability to effectively influence the decision-making process. Many teams this year presented very clear and concise support of their work. The stronger teams carefully created the appropriate mixture of words, graphs, algorithms, and analysis to present their reasoning and recommendations.

Often, the results for a specific task were spread throughout the paper rather than confined to a particular section. Over the years, however, modeling teams have continued to place a greater emphasis on the write-up. This has been a very pleasant trend to witness.

# Conclusion

Almost every team felt comfortable transitioning their work to other possible scenarios. They revealed a confidence in generalizing their analysis and adapting it to specific situations.

This problem was successful because writing a top paper required an understanding of science, research, and mathematics. The best teams revealed the value of solving a problem from an interdisciplinary perspective. The top three papers are remarkable efforts to solve an open-ended problem in a very short period of time! 
Congratulations to all the interdisciplinary teams and especially the three "outstanding" teams. + +# About the Author + +Gary Krahn received his Ph.D. in Applied Mathematics at the Naval Postgraduate School. He is currently the Deputy Head of the Dept. of Mathematical Sciences at the U.S. Military Academy at West Point. His current interests are in the study of generalized de Bruijn sequences for communication and coding applications. He enjoys his role as a judge and associate director of the ICM. + +# About the Problem Author + +Chris Arney + +Gary Krahn + +Department of Mathematics + +U.S. Military Academy + +West Point, NY + +ad6819@exmail.usma.army.mil + +ag2609@exmail.usma.army.mil + +In the second year of ICM, the contest directors wanted a problem involving resource management as it relates to modeling in environmental sciences. In addition, we wanted the problem to involve data analysis, to be realistic, and to be open-ended where no solution is readily available. Our search converged to the elephant problem of Professor Tony Starfield—and we believe our search was successful. + +Professor Anthony M. Starfield in the Dept. of Ecology, Evolution, and Behavior, University of Minnesota, carefully guided us in composing this problem. He is an applied mathematician who enjoys using mathematics and computers to help solve "real-life" problems. Prof. Starfield has his Ph.D. from the University of the Witwatersrand, Johannesburg, and has worked on many problems in wildlife conservation, in particular, modeling of populations and ecosystems. For 20 years, he has built models to aid management decisions in the game parks of Southern Africa. His current research work is in two separate areas. 
- He investigates how decisions are made in conservation biology and attempts to develop models that feed into a formal multi-objective decision process that reflects both the uncertainty inherent in conservation problems and the various interests of the players in these decisions.
- He develops new paradigms for modeling ecosystem dynamics. This approach has been applied to forest succession in Minnesota, elephant-tree dynamics in Zimbabwe, and global warming on Alaskan tundra.

Tony Starfield is also the co-author of How to Model It: Problem-Solving for the Computer Age and Building Models for Conservation and Wildlife Management.

Our question combined two components that Prof. Starfield has incorporated into his work:

- using mathematical models effectively in ecology and conservation biology, and
- using the creative aspect of modeling as a logical and practical process.

If you compare the ICM problem with the problem that Tony Starfield has been investigating (see below), you will see that only the names have been changed to protect the innocent. As we coordinated with Tony, we found that he spends considerable time in South Africa solving important problems. Here is the problem that Tony suggested:

The Kruger National Park in South Africa has tried to maintain a steady elephant population. Their policy is to keep the number of elephants fixed, and for the past 20 or more years they have attempted to count the total population each year, then remove whole herds to keep the population stable. This operation has involved shooting (for the most part) and occasionally relocating elephants every year. There has been a public outcry against the shooting of elephants, and it is not feasible to relocate large numbers of elephants. A contraceptive dart has been developed that will prevent a mature elephant cow from conceiving for a period of two years. How can darting help control the population of elephants?
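The trade-off in the problem statement above lends itself to a very small projection model. The sketch below is not drawn from any contest paper or from Prof. Starfield's work; all rates (birth, death, fraction of mature cows darted) are hypothetical placeholders, chosen only to show how darting enters the birth term.

```python
# Hypothetical illustration only: a one-equation annual projection in
# which darting a fraction of the population's breeding capacity removes
# those births for the year. All parameter values are invented
# placeholders, not Kruger Park data.

def project(years, pop0=10000, birth_rate=0.07,
            death_rate=0.05, dart_fraction=0.0):
    """Return yearly population sizes under a crude darting policy."""
    pop = float(pop0)
    history = [pop]
    for _ in range(years):
        births = birth_rate * pop * (1.0 - dart_fraction)  # darted cows do not conceive
        deaths = death_rate * pop
        pop += births - deaths
        history.append(pop)
    return history

no_darting = project(20)                       # grows: births exceed deaths
with_darting = project(20, dart_fraction=0.3)  # births pushed below deaths
```

Even this caricature reproduces the qualitative question the contest tasks pose: the dart fraction needed to hold the population steady depends directly on the ratio of the death rate to the birth rate.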
Tony suggested that the overall task is to develop and use models to investigate how the contraceptive dart might be used for population control. He then crafted a series of tasks to help guide the students.

We thought it would be best not to specify the Kruger Park, to allow for greater generality. Because of the assumptions that have to be made, this is an open-ended problem. This problem was exciting because it was accessible to students with a variety of backgrounds, without losing the "real-world" application. It also required teams to do research to model the situation appropriately. A goal was to provide an opportunity to have students discover that modeling skills can allow them to make significant contributions to society. We hope that a little of Tony Starfield's commitment to our environment and modeling rubbed off on those involved with this problem.

# References

Starfield, A.M., K.A. Smith, and A.L. Bleloch. 1990. How to Model It: Problem-Solving for the Computer Age. New York: McGraw-Hill.

Starfield, A.M., and A.L. Bleloch. 1991. Building Models for Conservation and Wildlife Management. 2nd ed. Edina, MN: Burgess Press.

# About the Author

![](images/2ddffdc2ec61083404dc3426e35e15e6fa5d483393f03dec95257f85f7d405fa.jpg)

Anthony Starfield received his Ph.D. at the University of the Witwatersrand, Johannesburg. He is an applied mathematician who enjoys solving environmental problems. For 20 years, Tony has been working with engineers to build models to aid management decisions in the game parks of Southern Africa. What was essentially a hobby grew into a career. Today he would be described as an ecological modeler. Currently, he is a Professor in the Department of Ecology, Evolution, and Behavior at the University of Minnesota College of Biological Science. His home page is http://biosci.cbs.umn.edu/eeb/faculty/StarfieldAnthony.html.

# Statement of Ownership, Management, and Circulation
1. Publication Title: The UMAP Journal
2. Publication Number: 0197-3622
3. Filing Date: 09/16/00
4. Issue Frequency: Quarterly
5. Number of Issues Published Annually: 4 + annual collection
6. Annual Subscription Price: $64.00
7. Complete Mailing Address of Known Office of Publication (Not printer): 57 Bedford St., Ste. 210, Lexington, MA 02420
Contact Person: Kevin Darcy
Telephone: 781-862-7878
+ +8. Complete Mailing Address of Headquarters or General Business Office of Publisher (Not printer) + +same + +9. Full Names and Complete Mailing Addresses of Publisher, Editor, and Managing Editor (Do not leave blank) + +Publisher (Name and complete mailing address) + +Solomon Garfunkel, 57 Bedford St., Ste.210, Lexington, MA 02420 + +Editor (Name and complete mailing address) + +Paul J. Campbell, Beloit College, 700 College St., Beloit, WI 53511 + +Managing Editor (Name and complete mailing address) + +Pauline Wright, 57 Bedford St., Ste. 210, Lexington, MA 02420 + +10. Owner (Do not leave blank. If the publication is owned by a corporation, give the name and address of the corporation immediately followed by the names and addresses of all stockholders owning or holding 1 percent or more of the total amount of stock. If not owned by a corporation, give the names and addresses of the individual owners. If owned by a partnership or other unincorporated firm, give its name and address as well as those of each individual owner. If the publication is published by a nonprofit organization, give its name and address.) + +
| Full Name | Complete Mailing Address |
| --- | --- |
| Consortium for Mathematics and Its Applications, Inc. (COMAP Inc.) | 57 Bedford St., Ste. 210, Lexington, MA 02420 |
11. Known Bondholders, Mortgagees, and Other Security Holders Owning or Holding 1 Percent or More of Total Amount of Bonds, Mortgages, or Other Securities. If none, check box
| Full Name | Complete Mailing Address |
| --- | --- |
| N/A | |
+ +12. Tax Status (For completion by nonprofit organizations authorized to mail at nonprofit rates) (Check one) The purpose, function, and nonprofit status of this organization and the exempt status for federal income tax purposes: + +Has Not Changed During Preceding 12 Months + +Has Changed During Preceding 12 Months (Publisher must submit explanation of change with this statement) + +
13. Publication Title: The UMAP Journal
14. Issue Date for Circulation Data Below: 9/29/00

| 15. Extent and Nature of Circulation | Average No. Copies Each Issue During Preceding 12 Months | No. Copies of Single Issue Published Nearest to Filing Date |
| --- | --- | --- |
| a. Total Number of Copies (Net press run) | 1275 | 1820 |
| b(1). Paid/Requested Outside-County Mail Subscriptions Stated on Form 3541 | N/A | N/A |
| b(2). Paid In-County Subscriptions | N/A | N/A |
| b(3). Sales Through Dealers and Carriers, Street Vendors, Counter Sales, and Other Non-USPS Paid Distribution | 80 | 75 |
| b(4). Other Classes Mailed Through the USPS | N/A | N/A |
| c. Total Paid and/or Requested Circulation [Sum of 15b(1)-(4)] | 80 | 75 |
| d(1). Free Distribution by Mail: Outside-County as Stated on Form 3541 | | |
| d(2). Free Distribution by Mail: In-County as Stated on Form 3541 | 80 | 95 |
| d(3). Free Distribution by Mail: Other Classes Mailed Through the USPS | | |
| e. Free Distribution Outside the Mail (Carriers or other means) | -0- | -0- |
| f. Total Free Distribution (Sum of 15d and 15e) | 80 | 95 |
| g. Total Distribution (Sum of 15c and 15f) | 160 | 170 |
| h. Copies not Distributed | 125 | 590 |
| i. Total (Sum of 15g and 15h) | 285 | 760 |
| j. Percent Paid and/or Requested Circulation (15c divided by 15g times 100) | 50% | 44% |
16. Publication of Statement of Ownership: Publication required. Will be printed in the third issue of this publication. □ Publication not required.
17. Signature and Title of Editor, Publisher, Business Manager, or Owner: Solomon Garfunkel (signature). Date: 9-19-00
I certify that all information furnished on this form is true and complete. I understand that anyone who furnishes false or misleading information on this form or who omits material or information requested on the form may be subject to criminal sanctions (including fines and imprisonment) and/or civil sanctions (including civil penalties).

# Instructions to Publishers

1. Complete and file one copy of this form with your postmaster annually on or before October 1. Keep a copy of the completed form for your records.
2. In cases where the stockholder or security holder is a trustee, include in items 10 and 11 the name of the person or corporation for whom the trustee is acting. Also include the names and addresses of individuals who are stockholders who own or hold 1 percent or more of the total amount of bonds, mortgages, or other securities of the publishing corporation. In item 11, if none, check the box. Use blank sheets if more space is required.
3. Be sure to furnish all circulation information called for in item 15. Free circulation must be shown in items 15d, e, and f.
4. Item 15h, Copies not Distributed, must include (1) newsstand copies originally stated on Form 3541, and returned to the publisher, (2) estimated returns from news agents, and (3) copies for office use, leftovers, spoiled, and all other copies not distributed.
5. If the publication had Periodicals authorization as a general or requester publication, this Statement of Ownership, Management, and Circulation must be published; it must be printed in any issue in October or, if the publication is not published during October, the first issue published after October.
6. In item 16, indicate the date of the issue in which this Statement of Ownership will be published.
7. Item 17 must be signed.

Failure to file or publish a statement of ownership may lead to suspension of Periodicals authorization.
\ No newline at end of file diff --git a/MCM/1995-2008/2001ICM/2001ICM.md b/MCM/1995-2008/2001ICM/2001ICM.md new file mode 100644 index 0000000000000000000000000000000000000000..56f9b3538c3ba124cc8df52408fbb7c3b86e8730 --- /dev/null +++ b/MCM/1995-2008/2001ICM/2001ICM.md @@ -0,0 +1,1973 @@ +# The U + +# M + +Publisher + +COMAP, Inc. + +Executive Publisher + +Solomon A. Garfunkel + +ILAP Editor + +David C. "Chris" Arney + +Dean of the School of + +Mathematics and Sciences + +The College of Saint Rose + +432 Western Avenue + +Albany, NY 12203 + +arneyc@mail.strose.edu + +On Jargon Editor + +Yves Nievergelt + +Department of Mathematics + +Eastern Washington University + +Cheney, WA 99004 + +ynievergelt@ewu.edu + +Reviews Editor + +James M. Cargal + +P.O.Box 210667 + +Montgomery, AL 36121-0667 + +JMCargal@sprintmail.com + +Development Director + +Laurie W. Aragon + +Production Manager + +George W. Ward + +Project Manager + +Roland Cheyney + +Copy Editors + +Seth A. Maislin + +Pauline Wright + +Distribution Manager + +Kevin Darcy + +Production Secretary + +Gail Wessell + +Graphic Designer + +Daiva Kiliulis + +# AP Journal + +Vol. 22, No. 4 + +# Editor + +Paul J. Campbell + +Campus Box 194 + +Beloit College + +700 College St. + +Beloit, WI 53511-5595 + +campbell@beloit.edu + +# Associate Editors + +Don Adolphson + +David C. "Chris" Arney + +Ron Barnes + +Arthur Benjamin + +James M. Cargal + +Murray K. Clayton + +Courtney S. Coleman + +Linda L. Deneen + +James P. Fink + +Solomon A. Garfunkel + +William B. Gearhart + +William C. Giauque + +Richard Haberman + +Charles E. Lienert + +Walter Meyer + +Yves Nievergelt + +John S. Robertson + +Garry H. Rodrigue + +Ned W. Schillow + +Philip D. Straffin + +J.T. Sutcliffe + +Donna M. Szott + +Gerald D. Taylor + +Maynard Thompson + +Ken Travers + +Robert E.D. "Gene" Woolsey + +Brigham Young University + +The College of St. 
Rose

University of Houston-Downtown

Harvey Mudd College

Troy State University Montgomery

University of Wisconsin—Madison

Harvey Mudd College

University of Minnesota, Duluth

Gettysburg College

COMAP, Inc.

California State University, Fullerton

Brigham Young University

Southern Methodist University

Metropolitan State College

Adelphi University

Eastern Washington University

Georgia College and State University

Lawrence Livermore Laboratory

Lehigh Carbon Community College

Beloit College

St. Mark's School, Dallas

Comm. College of Allegheny County

Colorado State University

Indiana University

University of Illinois

Colorado School of Mines

# MEMBERSHIP PLUS FOR INDIVIDUAL SUBSCRIBERS

Individuals subscribe to The UMAP Journal through COMAP's Membership Plus. This subscription includes print copies of quarterly issues of The UMAP Journal, our annual collection UMAP Modules: Tools for Teaching, our organizational newsletter Consortium, on-line membership that allows members to search our on-line catalog, download COMAP print materials, and reproduce for use in their classes, and a $10\%$ discount on all COMAP materials.

(Domestic) #2120 $75

(Outside U.S.) #2121 $85

# INSTITUTIONAL PLUS MEMBERSHIP SUBSCRIBERS

Institutions can subscribe to the Journal through either Institutional Plus Membership, Regular Institutional Membership, or a Library Subscription. Institutional Plus Members receive two print copies of each of the quarterly issues of The UMAP Journal, our annual collection UMAP Modules: Tools for Teaching, our organizational newsletter Consortium, on-line membership that allows members to search our on-line catalog, download COMAP print materials, and reproduce for use in any class taught in the institution, and a $10\%$ discount on all COMAP materials.

(Domestic) #2170 $395

(Outside U.S.)
#2171 $415

# INSTITUTIONAL MEMBERSHIP SUBSCRIBERS

Regular Institutional members receive only print copies of The UMAP Journal, our annual collection UMAP Modules: Tools for Teaching, our organizational newsletter Consortium, and a $10\%$ discount on all COMAP materials.

(Domestic) #2140 $165

(Outside U.S.) #2141 $185

# LIBRARY SUBSCRIPTIONS

The Library Subscription includes quarterly issues of The UMAP Journal, our annual collection UMAP Modules: Tools for Teaching, and our organizational newsletter Consortium.

(Domestic) #2130 $140

(Outside U.S.) #2131 $160

To order, send a check or money order to COMAP, or call toll-free 1-800-77-COMAP (1-800-772-6627).

The UMAP Journal is published quarterly by the Consortium for Mathematics and Its Applications (COMAP), Inc., Suite 210, 57 Bedford Street, Lexington, MA, 02420, in cooperation with the American Mathematical Association of Two-Year Colleges (AMATYC), the Mathematical Association of America (MAA), the National Council of Teachers of Mathematics (NCTM), the American Statistical Association (ASA), the Society for Industrial and Applied Mathematics (SIAM), and The Institute for Operations Research and the Management Sciences (INFORMS). The Journal acquaints readers with a wide variety of professional applications of the mathematical sciences and provides a forum for the discussion of new directions in mathematical education (ISSN 0197-3622).

Second-class postage paid at Boston, MA and at additional mailing offices.

Send address changes to:

The UMAP Journal

COMAP, Inc.

57 Bedford Street, Suite 210, Lexington, MA 02420

© Copyright 2001 by COMAP, Inc. All rights reserved.

# Vol. 22, No. 4 2001

# Table of Contents

# Guest Editorial

The Problem with Algebraic Models of Marriage and Kinship Structure
James M. Cargal 345

# Special Section on ICM

Results of the 2001 Interdisciplinary Contest in Modeling David C. "Chris" Arney and John H.
"Jack" Grubbs 355 +A Multiple Regression Model to Predict Zebra Mussel Population Growth +Michael P. Schubmehl, Marcy A. LaViollette, and Deborah A. Chun 367 +Identifying Potential Zebra Mussel Colonization +David E. Stier, Marc Alan Leisenring, and +Matthew Glen Kennedy 385 +Waging War Against the Zebra Mussel +Nasreen A. Ilias, Marie C. Spong, and James F. Tucker 399 +Judge's Commentary: The Outstanding Zebra Mussel Papers Gary Krahn 415 +Author's Commentary: The Outstanding Zebra Mussel Papers Sandra A. Nierwicki-Bauer 421 +Reviews 427 +Annual Index 431 +Acknowledgments 435 +Errata 436 + +![](images/6de243d8ac552a7e20a7e2b722628aee8a565c0e28857164f4f9709e17ad5dc8.jpg) + +# Guest Editorial + +# The Problem with Algebraic Models of Marriage and Kinship Structure + +James M. Cargal + +Mathematics Department + +Troy State University Montgomery + +P.O. Drawer 4419 + +Montgomery, AL 36103 + +jmcargal@sprintmail.com + +# Introduction + +Algebraic models of marriage and kinship systems have been developed for nearly 50 years, but they continue to be a matter of controversy. Most anthropologists have found algebraic models abstract and unnecessary, while mathematical scientists have tended to consider the value of these models self-evident. I argue here that the anthropologists have been largely correct, and that the crux of the matter comes down to the question: What constitutes a mathematical model? + +Abstract algebraic modeling of marriage and kinship systems began in 1949 as an addendum to Claude Levi-Strauss's seminal The Elementary Structures of Kinship [1969]. The addendum, by the great algebraist André Weil, was "at Levi-Strauss's request" [1969, 221]. Variations of the same algebraic model have appeared subsequently in articles and books into the 1990s (for example, Ascher [1991]). Perhaps the most influential use of the model was in the finite mathematics textbook by Kemeny et al. [1966], and its most ambitious application was in the text by Harrison C. 
White [1963]. I even wrote a paper myself on the subject [Cargal 1978].

To some extent I am embarrassed by that paper, as I now believe that my work on algebraic marriage and kinship systems, like the other works, has little value. Many anthropologists have attacked such models; Korn and Needham [1970] offer perhaps the best attack. However, though they make some stinging points, Korn and Needham are caught up too much in mathematical notation, as opposed to mathematical substance. The problem with the models does not lie in the mathematics itself but in what these models do not do.

The rest of this essay is concerned with:

- What is a (good) mathematical model?
- How good are algebraic models of marriage and kinship systems?

# What Is a Mathematical Model?

Essays on the nature of a mathematical model abound in texts on operations research, simulation, probability, and applied mathematics—but are strangely absent from physics and engineering books. This is not because physicists and engineers are less philosophical than their counterparts in other mathematical sciences, but because physicists routinely use models in introductory courses; there is no need for a transition in later courses to modeling.

Mathematical models are generally defined as mathematical representations of the subject at hand. They are abstractions that represent ideal assumptions, and they are supposed to capture the salient features of the subject and to leave out the irrelevant features. If the model is a good model, predictions made from the model should be true of the subject that is modeled. The question of whether the model is a good model is the core philosophical question. Related questions are:

- Which features of the subject are relevant for the model?
- To what extent can predictions based on the model be used?
- How accurate are the predictions?
# Models and Predictions

Prediction is core to the subject of models and core to this critique of algebraic models of marriage and kinship systems. I suggest that a model is only as good as the predictions that it accurately makes. What is the purpose of a model if not to make predictions? The usual answer is that the model can help us understand the subject. But if the model does not yield predictions, what is the value of this understanding? I will give two historical examples of models in physics and astronomy and will then re-examine the algebraic models of marriage and kinship.

# Two Models by Kepler

Although mathematical models of nature seem to have been important to the Greeks, mathematical models in the modern sense took off around 1600 with Galileo, Kepler, and others. Kepler constructed two models of interest to this paper.

- Planetary motion: If we view Kepler's laws of planetary motion as a mathematical model, we can see that it is very much a predictive model. The model not only constructs the path of the planets but also determines the speed of their revolution, and use of the model enabled more accurate forecasting of planetary positions. It may be relevant that this model was based on painstaking analysis of data that were themselves of unprecedented accuracy.

- Planets as platonic solids: In his Harmonice Mundi, Kepler devised a less well-known model of the planetary system (see Kappraff [1991, 265]). He showed that the orbits of the six known planets could be inscribed about the five platonic solids. The five solids are nested inside one another, and the six planets nest within the solids. Kepler published this model and was proud of it for his entire life. The model was predictive in only one sense: It implies that there are no new planets to be discovered. For its time, it is not a bad model; it is less mystical than prior Greek theories of the universe. For our time, the model is primitive.
We do not reject the model because its one prediction failed; the prediction could well have turned out to be true. We reject the model because by our standards it is contrived, and because it is not predictive enough.

I contend that the algebraic models of marriage and kinship systems are more in the style of Kepler's platonic solids model than in the style of his laws of planetary motion.

# The Quantum Physics Model

Quantum physics is difficult to understand and completely unintuitive, but it has been perhaps the most successful model of modern physics. The quantum theory of physics has been totally dominant in its domain (atomic mechanics) because of two features:

- It is mathematically consistent.
- It has been the source of thousands of predictions, which in all testable cases have been correct. The most famous such prediction might be Bell's theorem (for an elementary account, see Peat [1990]).

Hence, a mathematical model that is best known for its statement of what cannot be predicted (i.e., the Heisenberg uncertainty principle) has been as successful as any model in science history, because it is the exemplary predictive model.

Following the example of quantum physics, we should evaluate algebraic models of marriage and kinship systems according to the criterion of how well they predict.

# Algebraic Models of Marriage and Kinship

# The Core of the Algebraic Models

Algebraic models of marriage and kinship are based on clans and their relationships. In some cases—specifically, the Kariera, the Aranda, perhaps the Tarau, and perhaps the Murngin—the structure of the kinship system is an algebraic group. This single observation, which is apparently due to André Weil, is the heart of 40-plus years of writing on mathematical aspects of marriage and kinship systems.

However, it is not in any way a remarkable observation. Clan systems are either hierarchical or they are not.
If, as in the above cases, the clan relationships are not hierarchical, they are likely to be symmetric. Group theory could be described as the mathematics of symmetry (see Armstrong [1988] and Stewart [1992, Chapter 9: The Duellist and the Monster, 115-129]). It would be a remarkable discovery indeed if there were symmetric clan structures that could not be described in the language of mathematical groups! + +If we use groups to describe symmetrical clan structures, there should be a payoff or return to anthropologists: There needs to be some reason to bring group theory into anthropology. This may seem obvious, but it is not. Anyone who has worked in certain areas of industry has seen mathematical models (and simulation models in particular) that seem to have no purpose. On the subject of simulation modeling, E.C. Russell says: "The goal of a simulation project should never be 'To model the ....' Modeling itself is not a goal; it is a means of achieving a goal" [1983, 16]. That models without a purpose exist is a consequence, I believe, of the divergence of mathematics and physics that has been going on for 200 years but which has greatly accelerated in the last 30 years. + +The purpose of a mathematical model is to predict. What do algebraic models of marriage and kinship systems predict? The answer is, nothing. + +Let us ask a simpler question: What information do algebraic models of marriage and kinship systems provide? In his genesis of algebraic models, André Weil [1969, 221] says "I propose to show how a certain type of marriage and kinship laws can be interpreted algebraically, and how algebra and the study of groups ... can facilitate its study and classification." Given the forty-plus years since Weil's essay, it is reasonable to ask whether this has been accomplished: Has anyone aided the study and classification of marriage and kinship systems through the use of group theory (or any use of abstract algebra whatsoever)? 
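Weil's observation is easy to make concrete. The following sketch is a minimal illustration, not taken from Weil or any of the papers cited here; the section labels and the particular choice of "marriage" and "descent" generators are assumptions made for demonstration only. It encodes four clans as elements of $Z_{2} \times Z_{2}$ and lets marriage and descent act as translations by fixed group elements.

```python
# Minimal sketch: a four-section clan system modeled as the Klein
# four-group Z2 x Z2. The labeling of sections and the choice of
# generators are illustrative assumptions, not ethnographic claims.

from itertools import product

sections = list(product((0, 1), repeat=2))  # the four clans

def add(a, b):
    """Componentwise addition mod 2: the group operation."""
    return ((a[0] + b[0]) % 2, (a[1] + b[1]) % 2)

MARRIAGE = (1, 0)  # translation giving a spouse's section
DESCENT = (0, 1)   # translation giving a child's section

spouse = {s: add(s, MARRIAGE) for s in sections}
child = {s: add(s, DESCENT) for s in sections}

# In the Klein four-group every element is its own inverse, so both
# relations are involutions: applying either one twice returns each
# section to itself.
assert all(spouse[spouse[s]] == s for s in sections)
assert all(child[child[s]] == s for s in sections)
```

Note that the exercise illustrates the editorial's point as much as Weil's: the assertions merely verify a symmetry that was built in by construction, and nothing in the code predicts anything an ethnographer did not already know.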
+ +Another approach is this: How are the traditional anthropological methods for classifying marriage and kinship structures deficient? How does group theory make up for such purported deficiency? + +# Prediction and Falsifiability + +A key element in the twentieth-century view of scientific theories is falsifiability, a key doctrine in the work of the eminent philosopher of science Karl R. Popper (see, for example, Popper [1959]). A scientific theory must be falsifiable, that is, there must potentially be observations or experimental results that would force abandonment of the theory. + +Falsifiability seems nearly equivalent to predictability. A scientific theory must make predictions so that the theory itself can be tested. This seems like a reasonable criterion for scientific theories and for models, but it certainly is not a criterion met by algebraic models of marriage and kinship systems. What predictions do the algebraic models make? + +If algebraic models of marriage and kinship systems are not predictive models, perhaps they are so-called explanatory models. This begs several questions: + +- Can a model that makes no predictions be explanatory? +- What exactly does an explanatory model explain? +- How do we judge the merits of an explanatory model? + +Explanatory models don't predict but perhaps they give us insights. Let us take an example from my own paper [Cargal 1978]. The Kariera have four clans, and their clan system can be represented by the graph in Figure 1. + +![](images/0731494476511091d72efe7383e160f05e5a017c6cfd41da53d02489553e4f4a.jpg) +Figure 1. Diagram of the Kariera kinship system. Solid arcs represent a clan of marriage and kinship and the dotted lines represent the clan of a child. + +This representation, the traditional algebraic view of the Kariera, is a valid way to introduce algebra, since the graph depicted is also the Cayley graph of + +the Klein four-group $Z_{2} \times Z_{2}$ . 
In my paper, I suggest looking at the subclans of each sex: "[T]here is sex differentiation among the clans referred to. Natives think in terms of men of Clan A or women of Clan A ..." [1978, 161]. I give a graph of the eight subclans and then relabel that graph to realize a Cayley graph of a group. Figure 2 shows the group of subclans of the Kariera. + +![](images/67f533b82964d5537f2bd24738bc0493218a9ac78c3775e423253298b369e018.jpg) +Figure 2. A1 denotes men of Clan A and C0 denotes women of Clan C. The relations are S and O. An S means subclan of children of same sex and an O means subclan of children of opposite sex. + +The group of Figure 2 happens not to be homomorphic to the group of Figure 1. This is a formal mathematical statement to the effect that the two groups are fundamentally different. The one group, of order eight, is not merely a refinement of the other group, of order four. To my knowledge, the group of order eight does not appear anywhere else in the literature; but it is as valid as the standard group (to whatever extent that group can be said to be valid). + +We have two algebraic representations of marriage and kinship relations in the Kariera people, that is, two fundamentally distinct models. If these models have anthropological content, they should give conflicting information, or at least the second model should provide new information. My paper gives a great deal of analysis of this and other groups. It is rather enthusiastic analysis and to some extent I find myself a little impressed as I look at it now. But with the distance of more than 20 years since doing the research, I can ask, as anthropologists asked: What significance does this have to anthropology? + +# Cultures of Mathematics + +Different branches of the mathematical sciences have different understandings of what a model is. 
Mathematical logic, for example, is concerned a great deal with models that are quite different from the models that interest physicists (see Bell and Slomson [1971]). Models in statistics are different from either of those (see Hinkelmann and Kempthorne [1994]). In much of mathematical culture, a model is a mathematical system that is formally specified and internally consistent; there is absolutely no requirement that it be useful. Operations researchers and statisticians are interested in models that they can use to solve problems, but the importance of making predictions is not emphasized (it certainly wasn't in my own training in those areas). + +To the few anthropologists who can understand the mathematics, the (positive) worth of algebraic models of marriage and kinship is self-evident. The remaining anthropologists are skeptical and tend to ask, "What good is this model? What will it do for me?" Mathematicians tend to respond condescendingly, knowing that the anthropologists are mystified by the mathematics, "It will help you understand the structure of the marriage and kinship system." Unfortunately, I do not remember any anthropologist being prescient enough to ask, "What will this model predict?" In the sciences, one manifestation of different subcultures is the understanding of what a model is. One consequence of this is that in the case of what constitutes a mathematical model, there may be an unlikely alliance between anthropologists and physicists against mathematicians. + +# Mathematics Is Not a Science + +Though it is said to be the language of science, mathematics is generally considered outside of science; this is because scientific facts must be verified empirically but mathematical truths must only be true to their axioms. + +# Is Cultural Anthropology a Science? + +The models referred to in this paper concern cultural anthropology, a field that may not be hospitable to a traditional concept of science (see Fox [1992]). 
So anthropologists may not care for or about mathematical models of marriage and kinship. Nonetheless, mathematicians who apply their models outside of mathematics should use the paradigm of the scientific method and of predictions, testing, and falsifiability.

Lastly, when mathematicians apply mathematics to previously non-mathematical areas, people who fail to understand the mathematics may nonetheless be quite right in their criticism. It is important that the needs of the applied area be recognized: that in applying mathematics, mathematicians should look outside their own needs and their own culture.

# Addendum

What happened in the last 20 years that I should now have an attitude far more sympathetic to anthropologists? Why did I come to reject an application of mathematics that I had published?

Until recent times almost all mathematics majors studied physics. They are still encouraged to do so, since physics provides much motivation for calculus, not to mention most examples in differential equations and vector calculus. However, it is becoming common for mathematics majors to "get around" the physics requirement (as I did), and today a smaller proportion of mathematicians have any training in physics than in the past.

When I did my work in anthropology, my background was a master's degree in mathematics with emphasis on logic and abstract algebra. When I published the paper, I was working in aerospace and had made it my business to be comfortable in statistics and computer science. Subsequently I did a Ph.D. in operations research, a mathematical discipline with strong ties to statistics and computer science—but not to physics.

In 1988, when I was again working in military aerospace as an applied mathematician, a problem came up in classical Newtonian physics. I needed merely to use computer models already constructed for the problems that I faced.
However, I felt that I should understand the underlying physics, so I began the study of physics—not realizing that it is addictive! It was exposure to physics that changed my appreciation of the meaning of mathematical models.

# References

Armstrong, M.A. 1988. Groups and Symmetry. New York: Springer-Verlag.
Ascher, Marcia. 1991. Ethnomathematics: A Multicultural View of Mathematical Ideas. Pacific Grove, CA: Brooks/Cole.
Bell, J.L., and A.B. Slomson. 1971. Models and Ultraproducts. Amsterdam: North-Holland.
Burton, D.M. 1991. The History of Mathematics: An Introduction. Dubuque, IA: William C. Brown.
Cargal, J.M. 1978. An analysis of the marriage structure of the Murngin tribe of Australia. Behavioral Science 23: 157-168.
Fox, Robin. 1992. Anthropology and the "Teddy Bear" picnic. Society (November/December 1992): 47-55.
Hinkelmann, K., and O. Kempthorne. 1994. Design and Analysis of Experiments. Vol. I: Introduction to Experimental Design. New York: Wiley.
Kappraff, J. 1991. Connections: The Geometric Bridge Between Art and Science. New York: McGraw-Hill.
Kemeny, J.G., J.L. Snell, and G.L. Thompson. 1966. Introduction to Finite Mathematics. 2nd ed. Englewood Cliffs, NJ: Prentice-Hall.
Korn, F., and R. Needham. 1970. Permutation models and prescriptive systems: The Tarau case. Man 5: 393-420.
Levi-Strauss, Claude. 1969. The Elementary Structures of Kinship. Boston: Beacon Press.
Luenberger, D.G. 1979. Introduction to Dynamic Systems: Theory, Models and Applications. New York: Wiley.
Peat, F.D. 1990. Einstein's Moon: Bell's Theorem and the Curious Quest for Quantum Reality. Chicago, IL: Chicago.
Popper, Karl R. 1959. The Logic of Scientific Discovery. London: Hutchinson.
Russell, E.C. 1983. Building Simulation Models with Simscript II.5. Los Angeles, CA: CACI.
Snow, C.P. 1969. The Two Cultures: And a Second Look. New York: Cambridge University Press.
Stewart, I. 1992. The Problems of Mathematics. 2nd ed.
New York: Oxford University Press.
von Neumann, J. 1955. Mathematical Foundations of Quantum Mechanics. Princeton, NJ: Princeton University Press.
Weil, André. 1969. On the algebraic study of certain types of marriage laws (Murngin system). In Levi-Strauss [1969].
White, H.C. 1963. An Anatomy of Kinship: Mathematical Models for Structures of Cumulated Roles. Englewood Cliffs, NJ: Prentice-Hall.

# Acknowledgments

An earlier version of this paper was presented at a session devoted to Humanistic Mathematics at the Joint Mathematics Meetings, January 1996, in Orlando, Florida.

# About the Author

![](images/85df09991ab00a1a9c59bf4d5d47672b0a2ce206172f8b99ea18dfc4f8a8c27d.jpg)

James M. Cargal has a B.S. in mathematics from San Diego State University (1973) and an M.S. in mathematics from Purdue (1975); his Ph.D., in operations research, is from the Industrial Engineering Dept. of Texas A & M. He is currently Professor of Mathematics and Chair of Mathematical Sciences at Troy State University Montgomery, where he has been since 1990. He has worked off and on in the military aerospace industry and still consults in that area on statistical questions. His original work on algebraic systems of marriage was done in the summer of 1976 (at a long-defunct pizza joint in Florida), and about $10\%$ of it was published in his 1978 paper (the other $90\%$ was destroyed in the 1980s). He has been Reviews Editor of this Journal since 1986.

# Editor's Note

A future issue of the Journal will contain an article responding to this editorial.

# Modeling Forum

# Results of the 2001 Interdisciplinary Contest in Modeling

David C. “Chris” Arney, Co-Director

Dean of the School of Mathematics and Sciences

The College of Saint Rose

432 Western Avenue

Albany, NY 12203

arneyc@mail.strose.edu

John H. "Jack" Grubbs, Co-Director

Dept.
of Civil and Environmental Engineering

Tulane University

New Orleans, LA 70112

jgrubbs@tulane.edu

# Introduction

A total of 83 teams of undergraduates, from 58 institutions in 5 countries, spent the second weekend in February working on an applied mathematics problem in the 3rd Interdisciplinary Contest in Modeling (ICM).

The 2001 ICM began at 12:01 A.M. on Friday, Feb. 9 and officially ended at 11:59 P.M. on Monday, Feb. 12. During that time, teams of up to three undergraduates were to research and submit an optimal solution for an open-ended modeling problem. The 2001 ICM marked the inaugural year for the contest's new online administration, and it was a great success. Students were able to register, obtain contest materials, download the problems at the appropriate time, and enter data through COMAP's ICM website.

After a weekend of hard work, solution papers were sent to COMAP on Monday. Three of the top papers appear in this issue of The UMAP Journal.

Results and winning papers from the first two contests were published in special issues of The UMAP Journal in 1999 and 2000.

COMAP's Mathematical Contest in Modeling and Interdisciplinary Contest in Modeling are unique among modeling competitions in that they are the only international contests in which students work in teams to find a solution. Centering its educational philosophy on mathematical modeling, COMAP uses mathematical tools to explore real-world problems. It serves the educational community as well as the world of work by preparing students to become better informed—and prepared—citizens.

This year's problem was about limiting or preventing the spread of zebra mussels in the Great Lakes and inland waterways of the United States and Canada.

# Problem: The Zebra Mussel Problem

*Our Waterways—An Uncertain Future*

![](images/b94d3b15365f25d8ff1ce206b2d4cde221e63951a67bdfb9df3ae77c5787ca4f.jpg)
Figure 1. Map showing U.S.
states and Canadian provinces with zebra mussels in inland and adjacent waters in July 2000.

Zebra mussels (Dreissena polymorpha) are small, fingernail-sized, freshwater mollusks unintentionally introduced to North America via ballast water from a transoceanic vessel. Since their introduction in the mid-1980s, they have spread through all of the Great Lakes and to an increasing number of inland waterways in the United States and Canada. Zebra mussels colonize various surfaces, such as docks, boat hulls, commercial fishing nets, water intake pipes and valves, native mollusks, and other zebra mussels. Their only known predators—some diving ducks, freshwater drum, carp, and sturgeon—are not numerous enough to have a significant effect on them. Zebra mussels have significantly impacted the Great Lakes ecosystem and economy. Many communities are trying to control or eliminate these aquatic pests. (Source: Great Lakes Sea Grant Network http://www.sgnis.org/.)

Researchers are attempting to identify the environmental variables related to the zebra mussel infestation in North American waterways. The relevant factors that may limit or prevent the spread of the zebra mussel are uncertain. You will have access to some reference data, including listings of several chemicals and substances in the water system that may affect the spread of the zebra mussel throughout waterways. Additionally, you can assume individual zebra mussels grow at a rate of $15\mathrm{mm}$ per year with a life span between 4-6 years. The typical mussel can filter 1 liter of water each day.

# Requirement A

Discuss environmental factors that could influence the spread of zebra mussels.
# Requirement B

Utilizing the chemical data provided at
http://www.comap.com/undergraduate/contest/s/cm/imagesdata/LakeAChem1.xls
and the mussel population data provided at
http://www.comap.com/undergraduate/contest/scm/imagesdata/LakeAPopulation1.xls,
model the population growth of zebra mussels in Lake A. Be sure to review the information below about the collection of the zebra mussel data.

# Requirement C

Utilizing additional data on Lake A from another scientist provided at

http://www.comap.com/undergraduate/contest/s/cm/imagesdata/LakeAChem2.xls

and additional mussel population data provided at

http://www.comap.com/undergraduate/contest/s/cm/imagesdata/LakeAPopulation2.xls,

corroborate the reasonableness of your model from Requirement B. As a result of this additional data, adjust your earlier model. Analyze the performance of your model. Discuss the sensitivity of your model.

# Requirement D

Utilizing the chemical data from two lakes (Lake B and Lake C) in the United States provided at

http://www.comap.com/undergraduate/contest/s/cm/imagesdata/LakeB.xls and

http://www.comap.com/undergraduate/contest/s/cm/imagesdata/LakeC.xls, determine if these lakes are vulnerable to the spread of zebra mussels. Discuss your prediction.

# Requirement E

The community in the vicinity of Lake B (in Requirement D) is considering specific policies for the de-icing of roadways near the lake during the winter season. Provide guidance to the local government officials regarding a policy on "de-icing agents." In your guidance, include predictions on the long-term impact of de-icing on the zebra mussel population.

# Requirement F

It has been recommended by a local community in the United States to introduce round goby fish. Zebra mussels are not often eaten by native fish species, so the mussels represent an ecological dead end. However, round gobies greater than $100\mathrm{mm}$ feed almost exclusively on zebra mussels.
Ironically, because of habitat destruction, the goby is endangered in its native habitat of the Black and Caspian Seas in Russia. In addition to your technical report, include a carefully crafted report (3-page maximum) written explicitly for the local community leaders that responds to their recommendation to introduce the round goby. Also, suggest ways to help reduce the growth of the mussel within and among waterways. + +# Collection of the Zebra Mussel Data + +The developmental state of the zebra mussel is categorized by three stages: veligers (larvae), settling juveniles, and adults. Veligers (microscopic zebra mussel larvae) are free-swimming, suspended in the water for one to three weeks, after which they begin searching for a hard surface to attach to and begin their adult life. Looking for zebra mussel veligers is difficult because they are not easily visible by the naked eye. Settled juvenile zebra mussels can be felt on smooth surfaces like boats and motors. An advanced zebra mussel infestation can cover a surface, even forming thick mats sometimes reaching very high densities. The density of juveniles was determined along the lake using three $15 \times 15$ cm settling plates. The top plate remained in the water for the entire sampling season (S—seasonal) to estimate seasonal accumulation. The middle + +and bottom plates are collected after specific periods (A—alternating) of time denoted by “Lake Days” in the data files. + +![](images/488c87ddde81ef7096a086ddadf09cca5d455b9a0812e42ceeaba3390799e0f6.jpg) +Figure 2. Diagram of collection apparatus. + +The settling plates are placed under the microscope, all juveniles on the undersides of the plate are counted, and densities are reported as juveniles/ $\mathrm{m}^2$ . + +# The Results + +The solution papers were coded at COMAP headquarters so that names and affiliations of the authors would be unknown to the judges. Each paper was then read preliminarily by two "triage" judges at the U.S. 
Military Academy at West Point, NY. At the triage stage, the summary and overall organization are the basis for judging a paper. If the judges' scores diverged for a paper, the judges conferred; if they still did not agree on a score, a third judge evaluated the paper. + +Final judging took place at the College of St. Rose, Albany, NY. The judges classified the papers as follows: + +
| Problem | Outstanding | Meritorious | Honorable Mention | Successful Participation | Total |
|---|---|---|---|---|---|
| Zebra Mussel | 3 | 14 | 28 | 38 | 83 |
The three papers that the judges designated as Outstanding appear in this special issue of The UMAP Journal, together with commentaries. We list those teams and the Meritorious teams (and advisors) below; the list of all participating schools, advisors, and results is in the Appendix.

![](images/df63f8d000c6fe5ec2db6578e25a1cd63e6159fa781f194f05d86d93f8172e6b.jpg)
Figure 3. A settling plate after collection.

# Outstanding Teams

"A Multiple Regression Model to Predict Zebra Mussel Population"

Harvey Mudd College, Claremont, CA
Advisor: Michael E. Moody
Team members: Michael Schubmehl, Marcy LaViolette, Deborah Chun

"Identifying Potential Zebra Mussel Colonization"

Humboldt State University, Arcata, CA
Advisors: Eileen M. Cashman and Jeffrey B. Haag
Team members: David E. Stier, Mark Alan Leisenring, Matthew Glen Kennedy

"Waging War Against the Zebra Mussel"

Lewis and Clark College, Portland, OR
Advisor: Robert W. Owens
Team members: Nasreen A. Ilias, Marie C. Spong, James F. Tucker

# Meritorious Teams (14 teams)

Beijing University of Posts & Telecommunications, Beijing, China (He Zuguo and Luo Shoushan) (two teams)

Dickinson College, Carlisle, PA (Brian S. Pedersen)

East China University of Science & Technology, Shanghai, China (Xiwen Lu and Yan Qin)

Harvey Mudd College, Claremont, CA (Michael E. Moody)

North Carolina School of Science and Mathematics, Durham, NC (Dot Doyle and Dan Teague)

Northwestern Polytechnical University, Xian, China (Yong Xiao Hua and Yi Lu Quan)

South China University of Technology, Guangzhou, China (He Chunxiong and Tao Zhisui)

Southeast University, Nanjing, China (Sun Zhi-zhong)

University College Dublin, Dublin, Ireland (Ted Cox)

University of Science and Technology of China, Hefei, China (Tao Dacheng and Ma Jianxin)

Villa Julie College, Stevenson, MD (Eileen C.
McGraw) + +Zhejiang University, Hangzhou, China (Yang Qifang and He Yong) (two teams) + +# Awards and Contributions + +Each participating ICM advisor and team member received a certificate signed by the Contest Directors and the Head Judge. + +# Judging + +Director + +David C. "Chris" Arney, Dean of the School of Mathematics and Sciences, The College of Saint Rose, Albany, NY + +Associate Directors + +Michael Kelley, Dept. of Mathematical Sciences, U.S. Military Academy, West Point, NY + +Gary W. Krahn, Dept. of Mathematical Sciences, U.S. Military Academy, West Point, NY + +Judges + +Richard Cassidy, Dept. of Industrial Engineering, University of Arkansas, Fayetteville, AR + +Wayne Jerzak, Dept. of Mathematical Sciences, Rensselaer Polytechnic Institute, Troy, NY + +Sandra Nierzwicki-Bauer, Dept. of Biology and Darrin Fresh Water Institute, Rensselaer Polytechnic Institute, Troy, NY + +# Triage Judges + +Darryl Ahner, Mike Corson, Alex Heidenberg, Jerry Kobylski, Gary Krahn, Joe Myers, Mike Phillips, Kathi Snook, Gideon Weinstein, and Brian Winkel, all of the U.S. Military Academy, West Point, NY + +# Source of the Problem + +The Zebra Mussel Problem was contributed by Sandra Nierzwicki-Bauer, Dept. of Biology and Darrin Fresh Water Institute, Rensselaer Polytechnic Institute. + +# Acknowledgments + +Major funding for the ICM is provided by a grant from the National Science Foundation through COMAP. Additional support is provided by the Institute for Operations Research and the Management Sciences (INFORMS). + +We thank: + +- the ICM judges and ICM Board members for their valuable and unflagging efforts; +- the staff of the Dept. of Mathematical Sciences, U.S. Military Academy, West Point, NY, for hosting the triage judging; and +- the staff of the School of Mathematics and Sciences, The College of Saint Rose, Albany, NY, for hosting the final judging. 
+ +# Cautions + +# To the reader of research journals: + +Usually a published paper has been presented to an audience, shown to colleagues, rewritten, checked by referees, revised, and edited by a journal editor. Each of the student papers here is the result of undergraduates working on a problem over a weekend; allowing substantial revision by the authors could give a false impression of accomplishment. So these papers are essentially au naturel. Light editing has taken place: minor errors have been corrected, wording has been altered for clarity or economy, and style has been adjusted to that of The UMAP Journal. Please peruse these student efforts in that context. + +# To the potential ICM Advisor: + +It might be overpowering to encounter such output from a weekend of work by a small team of undergraduates, but these solution papers are highly atypical. A team that prepares and participates will have an enriching learning experience, independent of what any other team does. + +# Appendix: Successful Participants + +KEY: + +P = Successful Participation + +H = Honorable Mention + +$\mathbf{M} =$ Meritorious + +$\mathrm{O} =$ Outstanding (published in this special issue) + +
| INSTITUTION | CITY | ADVISOR(S) | RESULTS |
|---|---|---|---|
| **CALIFORNIA** | | | |
| Harvey Mudd College | Claremont | Michael E. Moody | O,M |
| Humboldt State University | Arcata | Eileen M. Cashman and Jeffery B. Haag | O |
| **COLORADO** | | | |
| Colorado Northwestern Comm. College | Rangely | Richard S. Knaub | P,P |
| University of Southern Colorado | Pueblo | Bruce N. Lundberg | P |
| **FLORIDA** | | | |
| Florida A&M University | Tallahassee | Bruno Guerrieri | P |
| **ILLINOIS** | | | |
| Monmouth College | Monmouth | Christopher Gerard Fasano | H |
| **IOWA** | | | |
| Luther College | Decorah | Reginald D. Laursen | H |
| Simpson College | Indianola | Steve Emerman and Jeff Parmelee | P |
| **KENTUCKY** | | | |
| Asbury College | Wilmore | David L. Couliette | H,H |
| **MARYLAND** | | | |
| Villa Julie College | Stevenson | Eileen C. McGraw | M,P |
| **MASSACHUSETTS** | | | |
| Gordon College | Wenham | Dorothy F. Boorse and Mike Veatch | H |
| MIT | Cambridge | Michael P. Brenner and L. Mahadevan | H |
| **MINNESOTA** | | | |
| Macalester College | St. Paul | Peter W. Vaughan and A. Romero | P |
| **MONTANA** | | | |
| Carroll College | Helena | Kyl Strode | H |
| **NEVADA** | | | |
| Sierra Nevada College | Incline Village | Steven D. Ellsworth | P |
| **NEW JERSEY** | | | |
| Rowan University | Glassboro | Hieu D. Nguyen | P |
| **NEW YORK** | | | |
| Roberts Wesleyan College | Rochester | Carlos A. Pereira | H |
| SUNY Geneseo | Geneseo | Christopher C. Leary and Gregg Hartvigsen | P |
| U.S. Military Academy | West Point | Scott Nestler and Suzanne DeLong | H |
| | | Michael Jaye and Michael Huber | H |
| **NORTH CAROLINA** | | | |
| Elon College | Elon College | Crista L. Coles and J. Todd Lee | H,P |
| N.C. School of Science & Mathematics | Durham | Dot Doyle and Dan Teague | M |
| **OHIO** | | | |
| Youngstown State University | Youngstown | Thomas Smotzer and Scott Martin | H,H |
| **OKLAHOMA** | | | |
| University of Central Oklahoma | Edmond | Jesse W. Byrne and Charlotte K. Simmons | P |
| **OREGON** | | | |
| Clatsop Community College | Astoria | Michael C. Vorwerk | H |
| Eastern Oregon University | La Grande | Richard A. Hermens | P |
| Lewis and Clark College | Portland | Robert W. Owens | O |
| Rogue Community College | Grants Pass | John T. Salinas | P |
| **PENNSYLVANIA** | | | |
| Clarion University | Clarion | Andy M. Turner and Sharon L. Challener | H |
| Dickinson College | Carlisle | Brian S. Pedersen | M |
| **SOUTH DAKOTA** | | | |
| Mount Marty College | Yankton | Bonita L. Gacnik | P,P |
| **TENNESSEE** | | | |
| Austin Peay State University | Clarksville | Nell K. Rayburn and James Bateman | P |
| **VIRGINIA** | | | |
| Chesterfield County Math & Science H.S. | Midlothian | Diane C. Leighty | P |
| **CANADA** | | | |
| University of Saskatchewan | Saskatoon | Tom G. Steele | P |
| **CHINA** | | | |
| Anhui University | Hefei | Wang Hai-xian and Cheng Jun-sheng | P |
| Beijing Univ. of Posts & Telecommunications | Beijing | He Zuguo and Luo Shoushan | M,M |
| Beijing University of Aero. & Astronautics | Beijing | Lin Guiping and Peng Linping | P |
| Chongqing University | Chongqing | He Renbin and Shi Junmin | H,H |
| Dalian University of Technology | Dalian | Zhao Lizhong | H |
| East China University of Science & Technology | Shanghai | Lu Xiwen and Qin Yan | M |
| | | Lu Yuanhong and Su Chunjie | P |
| Experimental High School of Beijing Normal University | Beijing | Wang Jiangci | P |
| Fudan University | Shanghai | Cai Zhijie | H,P |
| Harbin Institute of Technology | Harbin | Shang Shouting and Zheng Tong | P,P |
| Hefei University of Technology | Hefei | Su Huaming | H,P |
| Mathematics School of Nankai University | Tianjin | Fu Lei and Ruan Jishou | H |
| Northeastern University | Shenyang | Xue Dingyu | P |
| Northwestern Polytechnical University | Xian | Xiao Hua Yong and Lu Quan Yi | M |
| | | Nie Yu Feng and Sun Hao | P |
| Peking University | Beijing | Ma Ping and Chen Xin | P |
| School of Math'l Sciences, Peking University | Beijing | Zhang Tao and Yang Xingwen | P |
| Shanghai Jiaotong University | Shanghai | Gong Peimin and Bo | P,P |
| South China University of Technology | Guangzhou | He Chunxiong and Tao Zhisui | M |
| | | Xie Lejun and Hong Yi | H |
| Southeast University | Nanjing | Sun Zhi-zhong | M,H |
| University of Science and Technology of China | Hefei | Tao Dacheng and Ma Jianxin | M |
| | | Sun Guangzhong and Song Zhiwei | H |
| Xian Jiaotong University | Xian | He Xiaoliang | P,P |
| Xian University of Technology | Xian | Xie Xing Long | P |
| Zhejiang University | Hangzhou | Yang Qifang and He Yong | M,M |
| Zhongshan University | Guangzhou | Yuan Zhuojian | H |
| | | Tang Mengxi | H |
| **FINLAND** | | | |
| Päivölä College | Tarttila | Merikki Lappi and Esa I. Lappi | H,P |
| **HONG KONG** | | | |
| Hong Kong Baptist University | Kowloon | Tong Chong Sze | P |
| **IRELAND** | | | |
| University College Dublin | Dublin | Ted Cox | M,H |
# Editor's Note

For team advisors from China, we have endeavored to list family name first, with the help of Susanna Chang '03.

# A Multiple Regression Model to Predict Zebra Mussel Population Growth

Michael P. Schubmehl

Marcy A. LaViollette

Deborah A. Chun

Harvey Mudd College

Claremont, CA 91711

Advisor: Michael E. Moody

# Summary

Zebra mussels (Dreissena polymorpha) are an invasive mollusk accidentally introduced to the United States by transatlantic ships during the mid-1980s. Because the mussels have few natural predators and adapt quickly to new environments, they have spread rapidly from the Great Lakes into many connected waterways. Although the mussel is hardy, sometimes little or no growth is observed in lakes to which it has been introduced; extensive research indicates that the chemical concentrations in these bodies of water may be unsuitable for the mussels.

To quantify the relationship between chemical contents and mussel population growth, we first use the logistic equation,

$$
\frac{dy}{dt} = ry\left(1 - \frac{y}{K}\right),
$$

to model Dreissena population as a function of time. After modeling growth rates under a variety of conditions, we use multiple regression to determine which chemicals affect this growth rate. An extensive literature search supported our finding that population growth is linearly dependent on two primary factors: calcium concentration and pH. After further refining our model using the second set of data from Lake A, we obtained the regression equation

$$
\text{maximum growth rate} = 2338\left[\mathrm{Ca}^{2+}\right] + 39202\,\mathrm{pH} - 334089,
$$

where the maximum growth rate is in juveniles settling per day, and $\left[\mathrm{Ca}^{2+}\right]$ is in $\mathrm{mg/L}$. Using this model, we predict that lakes B and C cannot support a Dreissena population.
Because the levels of calcium in Lake B are close to those required to support a Dreissena population, however, we advise the community near Lake B to use de-icing agents that do not contain calcium. + +# Environmental Factors Affecting Dreissena + +A large body of research links environmental factors such as temperature, pH, calcium ion concentration, and alkalinity to the success or failure of zebra mussel populations. The two factors repeatedly most closely associated with survival are calcium concentration and pH. In a survey of 278 lakes, for example, Ramcharan et al. [1992] found no populated lakes with pH below 7.3 or Ca content below $28.3\mathrm{mg / L}$ . Recent studies have lowered the minimum Ca concentration to $15\mathrm{mg / L}$ for adults and $12\mathrm{mg / L}$ for larvae [McMahon 1996]. The upper bound for pH is somewhere near 9.4 [McMahon 1996]. The optimum conditions for growth are a pH of 8.4 and $34\mathrm{mg / L}$ of Ca [McMahon 1996]. + +Other requirements for survival include alkalinity, which must be kept above $50\mathrm{mg / L}$ [Balog et al. 1995], and dissolved oxygen, which must be above 0.82 ppm (approximately $10\%$ of saturation) [Johnson and McMahon 1996]. Dreissena also cannot survive in magnesium-deficient water; they require a minimum concentration of $0.03\mathrm{mM}$ for a low-density population [Dietz and Byrne 1994]. Sulfate $(\mathrm{SO}_4)$ is also required in small amounts for survival [Dietz and Byrne 1999]. + +Zebra mussels can survive in an amazingly wide range of temperatures, but Van der Velde et al. [1996] determined that exposure to $34^{\circ}\mathrm{C}$ is lethal within 114 minutes and that any temperature above $25^{\circ}\mathrm{C}$ inhibits movement and feeding. Some individuals can tolerate short-term sub-freezing air temperatures [Paukstis et al. 1996]. 
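The survival limits collected above amount to a simple screening rule for a water sample. The sketch below encodes the quoted thresholds (pH 7.3-9.4, adult calcium minimum of 15 mg/L, alkalinity above 50 mg/L, dissolved oxygen above 0.82 ppm); the function name and the sample inputs are illustrative, not part of the paper:

```python
# Threshold screen based on the literature values quoted above
# (Ramcharan et al. 1992; McMahon 1996; Balog et al. 1995; Johnson and
# McMahon 1996). Magnesium and sulfate requirements are omitted for brevity.

def dreissena_limiting_factors(ph, calcium_mg_l, alkalinity_mg_l, dissolved_o2_ppm):
    """Return the factors that fall outside the quoted survival ranges."""
    problems = []
    if not (7.3 <= ph <= 9.4):
        problems.append("pH outside 7.3-9.4")
    if calcium_mg_l < 15:           # adult minimum; larvae need >= 12 mg/L
        problems.append("calcium below 15 mg/L")
    if alkalinity_mg_l <= 50:
        problems.append("alkalinity at or below 50 mg/L")
    if dissolved_o2_ppm <= 0.82:    # roughly 10% of saturation
        problems.append("dissolved oxygen at or below 0.82 ppm")
    return problems

# A sample near the optimum (pH 8.4, Ca 34 mg/L) passes the screen:
print(dreissena_limiting_factors(8.4, 34, 90, 8.0))   # → []
# A soft, slightly acidic lake fails on pH and calcium:
print(dreissena_limiting_factors(7.0, 10, 60, 8.0))
```

Such a screen only rules habitats in or out; the regression model developed later in the paper goes further and predicts a growth rate.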
+ +Although not used by the mussels themselves, phosphorus and nitrogen are essential for freshwater phytoplankton survival, and phytoplankton are the main source of food for Dreissena. Densities of mussel populations are negatively related to both phosphates and nitrates; but iron, chlorine, and sodium have no relationship to the existence or density of populations [Ramcharan et al. 1992]. Chlorophyll content measures the density of phytoplankton and thus decreases drastically after the establishment of a zebra mussel colony [Miller and Haynes 1997]. + +Surprisingly, food availability is not an important factor once a zebra mussel is established. In one study, Dreissena were able to survive without food for 524 days with only a $60\%$ mortality rate [Chase and McMahon 1995]. Once a population has acclimatized, limited reproduction can occur in brackish water below 7.0 ppt salinity [Fong et al. 1995], with little mortality even up to 10 ppt [Kennedy et al. 1996]. Potassium can be tolerated only in low concentrations up to $0.3 - 0.5 \mathrm{mM}$ . Ammonia $\left(\mathrm{NH}_{3}\right)$ is lethal in doses as low as $2 \mathrm{mg} / \mathrm{L}$ [Baker et + +al. 1994]. An extensive literature search revealed no correlation between $\mathrm{NH}_4$ and zebra mussel populations. + +# Constructing the Model + +We need to quantify Dreissena population growth, then examine how this growth is affected by the environment. We use the logistic equation, a standard modeling device in ecology [Gotelli 1998]. We choose a continuous approach because of the huge number of individuals involved, and the logistic equation in particular because its simplicity allows us to make as few assumptions as possible. + +Standard techniques for examining the influence of variables like calcium ion concentrations, pH, and temperature on Dreissena populations include multiple regression and discriminant analysis [Ramcharan et al. 1992]. 
We want to predict actual population growth rates and not just state whether or not a population could exist in certain conditions, so we use multiple regression to relate population growth to chemical concentrations.

# Assumptions

- Population growth rate is proportional to total population.

We assume that the growth rate of an area's population is proportional to the rate at which juveniles settle on plates there. This rate is, in turn, proportional to the total number of larvae present in the water, which is proportional to the total population. Thus, the population growth rate is proportional to the population level.

- Carrying capacity is constant.

Larvae can be thought of as a resource necessary for juveniles to exist. Each breeding season, only a certain number of larvae are produced, so the population can increase only to a certain point. Thus, there is effectively a carrying capacity at work. We assume that this carrying capacity does not depend explicitly on time once the breeding season begins.

- Migration, genetic structure, and age structure do not affect the population.

Although Dreissena populations spread quickly from one region to another, individuals can move only at a slow crawl. Thus, migration of existing population into or out of a region is negligible. Also, there is no evidence for the existence of individuals whose ages or genes dramatically affect their influence on the population, so we neglect age and genetic variation.

- Predation is negligible.

We assume that Dreissena are so numerous that any species that prey on them—and there are few—do not have a substantial impact.

- Sites within a lake can be treated as distinct lakes.

Although all of the data came from a single lake, we model each site as a separate lake. That is, we assume that the introduction of mussels from another part of the lake is equivalent to their introduction into a fresh lake, and we model the population at the new site independently.
+ +# Population Growth Model: The Logistic Equation + +We model a Dreissena population with the logistic equation + +$$ +\frac {d y}{d t} = r y \left(1 - \frac {y}{K}\right), +$$ + +where $r$ is the intrinsic growth rate of the population and $K$ is the carrying capacity. For simplicity, we let $a = r$ and $b = r / K$ , so that + +$$ +{\frac {d y}{d t}} = a y - b y ^ {2}. +$$ + +With the initial condition $y(0) = y_0$ , the equation has closed-form solutions + +$$ +y (t) = \frac {a e ^ {a t} y _ {0}}{a - b y _ {0} + b e ^ {a t} y _ {0}}, +$$ + +shown in Figure 1. Because the data from Lake A measure the population growth rate, what we really want to fit to the data is the derivative of this function, + +$$ +y ^ {\prime} (t) = \frac {a ^ {2} e ^ {a t} y _ {0} (a - b y _ {0})}{(a + b (- 1 + e ^ {a t}) y _ {0}) ^ {2}}, +$$ + +whose graph is shown in Figure 2. We can convert the parameters $a$ , $b$ , and $y_0$ into the position, height, and full width at half maximum (FWHM) of this peak, making it easy to fit to data. + +Because the first data set did not include information about changes in chemical concentration over time, we average the population growth rates over all years after the introduction of Dreissena and fit the model curve to this "average year" at each site. The position and width of the peak are fairly constant from site to site, as we expect, since the breeding season usually peaks around mid- to late August and lasts for about three months. The peak heights, however, are radically different at different sites, ranging from about 38,000 juveniles per day at site 2 (Figure 3) to just 1 juvenile per day at site 10. This variation can be explained only by the environmental conditions there, so we determine how these growth rates varied with chemical concentrations. + +![](images/63f6d074f3ee49a2dd205f750d697aa00e530ad044e8d3b45048595f93fd5d15.jpg) +Figure 1. 
Solution to a generic logistic equation, $y' = ay - by^2$, with population plotted as a function of time.

![](images/f2984c0293a7ef9e6e342c73e9efb6eb49aa7ba05420ca2c4661b526dfa33174.jpg)
Figure 2. The derivative of the solution to a generic logistic equation, showing the time rate of change of population. The peak corresponds to Dreissena breeding season in our model.

![](images/a4745fba8244c3572a4c1c42ec0396459c53c7c3c15d70a8a8296356ef2e8949.jpg)
Figure 3. Actual and model growth rates for an "average year": the derivative of the population growth model, along with data at site 2 of Lake A, the most populous site. The peak height of 38,000 is the quantity that best characterizes the population's success, so it is used in the regression analysis.

# Influence of the Environment: Multiple Regression Analysis

To determine the effect of environmental conditions on growth rates, we must correlate the peak growth rates in the logistic model with the chemical concentrations at each site. To this end, we perform a multiple regression with peak growth rate as the dependent variable and some or all of the chemical concentrations as independent variables.

There are only 10 data points, far fewer than needed to separate the effects of all 11 variables. Fortunately, the literature provides guidance in selecting which variables to use. The dominant factors influencing the success of a Dreissena population are the concentration of calcium and the pH. Although alkalinity seems to be somewhat important, it is included in only the first data set; moreover, it also appears to be closely correlated with calcium concentration, so we exclude it. Another marginally important factor, dissolved oxygen, was not measured in the first data set. According to the literature, other chemical factors are negligible as long as they are present in trace amounts. Thus, we perform the regression on just two variables: calcium concentration and pH.
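Mechanically, a two-variable regression of this kind reduces to an ordinary least-squares fit with an intercept. A minimal sketch of the setup with NumPy follows; the chemistry and rate values are invented placeholders, not the Lake A site measurements:

```python
import numpy as np

# Peak growth rate regressed on [Ca2+] and pH with an intercept.
# All numbers are hypothetical, for illustration only.
ca = np.array([30.0, 25.0, 20.0, 34.0, 16.0])        # [Ca2+] in mg/L
ph = np.array([8.1, 7.9, 7.6, 8.4, 7.4])             # pH
rate = np.array([25e3, 12e3, 3e3, 38e3, 0.1e3])      # peak juveniles/day

# Design matrix for: rate ≈ b1*[Ca] + b2*pH + b0
X = np.column_stack([ca, ph, np.ones_like(ca)])
coef, *_ = np.linalg.lstsq(X, rate, rcond=None)
b1, b2, b0 = coef
print(f"rate ≈ {b1:.0f} [Ca] + {b2:.0f} pH + {b0:.0f}")
```

With only 10 sites and 3 fitted coefficients, the fit is feasible but fragile, which is why the paper leans on the literature to justify dropping the other nine candidate variables.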
The equation we obtain is

$$
\text{maximum rate} = 1687 \left[\mathrm{Ca}^{2+}\right] + 55703\,\mathrm{pH} - 454995, \tag{1}
$$

where the maximum growth rate is in juveniles settling per day and $\left[\mathrm{Ca}^{2+}\right]$ is in $\mathrm{mg/L}$. Thus, by measuring the concentration of $\mathrm{Ca^{2+}}$ and the $\mathrm{pH}$ of the water, we can predict the population growth rate.

# Tests and Refinements

The population growth model fits the data surprisingly well, considering its simplicity. Although in some cases the model could be strengthened by allowing two peaks of different heights, doing so would introduce at least one more degree of freedom and thus make it difficult to perform a meaningful regression with just 10 sites. Because we are interested in the overall success or failure of the population, we accept some inaccuracy in the population model in order to set up a better regression.

As a first check on the model, we use it to predict the growth rates at sites 1-10 in Lake A and compare the predictions to the actual rates (Table 1).

Table 1.
Actual growth rates in Lake A (first data set) vs. predicted growth rates, in thousands per day.
| Site | Actual | Model |
|------|--------|-------|
| 1 | 12 | 18 |
| 2 | 38 | 28 |
| 3 | 15 | 6 |
| 4 | 1 | 10 |
| 5 | 30 | 20 |
| 6 | 0.002 | -100 |
| 7 | 0.003 | 0.2 |
| 8 | 0.2 | 9 |
| 9 | 3 | 14 |
| 10 | 0.001 | 3 |
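The two-variable fit in equation (1) can be reproduced with ordinary least squares. A sketch with made-up site chemistry (the contest data are not reproduced here, so the peak rates are generated from (1) itself), which also inverts the fit to recover the minimum tolerable calcium concentration at pH 7.7:

```python
import numpy as np

# Hypothetical (Ca, pH) values standing in for the 10 sites.
ca = np.array([30., 25., 20., 28., 22., 15., 16., 18., 21., 14.])
ph = np.array([7.9, 7.8, 7.6, 7.8, 7.7, 7.3, 7.4, 7.5, 7.6, 7.3])
# Noiseless rates from equation (1), so the fit recovers its coefficients.
rate = 1687*ca + 55703*ph - 454995

# Ordinary least squares: rate ~ c1*[Ca2+] + c2*pH + c0.
X = np.column_stack([ca, ph, np.ones_like(ca)])
(c1, c2, c0), *_ = np.linalg.lstsq(X, rate, rcond=None)

# Inverting the fit: minimum tolerable [Ca2+] at pH 7.7 (rate = 0),
# matching the ~15.4 mg/L figure quoted in the text.
min_ca = -(c2*7.7 + c0) / c1
assert abs(min_ca - 15.46) < 0.05
```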
Although far from perfect, the agreement gives us confidence that the model can give at least a qualitative idea of how well a Dreissena population will do at a given calcium concentration and pH.

For a second test of the model, we use it to predict the minimum pH and calcium concentration tolerable to Dreissena. At a pH of 7.7, which is typical of the data available for Lake A, the regression equation predicts that the lowest tolerable concentration of $\mathrm{Ca^{2+}}$ would be $15.4\,\mathrm{mg/L}$—very close to the accepted value of $15\,\mathrm{mg/L}$ [McMahon 1996]. At a calcium concentration of $25\,\mathrm{mg/L}$, also typical of freshwater lakes, the model predicts a minimum pH of 7.4; this is only slightly higher than the literature value of about 7.3.

Having established some confidence in our model, we test it against the second data set for Lake A. Because this data set does not include pH, we assume that the values reported in the first data set are accurate and use them in concert with the new calcium concentrations to predict growth rates (Table 2).

Although this agreement is coincidentally somewhat better than that with the first data set, we perform a new regression on both data sets at once to see

Table 2.
Actual growth rates in Lake A (second data set) vs. predicted growth rates, in thousands per day.
| Site | Actual | Model |
|------|--------|-------|
| 1 | 16 | 16.5 |
| 2 | 50 | 27 |
| 3 | 45 | 6 |
| 4 | 10 | 9.5 |
| 5 | 30 | 20 |
| 6 | 15 | -10 |
| 7 | 0.02 | -0.1 |
| 8 | 0.5 | 8 |
| 9 | 8 | 150 |
| 10 | 0.03 | 5 |
if we can improve the model. This gives us the new regression equation

$$
\text{maximum rate} = 2338 \left[\mathrm{Ca}^{2+}\right] + 39202\,\mathrm{pH} - 334089. \tag{2}
$$

Using this new equation, we predict the peak growth rates at all ten sites, based on data from both sets. The results are given in Table 3.

Table 3.
Actual growth rates in Lake A (from both data sets) vs. predicted growth rates from the combined regression, in thousands per day.
| Site | Set 1 | Model | Set 2 | Model |
|------|-------|-------|-------|-------|
| 1 | 12 | 30 | 16 | 28 |
| 2 | 38 | 32 | 50 | 30 |
| 3 | 15 | 11 | 45 | 10 |
| 4 | 0.001 | 12 | 10 | 12 |
| 5 | 30 | 20 | 30 | 19 |
| 6 | 0.002 | -5 | 0.015 | -5 |
| 7 | 0.003 | 36 | 0.020 | 5 |
| 8 | 0.150 | 11 | 0.450 | 10 |
| 9 | 3 | 14 | 8 | 15 |
| 10 | 0.001 | 2 | 0.030 | 5 |
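Equation (2) can be evaluated directly for any water chemistry. A one-line sketch (the Lake C figures, about $1.85\,\mathrm{mg/L}$ of calcium and pH 6.0, are those quoted in the Results section):

```python
def peak_rate(ca_mg_per_L, ph):
    """Combined-regression equation (2): peak growth rate in juveniles/day."""
    return 2338*ca_mg_per_L + 39202*ph - 334089

# At water chemistry like Lake C's, the predicted rate is strongly
# negative, i.e. no viable population.
assert peak_rate(1.85, 6.0) < 0
```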
The revised model illustrates the sensitivity of the coefficients to changes in the data. Although the additional data incorporated are from the same physical locations as the first data set, they have a significant impact on the regression equation. This modification improves some predictions and worsens others.

# Strengths and Weaknesses

Like any model, the one presented above has its strengths and weaknesses. Some of the major points are presented below.

# Strengths

- Applies widely accepted techniques.

The logistic equation is often used to model population growth under the conditions set forth in our assumptions [Gotelli 1998]. Multiple regression analysis has been used effectively in predicting Dreissena populations previously [Ramcharan et al. 1992].

- Produces predictions in agreement with the data and other models.

Although agreement with the data provided is far from perfect, our model produces peak growth rates that are largely consistent with observed growth rates. The model also correctly predicts minimum $\left[\mathrm{Ca}^{2+}\right]$ and pH levels for Dreissena survival. Additionally, it is consistent with other models in the literature. Ramcharan et al. [1992], for instance, give a probability-of-survival model

$$
A = 0.045 \left[\mathrm{Ca}^{2+}\right] + 1.246\,\mathrm{pH} - 11.696
$$

that is very nearly a constant multiple of our (1).

- Correctly predicts results at Lakes B and C.

Equation (2) predicts population growth rates of $-8,000$ juveniles/day for Lake B and $-145,000$ juveniles/day for Lake C. That is, the lakes are incapable of supporting mussel populations. This is consistent with the fact that both lakes are well below the minimum calcium and pH requirements.

# Weaknesses

- Extremely sensitive to changes in experimental data.

Based on the results described above, this seems to be a fairly substantial problem with the model.
Given the extraordinarily small amount of data available, though, it is hardly remarkable that a change in any given peak value changes the model significantly. If more data were available, we would expect much better averaging-out of error and a regression equation with much better predictive power.

- Neglects the effects of all factors but $\left[\mathrm{Ca}^{2+}\right]$ and pH.

Again, while this would initially appear to limit the predictive power of the model, the literature supports our selection of these two factors as the dominant ones influencing population growth [Ramcharan et al. 1992].

# Results and Interpretation

To apply the model to the data for Lakes B and C, we assume that the values given for the concentrations are representative of the entire lake. With only one data point for each lake, we must extrapolate. Thus, the model's predictions might not hold in areas where the concentration or pH differs significantly from this value.

The model clearly indicates that there is no chance of zebra mussel infestation in Lake C, consistent with the fact that the pH in the lake is far too low to support a mussel population. The literature indicates zero growth at a pH below about 7.3; the highest measurement of pH in Lake C is 6.0, which is clearly far too acidic. In addition, the calcium concentration must be greater than $12\,\mathrm{mg/L}$ for larval survival; Lake C is far below this cutoff, with a mere $1.85\,\mathrm{mg/L}$ at maximum.

The chemical data for Lake B are less clear-cut. The pH is in the required range, but the calcium concentration is too low for adult survival. Our model and the literature both indicate that it would take a significant shift in the lake's calcium content for it to support zebra mussels.

Although taken over the course of several years, the data for Lakes B and C are not spread out spatially. It is possible that some region in either lake has much higher pH and calcium concentrations.
For example, Lake George in the Adirondacks was initially thought to be immune to zebra mussels because of its water chemistry, but they were later discovered in a small region near a culvert with elevated calcium concentrations. Scientists are now concerned about Dreissena's potential to spread to other parts of Lake George, as the mussels have an amazing ability to adapt once they have settled [Revkin 2000].

Other models strongly agree with our conclusions about Lakes B and C. Hincks and Mackie [1997] also found that zebra mussel populations depend only on pH and calcium concentration. Their formula,

$$
p = \frac{e^L}{1 + e^L},
$$

where $p$ is the predicted probability of mortality and

$$
L = 134.7 - 3.659 \left[\mathrm{Ca}^{2+}\right] - 15.868\,\mathrm{pH} + 0.43 \left[\mathrm{Ca}^{2+}\right] \mathrm{pH},
$$

predicts $100\%$ mortality in Lake C and $99\%$ in Lake B; a population might be able to make some headway if it could establish itself in Lake B.

Ramcharan et al. [1992] modeled the probability of a population becoming established, finding through discriminant analysis that only pH and calcium levels are significant factors. The discriminant function is

$$
A = 1.246\,\mathrm{pH} + 0.045 \left[\mathrm{Ca}^{2+}\right] - 11.696,
$$

where $A$ must be greater than $-0.638$ for a population to exist. This equation, which is nearly a constant multiple of our (1), suggests that no populations would establish themselves in either lake.

# Recommendations on De-Icing

Since the 1940s, America has used de-icing agents, primarily road salt (NaCl), to break the bond between road and ice. Ions in the water decrease the freezing temperature and melt the ice, promoting safer driving during icy weather by increasing wheel/road traction. This proposal will show why Ice Ban or potassium acetate is a better candidate for de-icing near a lake, and discuss anti-icing as an alternative method of combating ice.
Using standard road salt near freshwater lakes is not a good idea. While doing so might have a negative effect on the zebra mussel population, it would certainly have a greater negative effect on other aquatic life. Since zebra mussels are able to adapt to environmental changes more quickly than other freshwater species, any change in the chemical content of the lake will probably result in an even greater abundance of mussels [Kennedy et al. 1996]. A good de-icing method should remove the road hazard without profoundly impacting any ecosystem.

Another consideration specific to zebra mussels is that the de-icing chemicals used should not promote their growth. Common agents containing calcium, such as calcium chloride and calcium magnesium acetate, should therefore not be used. This is especially important near lakes with low levels of calcium, such as Lake B, where the introduction of a source of calcium might lead to successful colonization by zebra mussels.

According to the comprehensive report *Liquid Road Deicing Environment Impact* [Cheng and Guthrie 1998], the remaining common de-icing chemicals include common road salt, magnesium chloride, potassium acetate, and Ice Ban. This authoritative report lays out the effects of each agent on vegetation, soils, water quality, aquatic life, and people. Sodium chloride (NaCl), the most common, tested worst of all. It damages vegetation, kills some freshwater fish, pollutes groundwater, and causes air-quality concerns. The primary concern with magnesium chloride is that the chloride readily separates from the compound and can pollute groundwater or freshwater lakes. The chloride often tastes unpleasant to people, and it can kill freshwater fish.

Potassium acetate and Ice Ban, however, are less ecologically intrusive. Potassium acetate may affect plant growth slightly and is a mild skin and eye irritant.
In very high concentrations, it has been shown to kill rats, but the report does not predict any animal deaths at normal concentrations. Acetate is biodegradable in soil, and the remaining potassium has no negative effect on the surrounding environment. Ice Ban, on the other hand, is completely biodegradable. Since it is completely organic, it has no effect on vegetation, aquatic life, or air quality [Cheng and Guthrie 1998]. We therefore suggest both potassium acetate and Ice Ban as possible de-icing agents, favoring the latter as the more ecologically benign and less expensive option.

Potassium acetate is available as a liquid de-icing agent under the names Enviro-MLT™ (from Midwest Industrial Supply), Cryotech CF7 or E36LRD (from Cryotech), or Safeway KA Liquid (from Clariant) [Cheng and Guthrie 1998]. Ice Ban is available from Ice Ban America. For your examination, we have included cost and other information in Table 4.

Table 4. Data on de-icing materials and the anti-icing solution RWIS.
| Company | Amount required per 1,000 sq ft | Cost/gal | Phone Number |
|---------|---------------------------------|----------|--------------|
| Enviro-MLT | 0.x | 4.67 | 1.800.321.0699 |
| Cryotech CF7 | 0.5 | 3.30 | 1.800.346.7237 |
| E36LRD | 0.5 | 2.80 | 1.800.364.7237 |
| Safeway KA Liquid | 0.4 | 4.00 | 1.419.479.8650 |
| Ice Ban | 0.76 | 0.75 | 1.888.488.4273 |
| RWIS | variable | $3000/unit | 1.800.363.6224 |
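The cost figures can be cross-checked with a little arithmetic. The sketch below assumes the "Amount required" column of Table 4 is in gallons (the table does not state units), omits Enviro-MLT because its application rate is not fully legible, and reuses the RWIS savings figure of $659.50 per mile per winter quoted in the surrounding text:

```python
# Application cost per 1,000 sq ft: gallons required x cost per gallon.
# (Assumes the "Amount required" column is in gallons; Enviro-MLT is
# omitted because its application rate is not fully legible.)
agents = {
    "Cryotech CF7":      (0.5,  3.30),
    "E36LRD":            (0.5,  2.80),
    "Safeway KA Liquid": (0.4,  4.00),
    "Ice Ban":           (0.76, 0.75),
}
cost = {name: gal * price for name, (gal, price) in agents.items()}
assert min(cost, key=cost.get) == "Ice Ban"   # cheapest per area, as recommended

# RWIS payback: $3,000 per unit per mile vs. savings of $659.50
# per mile per winter (figures quoted in the text).
payback_years = 3000.0 / 659.50
assert 4 < payback_years < 5   # consistent with "pay ... in five years"
```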
De-icing, however, is no longer considered to be the best solution to the problem of icy roads. The Strategic Highway Research Program (SHRP), a five-year program started to investigate the anti-icing of roads, has led an anti-icing initiative in 9 states and shown it to be effective there. Currently, more than 20 states are experimenting with the strategy, including states like Michigan that have zebra mussel populations, making the solution especially relevant [Federal Highway Administration 1997].

Anti-icing is a preventative measure that stops ice from bonding to the road. Simply put, the roads are salted before the snow hits. The precipitation remains slushy and wet instead of frozen and slick. The slush is easily plowed off the road, and the "salt" does not need to be replaced as often, since it is not plowed off with the ice. This leads to many other benefits, but first we discuss cost.

Instead of reacting to snow, a state's Department of Transportation anticipates it from weather forecasts and road conditions. Many states have installed Road Weather Information Systems (RWIS), which report real-time information on pavement conditions. Installation is the major cost of the conversion. In states with at least 5 snowstorms a year, the initial cost is quickly offset [Chollar 1996].

Anti-icing has saved money in several ways. It requires less "salt" to keep the roads safe. Applying the chemical to the road before or early in a winter storm ensures that when the snow is plowed off the road, the anti-icing agent remains on the surface, while de-icing agents need to be reapplied after each plowing. Using less salt is also beneficial because de-icing agents are notoriously corrosive: the vehicles applying the "salt" corrode less each year and don't need to be replaced as often. The study predicted that overall the states could save almost $108 million a year [Federal Highway Administration 1997].
Less anti-icing agent on the road also means less agent on passing cars and less corrosion of those cars. Less "salt" also helps protect the environment by reducing pollution from foreign chemicals [Chollar 1996].

For this method, we recommend using de-icing agents (which also work as anti-icing agents) and the RWIS. In areas with only 100 hours of storms per winter, the savings in de-icing chemicals amount to $659.50 per mile sanded or salted; with an RWIS unit every mile, the savings pay for the cost of installation in five years [Chollar 1996].

# Conclusions

We treated each data site from Lake A as an independent "lake," providing 10 data points. Using these data, previous work showing that zebra mussel population growth depends linearly on pH and calcium concentration, and multiple regression, we constructed a model for zebra mussel population. We refined the model using the second set of data taken from the same lake. The resulting model is good but would be better with data from more lakes.

We have presented a recommendation on how to de-ice roads near Lake B. Lake B lacks the high levels of calcium required to support zebra mussel populations, so we suggest that the local community not use de-icing salts containing calcium but instead adopt an anti-icing strategy using environmentally non-intrusive chemicals, such as Ice Ban.

# Report on Controlling Zebra Mussel Populations

To the Lakeside Community:

You are planning to control the zebra mussel population by introducing a nonindigenous species, the round goby fish. We strongly encourage you not to do so. This report explains how round gobies can adversely impact the surrounding ecological system, without producing a substantial impact on the zebra mussel population.

What round gobies lack in size, they more than make up for in aggression. Their fierce nature allows them to dominate prime spawning grounds, forcing native species to move elsewhere or die off.
In addition to eating zebra mussels, the gobies will attack other fish species' eggs and young, thus diminishing the populations of previously well-established species. With their well-developed sensory system, round gobies can feed in complete darkness—an obvious advantage over competitors. Round gobies can quickly come to dominate a new area. In the rocky parts of Calumet Harbor, for example, the population of gobies already exceeds 20 individuals per square meter—that's 20 fish in a space the size of a bathtub! Their presence has caused a significant drop in the biodiversity of the ecosystem.

Round gobies are zebra mussels' natural predator in their native habitat, the Black Sea. Adult gobies can eat an average of 47 mussels per day. However, this number isn't particularly impressive when there can be up to 1 million mussels per square meter. Experts agree that gobies are a hopelessly inadequate means of controlling a zebra mussel population.

Another reason not to introduce the gobies is the safety of the people in your city. The potential danger arises because zebra mussels are filter feeders, often processing a liter of water per day. The mussels accumulate pollutants from the water, particularly PCBs (polychlorinated biphenyls), in large amounts in their tissues. Usually these toxins are not passed on, falling to the lake floor when the mussels die. If round gobies eat the mussels, however, the toxins are passed to the gobies. Because many game fish eat gobies, the toxins are passed on again. This time, though, the toxins are passed to fish that might be sold and consumed; the threat to human health is obvious.

We would like to review some possible alternatives for solving your initial zebra mussel problem.

The most commonly considered method for removing zebra mussels is chemical treatment. This method does not take into account the fact that zebra mussels are amazingly tolerant of large ranges of all chemicals—much more so than other species.
Raising the toxicity level high enough to ensure the complete removal of the zebra mussels would cause other species to die.

Some methods of removal are effective only in a laboratory and seem ridiculously unsuitable for use in an open body of water. Experiments have shown, for example, that electric current and ultrasonic cavitation can kill zebra mussels. Even if such a method could be controlled and implemented, it would undoubtedly harm other species. Inventions such as the vacuum pump and the blasting hose are not practical except in factories. It has been shown that the neurotransmitter serotonin forces the mussels to spawn before the proper season, thus killing the young. The use of serotonin has some promise; however, this chemical may affect other species, including humans. More research is required in this area.

We regret that we must close this presentation on a gloomy note. Currently, the only available method of controlling this species is to stop it from spreading to other lakes. Since zebra mussels usually spread on the hulls of boats, we encourage you to inform your residents about the following procedures, suggested by the Sea Grant Extension, to prevent the spread of zebra mussels:

- Inspect your boat's hull carefully. If surfaces feel grainy, tiny zebra mussels may be attached. Scrape off any "hitchhiking" mussels.
- Drain all water from the boat, including the bilge water where they often reside.
- Dry your boat in the sun for two to five days, or use a pressurized steam cleaner to ensure the hull is sterile.
- Throw leftover live bait away or give it to someone to use at the same water body.

We also encourage you to post signs at boat launches in your area to promote these simple guidelines for boaters.

# References

Baker, Patrick, et al. 1994. Criteria for predicting zebra mussel invasions in the mid-Atlantic region. Virginia Institute for Marine Science.
Balog, George G., et al. 1995.
Baltimore city adopts a proactive approach to zebra mussel control using potassium permanganate. Proceedings of the Fifth International Zebra Mussel and Other Aquatic Nuisance Organisms Conference. +Chase, R., and Robert F. McMahon. 1995. Starvation tolerance of zebra mussels, Dreissena polymorpha. The Fifth International Zebra Mussel and Other Aquatic Nuisance Species Conference. +Cheng, K.C., and T.F. Guthrie 1998. Liquid Road Deicing Environment Impact. Levelton Engineering Ltd.: Richmond, B.C., Canada. +Chollar, Brian. 1996. A revolution in winter maintenance. *Public Roads*. http://www.tfhrc.gov/pubrds/winter96/p96w2.htm. +Dietz, Thomas H., Diondi Lessard, Harold Silverman, and John W. Lynn. 1994. Osmoregulation in Dreissena polymorpha: The Importance of Na, Cl, K, and particularly Mg. Biological Bulletin 187: 76-83. +Dietz, Thomas H., and Roger A. Byrne. 1999. Measurement of sulfate uptake and loss in the freshwater bivalve Dreissena polymorpha using a semimicroassay. Canadian Journal of Zoology 77 (2): 331-336. +Endicott, D., R.G. Kreis, L. Mackelburg, and D. Kandt. 1998. Modeling PCB bioaccumulation by the zebra mussel (Dreissena polymorpha) in Saginaw Bay, Lake Huron. Journal of Great Lakes Research 24 (2): 422-426. +Federal Highway Administration, U.S. Department of Transportation. 1996. Manual of Practice for an Effective Anti-Icing Program. http://www.fhwa.dot.gov/reports/mopeap/mop0296a.htm. +______ 1997. New technologies keep snow and ice—and winter maintenance expenses—under control. Focus. http://www.tfhrc.gov/focus/archives/37snow.htm. +Fong, P.P., K. Kyozuka, J. Duncan, et al. 1995. The effect of salinity and temperature on spawning and fertilization in the zebra mussel Dreissena polymorpha (Pallas) from North America. Biological Bulletin 189: 320-329. +Ghedotti, M.J., J.C. Smihula, and G.R. Smith. 1995. Zebra mussel predation by round gobies in the laboratory. Journal of Great Lakes Research 21 (4): 665-669. +Gotelli, Nicholas J. 1998. 
A Primer of Ecology. 2nd ed. Sunderland, MA: Sinauer Associates, Inc. + +Hincks, Sheri S. and Gerald L. Mackie. 1997. Effects of pH, calcium, alkalinity, hardness, and chlorophyll on the survival, growth, and reproductive success of zebra mussel (Dreissena polymorpha) in Ontario lakes. Canadian Journal of Fisheries and Aquatic Sciences 54: 2049-2057. +Johnson, P.D., and R.F. McMahon. 1996. Tolerance of zebra mussels (Dreissena polymorpha) and Asian clams (Corbicula fluminea) to varying levels of hypoxia. The Sixth International Zebra Mussel and Other Aquatic Nuisance Species Conference. +Jude, David J. 1997. Round gobies: Cyberfish of the Third Millennium. Great Lakes Research Review 3 (1): 27-34. +Kennedy, V.S., M. Asplen, and T. Hall. 1996. Salinity and behavior of zebra and quagga mussels. The Sixth International Zebra Mussel and Other Aquatic Nuisance Species Conference. +Marsden, J.E. and D.J. Jude. 1995. Round gobies invade North America. Illinois Natural History Survey, University of Michigan, and Illinois-Indiana and Michigan Sea Grant Programs. +McMahon, Robert F. 1996. The physiological ecology of the zebra mussel, Dreissena polymorpha, in North America and Europe. American Zoologist 36 (3): 339-363. +Michigan Sea Grant College Program. 1994. Potential control of zebra mussels through reproductive intervention. +Miller, Steven J., and James M. Haynes. 1997. Factors limiting colonization of Western New York creeks by the zebra mussel (Dreissena polymorpha). Journal of Freshwater Ecology 12 (1): 81-88. +Paukstis, G.L., F.J. Janzen, and J.K. Tucker. 1996. Response of aerially-exposed zebra mussels (Dreissena polymorpha) to subfreezing temperatures. Journal of Freshwater Ecology 11 (4): 513-520. +Ramcharan, Charles W., Dianna K. Padilla, and Stanley I. Dodson. 1992. Models to predict potential occurrence and density of the zebra mussel, Dreissena polymorpha. Canadian Journal of Fisheries and Aquatic Sciences 49: 2611-2620. +Ray, William J., and Lynda D. 
Corkum. 1997. Predation of zebra mussels by round gobies, Neogobius melanostomus. *Environmental Biology of Fishes* 50: 267-273. +Revkin, Andrew C. 2000. Invasive mussels turn up in lake thought to be immune. New York Times 149 (51354): B4. +U.S. Dept. of Transportation. 2001. Snow and Ice Control. http://www.fhwa.dot.gov/winter/roadsvr/icebro.htm. +Van der Velde, D., S. Rajagopal, and H.A. Jenner. 1996. Response of zebra mussel, Dreissena polymorpha to elevated temperatures. Sixth International Zebra Mussel and Other Aquatic Nuisance Species Conference. + +# About the Authors + +![](images/f10c72b3112ca2517e88de433b17fe71a0e185e5788caa37b4f1397ff6c49e10.jpg) + +Michael Schubmehl is a senior mathematics major. He has done research in various areas of physics and applied mathematics, including fluid dynamics and laser fusion. After graduation, he plans to teach at the high school level for a few years, then enter graduate school in pure mathematics. + +![](images/8e0256eb5dfbb03ed43453bfa49eb83b9bdc707a88ef5f78c9d80a40410e3222.jpg) + +Marcy LaViollette is a senior environmental engineering major. She spent the past two summers working for Environmental Systems Research Institute (ESRI) and is planning on a career in environmental engineering, particularly related to air or water. In her free time, Marcy enjoys ballroom dance, juggling, and a cappella music. + +![](images/0b15c2a6dfa9ffe326816d32371bf5502797b8ebf7eec2ba1adef90c8790a5cc.jpg) + +Deborah Chun is a senior majoring in engineering and in mathematics. She is working on a Harvey Mudd clinic team with Irvine Ranch Water District to model fluid flow in a water reservoir. Previously, she worked with Boeing to model an electromechanical part and with American Insurance Group to model claims incurred but not reported. After graduation, she plans on a career in systems modeling and engineering. + +# Identifying Potential Zebra Mussel Colonization + +David E. 
Stier + +Marc Alan Leisenring + +Matthew Glen Kennedy + +Humboldt State University + +Arcata, CA 95221 + +Advisor: Eileen M. Cashman + +# Summary + +Both environmental and anthropogenic factors influence the spread of zebra mussels to new areas. Variations in water quality can affect both proliferation and mortality, which greatly influence colonization rate. High levels of calcium and alkalinity in fresh waters tend to increase juvenile zebra mussel population. Dreissena also requires specific ranges of pH, temperature, and potassium concentration for propagation. Consumption by predators and spread by humans also influence colonization and population dynamics. + +We develop a lumped-parameter stochastic model using data from a lake with known water quality, using optimal water quality parameter ranges for zebra mussel survival. The model predicts the susceptibility to colonization of a lake with known water quality. + +We find a significant probability for seasonal colonization in Lake B but negligible probability for Lake C. + +The use of de-icing agents in the vicinity of Lake B may increase the probability of colonization, due to elevated calcium concentrations in the lake. + +# Literature Review + +# History + +The zebra mussel originated in the Caspian and Black Sea regions. By the early 19th century, a well-developed population was established throughout + +the major drainages of Europe in connection with extensive canal building [USGS 2001]. Researchers surmise that the zebra mussel first arrived in North America in the mid-1980s in a ballast tank of a commercial vessel; the first recorded population appeared in Lake St. Clair, Canada [Herbert et al. 1989]. By 1990, the zebra mussel habitat encompassed the Great Lakes and soon after entered the Mississippi River drainage via the Illinois River. Today, zebra mussels exist in at least 21 states [USGS 2001]. 
# Factors Influencing Propagation

# Physical Mechanism of Propagation

Anthropogenic activities are considered the most influential factor in spreading zebra mussels [Mackie and Schloesser 1996]. Zebra mussels attach themselves to firm surfaces including boat hulls, nets, buoys, and floating debris [Balcom and Rohmer 1994; Ram and McMahon 1996]. A zebra mussel dislodged in transport can start a new population.

Natural dispersion mechanisms include birds, water currents, insects, and other animals [Mackie and Schloesser 1996; Hincks and Mackie 1997]. When carried by currents, microscopic zebra mussel larvae, called veligers, can disperse quickly [Mackie and Schloesser 1996]. The mussels can travel large distances in the two- to three-week free-swimming veliger stage [Rice 1995].

The species has demonstrated resilience to long overland trips. Zebra mussels survive longest under cool, moist conditions, similar to the environment in a boat hull [Payne 1992].

# Habitat

Zebra mussel habitat includes freshwater lakes and reservoirs, as well as cooling ponds, quarries, and the irrigation ponds of golf courses. The species can also survive in brackish water, provided salinity does not exceed 8 to 12 parts per thousand (ppt) [Mackie and Schloesser 1996].

Zebra mussels prefer hard substrates [Heath 1993] but can survive on soft sediment [Stoeckel et al. 1997]. Current velocities up to $2\,\mathrm{m/s}$ provide optimal settlement conditions, while speeds ranging from $0.5\,\mathrm{m/s}$ to $1.5\,\mathrm{m/s}$ best support growth [Rice 1995].

# Water Quality

pH Zebra mussels have colonized areas with pH values ranging from 7.0 to 9.0. A pH of 7.5 promotes optimum growth [Rice 1995].

Potassium The optimal range of potassium in the environment is $0.5-1.5\,\mathrm{mg/L}$, with survival at $2-3\,\mathrm{mg/L}$ [Dietz et al. 1996].
Calcium and Alkalinity Calcium and alkalinity are the strongest influences on zebra mussel growth and reproduction [Heath 1993]. Zebra mussels require a $\mathrm{Ca^{2+}}$ concentration of $12\,\mathrm{mg/L}$ and a $\mathrm{CaCO_3}$ concentration of $50\,\mathrm{mg/L}$ [Heath 1993]. Ramcharan et al. [1992] found that European lakes with pH below 7.3 and $\mathrm{Ca^{2+}}$ concentration below $28.3\,\mathrm{mg/L}$ lacked zebra mussels, but in North America there are numerous examples of invasion at far lower calcium concentrations.

Dissolved Oxygen Heath [1993] indicates a minimum oxygen threshold of $25\%$ oxygen saturation, or $2\,\mathrm{mg/L}$ at $25^{\circ}\mathrm{C}$. Dense overgrowths of zebra mussels may deplete dissolved oxygen enough to cause large die-offs of Dreissena and other aquatic species [Ramcharan et al. 1992].

Nutrients and Phytoplankton A water body's chlorophyll-a concentration is a factor in the growth variability of the zebra mussel [Mackie and Schloesser 1996]. Zebra mussels compete with herbivorous zooplankton and fish for phytoplankton [Ramcharan et al. 1992]. Zebra mussels collect their food by ciliary filter feeding [McMahon 1996]; that filtering increases water clarity, and the added light penetration fosters growth in the lake's benthic population [MacIsaac 1996], which can increase the nuisance aquatic weed biomass.

Salinity Research suggests the optimal salinity for adults is 1 ppt at high temperatures $(18-20^{\circ}\mathrm{C})$ and 2-4 ppt at lower temperatures $(3-12^{\circ}\mathrm{C})$ [Kilgour et al. 1994; Mackie and Schloesser 1996]. Rice [1995] suggests 1 ppt as optimal for growth, with short-term tolerance of 12 ppt; but zebra mussels have a high ability to adapt to nonideal salinity and other water quality conditions.
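The water-quality thresholds collected so far can be combined into a simple screening function. A sketch (the threshold values are the ones quoted above; the all-must-pass logic is our own simplification, not the paper's model):

```python
def suitable(ph, ca_mg_L, do_mg_L, k_mg_L):
    """Screen a water body against the survival thresholds quoted above."""
    return (7.0 <= ph <= 9.0       # colonized pH range [Rice 1995]
            and ca_mg_L >= 12.0    # minimum calcium [Heath 1993]
            and do_mg_L >= 2.0     # minimum dissolved oxygen at 25 C [Heath 1993]
            and k_mg_L <= 3.0)     # potassium survival ceiling [Dietz et al. 1996]

assert suitable(7.5, 25.0, 8.0, 1.0)        # favorable chemistry passes
assert not suitable(6.0, 1.85, 8.0, 1.0)    # acidic, calcium-poor water fails
```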
Temperature For reproduction, the zebra mussel requires prolonged periods above $12^{\circ}\mathrm{C}$ and maximum temperatures ranging from 18 to $23^{\circ}\mathrm{C}$ [Heath 1993; McMahon 1996]. It cannot survive temperatures greater than $32^{\circ}\mathrm{C}$; the lower survival threshold is $0^{\circ}\mathrm{C}$ [Heath 1993].

Predators Crustacean zooplankton and larval fish consume the larval stages of the mussel [Mackie and Schloesser 1996]. Adult Dreissena provide food for crayfish, fish, and waterfowl [Mackie and Schloesser 1996]. Fish observed consuming zebra mussels include yellow perch, white perch, walleye, white bass, lake whitefish, lake sturgeon, and the round goby [MacIsaac 1996; French 1993]. Potential consumers include the freshwater drum, redear sunfish, pumpkinseed, copper and river redhorse, and common carp. Round gobies consume 50-100 zebra mussels per day, depending on the size of the mollusk [Ghedotti et al. 1995]. Diving waterfowl consume significant amounts of zebra mussels under the proper conditions. Hamilton et al. [1994] found that ducks devoured $57\%$ of the autumn mussel biomass in Lake Erie; but because the lake iced over and winter predation consequently ceased, continued juvenile growth diminished the effect of the consumption.

# Modeling Zebra Mussels

Zebra mussel populations demonstrate high sensitivity to small changes in water quality parameters. In some lakes, the long-term population size remains fairly constant, while populations in other lakes fluctuate greatly from year to year.

# Modeling History

Some of the more common types of models developed include multivariate, bioenergetic, and probabilistic:

- Multivariate models have been used to determine the environmental factors that most influence the ability of Dreissena to establish viable populations [Ramcharan et al. 1992].
- Bioenergetic models focus on modeling individual zebra mussel growth as a function of certain environmental factors [Schneider 1992].
- Probabilistic models use discrete probabilities associated with environmental variables known to contribute to successful colonization of freshwater bodies to evaluate the susceptibility of particular lakes to zebra mussel colonization [Miller and Ignacio 1994].

# Model Development

# Model Choice and Approach

We develop an analytical model that is transient, lumped-parameter, and stochastic.

From the literature, we obtained ranges of the water quality parameters that are necessary for survival. Using a time step of one year, we determine the probability of survival from those ranges and update the population. We use the data on Lake A to calibrate and verify the model's ability to predict colonization.

# Data Considerations

The data files provided contain water quality and population data for Lake A. Shared by most files were calcium concentration (mg/L), chlorophyll concentration $(\mu\mathrm{g/L})$, potassium concentration (mg/L), temperature $(^{\circ}\mathrm{C})$, and pH, all of which the literature shows are important factors.

We use the average juvenile population for a given year for comparison with the model results, regardless of the amount of data available for that year. Therefore, for each time step, we need an annual average and standard deviation for each parameter and each population. We assume that the reported average value represents the average for the year.

# Review of Literature

Calcium, alkalinity, phytoplankton, potassium, water temperature, and pH are important for survival. Because alkalinity and calcium concentration are interdependent, we use only calcium. We use chlorophyll-a in place of phytoplankton to represent available food. We summarize in Table 1 the ranges of water quality parameters required for survival.

Table 1.
Optimal water quality conditions for survival of each age class (entries are lower and upper limits).

| Age Group | Ca (mg/L) | Chl-a (μg/L) | K (mg/L)  | pH      | Temp (°C) |
|-----------|-----------|--------------|-----------|---------|-----------|
| Birth     | 20-50+    | 0-15         | 0.05-1.2  | 7.7-8.5 | 12-21     |
| 1         | 20-50+    | 0-15         | 0.05-1.2  | 7.7-8.5 | 12-21     |
| 2         | 15-50+    | 3-20         | 0.05-1.3  | 7.3-8.7 | 5-28      |
| 3         | 10-50+    | 8-30         | 0.05-1.5  | 5.2-9.3 | 0-31      |
| 4         | 10-50+    | 8-30         | 0.05-1.5  | 5.2-9.3 | 0-31      |
# Methodology

The model uses assumptions about probabilities of survival at specific age classes.

# Age Classes

We divide zebra mussels into four distinct age classes: class 1 (0-1 years), class 2 (1-2 years), class 3 (2-3 years), and class 4 (3-4 years). At the end of each time step (= one year), the population of each age class moves into the next age class, except that class 4 dies. Values for each water quality parameter are specified at each time step.

# Survival Probabilities

The range of values for each parameter is divided into smaller ranges, each assigned a survival probability. A normal distribution is used to create a probability distribution for each parameter: for each age class, we take the mean to be the midpoint of the optimal range found in the literature and assume that the limits of that range lie one standard deviation from the mean. Newborns and age class 1 share the same ranges and probabilities, as do age classes 3 and 4; age class 2 has its own ranges and probabilities.

# Constraints and Assumptions

For each age group, the probabilities of survival at each time step for each of the water quality parameters are assumed to be mutually independent. Thus, the probability of survival of each age class is the product of its probabilities of survival at each water quality value.

Additional constraints are also included:

- Age classes 2, 3, and 4 are able to reproduce in water above $12^{\circ}\mathrm{C}$.
- The survival of eggs and larvae to age class 1 depends on their probability of migration out of the system and the probability of survival at the current water quality conditions. The probability of migration is calculated as a function of calcium concentration [Hincks and Mackie 1997].
- Since the number of eggs per adult female varies in the literature (4,000-100,000), we use its value as a parameter for calibration.
- An initial number of juveniles (age class 1), specified by the user, is introduced at the first time step, and no additional veligers or juveniles enter the system from outside sources.
- The model allows the user to decide which parameters to consider in the probability calculations, depending on the availability of data.

The model was programmed in Fortran 90 with a Lahey compiler under the SuSE Linux operating system.

# Calibration

The model was calibrated using the data in the files LakeAChem1.xls and LakeAPopulation1.xls. The water quality data are provided as the median, maximum, minimum, and 25th and 75th percentiles of data for 1992 to 1999. We assume that the median equals the mean and that the average difference between the mean and the 25th and 75th percentiles is the standard deviation.

We use a random number generator to create two sets of random numbers between 0 and 1, for $n$ years. The value of each water quality parameter for each of the years is given by

$$
X_i = \bar{X} + P_{\mathrm{var},i} \times P_{\mathrm{ran1},i} \times \sigma_X,
$$

where

- $X_i$ is the value of the parameter at time step $i$,
- $\bar{X}$ is the parameter mean,
- $\sigma_X$ is the parameter standard deviation,
- $P_{\mathrm{ran1},i}$ is the random number at time step $i$, and
- $P_{\mathrm{var},i} = \begin{cases} -1, & \text{if } P_{\mathrm{ran1},i} < 0.5, \\ +1, & \text{if } P_{\mathrm{ran1},i} \geq 0.5. \end{cases}$

Using this method, we created a file of $n$ years of generated data for each parameter for each of 10 sites at Lake A. We calibrated the model for its ability to predict susceptibility of a location to colonization by varying the initial population of juveniles and adjusting the number of eggs per adult female.
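The yearly update described above can be sketched in a few lines. The authors' program was written in Fortran 90 and is not reproduced here; this Python rendering is ours, with illustrative names (`survival_prob`, `step_year`), an assumed 50/50 sex ratio, and a scaling choice that puts survival probability at 1 at the midpoint of each optimal range (one plausible reading of the paper's normal-distribution assumption).

```python
import math
import random

def survival_prob(x, lo, hi):
    # Normal curve fitted to the optimal range [lo, hi]: mean at the
    # midpoint, with the range limits one standard deviation from the
    # mean (the paper's stated assumption). Scaling the curve to 1 at
    # the midpoint is our reading of "probability distribution".
    mu, sigma = (lo + hi) / 2.0, (hi - lo) / 2.0
    return math.exp(-((x - mu) ** 2) / (2.0 * sigma ** 2))

def class_survival(values, ranges):
    # Independence assumption: overall survival probability is the
    # product of the per-parameter probabilities.
    p = 1.0
    for name, x in values.items():
        lo, hi = ranges[name]
        p *= survival_prob(x, lo, hi)
    return p

def synth_value(mean, sd, rng):
    # X_i = Xbar + P_var,i * P_ran1,i * sigma_X, with the sign P_var,i
    # taken from the same draw, exactly as the calibration equation reads.
    r = rng.random()
    return mean + (-1.0 if r < 0.5 else 1.0) * r * sd

def step_year(pop, p_survive, eggs_per_female, p_stay):
    # pop = [class 1, class 2, class 3, class 4]; each class advances one
    # year, class 4 dies, and classes 2-4 reproduce. The factor 0.5
    # assumes half the adults are female (our assumption); p_stay is the
    # probability that eggs/larvae do not migrate out of the system.
    adults = pop[1] + pop[2] + pop[3]
    recruits = 0.5 * adults * eggs_per_female * p_stay * p_survive[0]
    return [recruits, pop[0] * p_survive[1],
            pop[1] * p_survive[2], pop[2] * p_survive[3]]

# Example: pH at the midpoint of the age-class-1 optimal range (Table 1)
print(survival_prob(8.1, 7.7, 8.5))  # → 1.0
```

Iterating `step_year` with values drawn by `synth_value` gives one stochastic population trajectory; colonization shows up as a persistent age-class structure, as the authors describe.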
At these sites, trends in the model results replicate trends in the populations. At a site susceptible to colonization, a higher initial population of juveniles yields faster establishment and propagation; at a site not susceptible to infestation, the population does not establish any structure and dies off. However, increasing the number of eggs per female produces colonization at some sites where it was not possible at lower levels of egg production; at these sites, water quality is near a juvenile survival threshold. [EDITOR'S NOTE: Space does not permit reproducing the authors' graphs illustrating these conclusions.]

The model is qualitatively accurate. It predicts zebra mussel colonization where and under the circumstances when colonization actually occurs, and it predicts no colonization where observed juveniles are few or absent. The ability of a population to proliferate is apparent in the development of a population age-class structure over time; if an age structure is not established, the location does not experience successful colonization.

# Verification

The model predicts whether or not colonization will occur, but the speed and magnitude of the colonization are not accurately approximated. Also, since the water quality levels were artificially generated from descriptive statistics, the performance of the model with actual data is unknown. With data on the annual accumulation of zebra mussels and the distribution of water quality constituents, as provided in the files LakeAChem2.xls and LakeAPopulation2.xls, the model can be tested, adjusted, and verified.

Figures 1 and 2 compare 5 of the 10 sites for the two data sets at Lake A; similar trends appear at each site. Running the model with the second set of data indicates that populations proliferate where they have been observed in high numbers.
Though the model predictions for juveniles are an order of magnitude greater than the observed values, the model correctly predicts whether populations survive; we attribute the difference to incomplete calibration.

# Model Sensitivities

The dominant model sensitivities in predicting the magnitude of proliferation are to the number of water quality constituents incorporated and to the initial juvenile population. When more probabilities are considered in the calculation, the overall probability is lowered. Since the model was calibrated using all parameters, using fewer parameters results in a more conservative estimate; that is, the model over-predicts. The dominant factor in the rate of proliferation is the number of eggs or veligers that are allowed to survive.

![](images/c5be37e011ef7139e32061a15ab2e8aef8fcdda3218c5318f39cf73bd458d93d.jpg)
Figure 1. Annual average accumulation rates using the 1st population data set for Lake A.

![](images/3e35a0c81169fc3f3a9b0fd196da025324ca9ff0b4d85fdd6c9c1831f88b884a.jpg)
Figure 2. Annual average accumulation using the 2nd population data set for Lake A.

# Model Limitations

The model becomes more conservative as the number of variables considered decreases. It predicts either the occurrence of a large outbreak or that a population never establishes.

The model assumes that the survival probabilities for each parameter range are independent, but in actuality some parameters have strong dependencies, such as that between pH and calcium concentration [Hincks and Mackie 1997].

# Application

# Lake B

Lake B is at the threshold for zebra mussel survival for the only variables on which we have data: pH, calcium concentration, and chlorophyll concentration. With so few water quality indicators, we expect a conservative estimate (i.e., an overestimate of survivability and colonization potential). We ran the model with an initial juvenile population of 1,000; only 10 survive to age class 2.
A population introduced to Lake B will not proliferate.

# Lake C

Lake C has a very low average pH and a low annual average calcium concentration; it is not suitable for colonization. The probability of survival predicted by the model is zero.

# Impacts of De-icing Near Lake B

Many de-icing agents used to remove snow and ice from roads during the winter contain calcium salts, specifically calcium chloride $(\mathrm{CaCl_2})$.

Repeated application of calcium chloride to roads may accumulate calcium in Lake B. A small increase above the lake's available calcium level of $11.5~\mathrm{mg/L}$ could allow colonization. The model indicates that a calcium concentration of $21.5~\mathrm{mg/L}$ would allow zebra mussel colonization, but continuing low values for pH and chlorophyll concentration would force the colony to die out eventually.

Other de-icing agents, such as sodium chloride (NaCl), increase sodium concentrations in freshwater bodies, which can inhibit propagation of zebra mussels; however, zebra mussels can adapt to higher levels of salinity.

# References

Balcom, N.C., and E.M. Rohmer. 1994. Zebra mussel awareness and boat use patterns among boaters using three "high risk" Connecticut lakes. University of Connecticut, Oberlin College, Connecticut Sea Grant College Program.
Bowman, Michelle F., and R.C. Bailey. 1998. Upper pH limit of the zebra mussel (Dreissena polymorpha). Canadian Journal of Zoology 76 (11): 2119-2123.
Dietz, Thomas H., Shawn J. Wilcox, Roger A. Byrne, John W. Lynn, and Harold Silverman. 1996. Osmotic and ionic regulation of North American zebra mussels. American Zoologist 36 (3): 364-372.
French, J.R.P., III. 1993. How well can fishes prey on zebra mussels in Eastern North America? Fisheries 18 (6): 13-19.
Ghedotti, Michael J., Joseph C. Smihula, and Gerald R. Smith. 1995. Zebra mussel predation by round gobies in the laboratory. Journal of Great Lakes Research 21 (4): 665-669.
Hamilton, D.J., D.
Ankney, and R.C. Bailey. 1994. Predation of zebra mussels by diving ducks. Ecology 75: 521-531.
Heath, R.T. 1993. Zebra mussel migration to inland lakes and reservoirs: A guide for lake managers. Kent State University, Ohio Sea Grant College Program.
Hebert, P.D.N., B.W. Muncaster, and G.L. Mackie. 1989. Ecological and genetic studies on Dreissena polymorpha (Pallas): A new mollusk in the Great Lakes. Canadian Journal of Fisheries and Aquatic Sciences 46 (9): 1587-1591.
Hincks, Sheri S., and Gerald L. Mackie. 1997. Effects of pH, calcium, alkalinity, hardness, and chlorophyll on the survival, growth, and reproductive success of zebra mussels (Dreissena polymorpha) in Ontario lakes. Canadian Journal of Fisheries and Aquatic Sciences 54: 2049-2057.
Kilgour, B.W., G.L. Mackie, A. Baker, and R. Keppel. 1994. Effects of salinity on the condition and survival of zebra mussels. Estuaries 17: 385-393.
Mackie, G.L., and D.W. Schloesser. 1996. Comparative biology of zebra mussels in North America and Europe. American Zoologist 36 (3): 300-310.
MacIsaac, Hugh J. 1996. Potential abiotic and biotic impacts of zebra mussels on the inland waters of North America. American Zoologist 36 (3): 287-299.
McMahon, Robert F. 1996. The physiological ecology of the zebra mussel, Dreissena polymorpha, in North America and Europe. American Zoologist 36 (3): 339-363.
Miller, Allen H., and Andres Ignacio. 1994. An approach to identify potential zebra mussel colonization in large water bodies using best available data and a geographic information system. Proceedings of the Fourth International Zebra Mussel Conference, Madison, WI, March 1994.
Molloy, Daniel P. 1998. The potential for using biological control technologies in the management of Dreissena polymorpha. Journal of Shellfish Research 17 (1): 177-183.
Nichols, Susan Jerrine. 1996. Variations in the reproductive cycle of Dreissena polymorpha in Europe, Russia, and North America. American Zoologist 36 (3): 311-325.
Payne, Barry S.
1992. Aerial exposure and mortality of zebra mussels. Zebra Mussel Research. U.S. Army Corps of Engineers. Technical Note ZMR-2-10. Section 2: Control Methods.
Ram, Jeffrey L., Peter P. Fong, and David W. Garton. 1996. Physiological aspects of zebra mussel reproduction: Maturation, spawning, and fertilization. American Zoologist 36 (3): 326-338.
Ram, Jeffrey L., and Robert F. McMahon. 1996. Introduction: The biology, ecology, and physiology of zebra mussels. American Zoologist 36 (3): 239-243.
Ramcharan, Charles W., Dianna K. Padilla, and Stanley I. Dodson. 1992. A multivariate model for predicting population fluctuations of Dreissena polymorpha in North American lakes. Canadian Journal of Fisheries and Aquatic Sciences 49 (1): 150-158.
Rice, J. 1995. Zebra mussels and aquaculture: What you should know. North Carolina State University, North Carolina Sea Grant Program.
Schneider, Daniel W. 1992. A bioenergetics model of zebra mussel, Dreissena polymorpha, growth in the Great Lakes. Canadian Journal of Fisheries and Aquatic Sciences 49 (7): 1406-1416.
Stanczykowski, A. 1984. The effect of various phosphorus loadings on the occurrence of Dreissena polymorpha. Limnologica (Berlin) 15: 535-539.
Stoeckel, J.N., C.J. Gagen, J.W. Phillips, and L.C. Lewis. 1997. Population dynamics and potential ecological impacts of zebra mussel infestation on Lake Dardanelle, Arkansas. In Conference Proceedings: Seventh International Zebra Mussel and Aquatic Nuisance Species Conference, 193-204. State University, AR: Arkansas State University.
United States Geological Survey (USGS). 2001. Dreissena polymorpha. Available at http://nas.er.usgs.gov/zebra.mussel/docs/sp_account.html. Accessed 9 February 2001.
Whittier, Thomas R., Alan T. Herlihy, and Suzanne M. Pierson. 1996. Regional susceptibility of Northeast lakes to zebra mussel invasion. Fisheries 20 (6): 20-27.
Young, Brenda L., Dianna K. Padilla, Daniel W. Schneider, and Stephen W. Hewett. 1996.
The importance of size-frequency relationships for predicting ecological impacts of zebra mussel populations. Hydrobiologia 332 (3): 151-158.

# Assessment of Introduction of the Round Goby

Ironically, the round goby and the zebra mussel both entered North American fresh waters by ballast-water discharge into the Great Lakes region at approximately the same time. They favor similar environments: slow water velocity and higher turbidity.

The diet of the round goby consists of small mollusks, especially the zebra mussel. The round goby has molar teeth well suited to crushing mollusk shells.

Biological control agents such as the round goby can have ecological advantages over chemical control. Natural enemies tend to be more specific to a certain pest, while chemical control measures often affect multiple species, and the targeted pests can develop a tolerance to the chemical.

However, although the round goby can consume appreciable numbers of zebra mussels, it violates the requirement of being specific to the target pest. Round gobies also consume the fry and eggs of habitat-sharing fish, including smallmouth bass, walleye, and perch, and their aggressive nature allows them to keep native fish from utilizing optimal spawning locations.

Once zebra mussels reach a certain size, they are too large for the round goby to eat. Spawning by these larger mollusks then prevents the population from dying out.

During its filter feeding, the zebra mussel accumulates and stores pollutants, including PCBs. As round gobies consume the mussels, the contaminants bioaccumulate in the fish. The accumulation potentially continues as sport fish eat the round goby and as humans in turn consume the sport fish.

Thus, both environmental-ethical and practical considerations require that additional alternatives be explored.

Research continues on biological control techniques other than the round goby.
Over the past 10 years, some microorganisms have shown promise of inducing very high zebra mussel mortality. + +Until an ideal alternative exists, communities must take other measures to limit the spread of the zebra mussel. Since studies attribute the spread to movement of watercraft between bodies of water, an aggressive education campaign could inform recreational boaters and fishermen how to avoid contributing to proliferation of zebra mussels. If climate conditions necessitate de-icing of highways, a community should consider materials that don't promote zebra mussel growth. + +# About the Authors + +David Stier attended high school in South Burlington, VT, before migrating to California in 1993. His current work activities include assessment of highway culverts in Northern California for anadromous [migrating upriver from the sea to breed in fresh water, as salmon do] fish-passage issues. He is also completing his senior year in Environmental Resources Engineering at Humboldt State University. David is an avid world traveler but unsure of his plans after graduation. + +![](images/4e0a036d0bcb6e7b4a75504cac577d8d02582461ae2e98f11068eec96983110a.jpg) + +After graduating with honors with a B.S. in Environmental Resources Engineering, Marc Leisenring joined GeoSyntec Consultants in August, 2001. He has experience in both one- and two-dimensional hydrodynamic modeling, and now (with the ICM) also in stochastic modeling. As a Staff Engineering Specialist at GeoSyntec, his primary responsibilities have included technical analysis, report preparation and review, and preliminary design; + +future responsibilities may include model development and implementation, final design, and development of stormwater management plans. + +![](images/4d292f3cab3c8a8df2bd36ba838d9a2a8ecaa00d9469d7381e8fd23278581f29.jpg) + +Matthew Kennedy attended Santa Rosa Junior College in Santa Rosa, CA, before transferring to Humboldt State University in 1998. 
During the summers of 1998 and 1999, he worked with the Hydrology Research Group at the Pacific Northwest National Laboratory in Richland, WA. There he assisted in the development of hydrodynamic and water quality computer models of the Columbia River system. He graduated with honors in 2001 with a B.S. in Environmental Resources Engineering. Matthew is currently a research assistant.

# Waging War Against the Zebra Mussel

Nasreen A. Ilias

Marie C. Spong

James F. Tucker

Lewis and Clark College

Portland, OR 97219

Advisor: Robert W. Owens

![](images/696146f1c3039b309d82c059273680e2aaec4a54ebe4099255ebe2140ace25d8.jpg)

# Summary

We design a mathematical model that accounts for pH, calcium concentration, and food availability, the most important factors in zebra mussel reproduction and in the growth and survival of juvenile mussels. Our model can predict whether a given site is likely to be a suitable environment for a zebra mussel population, as well as its potential density. Our model corresponds well with the population data provided and with the threshold values of pH (7.4) and calcium $(12~\mathrm{mg/L})$ for zebra mussel viability.

We recommend to the community of Lake B that they limit their use of de-icing agents containing calcium, because our model predicts that an increase in the calcium concentration in the lake will significantly enhance its suitability as zebra mussel habitat.

We find that using the goby fish to reduce zebra mussels is not a feasible option if the community is concerned with ecological impact, due to the invasive nature of the goby.

# Environmental Factors in the Spread of Zebra Mussels

We first discuss the characteristics of a suitable breeding habitat and then address how the population is unintentionally introduced to new areas.

Population growth depends on successful reproduction and survival to adulthood.
Veligers, zebra mussel larvae, are more sensitive to stress in their surrounding environment and therefore have more stringent survival requirements. Hence, we examine environmental conditions that can cause stress for the zebra mussel, especially in the larval and juvenile stages.

# Ion Concentrations and pH

Calcium is required for the viability of zebra mussel populations because it is a major component of their shells. Alkalinity, which is directly linked to calcium concentration, is an important variable in determining habitat suitability for zebra mussels. Calcium concentrations of $12~\mathrm{mg/L}$ and alkalinity corresponding to $50~\mathrm{mg}~\mathrm{CaCO_3/L}$ are required for adult zebra mussel populations [Heath 1993]. A calcium concentration of $12~\mathrm{mg/L}$ is also the minimum required for embryo survival, though higher concentrations enhance egg fertilization and embryo survivorship [Sprung 1987].

Phosphorus and nitrogen are significant factors in zebra mussel population growth because they are critical nutrients for the freshwater phytoplankton that comprise the primary food source of the zebra mussel. Thus, they are an indirect measure of food availability [Baker et al. 1993].

The pH of the water is another critical factor. Adults require a pH of about 7.2; in lower-pH environments, they experience a net loss of calcium, sodium, and potassium ions, and in very acidic waters adult zebra mussels eventually die of ion imbalance [Heath 1993]. Adults can survive in pH 7 environments, but eggs survive only between pH 7.4 and 9.4 [Baker et al. 1993].

# Temperature

Adult mussels can survive temperatures from $0^{\circ}\mathrm{C}$ to $32^{\circ}\mathrm{C}$, but growth occurs only above $10^{\circ}\mathrm{C}$ [Morton 1969] and breeding is triggered only at temperatures of at least $12^{\circ}\mathrm{C}$ [Heath 1993].
Higher temperatures increase overall egg production [Borchering 1995] but also increase metabolism and the demand for dissolved oxygen. Zebra mussels require $25\%$ oxygen saturation $(2~\mathrm{mg/L})$ at $25^{\circ}\mathrm{C}$ [Heath 1993]. Based on these values and the data provided for Lake A, we find that neither temperature nor dissolved oxygen is a limiting factor of zebra mussel proliferation there.

# Saltatory Spread

Saltatory spread is the movement of a species in large leaps rather than by gradual transitions. It is believed that zebra mussels were introduced to the Great Lakes system in 1986 from larvae discharged in ballast water from a commercial ship [Griffiths et al. 1991]. As of 1996, zebra mussels had spread to 18 states in the United States (as far south as Louisiana) and two provinces in Canada, almost entirely within commercially navigated waters [Johnson and Padilla 1996], strong evidence that commercial shipping was the primary vector of initial zebra mussel spread in the United States and Canada.

Most of the United States contains environments suitable for zebra mussel infestation [Strayer 1991], so identifying and eliminating saltatory spread to inland water systems is key to preventing infestation of the western United States. Transient recreational boating seems the most likely candidate for inland spread of the species. Based on this and other studies, recreational boating appears to represent a substantial threat to containment of the zebra mussel infestation in America.

# Advective and Diffusive Spread

Zebra mussels live the first few weeks of their lives as planktonic larvae that are easily diffused or carried by moving water. This allows widespread dissemination of offspring by diffusion, currents, and wind-driven advection within a lake or watershed [Johnson and Carlton 1996], which largely explains the species' rapid spread [Martel 1993].
However, veligers have been shown to have high mortality in turbulent waters, and mussel density in streams flowing out of infested lakes has been shown to decrease exponentially with the distance downstream [Horvath and Lamberti 1999]. Post-metamorphic zebra mussels have the ability to secrete long monofilament-like mucous threads that increase hydrodynamic drag and allow for faster advective spread [Martel 1993]. These juveniles can survive turbulence much better than veligers, which implies that they are the primary vector of downstream advective spread. + +# Zebra Mussel Population Model for Lake A + +Using our model, we attempt to answer two important questions: + +1. Given chemical information for a given site, is the site suitable for zebra mussels? + +2. If a site is determined to be a suitable habitat, will it support a low- or a high-density zebra mussel population? + +Rather than focusing on developing a complicated model that would predict the exact size of the population, we devised a simple, comprehensive model that answers these questions. The inspiration for our model was derived from Ramcharan [1992]. + +# Assumptions + +- The density of juveniles collected on the settling plates is proportional to the size of the adult population; this assumption allows us to use the provided data to predict the severity of the zebra mussel infestation. +- The chemical composition and concentrations (such as calcium levels) do not significantly vary with changes in the size of the zebra mussel population. + +Examining the first data set from Lake A, we find that pH and calcium concentration are the two most important factors in determining whether a zebra mussel population is viable in a given site. This is reasonable, considering that the zebra mussels are very sensitive to pH and they need calcium to build their shells when developing from veligers to juveniles and onto adults. 
We do not include temperature, because although it is important to the life cycle of the zebra mussel, as long as the temperature is high enough to signal spawning, reproduction will occur. All 10 sites in Lake A had suitable temperatures for spawning.

We developed a model equation (Model 1), using the values provided for pH and calcium concentration for the 1992-1999 period, that gives a simple measure to predict the viability $(V)$ of a zebra mussel invasion at a particular site. The coefficients of the two variables (pH and [Ca]) weight the relative importance of the two factors. The range of values of pH for the ten sites is smaller than the range of values of calcium concentration, so the coefficients serve to equalize the importance of the two factors. The exact values of the coefficients were determined by successively modifying and refining them until an equation was found that accurately reflected, based on the population data, whether a lake site was a suitable habitat. We chose the threshold value of 10.4 for viability because there appears to be a break there between the sites where zebra mussels survived and the sites where they were absent, and because 10.4 is close to the value from the equation with 7.4 for pH and $12~\mathrm{mg/L}$ for calcium concentration.

$$
V = 1.0\,\mathrm{pH} + 0.2\,[\mathrm{Ca}].
$$

If $V > 10.4$, the site is a suitable habitat for zebra mussels.

Applying Model 1 to sites 1-10 in Lake A produces Table 1.

Table 1. Calculated viability values for sites 1-10 in Lake A using Model 1.
| Site | pH   | [Ca] (mg/L) | V     |
|------|------|-------------|-------|
| 1    | 7.68 | 26.8        | 13.04 |
| 2    | 8.00 | 22.3        | 12.46 |
| 3    | 7.74 | 17.6        | 11.26 |
| 4    | 7.84 | 16.5        | 11.14 |
| 5    | 8.02 | 16.9        | 11.40 |
| 6    | 7.59 | 13.4        | 10.27 |
| 7    | 7.66 | 16.9        | 11.04 |
| 8    | 7.82 | 16.6        | 11.14 |
| 9    | 7.95 | 15.7        | 11.09 |
| 10   | 7.86 | 12.0        | 10.26 |
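Model 1 is simple enough to check directly. The sketch below (site data from Table 1; the function name `viability` is ours, not the authors') reproduces the $V$ column and picks out the two sites below the threshold:

```python
# Model 1: V = 1.0*pH + 0.2*[Ca]; a site is suitable when V > 10.4.

def viability(ph, ca_mg_per_l):
    """Viability index of Model 1."""
    return 1.0 * ph + 0.2 * ca_mg_per_l

# (site, pH, [Ca] in mg/L) for the ten Lake A sites, from Table 1
sites = [(1, 7.68, 26.8), (2, 8.00, 22.3), (3, 7.74, 17.6), (4, 7.84, 16.5),
         (5, 8.02, 16.9), (6, 7.59, 13.4), (7, 7.66, 16.9), (8, 7.82, 16.6),
         (9, 7.95, 15.7), (10, 7.86, 12.0)]

# Sites predicted not to be suitable habitat
unsuitable = [s for s, ph, ca in sites if viability(ph, ca) <= 10.4]
print(unsuitable)  # → [6, 10]
```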
The model predicts that sites 6 and 10 should not be suitable habitats, while the other eight sites should be. Figure 1, which plots date vs. juveniles/day for each of the sites, shows that the data agree well with our model. Sites 6 and 10 have virtually no zebra mussel population growth, and sites 1, 2, 3, 4, 5, and 9 all show evidence of infestation. Although sites 7 and 8 are predicted to be susceptible to invasion, enlargement of Figure 1 shows that these two sites are not supporting large populations; correspondingly, $V$ for sites 7 and 8 is relatively low. Also, the source of the zebra mussel invasion was site 1; hence the more southerly sites have had longer to form stable populations than the northern sites 7 and 8. With a threshold pH of 7.4 and a threshold calcium level of $12~\mathrm{mg/L}$, the model is consistent with the literature: it predicts that sites 6 and 10, whose values border on the threshold, are not likely to be habitable.

![](images/2b557a8df58f67fd9046bc6056e0d1580354edd9208b51b445b7f840aaa045a3.jpg)
Figure 1. Relative populations at sites 1-10.

To improve upon Model 1, we account for trends observed in the second data set from Lake A, constructing a more descriptive model (Model 2) to answer question (2). By including parameters for total phosphorus and total nitrogen, we account for the role of food availability in density. Following Ramcharan [1992], we employ the natural logarithms of total phosphorus and total nitrogen. Once again, by successively altering the coefficients, we determine an equation for the density of populations at the lake sites. We define high density as more than 400,000 juveniles/$\mathrm{m}^2$ on the settling plates collected at the peak of the reproductive season.

$$
D = 1.0\,\mathrm{pH} + 0.2\,[\mathrm{Ca}] + 0.1\,\ln[\mathrm{TP}] + 0.4\,\ln[\mathrm{TN}].
$$

If $D < 9.9$, there will be no zebra mussels; if $10 < D < 10.4$, the site will support a low-density population; if $D > 10.5$, the site will support a high-density population.

By averaging the total phosphorus (TP) and total nitrogen (TN) values for each site in the second set of chemical data for Lake A, we calculated [TP] and [TN]. Using those values in Model 2, we calculated the density $(D)$ for each site, as shown in Table 2.

Table 2. Density values for sites 1-10 in Lake A.
| site | ln[TP] (mg/L) | ln[TN] (mg/L) | $D$ | density |
|------|---------------|---------------|------|---------|
| 1 | -2.99 | -0.598 | 12.5 | high |
| 2 | -3.51 | -0.892 | 11.8 | high |
| 3 | -4.30 | -0.796 | 10.5 | high |
| 4 | -4.47 | -0.814 | 10.3 | low |
| 5 | -4.40 | -0.879 | 10.6 | high |
| 6 | -4.56 | -0.852 | 9.5 | absent |
| 7 | -4.12 | -0.971 | 10.2 | low |
| 8 | -4.39 | -0.862 | 10.3 | low |
| 9 | -4.16 | -0.965 | 10.3 | low |
| 10 | -3.01 | -0.405 | 9.8 | absent |
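The Model 2 score and its classification rule can be written out directly. Below is a minimal Python sketch (the function names are ours, not the authors'); as a check, the Lake B averages from Table 3 later in the paper (pH 7.63, [Ca] 11.5 mg/L, [TP] $6.02 \times 10^{-3}$ mg/L, [TN] 0.182 mg/L) reproduce the published score $D = 8.74$.

```python
import math

def density_score(ph, ca, tp, tn):
    """Model 2 density score D from pH, calcium, total phosphorus,
    and total nitrogen (all concentrations in mg/L)."""
    return 1.0 * ph + 0.2 * ca + 0.1 * math.log(tp) + 0.4 * math.log(tn)

def classify(d):
    """Classify a site by the paper's thresholds. Note that the paper
    leaves the narrow gaps 9.9-10 and 10.4-10.5 unclassified."""
    if d < 9.9:
        return "absent"
    if 10 < d < 10.4:
        return "low"
    if d > 10.5:
        return "high"
    return "unclassified"

# Lake B averages (Table 3): score comes out to about 8.74 -> no mussels.
d_lake_b = density_score(7.63, 11.5, 6.02e-3, 0.182)
print(round(d_lake_b, 2), classify(d_lake_b))
```

The unclassified return value makes the gaps in the published thresholds explicit rather than silently rounding them into a neighboring class.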
Model 2 predicts that sites 1, 2, 3, and 5 should be able to support high-density populations. The second set of population data, used in Figure 2, is consistent with the first set of population data. Figure 2 shows that all four of the high-density sites have an average of more than 400,000 juveniles/$\mathrm{m}^2$, which agrees with the prediction made by our model. In the enlargement of Figure 2, sites 4, 7, 8, and 9 have an average of less than 400,000 juveniles/$\mathrm{m}^2$, while sites 6 and 10 have virtually no juvenile zebra mussels.

The most significant weakness of our model is that it does not predict population versus time. Our model simply classifies an area's risk of invasion by examining the levels of critical chemicals to which the zebra mussels are sensitive.

![](images/3627b8ee5e0365f01565c1d990c89e0a12b9ae47b974a3de40b6805c495fe297.jpg)

Figure 2. Comparison of high- and low-density populations.

Another weakness of our model is that it relies on chemical and population data from only one lake. A better model could be achieved by slightly varying the values of the coefficients and observing whether the altered model more accurately predicts the density of the zebra mussels in newly incorporated lakes. Information from other lakes could also be used to refine the value chosen for the division between low and high densities. Other factors, such as total ion concentration, could also be included in the model if the factor were shown in a variety of lakes to correspond to population densities.

Our model cannot predict how fast a population of zebra mussels will spread from one site to another within a lake. However, qualitative examination of the data from Lake A suggests that it takes only a few years for the population to spread from one area to another, as long as the new site is suitable for zebra mussels.
For example, at site 5 no zebra mussels were collected in 1994 and 1995, but from 1996 to 1998 the population rapidly increased to a high density. Since zebra mussels can very quickly reach high-density populations in a supportive environment, knowing whether a given site is a suitable habitat is a more useful piece of information than the rate at which the population grows.

# Using the Model for Lake A to Predict for Lake B and Lake C

Using the equations from our models, we can average the pH, calcium concentration, total phosphorus concentration, and total nitrogen concentration for Lake B and Lake C and determine the level of risk of successful zebra mussel invasion in these two lakes. We averaged the values over all of the years. We also assume that these two lakes are fairly uniform in chemical composition.

Table 3. Viability and density values for Lake B and Lake C.
| | pH | [Ca] mg/L | [TP] mg/L | [TN] mg/L | $V$ | $D$ |
|---|---|---|---|---|---|---|
| Lake B | 7.63 | 11.5 | $6.02 \times 10^{-3}$ | 0.182 | 9.93 | 8.74 |
| Lake C | 4.74 | 1.15 | | | | 4.97 |
According to our Model 1, Lake B should not be at risk for a zebra mussel invasion because it is not a suitable habitat $(V < 10.4)$; this prediction makes sense because the average calcium concentration is $11.5~\mathrm{mg/L}$, which is below the $12~\mathrm{mg/L}$ threshold. Lake C is in no danger of an invasion, since $D = 4.97$, which corresponds to the fact that both the pH and the calcium concentration are far below the threshold values.

# De-icing Policy for the Community of Lake B

De-icing compounds increase the solute concentration in the melted ice, lowering its freezing temperature and preventing the ice from reforming. Because de-icing compounds are water soluble, they can easily enter the water supply. The most commonly used de-icers are calcium chloride, calcium magnesium acetate, sodium chloride, and potassium acetate salts. Calcium magnesium acetate is popular because it has fewer negative environmental impacts, whereas calcium chloride is widely used because it lowers the freezing point of water more than sodium chloride does.

Although these calcium-containing compounds may be excellent choices as de-icing agents, our model indicates that using them increases the risk of zebra mussel invasion. According to Model 2, if calcium levels in Lake B increase by $50\%$ ($D = 9.9$), a low-density population of zebra mussels can exist. Doubling the calcium levels ($D = 11.0$) will support a high-density population. De-icing agents can therefore have a significant impact on the zebra mussel population. We recommend that this community use sodium chloride or potassium acetate salts, or decrease the amount of calcium salts used by mixing them with the other noncalcium salts or sand. We also suggest pre-wetting the salts before they are applied to the roads, to reduce the amount entering the water system.
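The calcium scenarios just described can be checked numerically against the Model 2 score. Below is a quick Python sketch using the Lake B averages from Table 3 (variable names are ours); scaling [Ca] by 1.5 and by 2 reproduces the quoted $D \approx 9.9$ and $D \approx 11.0$.

```python
import math

# Lake B averages from Table 3 (mg/L), with Model 2's coefficients.
PH, CA, TP, TN = 7.63, 11.5, 6.02e-3, 0.182

def d_score(ca):
    """Model 2 density score as a function of calcium concentration only,
    holding the other Lake B averages fixed."""
    return 1.0 * PH + 0.2 * ca + 0.1 * math.log(TP) + 0.4 * math.log(TN)

base = d_score(CA)            # current conditions, about 8.74
half_more = d_score(1.5 * CA)  # 50% more calcium, about 9.9 (low density)
doubled = d_score(2.0 * CA)    # doubled calcium, about 11.0 (high density)
print(round(base, 2), round(half_more, 1), round(doubled, 1))
```

Because calcium enters the score linearly with coefficient 0.2, each extra half-portion of the current $11.5~\mathrm{mg/L}$ adds exactly $0.2 \times 5.75 = 1.15$ to $D$, which is what the scenario numbers reflect.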
Lastly, the community should develop a strategy for anti-icing, applying de-icing agents before ice forms, thus decreasing the amount of de-icing agent used in each storm. These efforts should help prevent Lake B from becoming habitable by zebra mussels.

# Methods for Reducing Zebra Mussel Populations

It is estimated that \$3 billion will be spent in the next decade combating the zebra mussel infestation [Magee et al. 1996]. Besides damaging infrastructure (pipes, tubing, gratings), the zebra mussel is able to out-compete native species for space and food and can destroy commercial and recreational fish stocks. Since the zebra mussel's body fat stores toxic chemicals, the introduction of these mussels into the food chain could lead to human consumption of these harmful chemicals. There are three available options for dealing with zebra mussel infestation:

(1) Introduce a natural predator (the round goby).
(2) Eradicate the zebra mussel population using preventive and reactive control strategies.
(3) Control the zebra mussel population using those same strategies.

Introducing a natural predator, such as the round goby, may be more problematic than the zebra mussel infestation itself. Although the round goby shows selectivity in consuming zebra mussels over native clams, the goby will nonselectively consume a variety of bait, fishes, and invertebrates [Ghedotti et al. 1995]. In addition, the goby is extremely territorial and can aggressively occupy prime breeding areas and successfully compete for food against native species. Fortunately, there are more environmentally sound methods of controlling zebra mussel infestations.

# Preventive and Reactive Strategies

Preventive control methods include implementing restrictive legislation and periodically monitoring waterways to minimize the introduction of zebra mussels and to improve early detection, thereby facilitating the development of appropriate strategies to eradicate or control the mussel population.
Reactive strategies are a more aggressive mode of action in response to a potential or ongoing invasion and should depend on the level of infestation.

# Preventive Control Strategies: Legislation and Monitoring

Legislation is a useful way to coordinate research with monitoring facilities, commercial industries, and the public. The United States Nonindigenous Aquatic Nuisance Prevention and Control Act of 1990 (P.L. 101-646) [Florida Caribbean Science Center 2001] recommends that recreational vessels exchange ballast water before entering new waters, since this is the primary mode of saltatory non-native species introduction [Boelman et al. 1997]. In addition, the U.S. Code [Legal Information Institute 2001] suggests implementing alternative ballast water management, including modifying the ballast tank and intake system to prevent the unintentional introduction of new species. The improved sighting, reporting, and education under this plan will help the public and commercial sectors prevent the spread of zebra mussels.

# Reactive Control Strategies

Acute Zebra Mussel Infestation. In cases of acute or localized infestations, the least expensive method of preventing infrastructure damage is to employ a foul-release coating in concert with mechanical cleaning and mechanical filtration. Coating pipes and surfaces in contact with the water with antifouling polymers, such as silicones and fluorochemicals, creates a slippery surface that makes it difficult for zebra mussels to attach [Magee et al. 1996]. These reagents are effective for 2-5 years [Boelman et al. 1997].

An alternative and equally successful method of infrastructure protection is the application of zinc thermal spray (ZTS) on metal surfaces. In addition to preventing corrosion, ZTS is the most durable and long-lasting zebra mussel repellent. The slow dissolution of heavy-metal ions from ZTS is toxic to zebra mussels.
In addition, the US Army Corps of Engineers Zebra Mussel Control Handbook suggests that the low release of heavy metals and a large dilution factor produce minimal secondary effects on nontarget species. However, before implementing this strategy, it is critical that the environmental effects be studied and that the implementation meet federal standards.

Mechanical cleaning is a labor-intensive method of removing zebra mussels from infrastructure. The drawback to simply brushing and scraping zebra mussels off surfaces is that the scrubbings need to be repeated regularly. The removed zebra mussels also have to be transported and disposed of in landfills.

The final strategy for dealing with acute zebra mussel infestation is installing mechanical filtration systems. Water screen filters and strainers can be placed on water intakes. A mesh size of $25-40~\mathrm{mm}$ is able to stop the inflow of veligers and the translocation of larger zebra mussels. However, this system requires continuous maintenance.

Global Zebra Mussel Infestation. Severe and large-area infestation and population expansion need to be treated with aggressive methods, since it is more beneficial to address the widespread infestation problem than to fight specific site-related mussel-density problems. Since these methods require widespread application, the expense associated with implementation is higher than for the strategies for dealing with acute infestation. There is also a potential for harming native organisms and commercial industries. However, after intense scrutiny, the following methods are the most effective ways to control and potentially eradicate severe zebra mussel infestations.

Thermal treatment. The discharge of heated water is a cost-effective and efficient method for controlling and eradicating the macrofouling zebra mussel. Since zebra mussels are able to acclimate to temperature changes, extreme temperature changes are required to kill the mussels.
These extreme temperature changes will also kill a number of native species residing in the lake. There are two thermal treatment strategies that can be employed: acute thermal treatment and chronic thermal treatment [Boelman et al. 1997]. Acute thermal treatment involves rapidly increasing the water temperature to lethal levels, followed by a rapid return to the original temperature. This method is most appropriate for treating infestation in waterways where a higher temperature cannot be maintained for an extensive period of time. Greatly increasing the water temperature for a period of 3-9 hours can yield $100\%$ mortality.

Chronic treatment involves continuously maintaining a higher water temperature and is a cost-effective strategy for industries that generate and discharge heated water. This method prevents new zebra mussel infestations, but it is lethal to most if not all organisms that use the water. In this method, the water temperature must be raised to at least $34^{\circ}\mathrm{C}$ and must be maintained for 6-24 hours to kill the entire zebra mussel population.

Chemical treatments. Chemical treatments are an alternative to thermal treatment but are more environmentally invasive. Both oxidizing and nonoxidizing chemical treatments are available. Oxidizing treatments are most toxic to zebra mussels when applied rapidly, owing to the mussel's sensitivity to oxidizing compounds, whereas nonoxidizing chemicals can be administered over a longer period of time with equal effectiveness.

Of the oxidation treatments available, chlorination is the most widely used method for eradicating zebra mussels. There are large environmental consequences to this method, however, and terrestrial organisms and birds may also be killed.

Potassium permanganate is another commonly used oxidizing chemical.
To obtain $100\%$ zebra mussel mortality, a higher concentration of and a longer exposure to potassium permanganate are required than for chlorinated compounds. The advantage of using potassium compounds is that they are nontoxic to higher organisms such as fish but are highly toxic to zebra mussels. Also, potassium permanganate by-products do not form carcinogenic compounds, as is the case when using chlorinated reagents.

Nonoxidizing molluscicides, such as Mexel 432, are the best available chemical treatments, albeit more expensive than oxidation treatments. The greatest advantage of this strategy is that molluscicides have fewer direct consequences for native organisms and fewer long-term environmental impacts, since many of these molluscicides rapidly biodegrade into harmless substances. These reagents induce their effect in three ways:

- On clean surfaces, the film prevents settlement.
- On infested surfaces, the molluscicides attack the zebra mussel's byssal threads, causing the mussels to detach.
- The molluscicides form a film on zebra mussels that remain in the system, causing lesions on the gills and ultimately killing the organism. For this reason, molluscicides are also lethal to other mussels. Application of these chemicals needs to be repeated on a daily basis to sustain the film until all zebra mussels are killed.

Future species-specific treatments. Although target-specific chemicals are not currently available, research is developing methods for targeting invasive species and interfering specifically with their reproductive cycle through biochemical compounds such as serotonin. These targeted treatments would be highly advantageous in terminating zebra mussel propagation without affecting other aquatic organisms or damaging the environment.

# Response to Community Leaders

For such small critters, zebra mussels can range from being a mild nuisance to imposing large environmental and economic costs.
The introduction of these species into our lakes and rivers has created situations where communities are forced to control or eradicate zebra mussel populations. The most important question is how to do this in the most environmentally and economically sound manner. In order to develop a solution to this irritating infestation problem, we must first assess how extensive the problem is. We must identify

- how the zebra mussels were or are being introduced to the lake,
- whether the lake provides a supportive environment for zebra mussels, and
- whether other aquatic or terrestrial organisms (including humans!) depend on the lake or use it as a food source.

Isolating the source of zebra mussel introduction to the lake is important so that the community can prevent reintroduction of the mussel or of other nonnative species that threaten indigenous aquatic organisms. This preventive measure will make the reactive strategies for controlling the zebra mussel invasion more successful and therefore more cost-effective.

There are two types of reactive control strategies that can be implemented:

- introduction of a natural predator to the lake system or
- the use of mechanical or chemical methods to control or eradicate the zebra mussel population.

Introducing a natural zebra mussel predator, such as the round goby fish, to the lake system can be a cost-effective and simple solution to the infestation problem. However, if the lake sustains other aquatic organisms or is used by commercial industries (such as fishing), the costs associated with introducing the goby may be much higher. The goby is an aggressive, territorial fish that prefers zebra mussels but will nonselectively consume bait, fish, and invertebrates. As a consequence, the goby can destroy fishing stocks and out-compete native species for food.

Another alternative is the use of mechanical or chemical strategies to control the zebra mussel population.
For mild to moderate infestation, the following strategies are effective:

- Mechanical cleaning of pipes and surfaces exposed to the water, followed by coating these surfaces with a foul-release coating. This coating contains environmentally sound antifouling polymers, such as silicones and fluorochemicals, which create a slippery surface that makes it difficult for zebra mussels to attach.
- Installing simple mechanical filtration systems. These require periodic maintenance but effectively prevent zebra mussels from clogging intake pipes.

Severe infestation requires more aggressive and environmentally abrasive strategies to control the zebra mussel population. Both of the following strategies are more expensive than the two methods discussed above and have more extensive environmental impacts.

- Thermal treatment is the discharge of heated water into the lake system. The water temperature can be raised rapidly (acute treatment) or slowly for an extended period of time (chronic treatment). In either case, $100\%$ of the zebra mussels can be killed. However, this method kills most other aquatic organisms as well.
- An equally effective method is treating the lake with chemicals. There are two viable options in this approach. The first is using chlorinated compounds, which in a short duration will kill the entire zebra mussel population, as well as many other aquatic organisms and even birds. The drawback of this approach is the production of carcinogenic by-products that may remain in the environment for an extended period of time. A better alternative to chlorinated compounds is potassium permanganate. This chemical must be applied at larger concentrations for a longer period of time to kill the mussels (including native species) without harming other organisms.

With any environmental problem, a balance has to be reached between the needs of the community and the effects on the environment.
The community will have to weigh carefully the problems caused by the zebra mussels against both the economic and the environmental costs associated with each method of removal.

# References

Baker, P., S. Baker, and R. Mann. 1993. Criteria for predicting zebra mussel invasion in the mid-Atlantic region. School of Marine Science, Virginia Institute of Marine Science.

Boelman, S.F., F.M. Neilson, E.A. Dardeau, Jr., and T. Cross. 1997. Zebra mussel (Dreissena polymorpha) control handbook for facility operators. Miscellaneous Paper EL-97-1. Vicksburg, MS: U.S. Army Engineer Waterways Experiment Station.

Borcherding, J. 1995. Laboratory experiments on the influence of food availability, temperature and photoperiod on gonad development in the freshwater mussel Dreissena polymorpha. *Malacologia* 36 (1-2): 15-27.

Florida Caribbean Science Center, Biological Resources Division of the United States Geological Survey, Department of the Interior. 2001. Nonindigenous Aquatic Nuisance Prevention and Control Act of 1990 (P.L. 101-646). http://nas.er.usgs.gov/control.htm.

Ghedotti, M.J., J.C. Smihula, and G.R. Smith. 1995. Zebra mussel predation by round gobies in the laboratory. *Journal of Great Lakes Research* 21 (4): 665-669.

Griffiths, R.W., W.P. Kovalak, and D.W. Schloesser. 1991. The zebra mussel, Dreissena polymorpha (Pallas, 1771), in North America: Impact on raw water users. In Proceedings: EPRI Service Water System Reliability Improvement Seminar, 11-27. Palo Alto, CA: Electric Power Research Institute.

Heath, R.T. 1993. Zebra mussel migration to inland lakes and reservoirs: A guide for lake managers. Kent State University, Ohio: Sea Grant College Program.

Horvath, T.G., and G.A. Lamberti. 1999. Mortality of zebra mussel, Dreissena polymorpha, veligers during downstream transport. *Freshwater Biology* 42: 69-76.

Johnson, L.E., and J.T. Carlton. 1996. Post-establishment spread in large-scale invasions: Dispersal mechanisms of the zebra mussel.
*Ecology* 77 (6): 1686-1690.

Johnson, L.E., and D.K. Padilla. 1996. Geographic spread of exotic species: Ecological lessons and opportunities from the invasion of the zebra mussel Dreissena polymorpha. *Biological Conservation* 78: 23-33.

Legal Information Institute. 2001. U.S. Code, Title 16, Chapter 67, Subchapter I, Sec. 4701: Findings and purposes. http://www4.law.cornell.edu/uscode/16/4701.html.

Magee, J.A., D.A. Wright, and E.M. Setzler-Hamilton. 1996. Penaten to control zebra mussel attachment. The University of Maryland System, Center for Environmental and Estuarine Studies.

Martel, A. 1993. Dispersal and recruitment of zebra mussel (Dreissena polymorpha) in a nearshore area in west-central Lake Erie: The significance of postmetamorphic drifting. *Canadian Journal of Fisheries and Aquatic Sciences* 50: 3-12.

Morton, B.S. 1969. Studies on the biology of Dreissena polymorpha Pall. III. Population dynamics. *Proceedings of the Malacological Society of London* 38: 471-482.

Ramcharan, C.W., D.K. Padilla, and S.I. Dodson. 1992. Models to predict potential occurrence and density of the zebra mussel, Dreissena polymorpha. *Canadian Journal of Fisheries and Aquatic Sciences* 49: 2611-2620.

Sprung, J.M. 1987. Ecological requirements of developing Dreissena polymorpha eggs. *Archiv für Hydrobiologie* 79 (Suppl.): 69-78.

Strayer, D.L. 1991. Projected distribution of the zebra mussel, Dreissena polymorpha, in North America. *Canadian Journal of Fisheries and Aquatic Sciences* 48: 1389-1395.

# Judge's Commentary: The Outstanding Zebra Mussel Papers

Gary Krahn

Dept. of Mathematical Sciences

United States Military Academy

West Point, NY 10996

ag2609@usma.edu

# Introduction

The papers were assessed on

- their ability to transform the data into useful information;
- the application of an appropriate modeling process; and
- the integration of environmental science to render appropriate recommendations.
The judges appreciated the effort and valued the results of the papers. It was a very difficult problem, one that required a blend of science, mathematics, and conviction to solve during the four-day contest. It was clear that a solution was not going to jump out of the 40 pages of data; rather, it had to be pulled out skillfully.

# The Problem

Zebra mussels were introduced to North America in the 1980s. They are an ecological "dead end," since native fish do not eat them. Researchers are currently attempting to identify environmental factors that may influence the population of zebra mussels within our waterways. Zebra mussels are now spread throughout the eastern waterways of the United States, causing tremendous problems for the ecosystem and the regional economies.

The data in the problem statement are real: Prof. Nierzwicki-Bauer of Rensselaer Polytechnic Institute, a leading researcher of zebra mussels, provided data from several lakes in New York. Several population models appear in the literature; however, the collection of environmental factors that influence the rate of population growth of the zebra mussel is still unknown. This is a genuine interdisciplinary problem that confronts North America today.

# The Data

The data appear to have created an "uncomfortable" feeling in the hearts and minds of the modelers. It was difficult for many to digest all of the data and either incorporate all of it into a model or else justify eliminating portions of it. Often, teams did not address how they managed "missing" data or why they accepted or refuted data that appeared to be erroneous. In most cases, teams had done a significant amount of work in an attempt to understand the data. Most teams categorized the population data by month in order to synthesize the data into a more useful form. Similarly, they attempted to align the chemical data by averaging several time periods into a single data point.
Many teams had difficulty describing their analysis and the interpretation of their results. The successful teams discussed how they transformed the data and how they confronted missing or confusing data. Tables 1 and 2 show portions of the data: the zebra mussel population of one lake from 1994 to 2000, and the chemical information on the same lake for 1999. Confusing, yes, but real.

The entire set of data included the following categories: stratum, total phosphorus, dissolved phosphorus, calcium, magnesium, total nitrogen, temperature, chlorophyll, alkalinity, chloride, iron, potassium, sodium, pH, secchi disk transparency, and population levels. It was essential to explain how the data would be organized for analysis. The judges expected teams to describe why they selected certain data to remain in their analysis and why other chemicals were eliminated. It was clear that contestants had to make several decisions to transform the data into a useful form. This problem, like last year's problem, was not clear-cut. Once again, we found that as the contestants formulated and refined their assumptions, they confronted the complexities typically associated with an open-ended problem. Last year they had reasonably clean data, while this year they had some "dirty" data.

The characteristic of a strong paper was the ability to use science and mathematical models to uncover the uncertainties in the population growth of zebra mussels due to chemical concentrations. In some cases, the incomplete data and large unexplainable fluctuations in the population obscured the effect of specific chemicals. The data alone cannot reveal the complete interaction among the chemicals affecting population growth. For that reason, successful teams had to take an interdisciplinary problem-solving approach.

Table 1. Zebra mussel population of one lake.
| Date | Population |
|---|---|
| 7/1/94 | 100 |
| 8/1/94 | 70 |
| 9/1/94 | 50 |
| 10/1/94 | 248 |
| 11/1/94 | 1,045 |
| 7/12/95 | 222 |
| 8/1/95 | 50 |
| 9/1/95 | 70 |
| 10/1/95 | 40,000 |
| 11/1/95 | 200,385 |
| 7/1/96 | 39 |
| 8/1/96 | 4,843 |
| 9/1/96 | 30,033 |
| 10/1/96 | 949,433 |
| 11/1/96 | 49,333 |
| 7/1/97 | 0 |
| 8/1/97 | 20,456 |
| 9/1/97 | 44,678 |
| 10/1/97 | 345,555 |
| 11/1/97 | 98,789 |
| 7/1/98 | 605 |
| 8/1/98 | 84,132 |
| 9/1/98 | 599,432 |
| 10/1/98 | 454,932 |
| 11/1/98 | 49,332 |
| 7/1/99 | 93 |
| 8/1/99 | 45 |
| 9/1/99 | 83,962 |
| 10/1/99 | 539,229 |
| 11/1/99 | 30,012 |
| 7/1/00 | 0 |
| 8/1/00 | 50 |
| 9/1/00 | 9,483 |
| 10/1/00 | 592,339 |
| 11/1/00 | 467,876 |
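The by-month categorization that most teams performed on counts like these can be sketched directly. Below is a minimal Python example over the Table 1 values (the helper name `monthly_means` is ours, not from any contest paper):

```python
from collections import defaultdict

# Table 1: (date, settled population) for one lake, 1994-2000.
table1 = [
    ("7/1/94", 100), ("8/1/94", 70), ("9/1/94", 50), ("10/1/94", 248), ("11/1/94", 1045),
    ("7/12/95", 222), ("8/1/95", 50), ("9/1/95", 70), ("10/1/95", 40000), ("11/1/95", 200385),
    ("7/1/96", 39), ("8/1/96", 4843), ("9/1/96", 30033), ("10/1/96", 949433), ("11/1/96", 49333),
    ("7/1/97", 0), ("8/1/97", 20456), ("9/1/97", 44678), ("10/1/97", 345555), ("11/1/97", 98789),
    ("7/1/98", 605), ("8/1/98", 84132), ("9/1/98", 599432), ("10/1/98", 454932), ("11/1/98", 49332),
    ("7/1/99", 93), ("8/1/99", 45), ("9/1/99", 83962), ("10/1/99", 539229), ("11/1/99", 30012),
    ("7/1/00", 0), ("8/1/00", 50), ("9/1/00", 9483), ("10/1/00", 592339), ("11/1/00", 467876),
]

def monthly_means(rows):
    """Group the counts by calendar month and average across years."""
    by_month = defaultdict(list)
    for date, pop in rows:
        month = int(date.split("/")[0])  # "M/D/YY" -> M
        by_month[month].append(pop)
    return {m: sum(v) / len(v) for m, v in sorted(by_month.items())}

for month, mean in monthly_means(table1).items():
    print(month, round(mean))
```

Even this crude aggregation makes the seasonal pattern visible: July means are in the hundreds, while October means are in the hundreds of thousands.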
+ +Table 2. Chemical profiles of the same lake. + +
| Date | Ca mg/L | Mg mg/L | TN mg/L | Temp °C | Chl-a |
|---|---|---|---|---|---|
| 4/15/99 | 23.20 | 5.07 | 0.44 | 8.50 | 4.72 |
| 5/17/99 | | | 0.32 | 15.50 | 7.27 |
| 5/18/99 | 27.50 | 6.71 | 0.45 | | |
| 6/1/99 | | | 0.49 | 18.20 | 10.18 |
| 6/9/99 | | | 0.42 | | |
| 6/14/99 | | | 0.47 | 21.00 | 11.64 |
| 7/1/99 | | | 0.52 | 20.80 | 9.45 |
| 7/19/99 | 26.80 | 6.72 | 0.86 | 21.00 | 10.18 |
| 7/20/99 | 27.20 | 6.61 | 0.56 | | |
| 7/29/99 | | | 0.44 | 21.80 | 13.58 |
| 8/4/99 | | | 0.52 | | |
| 8/11/99 | | | 0.51 | | 6.30 |
| 8/23/99 | | | 0.44 | 21.00 | 5.09 |
| 8/25/99 | | | 0.41 | | |
| 9/7/99 | | | 0.38 | 22.00 | 12.12 |
| 9/13/99 | | | 0.38 | | |
| 9/24/99 | 24.80 | 5.67 | 0.89 | 16.00 | 1.09 |
| 10/7/99 | | | 0.73 | 13.50 | 3.64 |

# The Science

If science is defined to be the knowledge and study of "what is," then most of the teams got half of the science: the knowledge part. Almost every team was able to find an enormous amount of information from the open literature by using the Internet. The stronger teams not only gathered information but also explained the impact of specific environmental conditions on the life-cycle process of the zebra mussel. If chemicals such as nitrate and magnesium were eliminated without explaining why, the grader immediately suspected that the students did not know why. Likewise, if variables such as chlorophyll, pH level, and calcium were kept in the model, the outstanding teams explained why, from both a modeling and a scientific perspective. An explanation of the model using both science and mathematics was a characteristic of an outstanding paper.

An understanding of the ecological fabric of the waterways was important in the design of an outstanding solution to this problem. Environmental science was the thread that related the data to the model and the model to a "realistic" solution.

# The Model

It was important that the modeling process be well formulated and that the rationale for the selected model be clearly explained. The definition of variables, the identification of simplifying assumptions, and a discussion of the ramifications of these assumptions were important ingredients in the paper. Finally, it was important that the model developed be used to answer the question regarding the expected growth of the mussels in Lakes B and C. An interdisciplinary discussion of the ramifications of the de-icing policy required in Part E was also directly tied to the model. Surprisingly, many teams did not take advantage of their model to address follow-on questions.

The explanations of the modeling process varied tremendously. Some papers contained models that were well designed, with results that were analyzed and interpreted.
+ +# The Science + +If science is defined to be the knowledge and study of "what is," then most of the teams got half of the science—the knowledge part. Almost every team was able to find an enormous amount of information from the open literature by using the Internet. The stronger teams not only gathered information, but they also explained the impact of specific environmental conditions on the life-cycle process of the zebra mussel. If chemicals such as nitrate and magnesium were eliminated without explaining why, the grader immediately suspected that the student did not know why. Likewise, if variables such as chlorophyll, pH level, and calcium were kept in the model, the outstanding teams explained why, from both a modeling and a scientific perspective. An explanation of the model using both science and mathematics was a characteristic of an outstanding paper. + +An understanding of the ecological fabric of the waterways was important in the design of an outstanding solution to this problem. Environmental science was the thread that related the data to the model and the model to a "realistic" solution. + +# The Model + +It was important that the modeling process be well formulated and that the rationale of the selected model be clearly explained. The definition of variables, identification of simplifying assumptions, and a discussion of the + +ramifications of these assumptions were important ingredients in the paper. Finally, it was important that the model developed was used to answer the question regarding the expected growth of the mussels in lakes B and C. An interdisciplinary discussion of the ramification of the de-icing policy required in Part E was also directly tied to the model. Surprisingly, many teams did not take advantage of their model to address follow-on questions. + +The explanations of the modeling process varied tremendously. Some papers contained models that were well designed with results that were analyzed and interpreted. 
The teams also recognized their models to be both predictive and descriptive. Unfortunately, other papers had wonderful models, whether built with a commercial package or constructed from scratch, but never explained how the model functioned. Providing the details of a model's underpinnings strengthened the entire paper. Groups who had good explanations of their models also related these models nicely to the environmental science of the zebra mussel's rate of growth.

The analysis of the data tied in nicely to how the students performed their modeling. Some students saw the problem as fitting a growth differential equation, others as fitting a multivariable regression. The approach did not affect the assessment of the paper. Furthermore, whether a team used a discrete dynamical system, curve fitting, or simulation, or adapted the logistic model, a correlation analysis was very important. The stronger papers tended to perform this analysis graphically. Those groups providing useful graphs and explanations of these graphs fared quite well.

# The Analysis

The problem was an interdisciplinary endeavor. Teams that did great mathematics but revealed little knowledge of environmental science could not capture the relationships required to solve this problem. Since the data were not clean, it was impossible to use the data alone to uncover the essential relations affecting population growth. Similarly, teams that had a tremendous knowledge of the science but little mathematics were not able to create an appropriate predictive model. A thorough explanation of the implication of each variable for the growth of the mussel population was essential. Good teams shared a modeling process that was well thought out and justified the rationale for the selected model. In Part C (adjustment of the model), a clear explanation of the process involved in modifying the model was important.
Finally, in Part D it was important that the analysis of the model be used to answer the question regarding the expected growth of the mussels in Lakes B and C.

An interdisciplinary discussion of the ramifications of the de-icing policy was directly tied to the model. Those teams that answered all the requirements had a significantly greater chance of going forward than those that either did not answer the requirements or addressed one or more of them only superficially.

# Presentation

Some papers revealed tremendous analysis but lacked clarity in the presentation. The strong papers presented the problem, discussed the data and explained their analysis, and finally revealed the development of their mathematical methods and models. The big difference among papers was whether they informed the reader of what they did and, more important, how they did it. A clear presentation allowed the judge to comprehend their logic and reasoning. One judge noted that he wished he were a mind reader, because there was clearly a great deal of outstanding work of which only the result was revealed. The strong papers revealed their analysis, not just their results.

Very broadly, we saw two types of weak presentations. The first consisted of reports that had a significant narrative but no support in the form of mathematical modeling or analysis. In these reports, the groups appeared to rely on qualitative observations and information from the literature (web sites) to reach conclusions. The second consisted of reports that had a significant amount of mathematics in the form of tables and graphs but no modeling or analysis to pull it together. These papers appeared to dump computer runs into the report without really knowing what to do with them.

This year we noticed that the stronger teams clearly documented information they gathered from outside sources.
When constructed models aligned very closely with models found in the open literature, it became difficult for judges to determine what was original work.

# Conclusion

The effort and creativity of almost every team was inspiring. It appears, however, that most teams can reason better than they can communicate. Often, wonderful ideas were not revealed to the reader. Working with large data sets appeared to be much more difficult than anticipated. The top papers, however, did an amazing job of blending and revealing the science, research, and mathematics. The best teams revealed the power of interdisciplinary problem solving.

# About the Author

Gary Krahn received his Ph.D. in Applied Mathematics at the Naval Postgraduate School. He is currently the Head of the Dept. of Mathematical Sciences at the U.S. Military Academy at West Point. His current interests are in the study of generalized de Bruijn sequences for communication and coding applications. He enjoys his role as a judge and Associate Director of the ICM.

# Author's Commentary: The Outstanding Zebra Mussel Papers

Sandra A. Nierzwicki-Bauer

Darrin Fresh Water Institute

Rensselaer Polytechnic Institute

Troy, NY 12180

nierzs@rpi.edu

# Introduction

One cannot overestimate the potential impact of exotic aquatic species. In particular, the zebra mussel, a small, fingernail-sized freshwater mollusk that was unintentionally introduced to North America via ballast water from a transoceanic vessel, has caused havoc, to say the least! Zebra mussels have significantly impacted electrical power generation stations, drinking water treatment plants, industrial facilities, navigation lock and dam structures, and recreational water bodies. In fact, zebra mussels cause an estimated $5 billion in economic damage annually, with this amount continuing to escalate. The zebra mussel problem is a national one, which impacts over half of the fifty states.
In light of the ecologically devastating and costly consequences of zebra mussels, it is imperative that there is increased education, research, and science-based policy.

As revealed in this year's contest, the use of real data sets means working with numerous variables and sometimes incomplete information. Additionally, the facts that need to be considered when trying to address issues surrounding the success or failure of zebra mussels to spread and survive are complex. Many important and complex environmental problems lie at the interface of disciplines and therefore require interdisciplinary approaches to be addressed. Interdisciplinary training is more than learning and acquiring the ability to talk different languages across disciplinary boundaries. It is an approach that promotes teamwork, innovation, creativity, and "out-of-the-box" thinking for solving "real-world" issues and problems. The interdisciplinary problem contest plays a vital role in this experiential training, bringing together teams of students that are focused for four days on "solving" a complex problem. The breadth of approaches that were used by the teams this year was truly impressive.

# Basis for Contest Question: Queen of the American Lakes, Lake George, NY

Until recently, it was thought that zebra mussels had not invaded Lake George, New York, the home of the Darrin Fresh Water Institute (DFWI). Since 1995, the DFWI had carried out a zebra mussel monitoring program in Lake George, where zebra mussel larvae had been observed in only two of the years. In 1997, larval zebra mussel numbers at 1 of 11 locations were comparable to those observed in the Hudson River, an area of high zebra mussel colonization. Despite the presence of larvae, no adult zebra mussels or settled juveniles had been observed.
In December of 1999, the situation changed when two divers from Bateaux Below, Inc., a nonprofit organization dedicated to underwater archaeology, found adult zebra mussels at the southern end of Lake George.

In response to the discovery of these mussels, the DFWI has been working intensively at the site to determine why adult zebra mussels were able to survive and reproduce, ways in which they could have been introduced to this location, and an appropriate action to eradicate them from this location.

The discovery of zebra mussels in Lake George was particularly surprising given the low calcium content and low pH of the lake; laboratory tank experiments had previously shown that zebra mussel larvae would not survive under these conditions. However, water chemistry analyses conducted at the site where the mussels were found revealed calcium and pH levels higher than those characteristic of the majority of Lake George. Further investigation revealed that water entering the lake from a nearby culvert was introducing stormwater runoff and groundwater into the lake, with calcium levels four times higher than those characteristic of the rest of the lake. In addition, the site contains numerous concrete and rock aggregates that are likely sources of additional calcium. Finally, there is a potential contribution of calcium from a concrete boardwalk that was built approximately a year before the discovery of zebra mussels at this location.

Introduction of zebra mussels may have occurred when boats contaminated from other lakes entered Lake George at the boat launch adjacent to the site. Introduction could also have occurred during the construction of the nearby boardwalk, via contaminated equipment. The exact mechanism(s) by which they were introduced may never be known.

After discovering zebra mussels in Lake George, the DFWI and Bateaux Below SCUBA divers carried out an extensive survey of the location to determine the size of the affected area.
The mussels were confined to a 15,000 square-foot area. After consultation with state and local agencies, it was agreed that hand-harvesting of the relatively low-density mussels was the best solution. Diving at the site to remove all visible zebra mussels began on April 2, 2000, and has been ongoing since. This approach has been extremely labor-intensive and, while hopefully effective, would not be feasible if multiple sites were found throughout Lake George.

Currently, a number of activities are being continued at Lake George, including monitoring and removal of any remaining zebra mussels at this site. Removal of any remaining zebra mussels is critical to reduce the likelihood of successful reproduction. In addition, mussels that are not removed may adapt to the lower calcium and pH conditions and spread into surrounding areas. Water samples are continuing to be checked for microscopic larvae and chemical parameters. This information will be used to evaluate the success of removal efforts, determine whether to extend the monitoring area beyond the present site, and better understand the local water chemistry.

As can be seen from the above "story," the questions asked in this year's contest—examining environmental factors that could influence the spread of zebra mussels and the potential impacts of human activities and policy issues—are real ones. I read with great interest the solutions provided by this year's teams. In fact, I plan to reread a number of them as we continue to work on these research questions.

# Proactive vs. Reactive

There are many ways in which we can be proactive against the potential threat and spread of zebra mussels. Perhaps of primary importance is education of individuals, through which it is hoped that the spread of zebra mussels can be reduced.
The primary mode by which zebra mussels are transported to new bodies of water, or to new locations within single water bodies, is human activity: mussels attached to boat bottoms, or veligers hitching a ride in bait buckets or scuba gear, for example. Therefore, education can be viewed as a preventive measure against the spread of zebra mussels.

A second critical activity is monitoring for the first appearance of zebra mussel larvae (veligers), young juvenile mussels, and adult zebra mussels. Of course, the earlier the detection, the better the opportunity to minimize a widespread colonization. Thus, monitoring programs are paramount in being proactive about zebra mussel infestations.

Third, and to the point of the contest question, there is a need for development of mathematical models that can be made robust using the numerous data sets that already exist for water bodies that either have or lack zebra mussels. These models may then be used to predict possible new infestations within water bodies potentially in jeopardy of zebra mussel introductions. At the time of the contest, only three such models had been published in the scientific literature. To have interdisciplinary student teams worldwide focus on this important issue was a fantastic opportunity.

Another aspect of this year's question related to policy. Too often, policy development is the result of being reactive. The most beneficial outcomes are likely to occur if we are proactive and policy decisions are put into place before, rather than after, there is a serious and sometimes uncorrectable problem. In order to facilitate this, scientists must accept the responsibility of effectively conveying scientific findings and results in "layman's" terms. It is only then that policy can be an informed decision influenced by the scientific fact finders.
# Data Sets for Competition

Just as the students in the contest worked in teams, the collection of data for this year's problem was also an example of collaboration and teamwork. The sharing of scientific information is critical when working on complex problems, where the saying that "the whole is greater than the sum of its parts" truly applies. Data were kindly provided for the contest by

- Cathi Eliopoulos of the Vermont Department of Environmental Conservation, for Lake A (Lake Champlain);
- Larry Eichler of the Darrin Fresh Water Institute, Rensselaer Polytechnic Institute, for Lake B (Lake George, NY); and
- Scott Kishbaugh of the New York Dept. of Environmental Conservation, for Lake C.

Zebra mussels were discovered in Lake Champlain in 1993 and have since continued to expand their distribution throughout the lake. In 1999, adult zebra mussels were found for the first time at the southern end of Lake George. This remains the only location in that lake where they have been observed to date, although the search for additional colonies continues. No zebra mussels have been found in Lake C, and this is likely to remain the case unless there are significant increases in calcium concentrations within the lake.

# Acknowledgment

I would like to thank Chris Arney and Gary Krahn for their invaluable contributions in brainstorming and the development of this year's problem.

# About the Author

![](images/650613df2d201de563e21486871ad0a70dad761f7b6ca210e53a418f175e0272.jpg)

Sandra A. Nierzwicki-Bauer received a B.A. and a Ph.D. in Microbiology at the University of New Hampshire. After a two-year postdoc at the University of Chicago, she joined Rensselaer Polytechnic Institute in 1985 as Assistant Professor of Biology.
She has served in a number of positions at RPI, including Chair of the Biology Department and most recently Interim Dean of the School of Science, and is now Professor of Biology and Director of the Darrin Fresh Water Institute.

"Although my formal training was as a microbiologist, it did not take long for me to recognize the power of interdisciplinary research and education, as well as the national importance that the zebra mussel problem was taking on." In 1995, when zebra mussels began encroaching closer and closer to the beloved Adirondacks and Lake George, she began a new program that focused on research, education, and outreach activities related to the pesky mollusk. Six years later, this exciting work continues. "Participating as a judge for this year's contest reminds me of one of the joys of working on interdisciplinary problems: having the best of both worlds ... being a student and a teacher."

# Reviews

Solow, Daniel. 1998. The Keys to Linear Algebra. Ashland, OH: BookMasters. ix + 543 pp, $49.95. ISBN 0-9644519-2-1.

The UMAP Journal has published many reviews of linear algebra ("l.a.") texts, including a review of the most important l.a. text of the last forty years, Gilbert Strang's Linear Algebra and Its Applications [1988; Cargal 1989], as well as reviews of the editions of his sophomore-level text Introduction to Linear Algebra [1993, 1998; Cargal 1994, 2000]. (It is time that we review some of the texts for a second undergraduate course in l.a.; any volunteers?)

Solow's The Keys to Linear Algebra is a very good book, better than 90% of the competing texts for a first course and possibly as good as any for self-study (l.a. may be required in all mathematics majors, but in some mathematical fields people often pick l.a. up on their own).

One of the virtues of the book is that it is unique; Solow was not following a template to produce a "cookie-cutter" imitation of another popular book, as is sometimes mandated by publishers.
If his book resembles any other book, it is indeed the first edition of Strang's Introduction to Linear Algebra. In particular, the first chapter of each book explores the mathematics of lines and planes in three-dimensional space. I know from classroom experience that many students have trouble with this topic; I assume that this is the reason Strang took it out of the second edition. Solow goes less far with this topic but spends more time on its development, pursuing, and in this topic surpassing, the main strengths of Strang's books: motivation and development.

Solow clearly wrote this book out of love of the subject and of teaching. He spends a great deal of time explaining the logic and reasoning of the proofs. There are copious examples, and he is very good at explaining applications.

Strang has self-published many of his books, including *Introduction to Linear Algebra*, because he wants to, and his reputation has sufficed to sell the books. There can be substantial financial advantages (as well as risks) in self-publishing. Out of choice or otherwise, Solow's book too is self-published (BookMasters is just the distributor), like some of his previous books. Given that Solow's text is as good as I say it is (and it is!), could it have been turned down by major and not-so-major publishers? I have no idea whether in fact it was—but it certainly could have been. I have been closely watching publishers for 25 years. I do not completely understand the publishing business, but I understand it well enough that I could give 15 reasons why publishers would turn down an excellent text such as this. But that is an essay for another day.

# References

Cargal, James M. 1989. Review of Strang [1988]. The UMAP Journal 10(3): 275-276.
_____. 1994. Review of Strang [1993]. The UMAP Journal 15(2): 181-184.
_____. 2000. Review of Strang [1998]. The UMAP Journal 21(2): 203-204.
Strang, Gilbert. 1988. Linear Algebra and Its Applications. 3rd ed.
San Diego, CA: Harcourt Brace Jovanovich.
_____. 1993. Introduction to Linear Algebra. Wellesley, MA: Wellesley-Cambridge Press. 2nd ed., 1998.
James M. Cargal, Math Dept., Troy State University Montgomery, Montgomery, AL 36121-0667; jmccargal@sprintmail.com.

Lesmoir-Gordon, Nigel, Will Rood, and Ralph Edney. Introducing Fractal Geometry. Cambridge, UK: Totem Books. 176 pp, $11.95. ISBN 1-84046-123-3.

Sardar, Ziauddin, and Iwona Abrams. 1998. Introducing Chaos. Cambridge, UK: Totem Books. 176 pp, $11.95. ISBN 1-84046-078-4.

I was late (39) getting into physics. I knew that Newtonian mechanics was the place to start, which was convenient since I was facing a classic problem in that area (I was then working in aerospace). From Newtonian mechanics, special relativity was a good second place to go. Special relativity is elementary mathematically and can be viewed as an extension of Newtonian mechanics. However, reading various books, one thing escaped me: I could not see the logical flow. Just where and how did special relativity start? Yes, I could tell one set of laws from the other; but I was going up the wall about foundations.

My dilemma was solved by a cartoon book, *Einstein for Beginners* [Schwartz and McGuinness 1979]. It explains that Einstein based everything on two postulates and that the main point of his first paper on special relativity is that these postulates lead to a consistent theory. Though this is a cartoon book, it compares favorably to the book I read in high school. Both ignored Einstein's work on Brownian motion, quantum mechanics, and general relativity. But the cartoon book does special relativity better and provides a far more complex picture of its subject. Even what I do not like in the cartoon book is in its favor: It is irritating in its leftist politics (a characteristic of the series), but the high-school book did not have the nerve to be provocative.
Einstein for Beginners is in the Pantheon series *XYZ* for Beginners. That series and the Icon Books (Totem Books in the U.S.) series Introducing *XYZ*, which includes the two books reviewed here, are the principal academic cartoon book series. Both series are strongest on philosophy and politics.

The second author of Introducing Fractal Geometry, Will Rood, has significant mathematical training. Probably most mathematicians would quibble with some points (was Weierstrass ever a student under Cauchy?). The book certainly treats Benoit Mandelbrot in herculean terms. But the book is competent. This may seem weak praise; but given its format, it is the easiest introduction to fractal geometry that there is. Fractal geometry is one of the most graphic of all geometries; if there is a mathematical topic that lends itself to a cartoon book, fractal geometry is it.

Introducing Chaos by Sardar and Abrams is another story. Here, the art, by Abrams, seems more suited to a book on philosophy. Sardar's background is along the lines of journalism, philosophy, and Islamic studies; he is the author of another Icon/Totem book, Introducing Mathematics. Presumably his value to the series is his sympathy to a multicultural viewpoint, a topic to which I will return. (His book *Thomas Kuhn and the Science Wars* [2000], comprising 74 small pages, seems to be a somewhat postmodern essay on the science wars.)

The very first heading of Introducing Chaos (p. 3) is "Ying, Yang, and Chaos." On p. 5, we are informed that "Chaos theory is a new and exciting field of scientific inquiry." On p. 17, "Non-linear equations, on the other hand, cannot be solved." P. 19: "Most forces in real life are nonlinear. So why have we not discovered this before? . . . Galileo (1564-1642), an Italian physicist, disregarded small nonlinearities [viz., friction] in order to get neat results.
Since the advent of 'modern' Western science, we have been living in a world which acts as if the platypus was the only animal in existence."

Poincaré's results on the three-body problem are now considered an important landmark in the history of chaos theory, but his achievement is not mentioned in this volume. Instead, we are told on p. 23 that "[W]e have 'solved' the three-body problem by demonstrating that the orbits are inherently unpredictable. Such a solution would have been considered nothing less than sacrilege a few years ago." The middle of the book may be better, but at the end we return to this false East/West dichotomy. We learn on p. 160: "Chaos theory and complexity are tools of understanding. But these new sciences contain understanding that has been indigenous to non-Western societies." If this understanding has ever been translated into mathematical form, we are not told about it, so it is impossible not to be extremely skeptical. Given the book's hype, its questionable mathematics, science, and history, and lastly its clichéd multiculturalism, I cannot recommend it. In fact, I find it somewhat offensive.

I speak of multiculturalism as it appears in science and mathematics education, where it precedes the current era in which multiculturalism in a broader sense has become a component of academia. In the 1970s, the idea of a close link between quantum physics and Eastern philosophy was very trendy [Shermer 2001]. In the 1980s and 1990s, this phenomenon recurred with fuzzy logic (which goes back at least to the early 1970s). In the best-seller Fuzzy Thinking: The New Science of Fuzzy Logic [1994], Bart Kosko stresses the point that whereas scientists in the West think in binary terms, Eastern philosophy is more subtle and is more compatible with fuzzy logic and its fuzzy shadings of truth.

My problem with all this is that the stereotype of Western scientists being binary—and linear, and all that—is utter nonsense.
The history of Western science shatters this stereotype. Similarly, Eastern philosophy and religion are just that: philosophy and religion, and they do not bear comparison with Western science but with Western philosophy and religion. The Eastern philosophy that is presented in books like Introducing Chaos and Kosko's Fuzzy Thinking... is superficial and in my view glib and patronizing. The corresponding technical discussion invariably seems superficial as well; for example, Kosko claims that fuzzy logic and fuzzy algorithms are a great new development in technology. We get a bit of philosophy; we learn that Kosko is an accomplished fellow, and at one point we learn that he had a particularly brilliant idea while sitting in a hot tub. As to fuzzy algorithms, we get remarkably little detail.

Last, a little disclosure. I am conservative by most standards and live in the state of Alabama, so one might suppose that I am hostile to Eastern philosophy and religion. On the contrary, I have an affinity for it going back to 1958, when I was a boy of seven in Bangkok. At that age, I was too young for the ideas, but one should never underestimate the "feel" of a culture and the importance of that feel. (In fairness, I should mention that living in a society is not by itself sufficient for exposure to the culture, and some kids are resistant to other cultures by the ripe age of six.)

When we look at multiculturalism in the recent sense, I am struck by the fact that students who are exposed to it do not seem to know more about other cultures than earlier generations did. As a student and as a professor, I have found that what earns the respect of foreign students is knowledge of their countries. But to get a knowledge of international politics, one must look beyond the extremely shallow coverage of Newsweek, Time, the network television news, and all but a handful of America's daily newspapers.

# References

Kosko, Bart. 1994.
Fuzzy Thinking: The New Science of Fuzzy Logic. New York: Hyperion.
Sardar, Ziauddin. 2000. Thomas Kuhn and the Science Wars. Cambridge, U.K.: Icon Books.
Schwartz, Joseph, and Michael McGuinness. 1979. Einstein for Beginners. New York: Pantheon Books.
Shermer, Michael. 2001. Starbucks in the Forbidden City: Eastern and Western science are put to political uses in both cultures. Scientific American (July 2001): 34-35.
James M. Cargal, Math Dept., Troy State University Montgomery, Montgomery, AL 36121-0667; jmccargal@sprintmail.com.

# Annual Index for Vol. 22, 2001

# Author Index

Ackerson, Bruce, Dennis Bertholf, James Choike, Emily Stanley, and John Wolfe. Red and blue laser CDs: How much data can they hold? 22(2): 157-179.
Acknowledgments. 22(4): 435.
Adams, Matthew R. See Howard, Nicholas J.
Annual Index. 22(4): 431-434.
Arney, David C. "Chris." Introduction: More tools for the toolbox. UMAP/ILAP Modules 2000-01: Tools for Teaching: v-vii.
_____, and John H. "Jack" Grubbs. Results of the 2001 Interdisciplinary Contest in Modeling. 22(4): 355-366.
Beery, Janet L. See Selco, Jodye I.
Bertholf, Dennis. See Ackerson, Bruce.
Bertorelli, Joseph. Insolation. 22(1): 63-82.
Black, Kelly. Author-Judge's Commentary: The Outstanding Bicycle Wheel Papers. 22(3): 253-256.
Campbell, Paul J. Acknowledgments. 22(4): 435.
_____. An APt future. 22(1): 1-2.
_____. ILAPs join UMAPs. 22(2): 93-94.
Cargal, James M. The problem with algebraic models of marriage and kinship structure. 22(4): 345-353.
Charlesworth, Jonathan David, Finale Pankaj Doshi, and Joseph Edgar Gonzalez. The crowd before the storm. 22(3): 291-299.
Chidambaran, N.K., and John Liukkonen. The Lagniappe Fund: The story of diversification and asset allocation. UMAP/ILAP Modules 2000-01: Tools for Teaching: 87-124.
Choike, James. See Ackerson, Bruce.
Chun, Deborah A. See Schubmehl, Michael P.
De Wet, W.D.V., D.F. Malan, and C. Mumbeck. Spokes or discs? 22(3): 211-223.
Dickey, Adam S. See Houmand, Corey R.
Doshi, Finale Pankaj. See Charlesworth, Jonathan David.
Errata. 22(4): 436.
Flynn, Michael B., Eamonn T. Long, and William Whelan-Curtin. A systematic technique for optimal bicycle wheel selection. 22(3): 241-252.
Garfunkel, Solomon A. The face of things to come. 22(3): 185-186.
Giordano, Frank. Results of the 2001 Contest in Mathematical Modeling. 22(3): 187-210.
Gonzalez, Joseph Edgar. See Charlesworth, Jonathan David.
Gossett, Nathan, Barbara Hess, and Michael Page. Project H.E.R.O.: Hurricane Emergency Route Optimization. 22(3): 257-269.
Grubbs, John H. "Jack." See Arney, David C. "Chris."

Guide for Authors. 22(1): 91-92; UMAP/ILAP Modules 2000-01: Tools for Teaching: 181-182.
Hanusa, Christopher, Ari Nieh, and Matthew Schnaider. Jammin' with Floyd: A traffic flow analysis of South Carolina hurricane evacuation. 22(3): 301-310.
Hess, Barbara. See Gossett, Nathan.
Houmand, Corey R., Andrew D. Pruett, and Adam S. Dickey. Please move quickly and quietly to the nearest freeway. 22(3): 323-336.
Howard, Nicholas J. Selection of a bicycle wheel type. 22(3): 225-239.
Ilias, Nasreen A., Marie C. Spong, and James F. Tucker. Waging war against the zebra mussel. 22(4): 399-413.
Isihara, Paul Atsusi, and Kevin Schoonmaker. An elementary introduction to relational database theory. 22(1): 27-63.
Johnson, Gary. See Vanisko, Marie.
Kennedy, Matthew Glen. See Stier, David E.
Kim, Tonya. See Nichols, Nancy.
Kolasa, William E. See Wagner, Mark.
Kopp, Kenneth. See Wagner, Mark.
Krahn, Gary. Judge's Commentary: The Outstanding Zebra Mussel Papers. 22(4): 415-420.
LaViollette, Marcy A. See Schubmehl, Michael P.
Leisenring, Marc Alan. See Stier, David E.
Liukkonen, John. See Chidambaran, N.K.
Long, Eamonn T. See Flynn, Michael B.
Malan, D.F. See De Wet, W.D.V.
Malone, Samuel W., Carl A. Miller, and Daniel B. Neill. Traffic flow models and the evacuation problem. 22(3): 271-290.
MathServe 2001: Results of the Third Annual Competition. 22(2): 95.
Matthews, A.P. BCH codes for error correction in transmission of binary data. 22(2): 129-156.
_____. Polynomial algebra for correction in transmission of binary data. 22(1): 3-25.
Miceli, Brian. See O'Neil, Thomas.
Miller, Carl A. See Malone, Samuel W.
Miller, Zachariah R. See Howard, Nicholas J.
Morris, Brian. See O'Neil, Thomas.
Mumbeck, C. See De Wet, W.D.V.
Myers, Joseph D. How to develop an ILAP. UMAP/ILAP Modules 2000-01: Tools for Teaching: 21-29.
Neill, Daniel B. See Malone, Samuel W.
Nichols, Nancy, Tonya Kim, and others. Traffic in the Center of the Universe. 22(2): 97-110.
Nieh, Ari. See Hanusa, Christopher.

Nierzwicki-Bauer, Sandra A. Author's Commentary: The Outstanding Zebra Mussel Papers. 22(4): 421-425.
O'Neil, Thomas, Brian Miceli, Brian Morris, and Ryan Tully-Doyle. Wind in their wings: The California Condor Restoration Project. 22(2): 111-128.
Page, Michael. See Gossett, Nathan.
Parker, Mark. Judge's Commentary: The Outstanding Hurricane Evacuation Papers. 22(3): 337-343.
Project INTERMATH Staff. Integrated and interdisciplinary curricula. UMAP/ILAP Modules 2000-01: Tools for Teaching: 5-19.
_____. Overview of Project INTERMATH: Seeking cultural change. UMAP/ILAP Modules 2000-01: Tools for Teaching: 1-4.
Pruett, Andrew D. See Houmand, Corey R.
Reviews. 22(1): 89-90; 22(4): 429-430.
Reviews Index. 22(4): 429-430.
Robertson, John S. The "lexiconic" sections. 22(1): 83-87.
Schnaider, Matthew. See Hanusa, Christopher.
Schubmehl, Michael P., Marcy A. LaViollette, and Deborah A. Chun. A multiple regression model to predict zebra mussel population growth. 22(4): 367-383.
Selco, Jodye I., and Janet L. Beery. Saving a drug poisoning victim. UMAP/ILAP Modules 2000-01: Tools for Teaching: 31-46.
_____. Pollution police. UMAP/ILAP Modules 2000-01: Tools for Teaching: 145-179.
Spong, Marie C. See Ilias, Nasreen A.
Stanley, Emily. See Ackerson, Bruce.
Stier, David E., Marc Alan Leisenring, and Matthew Glen Kennedy.
Identifying potential zebra mussel colonization. 22(4): 385-397.
Tucker, James F. See Ilias, Nasreen A.
Tully-Doyle, Ryan. See O'Neil, Thomas.
Vanisko, Marie, and Gary Johnson. Managing health insurance premiums. UMAP/ILAP Modules 2000-01: Tools for Teaching: 47-60.
Wagner, Mark, Kenneth Kopp, and William E. Kolasa. Blowin' in the wind. 22(3): 311-321.
Whelan-Curtin, William. See Flynn, Michael B.
Wolfe, John. See Ackerson, Bruce.
Yu, Lei, and Della D. Bell. Travel demand forecasting and analysis for South Texas. UMAP/ILAP Modules 2000-01: Tools for Teaching: 125-144.
Yu, Lei, and Carrington Steward. Ramp metering of freeways. UMAP/ILAP Modules 2000-01: Tools for Teaching: 63-86.

# Reviews Index

(Names of authors are in plain type; names of reviewers are in bold.)
Abrams, Iwona. See Sardar, Ziauddin.
Cederberg, Judith N. 2001. A Course in Modern Geometries. 2nd ed. **James M. Cargal.** 22(2): 181-184.
Edney, Ralph. See Lesmoir-Gordon, Nigel.
Kalman, Dan. 1997. Elementary Mathematical Models: Order Aplenty and a Glimpse of Chaos. **James M. Cargal.** 22(1): 89-90.
Lesmoir-Gordon, Nigel, Will Rood, and Ralph Edney. Introducing Fractal Geometry. **James M. Cargal.** 22(4): 428-430.
Mooney, Douglas, and Randall Swift. 1999. A Course in Mathematical Modeling. **James M. Cargal.** 22(1): 89-90.
Rood, Will. See Lesmoir-Gordon, Nigel.
Sardar, Ziauddin, and Iwona Abrams. 1998. Introducing Chaos. **James M. Cargal.** 22(4): 428-430.
Solow, Daniel. 1998. The Keys to Linear Algebra. **James M. Cargal.** 22(4): 427-428.

# Acknowledgments

I would like to express my great appreciation for the help of the associate editors, whose names appear on the masthead. Not only do they do the bulk of the work of evaluating manuscripts, but they also solicit new works and encourage and guide potential authors.

I am also indebted to the additional individuals listed below who have reviewed manuscripts during the past year.
Their careful evaluation and judgment have enhanced the quality of the articles and Modules that have appeared in the Journal and in Tools for Teaching. (Reviewers of some papers considered for Vol. 22 of the Journal were acknowledged already in Vol. 21, No. 4. Reviewers of the ILAP Modules in UMAP/ILAP Modules 2000-01: Tools for Teaching are acknowledged on the frontispiece of the corresponding ILAP Module.)

The Journal offers many opportunities for participation in COMAP's work. If you would like to

- referee manuscripts—please contact me;
- review books, software, or films—please contact the Reviews Editor;
- write a self-contained expository essay about an area of mathematics, however grand or small—an era, a concept, a theorem, an idea, a term—please contact the On Jargon Editor;
- contribute an Interdisciplinary Lively Applications Project (ILAP) Module—please contact the ILAP Editor;
- contribute an article, UMAP Module, or Minimodule—please contact me;
- encourage and stimulate colleagues to prepare and submit suitable material—please contact me about being nominated to join the Editorial Board.

Contact information for the associate editors in charge of Reviews, of On Jargon, and of ILAPs, as well as my own information, is on the masthead of every issue.

Finally, the associate editors and I would like to thank the Journal's authors, without whom none of this would be possible, and its readers, whose benefit and enjoyment are the culmination of our enterprise.

Paul J. Campbell, Editor

N.K. Chidambaran, Tulane University

David Dobson, Beloit College

David Ellis, Beloit College

John R. Jungck, Beloit College

L. Richardson King, Davidson College

Joe Malkevitch, York College CUNY

Rama Viswanathan, Beloit College

Edgar Weippl, Beloit College and Software Competence Center, Hagenberg, Austria

# Errata

Vol. 15, No. 4

p. 312, l. 3: The multiplication sign should be a division sign.

Vol. 16, No. 4

p. 353, Table 1: The table claims that there is no win in the game Tchouka Ruma for 8 holes and 4 stones per hole, but Jeroen Donkers of the University of Maastricht (Netherlands) communicates the solution: 5 8 2 5 1 8 3 6 8 7 2 8 2 8 4 8 7 8 5 8 6 8, which can be tried at his Website at http://fanth.cs.unimaas.nl/games/ruma.

UMAP/ILAP Modules 2000-01: Tools for Teaching

p. iii, l. 9: Joe Myers $\longrightarrow$ Joseph D. Myers

p. iii, ll. 13-14: Mark Smillie, Charlotte Jones, David Westlake, and Mary Pietrukowic were early reviewers of this ILAP but should not be listed as authors.

p. 31, left sidebar: Berry $\longrightarrow$ Beery

Vol. 22, No. 1

back cover, page numbers: 29 $\longrightarrow$ 27, 65 $\longrightarrow$ 63

p. 40, l. 22: a relation $R \longrightarrow$ a relation $r$

p. 76, l. 11: Figure 7 $\longrightarrow$ Figure 9

p. 79, l. 7: (91) $\longrightarrow$ (9a)

Vol. 22, No. 2

p. 164, Figure 5: The beam waist should be denoted by $\ell$ instead of by a sans serif L (which resembles a vertical bar).
Ward + +Project Manager + +Roland Cheyne + +Copy Editors + +Seth A. Maislin + +Pauline Wright + +Distribution Manager + +Kevin Darcy + +Production Secretary + +Gail Wessell + +Graphic Designer + +Daiva Kiliulis + +# AP Journal + +Vol. 22, No. 3 + +# Editor + +Paul J. Campbell + +Campus Box 194 + +Beloit College + +700 College St. + +Beloit, WI 53511-5595 + +campbell@beloit.edu + +# Associate Editors + +Don Adolphson + +David C. "Chris" Arney + +Ron Barnes + +Arthur Benjamin + +James M. Cargal + +Murray K. Clayton + +Courtney S. Coleman + +Linda L. Deneen + +James P. Fink + +Solomon A. Garfunkel + +William B. Gearhart + +William C. Giauque + +Richard Haberman + +Charles E. Lienert + +Walter Meyer + +Yves Nievergelt + +John S. Robertson + +Garry H. Rodrigue + +Ned W. Schillow + +Philip D. Straffin + +J.T. Sutcliffe + +Donna M. Szott + +Gerald D. Taylor + +Maynard Thompson + +Ken Travers + +Robert E.D. "Gene" Woolsey + +Brigham Young University + +The College of St. Rose + +University of Houston-Downtown + +Harvey Mudd College + +Troy State University Montgomery + +University of Wisconsin—Madison + +Harvey Mudd College + +University of Minnesota, Duluth + +Gettysburg College + +COMAP, Inc. + +California State University, Fullerton + +Brigham Young University + +Southern Methodist University + +Metropolitan State College + +Adelphi University + +Eastern Washington University + +Georgia College and State University + +Lawrence Livermore Laboratory + +Lehigh Carbon Community College + +Beloit College + +St. Mark's School, Dallas + +Comm. College of Allegheny County + +Colorado State University + +Indiana University + +University of Illinois + +Colorado School of Mines + +# MEMBERSHIP PLUS FOR INDIVIDUAL SUBSCRIBERS + +Individuals subscribe to The UMAP Journal through COMAP's Membership Plus. 
This subscription includes print copies of quarterly issues of The UMAP Journal, our annual collection UMAP Modules: Tools for Teaching, our organizational newsletter Consortium, on-line membership that allows members to search our on-line catalog, download COMAP print materials, and reproduce for use in their classes, and a $10\%$ discount on all COMAP materials.

(Domestic) #2120 $75

(Outside U.S.) #2121 $85

# INSTITUTIONAL PLUS MEMBERSHIP SUBSCRIBERS

Institutions can subscribe to the Journal through either Institutional Plus Membership, Regular Institutional Membership, or a Library Subscription. Institutional Plus Members receive two print copies of each of the quarterly issues of The UMAP Journal, our annual collection UMAP Modules: Tools for Teaching, our organizational newsletter Consortium, on-line membership that allows members to search our on-line catalog, download COMAP print materials, and reproduce for use in any class taught in the institution, and a $10\%$ discount on all COMAP materials.

(Domestic) #2170 $395

(Outside U.S.) #2171 $415

# INSTITUTIONAL MEMBERSHIP SUBSCRIBERS

Regular Institutional members receive only print copies of The UMAP Journal, our annual collection UMAP Modules: Tools for Teaching, our organizational newsletter Consortium, and a $10\%$ discount on all COMAP materials.

(Domestic) #2140 $165

(Outside U.S.) #2141 $185

# LIBRARY SUBSCRIPTIONS

The Library Subscription includes quarterly issues of The UMAP Journal and our annual collection UMAP Modules: Tools for Teaching and our organizational newsletter Consortium.

(Domestic) #2130 $140

(Outside U.S.) #2131 $160

To order, send a check or money order to COMAP, or call toll-free 1-800-77-COMAP (1-800-772-6627).
+

The UMAP Journal is published quarterly by the Consortium for Mathematics and Its Applications (COMAP), Inc., Suite 210, 57 Bedford Street, Lexington, MA, 02420, in cooperation with the American Mathematical Association of Two-Year Colleges (AMATYC), the Mathematical Association of America (MAA), the National Council of Teachers of Mathematics (NCTM), the American Statistical Association (ASA), the Society for Industrial and Applied Mathematics (SIAM), and The Institute for Operations Research and the Management Sciences (INFORMS). The Journal acquaints readers with a wide variety of professional applications of the mathematical sciences and provides a forum for the discussion of new directions in mathematical education (ISSN 0197-3622).

Second-class postage paid at Boston, MA

and at additional mailing offices.

Send address changes to:

The UMAP Journal

COMAP, Inc.

57 Bedford Street, Suite 210, Lexington, MA 02420

Copyright 2001 by COMAP, Inc. All rights reserved.

# Vol. 22, No. 3 2001

# Table of Contents

# Publisher's Editorial

The Face of Things to Come

Solomon A. Garfunkel 185

# Modeling Forum

Results of the 2001 Mathematical Contest in Modeling 187

Frank Giordano

Spokes or Discs?

W.D.V. De Wet, D.F. Malan, and C. Mumbeck 211

Selection of a Bicycle Wheel Type

Nicholas J. Howard, Zachariah R. Miller, and Matthew R. Adams 225

A Systematic Technique for Optimal Bicycle Wheel Selection

Michael B. Flynn, Eamonn T. Long, and William Whelan-Curtin 241

Author-Judge's Commentary: The Outstanding Bicycle Wheel Papers

Kelly Black 253

Project H.E.R.O.: Hurricane Emergency Route Optimization

Nathan Gossett, Barbara Hess, and Michael Page 257

Traffic Flow Models and the Evacuation Problem

Samuel W. Malone, Carl A. Miller, and Daniel B. Neill 271

The Crowd Before the Storm

Jonathan David Charlesworth, Finale Pankaj Doshi, and Joseph Edgar Gonzalez
291

Jammin' with Floyd: A Traffic Flow Analysis of South Carolina Hurricane Evacuation

Christopher Hanusa, Ari Nieh, and Matthew Schnaider 301

Blowin' in the Wind

Mark Wagner, Kenneth Kopp, and William E. Kolasa 311

Please Move Quickly and Quietly to the Nearest Freeway

Corey R. Houmand, Andrew D. Pruett, and Adam S. Dickey 323

Judge's Commentary: The Outstanding Hurricane Evacuation Papers

Mark Parker 337

![](images/6737061072c40ec4556b0da9c61404b65c1d4ec639fe235f3c11cb83c5a2e5e8.jpg)

# Publisher's Editorial

# The Face of Things to Come

Solomon A. Garfunkel

Executive Director

COMAP, Inc.

57 Bedford St., Suite 210

Lexington, MA 02420

s.garfunkel@mail.comap.com

Typically, I use this space to write once a year about the new activities at COMAP. And this has been an amazing year. We have sent three new undergraduate books to press, all of which followed from work we had done on major NSF projects. Brooks/Cole has published Mathematics Methods and Modeling for Today's Mathematics Classroom: A Contemporary Approach to Teaching Grades 7-12, by John Dossey, Frank Giordano, and others (ISBN 0-534-36604-X). This book, designed for use in preservice programs for high school teachers, is a direct result of a Division of Undergraduate Education grant from NSF. The idea behind this grant was to help prepare future high school teachers, both in content and in pedagogy, for the changes in curricula, technology, and assessment that have followed the implementation of the NCTM Standards.

In addition, W.H. Freeman has published two new COMAP texts: Precalculus: Modeling Our World (ISBN 0-7167-4359-0) and College Algebra: Modeling Our World (ISBN 0-7167-4457-0). Based on our secondary series, Mathematics: Modeling Our World (M:MOW), these texts represent activity-based, modeling-driven approaches to entry-level collegiate mathematics. We hope that they will set a new standard for these courses.
+

And speaking of new standards, we are also in the process of completing the sixth edition of *For All Practical Purposes*. In this new edition, we greatly expand our coverage of election/voting theory, not surprisingly taking advantage of the data and interest surrounding the 2000 presidential race. We are also adding a section on the human genome, reinforcing the fact that new and important applications of mathematics are being discovered every day.

But perhaps this year's most important accomplishments are the ideas we have generated and the proposals that we have written. I have often joked that I would like to found two new journals: the Journal of Funded Proposals and the Journal of Unfunded Proposals, if for no better reason than at least I would have a great many more publications. As I write this editorial, I do not know into which category our three new NSF proposals will fall, but I would like to share their contents with you. It is my fondest hope that these will represent the face of COMAP to come.

The first proposal is to revise M:MOW. Our four-year comprehensive reform secondary school series was first published in 1998. In the years since publication, we have learned a great deal from early adopters about ways to help them customize the texts to meet local needs, including new standardized tests. It is time to produce a second edition, which we hope will have widespread appeal.

The second proposal is to produce a new liberal arts calculus text with accompanying video and web support. COMAP has not undertaken a major video project in some time and we feel that a series of videos visually demonstrating the importance and applicability of the calculus is a natural extension of our previous efforts. Moreover, we will (if funded) prepare shorter video segments for ease of use on the Web.
+ +The last proposal extends the idea of making materials available on the Web one step further with an ambitious program to produce a series of Web-based courses for present and future teachers, K-12. Again, we plan to make extensive use of new video as well as the interactivity of the Web. Here we hope to use master teachers, with expertise in both content and methods, and use the power of the Internet to reach classrooms and teachers all across the country. + +I do not know whether at this time next year we will be working on all of these projects. But I do know that we will continue our efforts to create new materials and support all of you, who make reform of mathematics education possible. + +# About the Author + +Sol Garfunkel received his Ph.D. in mathematical logic from the University of Wisconsin in 1967. He was at Cornell University and at the University of Connecticut at Storrs for eleven years and has dedicated the last 20 years to research and development efforts in mathematics education. He has been the Executive Director of COMAP since its inception in 1980. + +He has directed a wide variety of projects, including UMAP (Undergraduate Mathematics and Its Applications Project), which led to the founding of this Journal, and HiMAP (High School Mathematics and Its Applications Project), both funded by the NSF. For Annenberg/CPB, he directed three telecourse projects: For All Practical Purposes (in which he also appeared as the on-camera host), Against All Odds: Inside Statistics, and In Simplest Terms: College Algebra. He is currently co-director of the Applications Reform in Secondary Education (ARISE) project, a comprehensive curriculum development project for secondary school mathematics. + +# Modeling Forum + +# Results of the 2001 Mathematical Contest in Modeling + +Frank Giordano, MCM Director + +COMAP, Inc. 
+

57 Bedford St., Suite 210

Lexington, MA 02420

f.giordano@mail.comap.com

# Introduction

A total of 496 teams of undergraduates, from 238 institutions in 11 countries, spent the second weekend in February working on applied mathematics problems in the 17th Mathematical Contest in Modeling (MCM) and in the 3rd Interdisciplinary Contest in Modeling (ICM). This issue of The UMAP Journal reports on the MCM contest; results and Outstanding papers from the ICM contest will appear in the next issue, Vol. 22, No. 4.

The 2001 MCM began at 12:01 A.M. on Friday, Feb. 9 and officially ended at 11:59 P.M. on Monday, Feb. 12. During that time, teams of up to three undergraduates were to research and submit an optimal solution for one of two open-ended modeling problems. The 2001 MCM marked the inaugural year for the new online contest, and it was a great success. Students were able to register, obtain contest materials, download the problems at the appropriate time, and enter data through COMAP's MCM website.

Each team had to choose one of the two contest problems. After a weekend of hard work, solution papers were sent to COMAP on Monday. Nine of the top papers appear in this issue of The UMAP Journal.

Results and winning papers from the first sixteen contests were published in special issues of Mathematical Modeling (1985-1987) and The UMAP Journal (1985-2000). The 1994 volume of Tools for Teaching, commemorating the tenth anniversary of the contest, contains all 20 problems used in the first ten years of the contest and a winning paper for each. Limited quantities of that volume and of the special MCM issues of the Journal for the last few years are available from COMAP.

This year's Problem A was about bicycle wheels and the competitive edge that the choice of wheel may give in a race. Before any contest, professional cyclists make educated guesses about which one of two basic types of wheels to choose for any given competition.
The team's Sports Director has asked them to come up with a better system to help determine which kind of wheel—wire spoke or solid disk—should be used for any given race course.

Problem B addressed the evacuation of Charleston, South Carolina during 1999's Hurricane Floyd. Maps, population data, and other specific details were given to the teams. They were tasked with constructing a model to investigate potential strategies. In addition, they were asked to submit a news article that would be used to explain their plan to the public.

# Problem A: The Bicycle Wheel Problem

# Introduction

Cyclists have different types of wheels they can use on their bicycles. The two basic types of wheels are those constructed using wire spokes and those constructed of a solid disk (see Figure 1). The spoked wheels are lighter but the solid wheels are more aerodynamic. A solid wheel is never used on the front for a road race but can be used on the rear of the bike.

![](images/edb76deb0fb32ce64621345ebe894a5475dc6c2d98c7d5cd822e3eaad804917f.jpg)
Figure 1. Solid wheel (left) and spoked wheel (right).

![](images/875ff578e91c30bdc060f73d5453b6688b7e7f965bdf338d606481de448a4538.jpg)

Professional cyclists look at a racecourse and make an educated guess as to what kind of wheels should be used. The decision is based on the number and steepness of the hills, the weather, wind speed, the competition, and other considerations.

The directeur sportif of your favorite team would like to have a better system in place and has asked your team for information to help determine what kind of wheel should be used for a given course.

The directeur sportif needs specific information to help make a decision and has asked your team to accomplish the tasks listed below. For each of the tasks, assume that the same spoked wheel will always be used on the front but that there is a choice of wheels for the rear.
+

# Task 1

Provide a table giving the wind speed at which the power required for a solid rear wheel is less than for a spoked rear wheel. The table should include the wind speeds for different road grades starting from $0\%$ to $10\%$ in $1\%$ increments. (Road grade is defined to be the ratio of the total rise of a hill divided by the length of the road.) A rider starts at the bottom of the hill at a speed of $45\mathrm{kph}$, and the deceleration of the rider is proportional to the road grade. A rider will lose about $8\mathrm{kph}$ for a $5\%$ grade over $100\mathrm{m}$.

# Task 2

Provide an example of how the table could be used for a specific time trial course.

# Task 3

Determine if the table is an adequate means for deciding on the wheel configuration and offer other suggestions as to how to make this decision.

# Problem B: The Hurricane Evacuation Problem

Evacuating the coast of South Carolina ahead of the predicted landfall of Hurricane Floyd in 1999 led to a monumental traffic jam. Traffic slowed to a standstill on Interstate I-26, which is the principal route going inland from Charleston to the relatively safe haven of Columbia in the center of the state. What is normally an easy two-hour drive took up to 18 hours to complete. Many cars simply ran out of gas along the way. Fortunately, Floyd turned north and spared the state this time, but the public outcry is forcing state officials to find ways to avoid a repeat of this traffic nightmare.

The principal proposal put forth to deal with this problem is the reversal of traffic on I-26, so that both sides, including the coastal-bound lanes, have traffic headed inland from Charleston to Columbia. Plans to carry this out have been prepared (and posted on the Web) by the South Carolina Emergency Preparedness Division. Traffic reversal on principal roads leading inland from Myrtle Beach and Hilton Head is also planned.

A simplified map of South Carolina is shown in Figure 2.
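As an aside on Problem A: the speed-loss rule in Task 1 can be calibrated from its single stated data point. The sketch below is one possible reading, not part of the official problem statement: it assumes the speed lost is linear in both grade and distance traveled, so losing 8 kph over 100 m at a 5% grade fixes the constant at 1.6 kph per unit grade per metre. The function name and parameters are illustrative only.

```python
# Hypothetical calibration of Task 1's speed-loss rule (an assumed
# simplification, not the official model): speed lost = C * grade * distance,
# with C fixed by the stated data point of 8 kph lost over 100 m at 5% grade.
C = 8.0 / (0.05 * 100)  # about 1.6 kph per unit grade per metre


def speed_after_climb(v0_kph: float, grade: float, distance_m: float) -> float:
    """Rider speed (kph) after climbing distance_m metres at the given grade."""
    return max(v0_kph - C * grade * distance_m, 0.0)


# Reproduce the calibration point: start at 45 kph, 5% grade, 100 m.
print(round(speed_after_climb(45, 0.05, 100), 1))  # 37.0
# A 10% grade over the same distance costs twice as much speed.
print(round(speed_after_climb(45, 0.10, 100), 1))  # 29.0
```

Under this reading, the deceleration constant drops out of the wheel comparison itself; it only determines the rider's speed profile on a climb, which in turn feeds the power calculation for each wheel type.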
Charleston has approximately 500,000 people, Myrtle Beach has about 200,000 people, and another 250,000 people are spread out along the rest of the coastal strip. (More accurate data, if sought, are widely available.) + +![](images/49673acedbfe5e6e72c2f963aa03c221b25c0bc080c03bf08d7a6650b346287b.jpg) +Figure 2. Highways in South Carolina. + +The interstates have two lanes of traffic in each direction except in the metropolitan areas, where they have three. Columbia, another metro area of around 500,000 people, does not have sufficient hotel space to accommodate the evacuees (including some coming from farther north by other routes); so some traffic continues outbound on I-26 towards Spartanburg, on I-77 north to Charlotte, and on I-20 east to Atlanta. In 1999, traffic leaving Columbia going northwest was moving only very slowly. + +Construct a model for the problem to investigate what strategies may reduce the congestion observed in 1999. Here are the questions that need to be addressed: + +1. Under what conditions does the plan for turning the two coastal-bound lanes of I-26 into two lanes of Columbia-bound traffic, essentially turning the entire I-26 into one-way traffic, significantly improve evacuation traffic flow? +2. In 1999, the simultaneous evacuation of the state's entire coastal region was ordered. Would the evacuation traffic flow improve under an alternative strategy that staggers the evacuation, perhaps county by county over some time period consistent with the pattern of how hurricanes affect the coast? +3. Several smaller highways besides I-26 extend inland from the coast. Under what conditions would it improve evacuation flow to turn around traffic on these? +4. What effect would it have on evacuation flow to establish additional temporary shelters in Columbia, to reduce the traffic leaving Columbia? +5. In 1999, many families leaving the coast brought along their boats, campers, and motor homes. Many drove all of their cars. 
Under what conditions should there be restrictions on vehicle types or numbers of vehicles brought in order to guarantee timely evacuation? +6. It has been suggested that in 1999 some of the coastal residents of Georgia and Florida, who were fleeing the earlier predicted landfalls of Hurricane Floyd to the south, came up I-95 and compounded the traffic problems. How big an impact can they have on the evacuation traffic flow? + +Clearly identify what measures of performance are used to compare strategies. + +Required: Prepare a short newspaper article, not to exceed two pages, explaining the results and conclusions of your study to the public. + +# The Results + +The solution papers were coded at COMAP headquarters so that names and affiliations of the authors would be unknown to the judges. Each paper was then read preliminarily by two "triage" judges at Southern Connecticut State University (Problem A) or at the National Security Agency (Problem B). At the triage stage, the summary and overall organization are the basis for judging a paper. If the judges' scores diverged for a paper, the judges conferred; if they still did not agree on a score, a third judge evaluated the paper. + +Final judging took place at Harvey Mudd College, Claremont, California. The judges classified the papers as follows: + +The nine papers that the judges designated as Outstanding appear in this special issue of The UMAP Journal, together with commentaries. We list those teams and the Meritorious teams (and advisors) below; the list of all participating schools, advisors, and results is in the Appendix. + +
| Problem | Outstanding | Meritorious | Honorable Mention | Successful Participation | Total |
|---|---:|---:|---:|---:|---:|
| Bicycle Wheel | 3 | 27 | 58 | 127 | 215 |
| Hurricane Evacuation | 6 | 43 | 65 | 167 | 281 |
| Total | 9 | 70 | 123 | 294 | 496 |
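The classification counts above can be checked for internal consistency. This small sketch (illustrative only, not part of the contest report) verifies that each problem's four categories sum to its total, and that the two problems sum, column by column, to the overall row:

```python
# Consistency check for the paper-classification counts reported above.
# Columns: Outstanding, Meritorious, Honorable Mention,
# Successful Participation, Total.
rows = {
    "Bicycle Wheel":        [3, 27, 58, 127, 215],
    "Hurricane Evacuation": [6, 43, 65, 167, 281],
}
totals = [9, 70, 123, 294, 496]

# Each problem's first four categories sum to its Total column.
for name, counts in rows.items():
    assert sum(counts[:-1]) == counts[-1], name

# Column-wise, the two problems sum to the overall totals row.
assert [a + b for a, b in zip(*rows.values())] == totals
print("classification counts are internally consistent")
```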
+

# Outstanding Teams

Team Members

# Bicycle Wheel Papers

# Institution and Advisor

"Spokes or Discs?"

Stellenbosch University

Matieland, South Africa

Jan H. van Vuuren

W.D.V. De Wet

D.F. Malan

C. Mumbeck

"Selection of a Bicycle Wheel Type"

United States Military Academy

West Point, NY

Donovan D. Phillips

Nicholas J. Howard

Zachariah R. Miller

Matthew R. Adams

"A Systematic Technique for Optimal Bicycle Wheel Selection"

University College Cork

Cork, Ireland

James J. Grannell

Michael Flynn

Eamonn Long

William Whelan-Curtin

# Hurricane Evacuation Papers

"Project H.E.R.O.: Hurricane Evacuation Route Optimization"

Bethel College

St. Paul, MN

William M. Kinney

Nathan M. Gossett

Barbara A. Hess

Michael S. Page

"Traffic Flow Models and the Evacuation Problem"

Duke University

Durham, NC

David P. Kraines

Samuel W. Malone

Carl A. Miller

Daniel B. Neill

"The Crowd Before the Storm"

The Governor's School

Richmond, VA

John A. Barnes

Jonathan D. Charlesworth

Finale P. Doshi

Joseph E. Gonzalez

"Jammin' with Floyd: A Traffic Flow Analysis of South Carolina Hurricane Evacuation"

Harvey Mudd College

Claremont, CA

Ran Libeskind-Hadas

Christopher Hanusa

Ari Nieh

Matthew Schnaider

"Blowin' in the Wind"

Lawrence Technological University

Southfield, MI

Ruth G. Favro

Mark Wagner

Kenneth Kopp

William E. Kolasa

"Please Move Quickly and Quietly to the Nearest Freeway"

Wake Forest University

Winston-Salem, NC

Miaohua Jiang

Corey R. Houmand

Andrew D. Pruett

Adam S. Dickey

# Meritorious Teams

Bicycle Wheel Papers (27 teams)

Beijing University of Chemical Technology, Beijing, P.R. China (Jiang Guangfeng)

Beijing University of Chemical Technology, Beijing, P.R. China (Wenyan Yuan)

Brandon University, Brandon, Canada (Doug A.
Pickering)

California Polytechnic State University, San Luis Obispo, CA (Thomas O'Neil)

Harbin Engineering University, Harbin, P.R. China (Gao Zhenbin)

Harbin Institute of Technology, Harbin, P.R. China (Shang Shouting)

Harvey Mudd College, Claremont, CA (Michael E. Moody)

James Madison University, Harrisonburg, VA (James S. Sochacki)

Jilin University of Technology, Changchun, P.R. China (Fang Peichen)

John Carroll University, University Heights, OH (Angela S. Spalsbury)

Lafayette College, Easton, PA (Thomas Hill)

Lake Superior State University, Sault Sainte Marie, MI (J. Jaroma and D. Baumann)

Lewis and Clark College, Portland, OR (Robert W. Owens)

Southeast University, Nanjing, P.R. China (Chen En-shui)

Tianjin University, Tianjin, P.R. China (Dong Wenjun)

Trinity University, San Antonio, TX (Fred M. Loxsom)

United States Air Force Academy, USAF Academy, CO (Jim West)

University College Dublin, Dublin, Ireland (Peter Duffy)

University of Western Ontario, London, Canada (Peter H. Poole)

Washington University, St. Louis, MO (Hiro Mukai)

Westminster College, New Wilmington, PA (Barbara T. Faires) (two teams)

Wright State University, Dayton, OH (Thomas P. Svobodny)

Youngstown State University, Youngstown, OH (Thomas Smotzer)

Zhejiang University, Hangzhou, P.R. China (He Yong)

Zhejiang University, Hangzhou, P.R. China (Yang Qifan)

Zhongshan University, Guangzhou, P.R. China (Chen Zepeng)

# Hurricane Evacuation Papers (43 teams)

Beijing University of Posts & Telecommunications, Beijing, P.R. China (He Zuguo)

California Polytechnic State University, San Luis Obispo, CA (Thomas O'Neil)

Central South University, Changsha, P.R. China (Zheng Zhou-shun)

Clarion University, Clarion, PA (Jon A. Beal)

Dong Hua University, Shanghai, China (Ding Yongsheng)

East China University of Science & Technology, Shanghai, P.R. China (Liu Zhaohui)

Gettysburg College, Gettysburg, PA (Sharon L.
Stephenson) + +Hillsdale College, Hillsdale, MI (Robert J. Hesse) + +James Madison University, Harrisonburg, VA (Caroline Smith) + +Jiading No. 1 High School, Jiading, P.R. China (Wang Yu) + +MIT, Cambridge, MA (Dan Rothman) + +N.C. School of Science and Mathematics, Durham, NC (Dot Doyle) + +National University of Defence Technology, Changsha, P.R. China (Wu Mengda) + +National University of Singapore, Singapore, Singapore (Lim Leong Chye Andrew) + +North Carolina State University, Raleigh, NC (Jeffrey S. Scroggs) + +Northeastern University, Shenyang, P.R. China (Xiao Wendong) + +Pacific Lutheran University, Tacoma, WA (Zhu Mei) + +Päivölä College, Tarttila, Finland (Merikki Lappi) + +Rose-Hulman Institute of Technology, Terre Haute, IN (David J. Rader) + +Rowan University, Glassboro, NJ (Paul J. Laumakis) + +Shanghai Foreign Language School, Shanghai, P.R. China (Pan Li Qun) + +South China University of Technology, Guangzhou, P.R. China (Lin Jianliang) + +Southern Oregon University, Ashland, OR (Lisa M. Ciasullo) + +U.S. Military Academy, West Point, NY (David Sanders) + +U.S. Military Academy, West Point, NY (Edward Connors) + +University of Alaska Fairbanks, Fairbanks, AK (Chris Hartman) + +University of Colorado-Boulder, Boulder, CO (Bengt Fornberg) + +University of Massachusetts Lowell, Lowell, MA (James Graham-Eagle) + +University of North Texas, Denton, TX (John A. Quintanilla) + +University of Richmond, Richmond, VA (Kathy W. Hoke) + +University of Science and Technology of China, Hefei, P.R. China (Gu Jiajun) + +University of South Carolina Aiken, Aiken, SC (Laurene V. Fausett) + +University of South Carolina, Columbia, SC (Ralph E. Howard) + +University of Southern Queensland, Toowoomba, Queensland, Australia (Tony J. Roberts) + +University of Washington, Seattle, WA (James Allen Morrow) + +Wake Forest University, Winston-Salem, NC (Miaohua Jiang) + +Washington University, St. 
Louis, MO (Hiro Mukai)
Western Washington University, Bellingham, WA (Saim Ural)
Worcester Polytechnic Institute, Worcester, MA (Bogdan Vernescu)
Wuhan University, Wuhan, P.R. China (Huang Chongchao)
York University, Toronto, Ontario, Canada (Juris Steprans)
Zhejiang University, Hangzhou, P.R. China (He Yong)
Zhejiang University, Hangzhou, P.R. China (Yang Qifan)

# Awards and Contributions

Each participating MCM advisor and team member received a certificate signed by the Contest Director and the appropriate Head Judge.

INFORMS, the Institute for Operations Research and the Management Sciences, gave a cash prize and a three-year membership to each member of the teams from Stellenbosch University (Bicycle Wheel Problem) and Lawrence Technological University (Hurricane Evacuation Problem). Also, INFORMS gave free one-year memberships to all members of Meritorious and Honorable Mention teams. The Lawrence Tech team presented its results at the annual INFORMS meeting in Washington, DC in April.

The Society for Industrial and Applied Mathematics (SIAM) designated one Outstanding team from each problem as a SIAM Winner. The teams were from the U.S. Military Academy (Bicycle Wheel Problem) and Wake Forest University (Hurricane Evacuation Problem). Each of the team members was awarded a $300 cash prize, and the teams received partial expenses to present their results at a special Minisymposium of the SIAM Annual Meeting in San Diego, CA in July. Their schools were given a framed, hand-lettered certificate in gold leaf.

The Mathematical Association of America (MAA) designated one Outstanding team from each problem as an MAA Winner. The teams were from University College Cork (Bicycle Wheel Problem) and Wake Forest University (Hurricane Evacuation Problem). With partial travel support from the MAA, both teams presented their solutions at a special session of the MAA Mathfest in Madison, WI in August.
Each team member was presented a certificate by MAA President Ann Watkins. + +# Judging + +Director Frank R. Giordano, COMAP, Lexington, MA + +Associate Directors +Robert L. Borrelli, Mathematics Dept., Harvey Mudd College, Claremont, CA + +Patrick Driscoll, Dept. of Mathematical Sciences, U.S. Military Academy, West Point, NY + +William Fox, Mathematics Dept., Francis Marion University, Florence, SC + +Michael Moody, Mathematics Dept., Harvey Mudd College, + +Claremont, CA + +# Bicycle Wheel Problem + +Head Judge + +Marvin S. Keener, Executive Vice-President, Oklahoma State University, + +Stillwater, OK + +Associate Judges + +Ronald Barnes, University of Houston Downtown, Houston TX (MAA) + +Kelly Black, Mathematics Dept., University of New Hampshire, + +Durham, NH (SIAM) + +David Elliott, Institute for System Research, University of Maryland, + +College Park, MD (SIAM) + +Ben Fusaro, Mathematics Dept., Florida State University, + +Tallahassee, FL + +Mario Juncosa, RAND Corporation, Santa Monica, CA + +John Kobza, Texas Tech University, Lubbock, TX (INFORMS) + +Dan Solow, Mathematics Dept., Case Western Reserve University, + +Cleveland, OH (INFORMS) + +# Hurricane Evacuation Problem + +Head Judge + +Maynard Thompson, Mathematics Dept., University of Indiana, + +Bloomington, IN + +Associate Judges + +Paul Boisen, National Security Agency, Ft. Meade, MD (Triage) + +James Case, Baltimore, Maryland + +Courtney Coleman, Mathematics Dept., Harvey Mudd College, + +Claremont, CA + +Lisette De Pillis, Harvey Mudd College, Claremont, CA + +William Fox, Dept. of Mathematical Sciences, U.S. Military Academy, + +West Point, NY + +Jerry Griggs, University of South Carolina, Columbia, SC + +Jeff Hartzler, Mathematics Dept., Pennsylvania State University Middletown, + +Middletown, PA (MAA) + +Deborah Levinson, Compaq Computer Corp., Colorado Springs, CO + +Veena Mendiratta, Lucent Technologies, Naperville, IL + +Don Miller, Dept. of Mathematics, St. 
Mary's College, Notre Dame, IN (SIAM) + +Mark R. Parker, Mathematics Dept., Carroll College, Helena, MT (SIAM) + +John L. Scharf, Carroll College, Helena, MT + +Lee Seitelman, Glastonbury, CT (SIAM) + +Kathleen M. Shannon, Salisbury State University, Salisbury, MD + +Michael Tortorella, Lucent Technologies, Holmdel, NJ + +Marie Vanisko, Carroll College, Helena, MT + +Cynthia J. Wyels, Dept. of Mathematics, Physics, and Computer Science, California Lutheran University, Thousand Oaks, CA + +# Triage Sessions: + +# Bicycle Wheel Problem + +Head Triage Judge + +Theresa M. Sandifer, Southern Connecticut State University, New Haven, CT + +Associate Judges + +Therese L. Bennett, Southern Connecticut State University, New Haven, CT + +Ross B. Gingrich, Southern Connecticut State University, New Haven, CT + +Cynthia B. Gubitose, Western Connecticut State University, Danbury, CT + +Ron Kutz, Western Connecticut State University, Danbury, CT + +C. Edward Sandifer, Western Connecticut State University, Danbury, CT + +# Hurricane Evacuation Problem + +Head Triage Judge + +Paul Boisen, National Security Agency, Ft. Meade, MD + +Associate Judges + +James Case, Baltimore, Maryland + +Peter Anspach, Jennifer McGreevy, Erin Schram, Larry Wargo, and 7 others + +from the National Security Agency + +# Sources of the Problems + +The Bicycle Wheel Problem was contributed by Kelly Black, Mathematics Dept., University of New Hampshire, Durham, NH. The Hurricane Evacuation Problem was contributed by Jerry Griggs, Mathematics Dept., University of South Carolina, Columbia, SC. + +# Acknowledgments + +The MCM was funded this year by the National Security Agency, whose support we deeply appreciate. We thank Dr. Gene Berg of NSA for his coordinating efforts. The MCM is also indebted to INFORMS, SIAM, and the MAA, which provided judges and prizes. + +I thank the MCM judges and MCM Board members for their valuable and unflagging efforts. Harvey Mudd College, its Mathematics Dept. staff, and Prof. 
Borrelli were gracious hosts to the judges.

# Cautions

To the reader of research journals:

Usually a published paper has been presented to an audience, shown to colleagues, rewritten, checked by referees, revised, and edited by a journal editor. Each of the student papers here is the result of undergraduates working on a problem over a weekend; allowing substantial revision by the authors could give a false impression of accomplishment. So these papers are essentially au naturel. Light editing has taken place: minor errors have been corrected, wording has been altered for clarity or economy, and style has been adjusted to that of The UMAP Journal. Please peruse these student efforts in that context.

To the potential MCM Advisor:

It might be overpowering to encounter such output from a weekend of work by a small team of undergraduates, but these solution papers are highly atypical. A team that prepares and participates will have an enriching learning experience, independent of what any other team does.

# Appendix: Successful Participants

KEY:

P = Successful Participation
H = Honorable Mention
M = Meritorious
O = Outstanding (published in this special issue)

A = Bicycle Wheel Problem
B = Hurricane Evacuation Problem

INSTITUTIONCITYADVISORAB
ALABAMA
Huntingdon CollegeMontgomeryRobert L. RobertsonP,P
ALASKA
University of Alaska FairbanksFairbanksChris HartmanM
ARIZONA
McClintock High SchoolTempeJames S. GibsonP
CALIFORNIA
California Lutheran UniversityThousand OaksSandy LofstockH,P
California Poly. State UniversitySan Luis ObispoMatthew J. MoelterP
Thomas O’NeilMM
California State UniversityBakersfieldMaureen E. RushP
Christian Heritage CollegeEl CajonTibor F. SzarvasP
Harvey Mudd CollegeClaremontRan Libeskind-HadasO,H
Michael E. MoodyMH
Occidental CollegeLos AngelesRamin NaimiP
University of CaliforniaBerkeleyBrian W. CurtinP,P
COLORADO
Colorado CollegeColorado SpringsPeter L. StaabHH
Mesa State CollegeGrand JunctionEdward K. Bonan-HamadaH,P
Regis UniversityDenverLinda L. DuchrowP
United States Air Force AcademyUSAF AcademyJames S. RolfP
Jim WestM
University of ColoradoColorado SpringsGregory J. MorrowP
BoulderBengt FornbergHM
University of Southern ColoradoPuebloJames N. LouisellH
CONNECTICUT
Sacred Heart UniversityFairfieldPeter LothP
Southern Conn. State UniversityNew HavenTherese L. BennettH
DISTRICT OF COLUMBIA
Georgetown UniversityWashingtonAndrew J. VogtPP
FLORIDA
Embry-Riddle Aero. UniversityDaytona BeachGreg Scott SpradlinP,P
Florida A&M UniversityTallahasseeBruno GuerrieriPP
Stetson UniversityDeLandLisa O. CoulterP
University of North FloridaJacksonvillePeter A. BrazaP
GEORGIA
Agnes Scott CollegeDecaturRobert A. LeslieP,P
Georgia Southern UniversityStatesboroGoran LesajaH,P
State University of West GeorgiaCarrolltonScott GordonP
IDAHO
Albertson College of IdahoCaldwellMike HitchmanP
Boise State UniversityBoiseJodi L. MeadP
ILLINOIS
Greenville CollegeGreenvilleGalen R. PetersH,P
Illinois Wesleyan UniversityBloomingtonZahia DriciP,P
Northern Illinois UniversityDeKalbEmil CorneaP
Wheaton CollegeWheatonPaul IsiharaH,P
INDIANA
Goshen CollegeGoshenDavid HousmanH,H
Indiana UniversityBloomingtonMichael S. JollyH
Rose-Hulman Inst. of TechnologyTerre HauteDavid J. RaderM,P
Frank YoungH
Saint Mary's CollegeNotre DamePeter D. SmithH,P
IOWA
Grand View CollegeDes MoinesSergio LochPP
Grinnell CollegeGrinnellMarc A. ChamberlandHH
Mark MontgomeryP,P
Luther CollegeDecorahReginald D. LaursenH,P
Mt. Mercy CollegeCedar RapidsK.R. KnoppH
Simpson CollegeIndianolaMurphy WaggonerPP
Werner S. KollnH
Wartburg CollegeWaverlyMariah BirgenP,P
KANSAS
Emporia State UniversityEmporiaTon BoerkoelP
Kansas State UniversityManhattanKorten N. AucklyP
KENTUCKY
Asbury CollegeWilmoreKenneth P. RietzHH
Spalding UniversityLouisvilleScott W. BagleyP
LOUISIANA
Northwestern State UniversityNatchitochesRichard C. DeVaultP
MAINE
Colby CollegeWatervilleJan HollyH
MARYLAND
Goucher CollegeBaltimoreRobert E. LewandH,P
Johns Hopkins UniversityBaltimoreDaniel Q. NaimanP
Mount Saint Mary's CollegeEmmitsburgWilliam E. O'TooleP
Fred PortierP
Salisbury State UniversitySalisburySteven M. HetzlerH
Michael J. BardzellP
MASSACHUSETTS
MITCambridgeDan RothmanM,H
Salem State CollegeSalemKenny ChingP
Smith CollegeNorthamptonRuth HaasP
University of MassachusettsLowellJames Graham-EaglePM
Williams CollegeWilliamstownStewart D. JohnsonP
Frank MorganP
Cesar E. SilvaP
Worcester Poly. Inst.WorcesterBogdan VernescuM
MICHIGAN
Calvin CollegeGrand RapidsRandall J. PruimP
Eastern Michigan UniversityYpsilantiChristopher E. HeePP
Hillsdale CollegeHillsdaleRobert J. HesseM
Lake Superior State UniversitySault Sainte MarieJohn Jaroma and David BaumannM
Ruth G. FavroO
Scott SchneiderH
Howard WhitstonP
Lawrence Tech. UniversitySouthfieldToni CarrollP,P
Rick V. TrujilloP
Siena Heights UniversityAdrianDavid JamesP
University of MichiganDearbornColleen G. LivingstonP,P
MINNESOTA
Bemidji State UniversityBemidjiWilliam M. KinneyO
Bethel CollegeSt. PaulA. Wayne RobertsPP
Macalester CollegeSt. PaulPhilip J. GloorP
St. Olaf CollegeNorthfieldPeh H. NgP
University of MinnesotaMorrisPeh H. Ng
MISSOURI
Crowder CollegeNeoshoCheryl L. IngramP
Missouri Southern State CollegeJoplinPatrick CassensPP
Northwest Missouri State Univ.MaryvilleRussell N. EulerPP
Southeast Missouri State Univ.Cape GirardeauRobert W. SheetsH
Truman State UniversityKirksvilleSteve Jay SmithP
Washington UniversitySt. LouisHiro MukaiMM
Wentworth Mil. Acad. & Jr. Coll.LexingtonJacqueline O. MaxwellPP
MONTANA
Carroll CollegeHelenaPhilip B. RoseP
Holly S. ZulloPP
St. Andrew UniversityHelenaMark J. KeeffeP
NEBRASKA
Hastings CollegeHastingsDavid B. CookeP
University of NebraskaLincolnGlenn W. LedderH
NEVADA
University of NevadaRenoMark M. MeerschaertP
NEW JERSEY
Montclair State UniversityUpper MontclairMichael A. JonesP
Rowan UniversityGlassboroPaul J. LaumakisM
NEW YORK
Hunter College, City Univ. of NYNew YorkAda PelusoH
Ithaca CollegeIthacaJohn C. MaceliP
Manhattan CollegeRiverdaleKathryn C. WeldP
Marist CollegePoughkeepsieTracey B. McGrailP
State University of NYCortlandGeorge F. FeissnerP
R. Bruce MattinglyP
U.S. Military AcademyWest PointEdward ConnorsM
Gregory ParnellH
Donovan D. PhillipsO
David SandersM
Westchester Comm. CollegeValhallaSheela L. WhelanP,P
NORTH CAROLINA
Appalachian State UniversityBooneHolly P. HirstH
Eric S. MarlandP
Brevard CollegeBrevardClarke WellbornP,P
Davidson CollegeDavidsonLaurie J. HeyerH
Duke UniversityDurhamDavid P. KrainesO
N.C. School of Sci. & Math.DurhamDot DoyleM
North Carolina State Univ.RaleighJeffrey S. ScroggsM,H
University of North CarolinaWilmingtonRussell L. HermanP
Wake Forest UniversityWinston-SalemMiaohua JiangO,M
Western Carolina UniversityCullowheeJeffrey Allen GrahamP
OHIO
The College of WoosterWoosterPamela PierceP
Hiram CollegeHiramBrad S. GubserH
John Carroll UniversityUniversity HeightsAngela S. SpalsburyMP
Miami UniversityOxfordDoug E. WardP
Oberlin CollegeOberlinElizabeth L. WilmerP
Ohio UniversityAthensDavid N. KeckP
Wright State UniversityDaytonThomas P. SvobodnyM,P
Youngstown State UniversityYoungstownStephen HanzelyP
Robert KramerP
Thomas SmotzerMH
OKLAHOMA
Oklahoma State UniversityStillwaterJohn E. WolfeP,P
Southern Nazarene Univ.BethanyVirgil Lee TurnerH
Univ. of Central OklahomaEdmondCharles CooperP
Dan EndresP
OREGON
Eastern Oregon UniversityLa GrandeRobert HuotariH
Anthony TovarH,P
Jennifer WoodworthP
Lewis & Clark CollegePortlandRobert W. OwensM
Portland State UniversityPortlandGerardo A. LafferriereH,P
Southern Oregon UniversityAshlandLisa M. CiasulloM
University of PortlandPortlandThomas W. JudsonP
PENNSYLVANIA
Bloomsburg UniversityBloomsburgKevin K. FerlandPH
Clarion UniversityClarionJon A. BealM
John W. HeardP
Gettysburg CollegeGettysburgJames P. FinkHH
Carl LeinbachH
Sharon L. StephensonM
Lafayette CollegeEastonThomas HillM
Shippensburg UniversityShippensburgCheryl OlsenP
Villanova UniversityVillanovaBruce Pollack-JohnsonP
Westminster CollegeNew WilmingtonBarbara T. FairesM,M
RHODE ISLAND
Rhode Island CollegeProvidenceDavid L. AbrahamsonP
SOUTH CAROLINA
Charleston Southern UniversityCharlestonStan PerrineP,P
Coastal Carolina UniversityConwayIoana MihailaP
Francis Marion UniversityFlorenceThomas L. FitzkeeP
Midlands Technical CollegeColumbiaJohn R. LongP
University of South CarolinaAikenLaurene V. FausettM,P
ColumbiaRalph E. HowardM
SOUTH DAKOTA
Mount Marty CollegeYanktonJim MinerP
S.D. School of Mines & Tech.Rapid CityKyle L. RileyPP
TENNESSEE
Christian Brothers UniversityMemphisCathy W. CarterP
Lipscomb UniversityNashvilleMark A. MillerP
TEXAS
Abilene Christian UniversityAbileneDavid HendricksH
Angelo State UniversitySan AngeloTrey SmithH
Baylor UniversityWacoFrank H. MathisH
Southwestern UniversityGeorgetownTherese N. SheltonP
Stephen F. Austin State UniversityNacogdochesColin L. StarrP
Trinity UniversitySan AntonioAllen G. HolderP
Jeffrey K. LawsonP
Fred M. LoxsomM
Hector C. MirelesP
University of HoustonHoustonBarbara Lee KeyfitzP
University of North TexasDentonJohn A. QuintanillaM
University of TexasAustinLorenzo A. SadunP
UTAH
Weber State UniversityOgdenRichard R. MillerH
VERMONT
Johnson State CollegeJohnsonGlenn D. SproulPP
VIRGINIA
Eastern Mennonite UniversityHarrisonburgJohn HorstPH
The Governor's SchoolRichmondJohn A. BarnesPO
Crista HamiltonH,P
James Madison UniversityHarrisonburgCaroline SmithM
James S. SochackiM
Randolph-Macon CollegeAshlandBruce F. TorrenceP
Roanoke CollegeSalemJeffrey L. SpielmanH
University of RichmondRichmondKathy W. HokeM
Univ. of Virginia's College at WiseWiseGeorge W. MossPP
Virginia Western Comm. CollegeRoanokeSteve T. HammerH
Ruth A. ShermanP
WASHINGTON
Pacific Lutheran UniversityTacomaMei ZhuHM
University of Puget SoundTacomaDeWayne R. DerryberryPP
Carol M. SmithP
University of WashingtonSeattleRandall J. LeVequeH
James Allen MorrowM
Wenatchee Valley CollegeOmakKit A. ArbuckleP
Western Washington UniversityBellinghamSaim UralM
Tjalling YpmaH,H
WEST VIRGINIA
West Virginia Wesleyan CollegeBuckhannonJeffery D. SykesP
WISCONSIN
Beloit CollegeBeloitPaul J. CampbellP
Ripon CollegeRiponDavid W. ScottP
Univ. of Wisconsin-Stevens PointStevens PointNathan R. WetzelP
Univ. of Wisconsin-StoutMenomonieMaria G. FungH
Wisconsin Lutheran CollegeMilwaukeeMarvin C. PapenfussP
AUSTRALIA
University of New South WalesSydney, NSWJames FranklinH,H
University of Southern QueenslandToowoomba, QLDTony J. RobertsM
CANADA
Brandon UniversityBrandon, MBDoug A. PickeringM
Dalhousie UniversityHalifax, NSJohn C. ClementsP
Durette A. PronkP
Memorial Univ. of NewfoundlandSt. John's, NFAndy FosterP
University of SaskatchewanSaskatoon, SKJames A. BrookeH,P
Tom G. SteeleH
University of TorontoToronto, ONNicholas A. DerzkoP
University of Western OntarioLondon, ONPeter H. PooleM,P
York UniversityToronto, ONJuris StepransM,H
CHINA
Anhui Mechanical and Electronics CollegeWuhuWang ChuanyuP
Wang GengP
Yang YiminP
Anhui UniversityHefeiCai QianP
Wang Da-pengH
Zhang Quan-bingP
Beijing Institute of TechnologyBeijingChen YihongHP
Cui XiaodiP,P
Yao CuizhenHP
Beijing Union UniversityBeijingJiang XinhuaP
Ren KailongP
Wang XinfengP
Zeng QingliP
Beijing University of Aero. & AstronauticsBeijingPeng LinpingH,H
Wu SanxingP,P
Beijing University of Chemical TechnologyBeijingCheng YanP
Jiang GuangfengM
Liu DamingP
Wenyan YuanM
Beijing University of Posts & Telecom.BeijingHe ZuguoM,H
Luo ShoushanP,P
Central South UniversityChangshaZhang Hong-yanP
Zheng Zhou-shunM
China University of Mining & TechnologyXuzhouWu ZongxiangPP
Zhu KaiyongPP
Chongqing UniversityChongqingGong QuPH
Li FuP
Zhan LezhouH
Dalian University of TechnologyDalianDing YongshengM
He MingfengHP
Yu HongquanH
Dong Hua UniversityShanghaiHu LiangjianH
Lu YunshengP
East China Normal UniversityShanghaiJiang LuminP
Zhen Dong YuanP
East China Univ. of Science and TechnologyShanghaiLiu ZhaohuiM
Lu YuanhongH
Qin YanH
Shi JinsongH
Fudan UniversityShanghaiCai ZhijieP,P
Gong XueQingPP
Xu QinfengP
Guangdong Commercial CollegeGuangzhouXiang ZiguiPP
Harbin Engineering UniversityHarbinGao ZhenbinM
Luo YueshengH
Shen JihongP
Zhang XiaoweiP
Harbin Institute of TechnologyHarbinShang ShoutingMP
Shao JiqunH
Wang XuefengP
Hefei University of TechnologyHefeiDu XueqiaoPH
Huang YouduP,P
Hu Ning (individual, one-member team)SuzhouP
Information & Engineering UniversityZhengzhouHan ZhonggengP
Li BinP
Lu ZhiboP
Zhang WujunH
Jiading No. 1 High SchoolJiadingChen GanP
Wang YuM
Jiamusi UniversityJiamusiBai FengshanP
Fan WuiP
Gu LizhiP
Liu YuhuiP
Jilin University of TechnologyChangchunFang PeichenMP
Yang YinshengP,P
Jinan UniversityGuangzhouHu DaiqiangP
Ye Shi QiPP
Nanjing Normal UniversityNanjingChen BoP
Chen XinP
Fu ShitaiP
Zhu Qun-ShengP
Nanjing UniversityNanjingYao TianxingH,P
Nanjing University of Science & TechnologyNanjingWu XingmingP
Xu Chun GenH
Yang JianP
Yu JunP
Nankai UniversityTianjinKe LiangH
Ruan JishouP,P
Zhou XingweiH
National University of Defence TechnologyChangshaLu ShirongHP
Wu MengdaHM
North China Institute of TechnologyTaiyuanLei Ying-jieP
Xue Ya-kuiP
Yong BiH
Northeastern UniversityShenyangCui JianjiangP
Han Tie-minP
Hao PeifengP
Xiao WendongM
Xue DingyuP
Northwest Inst. of Texile Sci. & Tech.Xi'anHe XingShiPH
Northwest UniversityXi'anHe Rui-chanP,P
Northwestern Polytechnic UniversityXi'anHua Peng GuoH
Liu Xiao DongH
Shi Yi MinH
Zhang Sheng GuiP
Peking UniversityBeijingDeng MinghuaPP
Lei GongyanH,P
Shu YoushengHH
Second Aero. Inst. of the Air ForceChangChunZhang Shaohuai and Fu DeyouP,P
Shandong UniversityJinanMa PimingP
Ma ZhengyuanP
Shanghai Foreign Language SchoolShanghaiLi Qun PanM,P,P
Shanghai Jiaotong UniversityShanghaiHuang JianguoP,P
Song BaoruiHP
Shanghai Maritime UniversityShanghaiSheng ZiningP
Shanghai Normal UniversityShanghaiGuo ShenghuanP
Zhang JizhouP
Zhu DetongH
Shanghai Univ. of Finance and Econ.ShanghaiFeng SuweiH
Yang XiaobinH
Shanxi UniversityTaiyuanLi JihongP
Yang AiminP
Zhang XianwenP
Zhao AiminP
Sichuan UniversityChengduLi HuangP
Liu XiaoshiP
Yang ZhiheP
Zhou JieP
South China University of TechnologyGuangzhouLiang ManfaP
Lin JianliangM
Tao ZhisuiH
Zhu FengfengH
Southeast UniversityNanjingChen En-shuiMP
Huang JunH,H
Tianjin UniversityTianjinDong WenjunM
Liu ZeyiP
Wenhua HouP
Tsinghua UniversityBeijingHu Zhi-MingPH
Ye JunHH
University of Elec. Science & Tech.ChengduWang JiangaoPH
Xu QuanzhiP
Zhong ErjieP
University of Sci. & Tech. of ChinaHefeiGu IajunM
Yang JianH
Yang LiuP
Yong NiP
Wuhan University (WUHEE)WuhanChen Gui XingP
Huang ChongchaoM
Wuhan University of Tech.WuhanHuang Zhang-CanP
Xi'an Inst. of Post & Telecom.Xi'anLi Changxing and Fan JiulunP
Xi'an Jiaotong UniversityXi'anDai YonghongP
Zhou YicangH
Xi'an University of TechnologyXi'anCao MaoshengPP
Xidian UniversityXi'anChen Hui-chanH
Liu Hong-weiH
Zhang Zhuo-kuiH
Zhou Shui-shengP
YanShan UniversityQinHuangDaoWang YongMaoP
Zhong XiaoZhuPP
Zhejiang UniversityHangzhouHe YongMM
Yang QifanMM
Zhongshan UniversityGuangzhouBao YunH
Chen ZepengM
Li CaiweiH
Yin XiaolingP
ENGLAND
University of OxfordOxfordMaciek DunajskiH
FINLAND
Päivölä CollegeTarttilaMerikki LappiM,H
HONG KONG
Hong Kong Baptist UniversityKowloon TongW.C. ShiuH
C.S. TongP
IRELAND
National University of IrelandGalwayNiall MaddenPH
Trinity College DublinDublinTimothy G. MurphyH
University College CorkCorkJames Joseph GrannellO
Donal J. HurleyH
Brian J. TwomeyH
University College DublinDublinPeter DuffyM
Maria G. MeehanP,P
LITHUANIA
Vilnius UniversityVilniusRicardas KudzmaP
SINGAPORE
National Univ. of SingaporeSingaporeLim Leong Chye AndrewM
SOUTH AFRICA
Stellenbosch UniversityMatielandJan H. van VuurenOH
# Editor's Note

For team advisors from China and Singapore, we have endeavored to list family name first, with the help of Susanna Chang '03.

# Spokes or Discs?

W.D.V. De Wet

D.F. Malan

C. Mumbeck

Stellenbosch University

Matieland, Western Cape

South Africa

Advisor: Jan H. van Vuuren

# Introduction

It is well known that disc wheels and standard spoked wheels exhibit different performance characteristics on the race track, but as yet no reliable means exists to determine which is superior for a given set of conditions.

We create a model that, taking the properties of wheel, cyclist, and course into account, may provide a definitive answer to the question, "Which wheel should I use today?" The model provides detailed output on wheel performance and can produce a chart indicating which wheel will provide optimal performance for given environmental and physical conditions.

We use the laws of physics, together with data from various Web sites and published sources, and apply numerical methods to obtain solutions from the model.

We demonstrate the use of the model's output on a sample course. Roughly speaking, standard spoked wheels perform better on steep climbs and in trailing winds, while disc wheels are better in most other cases.

We validated the model for stability, sensitivity, and realism. We also generalised it to allow for a third type of wheel, to provide a more realistic representation of the choice facing the professional cyclist today.

A major difficulty was obtaining reliable data; sources differed or even contradicted one another. The range of the data that we could find was insufficient, jeopardizing the accuracy of our results.

# Analysis of the Problem

Consider the system of a cyclist and the racing cycle.
The cyclist provides the energy to drive the bicycle against the forces of drag (from contact with air), friction (from contact between wheels and ground), and gravity (which opposes progress up a slope). Furthermore, when accelerating, the cyclist must provide the energy to set the wheels rotating, due to their moment of inertia. + +The primary problem is to determine, for a given set of conditions, which type of rear wheel is the most effective. "Effectiveness" is what a specific rider desires from the equipment. We assume that the rider desires to complete the course in as short a time as possible, or with the least possible energy expenditure. These definitions are closely linked: A rider who expends more energy to maintain a certain speed will soon tire and will therefore have a lower maintainable speed. + +The differences between standard 32-spoke wheels and disc wheels lie in weight and in their aerodynamic properties. Given the right wind conditions (which we investigate), the disc wheels should allow air to pass the cyclist/cycle combination with less turbulence, that is, less drag. However, disc wheels weigh more, which affects the amount of power required to move the wheels up a slope and to begin rotating the wheels from rest (such as when accelerating). + +To examine which type of wheel performs the best under which conditions, we need to determine which wheel allows the greatest speed given specific conditions or, equivalently, which wheel requires the least power to drive. + +The greatest difficulty is that air resistance is a function of speed while speed is a function of air resistance. The model needs to utilise numerical methods to calculate the speed that a rider can maintain with the given parameters. + +There are other factors to consider, too. 
Disc wheels are not very stable in gusty wind conditions, since they present a far greater surface area to cross-winds; with a greater moment of inertia, they accelerate more slowly; and their greater weight may provide more grip on wet roads.

# Assumptions and Hypotheses

We investigated the performance of the wheel types noted in Table 1. The wheel data do not conform to any specific make or model but are typical.

Table 1.
Types of wheels.
| Type | Standard 32-spoke | Aero wheel (trispoke) | Solid disc wheel |
| --- | --- | --- | --- |
| Diameter (m) | 0.7 | 0.7 | 0.7 |
| Mass (kg) | 0.8 | 1.0 | 1.3 |
We assume that the cyclist uses a standard spoked wheel in front and either a disc wheel or a standard 32-spoked wheel at the back. We also briefly examine aero wheels, which are not solid but are more aerodynamic than standard spoked wheels. We refer to the three types as standard, aero, and disc.

We assume that the rider and cycle frame (excluding wheels) exhibit the same drag for all wind directions.

Table 2.
Symbols used.
| Symbol | Unit | Definition |
| --- | --- | --- |
| $A$ | m² | area of rider/bicycle exposed to wind |
| $c_a$ | dimensionless | variable coefficient of axial air resistance for specific wheel |
| $c_{rr}$ | dimensionless | constant of rolling resistance |
| $c_w$ | dimensionless | constant of air resistance |
| $D$ | m² | reference area of wheel |
| $F_{ad}$ | newton (N) | axial air resistance (against the cyclist's direction of motion) |
| $F_{ad}^{*}$ | newton (N) | axial air resistance on a bicycle with box-rimmed spoked wheels in a headwind |
| $F_g$ | newton (N) | effect of gravity on the cyclist |
| $F_{rr}$ | newton (N) | rolling resistance |
| $g$ | m/s² | gravitational acceleration |
| $M$ | kg | mass of cyclist and cycle |
| $P$ | watt (W) | rider's effective power output |
| $v_{bg}$ | m/s | speed of the bike relative to the ground |
| $v_{wb}$ | m/s | speed of the wind relative to the bike |
| $v_{wg}$ | m/s | speed of the wind relative to the ground |
| $\alpha$ | degree | angle of the rise |
| $\beta$ | degree | yaw angle, the angle between the direction opposing bicycle motion and the perceived wind (a relative headwind has a yaw of $0^{\circ}$) |
| $\gamma$ | degree | angle between wind direction ($v_{wg}$) and direction of motion (a straight-on headwind has $\gamma = 180^{\circ}$) |
| $\psi$ | percent | grade of hill, the sine of $\alpha$, the angle of the rise |
| $\rho$ | kg/m³ | air density |

# Forces at Work

For a bicycle moving at a constant velocity, there are three significant retarding forces (Figure 1):

- rolling resistance, due to contact between the tires and road;
- gravitational resistance, if the road is sloped; and
- air resistance, usually the largest of the three.

When accelerating, the rider also uses energy to overcome translational and rotational inertia, although the model does not take these into account.

![](images/81a221bd6049a81cce0afa8f2e7616b4be20ba1d6b0bd6eaf5324321cbe23f75.jpg)
Figure 1. Diagram of forces.

The forces of rolling resistance and gravity are as follows:

$$
\begin{array}{l}
F_{rr} = c_{rr} \cdot (\text{normal force}) \\
= c_{rr} M g \cos \alpha \\
= c_{rr} M g \cos \arcsin \psi \\
= c_{rr} M g \sqrt{1 - \psi^{2}},
\end{array}
$$

$$
F_{g} = M g \sin \alpha = M g \psi .
$$

Calculating the axial drag force is more complicated. The air resistance is $f = \frac{1}{2}\rho \cdot c_w A v^2$, where $v$ is the speed of the air relative to the object. Since we assume that the area of the rider/frame exposed to the wind is constant, we have (neglecting the additional drag on the wheels caused by yaw and type of wheel)

$$
F_{ad}^{*} = \frac{1}{2} \rho \cdot c_w A v_{wb}^{2} \cos \beta \qquad (\text{axial component}).
$$

Observe the sketches in Figure 2. From them we derive that

$$
\begin{array}{l}
v_{wb}^{2} = v_{wg(\mathrm{axial})}^{2} + v_{wg(\mathrm{side})}^{2} \\
= \left(v_{bg} + v_{wg} \cos (180^{\circ} - \gamma)\right)^{2} + \left(v_{wg} \sin (180^{\circ} - \gamma)\right)^{2} \\
= \left(v_{bg} - v_{wg} \cos \gamma\right)^{2} + \left(v_{wg} \sin \gamma\right)^{2} \\
= v_{bg}^{2} - 2 v_{bg} v_{wg} \cos \gamma + v_{wg}^{2}.
\\ \end{array}
$$

Also,

$$
\beta = \arctan \left(\frac{v_{wg(\mathrm{side})}}{v_{wg(\mathrm{axial})}}\right) = \arctan \left(\frac{v_{wg} \sin \gamma}{v_{bg} - v_{wg} \cos \gamma}\right).
$$

The axial air drag on the rear wheel [Tew and Sayers 1999] is

$$
F_{ad(\mathrm{wheel})} = 0.75 \cdot \frac{1}{2} \rho \cdot c_{a} D \cdot v_{wb}^{2};
$$

the factor 0.75 arises because a rear wheel experiences $75\%$ of the drag of a wheel in free air, due to interference from the gear cluster, frame, cyclist's legs, and so forth.

![](images/5e46c0655397fe26fe135c4e6019610680e6c3bc15b658bb425f39d3d7269715.jpg)

![](images/6457d6f93e118e4820e511a52f084ecce714ff88d2fc36cac0f10ec98c19b224.jpg)
Figure 2a. Wind speed relative to wheel.
Figure 2b. Forces on wheel.

For the three basic types of wheel, Tew and Sayers [1999] give typical curves of the axial drag coefficient $c_{a}$ vs. yaw angle $(0^{\circ} \leq \beta \leq 30^{\circ})$; this interval accounts for the majority of conditions experienced by a rider. We approximate these curves by straight lines (a close match).

The curves for different relative wind speeds are very much alike for the standard wheel and the aero wheel. The disc wheel, however, shows major variation for different relative wind speeds.

Since the axial drag coefficient must be zero at $\beta = 90^{\circ}$, and by observing the shape of the curves, we extrapolated to larger yaw angles using a sine-shaped curve through $c_{a} = 0$ at $\beta = 90^{\circ}$, with an appropriate scaling to ensure continuity. Without wind-tunnel testing, the accuracy cannot be guaranteed.

Comparing the percentage of power dissipated by drag on one wheel (according to the model) with the data of Tew and Sayers [1999], we found a high degree of agreement. Typically, $1\%$ to $10\%$ (depending on wheel type) of the power is dissipated by drag on the wheels.
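The wind-triangle algebra above is easy to get wrong by a sign. As a quick check, here is a small Python helper (ours, not part of the team's program) that computes $v_{wb}$ and $\beta$ directly from $v_{bg}$, $v_{wg}$, and $\gamma$:

```python
import math

def relative_wind(v_bg, v_wg, gamma_deg):
    """Wind speed relative to the bike and yaw angle beta (degrees).

    gamma_deg is the angle between wind direction and the bike's
    direction of motion; a straight-on headwind has gamma = 180.
    """
    g = math.radians(gamma_deg)
    axial = v_bg - v_wg * math.cos(g)   # component opposing motion
    side = v_wg * math.sin(g)           # cross-wind component
    v_wb = math.hypot(axial, side)      # sqrt(axial^2 + side^2)
    beta = math.degrees(math.atan2(side, axial))
    return v_wb, beta
```

For a pure headwind ($\gamma = 180^{\circ}$) this gives $v_{wb} = v_{bg} + v_{wg}$ and $\beta = 0^{\circ}$, as expected.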
From $F_{ad(\text{wheel})}$ we subtracted the drag experienced by a normal (box-rimmed) wheel in a headwind, since that drag is already accounted for in $F_{ad}^{*}$.

The axial air drag on the bicycle is thus given by

$$
F_{ad} = F_{ad}^{*} + F_{ad(\mathrm{wheel})} = \frac{1}{2} \rho v_{wb}^{2} \left[ c_{w} A \cos \beta + 0.75 (c_{a} - 0.06) D \right],
$$

where 0.06 is the coefficient of axial drag for a normal wheel in a headwind. The $\cos \beta$ gives the component in the direction of motion of the cyclist.

# Calculating Results for a Typical Rider

For data, we used standard values [Analytic Cycling 2001] for a road racer near sea level in normal atmospheric conditions:

$$
M = 80~\mathrm{kg}, \quad g = 9.81~\mathrm{m/s^2},
$$

$$
c_{rr} = 0.004, \quad c_{w} = 0.5,
$$

$$
A = 0.5~\mathrm{m^2}, \quad D = 0.38~\mathrm{m^2} \quad (\text{for a 700-mm wheel}),
$$

$$
\rho = 1.226~\mathrm{kg/m^3} \quad (\text{could be changed to incorporate altitude}).
$$

We calculated that, to maintain a speed of $45~\mathrm{km/h}$ on a level road (as in the problem description), the rider must deliver $340~\mathrm{W}$ of effective pedaling power.

# The Computer Program

We wrote a computer program in Pascal that calculates the speed that the rider can sustain for a given $v_{wg}$, $\gamma$, and $\psi$. It does this by trying a speed and determining the wattage necessary to sustain that speed. If the wattage is too high, the speed is lowered; otherwise, the speed is increased. Every time the solution point is crossed, the step size is reduced. The process is carried out until the wattage used is within a tolerance of 0.01 W of $P$.

To take into account the effect of drag on different types of wheels, our program does the following:

1. 
The wind direction, wind speed relative to the ground, and slope of the road $(\gamma, v_{wg},$ and $\psi)$ are provided as inputs. +2. The program tries a value for $v_{bg}$ . +3. $F_{rr}$ and $F_{g}$ are calculated. +4. From $\gamma, v_{wg}, \psi,$ and $v_{bg}$ , we calculate $v_{wb}$ and $\beta$ . +5. From $v_{wb}$ and $\beta$ , we calculate $F_{ad}$ . +6. We calculate the wattage by using the formula $P = (F_{rr} + F_g + F_{ad})v_{bg}$ . +7. We compare the calculated value of $P$ to the known value of 340 W. + +8. We try a new value for $v_{bg}$ , depending on whether the wattage required for the previous value of $v_{bg}$ was higher or lower than the available 340 W. +9. We repeat this process from Step 3 until the maximum maintainable speed is determined. + +Since the wheel that requires the least power in a set of circumstances also enables the highest speed, we used our program to vary the speed of the wind and show which wheel is best for the circumstances. Figure 3 shows a screen shot. The dark colour represents blue and the light colour red. Each of the 11 horizontal strips represents a road gradient, ranging from 0 at the top to 0.1 at the bottom in 0.01 increments. + +![](images/2f79851ba1a23943a0f8c8210dcb9e9b2bf98fe7d4e893019e87195581c64cdf.jpg) +Figure 3. Screen shot from program. + +The horizontal axis is wind speed, from $0\mathrm{km / h}$ at the left to $63.9\mathrm{km / h}$ at the right in $0.1\mathrm{km / h}$ increments. The vertical axis of each bar is the wind angle (relative to track), from $0^{\circ}$ to $180^{\circ}$ in $15^{\circ}$ increments. + +We have a very compact representation showing the transition wind speeds for a range of wind angles and road gradients. The user is provided a crosshair to move over any point on the graph. The colour of a pixel indicates the better wheel to use for the corresponding gradient, wind speed, and wind angle. 
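The steps above can be sketched as a bisection on speed. This Python reconstruction is ours (the team's program was in Pascal); it omits the wheel-specific drag term $0.75(c_a - 0.06)D$, since the measured $c_a$ curves are not reproduced here, and uses the standard rider values quoted earlier:

```python
import math

# Standard values quoted in the text (road racer near sea level).
M, G = 80.0, 9.81         # mass of cyclist and cycle (kg), gravity (m/s^2)
C_RR, C_W = 0.004, 0.5    # rolling-resistance and air-resistance constants
A, RHO = 0.5, 1.226       # frontal area (m^2), air density (kg/m^3)

def power_required(v_bg, v_wg=0.0, gamma_deg=180.0, grade=0.0):
    """Effective pedalling power (W) needed to hold v_bg (m/s) steady.

    The wheel-specific drag term 0.75*(c_a - 0.06)*D is omitted.
    """
    f_rr = C_RR * M * G * math.sqrt(1.0 - grade ** 2)  # rolling resistance
    f_g = M * G * grade                                 # gravity along slope
    gam = math.radians(gamma_deg)
    axial = v_bg - v_wg * math.cos(gam)                 # relative wind, axial
    side = v_wg * math.sin(gam)                         # relative wind, side
    beta = math.atan2(side, axial)                      # yaw angle
    f_ad = 0.5 * RHO * C_W * A * (axial ** 2 + side ** 2) * math.cos(beta)
    return (f_rr + f_g + f_ad) * v_bg

def sustainable_speed(p_avail=340.0, tol=0.01, **conditions):
    """Bisect on speed until the required power is within tol of p_avail."""
    lo, hi = 0.0, 30.0                                  # bracket in m/s
    while True:
        mid = 0.5 * (lo + hi)
        p = power_required(mid, **conditions)
        if abs(p - p_avail) < tol:
            return mid
        lo, hi = (lo, mid) if p > p_avail else (mid, hi)
```

On a level road with no wind, `power_required(45 / 3.6)` gives about 338.6 W, consistent with the 340 W quoted above, and `sustainable_speed(340.0)` returns roughly 12.5 m/s (45 km/h).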
To generate a table of transition speeds (Table 3), we read off the points at which transitions occur. This might seem cumbersome, but developing an algorithm to find the transition points is very difficult, since the number of transitions is not known beforehand and the functions exhibit irregular behaviour.

To use the table, look up the entry corresponding to the road grade and the angle that the wind makes with the forward direction of the bike. An entry of S means that a standard wheel performs better at all wind speeds; an entry of D means that a disc wheel is better at all wind speeds.

Table 3.
Which wheel to use, as a function of road grade and wind angle.
| Road grade | 0° | 15° | 30° | 45° |
| --- | --- | --- | --- | --- |
| <0 | D | D | D | D |
| 0 | D | D46.5S | D36.6S | D |
| 0.01 | D | D27.8S | D22.6S | D21.3S47.3D |
| 0.02 | D44.5S | D16.5S | D13.3S | D12.3S53.8D |
| 0.03 | D24.2S | D9.4S47.6D54.5S | D7.3S56.6D | D6.5S |
| 0.04 | D12.3S | D4.8S | D3.5S42.0D | D3.0S |
| 0.05 | D4.2S | S | S33.9D | S |
| 0.06 | S | S63.4D | S54.8D61.8S | S38.6D46.2S |
| 0.07 | S | S | S55.6D59.1S | S32.3D46.5S |
| 0.08 | S | S | S | S28.0D45.8S |
| 0.09 | S | S | S | S24.9D44.8S |
| 0.10 | S | S | S | S |
| Road grade \ Wind angle | 60° | 75° | 90° | 105° |
|---|---|---|---|---|
| <0 | D | D | D | D |
| 0 | D | D | D | D |
| 0.01 | D22.1S33.6D | D | D | D |
| 0.02 | D12.7S39.1D | D14.5S32.0D | D | D |
| 0.03 | D6.4S43.2D | D6.9S35.1D | D8.5S30.6D | D14.2S23.8D |
| 0.04 | D2.9S47.9D | D2.8S37.2D | D3.2S33.4D | D3.8S29.6D |
| 0.05 | S | S40.0D | S34.7D | S32.4D |
| 0.06 | S | S42.8D | S35.7D | S34.2D |
| 0.07 | S | S46.6D | S37.5D | S35.5D |
| 0.08 | S | S | S39.2D | S36.4D |
| 0.09 | S | S | S40.6D | S37.2D |
| 0.10 | S | S | S42.1D | S37.8D |
| Road grade \ Wind angle | 120° | 135° | 150° | 165° | 180° |
|---|---|---|---|---|---|
| <0 | D | D | D | D | D |
| 0 | D | D | D | D | D |
| 0.01 | D | D | D | D | D |
| 0.02 | D | D | D | D | D |
| 0.03 | D | D | D | D | D |
| 0.04 | D6.1S22.1D | D | D | D | D |
| 0.05 | S28.0D | S17.4D | D | D | D |
| 0.06 | S31.1D | S23.7D | S15.4D | S8.3D | S1.7D |
| 0.07 | S33.3D | S27.9D | S19.8D | S13.7D | S6.5D |
| 0.08 | S34.9D | S30.6D | S23.0D | S17.5D | S10.5D |
| 0.09 | S36.3D | S32.5D | S25.5D | S20.2D | S13.8D |
| 0.10 | S37.4D | S34.1D | S27.6D | S22.6D | S16.8D |
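Entries of this form encode a sequence of wheel choices separated by transition wind speeds (for example, D6.1S22.1D means disc below 6.1 km/h, standard between 6.1 and 22.1 km/h, and disc above). A small routine can decode them mechanically; this is our own sketch, and the helper name `best_wheel` is not from the paper.

```python
import re

def best_wheel(entry, wind_speed):
    """Decode a table entry such as 'S', 'D46.5S', or 'D6.1S22.1D'.

    Letters name the better wheel (S = standard, D = disc); each
    number is the wind speed (km/h) at which the choice switches."""
    parts = re.findall(r"[SD]|\d+(?:\.\d+)?", entry)
    wheel = parts[0]
    # Walk the (threshold, wheel) pairs until the wind speed is
    # below the next transition point.
    for i in range(1, len(parts) - 1, 2):
        threshold, next_wheel = float(parts[i]), parts[i + 1]
        if wind_speed >= threshold:
            wheel = next_wheel
        else:
            break
    return wheel
```

For instance, `best_wheel("D6.1S22.1D", 10)` returns `"S"`, matching the worked example in the text.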
The other entries can be decoded as follows: A number between two letter entries indicates the wind speed at which a transition occurs; the first letter indicates which wheel is more efficient at lower speeds, and the second letter which wheel is better at higher speeds. For example, S28.0D indicates that standard wheels are better at wind speeds below $28.0\mathrm{km/h}$ and disc wheels at higher speeds. An entry of D6.1S22.1D indicates that the standard wheel performs better at wind speeds between 6.1 and $22.1\mathrm{km/h}$, while the disc wheel performs better at all other wind speeds.

The table applies for wind speeds up to $64\mathrm{km/h}$. Stronger winds are very rare, and a disc wheel would cause major stability problems in such conditions.

As an aside, we created graphs comparing standard, aero, and disc wheels simultaneously and allowed for negative gradients as well. The aero wheel dominated in most conditions.

# Applying the Table to a Sample Course

We designed a simple time-trial course. The map of the course and a view of the elevation are given in Figure 4. The course consists of four different segments, with each turning point labelled with a letter. The data for each point are in Table 4.

![](images/25d1cb4e9ff31a45050b1a6e4fba2ec79ba71b633d27a9da689c7a2275ff6059.jpg)
Figure 4a.

![](images/fe151df3c14fecf7805f01cefaf77c58009d04bbd8adfac0c57e712aa6e35464.jpg)
Figure 4b.

Table 4. Details of the sample course.
| Point | Map coordinates (km, km) | Elevation (m) |
|---|---|---|
| A (Start) | (0, 36) | 600 |
| B | (40, 36) | 200 |
| C | (52, 24) | 1600 |
| D | (36, 20) | 1580 |
| E (Finish) | (36, 4) | 2350 |
+ +Assume that the wind is blowing at $25\mathrm{km / h}$ in the direction shown on the map. For each segment, we compute the gradient and the angle of the segment + +with the wind by trigonometry. The length of each segment is slightly longer than the straight-line distance, because the road is not perfectly straight. + +We look at Table 3 to determine the best wheel for each section of the course. For instance, in the second section, the gradient is 0.08 and the angle $135^{\circ}$ ; according to the table, the standard wheel is better at a wind speed below $30.6\mathrm{km / h}$ , so at $25\mathrm{km / h}$ , a standard wheel is better for this section. We fill in the other entries in a similar manner (Table 5). + +Table 5. Best wheel for each section of the sample course. + +
| Section | Distance (km) | Wind angle (°) | Gradient | Best wheel |
|---|---|---|---|---|
| AB | 40.8 | 180 | -0.01 | Disc |
| BC | 17.5 | 135 | 0.08 | Standard |
| CD | 16.7 | 14 | 0.00 | Disc |
| DE | 16.2 | 90 | 0.05 | Standard |
+ +The disc wheel and the standard wheel both win in two segments. However, the disc wheel wins over $58\mathrm{km}$ of the course, while the standard wheel wins over only $33\mathrm{km}$ . Thus, the table advises that the cyclist use the disc wheel. + +# Getting More Refined Results + +For each section, we calculated the expected speed for the rider with each wheel. We add the times for the individual sections to obtain an estimate of the total time, obtaining Table 6. The table shows two interesting results: + +- The disc wheel beats the standard wheel by about $50 \, \text{s}$ . This is consistent with the result obtained earlier in this section. +- The aero wheel is almost 2 min faster than the disc wheel! + +Table 6. Total time for sample course for each wheel. + +
| Section | Length (km) | Standard speed (km/h) | Standard time (h:min:s) | Aero speed (km/h) | Aero time (h:min:s) | Disc speed (km/h) | Disc time (h:min:s) |
|---|---|---|---|---|---|---|---|
| AB | 40.8 | 32.73 | 1:14:47.6 | 33.27 | 1:13:34.8 | 33.46 | 1:13:09.7 |
| BC | 17.5 | 12.56 | 1:23:35.9 | 12.68 | 1:22:48.5 | 12.47 | 1:24:12.1 |
| CD | 16.7 | 59.52 | 0:16:50.1 | 60.13 | 0:16:39.8 | 59.97 | 0:16:42.5 |
| DE | 16.2 | 19.72 | 0:49:17.4 | 19.94 | 0:48:44.8 | 19.58 | 0:49:38.6 |
| Total | 91.2 | | 3:44:31 | | 3:41:48 | | 3:43:43 |
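Each time in Table 6 is just segment length divided by speed; a quick arithmetic check of the standard-wheel column:

```python
# Segment lengths (km) and standard-wheel speeds (km/h) from Table 6.
segments = {"AB": (40.8, 32.73), "BC": (17.5, 12.56),
            "CD": (16.7, 59.52), "DE": (16.2, 19.72)}

# time = distance / speed, summed over the four segments
total_h = sum(length / speed for length, speed in segments.values())

# Convert the decimal hours to h:min:s.
h = int(total_h)
m = int((total_h - h) * 60)
s = (total_h - h - m / 60) * 3600
# h, m, s come out as 3, 44, ~31 -- the tabulated 3:44:31
```

The aero and disc columns check out the same way.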
+ +# Validating the Model + +# Sensitivity Analysis + +Does the table generated for one rider with a specific set of physical attributes apply to another rider, and if not, can the model easily be adjusted? + +To determine whether the same table could be used for different riders, we varied one of the rider's parameters, either power output, mass, or cross-sectional surface area, while keeping the others constant. In these analyses we found that + +- Changing any one or any combination of the parameters $P, A,$ or $M$ does not affect the basic pattern but slightly distorts (shifts, scales, or skews) it. +- Every rider-cycle combination would need its own chart for determining which wheel to use at which speed. + +# Other Validation + +We compared our model's output to data available at Analytic Cycling [2001], which provides interactive forms. Our model's output matched their output almost exactly for all the different combinations of input parameters that we used. Unfortunately, this site does not make provision for wind speed or angle, so this part of our model could not be compared. + +We tested the model with a completely different set of parameters approximating a very powerful sports car ( $P = 300 \, \mathrm{kW}$ , $C_d = 0.3$ , $A = 2.3 \, \mathrm{m}^2$ , $M = 1100 \, \mathrm{kg}$ ). We kept the other parameters the same. Our model predicted a top speed of $320 \, \mathrm{km/h}$ on a level road, which seemed very realistic. + +# Error Analysis + +We were concerned about the disc wheel "islands" that showed up in our graphical output at wind speeds of $40 - 50\mathrm{km / h}$ , wind angles of $30^{\circ} - 60^{\circ}$ , and higher gradients (see Figure 3). They probably are due to the peculiar behaviour of disc wheels in crosswinds. Since we extrapolated the drag coefficient function, we have no way of knowing whether this strange behaviour is realistic or not. 
+ +# Model Strengths + +If a rider can obtain reasonably accurate course data (something that is not difficult at all), then the rider can determine exactly what type of rear wheel to use for a race by referencing this information to a chart or computer. + +The model has many parameters (air density, coefficients, rider mass, etc.) that can be adjusted to account for various situations. + +It is easy to extend the model to include the front wheel of the bicycle. + +# Model Weaknesses + +When the angle of the wind with the cyclist changes, there are a number of factors that influence the amount of drag that the cyclist experiences. The most obvious is the changing surface area; a cyclist from the side presents a far larger surface area to the wind than a cyclist heading into the wind. + +Less obvious, but still a large contributing factor, is that the drag coefficient is a function of the shape of the object. A cyclist from the side is far less streamlined and has a higher drag coefficient. + +Both these factors are extremely difficult to model. The cross-sectional surface area of a complex three-dimensional shape could be the subject of a paper on its own, and determining the drag coefficient of the same complex shape would require empirical tests. + +As a result, we ignored these effects and assume that the drag coefficient and cross-sectional surface area of the rider are the same for all directions. Strong cross-winds would cause too much rider instability to even consider using disc wheels, so it would not really be necessary to investigate drag in such cases. + +We also ignored the effects of the energy needed to overcome the rotational and translational inertia for each wheel. We cannot comment on the effect of these forces, since we did not have time to implement these effects. + +The model is only as accurate as the data used, and much of the available data on wheel drag is, at best, dubious. 
Many wheel manufacturers exaggerate the performance of their brands, while rival manufacturers quickly denounce their findings. Further, few data were available for yaw angles greater than $30^{\circ}$ , and we were forced to extrapolate, leading to a high degree of uncertainty. + +Using the table requires specific information about rider characteristics, some of which may be difficult to obtain, such as the surface area of rider and bicycle. + +# Conclusion + +Our model provides a good means for making an informed decision as to which wheel to use in a particular situation, but it needs more accurate data and refinement. + +# References + +Analytic Cycling. 2001. http://www.analyticycling.com. +Basis Systeme Netzwerk. 2001. Wheel aerodynamics. http://www.bsn.com. +Beaconsfield Cycling Club. 2001. Aero wheels under scrutiny. http://www.clubcycliste.com/english/archives/freewheel98/ freew498/f498aero.html. +Campaignola. 2001. http://www.campaignola.com. +Drag forces in formulas. 2001. http://www.damopjinard.com/aero/formulas.htm. +Mavic cycles. 2001. http://www.mavic.com. +Tew, G.S., and A.T. Sayers. 1999. Aerodynamics of yawed racing cycle wheels. Journal of Wind Engineering and Industrial Aerodynamics 82: 209-221. + +# Acknowledgments + +We would like to thank the Department of Applied Mathematics at Stellenbosch University for the generous use of their computing facilities and for catering for us during the weekend of the competition. + +# Selection of a Bicycle Wheel Type + +Nicholas J. Howard + +Zachariah R. Miller + +Matthew R. Adams + +United States Military Academy + +West Point, NY + +Advisor: Donovan D. Phillips + +# Introduction + +We present a model that compares the performance of various wheels over a user-determined course. We approach the modeling problem by beginning with Newton's Second Law of Motion: The sum of the forces acting on an object equals the mass of that object multiplied by its acceleration. 
We identify the four principal forces that contribute to a cyclist's motion: applied force, drag force, gravity, and rolling resistance. We further classify drag force into three components: the cyclist and bicycle frame, the front wheel, and the rear wheel.

Drag force depends on the cyclist's velocity, and the force of gravity depends on the cyclist's position. Thus, our force equation is a function of cyclist position and cyclist velocity.

We can then arrange Newton's Second Law to yield a second-order differential equation. Given position $S$, velocity $dS/dt$, acceleration $d^2 S/dt^2$, mass $m$, and a force function $F$, the differential equation is

$$
\frac {d ^ {2} S}{d t ^ {2}} = \frac {F \left(S , \frac {d S}{d t}\right)}{m}.
$$

To implement our model, we created a computer program that allows a user to input numerous pieces of data, including course layout, elevation profile, wind, weather conditions, and cyclist characteristics. The software integrates the differential equation using the fourth-order Runge-Kutta method and reports the preferred wheel choice based on the data.

As a real-world application of our model, we analyze the 2000 Olympic cycling time-trial race. Over that course, a disk wheel provided a considerable advantage over a spoked wheel.

# Problem Analysis

We must determine whether a spoked wheel (lighter but less aerodynamic) or a disk wheel (more aerodynamic but heavier) is more power-efficient. The choice depends on many factors, including the number and steepness of hills, the weather, wind speed and direction, and the type of competition.

We are not striving to discover which wheel is best for all situations. On the contrary, we are interested in which wheel outperforms the others under specific conditions. Given accurate data concerning factors such as hills, weather, wind, and competition type, a good model will be able to determine and recommend which wheel is preferred.
+ +# Our Approach + +We determine equations for each force that acts on the cyclist and the bicycle. These forces are not constant throughout a race. For example, air drag increases with the square of velocity. + +We apply Newton's Second Law of Motion to the cyclist-bicycle system: The sum of the forces acting on an object equals mass of the object times its acceleration. The sum of the forces is a function of both position on the course and velocity of the cyclist. Because the forces are different at different positions, the power required with a given type of wheel is also different. Our approach is to apply the same power to both types of wheels and determine how long it takes to traverse the course. The wheel that allows the cyclist to complete the course in the shortest time requires less power, that is, is more power-efficient. + +A more comprehensive model would incorporate wind based upon a probabilistic function. However, to do so would be at odds with our goal: We wish to determine performance times for two types of wheels given a constant set of conditions. If we included probabilistic functions, differing performance times would be due to both wheel differences and random fluctuations instead of just to wheel differences. + +Let the position of the bicycle on the course be $S$ with components $S_{x}, S_{y}$ , and $S_{z}$ . The second-order differential equation for acceleration is + +$$ +\frac {d ^ {2} S}{d t ^ {2}} = \frac {F (S , \frac {d S}{d t})}{m}, +$$ + +where $d^2 S / dt^2$ is the acceleration, $F(S, dS / dt)$ is the total force, and $m$ is the total mass of the system. + +We determine the time that it takes to complete a course by solving this equation numerically. + +# Assumptions + +- Weather conditions (temperature, humidity, wind direction, and wind speed) are uniform over the course and constant throughout the race. 
Because wind varies over time, and terrain significantly changes wind speed and direction, larger courses with greater variability in elevation are most affected by this assumption. However, due to the unpredictability of the weather, a general wind direction and speed is probably the most detailed information to which the rider will have access. +- Both wheel types use the same tire. +- Turning does not significantly affect power efficiency of the wheel, speed of the rider, or acceleration of the rider. +- Based on the previous assumption, we assume that the bicycle moves in a linear path in 3-D space. +- The cyclist applies power according to the function that we develop in the Model Design section below, where we introduce other assumptions associated with developing this function. +- The drag coefficient for the rider plus bike frame is the same for all riders and does not change as a function of yaw angle. The drag coefficient is 0.5 and the cross sectional area is $0.5\mathrm{m}^2$ [Analytic Cycling 2001a]. +- The wheels do not slip in any direction as they roll over the course. +- Other riders have no effect on the aerodynamic characteristics of the bike-rider system. This means that we ignore the effects of drafting (which can reduce drag by up to $25\%$ ). +- The rider uses a conventional 36-spoke wheel on the front of the bike. +- The rotational moments of inertia for disc wheels and spoke wheels are approximately $0.1000\mathrm{kg}\cdot \mathrm{m}^2$ and $0.0528\mathrm{kg}\cdot \mathrm{m}^2$ , respectively. In reality, these must be determined experimentally. + +# Model Design + +We identify the forces that act on a bicycle and rider: + +- The forward force that the rider applies with pedaling. +- The drag force that opposes the motion of the bicycle. Since we are concerned with analyzing wheel performance, we divide the total drag force into three components: + +- The drag force $F_{f}$ on the front wheel. +- The drag force $F_{r}$ on the rear wheel. 
- The drag force $F_{B}$ on the bicycle frame and rider.

- The force of gravity $F_{g}$ that either opposes or aids motion, depending on the road grade.
- The force of rolling resistance $F_{rr}$ due to the compression and deformation of air in the tires.

Because we assume that the bicycle travels in a linear path, we need consider only the components of these forces that act co-axially (i.e., parallel to the bicycle's direction of movement). This remains realistic so long as the assumption that the wheels do not slip holds, because the static frictional force between the wheels and the ground prevents movement normal to the velocity.

Consequently, we do not need to treat the forces as vectors; we must note only whether they aid or oppose the bike's movement.

# The Force that the Rider Applies

The rider applies a force to the pedals, which the gears translate to the wheels of the bicycle. Competitive bicycle racers generally shift gears to maintain a constant force on the pedals as well as a constant pedaling rate (cadence) whenever the sum of the other forces opposes the motion (i.e., going up a hill or into the wind) [Harris Cyclery 2001]. In other words, the cyclist attempts to exert constant power.

However, as the rider moves downhill, gravity aids the effort. As the bicycle gains speed, the rider's pedaling has a diminishing effect on speed, because drag forces increase with the square of speed. Eventually, at a speed $v_{\mathrm{cutoff}}$, the rider ceases pedaling.

We model the power $P_{a}$ that the rider inputs as a function of ground speed $v_{g}$:

$$
P_{a} = \left\{ \begin{array}{ll} 0, & \text{if } v_{g} \geq v_{\mathrm{cutoff}}; \\ P_{\mathrm{avg}}, & \text{if } v_{g} < v_{\mathrm{cutoff}}, \end{array} \right.
$$

where $P_{\mathrm{avg}}$ is the average power that the rider can sustain; its value varies from rider to rider and with the type of race.
Typical values range from 200 W for casual riders, to 420-460 W for Olympic athletes in long-distance road races, to as much as 1500 W in sprint races [Seiler 2001].

If we assume that no energy is lost in the transmission of power between the pedal and the wheel (i.e., in the gears), then by conservation of energy the power goes into either rotating the wheels or moving the bicycle forward:

$$
P_{a} = P_{w} + P_{f}, \tag{1}
$$

where $P_{w}$ is the power to rotate the wheels and $P_{f}$ is the power to drive the bicycle forward.

From elementary physics, the rotational kinetic energy of an object is $\frac{1}{2} I\omega^2$, where $I$ is the rotational moment of inertia and $\omega$ is the angular velocity of the object. For the front and rear wheels, we have

$$
K_{f} = \frac{1}{2} I_{f} \omega^{2}, \qquad K_{r} = \frac{1}{2} I_{r} \omega^{2},
$$

where $I_{f}$ and $I_{r}$ are the rotational moments of inertia of the front and rear wheels. The total rotational energy $K_{T}$ of the wheels is then

$$
K_{T} = K_{f} + K_{r}.
$$

The power due to the rotation of the wheels is the time derivative of the rotational energy:

$$
P_{w} = \frac{d K_{T}}{d t} = \frac{d}{d t} \left(\frac{1}{2} I_{f} \omega^{2} + \frac{1}{2} I_{r} \omega^{2}\right) = (I_{f} + I_{r})\, \omega \omega^{\prime}.
$$

The angular velocity $\omega$ of a wheel equals its ground speed $v_{g}$ divided by its radius $R$, while its angular acceleration $\omega^{\prime}$ is $a/R$. Substituting these into the above equation yields

$$
P_{w} = \left(I_{f} + I_{r}\right) \frac{v_{g} a}{R^{2}}.
$$

If we solve (1) for the power $P_{f}$ that pushes the bike forward and then divide by $v_{g}$, we obtain the applied force $F_{A}$ that pushes the bicycle forward:

$$
P_{f} = P_{a} - P_{w}, \qquad F_{A} = \frac{P_{a}}{v_{g}} - (I_{f} + I_{r}) \frac{a}{R^{2}}.
$$

# The Drag Forces on the Bicycle

The drag force acting on an object moving through a fluid is

$$
F = \frac{1}{2} C \rho A V^{2},
$$

where $\rho$ is the density of the fluid (air), $A$ is the cross-sectional area of the object, $V$ is its velocity relative to the fluid, and $C$ is the coefficient of drag, which must be determined experimentally.

Air density, which can have a significant effect on drag forces, depends on temperature, pressure, and humidity. Pressure depends on altitude and weather. Most baseball fans can attest to the significant effects of air density on drag forces: Baseballs carry much farther in Coors Field in Denver, Colorado, because the high altitude leads to a low pressure, which means that the air density is less as well.

We consider all of these factors that affect air density in our model, via calculations in the Appendix [EDITOR'S NOTE: We omit the appendix]. The most important aspect is that the air density is a function of the bike's elevation, $S_{z}$. The bike's acceleration $d^{2}S/dt^{2}$ depends on the drag forces, which depend on air density, which depends on the bike's position. This means that the differential equation that we develop will be second-order, because the second derivative of position depends on the position.

# The Movement of the Air and Bicycle

The air through which the bicycle moves is not stagnant: wind blowing over the course has a significant effect. We represent the wind as a vector field $\vec{V}_A$ with a magnitude and direction that are uniform over the racecourse and constant for the duration of the race.

The cyclist's speed over the ground is $v_{g}$; thus, the velocity is $v_{g}\hat{u}$, where $\hat{u}$ is a unit vector in the bicycle's direction of movement.
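The drag equation can be wrapped in a one-liner. In the example call below, $C = 0.5$ and $A = 0.5\,\mathrm{m}^2$ are the rider-plus-frame figures from our assumptions; the sea-level air density of $1.2\,\mathrm{kg/m^3}$ is an assumed round value.

```python
def drag_force(c_d, rho, area, v_rel):
    """Drag (N) on a body moving at v_rel (m/s) through air of
    density rho (kg/m^3): F = (1/2) * C * rho * A * V^2."""
    return 0.5 * c_d * rho * area * v_rel ** 2

# Rider plus frame at 10 m/s in sea-level air (values assumed):
f = drag_force(c_d=0.5, rho=1.2, area=0.5, v_rel=10.0)  # -> 15.0 N
```

Halving the air density, as a high-altitude venue partially does, halves this force, which is the Coors Field effect described above.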
We now consider the air's velocity relative to the bicycle instead of the bicycle's velocity relative to the air, because this is an easier way of thinking about the problem and the magnitudes of these two velocities are equal. Since the bike's velocity over the ground is $v_{g}\hat{u}$, the air's velocity relative to the bike due to the bike's motion is $-v_{g}\hat{u}$.

The total velocity of the air moving relative to the bike is the sum of two motions: the air's movement relative to the ground in the form of wind, $\vec{V}_A$, and the air's movement relative to the bike due to the bike's movement over the ground, $-v_g\hat{u}$, where the hat over $u$ denotes a unit vector. The total speed is then the magnitude of the sum of these two velocities (Figure 1),

$$
v_{T} = \left\| \vec{V}_{A} - v_{g} \hat{u} \right\|.
$$

![](images/1dfcb32f7681a613ccfb95b6791bb690e34c4ffb8175c7399539eac84f948c7b.jpg)
Figure 1. Components of total air velocity.

The yaw angle $\theta$ is the angle between the bicycle's axis of movement and the air direction, which we find from the dot product, $\vec{a} \cdot \vec{b} = \|\vec{a}\| \|\vec{b}\| \cos \theta$, where $\theta$ is the angle between the vectors. We have

$$
\begin{array}{l}
- v_{g} \hat{u} \cdot (\vec{V}_{A} - v_{g} \hat{u}) = v_{g} \left\| \vec{V}_{A} - v_{g} \hat{u} \right\| \cos \theta, \\[1ex]
\theta = \arccos \left[ \dfrac{- v_{g} \hat{u} \cdot (\vec{V}_{A} - v_{g} \hat{u})}{v_{g} \left\| \vec{V}_{A} - v_{g} \hat{u} \right\|} \right].
\end{array}
$$

The bicycle's aerodynamic characteristics, and thus the drag forces, change with $\theta$. Furthermore, because the bicycle does not always head into the wind, the overall drag force has components both normal and axial to the rider's path. We assume that the normal component is negligible and consider only the axial component (the component parallel to the cycle's axis of travel).
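The $v_T$ and $\theta$ computation can be sketched in 2-D ground coordinates; the function name and vector convention below are ours, not the paper's.

```python
import math

def relative_air(wind, v_g, heading):
    """Total air speed v_T (m/s) and yaw angle theta (radians) for a
    bike moving at ground speed v_g along the unit vector `heading`,
    in a wind given as a ground-frame vector (both 2-D tuples)."""
    ux, uy = heading
    # Relative air velocity: V_A - v_g * u
    rx, ry = wind[0] - v_g * ux, wind[1] - v_g * uy
    v_t = math.hypot(rx, ry)
    # theta from the dot product of -v_g*u with the relative velocity
    cos_t = (-v_g * ux * rx - v_g * uy * ry) / (v_g * v_t)
    return v_t, math.acos(max(-1.0, min(1.0, cos_t)))
```

For a bike heading east at 10 m/s in a 5 m/s pure crosswind, this gives $v_T = \sqrt{125} \approx 11.2$ m/s at a yaw angle of about $26.6^{\circ}$, matching the geometry of Figure 1.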
# The Wheels

The axial drag force on the wheels depends largely on the yaw angle of the air moving past them and must be determined experimentally for different types of wheels. For experimental results, we rely on Greenwell et al. [1995], who used a wind tunnel to determine the axial drag coefficient at different yaw angles for various commercially available wheels.

For each type of wheel, we plotted axial drag coefficient vs. yaw angle. From the plots, we constructed a polynomial regression of axial drag coefficient as a function of yaw angle. Greenwell et al. took the reference cross-sectional area of each wheel, $S_{\mathrm{ref}}$, to be the total side cross-sectional area of the wheel,

$$
S_{\mathrm{ref}} = \pi R^{2}.
$$

Thus, the effects of the cross-sectional area changing with yaw angle are included in the axial drag coefficient. Consequently, to use their results, we must use the same reference area. The axial drag force on the front wheel is then

$$
F_{F} = K_{W} C_{F} v_{T}^{2},
$$

where $K_{W} = \frac{1}{2}\rho S_{\mathrm{ref}}$ and $C_F$ is the axial drag coefficient of the front wheel at yaw angle $\theta$.

An interesting result of Greenwell et al. is that drag forces on the rear wheel are generally reduced by about $25\%$ due to aerodynamic effects of the seat tube. This means that the axial drag force on the rear wheel is

$$
F_{R} = (0.75) K_{W} C_{R} v_{T}^{2},
$$

where $C_R$ is the axial drag coefficient of the rear wheel at yaw angle $\theta$.

# The Bicycle Frame and Rider

Unfortunately, we were not able to obtain data relating the drag coefficient of the rider and frame to the yaw angle, so we could consider only the effects of the component of the wind that is parallel to the bicycle's velocity.

![](images/60bbc21163bdd8d5d369dcb9f2a664728319c14bdb4fdbdfaa0f7f952b4eba0a.jpg)
Figure 2. Drag force on the frame and rider.
In other words, for the drag force on the frame and rider, we consider only the vector projection of the wind onto the rider's velocity (Figure 2). The total air velocity is then the sum of this projection and the negative of the rider's velocity. We find the total drag force on the frame and rider to be

$$
F_{B} = K_{B} \left\| (\hat{u} \cdot \vec{V}_{A}) \hat{u} - v_{g} \hat{u} \right\|^{2},
$$

where $K_B = \frac{1}{2} C_B\rho A$, $A$ is the cross-sectional area of bicycle frame and rider, and $C_B$ is the drag coefficient of bicycle frame and cyclist.

# Force of Gravity

If the bicycle is on a hill, the component of gravitational force in the direction of motion is

$$
F_{g} = m_{T} g \sin \phi,
$$

where $m_T$ is the total mass of the bicycle and rider, $g$ is the acceleration of gravity, and $\phi$ is the angle of inclination of the hill. Since the road grade is $G = \sin \phi$, we have $F_g = m_T g G$.

# Force of Rolling Resistance

Because the wheels have inflatable tires, the compression of the air within the tires causes a resistance to their rolling. This rolling resistance is a reaction to the rolling of the tires: it is 0 when the tires are not rotating and proportional to the total weight of the bicycle and rider when they are. Thus,

$$
F_{rr} = \left\{ \begin{array}{ll} C_{rr} m_{T} g, & \text{if } v_{g} \neq 0; \\ 0, & \text{if } v_{g} = 0, \end{array} \right.
$$

where $C_{rr}$, the coefficient of rolling resistance, is about 0.004 for most tires.
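The gravity and rolling-resistance terms are straightforward to code; a minimal sketch using the relations above ($F_g = m_T g G$ with $G = \sin\phi$, and $C_{rr} = 0.004$):

```python
def gravity_force(m_t, grade, g=9.81):
    """Axial gravity component (N) for road grade G = sin(phi);
    positive on a climb, negative on a descent."""
    return m_t * g * grade

def rolling_resistance(m_t, v_g, c_rr=0.004, g=9.81):
    """Rolling resistance (N): zero when the wheels are not turning,
    proportional to total weight otherwise."""
    return c_rr * m_t * g if v_g != 0 else 0.0

# An assumed 85 kg rider-plus-bike on a 5% grade at speed:
f_g = gravity_force(85.0, 0.05)        # ~41.7 N
f_rr = rolling_resistance(85.0, 5.0)   # ~3.3 N
```

The mass of 85 kg is an illustrative value, not one from the paper; the comparison shows why grade dominates rolling resistance on any real climb.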
# Summing the Forces

We sum the forces that act in the model:

$$
F_{A} - F_{B} - F_{F} - F_{R} - F_{g} - F_{rr} = m_{T} a, \tag{2}
$$

where

$F_{A} = \dfrac{P_{a}}{v_{g}} - \left(I_{f} + I_{r}\right)\dfrac{a}{R^{2}}$ (forward force that the rider exerts),

$F_{B} = K_{B}\left\| (\hat{u}\cdot \vec{V}_{A})\hat{u} - v_{g}\hat{u}\right\|^{2}$ (drag force on bike frame and rider),

$F_{F} = K_{W}C_{F}v_{T}^{2}$ (drag force on front wheel),

$F_{R} = 0.75K_{W}C_{R}v_{T}^{2}$ (drag force on rear wheel),

$F_{g} = m_{T}gG$ (force of gravity),

$F_{rr} = C_{rr}m_Tg$ if $v_{g}\neq 0$, and $0$ if $v_{g} = 0$ (force of rolling resistance).

The forward force of the rider depends on acceleration. Since we want to have all the acceleration terms together, we first group them:

$$
\begin{array}{l}
\left(\dfrac{P_{a}}{v_{g}} - (I_{f} + I_{r}) \dfrac{a}{R^{2}}\right) - F_{B} - F_{F} - F_{R} - F_{g} - F_{rr} = m_{T} a, \\[2ex]
\dfrac{P_{a}}{v_{g}} - F_{B} - F_{F} - F_{R} - F_{g} - F_{rr} = a \left(m_{T} + \dfrac{I_{f} + I_{r}}{R^{2}}\right).
\end{array}
$$

Substituting the other forces into (2), we obtain

$$
\frac{P_{a}}{v_{g}} - K_{B} \left\| (\hat{u} \cdot \vec{V}_{A}) \hat{u} - v_{g} \hat{u} \right\|^{2} - K_{W} C_{F} v_{T}^{2} - 0.75 K_{W} C_{R} v_{T}^{2} - m_{T} g G - C_{rr} m_{T} g = \left(m_{T} + \frac{I_{f} + I_{r}}{R^{2}}\right) a.
$$

Solving for $a$, we find the second-order differential equation

$$
a = \frac{d^{2} S}{d t^{2}} = \frac{\dfrac{P_{a}}{v_{g}} - K_{B} \left\| (\hat{u} \cdot \vec{V}_{A}) \hat{u} - v_{g} \hat{u} \right\|^{2} - K_{W} C_{F} v_{T}^{2} - 0.75 K_{W} C_{R} v_{T}^{2} - m_{T} g G - C_{rr} m_{T} g}{m_{T} + \dfrac{I_{f} + I_{r}}{R^{2}}}.
$$

# Completing the Model

This differential equation is too complicated to solve analytically, so we solve it numerically using the fourth-order Runge-Kutta (RK4) method [Burden and Faires 1997]. This method is generally more accurate than other numerical approximation methods such as Euler's method, especially at points farther from the start point.

To make the approximation, we first write the acceleration as a function of time $t$, position (elevation $S_{z}$), and speed $v_{g}$:

$$
a = a(t, S_{z}, v_{g}).
$$

The RK4 method uses a weighted approximation of the acceleration at a given time $t_k$ with speed $v_{g_k}$ to determine the speed at time $t_{k+1} = t_k + h$, where $h$ is the time-step size:

$$
\left(v_{g}\right)_{k+1} = \left(v_{g}\right)_{k} + \frac{1}{6} h \left(W_{K1} + 2 W_{K2} + 2 W_{K3} + W_{K4}\right),
$$

where

$$
W_{K1} = a\left(t_{k},\; v_{g_{k}}\right),
$$

$$
W_{K2} = a\left(t_{k} + \frac{h}{2},\; v_{g_{k}} + \frac{h W_{K1}}{2}\right),
$$

$$
W_{K3} = a\left(t_{k} + \frac{h}{2},\; v_{g_{k}} + \frac{h W_{K2}}{2}\right),
$$

$$
W_{K4} = a\left(t_{k} + h,\; v_{g_{k}} + h W_{K3}\right).
$$

As with any numerical approximation method, we must know the initial speed $v_{g_0}$. Then, with the velocity computed at a given time, we calculate the position of the bike at time $t_{k+1}$ as

$$
\vec{S}_{k+1} = \vec{S}_{k} + h (v_{g})_{k} \hat{u}
$$

from the initial position of the bike $\vec{S}_0$. Again, because we consider only axial forces acting on the bike and ignore turning, we can model only a bike moving in a straight line; this means that $\hat{u}$, the unit vector in the direction of the velocity, is constant.

# Model Validation

To validate our model, we developed a computer program to simulate traversal of a course (see screen display in Figure 3).
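The RK4 update is easy to state generically. The sketch below integrates a one-dimensional $dv/dt = a(t, v)$ (the paper's acceleration also depends on elevation $S_z$, which we drop here for brevity) and checks a few steps against a known exact solution:

```python
def rk4_step(a, t, v, h):
    """One fourth-order Runge-Kutta step for dv/dt = a(t, v)."""
    w1 = a(t, v)
    w2 = a(t + h / 2, v + h * w1 / 2)
    w3 = a(t + h / 2, v + h * w2 / 2)
    w4 = a(t + h, v + h * w3)
    return v + h / 6 * (w1 + 2 * w2 + 2 * w3 + w4)

# Check against dv/dt = v, v(0) = 1, whose exact solution is e^t:
v, t, h = 1.0, 0.0, 0.1
for _ in range(10):
    v = rk4_step(lambda t, v: v, t, v, h)
    t += h
# after integrating to t = 1, v differs from e by only ~2e-6
```

An Euler integrator with the same step size is off by about 0.12 here, which illustrates the accuracy remark above.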
We input a map of the 2000 Olympic Games time-trial course [NBC Olympics 2000a]. The prevailing wind speed and direction for Sydney on 27 September 2000 (the day of the Olympic finals for the road race) was $10~\mathrm{m/s}$ at $315^{\circ}$ [Analytic Cycling 2001a]. We assumed that the winner would maintain an average power of $450~\mathrm{W}$.

Using our model, we calculated that cyclists would complete 15 laps of the $15.3~\mathrm{km}$ course in a time between $5\,\mathrm{h}\,18\,\mathrm{min}$ and $5\,\mathrm{h}\,47\,\mathrm{min}$ (depending on which wheel is used). In Sydney that day, Jan Ullrich of Germany won the race in a time of $5\,\mathrm{h}\,29\,\mathrm{min}$. Since this result falls within the range of predicted values, we believe that our model is reasonably accurate.

However, to truly test the validity of the model, we would need to experiment with various riders and wheel types for which we know the drag coefficient as a function of yaw angle. The riders could use an exercise bike to determine their average power outputs. We would then have them traverse a course with different wheel types. After inputting the course into our model, we could compare the times that the model predicted with the experimental results.

![](images/6210b3b4fb4c775c0efbbee9f7ba3f2e8e44dd7c238adc2e761c5b0c4c18d2a2.jpg)
Figure 3. 2000 Olympic Games time-trial course.

# Model Application

# Table Creation

We construct tables for varying road grades and wind speeds. Since the direction of the wind has an appreciable effect on wheel performance, we create three tables: one for a headwind, one for a crosswind, and one for a tailwind. We applied our model to a hill $1.0~\mathrm{km}$ long, beginning with a velocity of $10~\mathrm{m/s}$. The results in tabular form are in Table 1.

Table 1. Preferred wheel type for a hill $1.0~\mathrm{km}$ long and starting speed $10~\mathrm{m/s}$.

Headwind: preferred wheel given road grade and wind speed.

| Road Grade \ Wind Speed | 0 | 4 | 8 | 12 | 16 | 20 |
|---|---|---|---|---|---|---|
| 0 | Spoke | Either | Either | Disc | Disc | Disc |
| 1 | Spoke | Spoke | Disc | Disc | Disc | Disc |
| 2 | Spoke | Either | Disc | Disc | Disc | Disc |
| 3 | Spoke | Spoke | Either | Spoke | Either | Either |
| 4 | Spoke | Spoke | Spoke | Spoke | Spoke | Spoke |
| 5 | Spoke | Spoke | Spoke | Spoke | Spoke | Spoke |
| 6 | Spoke | Spoke | Spoke | Spoke | Spoke | Spoke |
| 7 | Spoke | Spoke | Spoke | Spoke | Spoke | Spoke |
| 8 | Spoke | Spoke | Spoke | Spoke | Spoke | Spoke |
| 9 | Spoke | Spoke | Spoke | Spoke | Spoke | Spoke |
| 10 | Spoke | Spoke | Spoke | Spoke | Spoke | Spoke |

Tailwind: preferred wheel given road grade and wind speed.

| Road Grade \ Wind Speed | 0 | 4 | 8 | 12 | 16 | 20 |
|---|---|---|---|---|---|---|
| 0 | Spoke | Spoke | Spoke | Spoke | Spoke | Spoke |
| 1 | Spoke | Spoke | Spoke | Spoke | Spoke | Spoke |
| 2 | Spoke | Spoke | Spoke | Spoke | Spoke | Either |
| 3 | Spoke | Spoke | Spoke | Spoke | Spoke | Disc |
| 4 | Spoke | Spoke | Spoke | Spoke | Spoke | Either |
| 5 | Spoke | Spoke | Spoke | Spoke | Spoke | Spoke |
| 6 | Spoke | Spoke | Spoke | Spoke | Spoke | Spoke |
| 7 | Spoke | Spoke | Spoke | Spoke | Spoke | Spoke |
| 8 | Spoke | Spoke | Spoke | Spoke | Spoke | Spoke |
| 9 | Spoke | Spoke | Spoke | Spoke | Spoke | Spoke |
| 10 | Spoke | Spoke | Spoke | Spoke | Spoke | Spoke |

Crosswind: preferred wheel given road grade and wind speed.

| Road Grade \ Wind Speed | 0 | 4 | 8 | 12 | 16 | 20 |
|---|---|---|---|---|---|---|
| 0 | Spoke | Disc | Disc | Disc | Disc | Disc |
| 1 | Spoke | Disc | Disc | Disc | Disc | Spoke |
| 2 | Spoke | Disc | Disc | Disc | Disc | Spoke |
| 3 | Spoke | Disc | Disc | Disc | Disc | Spoke |
| 4 | Spoke | Disc | Disc | Disc | Disc | Spoke |
| 5 | Spoke | Disc | Disc | Disc | Disc | Spoke |
| 6 | Spoke | Disc | Disc | Disc | Disc | Spoke |
| 7 | Spoke | Disc | Disc | Disc | Disc | Spoke |
| 8 | Spoke | Disc | Disc | Disc | Disc | Spoke |
| 9 | Spoke | Disc | Disc | Disc | Disc | Spoke |
| 10 | Spoke | Disc | Disc | Disc | Either | Spoke |

# Table Analysis

Our model accounts for many additional factors other than wind speed and road grade; these contributions are lost if the results are pressed into table form. Additionally, cycling races generally do not consist of one large hill of uniform grade; there are typically many turns, hills, and valleys.

As a result, we recommend against using the tables that we provide! Our software implementation of the model allows the entry of all factors relating to the course that affect wheel choice.

Typically, course layout and elevation profile are available well in advance of a cycling race. We recommend that users of our software input the course and run multiple scenarios based on varying wind speeds and directions.

# Results and Conclusions

We analyzed 7 different wheels: 5 spoked (Campagnolo, Conventional 36-Spoke, HED 24-Spoke, Specialized Tri-Spoke, and FIR Tri-Spoke) and 2 disc (HED Disc and ZIPP Disc). We chose these wheels because we could find data relating their drag coefficients to the yaw angle of the air. After running our model over varying courses and conditions, we came to some interesting conclusions, which mesh well with what intuition would suggest.

# Crosswinds

In crosswinds, disc wheels dramatically outperform spoked wheels, because the drag coefficients for disc wheels decrease sharply as the yaw angle increases toward around $20^{\circ}$ and beyond. For the HED Disc, the drag coefficient actually becomes negative at larger yaw angles, indicating that the wheel acts like a sail and helps propel the cycle forward instead of slowing it down! The ZIPP Disc drag coefficients do not become negative but drop very close to zero at larger yaw angles. Consequently, the difference in speed between discs and spokes in a crosswind is significant. If a course has a strong crosswind (greater than 20 mph), then a disc wheel can make time differences on the order of $20\%$ or more. However, even in a light crosswind, the two disc wheels that we analyzed outperformed every spoked wheel.
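The dependence of drag on yaw angle can be illustrated with a small sketch. The coefficient samples below are invented for illustration only; they are NOT the measured values of Greenwell et al. [1995], and the interpolation here is piecewise linear rather than the polynomial fit used in the model.

```python
# Hypothetical drag-coefficient samples every 7.5 degrees of yaw
# (illustrative numbers only -- NOT the Greenwell et al. [1995] data).
YAW = [7.5 * i for i in range(9)]                 # 0, 7.5, ..., 60 degrees
CD_SPOKED = [0.050] * 9                           # roughly flat with yaw
CD_DISC = [0.050, 0.045, 0.035, 0.020, 0.005,
           -0.005, -0.010, -0.015, -0.020]        # drops, then goes negative

def interp_cd(angle, xs, ys):
    """Piecewise-linear interpolation of a drag coefficient at `angle`."""
    for (x0, y0), (x1, y1) in zip(zip(xs, ys), zip(xs[1:], ys[1:])):
        if x0 <= angle <= x1:
            return y0 + (y1 - y0) * (angle - x0) / (x1 - x0)
    raise ValueError("yaw angle outside tabulated range")

def drag_force(cd, rho, area, v_rel):
    """Axial drag force F = cd * A * rho * v_rel^2 / 2; a negative cd
    yields a propulsive ('sail') force."""
    return 0.5 * cd * rho * area * v_rel ** 2
```

At a $30^{\circ}$ yaw angle the disc's interpolated coefficient is already near zero, so its drag force is far below the spoked wheel's; past $45^{\circ}$ the sign flips and the disc is propelled.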
# Direct Head and Tail Winds

In direct head and tail winds, spoked wheels slightly outperform disc wheels, because both types have similar drag coefficients at small yaw angles and spoked wheels are lighter.

Spoked wheels generally outperform disc wheels when going uphill in the absence of wind, because they are lighter, whereas disc wheels outperform spoked wheels when going downhill, because they are heavier. With a head wind or a tail wind when going up a hill, spoked wheels generally still outperform discs; however, the introduction of any crosswind with a yaw angle much greater than $15^{\circ}$ or $20^{\circ}$ causes the smaller disc drag coefficients to outweigh the mass differences, making disc wheels more efficient than spoked wheels.

# Long Races

In long races, disc wheels generally outperform spoked wheels, because the smaller drag forces on the disc have a more significant effect on overall performance. Generally, in a race much longer than $5~\mathrm{km}$, discs are more efficient, because the wind acts on them for longer than in short races, so aerodynamic characteristics matter more than mass differences.

# Short Races

In short races, where acceleration is more important, the spoked wheels, with their smaller masses and moments of inertia, outperform the disc wheels.

# Strengths of the Model

Robustness. We derive our model from basic physical relationships and limit our use of assumptions. In every case where an assumption is required, we substantiate it with evidence and reasoning that illustrate why the assumption is valid.

Grounded in theory and research. We constructed our model based on both theory (Newton's second law) and research (Greenwell et al. [1995]).

Ease of use. Although the physical concepts behind our model are relatively simple to understand, the mathematical derivations and calculations are difficult to perform.
We created a user-friendly computer program with a graphical interface that allows anyone with a basic knowledge of computers to input the required data. The program performs the mathematical calculations and reports the preferred wheel choice, along with calculated stage times for the race.

# Weaknesses of the Model

Representing the course. We represent the course as a sequence of points in three-dimensional space, with the path from point to point represented by a straight line. This is not an accurate representation of any course; the traversal of hills, valleys, and curves all creates nonlinear movement. Accuracy could be increased by entering the course in greater detail, with a larger number of location nodes.

Approximation of wheel drag data. Greenwell et al. [1995] reported data for yaw angles $0^{\circ}$ through $60^{\circ}$ only, in $7.5^{\circ}$ increments. We developed interpolating polynomials through this range; further tests at various yaw angles would refine them.

# References

Analytic Cycling. 2001a. Wind on rider. http://www.analyticycling.com/DiffEqWindCourse_Disc.html. Accessed 10 February 2001.

______. 2001b. Forces source input. http://www.analyticycling.com/ForcesSource_Input.html. Accessed 10 February 2001.

Burden, Richard L., and Douglas Faires. 1997. *Numerical Analysis*. 6th ed. New York: Brooks/Cole.

Greenwell, D., N. Wood, E. Bridge, and R. Addy. 1995. Aerodynamic characteristics of low-drag bicycle wheels. *Aeronautical Journal* 99 (March 1995): 109-120.

Harris Cyclery. 2001. Bicycle glossary. http://www.sheldonbrown.com/gloss_ca-m.html#cadence. Accessed 10 February 2001.

NBC Olympics. 2000a. Individual time trial. http://sydney2000.nbcollympics.com/features/cy/2000/09/cyfeat_roadmap/cy_roadmap_01.html. Accessed 11 February 2001.

______. 2000b. Men's road race results. http://sydney2000.nbcollympics.com/results/oly/cy/cym012.html?event=cym012100o.js. Accessed 11 February 2001.

Palmer, Chad. 2001.
Virtual temperature and humidity. *USA Today*. http://www.usatoday.com/weather/wvirtual.htm. Accessed 10 February 2001.

Seiler, Stephen. 2001. MAPP Questions and Answers Page. Masters Athlete Physiology and Performance. http://home.hia.no/~stephens/qanda.htm. Accessed 10 February 2001.

# A Systematic Technique for Optimal Bicycle Wheel Selection

Michael B. Flynn

Eamonn T. Long

William Whelan-Curtin

University College Cork

Cork, Ireland

Advisor: James Joseph Grannell

# Introduction

We present a theoretical investigation of the dynamics and aerodynamics of a racing bicycle. We identify the dominant physical mechanisms and apply Newtonian mechanics to obtain a differential equation that gives the power output required of the rider. We then approximate the time-averaged power difference between the two types of rear wheel. We develop an easy-to-read, unambiguous, and comprehensive table that allows a person familiar with track and wind conditions to select the correct wheel type. We apply this table in the analysis of a time-trial stage of the Tour de France. We compare and contrast our choice of wheel with that of the leading competitors, with enlightening results.

Our criterion on the wind speed and direction gives predictions that agree with the exhaustive experimental data considered. We complement our criterion with suggestions based on the experience of a wide range of experienced cyclists. We provide an informative critique of our model and suggest innovative ways to enhance the wheel-choice criterion.

# Assumptions

- The spoked wheel is the Campagnolo Vento 16 HPW clincher wheel, used by the ONCE cycling team [ONCE Cycling Team Website 2000]. Reported mass = 1.193 kg [Hi-Tech Bikes 2001].

Table 1. Model inputs and symbols.
| Symbol | Meaning |
|---|---|
| $s$ | distance over which the average power is calculated |
| $k$ | road grade |
| $u$ | initial speed of the rider before any incline |
| $c_r$ | coefficient of air drag for the bike and the rider frame |
| $A_r$ | cross-sectional area of bike and rider frame |
| $\rho$ | air density |
| $m(spk)$ | mass of spoked rear wheel |
| $m(sld)$ | mass of disc rear wheel |
| $c_{rr}$ | coefficient of rolling resistance |
| $g = 9.81~\mathrm{m/s^2}$ | acceleration due to gravity |
| $c_{fw}$ | drag coefficient of front wheel |
| $c_{rw}(sld)$ | drag coefficient of solid wheel |
| $c_{rw}(spk)$ | drag coefficient of spoked wheel |
| $r_{rw}$ | radius of rear wheel |
| $\mu$ | constant of deceleration |
| $\phi$ | angle of wind with respect to direction of bike (degrees) |
| $v_f$ | speed of the wind with respect to the cyclist |
| $I_{rw}$ | moment of inertia of the rear wheel |
| $I_{fw}$ | moment of inertia of the front wheel |
| $v_w$ | wind speed |
| $\nu$ | kinematic viscosity of air |
- The density and pressure of the air across the bike are essentially constant. (See Appendix. [EDITOR'S NOTE: We omit the Appendix.])
- For most of the journey, the bike travels in a straight line $(\pm 10\%)$. This is reasonable, since cyclists avoid turning as much as possible and they have a wide enough road to achieve a straight line.
- The solid wheel is taken to be the HED disc tubular freewheel, the solid wheel of choice in the Tour de France. Reported mass $= 1.229~\mathrm{kg}$ [Hi-Tech Bikes 2001].
- The drag coefficients of both wheels are independent of the wind speed in the range of raceable weather conditions and vary significantly with the relative direction of the wind [Flanagan 1996].
- The drag area of a typical crouched racer is $0.3~\mathrm{m}^2$ [Compton 1998].
- The radius of the wheel is $35~\mathrm{cm}$ [Compton 1998].
- The efficiency of the drive train is essentially $100\%$, reasonable for elite racing bikes [Pivit 2001].
- The coefficient of rolling resistance between road and wheel rubber is 0.007 [Pivit 2001].
- The moment of inertia of a bicycle wheel about an axis through its centre and perpendicular to the plane of the wheel is $I = \frac{1}{9} m r^2$, where $m$ is the mass of the wheel and $r$ is the radius. This agrees with empirical data [Compton 1998].
- The deceleration to terminal speed on a uniform incline is constant to within $\pm 2\%$.
- The terminal speed on a uniform incline is reached at $100~\mathrm{m}$ up the incline.
- The deceleration of a rider on a slope is proportional to the gradient of the slope (given in the problem statement).
- The power is averaged over $100~\mathrm{m}$ of acceleration, where the acceleration is that calculated in the Appendix. This power is used in determining the wind speed criterion for solid wheels.
- All of the drag due to the rider and bike frame is attributed to the rider cross-sectional area, since the area of the rider is much greater.
- The rolling resistance $c_{rr} m g$ is the same for the solid wheel as for the spoked wheel. The tires surrounding the wheels are identical, and the mass difference between the wheels is only about $0.1\%$ of the overall mass of the bike plus rider.

# The Wind Speed Criterion

We have from Newton's Second Law that $F = ma$, where $F$ is the force acting on the object, $m$ is its mass, and $a$ is its net acceleration. For the bicycle, the force equation can be written as follows:

Force applied by rider $=$ Retarding Forces $+ ma +$ rotational acceleration,

where $m$ is the mass of the rider-bike system. The retarding forces consist of

- the drag due to the bike frame and the rider,
- the drag due to the individual wheels,
- the friction due to the motion of the wheels in the air, and
- the rolling resistance of the wheels on the surface.

The formulas for these torques and forces are

$$
\text{drag force of bike frame and rider} = \frac{c_w A \rho v_f^2}{2},
$$

$$
\text{drag force due to front wheel} = \frac{c_{fw} A \rho v_f^2}{2},
$$

$$
\text{drag force due to rear wheel} = \frac{c_{rw} A \rho v_f^2}{2},
$$

$$
\text{frictional moment} = 0.616 \pi \rho \omega^{3/2} \nu^{1/2} r^4,
$$

$$
\left(m_r + m_b + m_{fw} + \frac{I_{fw}}{r^2} + m_{rw} + \frac{I_{rw}}{r^2}\right) a = \text{Resultant Force}.
$$

With this in mind, the power is averaged. The criterion on the wind speed $v_w$ becomes

$$
v_w > \sqrt{\frac{m_{rw}(sld) - m_{rw}(spk)}{\sigma_{spk} - \sigma_{sld}} + \left(\frac{\gamma}{s}\right)^2 - \frac{\lambda}{s}} + \frac{\gamma}{s},
$$

where $\sigma$, $\lambda$, and $\gamma$ are as given in the Appendix. If the quantity on the right-hand side is not positive, then the solid wheel is always best.
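A sketch of how this criterion might be evaluated in software follows. Since the Appendix defining $\sigma$, $\lambda$, and $\gamma$ is omitted, all input values here are hypothetical placeholders, not values from the paper.

```python
import math

def min_wind_speed(m_sld, m_spk, sigma_spk, sigma_sld, gamma, lam, s):
    """Threshold wind speed from the criterion.  The sigma, gamma, and
    lambda arguments are the Appendix quantities (any values supplied
    are hypothetical); s is the averaging distance.  Requires
    sigma_spk != sigma_sld."""
    radicand = (m_sld - m_spk) / (sigma_spk - sigma_sld) \
               + (gamma / s) ** 2 - lam / s
    if radicand < 0.0:
        # Threshold is not positive: the solid wheel requires less
        # power at any nonzero wind speed (a "0" entry in Table 2).
        return 0.0
    return math.sqrt(radicand) + gamma / s
```

Comparing a measured or forecast wind speed against this threshold then selects the wheel: above it, the solid wheel; below it, the spoked wheel.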
The quantity $\sigma$ is effectively an aerodynamic term, whereas $\gamma$ and $\lambda$ can be thought of as representing the acceleration and the time of acceleration, respectively. Note that $\gamma$ depends on the wind direction $\phi$.

# The Minimum Wind Speed Table

Table 2 gives the wind speed below which the power for the spoked wheel is less, for various rider speeds. A 0 means that for any nonzero wind speed, the solid wheel requires less power; $\infty$ means that the rider will not be able to continue up the slope for $100~\mathrm{m}$.

# Course Example

Time-trial course: Stage 1, Tour de France 2000 (Figure 1): a $16.5~\mathrm{km}$ circuit starting at the Futuroscope building in Chassenuil-du-Poitou, France [Tour de France 2000].

Modeling the course: The course is modeled by a quadrilateral. This is a faithful representation of the course [CNN/Sports Illustrated Website 2000]. The course is predominantly level (Figure 2), save for a $1~\mathrm{km}$ climb at a gradient of $3.7\%$. This climb is treated as the crucial feature of the time-trial course, insofar as wheel choice is concerned.

# Table 2.

Wind speed (in $\mathrm{m/s}$) below which the power for the spoked wheel is less.

# a.

Rider speed of $10~\mathrm{m/s}$
| Road Grade \ Wind direction | 180° | 173° | 165° | 150° | 135° |
|---|---|---|---|---|---|
| 0.00 | 16 | 0 | 0 | 0 | 0 |
| 0.01 | 23 | 0 | 0 | 0 | 0 |
| 0.02 | 29 | 1 | 0 | 0 | 3 |
| 0.03 | 34 | 3 | 0 | 0 | 5 |
| 0.04 | 39 | 4 | 0 | 0 | 7 |
| 0.05 | 43 | 6 | 1 | 0 | 8 |
| 0.06 | 48 | 7 | 3 | 0 | 10 |
| 0.07 | 51 | 8 | 4 | 0 | 11 |
| 0.08 | 55 | 10 | 5 | 1 | 13 |
| 0.09 | 59 | 11 | 6 | 2 | 14 |
| 0.10 | 3 | | | | |

# b.

Rider speed of $12.5~\mathrm{m/s}$
| Road Grade \ Wind direction | 180° | 173° | 165° | 150° | 135° |
|---|---|---|---|---|---|
| 0.00 | 13 | 0 | 0 | 0 | 0 |
| 0.01 | 21 | 0 | 0 | 0 | 0 |
| 0.02 | 25 | 0 | 0 | 0 | 0 |
| 0.03 | 26 | 0 | 0 | 0 | 2 |
| 0.04 | 32 | 2 | 0 | 0 | 3 |
| 0.05 | 36 | 3 | 0 | 0 | 5 |
| 0.06 | 41 | 4 | 0 | 1 | 7 |
| 0.07 | 45 | 5 | 0 | 2 | 8 |
| 0.08 | 48 | 7 | 1 | 3 | 10 |
| 0.09 | 52 | 8 | 2 | 4 | 11 |
| 0.10 | 59 | 9 | 3 | 5 | 12 |
# c.

Rider speed of $14~\mathrm{m/s}$
| Road Grade \ Wind direction | 180° | 173° | 165° | 150° | 135° |
|---|---|---|---|---|---|
| 0.00 | 12 | 0 | 0 | 0 | 0 |
| 0.01 | 19 | 0 | 0 | 0 | 0 |
| 0.02 | 25 | 0 | 0 | 0 | 0 |
| 0.03 | 30 | 0 | 0 | 0 | 0 |
| 0.04 | 35 | 0 | 0 | 0 | 1 |
| 0.05 | 39 | 1 | 0 | 0 | 3 |
| 0.06 | 43 | 3 | 0 | 0 | 5 |
| 0.07 | 47 | 4 | 0 | 0 | 6 |
| 0.08 | 50 | 5 | 0 | 1 | 8 |
| 0.09 | 54 | 6 | 0 | 2 | 9 |
| 0.10 | 57 | 7 | 1 | 3 | 10 |
Weather: The air temperature was about $20^{\circ}\mathrm{C}$. The wind was variable west to southwest, between 20 and 30 kph (5.6-8.3 m/s) [Official Tour de France Website 2000].

![](images/90fa0bb75b2506b3bab4df29874765f45b284ddd25568e61e9cc8fa7c7826d9b.jpg)
Figure 1. The course. North points towards the top of the page.

The climb is westward, meaning that the riders faced the wind at an angle varying from $0^{\circ}$ to $45^{\circ}$: exactly the range of applicability of the table. The average speed of the top ten finishers was about $14~\mathrm{m/s}$. We analyse the climb from the perspective of a rider who begins the climb at that speed and whose speed levels off after $100~\mathrm{m}$ of the climb. Knowing the speed of the rider to be $14~\mathrm{m/s}$ and the gradient of the slope to be approximately $4\%$, we can apply Table 2c. The minimum wind speed in the table is $1.44~\mathrm{m/s}$ (assuming that the wind is at an angle to the direction of motion). On the basis of the climb alone, the solid wheel is the better choice. In fact, assuming that the wind is at an angle to the rider for most of the journey (wind that "follows" a cyclist around is unlikely), the solid wheel is better at all nonzero wind speeds (assuming level terrain and rider speed of $14~\mathrm{m/s}$).

The recommended wheel for this race is therefore the solid wheel. In this time trial, the U.S. Postal Team and the Spanish ONCE Team used a solid rear wheel and a tri-spoke front wheel (a more aerodynamically efficient version of the standard spoked wheel). The two teams had five riders in the top ten [Official Tour de France Website 2000; Pediana 2000].

![](images/046ba9d5fef9d3dbbf04060180e2fe960a335e1d1a5a1bfaa83233098cfd5241.jpg)
Figure 2. Profile of the course.

# The Adequacy of the Table

# Comparing Its Predictions with Experimental Data

On an indoor circuit, the gradient and the wind can be considered to be zero.
The corresponding entries in Table 2, for a direct headwind $(180^{\circ})$ at zero gradient, are nonzero, indicating that spoked wheels are better. This prediction is borne out by the empirical results on an indoor circuit [Beer 1999] (Table 3).

Table 3. Measurements made on a 1-km indoor circuit.
| Watts | | Spoked Rear Wheel | Solid Rear Wheel |
|---|---|---|---|
| 100 | time (min:sec) | 2:22.58 | 2:26.78 |
| | speed (m/s) | 6.97 | 6.77 |
| 200 | time (min:sec) | 1:51.75 | 1:52.55 |
| | speed (m/s) | 8.90 | 8.83 |
| 300 | time (min:sec) | 1:36.98 | 1:39.70 |
| | speed (m/s) | 10.25 | 9.97 |
The times correspond to seven laps of $147~\mathrm{m}$. The test cyclist rode at power outputs of $100~\mathrm{W}$, $200~\mathrm{W}$, and $300~\mathrm{W}$. The tests were repeated with no more than a $2\%$ time variation; the spoked rear wheel has the better performance, by a margin greater than the experimental error.

However, for a road circuit (where one would expect variable and nonzero wind velocities), Table 2 predicts that for nonzero crosswinds and slopes of gradient less than $5\%$, the solid wheel is best, based on minimising aerodynamic losses. Thus, if the solid wheel is the right choice for a particular cyclist speed, then the solid wheel choice is valid for all greater cyclist speeds, given the same atmospheric conditions. Experimental data in Table 4 back up this claim.

Table 4. Measurements made on a 7.2-km road circuit.
| Watts | | Spoked Rear Wheel | Solid Rear Wheel |
|---|---|---|---|
| 200 | time (min:sec) | 15:22 | 13:53 |
| | speed (m/s) | 7.81 | 8.63 |
The circuit consisted of a 7.2-km looped course of rolling hills (gradient $< 5\%$) with varying wind conditions (nonzero wind speeds, unlike the indoor track). The power output of the cyclist was approximately 200 W. The tests were repeated with no more than a $2.3\%$ time variation. For the road circuit, the solid rear wheel is the better choice (again by more than the experimental error). According to the appropriate table, the solid rear wheel is the more efficient for most of the conditions encountered. Thus, the predictions of the model are once more confirmed.

# Additional Factors Not Considered by the Table

Stability of the bike: Stability is a major factor in cycling. A cyclist wants to concentrate on putting the maximum possible power through the pedals. If the bicycle is unstable, the jerking of the handlebars in response to sudden gusts of wind is very disruptive. In general, bicycles with standard spoked wheels front and rear are the least affected by changing crosswinds, but bikes with solid wheels are fastest. A solid front wheel can increase the rider's pedaling wattage by $20\%$ to $30\%$, but it removes all the bicycle's self-centering effects, making for a difficult ride. A solid rear wheel has similar but milder effects [Cobb 2000].

Turning radius: This factor is important if the course contains many turns or if maneuvering in response to other riders is necessary. A greater turning radius loses time at each turn, where more maneuverable riders may overtake.

Rider comfort: The solid wheel also creates the problem of rider comfort. Because it has no spokes, it cannot flex to absorb shocks due to irregularities of the road surface. Towards the end of a long race, a rider's concentration may be impaired by the resulting discomfort, causing a drop in performance [Bicycle Encyclopedia 2000].
Very few cyclists ride with two solid wheels (1 out of 177 in the time trial that we studied), despite the aerodynamic superiority: steering problems negate the gain. Similarly, we have shown the superiority of the solid rear wheel in race conditions, yet in the long-distance $(150+~\mathrm{km})$ stages of the Tour de France, all riders use spoked wheels: solid wheels have insufficient maneuverability, and steering problems cause mental fatigue near the end of the stage.

Thus, the directeur sportif should consider the length of the race, the wind conditions, and the quality of the road surface in making the decision.

# Error Analysis

The error was determined by calculating the differential.

The wind direction may vary considerably over a given course due to local effects (e.g., buildings, trees). We chose an error of $10\%$ for $c_{fw}(\phi)$ and $c_{rw}(\phi)$.

Air density can usually be determined to a high accuracy but varies by location. Therefore, we chose a $1\%$ error, since the necessary equipment is unlikely to be available to the directeur sportif at the time of the race.

Using a wind tunnel, drag coefficients can be determined to a very high precision (to within $0.02~\mathrm{kg}$); but variables such as pedaling speed, wheel rotation, and rider posture introduce inaccuracies that cannot be easily determined. The main difficulty is that the flowfield about a rider in a wind tunnel is not ergodic, primarily as a result of airflow irregularities caused by pedaling. Therefore, we chose an error of $3\%$ for $c_r$ [Flanagan 1996].

For each rider/bicycle combination, the directeur sportif should determine the appropriate masses and dimensions. We assume that they can be determined to a high degree of accuracy ($\sim 0.1\%$), with the exception of the rider's cross section, for which we take an error of $2\%$. From the error analysis in the Appendix, we find

$$
\frac{\Delta v_{w}}{v_{w}} = 16\%.
$$

# Strengths and Weaknesses

Our model clearly and concisely states which wheel should be used under various conditions. The model is based on equations that are sufficiently versatile that further factors pertaining to a particular situation may be included as required, without any need to construct new equations.

We could not verify the model experimentally under conditions of the high power output of elite cyclists. However, in data available for various lesser power outputs, we could not discern any noticeable trend in the time differential between the different wheels with respect to increasing power.

Our model does not contain a quantitative analysis of the wind speeds at which the solid wheel is unacceptably unstable. However, this question depends largely on the abilities and preferences of each individual rider and requires detailed local information about road conditions and wind variability.

No data were available regarding the coefficients of drag in a tail wind. However, since the cyclist ($\sim 15~\mathrm{m/s}$) in general travels faster than the wind (race conditions $< 10~\mathrm{m/s}$), only the crosswind effects are important, and these are adequately handled in the tables.

The major weakness of the model is the assumption that the rider's speed does not differ much from the rider's average speed over the duration of the race. Moreover, we could find no evidence to support the problem's assumption that a rider reaches terminal velocity after $100~\mathrm{m}$ of a slope.

# Conclusion

- When there is a head wind, the spoked wheel is better. If the course is flat, the solid wheel is better for strong winds; however, the wind (particularly if gusting) may cause instability problems.
- When the wind is not weak ($>5~\mathrm{m/s}$) and strikes the wheel at an angle, the solid wheel is nearly always better. Even a small component of wind perpendicular to the rider direction makes the solid wheel the better choice.
- If the circuit has many tight turns or involves riding in close company with other cyclists, the solid wheel's lack of maneuverability dictates the spoked wheel; otherwise, the risk of an accident and injury is unacceptable.
- The region of superiority of the solid wheel increases with rider speed. Since the power required to overcome air resistance goes as the cube of the velocity, the aerodynamic savings of the solid wheel become more important at higher speed.

# References

Beer, Joe. 1999. Is a racing recumbent really faster than an aero trial bike or a quality road bike? *Cycling Plus* (January 1999): 19-22. http://www.necj.nj.nec.com/homepages/sandiway/bike/festina/cplus.html.

The Bicycle Encyclopedia. 2001. http://my.voyager.net/older/bcwebsite/test/w/wheelset.htm.

Chow, Chuen-Yen. 1979. *An Introduction to Computational Fluid Mechanics*. New York: Wiley.

CNN/Sports Illustrated Website. 2001. http://sportsillustrated.cnn.com/cycling/2000/tour_de_france/stages/1/.

Coast, J.R. 1996. What determines the optimal cadence? *Cycling Science* (Spring 1996). http://www.bsn.com/cycling/articles/.

Cobb, John. 2000. Steering stability explained. http://www.bicyclesports.com/technical/aerodynamics.

Cochran, W.G. 1934. *Proceedings of the Cambridge Philosophical Society* 30: 365ff.

Compton, Tom. 1998. Performance and wheel concepts. http://www.analyticycling.com.

Flanagan, Michael J. 1996. Considerations for data quality and uncertainty in testing of bicycle aerodynamics. *Cycling Science* (Fall 1996). http://www.bsn.com/cycling/articles.

Douglas, J.F. 1975. *Solutions of Problems in Fluid Mechanics*. Bath, England: Pitman Press.

_____, J.M. Gasiorek, and J.A. Swaffield. 1979. *Fluid Mechanics*. Bath, England: Pitman Press.

Giant Manufacturing Co., Ltd. 2001. http://www.giant-bicycle.com.

Hi-Tech Bikes. 2001. http://www.hi-techbikes.com/.

Hull, Wang, and Moore. 1996.
An empirical model for determining the radial force-deflection behavior of off-road bicycle tires. *Cycling Science* (Spring 1996). http://www.bsn.com/cycling/articles/.

The K-8 Aeronautics Internet Textbook. 2001. http://wings.ucdavis.edu/books/sports/instructor/bicycling.

von Kármán, Th. 1921. Über laminare und turbulente Reibung. *Zeitschrift für angewandte Mathematik und Mechanik* 1: 245ff.

Kaufman, W. 1963. *Fluid Mechanics*. New York: McGraw-Hill.

Kreyszig, Erwin. 1999. *Advanced Engineering Mathematics*. 8th ed. New York: Wiley.

The Official Tour de France Website. 2000. http://www.letour.com.

The ONCE Cycling Team Website. 2000. http://oncedb.deutsche-bank.es/.

Pediana, Paul. 2000. Aerowheels. http://www.diablocyclists/paul06001.htm.

Pivit, Rainer. 1990. Bicycles and aerodynamics. *Radfahren* 21: 40-44. http://www.lustaufzukunft.de/pivit/aero/formel.htm.

Rinard, Damon. Bicycle Tech Site. http://www.damonrinard.com.

Shimano American Corporation. 2001. http://www.shimano.com.

Tour de France 2000. 2000. The Irish Times Website (June 2000). http://www.ireland.com/sports/tdt/stages.

![](images/6d20c8e8f6ae8bb98df903b189bc4c4f1b2a33e3e4ce408c1c85c960c5d058a7.jpg)

Dr. Ann Watkins, President of the Mathematical Association of America, congratulating MAA Winners Eamonn Long, Michael Flynn, and William Whelan-Curtin, after they presented their model at MathFest in Madison, WI, in August. [Photo courtesy of Ruth Favro.]

# Author-Judge's Commentary: The Outstanding Bicycle Wheel Papers

Kelly Black

Visiting Associate Professor

Department of Mathematics and Statistics

Utah State University

Logan, UT 84341

kelly.black@unh.edu

# Introduction: The Problem

Professional bicycle racers have a wide variety of wheel types available to them. The types of wheels range from the familiar spoked wheels, to wheels with three or four blades, to solid wheels.
The spoked wheels have the lowest mass but the highest friction forces due to interaction with the air. The solid wheels have the most mass but the lowest friction forces. The question posed was to demonstrate a method to determine what kind of wheel to use for a given race course.

The problem focused on the two most basic types of wheels, the spoked wheel and the solid wheel. Three tasks were given:

- Find the wind speeds for which one wheel has an advantage over the other for particular inclines.
- Demonstrate how to use the information in the first task to determine which wheel to use for a specific course.
- Evaluate whether the information provided in the first task achieved the overall goal.

# General Remarks on the Solution Papers

As is the case each year, many fine papers were submitted. The papers were judged on both their technical merit and their presentation. The submissions in which both aspects were superior received the most attention. The problem required many assumptions; because of the severe time restrictions, it was extremely important to choose assumptions that simplified the problem without making it too simple to remain relevant.

For example, a number of submissions concentrated on the yaw angle, the angle that the wind makes with respect to the direction of movement of the bicycle. While some of these submissions were quite good, it appeared that others spent so much time trying to figure out how to deal with this complicated aspect that sufficient progress was not made on the other parts of the problem. Moreover, it was often difficult to read and interpret the resulting descriptions of the teams' efforts.

While the assumptions were important, it was also important in developing a mathematical model for this problem to stay consistent with the basic definitions of mechanics.
There were a number of entries in which Newton's Second Law, the torque equations, or the power was not correctly identified. There was also some confusion about units. Such difficulties represented a key division between the lower and higher rankings.

# Approaches

Overall, there were two different approaches:

- The first approach focused on the mechanics of a bicycle moving on an incline. The forces acting on the bicycle and rider were used to find the equations of motion from Newton's Second Law and the torque equations. The equations could then be used to isolate the force acting to move the bicycle forward.

The main difficulty with this approach was in isolating and identifying the relevant force based on the equations from Newton's Second Law and the corresponding torque equations. In many cases, it was difficult to identify exactly how the system of equations was manipulated and how the equations were found. The submissions in this category that were highly rated did an excellent job of displaying and referring to the free-body diagrams, as well as discussing how the relevant force was isolated by manipulating the system of equations.

- The second approach focused on the aerodynamic forces acting on the wheels and then calculated the power needed to move the wheels forward. For the spoked wheel, the total force acting on the wheel was found by adding the effects on each spoke (along its entire length) in its respective orientation. For the solid wheel, the forces acting on the whole wheel were found with respect to the wind yaw angle.

This second approach turned out to be a difficult one. In some cases, it was hard for the judges to identify the approach and what assumptions were being made. The submissions in this category were also more likely to concentrate on the yaw angle and its associated complications. The teams that carefully structured their approach and clearly identified each step stood out.

For either approach, there were different assumptions that could be made about the motion of the bicycle and rider. The most common approach was to make some assumption about either the acceleration or the steady-state velocity as the bike and rider moved along the hill. The second most common was to assume that the rider provided a constant power output and then work backwards to isolate the forces acting on the wheels. For the most part, the judges did not question the technical merits of these kinds of assumptions. The judges concentrated instead on whether or not the submissions presented a clear and consistent case based on the given assumptions.

# Fulfillment of the Tasks

There were many fine entries in which the first task (provide a table) was addressed. The first task was the most specific and straightforward part of the problem. The factor that set the entries apart was how the two remaining tasks (use the table in a time trial, determine if the table is adequate) were addressed and presented. The majority of submissions discussed the second task by dividing the race course into discrete pieces; the total power could then be found by adding up the power requirements over each piece. This part of the submissions often seemed to have received the least amount of attention by the different teams and was often the hardest part to read and interpret.

The analysis and qualitative comparisons within each submission were crucial in determining how a team's efforts were ranked. Many teams provided an adequate formulation for the first task in the problem but addressed the other tasks in a superficial manner. The real opportunity to express a deeper understanding of the problem and show some creativity lay in how the remaining aspects of the problem were approached.

The entries that most impressed the judges went further in their analysis.
In particular, a small number of entries approached the third task by noting that the real goal was to minimize the time spent on a particular race course. By assuming that the rider would expend a constant power output, the equations of motion from Newton's Second Law could then be found. The position of the rider on the course at any given time could then be approximated through a numerical integration of the resulting system of equations. For a given racecourse, the total time on the course could be found for different wheel configurations. A simple comparison of total times determined which wheel to use for the course.

The submissions that went beyond the stated problem and stayed true to the original goal received the most attention from the judges. Such papers showed creative and original thought, and they truly stood apart from the rest. Moreover, they showed the deepest understanding of the task at hand.

# About the Author

Kelly Black is visiting Utah State University and is on sabbatical leave from the University of New Hampshire. He received his undergraduate degree in Mathematics and Computer Science from Rose-Hulman Institute of Technology and his Master's and Ph.D. from the Applied Mathematics program at Brown University. His research is in scientific computing, with interests in computational fluid dynamics, laser simulations, and mathematical biology.

# Project H.E.R.O.: Hurricane Evacuation Route Optimization

Nathan Gossett

Barbara Hess

Michael Page

Bethel College

St. Paul, MN

Advisor: William M. Kinney

# Introduction

Through modeling and computer simulation, we established an evacuation plan for the coastal region of South Carolina in the event of an evacuation order.

We derive nine evacuation routes running from the coastal region inland. Based on geography, counties are given access to appropriate routes.
Combining flow theory with geographic, demographic, and time constraints, we formulate a maximum flow problem. Using linear optimization, we find a feasible solution. This solution serves as a basis for our evacuation model. The validity of the model is confirmed through computer simulation.

A total evacuation (1.1 million people) in 24 to 26 hours is possible only if all traffic is reversed on the nine evacuation routes.

# Terms and Definitions

Flow $F$ : the number of cars that pass a given point per unit time (cars per hour per lane, unless otherwise specified).

Speed $s$ : the rate of movement of a single car (mph, unless otherwise specified).

Density $k$ : the number of cars per unit length of roadway (cars per mile per lane, unless otherwise specified).

Headway distance $h_d$ : the space between the back of the leading car and the front of the immediately trailing car (ft). (Note: This is not a standard definition of headway distance.)

Headway time $h_t$ : the time required to travel the headway distance (s).

Car length $C$ : the length from front bumper to rear bumper of a single car (ft).

# Goals

Our first priority is to maximize the number of people who reach safety; in terms of our model, we must maximize the flow of the entire system. A secondary goal is to minimize the total travel time for evacuees; this means that we must maximize speed. As we establish, these goals are one and the same.

# Assumptions

- Vehicles hold 2 people on average. This seems reasonable, based on the percentage of the population who would be unable to drive themselves and those who would carpool.
- Vehicles average 17 ft in length. This is based on a generous average following a quick survey of car manufacturers' Web sites.
- Vehicles have an average headway time of 3 s. This is based on numbers for driver reaction time, found in various driving manuals.
- 50 mph is a safe driving speed.
- Merging of traffic does not significantly affect our model.
See the Appendix for justification.
- Highways 26, 76/328, and 501 are 4-lane [Rand McNally 1998].
- Safety is defined as $50\mathrm{mi}$ from the nearest coastal point. Counties that lie beyond this point will not be evacuated [SCAN21 2001].
- Only the following counties need to be evacuated: Allendale, Beaufort, Berkeley, Charleston, Colleton, Dorchester, Georgetown, Hampton, Horry, Jasper, Marion, Williamsburg, and a minimal part of Florence County (based on the previous assumption).
- Myrtle Beach will not be at its full tourist population during a hurricane warning. This seems reasonable because tourists do not like imminent bad weather.
- The evacuation order will be given at least 24 to $26\mathrm{h}$ prior to the arrival of a hurricane. This is based on the timeline of the 1999 evacuation [Intergraph 2001].
- Boats, trailers, and other large vehicles will be restricted from entering the main evacuation routes. Being able to evacuate people should have a higher priority than evacuating property.
- If we can get everyone on a road within $24\mathrm{h}$ and keep traffic moving at a reasonable speed, everyone should be in a safe zone within 25 to $26\mathrm{h}$ . This is based on our assumption of what a safe zone is and our assumption of average speed.

# Developing the Model

# Abstracted Flow Modeling

Upon inspecting the evacuation route map, we decided that there are only nine evacuation routes. There appear to be more, but many are interconnected and in fact merge at some point. By identifying all bottlenecks, we separated out the discrete paths.

Using this nine-path map in combination with the county map, we constructed an abstracted flow model with nodes for each county, merge point, and destination, so as to translate our model into a form for computer use.

# A Brief Discussion of Flow

The flow $F$ is equal to the product of density and speed: $F = ks$ [Winston 1994].
We can find the density $k$ of cars per mile by dividing $1\mathrm{mi} = 5,280$ ft by the sum $C + h_{d}$ , the length of a car plus headway distance (in ft), so

$$
F = k s = \frac {5280 s}{C + h _ {d}}.
$$

Using the fact that headway distance $h_d$ is speed $s$ (ft/s) times headway time $h_t$ (sec), we have

$$
F = \frac {5280 s}{C + s h _ {t}} = \frac {5280}{\frac {C}{s} + h _ {t}}.
$$

Increasing $s$ increases $F$ . This result is exciting because it shows that maximizing flow is the same as maximizing speed. The graph of $F$ versus $s$ gives even more insight (Figure 1). Increases in speed past a certain point benefit $F$ less and less. So we might sacrifice parts of our model to increase low speeds but not necessarily to increase high speeds.

![](images/5dd4d1c01ce21de8000394a830c8c6b4b943c4eb267e5e6f7210cb2f6f281f82.jpg)
Figure 1. Flow vs. speed.

According to our assumptions, we have $C = 17$ ft and $h_t = 3$ sec, and converting to units of miles and hours, we get

$$
F = \frac {1}{\frac {17}{5280 s} + \frac {1}{1200}},
$$

or

$$
s = \frac {17}{5280} \cdot \frac {1200 F}{1200 - F}. \tag {1}
$$

At our assumed maximum safe speed of $s = 50 \, \mathrm{mph}$ , we have $F = 1114 \, \mathrm{cars/h}$ .

# Determining Bounds

We combined our knowledge of county populations with the 24-h deadline and generated a minimum output flow for each county. We also determined the maximum flow for each node-to-node segment, based on the number of lanes. It would be unrealistic to assume that each segment would reach optimal flow, so we set maximum flow at $90\%$ of optimal flow. This reduction in flow is meant to cover problems that arise from accidents, slow drivers, less than ideal merging conditions, or other unexpected road conditions. Putting $F = 0.9 F_{\mathrm{opt}} = (0.9)(1113.92)$ into (1), we find $s \approx 19.6 \mathrm{mph}$ . We decided that this is an acceptable minimal speed.
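These relations are easy to check numerically. The sketch below is ours, not the authors' (the function names `flow` and `speed` are our own); it uses the paper's assumed $C = 17$ ft and $h_t = 3$ s and reproduces both figures above.

```python
# Flow-speed relation F = 1 / (C/(5280 s) + h_t/3600) and its inverse,
# with C = 17 ft and h_t = 3 s as assumed in the paper.

def flow(s_mph, car_len_ft=17.0, headway_s=3.0):
    """Flow in cars per hour per lane at speed s_mph."""
    return 1.0 / (car_len_ft / (5280.0 * s_mph) + headway_s / 3600.0)

def speed(F, car_len_ft=17.0, headway_s=3.0):
    """Speed (mph) that sustains a flow of F cars per hour per lane."""
    per_hr = 3600.0 / headway_s  # 1200 headway times per hour when h_t = 3 s
    return (car_len_ft / 5280.0) * per_hr * F / (per_hr - F)

print(round(flow(50)))                  # about 1114 cars/h at 50 mph
print(round(speed(0.9 * flow(50)), 1))  # about 19.6 mph at 90% of that flow
```

The second call reproduces the minimal-speed calculation: dropping to $90\%$ of optimal flow costs more than half the speed, which is the diminishing-returns behavior visible in Figure 1.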

# Finding a Feasible Solution

We used the linear-optimization package LINDO to find a feasible solution; the solution takes $26\mathrm{h}$ . Since this scenario does not take into account geographical convenience, we did some minor hand-tweaking. The final product is in the Appendix.

# The Simulation

To confirm the feasibility of our model, we conducted a computer simulation using Arena simulation software. The simulation encompassed $24\mathrm{h}$ of traffic flow on the nine evacuation routes assuming $90\%$ flow efficiency. The model assumed that there was an unlimited number of vehicles ready to enter the simulation in all counties. The time headway between entering vehicles was taken to be normally distributed with a mean of 3 s and a standard deviation of 1 s. The simulation verified our model.

# Implementation Requirements

For optimal performance of our model:

- Evacuees must follow the evacuation routes. The State of South Carolina should notify specific communities or households which route to take.
- Flow must be monitored on all evacuation routes; this requires metering entry of evacuees onto the evacuation routes. Allowing vehicles to enter an evacuation route too quickly may result in congestion at bottlenecks.
- Advance notification that there will be ticketing by photograph could enforce the restriction on towing boats and trailers, which might otherwise be ignored.

# Applying the Model

# Requirement 1

If an evacuation order included both Charleston and Dorchester counties $24\mathrm{~h}$ prior to the predicted arrival of a hurricane, it would be necessary to reverse all four lanes of I-26 to ensure the evacuation of the entire population of the two counties. In our simulation runs, all of the exit routes from Charleston and Dorchester ran at full capacity (all lanes reversed, $90\%$ of maximum flow) for $24\mathrm{~h}$ to evacuate the counties completely.
If the lanes are not reversed, it is doubtful that the two counties could evacuate in a timely fashion.

# Requirement 2

To optimize use of the available road capacity while ensuring that the entire population is displaced inland within $24\mathrm{h}$ , we opted for a simultaneous evacuation strategy: All counties begin evacuation at the same time.

Since hurricanes typically arrive in South Carolina moving northward, a staggered strategy would evacuate southernmost counties first. Our model has discrete evacuation routes servicing each part of the coastline, so it is not necessary to stagger evacuation. For example, since Beaufort County, which would be among the first counties to be hit in the case of a hurricane, and Horry County, which would be hit significantly later, do not depend on the same evacuation route, nothing is gained by delaying the evacuation of Horry County until Beaufort County has cleared out.

# Requirement 3

To evacuate the entire coastal region within $24\mathrm{h}$ , we found it necessary to turn around traffic on all designated evacuation routes. With greater time allowance, not all routes would need to be turned around.

# Requirement 4

Our model directs approximately 480,000 evacuees to Columbia. This surge entering a city of 500,000 would undoubtedly disrupt traffic flow. While three major interstates head farther inland from Columbia and could easily accommodate the traffic from the coast, the extreme congestion within the city would disrupt the flow coming into Columbia from the coast. It would be best to set up temporary shelters around the outskirts of Columbia (and at other destination sites) to avoid having too many people vying for space within Columbia itself.

# Requirement 5

Because heavy vehicles take up more road space and generally require a greater headway time, they adversely affect our model.
Heavy vehicles are allowable if they are the only available means of transportation, as may be the case for tourists in recreational vehicles. Boats and trailers are strictly forbidden on the evacuation routes. A rule of one car per household can be announced, but the model can probably handle up to two cars per household. Our assumption of two people per car can still hold up, given the number of people who are unable to drive themselves to safety.

# Requirement 6

With the flow and time constraints defined within our model, the entrance of large numbers of additional evacuees onto the designated evacuation routes from I-95 would cause serious disruptions. The traffic on I-95 coming from Georgia and Florida may turn west onto any nonevacuation roadway. Ideally, better evacuation routes could be established within Georgia and Florida to minimize their evacuees entering South Carolina.

# Limitations

Time forced us to simplify our model. Here are extensions that we would have liked to include:

Factor in the "first-hour" effect. The western counties could potentially have full use of the evacuation routes for a limited time at the very beginning of the evacuation. The population of the eastern counties would take time to reach the western counties; but once they did, both groups would have to share the route.

Do more work with the impact of large vehicles.

Explore headway time in greater detail. We know that headway time is not dependent on speed in an ideal world, but does human psychology make headway time dependent on speed? Also, although we assume that all vehicles exhibit the same stopping pattern, we know that car size and brake condition have an effect. We would like to find a more accurate concept of headway time.

Inspect all nine routes on-site. We assume that any road listed by the SCDOT as a hurricane evacuation route is well maintained and appropriate for that use, and that these are the only appropriate routes.

Expand the complexity of our model. We kept the number of paths to a manageable level, but it would be nice to factor in the smaller routes.

Develop mechanisms to implement our model. This would involve planning out a block-by-block timetable, metering techniques, merging techniques, traffic reversal techniques, and large-vehicle restriction techniques.

Add accidents, breakdowns, and other problems to the model specifically, rather than just lumping them into "efficiency."

Study the potential costs of complete traffic reversal.

Study the population fluctuations of Myrtle Beach, so that we would know how many tourists to expect in the event of a hurricane.

# Authorities Fear Floyd Repeat, Enlist Help of Undergraduates

ARDEN HILLS, MN, FEB. 12—Responding to complaints over the disorder of South Carolina's 1999 coastal evacuation in preparation for Hurricane Floyd, authorities enlisted the aid of three undergraduates from Bethel College. The task set before the three young mathematicians was to plan an orderly and timely evacuation of South Carolina's coastal region.

A denial of funding for travel expenses prevented the students from making an on-site inspection, but they managed to get a feel for the territory based on maps and census reports. Using a technique they dubbed "Abstracted Flow Modeling," the team constructed several computer models of what a full coastal evacuation would involve. Using all the tools available to them and a little human intuition, the trio created a 26-hour scenario for the full evacuation of more than 12 counties.

Such an evacuation would involve all people in the area being divided among 9 separate evacuation routes and released in a timed fashion. Using the timings and routings suggested by the Bethel team, the entire coast could be evacuated within a reasonable time and the travel time for individual vehicles could be kept to a minimum.

Compared to the 1999 evacuation, when fewer than 800,000 people were evacuated, the Bethel model can evacuate more than 1.1 million people. Much of this increase can be attributed to gridlock prevention, lane doubling, and access restriction to the main highways.

Concerned citizens should be on the lookout for announcements concerning route assignments and departure timings for their neighborhoods.

— Nathan Gossett, Barbara Hess, and Michael Page in Arden Hills, MN

# Appendix

# Abstracted Flow Model

We created the flow model in Figure A1 to represent what we perceived as 9 routes from the coast of South Carolina to the interior of the state. Rectangles represent locations and ovals represent junctions. The arrows represent flow direction. The flow assigned to various segments can be found in Table A1.

Table A1. Flow rates by route and county.

| County | Population (thousands) | Junction | Flow (cars/min) |
|---|---|---|---|
| Jasper | 17 | 1a | 5.9 |
| Hampton | 19 | 1b | 6.6 |
| Allendale | 11 | 1c | 3.8 |
| Beaufort | 113 | 1a | 17.1 |
| | | 2a | 22 |
| Colleton | 38 | 2a | 4.4 |
| | | 2b | 4.4 |
| | | 3 | 4.4 |
| Charleston | 320 | 2b | 2.6 |
| | | 3 | 29 |
| | | 4 | 17.6 |
| | | 26 | 51 |
| | | 5 | 5.5 |
| | | 6a | 5.5 |
| Dorchester | 91 | 4 | 15.8 |
| | | 26 | 15.8 |
| Berkeley | 142 | 5 | 27.4 |
| | | 6a | 21.4 |
| Georgetown | 55 | 6a | 3 |
| | | 6b | 16.1 |
| Williamsburg | 37 | 6a | 3.5 |
| | | 6b | 9.3 |
| Florence | 125 | 6b | 4 |
| | | 7b | 4 |
| Horry | 179 | 6b | 4 |
| | | 7 | 51 |
| | | 8 | 33.4 |
| Marion | 34 | 7b | 5.9 |
| | | 7a | 5.9 |
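The flows in Table A1 are consistent with the bounds described earlier: each county's total outflow is roughly its population divided by 2 people per car and by the 1,440 minutes available before the deadline. A minimal sketch of that check (ours; `required_flow` is a hypothetical helper, not from the paper):

```python
# Minimum outflow (cars/min) needed to evacuate a county within 24 h,
# assuming 2 people per vehicle as in the paper's assumptions.

def required_flow(pop_thousands, people_per_car=2, deadline_min=24 * 60):
    cars = pop_thousands * 1000 / people_per_car
    return cars / deadline_min

print(round(required_flow(17), 1))   # Jasper: 5.9 cars/min, its junction 1a flow
print(round(required_flow(320), 1))  # Charleston: 111.1, near its 111.2 total
```

Horry County is the main exception: its assigned flows total well above this minimum, presumably to cover the Myrtle Beach tourist population.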

# Merge Considerations

We do not want congestion at merges, so we maintain a constant traffic density in a "merge zone." Let $A$ and $B$ be the pre-merge flows and $C$ be the post-merge outflow. Then we must have $C = A + B$ to avoid congestion.

![](images/601671e0747d16d27cc36cfee1d844787ad5fb0518e47f823d598a9c144c26c6.jpg)
Figure A1. Abstracted flow model.

We must regulate the pre-merge flows to prevent congestion and allow for maximum post-merge flow.

If the density of merging traffic is large enough, we set aside one or two lanes for merging for about a mile prior to the merge point. We investigate whether this is possible without lowering the pre-merge flows $A$ and $B$ , and when this strategy is physically possible and beneficial.

Does shifting main-road traffic left to open a merge lane (or lanes) increase the main road's flow (before merge traffic is added) and thus increase its contribution to post-merge flow? If so, then the post-merge flow would be

$$
(A + \text{increase}) + B = (C + \text{increase}).
$$

But we fixed $C = A + B$ as the maximum flow, so $A$ or $B$ or both must decrease. Is this reduction really a concern?

Recall that $F = ks$ , where $F$ is flow, $k$ is density, and $s$ is speed. Let $N$ be the number of cars on a one-mile stretch and $L$ be the number of lanes in the direction of concern. The total flow of the road is the product of the lane flow and the number of lanes:

$$
L k s = L (N / L) s = N s.
$$

To shift one lane left, we must move $N / L$ vehicles to $L - 1$ lanes, adding $\frac{N / L}{L - 1}$ vehicles to each non-merge lane, thus giving a new lane population of

$$
\frac {N}{L} + \frac {N / L}{L - 1} = \frac {N}{L} \cdot \frac {L}{L - 1}.
$$

So the new flow is

$$
(L - 1) \left(\frac {N}{L} \cdot \frac {L}{L - 1}\right) s = N s.
$$

No change!
We do not have to reduce pre-merge flows to add a merge lane (assuming that the merge lane modification is physically possible). The same argument confirms that pre-merging works for shifting two lanes.

When are these shifts physically possible? Let $m = N / L$ , and let $q$ be a lane's carrying capacity per mile, which is a function of $s$ for fixed $h_t$ . Then for clearing one merge lane to be physically possible, we need

$$
\frac {m}{L - 1} \leq q - m.
$$

If two lanes need to be shifted left, the same process yields the requirement

$$
\frac {2 m}{L - 2} \leq q - m.
$$

Moving everything to the right side of the inequalities gives two functions that are greater than or equal to 0, each a function of $L$ and $m$ . We fix $L = 2, 3,$ and 4 lanes and graph the functions to see when they are within the constraints.

Figure A2 shows valid and invalid values of $m$ given four lanes (two lines for the two possible shifts, single or double). We have $q = 309.83$ , the carrying capacity for a mile-long lane with $s = 50$ mph and $h_t = 3$ s. The function must lie on or above the horizontal axis for creation of a merge lane.

![](images/9646b8f275f06b012f7931384223833ca0e597b6b41873f0dec12ba4d7395f92.jpg)
Figure A2. Feasibility functions for one lane (thin line) and two lanes (thick line), as a function of the number of cars per mile ( $m = N / L$ ).

The merge patrol officer can determine $m$ by multiplying the number of cars counted in one minute by $6 / 5$ (at 50 mph, a car covers a mile in $5/6$ min). If the ratio of merging traffic to main-road traffic is higher than $1:20$ , there may be enough disruption that merge efficiency could benefit from a merge lane or lanes. With this ratio, and $90\%$ flow, there would be one car merging every minute.

# References

Gerlough, Daniel L., and Matthew J. Huber. 1975. Traffic Flow Theory. Transportation Research Board.
Intergraph. 2001. www.intergraph.com/geoinfo/scdot.asp. Accessed 9 February 2001.
Pressley, Sue Anne. 1999.
As Hurricane Floyd approaches, record number of people moved. Washington Post 119, No. 42 (19 September 1999). Posted at http://tech.mit.edu/V119/N42/floyd.42w.html. Accessed 9 February 2001.
Rand McNally & Company. 1998. United States Road Atlas. Skokie, IL: Rand McNally & Company.
Scan21. 2001. http://www.scan21.com/hurricane_main.html. Accessed 9 February 2001.
US Census. 2001. http://www.census.gov/population/estimatescounty/co-99-1/99C1_45.txt. Accessed 9 February 2001.

US Department of Transportation. 2001. http://www.dot.state.sc.us. Accessed 9 February 2001.

Winston, Wayne L. 1994. Operations Research. Duxbury Press.

# Traffic Flow Models and the Evacuation Problem

Samuel W. Malone

Carl A. Miller

Daniel B. Neill

Duke University

Durham, NC

Advisor: David P. Kraines

# Introduction

We consider several models for traffic flow. A steady-state model employs a model for car-following distance to derive the traffic-flow rate in terms of empirically estimated driving parameters. We go on to derive a formula for total evacuation time as a function of the number of cars to be evacuated.

The steady-state model does not take into account variance in the speeds of vehicles. To address this problem, we develop a cellular automata model for traffic flow in one and two lanes and augment its results through simulation.

After presenting the steady-state model and the cellular automata models, we derive a space-speed curve that synthesizes results from both.

We address restricting vehicle types by analyzing vehicle speed variance. To assess traffic merging, we investigate how congestion occurs.

We bring the collective theory of our assorted models to bear on five evacuation strategies.

# Assumptions

- Driver reaction time is approximately 1 sec.
- Drivers tend to maintain a safe distance; tailgating is unusual.
- All cars are approximately 10 ft long and 5 ft wide.
- Almost all cars on the road are headed to the same destination.

# Terms

Density $d$ : the number of cars per unit distance.

Occupancy $n$ : the proportion of the road covered by cars.

Flow $q$ : the number of cars per time unit that pass a given point.

Separation distance $s$ : the average distance between midpoints of successive cars.

Speed $v$ : the average steady-state speed of cars.

Travel time: how long a given car spends on the road during evacuation.

Total travel time: the time until the last car reaches safety.

# The Steady-State Model

# Development

Car-following is described successfully by mathematical models; following Rothery [1992, 4-1], we model the average separation distance $s$ as a function of common speed $v$ :

$$
s = \alpha + \beta v + \gamma v ^ {2}, \tag {1}
$$

where $\alpha$ , $\beta$ , and $\gamma$ have the physical interpretations:

$$
\alpha = \text{the effective vehicle length } L,
$$

$$
\beta = \text{the reaction time, and}
$$

$$
\gamma = \text{the reciprocal of twice the maximum average deceleration.}
$$

This relationship allows us to obtain the optimal value of traffic density (and speed) that maximizes flow.

Theorem. For $q = kv$ (the fundamental equation for traffic flow) and (1), traffic flow $q$ is maximized at

$$
q ^ {*} = (\beta + 2 \gamma^ {1 / 2} L ^ {1 / 2}) ^ {- 1}, \qquad v ^ {*} = (L / \gamma) ^ {1 / 2}, \qquad k ^ {*} = \frac {\beta (\gamma / L) ^ {1 / 2} - 2 \gamma}{\beta^ {2} - 4 \gamma L}.
$$

Proof: Consider $N$ identical vehicles, each of length $L$ , traveling at a steady-state speed $v$ with separation distance given by (1). If we take a freeze-frame picture of these vehicles spaced over a distance $D$ , the relation $D = NL + Ns'$ must hold, where $s'$ is the bumper-to-bumper separation. Since $s' = s - L$ , we obtain $k = N / D = N / (NL + Ns') = 1 / (L + s') = 1 / s$ .
We invoke (1) to get

$$
k = \frac {1}{\alpha + \beta v + \gamma v ^ {2}}.
$$

This is a quadratic equation in $v$ ; taking the positive root yields

$$
v (k) = \frac {1}{2 \gamma} \sqrt {\frac {4 \gamma}{k} + (\beta^ {2} - 4 \gamma L)} - \frac {\beta}{2 \gamma}.
$$

Applying $q = kv$ , we have

$$
q (k) = \frac {k}{2 \gamma} \sqrt {\frac {4 \gamma}{k} + (\beta^ {2} - 4 \gamma L)} - \frac {k \beta}{2 \gamma}.
$$

Differentiating with respect to $k$ , setting the result equal to zero, and wading through algebra yields the optimal values given.

# Interpretation and Uses

We can estimate $q^{*}$ , $k^{*}$ , and $v^{*}$ from assumptions regarding car length ( $L$ ), reaction time ( $\beta$ ), and the deceleration parameter ( $\gamma$ ). If we let $L = 10\ \mathrm{ft}$ , $\beta = 1\ \mathrm{s}$ , and $\gamma \approx .023\ \mathrm{s}^2/\mathrm{ft}$ (a typical value [Rothery 1992]), we obtain

$$
q ^ {*} = 0.510\ \mathrm{cars/s}, \quad v ^ {*} = 20.85\ \mathrm{ft/s}, \quad k ^ {*} = 0.024\ \mathrm{cars/ft}.
$$

A less conservative estimate for $\gamma$ is $\gamma = \frac{1}{2} (a_f^{-1} - a_l^{-1})$ , where $a_{f}$ and $a_{l}$ are the average maximum decelerations of the following and lead vehicles [Rothery 1992]. We assume that instead of being able to stop instantaneously (infinite deceleration capacity), the leading car has deceleration capacity twice that of the following car. Thus, instead of $\gamma = 1 / (2a) = .023\ \mathrm{s}^2/\mathrm{ft}$ , we use the implied value for $a$ to compute $\gamma' = \frac{1}{2}\left(a^{-1} - (2a)^{-1}\right) = \frac{1}{2}\gamma = 0.0115\ \mathrm{s}^2/\mathrm{ft}$ and get

$$
q ^ {*} = 0.596\ \mathrm{cars/s}, \quad v ^ {*} = 29.5\ \mathrm{ft/s} \approx 20\ \mathrm{mph}, \quad k ^ {*} = .020\ \mathrm{cars/ft}.
$$

Going $20\ \mathrm{mph}$ in high-density traffic with a bumper-to-bumper separation of 40 ft is not bad.

The 1999 evacuation was far from optimal. Taking $18\ \mathrm{h}$ for the 120-mi trip from Charleston to Columbia implies an average speed of $7\ \mathrm{mph}$ and a bumper-to-bumper separation of 7 ft.

# Limitations of the Steady-State Model

The steady-state model does not take into account the variance of cars' speeds. Dense traffic is especially susceptible to overcompensating or undercompensating for the movements of other drivers.

A second weakness is that the value for maximum flow gives only a first-order approximation of the minimum evacuation time. Determining maximum flow is distinct from determining minimum evacuation time.

# Minimizing Evacuation Time with the Steady-State Model

# Initial Considerations

The goal is to keep evacuation time to a minimum, but the evacuation route must be as safe as possible under the circumstances. How long on average it takes a driver to get to safety (Columbia) is related to minimizing total evacuation time but is not equivalent to it.

# A General Performance Measure

A metric $M$ that takes into account both maximizing traffic flow and minimizing individual transit time $T$ is

$$
M = W \frac {N}{l q} + (1 - W) \frac {D}{v},
$$

where $0 \leq W \leq 1$ is a weight factor, $D$ is the distance to traverse, $l$ is the number of lanes, and $N$ is the number of cars to evacuate. This metric assumes that the interaction between lanes of traffic (passing) is negligible, so that total flow is that of an individual lane times the number of lanes. Given $W$ , minimizing $M$ amounts to solving a one-variable optimization problem in either $v$ or $k$ . Setting $W = 1$ corresponds to maximizing flow, as in the preceding section.
Setting $W = 0$ corresponds to maximizing speed, subject to the constraint $v \leq v_{\mathrm{cruise}}$ , the preferred cruising speed; this problem has solution $M = D / v_{\mathrm{cruise}}$ . The model does not apply when cars can travel at $v_{\mathrm{cruise}}$ .

Setting $W = 1 / 2$ corresponds to minimizing the total evacuation time

$$
\frac {N}{l q} + \frac {D}{v}.
$$

The evacuation time is the time $D / v$ for the first car to travel distance $D$ plus the time $N / lq$ for the $N$ cars to flow by the endpoint.

To illustrate that maximizing traffic flow and maximizing speed are out of sync, we calculate the highest value of $W$ for which minimizing $M$ would result in an equilibrium speed of $v_{\mathrm{cruise}}$ . This requires a formula for the equilibrium value $v^{*}$ that solves the problem

$$
\text{minimize } M (v) = W \frac {N (L + \beta v + \gamma v ^ {2})}{l v} + (1 - W) \frac {D}{v}
$$

$$
\text{subject to } 0 < v \leq v _ {\mathrm{cruise}}.
$$

The formula for $M(v)$ comes from (1), $q = kv$ , and $k = 1 / s$ . Differentiating with respect to $v$ , setting the result equal to zero, and solving for speed yields

$$
v ^ {*} = \min \left\{v _ {\mathrm{cruise}}, \sqrt {\frac {1}{\gamma} \left[ L + \frac {(1 - W)}{W} \cdot \frac {D l}{N} \right]} \right\}.
$$

For $v_{\mathrm{cruise}}$ to equal the square root, we need

$$
W = \left(1 + \frac {N}{D l} \left(v _ {\mathrm{cruise}} ^ {2} \gamma - L\right)\right) ^ {- 1}.
$$

Using $N = 160{,}000$ cars, $D = 633{,}600$ ft (120 mi), $l = 2$ lanes, $v_{\mathrm{cruise}} = 60$ mph $= 88$ ft/s, $\gamma = .0115$ s²/ft, and $L = 10$ ft, we obtain $W \approx 1 / 11$ . Thus, minimizing evacuation time in situations involving heavy traffic flow is incompatible with allowing drivers to travel at cruise speed with a safe stopping distance.
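These formulas can be evaluated directly. The sketch below is our own, not the authors' code; it computes the steady-state optima, the threshold value of $W$, and the $W = 1/2$ evacuation time for the parameters used in this section, reproducing $W \approx 1/11$ and giving roughly 42 h for two lanes and 23 h for four.

```python
from math import sqrt

# Parameters used in this section: L = 10 ft, beta = 1 s,
# gamma = 0.0115 s^2/ft, v_cruise = 88 ft/s.
L, BETA, GAMMA, V_CRUISE = 10.0, 1.0, 0.0115, 88.0

def optimum(gamma):
    """q*, v*, k* from the steady-state theorem (cars/s, ft/s, cars/ft)."""
    v = sqrt(L / gamma)
    q = 1.0 / (BETA + 2.0 * sqrt(gamma * L))
    return q, v, q / v   # k* = q*/v* = 1/s(v*)

def w_threshold(N, D, l):
    """Largest W for which the constrained optimum is still v_cruise."""
    return 1.0 / (1.0 + N / (D * l) * (V_CRUISE ** 2 * GAMMA - L))

def evac_time_h(N, D, l):
    """Evacuation time N/(l q) + D/v at the W = 1/2 optimum, in hours.

    Uses v* with (1 - W)/W = 1; no v_cruise clamp is needed here,
    since v* stays well below 88 ft/s for these parameter values.
    """
    v = sqrt((L + D * l / N) / GAMMA)
    q = v / (L + BETA * v + GAMMA * v * v)   # q = k v = v / s(v)
    return (N / (l * q) + D / v) / 3600.0
```

With $N = 160{,}000$ and $D = 633{,}600$ ft, `w_threshold(160000, 633600, 2)` gives about 0.091, and `optimum(0.0115)` returns the $q^* = 0.596$, $v^* = 29.5$, $k^* = 0.020$ figures quoted earlier.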
# Computing Minimum Evacuation Time

From the fact that $T = 2M$ when $W = 1/2$ , we obtain

$$
q ^ {*} = k ^ {*} v ^ {*}, \quad v ^ {*} = \sqrt {\frac {1}{\gamma} [ L + D l / N ]},
$$

$$
k ^ {*} = \frac {\beta \gamma^ {1 / 2} [ L + D l / N ] ^ {- 1 / 2} - 2 \gamma \frac {[ L + \frac {1}{2} D l / N ]}{[ L + D l / N ]}}{[ \beta^ {2} - 4 \gamma L ] - \gamma \frac {[ D l / N ] ^ {2}}{[ L + D l / N ]}}.
$$

The minimum evacuation time is

$$
T ^ {*} = \frac {N}{l q ^ {*}} + \frac {D}{v ^ {*}}.
$$

# Predictions of the Steady-State Model

For $N$ large, minimizing evacuation time is essentially equivalent to maximizing flow (Figure 1), and it can be shown analytically that

$$
\lim_{N \to \infty} \frac{T_{\mathrm{flow}}(N)}{T_{\mathrm{min}}(N)} = 1.
$$

The predicted evacuation time of $40\mathrm{h}$ for $N = 160,000$ seems reasonable. We can evaluate the impact of converting I-26 to four lanes by setting $l = 4$ in the equation for minimum evacuation time, yielding $T \approx 23\mathrm{h}$ . Within the steady-state model, this prediction makes sense; the model, however, does not deal with the effect of the bottleneck that will occur when Columbia is swamped by evacuees, a bottleneck that would be compounded by using four lanes instead of two. On balance, doubling the number of lanes would still lead to a net decrease in evacuation time.

![](images/b40d82d314e643bb8488c3168e3ffd5341c45d492c4eeaad346a1656c5fee5d3.jpg)
Figure 1. Comparison of minimum evacuation time (lower line) and maximum flow evacuation time (upper line).

# One-Dimensional Cellular Automata Model

# Development

In heavy traffic, cars make repeated stops and starts, with somewhat arbitrary timing; a good model of heavy traffic should take this randomness into account but also be simple enough to give an explicit formula for speed.

We divide a single-lane road into cells of equal length. A cell contains one car or no car.
A car is blocked if the cell directly in front of it is occupied. At each time state, cars move according to the following rules:

- A blocked car does not move.
- If a car is not blocked, it advances to the next cell with probability $p$ .

The decisions of drivers to move forward are made independently.

A traffic configuration can be represented by a function $f \colon \mathbb{Z} \to \{0,1\}$ , where $f(k) = 1$ if cell $k$ contains a car and $f(k) = 0$ if not. Probability distributions on the set of all such functions are called binary processes.

Given a process $X$ , define a process $I_{p}(X)$ according to the following rule:

If $\left(X(i),X(i + 1)\right) = (1,0)$

then $\left(I_p(X)(i),I_p(X)(i + 1)\right) = \left\{ \begin{array}{ll}(0,1), & \text{with probability } p;\\ (1,0), & \text{with probability } 1 - p. \end{array} \right.$

This rule is identical to the traffic flow rule given above: If $X$ represents the traffic configuration at time $t$ , $I_{p}(X)$ gives the traffic configuration at time $t + 1$ .

We are interested in what the traffic configuration looks like after several iterations of $I$ . Let $I_{p}^{n}(X)$ mean $I_{p}$ applied $n$ times to $X$ . The formula for traffic speed in terms of density comes from the following theorem:

Theorem. Suppose that $X$ is a binary process of density $d$ . Let

$$
r = \frac {1 - \sqrt {1 - 4 d (1 - d) p}}{2 p d}
$$

and let $\mathcal{M}_{p,d}$ denote the (spatial) Markov chain with transition probabilities

$$
0 \longrightarrow \left\{ \begin{array}{l l} 0, & \text{w/prob. } 1 - \frac{d r}{1 - d}; \\ 1, & \text{w/prob. } \frac{d r}{1 - d}; \end{array} \right. \qquad 1 \longrightarrow \left\{ \begin{array}{l l} 0, & \text{w/prob. } r; \\ 1, & \text{w/prob. } 1 - r. \end{array} \right.
$$

(The stationary frequency of 1s under these transitions is $d$ .) The sequence of processes $X, I_p(X), I_p^2(X), I_p^3(X), \ldots$ converges to $\mathcal{M}_{p,d}$ .

Here "density" means the frequency with which 1s appear, analogous to the average number of cars per cell.
This theorem tells what the traffic configuration looks like after a long period of time.

Knowing the transition probabilities allows us to compute easily the average speed of the cars in $\mathcal{M}_{p,d}$ : the average speed is the likelihood that a randomly chosen car is not blocked and advances to the next cell at the next time state:

$$
\begin{aligned}
v &= \Pr\left[ I\left(\mathcal{M}_{p,d}\right)(i) = 0 \mid \mathcal{M}_{p,d}(i) = 1 \right] \\
&= \Pr\left[ \mathcal{M}_{p,d}(i + 1) = 0 \mid \mathcal{M}_{p,d}(i) = 1 \right] \cdot \Pr\left[ I\left(\mathcal{M}_{p,d}\right)(i) = 0 \mid \mathcal{M}_{p,d}(i) = 1 \text{ and } \mathcal{M}_{p,d}(i + 1) = 0 \right] \\
&= r p = \left(\frac{1 - \sqrt{1 - 4 d (1 - d) p}}{2 p d}\right) p = \frac{1 - \sqrt{1 - 4 d (1 - d) p}}{2 d}. \qquad (2)
\end{aligned}
$$

The model does not accurately simulate high-speed traffic and does not take into account following distance, and the stop-and-start model of car movement is not accurate when traffic is sparse. The model is best for slow traffic (under 15 mph) with frequent stops.

# Low Speeds

We must set three parameters:

$\Delta x =$ the size of one cell, the space taken up by a car in a tight traffic jam; we set it to 15 ft, slightly longer than most cars.

$\Delta t =$ the length of one time interval, the shortest time needed by a driver to move into the space in front; we take this to be $0.5 \mathrm{~s}$ .

$p =$ the movement probability, representing the proportion of drivers who move at close to the overall traffic speed; we let $p = 0.85$ .

In (2) we insert the factor $(\Delta x / \Delta t)$ to convert from cells/time-state to ft/s:

$$
v = \left(\frac {\Delta x}{\Delta t}\right) \frac {1 - \sqrt {1 - 4 d (1 - d) p}}{2 d}.
$$

Density $d$ is in cars per cell, related to occupancy $n$ by $d = (15\mathrm{~ft} / 10\mathrm{~ft})\times n = 3n / 2$ .
Table 1 gives $v$ for various values of $n$ and $k$ .

Table 1. $v$ for various values of $n$ and $k$ .

| $n$ | $k$ (ft$^{-1}$) | $d$ | $v$ (mph) |
|------|-------|------|----|
| 0.60 | 0.060 | 0.90 | 2  |
| 0.55 | 0.055 | 0.83 | 4  |
| 0.50 | 0.050 | 0.75 | 5  |
| 0.45 | 0.045 | 0.68 | 8  |
| 0.40 | 0.040 | 0.60 | 10 |
| 0.35 | 0.035 | 0.53 | 12 |
| 0.30 | 0.030 | 0.45 | 14 |
| 0.25 | 0.025 | 0.38 | 15 |
| 0.20 | 0.020 | 0.30 | 16 |
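Table 1 can be recomputed directly from equation (2) and the unit conversion above; a short independent check in Python (the authors worked in C++, which is not reproduced in the paper):

```python
import math

# Parameters chosen in the text for the low-speed regime.
dx, dt, p = 15.0, 0.5, 0.85   # cell size (ft), time step (s), move probability

def v_mph(n):
    """Average speed from equation (2), converted from cells/step to mph."""
    d = 1.5 * n                               # occupancy n -> density d = 3n/2
    v_cells = (1 - math.sqrt(1 - 4 * d * (1 - d) * p)) / (2 * d)
    return v_cells * (dx / dt) * 3600 / 5280  # cells/step -> ft/s -> mph

# Rounding each value reproduces the final column of Table 1.
for n in (0.60, 0.55, 0.50, 0.45, 0.40, 0.35, 0.30, 0.25, 0.20):
    print(n, round(v_mph(n)))
```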
# One Lane

To explore the one-dimensional cellular automata model, we wrote a simple simulation in C++. The simulation consists of a 5,000-element (circular) array of bits, with a 1 representing a car and a 0 representing a (car-sized) empty space. The array is initialized randomly based on a value for the occupancy $n$ : An element is initialized to 1 with probability $n$ , or to 0 with probability $1 - n$ . The array is iterated over 5,000 time cycles: On each cycle, a car moves forward with probability $p$ if the square in front of it is empty. The flow $q$ is calculated as the number of cars $N$ passing the end of the array divided by the number of time cycles (i.e., $q = N / 5000$ ), and thus the average speed of an individual car in cells per time cycle is $v = q / n = N / 5000n$ .

Table 2. Comparison of simulation and equation values for speed.

| $n$ | Simulation ($p = 1/2$) | Equation ($p = 1/2$) | Simulation ($p = 3/4$) | Equation ($p = 3/4$) |
|-----|-------|-------|-------|-------|
| 0.2 | 0.433 | 0.438 | 0.694 | 0.697 |
| 0.4 | 0.356 | 0.349 | 0.592 | 0.589 |
| 0.6 | 0.234 | 0.232 | 0.392 | 0.392 |
| 0.8 | 0.108 | 0.110 | 0.171 | 0.174 |

Table 2 shows results for various values of the occupancy $n$ and probability $p$ , verifying the accuracy of the one-dimensional cellular automata equation (2).

The value for $p$ should be related to the mean and standard deviation of $v_{\mathrm{cruise}}$ . The mean and standard deviation of a binary random variable are $p$ and $\sqrt{p(1 - p)}$ . We have

$$
\frac {p}{\sqrt {p (1 - p)}} = \frac {\mu}{\sigma} \quad \longrightarrow \quad p = \frac {1}{1 + \left(\frac {\sigma}{\mu}\right) ^ {2}}.
$$

For $\mu(v_{\mathrm{cruise}}) = 60 \mathrm{mph}$ and $\sigma(v_{\mathrm{cruise}}) = 5 \mathrm{mph}$ , we get $p = 144 / 145$ . Now, the cruise speed corresponds to $p$ cells per time step, so $L = \mu \Delta t / p$ ; assuming $L = 10 \mathrm{ft}$ , $p = 144 / 145$ , and $\mu = 60 \mathrm{mph} = 88 \mathrm{ft/s}$ , we obtain a time step of $\Delta t = p L / \mu \approx 0.113 \mathrm{s}$ .

We now use the model to predict how fast (on average) a car moves in a single lane, as a function of the occupancy. We consider the "relative speed" $v_{\mathrm{rel}}$ , the average speed divided by the (mean) cruise speed. The average speed is given by the one-dimensional cellular automata equation, and the cruise speed is $p$ cells per time cycle, so this gives us

$$
v_{\mathrm{rel}} = \frac{v_{\mathrm{avg}}}{v_{\mathrm{cruise}}} = \frac{1 - \sqrt{1 - 4 n (1 - n) p}}{2 p n}.
$$

Using $p = 144 / 145$ , we calculate $v_{\mathrm{rel}}$ and $v_{\mathrm{avg}}$ as a function of $n$ (Table 3).

The model predicts that for low occupancy the average speed will be near the cruise speed, but for occupancies greater than 0.5 the average speed will be significantly lower. The cellular automata model does not take following distance into account; thus, it tends to overestimate $v_{\mathrm{avg}}$ for high speeds and is most accurate when occupancy is high and speed is low.

Table 3 also shows flow rate $q = n v_{\mathrm{avg}} / L$ , in cars/s, as a function of occupancy. The flow rate is symmetric about $n = 0.5$ .
Each car movement can be thought of as switching a car with an empty space, so the movement of cars to the right is equivalent to the movement of holes to the left.

The model fails, however, to give a reasonable value for the maximum flow rate: 4.1 cars/s ≈ 14,600 cars/h, about seven times a reasonable maximum rate [Rothery 1992]. The reason is that the cell size equals the car length, a correct approximation only as car speeds approach zero and occupancy approaches 1.

Table 3. Relative speed, average speed, and flow rate as a function of occupancy.

| $n$ | $v_{\mathrm{rel}}$ | $v_{\mathrm{avg}}$ (ft/s) | flow rate (cars/s) |
|-----|------|----|-----|
| 0.1 | .999 | 88 | 0.9 |
| 0.2 | .998 | 88 | 1.8 |
| 0.3 | .995 | 88 | 2.6 |
| 0.4 | .987 | 87 | 3.5 |
| 0.5 | .923 | 81 | 4.1 |
| 0.6 | .658 | 58 | 3.5 |
| 0.7 | .426 | 38 | 2.6 |
| 0.8 | .249 | 22 | 1.8 |
| 0.9 | .111 | 10 | 0.9 |

For $n \geq 0.5$ , we should have a larger cell size; so we assume that cell size equals car length plus following distance and that following distance is proportional to speed. Assuming a 1-s following distance, we obtain cell size as

$$
C = L + v_{\mathrm{avg}} \times (1 \mathrm{~s}).
$$

But we do not know the value of $v_{\mathrm{avg}}$ until we use the cell size to obtain it! For $n$ large, we can assume that $v_{\mathrm{avg}} \approx v_{\mathrm{cruise}}$ and find an upper bound on cell size:

$$
C = L + v_{\mathrm{cruise}} \times (1 \mathrm{~s}) = 98 \mathrm{~ft}.
$$

We divide the original flow rate by the increased cell size (in units of the original 10-ft cell) to obtain a more reasonable flow rate:

$$
q = \frac{4.063 \mathrm{~cars/s}}{98 \mathrm{~ft} / 10 \mathrm{~ft}} = 0.415 \mathrm{~cars/s} \approx 1,500 \mathrm{~cars/h}.
$$

This is likely to be an underestimate; for greater accuracy, we must find a method to compute the correct cell size before finding the speed. We address this problem later.

# Two Lanes

We expand the one-dimensional model. The simulation consists of a two-dimensional $(1000 \times 2)$ array of bits. The array is initialized randomly and then iterated over 1,000 time cycles: On each cycle, a car moves forward with probability $p$ if the cell in front of it is empty. If not, provided the cells beside it and diagonally forward from it are unoccupied, with probability $p$ the car changes lanes and moves one cell forward.

Like the one-lane simulation, the two-lane one is correct only for high densities and low speeds, since it uses cell size equal to car length. Hence, we do not use the two-lane simulation to compute the maximum flow rate. However, since cell size affects flow rate by a constant factor, we can compare flow rates by varying parameters of the simulation.
In particular, we use this model to examine how the flow rate changes with the variance of speeds.

# "But I Want to Bring My Boat!"

There are two main types of variance in speeds:

- $\sigma_t^2$ of traveling speed (random fluctuations in the speed of a single vehicle over time), and
- $\sigma_{m}^{2}$ of mean speed (variation in the mean speeds of all vehicles).

In the one-lane simulation, we assumed that $\sigma_{m} = 0$ and $\sigma_{t} = 5\mathrm{mph}$ ; this choice was reflected in the calculation of $p$ , since $p = 1 / [1 + (\sigma_t / \mu)^2]$ for every car. When we take $\sigma_{m}$ into account, each car gets a different value of $p$ :

- Choose the car's mean speed $\mu$ randomly from the normal distribution with mean $v_{\mathrm{cruise}}$ and standard deviation $\sigma_{m}$ .
- The car's traveling speed will be normally distributed with mean $\mu$ and standard deviation $\sigma_t$ .
- The car's transition probability $p$ is

$$
p = \frac{\mu}{v_{\mathrm{cruise}} + \lambda \sigma_{m}} \left( \frac{1}{1 + \left(\frac{\sigma_{t}}{\mu}\right)^{2}} \right),
$$

where $\lambda$ is a constant best determined empirically. We use $\lambda = 0$ , a conservative estimate of the change in flow rate as a function of $\sigma_{m}$ .

We consider what effect $\sigma_{t}$ and $\sigma_{m}$ have on the speed at a given occupancy. Considering the cars' movement as a directed random walk, increasing $\sigma_{t}$ increases randomness in the system, causing cars to interact (and hence block one another's movement) more often, decreasing average speed.

The effects of $\sigma_{m}$ are even more dramatic: Cars with low mean speeds impede faster cars behind them.

We ran simulations in which we fixed $n = 0.5$ and varied both $\sigma_{m}$ and $\sigma_{t}$ from 0 to 15 mph. For each pair of values, we calculated average flow rate for the one- and two-lane simulations.
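The authors' C++ simulations are not listed in the paper; the sketch below is a minimal Python reimplementation of the update rules as described (circular track, synchronous updates, per-car $p$ from the formula above with $\lambda = 0$; details such as update order are guesses, so numbers will differ from Tables 2 and 4 in detail but not in trend):

```python
import random

def simulate(lanes, n, sigma_t, sigma_m, cells=600, steps=600,
             v_cruise=60.0, lam=0.0, seed=0):
    """Cellular-automata traffic on a circular track.  A cell holds 0.0 if
    empty, else that car's personal move probability p.  Returns the average
    number of forward moves per time step (a flow proxy)."""
    rng = random.Random(seed)

    def car_p():
        # mean speed drawn per car (mph); clamped to keep p in (0, 1]
        mu = max(1.0, rng.gauss(v_cruise, sigma_m)) if sigma_m > 0 else v_cruise
        return min(1.0, mu / (v_cruise + lam * sigma_m) / (1 + (sigma_t / mu) ** 2))

    road = [[car_p() if rng.random() < n else 0.0 for _ in range(cells)]
            for _ in range(lanes)]
    moves = 0
    for _ in range(steps):
        nxt = [[0.0] * cells for _ in range(lanes)]
        for ln in range(lanes):
            for i in range(cells):
                p = road[ln][i]
                if p == 0.0:
                    continue
                j = (i + 1) % cells
                if road[ln][j] == 0.0:               # not blocked: advance w.p. p
                    if rng.random() < p:
                        nxt[ln][j] = p
                        moves += 1
                    else:
                        nxt[ln][i] = p
                elif lanes == 2 and road[1 - ln][i] == 0.0 \
                        and road[1 - ln][j] == 0.0 and rng.random() < p:
                    nxt[1 - ln][j] = p               # blocked: change lanes, move up
                    moves += 1
                else:
                    nxt[ln][i] = p                   # stay put
        road = nxt
    return moves / steps

# Lane changing matters little for homogeneous speeds, a lot for mixed speeds.
r_homog = simulate(2, 0.5, 0, 0, seed=1) / (2 * simulate(1, 0.5, 0, 0, seed=2))
r_mixed = simulate(2, 0.5, 5, 15, seed=3) / (2 * simulate(1, 0.5, 5, 15, seed=4))
print(round(r_homog, 2), round(r_mixed, 2))
```

The synchronous update is conflict-free here: a car can enter a cell forward only if the cell behind it in that lane is occupied, and by lane change only if that same cell is empty, so no two cars ever claim one cell.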
Table 4 shows the effects of lane-changing by comparing maximum flows for the two-lane model with twice that for the one-lane model. For small $\sigma_{t}$ and $\sigma_{m}$ , allowing cars to switch lanes does not increase the flow rate much; for high values, the two-lane model has a much higher flow rate. Each 5 mph increase in $\sigma_{m}$ results in an $11 - 16\%$ decrease in flow rate (two-lane model, $\sigma_{t} = 0$ ), while each 5 mph increase in $\sigma_{t}$ results in a $5 - 7\%$ decrease in flow rate (two-lane model, $\sigma_{m} = 0$ ). Both variances dramatically affect flow rate, and $\sigma_{m}$ is more significant than $\sigma_{t}$ .

Table 4. Two-lane average flow / Twice the one-lane average flow.

| $\sigma_m$ (mph) | $\sigma_t = 0$ | $\sigma_t = 5$ | $\sigma_t = 10$ | $\sigma_t = 15$ |
|----|---------|---------|---------|---------|
| 0  | 979/976 | 923/908 | 856/830 | 805/776 |
| 5  | 822/757 | 817/746 | 790/729 | 753/683 |
| 10 | 699/537 | 691/518 | 659/485 | 642/469 |
| 15 | 588/393 | 569/366 | 540/319 | 518/292 |

# "So, Can I Bring My Boat?"

We consider how variations in vehicle type affect $\sigma_{m}$ , $\sigma_{t}$ , and flow rate. Most large vehicles (boats, campers, semis, and motor homes) travel more slowly than most cars. A significant proportion of large vehicles results in increased $\sigma_{m}$ and hence a lower flow rate.

As a simplified approximation, we assume that there are two types of vehicles: fast cars $(\mu = \mu_1)$ and slow trucks $(\mu = \mu_2)$ , with proportion $\alpha$ of slow trucks. Writing $\bar{\mu} = (1 - \alpha)\mu_1 + \alpha\mu_2 = \mu_1 - (\mu_1 - \mu_2)\alpha$ for the overall mean, we calculate

$$
\begin{aligned}
\sigma_{m}^{2} &= \alpha (\mu_{2} - \bar{\mu})^{2} + (1 - \alpha)(\mu_{1} - \bar{\mu})^{2} \\
&= \alpha \left[ \mu_{2} - \left(\mu_{1} - \left(\mu_{1} - \mu_{2}\right)\alpha\right) \right]^{2} + (1 - \alpha)\left(\left(\mu_{1} - \mu_{2}\right)\alpha\right)^{2} \\
&= \left(\mu_{1} - \mu_{2}\right)^{2}\left(\alpha^{2}(1 - \alpha) + \alpha (1 - \alpha)^{2}\right) = \left(\mu_{1} - \mu_{2}\right)^{2}\alpha (1 - \alpha).
\end{aligned}
$$

Thus, $\sigma_{m} = (\mu_{1} - \mu_{2})\sqrt{\alpha(1 - \alpha)}$ . We now assume that fast cars travel at $\mu_{1} = 70$ mph and slow trucks at $\mu_{2} = 50$ mph and find $\sigma_{m}$ as a function of $\alpha$ . Random fluctuations in vehicle speed are likely to depend more on driver psychology than on vehicle type, so we assume $\sigma_{t} = 5$ mph. We interpolate linearly in Table 4 to find the flow rate (cars per 1,000 time cycles) as a function of the proportion of slow vehicles $\alpha$ (Table 5).

Table 5. Flow rate (two-lane / twice one-lane) as a function of proportion of slow vehicles.

| $\alpha$ | $\sigma_m$ (mph) | flow rate | % reduction in flow |
|-----|------|---------|-----------|
| 0   | 0    | 923/908 | 0/0       |
| .01 | 1.99 | 881/844 | 4.6/7.0   |
| .02 | 2.80 | 864/817 | 6.4/10.0  |
| .05 | 4.36 | 831/767 | 10.0/15.5 |
| .1  | 6.00 | 792/700 | 14.1/22.9 |
| .2  | 8.00 | 741/609 | 19.7/32.9 |
| .5  | 10.0 | 691/518 | 25.1/43.0 |

The flow rate is decreased significantly by slow vehicles: If $1\%$ of vehicles are slow, the flow rate decreases by $5\%$ ; if $10\%$ of vehicles are slow, the flow rate decreases by $15\%$ . The effects of $\sigma_{m}$ are magnified if vehicles are unable to pass slower vehicles; so if the highway went down to one lane at any point (due to construction or accidents, for example), the flow rate would be reduced even further. Hence, we recommend no large vehicles (vehicles that may potentially block multiple lanes) and no slow vehicles (vehicles with a significantly lower mean cruising speed). Exceptions could be made if a family has no other vehicle and for vehicles with a large number of people (e.g., buses). Slow-moving vehicles should be required to stay in the right lane, and families should be encouraged to take as few vehicles as possible.

# The Space-Speed Curve

To determine optimal traffic flow rates, we can combine the one-dimensional cellular automata and the steady-state models to get a good estimate of the relationship between speed $v$ and the separation distance $s$ .

$s \leq 15$ : There is essentially no traffic flow: $v(s) = 0$ .

$15 \leq s \leq 30$ : Traffic travels between 0 and $12\mathrm{mph}$ , and the one-dimensional cellular automata model applies; $v$ is approximately a linear function of $s$ .

$30 \leq s \leq 140$ : Traffic travels between 12 and $55 \mathrm{mph}$ , and the steady-state model is appropriate; $v$ is again approximately a linear function of $s$ , with a less steep slope.

$140 \leq s$ : Traffic travels at the speed limit of $60 \mathrm{mph}$ .

# Incoming Traffic Rates

The optimal flow along a route is determined by the flow through its narrowest bottleneck. However, the time of travel (which is a more important measure for our purposes) is affected by other factors, including the rate of incoming traffic.
If incoming traffic is heavy, congestion occurs at the beginning of the route, decreasing speed and increasing travel time for each car.

How does congestion occur and how much does it influence travel time? Consider the one-dimensional cellular automata model with $p = 1/2$ . Represent the road by the real line and let $F(x, t)$ denote the density of cars at point $x$ on the road at time $t$ . (For our purposes now, the cells and cars are infinitesimal in length.) Suppose that the initial configuration $F(x, 0)$ is given by

$$
F (x, 0) = \left\{ \begin{array}{l l} 1, & \text{if } x < 0; \\ 0, & \text{if } x \geq 0. \end{array} \right.
$$

This represents a dense line of cars about to move onto an uncongested road.

We omit units for the time being. By formulas derived earlier, the speed $v(x_0, t_0)$ at position $x_0$ and time $t_0$ is given by

$$
v (x_0, t_0) = \frac{1 - \sqrt{1 - 2 F(x_0, t_0)\left[1 - F(x_0, t_0)\right]}}{2 F(x_0, t_0)},
$$

while the flux $q = F v$ must equal the rate at which the number of cars past point $x_0$ is increasing; that is,

$$
q(x_0, t_0) = \frac{d}{dt}\left( \int_{x_0}^{\infty} F(x, t)\, dx \right)(t_0).
$$

Thus,

$$
\frac{\partial F}{\partial t} = -\frac{\partial q}{\partial x} = -\frac{\partial}{\partial x}\left( \frac{1 - \sqrt{1 - 2F(1 - F)}}{2} \right).
$$

This is a partial differential equation whose unique solution is

$$
F (x, t) = \left\{ \begin{array}{c l} 1, & \text{if } x/t < -\frac{1}{2}; \\ \frac{1}{2} - \frac{(x/t)}{\sqrt{2 - 4(x/t)^{2}}}, & \text{if } -\frac{1}{2} \leq x/t \leq \frac{1}{2}; \\ 0, & \text{if } \frac{1}{2} < x/t. \end{array} \right.
$$

Thus, after a steady influx of cars for a period of $\Delta t$ , the resulting congestion is

$$
\frac{1}{2} - \frac{\frac{x}{\Delta t}}{\sqrt{2 - 4\left(\frac{x}{\Delta t}\right)^{2}}}
$$

and the congestion ends at $x = \Delta t / 2$ .
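The self-similar profile can be checked numerically: with flux $q(F) = F\,v(F)$, each density value in the fan should travel at the characteristic speed $q'(F)$, i.e., $x/t = q'(F(x/t))$. A short Python verification:

```python
import math

def q(F):
    """Flux F*v for p = 1/2 (cars per time step crossing a point)."""
    return (1 - math.sqrt(1 - 2 * F * (1 - F))) / 2

def F_rarefaction(u):
    """Density profile at ray u = x/t, for -1/2 <= u <= 1/2."""
    return 0.5 - u / math.sqrt(2 - 4 * u * u)

# Check u = q'(F(u)) with a central-difference derivative.
h = 1e-6
for u in (-0.4, -0.2, 0.0, 0.2, 0.4):
    F = F_rarefaction(u)
    dq = (q(F + h) - q(F - h)) / (2 * h)
    assert abs(dq - u) < 1e-4

# The fan matches the constant states at the edges of the wedge.
assert abs(F_rarefaction(-0.5) - 1.0) < 1e-12
assert abs(F_rarefaction(0.5) - 0.0) < 1e-12
```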
Thus, the extent of the congested traffic is linear in $\Delta t$ . So if there is a steady influx of cars onto a highway, the extent of the resulting congestion is directly proportional to how long it takes them to enter, thus to the number $N$ of them. Likewise, the time for the congestion to dissipate is proportional to $N$ .

This allows us to evaluate staggering evacuation times for different counties. Suppose that $n$ counties have populations $P_{1},\ldots ,P_{n}$ . If all evacuate at the same time, the effect of the resulting traffic jam on total travel time is proportional to the product of the extent of the jam and the time before it dissipates:

$$
\Delta T_{\mathrm{travel\ time}} = c_{1} \cdot c_{2}\left(P_{1} + \dots + P_{n}\right) \cdot c_{3}\left(P_{1} + \dots + P_{n}\right) = c_{1} c_{2} c_{3}\left(P_{1} + \dots + P_{n}\right)^{2}
$$

for some constants $c_{1}, c_{2}, c_{3}$ . If the evacuations are staggered, the effect is

$$
\Delta T_{\mathrm{travel\ time}} = c_{1} c_{2} c_{3} P_{1}^{2} + \dots + c_{1} c_{2} c_{3} P_{n}^{2} = c_{1} c_{2} c_{3}\left(P_{1}^{2} + \dots + P_{n}^{2}\right).
$$

Now, $P_1^2 + \dots + P_n^2 < (P_1 + \dots + P_n)^2$ ; so unless one of the counties has a much larger population than the rest, the difference between these two values is relatively large. We therefore recommend staggering the evacuations of the counties.
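A toy comparison of the two penalties (the county populations below are hypothetical, for illustration only):

```python
# Hypothetical county populations -- illustrative, not data from the paper.
P = [250_000, 120_000, 80_000, 50_000]

simultaneous = sum(P) ** 2           # jam penalty ~ (P1 + ... + Pn)^2
staggered = sum(p * p for p in P)    # jam penalty ~ P1^2 + ... + Pn^2

print(staggered / simultaneous)      # 0.3432: about a two-thirds reduction here
```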
Assuming that there are no sources or sinks in a region, we have

$$
\frac {\partial q}{\partial x} + \frac {\partial k}{\partial t} = 0,
$$

where $q$ is flow rate (cars/s), $k$ is density (cars/ft), $x$ is location (ft), and $t$ is time (s). Let the merge or diverge occur at a specific point $x$ . In the steady state (i.e., for $\partial k / \partial t = 0$ ), we have $\partial q / \partial x = 0$ , so flow is conserved at a junction.

For a flow $q_{s}$ merging from or diverging into flows $q_{1}$ and $q_{2}$ , we have $q_{s} = q_{1} + q_{2}$ . If proportion $P$ $(0 < P < 1)$ of the flow $q_{s}$ goes to (or comes from) $q_{1}$ , and we know either density ($k_{s}$ or $k_{1}$), we can solve for the other density:

$$
q_{1} = P q_{s}, \qquad k_{1} v_{1} = P k_{s} v_{s}, \qquad k_{1} v(k_{1}) = P k_{s} v(k_{s}).
$$

From the steady-state model, for the given values of the constants, we know

$$
v (k) = \left\{ \begin{array}{l l} 88 \mathrm{~ft/s}, & 0 < k < 0.0056; \\ 21.7\left( \sqrt{.08 + \frac{.092}{k}} - 1 \right), & 0.0056 < k < 0.1. \end{array} \right.
$$

Assuming that both densities are greater than the free-travel density $k = .0056$ , we can set

$$
k_{1}\left( \sqrt{.08 + \frac{.092}{k_{1}}} - 1 \right) = P k_{s}\left( \sqrt{.08 + \frac{.092}{k_{s}}} - 1 \right).
$$

Given either $k_{s}$ or $k_{1}$ , we can solve numerically for the other. Then we can find the speeds associated with each density using the above expression for $v(k)$ .

Solving the equation gives two values; we assume that the density is greater on the single-lane side of the junction (i.e., density increases at a merge and decreases at a diverge). Also, if solving produces a speed $v_{1}$ that is larger than $v_{\mathrm{cruise}}$ , we set $v_{1} = v_{\mathrm{cruise}}$ and calculate $k_{1} = q_{1} / v_{1}$ .

How is the steady-state flow rate determined on a path with merges and diverges?
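The junction equation above can be solved numerically by simple bisection; a Python sketch using the given $v(k)$, taking the higher-density root as the text's convention prescribes (the example input values are arbitrary):

```python
import math

def v(k):
    """Steady-state speed (ft/s) in the congested regime .0056 < k < 0.1 (cars/ft)."""
    return 21.7 * (math.sqrt(0.08 + 0.092 / k) - 1)

def q(k):
    """Flow (cars/s) = density times speed."""
    return k * v(k)

def k1_from_ks(ks, P):
    """Solve k1*v(k1) = P*ks*v(ks) for k1 by bisection, taking the
    higher-density root (the text's choice for the single-lane side)."""
    target = P * q(ks)
    # flow rises then falls on (.0056, .1); locate the peak by a coarse scan
    grid = [0.0056 + i * (0.1 - 0.0056) / 1000 for i in range(1001)]
    k_peak = max(grid, key=q)
    a, b = k_peak, 0.1            # q decreases from its peak to 0 on [k_peak, 0.1]
    for _ in range(100):
        m = 0.5 * (a + b)
        a, b = (m, b) if q(m) > target else (a, m)
    return 0.5 * (a + b)

# Example (arbitrary values): half the flow at density 0.03 cars/ft.
k1 = k1_from_ks(0.03, 0.5)
assert abs(q(k1) - 0.5 * q(0.03)) < 1e-6
print(round(k1, 4), round(v(k1), 1))
```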
Following Daganzo [1997], we consider a bottleneck to be a location (such as a merge or diverge) where queues can form and persist with free flow downstream. The bottleneck capacity is the maximum flow rate through the bottleneck, which (with Daganzo) we assume to be constant. If a steady-state flow greater than the bottleneck capacity attempts to enter the bottleneck, the queue size will increase until it stretches all the way back to its origin. At that point, the steady-state flow is blocked by the queue of cars and decreases to the bottleneck capacity. Hence, the maximum steady-state flow rate along a path is the minimum capacity of all bottlenecks along the path.

# Parallel Paths

From $A$ to $B$ , let there be multiple parallel paths $p_1, \ldots, p_m$ with bottleneck capacities $c_1, \ldots, c_m$ . The maximum steady-state flow rate from $A$ to $B$ is the minimum of:

- the bottleneck capacity of the diverge at point $A$ ,
- the bottleneck capacity of the merge at point $B$ , and
- the sum of the bottleneck capacities of all paths $p_i$ .

If a path has no bottlenecks, its capacity is the maximum flow rate predicted by the steady-state model.

We focus on maximizing flow rate rather than minimizing evacuation time, since for a large number of cars, maximizing flow gives near-minimal evacuation time. The maximum flow rate $q_{\mathrm{max}}$ from Charleston to Columbia is

$$
q_{\mathrm{max}} = \min \left( \sum_{i} q_{i}, c_{0}, c_{f} \right),
$$

where $q_{i}$ is the maximum flow rate of path $i$ , $c_{0}$ is the bottleneck capacity of Charleston, and $c_{f}$ is the bottleneck capacity of Columbia. The flow rates $q_{i}$ are

$$
q_{i} = \min \left( b_{1}, \ldots, b_{n}, q_{i,\mathrm{ss}} \right),
$$

where $b_{1}, \ldots, b_{n}$ are the capacities of bottlenecks along the given route and $q_{i,\mathrm{ss}}$ is the maximum flow along that route as predicted by the steady-state model.

We first consider evacuation with no bottlenecks along I-26.
Denoting the steady-state value $q_{\mathrm{I\text{-}26,ss}}$ by $q_{I}$ , this gives us $q_{\mathrm{max}} = \min (q_I, c_0, c_f)$ . Which factor limits $q_{\mathrm{max}}$ ? We cannot achieve $q_{I}\approx 2,000$ cars/h if either $c_{0} < 2,000$ (traffic jam in Charleston) or $c_{f} < 2,000$ (traffic jam in Columbia). With the traffic in Columbia splitting into three different roads, there should be less congestion there than with everyone merging onto I-26 in Charleston. Hence, we assume that $c_{0} < c_{f}$ , so the limiting factor is $c_{0}$ if $c_{0} < 2,000$ and $q_{I}$ if $c_{0} > 2,000$ . The value of $c_{0}$ is best determined empirically, perhaps by extrapolation from Charleston rush-hour traffic data or from data from the 1999 evacuation.

# Effects of Proposed Strategies

# Reversing I-26

Reversing I-26 doubles $q_{I}$ to 4,000 cars/h. It is likely to increase $c_{0}$ as well, since cars can be directed to two different paths onto I-26 and are thus less likely to interfere with the merging of cars going on the other set of lanes. On the other hand, twice as many cars will enter the Columbia area simultaneously, and the unchanged capacity $c_f$ may become the limiting factor. It may be possible to increase $c_f$ by rerouting some of the extra traffic to avoid Columbia, or even turning around traffic on some highways leading out of Columbia.

Thus, this strategy is likely to improve evacuation traffic flow.

# Reversing Other Highways

A similar argument applies to turning around the traffic on the smaller highways. Each highway adds some capacity to the total $\sum_{i} q_{i}$ , increasing this term, but each highway's capacity is significantly less than $q_{I}$ , and increasing the number of usable highways has unclear effects on $c_{0}$ . It may increase capacity by spreading out Charleston residents to different roads, or crossing evacuation routes may lead to traffic jams.
As with the reversal of I-26, reversal of smaller highways does not affect $c_{f}$ (unless crossing evacuation routes becomes a problem in Columbia). More important, the interactions between highways (merges and diverges) may lead to bottlenecks, reducing capacity. In fact, interactions between these highways and I-26 could cause bottlenecks that slow the flow on I-26, perhaps offsetting the extra capacity of the smaller highways. Thus, it is safer not to turn around traffic on the secondary highways or to encourage using these as evacuation routes.

# Temporary Shelters

Establishing temporary shelters in Columbia, to reduce the traffic leaving that city, could be useful if only some of the cars are directed into Columbia; thus the flow of traffic in the Columbia area would be split into four streams rather than three, possibly increasing $c_f$ . Nevertheless, we hesitate to recommend this strategy, since the actual effects are likely to be the opposite. Evacuees entering Columbia are likely to create congestion there, making it difficult for traffic to enter, resulting in a major bottleneck. Without careful regulation, more people will try to stay in Columbia than the available housing can accommodate, and frantic attempts of individuals driving around looking for housing will exacerbate the bottleneck. Hence, it is most likely that $c_f$ will decrease significantly, probably becoming the limiting factor on maximum flow rate.

# Staggering Traffic Flows

Staggering is likely to reduce the time for an average car to travel from Charleston to Columbia while leaving the value of the steady-state flow rate $q_{I}$ unchanged. Hence, staggering decreases total evacuation time; it may also increase maximum flow rate, since it decreases the number of cars traveling toward I-26 at any one time, reducing the size of the $c_{0}$ bottleneck. Increasing the capacity $c_{0}$ , however, increases the flow rate only when $c_{0}$ is the limiting factor.
# Evacuees from Florida and Georgia

Since evacuation time is proportional to number of cars over flow rate, out-of-state evacuees add to total evacuation time unless they take a route that does not intersect the paths of the South Carolina evacuees. However, it is very hard to constrain the routes of out-of-state evacuees, since they arrive by a variety of routes and are unlikely to be informed of the state's evacuation procedures. In particular, major bottlenecks are likely at the intersections of I-26 with I-95 and of I-95 with I-20 and U.S. 501. If many cars from I-95 attempt to go northwest on I-26 toward Columbia, $q_{I}$ will no longer equal 2,000 cars/h but instead the capacity of the I-26/I-95 bottleneck. This is likely to reduce $q_{I}$ significantly and to make it the limiting factor.

A similar argument suggests that I-95 traffic will impede the flow of traffic west from Myrtle Beach by causing a bottleneck at the I-95/I-20 junction. Traffic flow from Myrtle Beach is less than from Charleston, and many of the cars from I-95 may have already exited at I-26; so the bottleneck at the I-20 junction is likely to be less severe. Nevertheless, the flow of evacuees from Florida and Georgia has the potential to reduce dramatically the success of the evacuation.

# References

Daganzo, C.F., et al. 1997. Causes and effects of phase transitions in highway traffic. ITS Research Report UCB-ITS-RR-97-8 (December 1997).

Gartner, Nathan, Carroll J. Messer, and Ajay K. Rathi. 1992. Traffic Flow Theory: A State of the Art Report. Revised monograph. Special Report 165. Oak Ridge, TN: Oak Ridge National Laboratory. http://www.tfhrc.gov/its/tft/tft.htm.

Kuhne, R., and P. Michalopolous. 1992. Continuum flow models. Chapter 5 in Gartner et al. [1992].

Miller, Carl A. 2001. Traffic flow, random growth models, and the longest common subsequence problem. In preparation.

Rothery, R.W. 1992. Car following models. Chapter 4 in Gartner et al. [1992].
# Modeling Hurricane Evacuation Strategies

DURHAM, NC, FEB. 12—Hurricanes pose a serious threat to citizens on the South Carolina coastline, as well as other beach dwellers in Florida, Georgia, and other neighboring states. In 1999, the evacuation effort preceding the expected landfall of Hurricane Floyd led to a monumental traffic jam that posed its own serious problems for the more than 500,000 evacuees who fled the coastline and headed for the safe haven of Columbia. Several strategies have been proposed to avoid a repeat of this traffic disaster.

First, it has been suggested that the two coastal-bound lanes of I-26 be turned into two lanes of Columbia-bound traffic. A second strategy would involve staggering the evacuation of the coastal counties over some time period consistent with how hurricanes affect the coast, instead of evacuating all counties at once. Third, the state might turn around traffic flow on several of the smaller highways besides I-26 that extend inland from the coast. The fourth strategy under consideration is a plan to establish more temporary shelters in Columbia. Finally, the state is considering placing restrictions on the type and number of vehicles that can be brought to the coast.

In the interest of the public, we have developed and tested several mathematical models of traffic flow to determine the efficacy of each proposal. On balance, they suggest that the first strategy is sound and should be implemented. Although doubling the number of lanes will not necessarily cut the evacuation time in half, or even double the flow rate on I-26 away from the coast, it will significantly improve the evacuation time under almost any weather conditions.

Our models suggest that staggering the evacuation of different counties is also a good idea.
Taking such action will, on the one hand, reduce the severity of the bottleneck that occurs when the mass of evacuees reaches Columbia and, on the other hand, could potentially increase average traffic speed without significantly increasing traffic density. The net effect of implementing this strategy will likely be an overall decrease in coastal evacuation time.

The next strategy, which suggests reversing traffic on several smaller highways, is not so easy to recommend. The main reason is that the unorganized evacuation attempts of many people on frequently intersecting secondary roads are a recipe for inefficiency. In places where these roads intersect I-26, the merging of a heightened volume of secondary-road traffic is sure to cause bottlenecks on the interstate that could significantly impede flow. To make a strategy of reversing traffic on secondary roads workable, the state would have to use only roads that have a high capacity, at least two lanes, and a low potential for traffic conflicts with other highways. This would require competent traffic management directed at avoiding bottlenecks and moving Charleston traffic to Columbia with as few evacuation route conflicts as possible.

The fourth proposal, of establishing more temporary shelters in Columbia, is a poor idea. Because it is assumed that travelers are relatively safe once they reach Columbia, the main objective of the evacuation effort should be minimizing the transit time to Columbia and the surrounding area. It is fairly clear that increasing the number of temporary shelters in Columbia would lead to an increased volume of traffic to the city (by raising expectations that there will be free beds there) and exacerbate the traffic problem in the city itself (due to an increased demand for parking).
Together, these two factors are sure to worsen the bottleneck caused by I-26 traffic entering Columbia and would probably increase the total evacuation time by decreasing the traffic flow on the interstate.

The final proposal, of placing limitations on the number and types of vehicles that can be brought to the beach, is reasonable. Families with several cars should be discouraged from bringing all of their vehicles and perhaps required to register with the state if the latter is their intention. Large, cumbersome vehicles such as motor homes should be discouraged unless they are a family's only option. Although buses slow down traffic, they are beneficial because they appreciably decrease the overall number of drivers. In all cases, slow-moving vehicles should be required to travel in the right lane during the evacuation.

In addition to the strategies mentioned above, commuters in the 1999 evacuation were acutely aware of the effect on traffic flow produced by coastal residents of Georgia and Florida traveling up I-95. We have concluded that, when high-volume traffic flows such as these compete for the same traffic pipeline, the nearly inevitable result is a bottleneck. A reasonable solution to this problem would be to bar I-95 traffic from merging onto I-26 and instead encourage and assist drivers on I-95 to use the more prominent, inland-bound secondary roads connected to that interstate.

To conclude, we think that combining the more successful strategies suggested could lead to a substantial reduction in evacuation time, the primary measure of evacuation success. Minimizing the number of accidents that occur en route is also important, but our models, directed at the former goal, do not compromise the latter objective. In fact, the problem of minimizing accidents is chiefly taken care of by ensuring that traffic flow is as orderly and efficient as possible.

— Samuel W. Malone, Carl A. Miller, and Daniel B.
Neill in Durham, NC

# The Crowd Before the Storm

Jonathan David Charlesworth

Finale Pankaj Doshi

Joseph Edgar Gonzalez

The Governor's School

renamed August 2001:

Maggie L. Walker Governor's School

for Government and International Studies

Richmond, VA

Advisor: John A. Barnes

# Introduction

Applying safety regulations and flow-density equations, we find that the maximum rate of flow through a lane of road is 1,500 cars/h, occurring when cars travel at 27.6 mph.

We construct a computer simulation that tracks the exit of cars through South Carolina's evacuation network. We attempt to optimize the network by reversing opposing lanes on various roads and altering the time at which each city should begin evacuating, using a modified genetic algorithm.

The best solution—the one that evacuates the most people in $24\mathrm{h}$—involves reversing all the opposing lanes on evacuation routes. Increasing the holding capacity of Columbia is only marginally helpful. Georgia and Florida traffic on I-95 is only mildly detrimental, but allowing people to take their boats and campers greatly decreases the number of people that can be evacuated.

# Background on Evacuation Plans

After the 1999 evacuation, the South Carolina Department of Transportation (SCDOT) designated evacuation routes for all major coastal areas, including 14 different ways to leave the coast from 32 regions. The routes take evacuees past I-95 and I-20. Although officers direct traffic at intersections, traffic on roads not in the plan may have long waits to get onto roads in the plan. Moreover, the South Carolina Emergency Preparedness Division (SCEPD) does not call for any traffic-type limitations (i.e., all campers, RVs, and cars with boats are allowed) [South Carolina Department of Public Safety 1999].

# Assumptions

# Assumptions About Hurricanes

- There is exactly one hurricane on the East Coast of the United States at the time of the evacuation.
+- The hurricane, like Floyd, moves along the South Carolina coast. Most Atlantic hurricanes that reach the United States follow a northeasterly path along the coast [Vaccaro 2000]. + +# Assumptions About Traffic Flow + +- All cities act as points. The smaller streets within a city do not affect flow in and out of a city. +- The capacity of a city is the sum of its hotel rooms and the number of cars that can fit on the city's roads. +- The flow between intersections is constant. +- Density of traffic between intersections is constant. +- Charlotte and Gastonia in North Carolina; Spartanburg, Greenville, and Anderson in South Carolina; and Augusta, Georgia are infinite drains, meaning that we do not route people beyond them. Flow out of these cities should not create traffic jams. The cities are also large and therefore should be able to accommodate most if not all incoming evacuees. +- After the order to evacuate is issued, vehicles immediately fill the roads. +- Traffic regulators attempt to maintain the ideal density, using South Carolina's GIS system. +- All motorized vehicles are 16 ft long. This takes into account the percentage of motorcycles, compact cars, sedans, trucks, boats, and RVs and their lengths. +- On average, three people travel in one vehicle. +- The traffic that enters I-95 from Georgia or Florida stays on I-95 and travels through South Carolina. + +# Assumptions About People + +- All people on the coast follow evacuation regulations immediately. +- Drivers obey the speed limit and keep a safe following distance. + +# Flow-Density Relationship + +Flow is the number of vehicles passing a point on the road per unit time. The flow $q$ on a road depends on the velocity $v$ and density $k$ of vehicles on the road: + +$$ +q = k v. \tag {1} +$$ + +Empirical studies suggest that velocity and density are related by [Jayakrishnan et al. 
1996]:

$$
v = u_{f}\left(1 - \frac{k}{k_{j}}\right)^{a}, \tag{2}
$$

where $k_{j}$ is the density of a road in a traffic jam, $a$ is a parameter dependent on the road and vehicle conditions, and $u_{f}$, the free velocity, is the speed at which a vehicle would travel if there were no other vehicles on the road. Generally, the free velocity is the speed limit of the road.

We substitute (2) into (1) to obtain flow as a function of density:

$$
q = k u_{f}\left(1 - \frac{k}{k_{j}}\right)^{a}. \tag{3}
$$

This equation is linear in the free velocity. To find the ideal density that produces the fastest flow, we take the first derivative of the flow with respect to density and set it equal to zero:

$$
u_{f}\left(1 - \frac{k}{k_{j}}\right)^{a} - a u_{f}\frac{k}{k_{j}}\left(1 - \frac{k}{k_{j}}\right)^{a - 1} = 0.
$$

Solving for $k$, we find the ideal density $k_{i}$:

$$
k_{i} = \frac{k_{j}}{a + 1}.
$$

Assuming that all roads behave similarly, we find a numerical value for the ideal density. Jam density is generally between 185 and 250 vehicles/mile [Haynie 2000]; we use the average value of 218 vehicles/mile. By fitting (2) to flow-density data for various cars, road conditions, and driver types in Kockelman [1998], we find that $a$ has an average value of 3 (Figure 1). Therefore, the ideal density is 54 vehicles per mile.

![](images/c6332c8adc0a2c2af0f730bf7be7ef1ad2c03a88bda1c345c298ea190109741a.jpg)
Figure 1. Plot of observed counts vs. density. Data from Kockelman [1998], with our fitted curve of the form (2).

To account for reaction time, vehicles must be spaced at least 2 s apart [NJDOT 1999]. For vehicles spaced exactly 2 s apart, we can find the density of a road where all vehicles are traveling at speed $v$.
The distance $d_{c}$ required by a vehicle traveling at speed $v$ is the sum of the vehicle's length and its following distance:

$$
d_{c} = l + \frac{2v}{3600},
$$

where the units are miles and hours. The maximum safe density of a road is the maximum number of vehicles on the road (the length of the road divided by the space required for each vehicle) divided by the length of the road:

$$
k = \frac{1}{l + \frac{2v}{3600}}.
$$

If each car is 16 ft $(3.03 \times 10^{-3}\ \mathrm{mi})$ long and the density is ideal, then the maximum safe velocity of the vehicles is $27.6\ \mathrm{mph}$.

Knowing the ideal density and the maximum safe velocity at that density, we use (1) to find the maximum flow:

$$
q = kv = 1500. \tag{4}
$$

The free-velocity parameter is needed to find the flow in situations other than ideal. Using (2), we find that the free velocity is $65.2\ \mathrm{mph}$, close to the highway speed limit, thus validating the approach for finding the free velocity. Substituting the known and derived values of free velocity, jam density, and the exponential parameter into (3), we quantify the flow-density relationship:

$$
q = 65.2\,k\left(1 - \frac{k}{218}\right)^{3}.
$$

# Traffic Flow Model

# Mapping the Region

We programmed in Java a simplified map of South Carolina that consists of 107 junctions (cities) and 154 roads. A junction is an intersection point between two or more roads. A road connects exactly two junctions. Our map includes most of the roads in SCEPD's plan and many more. Data for the number of cars, boats, and campers in each city used in the computer model can be found in the Appendix. [EDITOR'S NOTE: We omit the Appendix.]

# Behavior of Cities

Each point in the program stores a city's population and regulates traffic flow into and out of the roads connected to it. First, it flows cars out of the city into each road.
The desired flow out—the maximum number of vehicles that the road can take—is defined by the flow-density equation (4) for the road that the cars are entering. If the total number of vehicles that the roads can accept exceeds the evacuee population of the city, then the evacuees are distributed proportionally among the roads with respect to road size.

Next, the city lets vehicles in. The roads always try to flow into the city at the ideal flow rate. The city counts the total number of cars being sent to it and compares this to its current evacuee capacity. If the evacuee capacity is less than the number of vehicles trying to enter, the city accepts a proportion of the vehicles wanting to enter, depending on road size. A check in the program ensures that the number of vehicles taken from a road does not exceed the number of cars on the road at that time.

After repeating the entering and exiting steps for each road, the city recalculates its current evacuee population, removing all the vehicles that left and adding all the vehicles that entered.

# Behavior of Roads

We define each road by its origin junction, destination junction, length, and number of lanes. The number of lanes is the number of lanes in a given direction under nonemergency circumstances; for example, a road that normally has one lane north and one lane south is considered a one-lane highway. If the number of lanes on a road changes between cities, we use the smaller number. To analyze the possibility of turning both lanes to go only north or only south, our program doubles the number of lanes.

During an evacuation, traffic never needs to flow in both directions, because the net flow of a road that flowed equally in two directions would be zero. Therefore, each road has a direction defined by its origin junction and destination junction.
While the origin junction is normally the point closer to the coast, the program analyzes the possibility of having the road flow from its "destination" to its "origin." In some cases, this could provide the optimal flow out of the coastal areas by finding alternative routes.

We model traffic congestion as a funnel. As long as vehicles are on the road, they attempt to exit at the ideal flow rate. However, if the road begins to fill, then the number of vehicles entering the road varies according to the flow-density equation (4). We determine the new density of the road via

$$
D_{i} = \frac{n_{i} + \Delta n_{i}}{d},
$$

where $n_{i}$ is the initial number of vehicles on the road, $\Delta n_{i}$ is the difference between the cars entered and the cars exited, and $d$ is the length of the road (mi).

# Optimization Algorithm

We use a modified genetic optimization algorithm, beginning from South Carolina's current evacuation plan as the initial solution. The simulation stores a possible solution as two chromosomes: a city chromosome, storing the time to start evacuating each coastal city, and a road chromosome, storing the directions and reversals of the roads. Stored with the solution is the number of people left in the evacuation zone after $24\mathrm{h}$, the usual advance notice for evacuation.

The simulation randomly chooses a chromosome and a gene to mutate. We use a uniform distribution to choose first the chromosome, then the gene, and finally a value for that gene. City genes can take the value of any time step between 0 and $12\mathrm{h}$ before starting to evacuate. Road direction can be the specified direction, the reversed direction, or closed. Opposing lanes can be either reversed or not reversed. If the changed chromosome leaves fewer people in the evacuation region after $24\mathrm{h}$, it replaces the old solution.
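A minimal sketch of this mutate-and-accept loop follows. Here `people_left_after_24h` is a hypothetical stand-in for the team's Java traffic simulation (the toy objective below merely rewards early starts and reversed roads), and for brevity the separate reversed/not-reversed lane gene is folded into the road-state gene; the gene ranges follow the text (start times 0–12 h; road states forward, reversed, or closed).

```python
import random

# Hypothetical stand-in for the team's traffic simulator: number of people
# still in the evacuation zone after 24 h. This toy objective only rewards
# early city start times and reversed roads, for illustration.
def people_left_after_24h(city_starts, road_states):
    unreversed = sum(1 for s in road_states if s != "reversed")
    return 1000 * sum(city_starts) + 5000 * unreversed

def mutate_and_select(city_starts, road_states, iterations=10_000, seed=1):
    rng = random.Random(seed)
    best = people_left_after_24h(city_starts, road_states)
    for _ in range(iterations):
        if rng.random() < 0.5:                      # pick a chromosome uniformly
            i = rng.randrange(len(city_starts))     # pick a city gene
            old, city_starts[i] = city_starts[i], rng.randrange(0, 13)
            new = people_left_after_24h(city_starts, road_states)
            if new <= best:
                best = new                          # keep an improvement (or tie)
            else:
                city_starts[i] = old                # revert a worse mutation
        else:
            i = rng.randrange(len(road_states))     # pick a road gene
            old, road_states[i] = road_states[i], rng.choice(
                ["forward", "reversed", "closed"])
            new = people_left_after_24h(city_starts, road_states)
            if new <= best:
                best = new
            else:
                road_states[i] = old
    return best

print(mutate_and_select([0] * 5, ["forward"] * 8))
```

With only one gene mutated per step and worse mutations reverted, this is a greedy hill climber rather than a population-based genetic algorithm, which is consistent with the paper's observation that it can get stuck in local minima.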
# Results

Knowing that the maximum flow of a one-lane road is 1,500 cars per hour, we first tested South Carolina's current evacuation routes with our modified flow equations, which allow more people to leave the evacuation region. After $24\mathrm{h}$, 556,000 people (58% of the people needing to leave) were still left in the cities needing evacuation. If only I-26 was reversed, the number left dropped to 476,000 (50%). However, if people were allowed to take their boats and campers with them, the number left was 619,000. Therefore, boats, campers, and extra cars generally should not be allowed in the evacuation.

After 10,000 iterations, the solutions were still improving. Therefore, we restarted the program with all roads beginning with lanes reversed and all cities evacuating immediately. After another 10,000 iterations, the program could not find a better solution than one in which 233,000 people (25%) are left.

If I-95 is too congested by traffic from Georgia and Florida to be usable (i.e., I-95 is not used in the simulation), the number of people left is 254,000, $9.4\%$ more than if the highway had been clear.

Increasing Columbia's evacuee capacity helps the evacuation only marginally, removing only 948 more people from danger.

Regardless of the situation, the best solutions always have all cities starting to evacuate immediately: Staggered solutions are not optimal.

# Stability Analysis

We tested the flow equations in the evacuation simulation by varying the exponential parameter, the free velocity, and the jam density. When the exponential parameter was increased or decreased by 1, the number of people evacuated changed by only $0.2\%$ and $1.0\%$, respectively. Doubling or halving the free velocity caused variations of $3.4\%$. Doubling and halving the jam density caused variations of $2.2\%$. The length of the car did not affect the number of people emptied from the city, because the following distance is so large compared to the length of the car.
Finally, we doubled and halved the iterative time step of the evacuation simulation, which did not change the number of people evacuated. Thus, the model was robust in every variable tested.

# Strengths and Weaknesses

# Strengths

- The model can be used for any evacuation. For example, if a meteor were predicted to hit the Atlantic Ocean and flood a strip of land $50\ \mathrm{mi}$ wide along the Atlantic Coast, this model could be used to evacuate residents of South Carolina to areas not affected by the flood.
- Moreover, the program is flexible enough to work for any possible map; it is not specific to the individual roads and cities of South Carolina.
- The model is stable with regard to all variables tested, and the optimization algorithm runs very quickly.

# Weaknesses

- The greatest weakness of this model is that it assumes that people will follow directions: use the two-second following-distance rule, travel at the speed limit, and travel on assigned roads.
- The model assumes density homogeneity along each road after each iteration, while in reality the density varies.
- The model can handle only situations where the roads are empty at the beginning of the simulation.
- We underestimated the holding capacity of the cities, leading to a slower exit from the unsafe regions. Thus, although the relative changes in the results are probably correct, the actual number of people in danger after 24 hours is probably smaller.
- The optimization algorithm can get stuck in local minima.

# Conclusion

Reversing traffic on all evacuation routes evacuates the most people. Traffic from Georgia and Florida is not a problem, but many boats and campers would significantly decrease flow.

Hence, we suggest that roads be reversed and that, to maintain maximum flow, traffic regulators not allow the number of cars in a stretch of road to exceed 54.4 per lane per mile, by regulating on- and off-ramps at cities.
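The headline numbers of this paper can be reproduced directly from the flow-density derivation. The following sketch (our illustration, not the authors' code) uses a jam density of 217.5 vehicles/mile, the exact average of the 185–250 range that the text rounds to 218, together with the 16-ft car length and 2-s headway assumptions:

```python
# Numeric check of the flow-density derivation (eqs. 1-4).
k_j = (185 + 250) / 2           # jam density, veh/mi (the paper rounds to 218)
a = 3                           # fitted exponent from the Kockelman data
l = 16 / 5280                   # vehicle length in miles

k_i = k_j / (a + 1)             # ideal density k_j / (a + 1)
v = (1 / k_i - l) * 3600 / 2    # speed at which 2-s spacing gives density k_i
q = k_i * v                     # maximum flow, veh/h per lane
u_f = v / (1 - k_i / k_j) ** a  # implied free velocity from eq. (2)

print(round(k_i, 1), round(v, 1), round(q), round(u_f, 1))
```

This prints an ideal density of 54.4 veh/mi, a safe speed of 27.6 mph, a maximum flow of about 1,500 veh/h per lane, and a free velocity of about 65.5 mph, matching the paper's figures up to rounding (the text reports 65.2 mph).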
# References

Feigenbaum, Edward D., and Mark L. Ford. 1984. South Carolina Title 25, Chapter 1, Article 4. In *Emergency Management in the States*, 46. Council of State Governments.
Haynie, S.G. 2000. WWW Transportation Tutorial Site. North Carolina State University. http://www.eos.ncsu.edu/eos/info/ce400_info/www2/flow1.html. Accessed 12 February 2001.
Jayakrishnan, R., U. Rathi, C. Rindt, and G. Vaideeshwaran. 1996. Enhancements to a simulated framework for analyzing urban traffic networks with ATIS/ATMS. PATH Research Report UCB-ITS-PRR-96-27 (October 1996).
Kockelman, K.M. 1998. Changes in the flow-density relation due to environmental, vehicle, and driver characteristics. *Transportation Research Record* (Paper Number 980734) 1644: 47-56.
New Jersey Department of Transportation (NJDOT). 1999. 1999 Driver's Manual. http://liberty.state.nj.us/mvs/dm99/99ch5c.htm#Two. Accessed 12 February 2001.
Real-time GIS assists hurricane evacuation. 2000. American City and County (February 2000) 115 (2): 24.
South Carolina Emergency Preparedness Division, South Carolina Department of Public Safety. 1999. Evacuation Plan. http://www.dot.state.sc.us/getting/evacmid.html. Accessed 12 February 2001.
Vaccaro, Chris. 2000. 1999 hurricane season closes, sets records. USA Today. http://www.usatoday.com/weather/hurricane/1999/atlantic/wrapup99.htm. Accessed 12 February 2001.

# The Crowd Before the Storm: Improved Hurricane Evacuation Routes Planned

COLUMBIA, SC, FEB. 12—A new mathematical model prepared for the South Carolina Emergency Preparedness Division should end aggravating and dangerous backups when a hurricane threatens South Carolina. In response to the monstrous traffic jams that turned I-26 northbound out of Charleston into a parking lot during the Hurricane Floyd evacuation, the new model distributes traffic over several smaller routes.
One of the new program's controversial traffic-control methods is to make all lanes of traffic on evacuation routes run in the same direction, away from the beach, to improve traffic flow.

People with campers, RVs, boats, and more than one car should leave as early as possible if a hurricane is predicted; once the new plan is implemented, families may be permitted only one car, to reduce traffic.

The model counteracts these minor inconveniences by evacuating $75\%$ of the population at risk within 24 hours, compared to $42\%$ under South Carolina's current plan.

Although more people may be evacuated, don't expect to get out of the region too quickly. The model predicts that the fastest evacuation will occur if all cars travel at 28 mph. At that rate, it will take you 4 hours to get out of the evacuation region.

The model used a "genetic algorithm" approach, which involves testing possible solutions against each other and "breeding" new ones from the best ones so far. Further analysis showed that reversing lanes of all roads along the evacuation routes is the best method for quick evacuation. The computer tested 10,000 minor changes without finding a more effective solution.

Increasing the housing and parking capacity of Columbia by constructing a shelter there would be only mildly helpful. On a more positive note, residents do not need to worry about traffic from Georgia and Florida slowing the evacuation.

— Jonathan David Charlesworth, Finale Pankaj Doshi, and Joseph Edgar Gonzalez, in Richmond, Virginia.

# Jammin' with Floyd: A Traffic Flow Analysis of South Carolina Hurricane Evacuation

Christopher Hanusa

Ari Nieh

Matthew Schnaider

Harvey Mudd College

Claremont, CA

Advisor: Ran Libeskind-Hadas

# Introduction

We analyze the 1999 Hurricane Floyd evacuation with a traffic-flow model, explaining the extreme congestion on I-26.
Then we look at the new South Carolina Hurricane Evacuation Plan, which includes lane reversals. We analyze their effect: Lane reversals would significantly benefit traffic leaving Charleston. With lane reversals, the maximum number of vehicles passing any point on I-26 is 6,000 cars/h.

We develop two plans to evacuate the South Carolina coast: the first staggered by geographic location, the second by license-plate parity.

We explore the use of temporary shelters; we find that I-26 has sufficient capacity for oversized vehicles; and we determine the effects of evacuees from Georgia and Florida.

# Traffic Flow Model

The following definitions and model are taken directly from Mannering and Kilareski [1990, 168-182].

The primary dependent variable is the level of service (LOS), or amount of congestion, of a roadway. There are six LOS conditions, A through F, with A being the least congested and F the most congested. We focus on the distinction between levels E and F.

- Level of Service $E$ represents operating conditions at or near capacity level. All speeds are reduced to a low but relatively uniform value, normally between 30 and 46 mph.
- Level of Service $F$ is used to define forced or breakdown flow, with speeds of less than $30\ \mathrm{mph}$. This condition exists wherever the amount of traffic approaching a point exceeds the amount that can traverse that point. Queues form behind such locations.

If we enter LOS F, the roadway has exceeded its capacity and the usefulness of the evacuation has broken down. An evacuation strategy that results in a highway reaching LOS F is unacceptable.

For a given highway, we can determine the maximum number of vehicles that can flow through a particular section while maintaining a desired level of service. To make this more concrete, we define the characteristic quantity maximum service flow.

Definition.
Maximum Service Flow $(\mathrm{MSF}_i)$ for a given level of service $i$, assuming ideal roadway conditions, is the maximum possible rate of flow for a peak 15-min period, expanded to an hourly volume and expressed in passenger cars per hour per lane (pcphpl). To calculate the MSF of a highway for a given LOS, we multiply the road's capacity under ideal conditions by the volume-to-capacity ratio for the desired LOS. More formally,

$$
\mathrm{MSF}_{i} = c_{j}\left(\frac{v}{c}\right)_{i}, \tag{1}
$$

where $c_{j}$ is the capacity under ideal conditions for a freeway with Design Speed $j$, and $(v/c)_i$ is the maximum volume-to-capacity ratio associated with LOS $i$. For highways with 60- and 70-mph design speeds, $c_{j}$ is 2,000 pcphpl [Transportation Research Board 1985]. Since LOS E is considered to be "at capacity," $(v/c)_{\mathrm{E}} = 1.0$. The design speed of a road is based mostly on the importance and grade of the road; roads that are major and have shallower grades have higher design speeds. The elevation profile along I-26 shows that South Carolina is flat enough to warrant the highest design speed.

An immediate consequence of (1) is that to maintain $\mathrm{MSF_E}$ or better (which we consider necessary for a successful evacuation), the number of passenger cars per hour per lane must not exceed 2,000 for any highway.

For it to be useful in model calculations, we need to convert the maximum service flow to a quantity that conveys information about a particular roadway. This quantity is known as the service flow rate of a roadway.

Definition. The service flow rate for level of service $i$, denoted $\mathrm{SF}_i$, is the actual maximal flow that can be achieved given a roadway and its unique set of prevailing conditions.
The service flow rate is calculated as

$$
\mathrm{SF}_{i} = \mathrm{MSF}_{i}\,N f_{w} f_{\mathrm{HV}} f_{p}, \tag{2}
$$

in terms of the adjustment factors:

$N$: the number of lanes,

$f_{w}$: the adjustment for nonideal lane widths and lateral clearances,

$f_{\mathrm{HV}}$: the effect of nonpassenger vehicles, and

$f_{p}$: the adjustment for nonideal driver populations.

We assume that the lanes on I-26 and other highways are ideal (i.e., $f_{w} = 1$): at least 12 ft wide with obstructions at least 6 ft from traveled pavement [Mannering and Kilareski 1990]. To account for driver unfamiliarity with reversed lanes and the stress of evacuation, we set $f_{p} = 0.7$ for reversed lanes and $f_{p} = 0.8$ for normal lanes, in accordance with Mannering and Kilareski [1990]. The model also employs an adjustment factor, denoted $f_{\mathrm{HV}}$, for the reduction of flow due to heavy vehicles such as trucks, buses, RVs, and trailers. Later we discuss the effects of heavy vehicles on traffic flow.

# Strengths and Weaknesses

This model is easy to implement, the mathematics behind it is quite simple, and it is backed by the Transportation Research Board. We establish its reliability by using it to predict traffic flow patterns in the 1999 evacuation.

We assume that the number of lanes does not change, which requires that there be no lane restrictions throughout the length of the freeway and that no lanes are added or taken away by construction.

The major weakness of our model is that it fails to take into account the erratic behavior of people under the strain of a natural disaster.

The simplicity of our model also limits its usefulness. It can be applied only to normal highway situations, not to a network of roads.

# Improving Evacuation Flow

Gathering data from various sources, we estimate the number of vehicles used in the 1999 evacuation.
According to Dow and Cutter [2000], $65\%$ of households that were surveyed chose to evacuate. About $70\%$ of evacuating households used at most one vehicle, leaving $30\%$ of households taking two vehicles. Of the evacuees, $25\%$ used I-26 during the evacuation. Based on population estimates [County Population Estimates ... 1999] and the average number of people per household [Estimates of Housing Units ... 1998], and assuming a relatively uniform distribution of people per household, we calculate the number of vehicles used during the evacuation (Table 1).

Table 1. Evacuation participation estimates for Hurricane Floyd, in thousands.

| Region   | Population | Evacuees | Evacuating households | Vehicles | Vehicles on I-26 |
|----------|------------|----------|-----------------------|----------|------------------|
| Southern | 187        | 122      | 47                    | 61       |                  |
| Central  | 553        | 359      | 139                   | 181      |                  |
| Northern | 233        | 152      | 59                    | 76       |                  |
| Total    | 973        | 632      | 245                   | 319      | 61               |
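The service-flow formula (2) with these estimates can be checked numerically. The sketch below (our illustration, not the authors' code) assumes $\mathrm{MSF_E} = 2{,}000$ pcphpl, ideal lane widths ($f_w = 1$), passenger cars only ($f_{\mathrm{HV}} = 1$), and the driver-population factors given above (0.8 for normal lanes, 0.7 for reversed lanes):

```python
# Service flow rate, eq. (2): SF_i = MSF_i * N * f_w * f_HV * f_p.
def service_flow(msf, lanes, f_w=1.0, f_hv=1.0, f_p=0.8):
    return msf * lanes * f_w * f_hv * f_p

normal = service_flow(2000, 2)                            # two Columbia-bound lanes
with_reversal = normal + service_flow(2000, 2, f_p=0.7)   # plus two reversed lanes

print(normal, with_reversal)   # 3200.0 6000.0
```

These are the 3,200 vehicles/h capacity of the two Columbia-bound lanes and the 6,000 vehicles/h capacity with lane reversal used in the discussion that follows.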
# Reversing Lanes

According to our model, the capacity of a highway is directly proportional to the number of lanes. This implies that lane reversal would nearly double the capacity of I-26.

Approximately 319,000 vehicles were used to evacuate the coastal counties of South Carolina. Of evacuees surveyed by Dow and Cutter [2000], $16.3\%$ evacuated between noon and 3 P.M. on Sept. 14. Assuming independence between the above factors, during those three hours I-26 must have been clogged by an attempted influx of about 3,300 vehicles/h. Even if evenly distributed, this was more than the 3,200 vehicles/h that the two Columbia-bound lanes of I-26 could take under evacuation conditions. The result was LOS F—a large traffic jam. Our model predicts that this jam would have lingered for hours, even after the influx of vehicles had died down.

What if the coastal-bound lanes of I-26 were reversed? With corrections for nonideal conditions, our model predicts an $\mathrm{SF_E}$ of 6,000 vehicles/h. Therefore, reversing the lanes of I-26 has the potential to increase the service flow rate by a factor of nearly 1.9.

# Simultaneous Evacuation Strategies

# By Hurricane Path

Hurricanes sweep from south to north. Because a hurricane commonly travels at a speed of less than $30\ \mathrm{mph}$, the southernmost counties of South Carolina would be affected at least two hours before the northernmost ones.

However, analysis indicates that a staggered evacuation strategy would not improve the speed of the evacuation. The evacuation routes are largely parallel to one another and rarely intersect. Thus, the evacuation of each county should affect only the traffic on evacuation routes of nearby counties. Therefore, postponing evacuation of counties farther from the hurricane would be counterproductive.

# By County

What about avoiding simultaneous evacuation of adjacent counties?
We recommend evacuating Jasper, Beaufort, Charleston, Georgetown, and Horry counties in the first wave, and leaving Hampton, Colleton, Dorchester, and Berkeley until $3 - 6\ \mathrm{h}$ later, depending on the time of day. This solution would decrease the probability of traffic reaching LOS F on any highway without significantly delaying the evacuation. The nearby state of Virginia has a similar plan for evacuating county by county [Virginia Hurricane ... 1991].

# By License Plate Number

By dividing cars into two categories, depending on the parity of the last digit of their license plate, we could separate traffic into two waves without giving preference to residents of any county. Our solution would request that the even group evacuate $3 - 6\ \mathrm{h}$ after the odd group was given the evacuation order. This would spread out the hours of peak evacuation traffic, resulting in improved traffic conditions and decreased risk of reaching LOS F. A comparison of Figures 1 and 2 demonstrates the change in the time distribution of evacuation when half of the drivers evacuate six hours later. Clearly, the distribution is much smoother, reducing the likelihood of reaching LOS F.

![](images/a50cb3b0f8ef9b047af2bddfdaf31ab58a2cbd0830ee77786ab7dacd9a774b8a.jpg)
Figure 1. Hurricane Floyd: Fraction of evacuating population vs. hours after the 1999 mandatory evacuation order. (Data from Dow and Cutter [2000].)

![](images/97fffce562a442d88813955d0629620419c968e1f3e6072dc5db89fc7b21fb29.jpg)
Figure 2. Even/odd license plate plan: Our projected fraction of the evacuating population vs. hours after the mandatory evacuation order.

# Lane Reversal on Smaller Highways

As only $29\%$ of evacuees took I-26 or I-95, the majority took smaller roads inland. Because in our model flow grows linearly with the number of available lanes, lane reversals should improve evacuation rates on all roads.
Because the evacuation routes are nearly perpendicular to the coastline, there is little risk of opposing traffic being disrupted by these reversals. The logistics of such an action, however, might be prohibitive.

The number of personnel needed to facilitate the I-26 lane reversal is 206 [South Carolina Emergency Preparedness Division 2000]. Smaller highways have shorter distances between exits, which suggests that more personnel per mile would be needed to blockade highway entrances. The total length of highway on all evacuation routes is approximately ten times that of I-26. Therefore, a truly prodigious amount of human effort would be necessary to implement lane reversals on all evacuation routes.

It would be imprudent to spend resources for what we estimate to be only a marginal gain in actual highway use. According to Dow and Cutter [2000], these alternative routes did not even approach capacity during the last evacuation. Since our license-plate evacuation strategy increases the overall throughput of major evacuation routes through lane reversal and a smoother time distribution, there is no reason to expect a heavier load on state and county roads.

So, there is little evidence to support the utility of lane reversal on all smaller roads. Still, reversing lanes on a small number of evacuation routes might prove useful. The three major population centers of the coast (Beaufort, Charleston, and Horry counties) have different evacuee distributions. Therefore, the highways that most merit reversal are I-26; U.S. 501 from Myrtle Beach to Marion and U.S. 301 from Marion to Florence; and the southern corridor from Beaufort County to the Augusta area.

# Effect of Additional Temporary Shelters

In 1999, South Carolina housed about 325,000 people in shelters [Dow and Cutter 2000]. In a hurricane, roughly one-third of evacuees go to each of three destinations: public shelters, the homes of family and friends, and commercial establishments [Zelinsky and Kosinski 1991, 39-44].

According to South Carolina Hurricane Information [2001], the number of predesignated shelters in Columbia is insignificant. However, there must be an efficient way to funnel the evacuees to the evacuation sites, such as a central coordination center with an up-to-date list of where the next group of cars should go.

# Vehicle Type Restrictions

Although our model generally calculates flow using only normal passenger cars, it is not difficult to take other types of vehicles into account. The equation used to calculate the heavy-vehicle adjustment factor is

$$
f_{\mathrm{HV}} = \frac{1}{1 + 0.6P}, \tag{3}
$$

where $P$ is the proportion of nonpassenger vehicles (RVs, trailers, and boats).

Using this equation, our model predicts an upper bound on the proportion of nonpassenger vehicles that can occur without causing LOS F. We demonstrate this with a sample calculation using I-26. Earlier, we estimated the $\mathrm{SF_E}$ of I-26, including reversed lanes, as 6,000 pcphpl. We also estimated that a maximum of 3,300 vehicles/h would enter I-26, ignoring the possibility of spikes in activity. Therefore, the minimum safe value of $f_{\mathrm{HV}}$ is approximately $3{,}300/6{,}000 \approx 0.55$; solving $1/(1 + 0.6P) \geq 0.55$ gives $P \leq 1.36$, a bound that no actual vehicle mix can exceed. Thus I-26 has enough leeway to support any mix of passenger cars and heavy vehicles, and there is no need to restrict large vehicles on I-26.

# Georgians, Floridians, and the I-95 Corridor

According to Georgia's hurricane evacuation plan [Hurricane Evacuation Routes 2001], I-95 is not a valid evacuation route. However, thousands of Floridians and Georgians flocked north on I-95 during Hurricane Floyd. In Savannah, the most popular evacuation route was I-16, which goes directly away from South Carolina [Officials deserve high marks ... 1999]. In South Carolina, as shown in Figures 1 and 2, the farther away the destination, the smaller the percentage of the evacuee population that plans to go there.

Taking all this into account, a realistic upper bound for the percentage of Georgians or Floridians using I-95 is $20\%$.

Any population entering South Carolina on I-95 from Georgia or Florida is mostly bound for major cities; an upper bound on the traffic headed through Columbia would be $75\%$. Because Floyd's landfall was extraordinarily unpredictable, we propose that the Floyd evacuation was among the largest that will ever affect South Carolina.

Our reasoning is as follows: Hurricanes of lesser strength have fewer evacuees. If the landfall of the hurricane is more southerly, there is less need to evacuate South Carolina and North Carolina, and so there will be less traffic on I-26. Lastly, if the hurricane tends more towards the north, the number of evacuee drivers from Georgia and Florida taking I-95 will be greatly decreased. So, we can take Floyd as a relative upper bound on evacuees.

From CNN's coverage of the lead-up to Hurricane Floyd's landfall, we know that in Georgia "the evacuation orders affected 500,000 people." We bound this rough estimate by 600,000. So the upper bound on people using I-95 can be estimated as $(0.20)(0.75)(600,000) \approx 90,000$, or about 45,000 vehicles, spread over a two-day period. From Dow and Cutter [2000], we know that about $10\%$ of South Carolina evacuees used I-95, so the Georgians and Floridians effectively doubled the traffic on I-95, a substantial impact on the model that we have proposed.

# Improvements in the Model

Our model needs additional evacuation data. With precise statistics regarding the number of evacuees, routes taken, time distributions, and traffic conditions, we could apply it to a greater variety of situations.

Additional refinements might be made to the parameters of the model with information on the highways themselves. Lane widths and distances to roadside obstacles affect the service flow rate, and knowing the exact layout of the highways would enable us to take them into account.
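Checks like the equation (3) calculation for I-26 are easy to automate as part of such refinements. A minimal Python sketch (the function names are ours; the 3,300 vehicles/h demand and 6,000 pcphpl capacity are the estimates quoted in the text):

```python
# Minimal sketch automating the equation (3) heavy-vehicle check; the
# demand and capacity figures below come from the text's I-26 estimates,
# and the function names are our own invention.
def f_hv(p):
    """Heavy-vehicle adjustment factor, equation (3): 1 / (1 + 0.6 P)."""
    return 1.0 / (1.0 + 0.6 * p)

def max_heavy_fraction(demand_vph, capacity_pcphpl):
    """Largest heavy-vehicle proportion P keeping f_HV at or above
    demand/capacity, from solving 1 / (1 + 0.6 P) >= demand/capacity."""
    f_min = demand_vph / capacity_pcphpl
    return (1.0 / f_min - 1.0) / 0.6
```

With the I-26 figures, `max_heavy_fraction(3300, 6000)` exceeds 1, so no realistic vehicle mix forces LOS F, reproducing the conclusion that no restriction on large vehicles is needed.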

We could also use information regarding the resources available to the state: how many personnel and vehicles would be available to run lane reversals.

With sufficient information, we could use this model to create a simulation of a hurricane evacuation. We would treat the highways of South Carolina as edges in a network flow problem and run a discrete computer simulation to test our premises and conclusions regarding evacuation policies.

# References

County Population Estimates for July 1, 1999. 1999. http://www.census.gov/population/estimates/county/co-99-1/99C1_45.txt.
Dow, K., and S.L. Cutter. 2000. South Carolina's Response to Hurricane Floyd. Quick Response Report 128. Columbia, SC: University of South Carolina. http://www.colorado.edu/hazards/qr/qr128/qr128.html.
Estimates of Housing Units, Households, Households by Age of Householder, and Persons per Household: July 1, 1998. 1999. http://www.census.gov/population/estimates/housing/sthuhh1.txt.
Floyd bashes Bahamas, takes aim at Florida. 1999. http://www.cnn.com/WEATHER/9909/14/floyd.07/.
Gartner, Nathan, Carroll J. Messer, and Ajay K. Rathi. 1992. Traffic Flow Theory: A State of the Art Report. Revised monograph. Special Report 165. Oak Ridge, TN: Oak Ridge National Laboratory. http://www.tfhrc.gov/its/tft/tft.htm.
Hurricane Evacuation Routes—TextForm. 2001. http://www.georgia-navigator.com/hurricane/textroutes.html.
Mannering, F., and W. Kilareski. 1990. Principles of Highway Engineering and Traffic Analysis. New York: Wiley.
Number of Households in South Carolina by County: 1950-1990 and Projected 1995. 1996. http://www.sciway.net/statistics/scsa96/hou/hou13.html.
Officials deserve high marks for handling of Hurricane Floyd threat. 1999. http://www.onlineathens.com/stories/091999/ope_0919990026.shtml.
Public Roads. 1999. http://www.tfhrc.gov/pubreds/janfeb99/traffic.htm.
South Carolina Emergency Preparedness Division. 2000.
Traffic Management Central Coastal Conglomerate Annex I-26 Lane Reversal Operation Plan. Formerly available at http://www.state.sc.us/epd/hurricaneplan/ccci26rev.pdf; now at http://www.cs.hmc.edu/~matt/ccc.pdf.
South Carolina Hurricane Evacuation Routes. 2001. http://www.dot.state.sc.us/getting/evacuationmap.pdf.
South Carolina Hurricane Information. 2000. http://www.scan21.com/selectcounty.html.
South Carolina Population Density. 1999. http://www.callsouthcarolina.com/Maps/Map-Pop_Density.htm.
Total Number of Housing Units by County: 1970, 1980 and 1990. 1999. http://www.ors.state.sc.us/abstract_99/chap12/hou3.htm.
Transportation Research Board. 1985. Highway Capacity Manual. Special Report 209.
Virginia Hurricane Evacuation Routes. 1991. http://www.vdot.state.va.us/traf/hurricane.html.
Zelinsky, Wilbur, and Leszek A. Kosinski. 1991. The Emergency Evacuation of Cities: A Cross-National Historical and Geographical Study. Savage, MD: Rowman and Littlefield.

# Maurice Knocks on Door, No One Home

COLUMBIA, S.C. (AP)—Hurricane season has come with a fury to South Carolina, where Hurricane Maurice, the 13th named storm of the season, bears down on Charleston this evening. A record $80\%$ of the population has been evacuated under the new evacuation plan.

When Hurricane Floyd narrowly missed South Carolina in 1999, the lack of preparedness for an evacuation of such magnitude was highly evident. The state government asked the Research All Day Corporation (RAD Corp.) to come up with a new evacuation plan that would help the coastal residents escape the ferocity of a similar storm.

The RAD Corporation's team of world-class hurricane experts and top-notch traffic engineers analyzed the situation and developed a new evacuation plan. "The basic idea of the plan stems from simple math," explained Dr. K. Esner, Director of RAD Modeling. "Two sets of two lanes almost doubles the evacuation rate."

When asked to explain further, Dr.
Esner continued, "On I-26, where there was a colossal traffic jam in 1999, we decided to reverse the flow of the coastal-bound lanes as soon as a mandatory evacuation was declared." In this way, people leaving Charleston, the most populous city in South Carolina, could take either the normal two lanes of I-26 or the two "contra-flow" (reversed) lanes of I-26 all the way to Columbia.

The evacuees in the many Columbia shelters seemed in good spirits. There was much less traffic-related annoyance than was felt in 1999. John C. Lately, a British resident of Myrtle Beach, joked, "You realize that in England, driving on the left is commonplace; I felt right at home."

There was nothing but praise for the RAD engineers. "A remarkable difference was seen between the chaos of evacuating for Hurricane Floyd in 1999 and the evacuation today," said Joseph P. Riley, Jr., mayor of Charleston. "This time, the drive between Charleston and Columbia took 4 hours instead of 18. And it's a good thing, too; this time the storm didn't miss."

Another feature of the new evacuation plan was the breakup of the evacuating public into two groups. "One of our concerns about the 1999 Floyd evacuation was the volume of cars all trying to access the emergency roads at the same time," explained Dr. Esner. He continued, "To alleviate the traffic volume pressure, we wanted to divide the population into two groups. We had two different proposals; we could break up the population geographically or basically divide the population right down the middle, using even/odd license plate numbers."

With the proposed RAD plan, people with even license plates left in the first group, right when the evacuation order was given, and people with odd license plates or vanity plates left starting 6 hours later. "I thought the plan was crazy," remarked Charles Orange, a 24-year-old Charleston resident.
"They told us to evacuate by license plate number; you'd never think high school math would help you one day, but this is one time it did!" he exclaimed. + +Using the 1999 data, the RAD researchers calculated that breaking the evacuating population into two equal groups and delaying one group by 6 hours led to a condition where the volume of cars at no time exceeded the maximum volume that the road could handle. In this way, there was no problem with traffic jams, and Charleston became a ghost town, safe for Maurice to make its appearance. + +The majority of the people left on the beaches are surfers and media, but even they are sparse in number; all that remains is a hurricane without an audience. + +Hurricane Maurice could not be reached for comment. + +— Christopher Hanusa, Ari Nieh, Matthew Schnaider, in Claremont, Calif. + +# Blowin' in the Wind + +Mark Wagner + +Kenneth Kopp + +William E. Kolasa + +Lawrence Technological University + +Southfield, MI + +Advisor: Ruth G. Favro + +# Introduction + +We present a model to determine the optimal evacuation plan for coastal South Carolina in the event of a large hurricane. The model simulates the flow of traffic on major roads. We explored several possible evacuation plans, comparing the time each requires. + +Traffic flow can be significantly improved by reversing the eastbound lanes of I-26 from Charleston to Columbia. By closing the interchange between I-26 and I-95 and restricting access to I-26 at Charleston, we can reduce the overall evacuation time from an original $31\mathrm{h}$ to $13\mathrm{h}$ . + +However, a staggered evacuation plan, which evacuates the coastline county by county, does not improve the evacuation time, since traffic from each coastal population center interferes little with traffic flowing from other areas being evacuated. Although reversing traffic on other highways could slightly improve traffic flow, it would be impractical. 
Restrictions on the number and types of vehicles could speed up the evacuation but would likely cause more problems than improvements.

# Theory of Traffic Flow

We require a model that simulates traffic flow on a large scale rather than individual car movement. We take formulas to model traffic flow from Beltrami [1998]. Although traffic is not evenly distributed along a segment of road, it can be modeled as if it were when large segments of road are being considered. We can measure the traffic density of a section of road in cars/mi. The traffic speed $u$ at a point on the road can be calculated from the density according to the formula

$$
u(\rho) = u_{m} \left(1 - \frac{\rho}{\rho_{m}}\right),
$$

where $\rho$ is the traffic density, $u_{m}$ is the maximum speed of any car on the road, and $\rho_{m}$ is the maximum traffic density (with no space between cars). We define the flow of traffic at a point on the road as the number of cars passing that point in a unit of time. The flow $q$ can be easily calculated as

$$
q(\rho) = \rho u.
$$

It is the flow of traffic that we desire to optimize, since greater flow results in a greater volume of traffic moving along a road.

# Assumptions

- During an evacuation, there is an average of 3 people per car. This is reasonable, since people evacuate with their entire families, and the average household in South Carolina has 2.7 people, according to the 1990 census.
- The average length of a car on the road is about 16 ft.
- In a traffic jam, there is an average of 1 ft of space between cars.
- The two assumptions above lead to a maximum traffic density of

$$
\frac{5280~\mathrm{ft/mi}}{17~\mathrm{ft/car}} \approx 310~\mathrm{cars/mi/lane}.
$$

- The maximum speed is $60\mathrm{mph}$ on a 4-lane divided highway, $50\mathrm{mph}$ on a 2-lane undivided country road.
- Vehicles follow natural human tendencies in choosing directions at intersections, such as preferring larger highways and direct routes.
- The traffic flow of evacuees from Florida and Georgia on I-95 is a continuous stream inward to South Carolina.
- When vehicles leave the area of the model, they are considered safely evacuated and no longer need to be tracked.
- There will not be traffic backups on the interstates at the points at which they leave the area of the model.
- A maximum of 30 cars/min can enter or exit a 1-mi stretch of road in a populated area, by means of ramps or other access roads. Up to the maximum exit rate, all cars desiring to exit a highway successfully exit.
- The weather does not affect traffic speeds. The justifications are:
  - During the early part of the evacuation, when the hurricane is far from the coast, there is no weather to interfere with traffic flowing at the maximum speed possible.
  - During the later part of the evacuation, when the hurricane is approaching the coast, traffic flows sufficiently slowly that storm weather would not further reduce the speed of traffic.
- There are sufficient personnel available for any reasonable tasks.

# Objective Statement

We measure the success of an evacuation plan by its ability to move everyone from the endangered areas to safe areas between the announcement of mandatory evacuation and the landfall of the hurricane; the best evacuation plan takes the shortest time.

# Model Design

# The Traffic Simulator

Our traffic simulator is based on the formulas above. Both space and time are discretized, so that the roads are divided into 1-mi segments and time is divided into 1-min intervals. Vehicles enter roads at on-ramps in populated areas, leave them by off-ramps, and travel through intersections to other roads.
+ +Each 1-mi road segment has a density (the number of cars on that segment), a speed (mph), and a flow (the maximum number of cars that move to the next 1-mile segment in 1 min). Each complete road section has a theoretical maximum density $\rho_{m}$ and a practical maximum density $\rho_{m}^{\prime}$ (accounting for 1 ft of space between cars), which can never be exceeded. + +# Moving Traffic Along a Single Road + +The flow for each road segment is calculated as + +$$ +q (\rho) = \frac {\rho u}{u _ {m}}. +$$ + +If the following road segment is unable to accommodate this many cars, the flow is the maximum number of cars that can move to the next segment. + +# Moving Traffic Through Intersections + +When traffic reaches the end of a section of road and arrives at an intersection, it must be divided among the exits of the intersection. For each intersection, we make assumptions about percentages of cars taking each direction, based on the known road network, the capacities of the roads, and natural human tendencies. If a road ends at an intersection with no roads leading out (i.e., the state border), there is assumed to be no traffic backup; traffic flow simply continues at the highest rate possible, and the simulation keeps track of the number of cars that have left the model. + +Conflicts occur when more cars attempt to enter a road section at an intersection than that road section can accommodate. Consider a section of road that begins at an intersection. Let: + +$q_{\mathrm{max}} = \rho_m' - \rho =$ the maximum influx of cars the road can accommodate at the intersection, + +$q_{1},\ldots ,q_{n} =$ the flows of cars entering the road at an intersection, and + +$q_{\mathrm{in}} = \sum q_i =$ the total flow of cars attempting to enter the road at the intersection. 
+ +If $q_{\mathrm{in}} > q_{\mathrm{max}}$ , then we adjust the flow of cars entering the road from its entrance roads as follows: + +$$ +q _ {i} ^ {\prime} = \frac {q _ {i}}{q _ {i n}} q _ {\mathrm {m a x}}. +$$ + +Therefore, $q_{i}^{\prime}$ is the number of cars entering the road from road $i$ . The flow of traffic allowed in from each road is distributed according to the flow trying to enter from each road. Clearly, $\sum q_{i}^{\prime} = q_{\max}$ . + +# Simulating Populated Areas + +A section of road that passes through a populated area has cars enter and leave by ramps or other access roads. We assume that the maximum flow of traffic for an access ramp is 30 cars/min. We estimate the actual number of cars entering and leaving each road segment based on the population of the area. + +Cars cannot enter a road if its maximum density has been reached. For simplicity, however, we assume that cars desiring to exit always can, up to the maximum flow of 30 cars/min per exit ramp. + +We desire to know how the population of each populated area changes during the evacuation, so that we can determine the time required. Therefore, we keep track of the population in the areas being evacuated, Columbia, and certain other cities in South Carolina. If all people have been evacuated from an area, no more enter the road system from that area. + +Areas do not have to be evacuated immediately when the simulation starts. Each area may be assigned an evacuation delay, during which normal traffic is simulated. Once the delay has passed, traffic in the area assumes its evacuation behavior. + +# Completing an Evacuation + +The six coastal counties of South Carolina (where Charleston includes the entire Charleston area) and the roads leading inland from these areas must be evacuated. When the population of these areas reaches zero, and the average traffic density along the roads is less than 5 cars/mi, the evacuation is complete and the simulation terminates. 
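The update rules above can be collected into a short, self-contained sketch. This is an illustrative Python rendering (not the team's actual implementation), assuming the 60-mph highway case, in which $\rho u/u_{m}$ equals cars per minute:

```python
# Illustrative sketch of one simulator update, assuming the Greenshields
# speed-density relation and 1-mi segments with 1-min time steps.
RHO_MAX = 310.0   # practical maximum density, cars/mi/lane (17 ft per car)
U_MAX = 60.0      # free-flow speed, mph

def speed(rho):
    """Greenshields relation: u = u_m * (1 - rho / rho_m)."""
    return U_MAX * (1.0 - rho / RHO_MAX)

def step_flow(rho):
    """Cars leaving a 1-mi segment during one 1-min step (per lane)."""
    return rho * speed(rho) / 60.0   # mph -> mi/min

def allocate(entry_flows, q_max):
    """Resolve an intersection conflict: scale each entering flow q_i by
    q_max / q_in when total demand q_in exceeds the receiving capacity."""
    q_in = sum(entry_flows)
    if q_in <= q_max:
        return list(entry_flows)
    return [q * q_max / q_in for q in entry_flows]
```

For example, `allocate([30, 10], 20)` admits 15 and 5 cars, preserving the proportions of the competing flows as in the adjustment formula above.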

# Implementing the Model

We implemented the model described above in a computer program written in C++. The logic for the main function is as follows: For each road, we let traffic exit, resolve traffic at intersections, move traffic along the rest of the road, and finally let cars enter the road. We loop until the evacuation is complete.

Traffic flow is considered simultaneous; the traffic flow along every road is determined before traffic densities are updated. However, exits occur first and entrances last, to accurately simulate traffic at access ramps.

# Model Results

# Simulating the 1999 Evacuation

To simulate the evacuation of 1999, we prepared a simplified map that includes the interstates, other 4-lane divided highways, and some 2-lane undivided roads. We simulated the evacuation of the coastal counties—Beaufort, Jasper, Colleton, Georgetown, and Horry (including Myrtle Beach)—and the Charleston metro area. The inland areas we considered are Columbia, Spartanburg, Greenville, Augusta, Florence, and Sumter. In addition, we simulated large amounts of traffic from farther south entering I-95 N from the Savannah area. A map of the entire simulation is shown in Figure 1.

Running this simulation with conditions similar to those of the actual evacuation produced an evacuation time of $31\mathrm{h}$ to get everyone farther inland than I-95. This is significantly greater than the actual evacuation time and completely unacceptable. The increase in time can be explained by two features of the actual evacuation that are missing in the simulation:

- Only $64\%$ of the population of Charleston left when the mandatory evacuation was announced [Cutter and Dow 2000; Cutter et al. 2000]; our model assumes that everyone leaves.
- Late in the day, the eastbound lanes of I-26 were reversed, eliminating the congestion.

![](images/ed850a583431572e9a2c6e20c8c34a7e88267c676d67d4c87330362ccc67aeb6.jpg)
Figure 1. Map of the simulation.
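A natural data structure for such a map is a directed graph keyed by road and city names. A minimal Python sketch (the nodes and edges below are hypothetical stand-ins for the network of Figure 1, not the team's actual data):

```python
from collections import deque

# Hypothetical fragment of the simulation map as a directed graph; node
# names and connections are an illustrative reading of the road network,
# not the authors' exact data.
ROADS = {
    "Charleston": ["I-26 W"],
    "Myrtle Beach": ["US-501"],
    "Beaufort": ["US-21"],
    "I-26 W": ["Columbia"],
    "US-501": ["Florence"],
    "US-21": ["Augusta"],
    "Columbia": [], "Florence": [], "Augusta": [],
}

def reachable(start, goal):
    """Breadth-first search: can traffic starting at `start` reach `goal`?"""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        if node == goal:
            return True
        for nxt in ROADS.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False
```

A check such as `reachable("Charleston", "Columbia")` confirms that each evacuated area has a path to an inland destination before the flow simulation is run.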

# Simulating Reversal of I-26

In this simulation, I-26 E was turned into a second 2-lane highway leading from Charleston to Columbia. The evacuation time was reduced to $19\mathrm{h}$. Under all conditions tested, reversing traffic on the eastbound lanes of I-26 significantly reduces evacuation time.

# Simulating a Staggered Evacuation

A staggered evacuation of the coastal counties of South Carolina, going from south to north with $1\mathrm{h}$ delays, decreases the time for evacuation to $15.5\mathrm{h}$, which is $2.5\mathrm{h}$ longer than the best time (described below). This is because the second-slowest county to evacuate, Horry County, is the northernmost and the last to evacuate. An analysis of the evacuation routes used reveals why there is no improvement: The roads for the large counties do not intersect until they reach Columbia. Given that the evacuation of Charleston County takes $13\mathrm{h}$, the evacuations of the other large counties (Horry and Beaufort) would need to be advanced or delayed at least this much to have any effect.

# Reversing Other Highways

Reversing traffic on smaller highways might improve traffic flow, but this is not a practical option. None of the roads besides I-26 is a controlled-access road; therefore, it is impossible to ensure that the traffic entering the reversed lanes would all move in the desired direction. A single vehicle entering and attempting to travel in the undesired direction would cause a massive jam.

The minor highways to Columbia that could possibly be reversed are U.S. highways 321, 176, 521, 378, 501, and 21. All are non-controlled-access roads, meaning that there are no restrictions on where vehicles may exit or enter. Together, they have $450\mathrm{mi}$ of roadway. A quick examination of U.S. 501, the highest-capacity of these, reveals two intersections per mile with other roads. Taking this as typical, there are 900 intersections outside of towns that would need to be blocked.
Factoring in the no fewer than 60 towns along the way, the blocking becomes prohibitive. + +Therefore, reversal of minor highways leading inland is not feasible. The only road that can be feasibly reversed is I-26. + +# Adding Temporary Shelters to Columbia + +According to our simulation, the population of the Columbia area after the evacuation (in the best-case scenario) was 1,147,000, a massive number above the 516,000 permanent residents. If more temporary shelters were established in Columbia, there would be less traffic leaving the city and therefore more congestion within the city. This would reduce the rate at which traffic could enter Columbia and lead to extra traffic problems on the highways leading into it. The effect of this congestion is beyond our computer simulation. + +We investigated buildings for sheltering evacuees. Using smartpages.com to search for schools, hotels, and churches in the Columbia area, we found the numbers of buildings given in Table 1. We assumed an average capacity for each type of building. According to the table, Columbia can shelter 1,058,251; this leaves a deficit of 89,000. + +Table 1. +Post-evacuation sheltering in the two counties (Richland and Lexington) that Columbia occupies. + +
| Type | Buildings in Richland | Buildings in Lexington | Total buildings | People sheltered per building | People sheltered |
|---|---|---|---|---|---|
| Permanent residents | | | | | 516,251 |
| Schools—general* | 83 | 113 | 196 | 900 | 176,400 |
| Hotels/motels | 80 | 32 | 112 | 500 | 56,000 |
| Churches | 568 | 386 | 954 | 250 | 238,500 |
| Schools—other** | 63 | 16 | 79 | 900 | 71,100 |
| Total | | | | | 1,058,251 |

*We assume that schools average 600 students and can shelter 900.
**Includes academies but excludes beauty schools, trade schools, driving schools, etc.

However, Charlotte, NC, had only a very small increase in population due to evacuation (from 396,000 to 411,000). The people that Columbia cannot shelter can easily find shelter in Charlotte.

# Restricting Vehicle Types and Vehicle Numbers

Restrictions on numbers and types of vehicles would indeed increase the speed of the evacuation. However, there are no reliable ways to enforce such restrictions. Consider the following arguments:

- Forbidding camper vehicles may be unsuccessful, since for a sizable fraction of tourists the camper is their only vehicle.
- Restricting the number of vehicles to one per family would have two drawbacks:
  - The record-keeping involved would be prohibitive.
  - For some families, more than one vehicle is needed to carry all of the family members.

# The I-95 Traffic Problem

We assume that if the interchange is not closed, at least $75\%$ of the people coming up from Florida and Georgia on I-95 will take I-26 to Columbia. This is because the next major city reachable from I-95 is Raleigh, 150 mi farther on. In our simulation, not closing this interchange (but keeping the eastbound lanes of I-26 reversed) increases the evacuation time to $19\mathrm{h}$.

# The Best Simulated Evacuation Plan

By altering various model parameters, we reduced the overall evacuation time to $13\mathrm{h}$:

- Reverse the eastbound lanes of I-26.
- Close the exit on I-95 N leading to I-26 W.
- Limit the flow of traffic from Charleston to I-26 W.

The third item is necessary to reduce congestion along I-26 in the Charleston area. If too many cars are allowed on, the speed of traffic in Charleston drops significantly.
Although this unlimited access results in a greater average speed on the section of I-26 between Charleston and the I-95 interchange, the slowdown in the Charleston area is exactly the type of backup that caused complaints in 1999 and resulted in a greater total time to evacuate the city.

# Conclusions

It is possible to evacuate coastal South Carolina in $13\mathrm{h}$. Assuming that a hurricane watch is issued $36\mathrm{h}$ prior to landfall, the state can allow an ample delay between a voluntary evacuation announcement and a subsequent mandatory order. However, state agencies must take considerable action to ensure that the evacuation goes as planned:

- Close the interchange between I-26 and I-95: traffic on I-26 must remain on I-26, and traffic on I-95 must remain on I-95.
- Reverse the two eastbound lanes of I-26 immediately upon the mandatory evacuation order.
- In Charleston, restrict entrance to I-26 to 15 cars/min at each entrance ramp.

Everyone in the areas to be evacuated must be notified. Within South Carolina, the existing Emergency Alert System includes many radio stations that can inform the public of the incoming hurricane, the steps to take during evacuation, and which roads to use.

Residents must also be persuaded to evacuate more readily than they were during Hurricane Floyd, and appropriate measures must be taken to ensure that they evacuate far enough inland.

# Model Strengths and Weaknesses

# Strengths

The model's predictions have a number of features found in a real evacuation or other high-density traffic flow:

- An initial congested area around the entrance ramps gives way to a high-flow area when there is no entering traffic.
- Overall traffic speed in high-flow areas is around $35\mathrm{mph}$.
- Merging traffic causes a major decrease in flow.

# Weaknesses

The model does not take into account

- city streets, which are important in moving people from the highways to shelter in Columbia.
+- accidents. A single accident or breakdown could result in several hours of delay. Tow trucks should be stationed at regular intervals along major roads. + +- local traffic on the non-controlled-access highways, which would slow traffic on those roads. + +# References + +Beltrami, Edward. 1998. Mathematics for Dynamic Modeling. 2nd ed. San Diego, CA: Academic. +City of Myrtle Beach: Frequently Asked Questions. 2000. http://www.cityofmyrtlebeach.com/faq.html. Accessed 12 February 2001. +Compton's Encyclopedia Online, vol. 3.0. South Carolina. http://www.comptons.com/encyclopedia/ARTICLES/0150/01710336_A.html. The Learning Company., Inc. Accessed 12 February 2001. +Cutter, Susan L., and Kirstin Dow. 2000. University of South Carolina: Quick Response Report 128: South Carolina's Response to Hurricane Floyd. http://www.cla.sc.edu/geog/hr1/Quick%20Response%20Report.htm. Accessed 12 February 2001. +_____, Robert Oldendick, and Patrice Burns. 2000. University of South Carolina: Preliminary Report 1: South Carolina's Evacuation Experience with Hurricane Floyd. http://www.cla.sc.edu/geog/hr1/Floyd evacuation.html. Accessed 10 February 2001. +Emergency Response Planning and Management, Inc. 2000. Assessment of South Carolina's experience with the 1999 hurricane season. http://www.state.sc.us/epd/HurrBrief.pdf. Accessed 9 February 2001. +Rand McNally Road Atlas: United States, Canada, Mexico. 1995. Skokie, IL: Rand McNally & Company. +smartpages.com . +South Carolina Asset Network. http://www.scan21.com/hurricane_main.html. Accessed 9 February 2001. +U.S. Army: Hurricane/Tropical Storm Classification Scale. http://www.sam.usace.army.mil/op/opr/hurrclass.htm. Accessed 10 February 2001. +U.S. Census Bureau: South Carolina Profiles (from the 1990 census). http://www.census.gov/datamap/www/45.html. Accessed 10 February 2001. +U.S. Census Bureau: Georgia Profiles (from the 1990 census). http://www.census.gov/datamap/www/13.html. Accessed 12 February 2001. 

# Students Develop Optimal Coastal Evacuation Plan

SOUTHFIELD, MICH., FEB. 12—During September 13–15, 1999, Hurricane Floyd threatened landfall along the coast of South Carolina. In response to weather advisories and a mandatory evacuation order from the governor, hundreds of thousands of people simultaneously attempted to evacuate the coastal regions, including Charleston and Myrtle Beach, causing unprecedented traffic jams along major highways. Although the evacuation was successful in that no lives were lost (largely because Floyd's impact on the expected area was not as great as feared), it was a failure in that it was executed neither quickly nor completely enough to ensure the safety and well-being of all evacuating citizens had Hurricane Floyd made landfall in the Charleston area.

Since that problematic evacuation in 1999, state officials have been working on plans for a safe, efficient evacuation of the South Carolina coast, preparing for the possibility that a hurricane like Floyd will threaten the coast again. They posed the problem to teams of mathematicians all over the country.

After working for four days, a group of talented students developed a specific plan to safely and quickly evacuate every coastal county in South Carolina (nearly 1 million people) within 13 hours, using a computer simulation of their own design. The plan involves the reversal of the two coastal-bound lanes on Interstate 26 (the main east-west highway), as well as traffic control and detours throughout the major roads heading inland.

The students' plan guides the mass traffic flow to areas the students felt were capable of sheltering large numbers of evacuees. The main destination was Columbia, the capital and largest inland city in South Carolina. Other destinations were Spartanburg, Florence, Sumter, and Greenville in South Carolina; Augusta in Georgia; and Charlotte in North Carolina.
The plan also accounted for the possibility of very heavy traffic coming northward from Georgia and Florida on I-95, fleeing from the same hurricane, which could adversely affect the evacuation in South Carolina. + +Additionally, the students set forth plans to shelter the more than 1 million people who would be in Columbia after the evacuation is complete. By making use of all the city's schools, hotels, motels, and churches as shelters, nearly all the evacuees could be sheltered. The few remaining evacuees could easily find shelter north in Charlotte, which in 1999 received few evacuees. + +— Mark Wagner, Kenneth Kopp, and William E. Kolasa in Southfield, Mich. + +# Please Move Quickly and Quietly to the Nearest Freeway + +Corey R. Houmand + +Andrew D. Pruett + +Adam S. Dickey + +Wake Forest University + +Winston-Salem, NC + +Advisor: Miaohua Jiang + +# Introduction + +We construct a model that expresses total evacuation time for a given route as a function of car density, route length, and number of cars on the route. When supplied values for the last two variables, this function can be minimized to give an optimal car density at which to evacuate. + +We use this model to compare strategies and find that any evacuation plan must first and foremost develop a method of staggering traffic flow to create a constant and moderate car density. This greatly decreases the evacuation time as well as congestion problems. + +If an even speedier evacuation is necessary, we found that making I-26 one-way would be effective. Making other routes one-way or placing limits on the type or number of cars prove to be unnecessary or ineffective. + +We also conclude that other traffic on I-95 would have a negligible impact on the evacuation time, and that shelters built in Columbia would improve evacuation time only if backups were forming on the highways leading away from the city. 
+ +# Prologue + +As Locke asserted [1690], power is bestowed upon the government by the will of the people, namely to protect their property. A government that cannot provide this, such as South Carolina during an act of God as threatening as Hurricane Floyd and his super-friends, is in serious danger of revolutionary + +overthrow by the stranded masses marching from the highways to the capitol. Therefore, South Carolina must find the most effective evacuation program—one that not only provides for the safety of its citizens but also allows for households to rescue as many of their vehicles (read: property) as possible. Pitted against the wrath of God and Nature, one can only hope the power of mathematical modeling can protect the stability of South Carolinian bureaucracy. + +Since the goal is to create a useful model that even an elected official can use, our model operates most effectively on the idea that government agencies are poor at higher-level math but good at number-crunching. Our model provides a clear, concise formula for weighing relative total evacuation times and likely individual trip times. This is crucial in deciding how to order a wide-area evacuation while maintaining public approval of the operation and preventing a coup d'etat. + +Our model shows that of the four strategies for evacuation suggested by the problem statement, staggering evacuation orders is always the most effective choice rather than simply reversing I-26. After that, applying any one of the other options, like lane reversal on I-26 and/or on secondary evacuation routes, can improve the evacuation plan. However, using more than one of these techniques results in a predicted average driver speed in excess of the state speed limit of $70\mathrm{mph}$ . + +Of the three additional methods, we find that the most effective is to make I-26 one-way during peak evacuation times. 
Implementing the same plan on secondary highways would require excessive manpower from law enforcement officials, and regulating the passenger capacity is too difficult a venture in a critical situation.

We also find that, given the simplifications of the model, I-95 should have a negligible effect.

Furthermore, the construction of shelters in Columbia would facilitate the evacuation only if the highways leading away from Columbia were causing backups in the city.

# Analysis of the Problem

To explain the massive slowdowns on I-26 during the 1999 evacuation, our team theorized that sufficiently high vehicle density causes the average speed of traffic to decrease drastically. Our principal goal is to minimize evacuation times for the entire area by maximizing highway throughput, the number of cars passing a given point per unit time. To that end, we seek the relationships among speed, car density, and total evacuation time.

# Assumptions

To restrict the scope of our model, we assume that all evacuation travel uses designated evacuation highways.

We assume that traffic patterns are smoothly and evenly distributed and that drivers drive as safely as possible. There are no accidents or erratically driving "weavers" in our scenario. This is perhaps our weakest assumption, since this is clearly not the case in reality, but it is one that we felt was necessary to keep our model simple.

Our model also requires that when unhindered by obstacles, drivers travel at the maximum legal speed. Many drivers exceed the speed limit; however, we do not have the information to model accurately the effects of unsafe driving speeds, and a plan designed for the government should avoid encouraging speeding.

As suggested by the problem, we simplify the actual distribution of population across the region, placing 500,000 in Charleston, 200,000 in Myrtle Beach, and distributing the remaining approximately 250,000 people evenly.

Multiple-lane highways and highway interchanges are likely to be more complicated than our approximation, but we simplify these aspects so that our model will be clear and simple enough to be implemented by the government.

Distribution of traffic among the interstate and secondary highways in our model behaves according to the results of a survey, which indicates that $20\%$ of evacuees chose to use I-26 for some part of their trip.

# The Model

We begin by modeling the traffic of I-26 from Charleston to Columbia, as we believe that understanding I-26 is the key to solving the traffic problems.

We derived two key formulas, the first $s(\rho)$ describing speed as a function of car density and the second $e(\rho)$ describing the total evacuation time as a function of car density:

$$
s(\rho) = \sqrt{\frac{1}{k}\left(\frac{5280}{\rho} - l - b\right)}, \qquad e(\rho) = \frac{L\rho + N}{\rho\, s(\rho)}.
$$

The constants are:

$$
k = \text{braking constant}, \qquad l = \text{average car length in ft}, \qquad b = \text{buffer zone in ft}.
$$

The variables are:

$$
\rho = \text{car density in cars/mi},
$$

$L =$ length of highway in miles, and

$N =$ number of cars to be evacuated.

The method is to minimize $e(\rho)$ for a given $N$ and $L$, which gives us an optimal car density.

# Derivation of the Model

The massive number of variables associated with modeling traffic on a micro basis leads to a very complex and difficult problem. Ideally, one could consider such factors as a driver's experience, his or her psychological profile and current mood, the condition of the mode of transportation, whether his or her favorite Beach Boys song was currently playing on the radio, etc. Then one could use a supercomputer to model the behavior of several hundred thousand individuals interacting on one of our nation's vast interstate highways. Instead, our model analyzes traffic on a macro basis.

The greater the concentration of cars, the slower the speed at which the individuals can safely drive. What dictates the concentration of cars? Well, the concentration is clearly related to the distance between cars, since the greater the distance, the smaller the concentration, and vice versa. On any interstate highway, drivers allot a certain safe traveling distance between their car and the car directly in front of them, to allow time to react. Higher speeds require the same reaction time but consequently a greater safe traveling distance. How do we determine the correct distance between cars at a given speed? The braking distance $d$ of a car is proportional to the square of that car's speed $v$. That is, $d = kv^2$ for some constant $k$. The value for $k$ is $0.0136049\ \text{ft}\cdot\text{h}^2/\text{mi}^2$; we derive this value by fitting $\ln d = \ln k + m \ln v$ (the slope $m$ comes out to be essentially 2) to data from Dean et al. [2001]; the fit has $r^2 = .99999996$.

However, the distance between cars is an awkward measurement to use. Our goal is to model traffic flow. With our model, we manipulate the traffic flow until we find its optimal value. The distance between cars is hard to control, but other values, such as the concentration or density of the cars in a given space, are much easier to control.

How do we find the value of the car density? To start, any distance can be subdivided into the space occupied by cars and the space between cars. The space occupied by cars can be assumed to be a multiple of the average car length $l$. The space between cars is clearly related to the braking distance, but the two are not necessarily the same. The braking distance at low speeds ($< 10\ \mathrm{mph}$) is on the order of a foot. However, ordinary experience reveals that even in standstill traffic, the distance between cars is still much greater than that; drivers still leave a buffer zone in addition to the safe braking distance.
Then each car has a space associated with it, given by $d + l + b$, where $b$ is the average buffer zone in feet. One car per $d + l + b$ feet of road is itself a density, and we can convert it to more useful units, such as cars per mile:

$$
\rho = \frac{\text{cars}}{\text{mi}} = \frac{\text{cars}}{\text{ft}} \times \frac{5280\ \text{ft}}{\text{mi}} = \frac{5280}{d + l + b}\ \frac{\text{cars}}{\text{mi}} = \frac{5280}{kv^{2} + l + b}\ \frac{\text{cars}}{\text{mi}}.
$$

Solving for $v$ gives

$$
s(\rho) = v = \sqrt{\frac{1}{k}\left(\frac{5280}{\rho} - l - b\right)}.
$$

At this point we can substitute $k = 0.0136049\ \text{ft}\cdot\text{h}^2/\text{mi}^2$, $l = 17\ \text{ft}$ (from researching sizes of cars), and $b = 10\ \text{ft}$ (from our personal experience) and graph speed as a function of density (Figure 1).

![](images/ab4db2273f74e3476e3cd6e02d4c231669ea5f810338c7c731c0e69762fae08f.jpg)
Figure 1. Speed as a function of density.

Note the maximum density of 195.6 cars/mi. A maximum density exists because, as $v \to 0$, the braking distance $kv^2$ approaches zero and each car occupies only its own length plus the buffer, $l + b = 27$ ft; the density therefore cannot exceed $5280/27 \approx 195.6$ cars/mi.

We now determine how long it takes this group of cars to reach their destination. For now, we say that the group reaches its goal whenever the first car arrives. This is a simple calculation: We divide the length of the road $L$ by the average speed of the group:

$$
t(\rho) = \frac{L}{s(\rho)} = \frac{L}{\sqrt{\frac{1}{k}\left(\frac{5280}{\rho} - l - b\right)}}.
$$

![](images/d0a63dd58de3a00175e30648d1c3f53d991a9ddd43f84c7d88ffd635800a12a5.jpg)
Figure 2. Evacuation time as a function of density.

We refer to this function as $t$ because it gives the time of the trip. Figure 2 gives $t$ for I-26 between Charleston and Columbia, which has a length of $117\ \text{mi}$.
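
The formulas above are simple enough to check numerically. A minimal Python sketch (ours, not part of the paper) using the constants just quoted; the function names `speed` and `trip_time` are our own labels for $s(\rho)$ and $t(\rho)$:

```python
import math

# Constants from the paper: braking constant k (ft*h^2/mi^2),
# average car length l (ft), and buffer zone b (ft).
K, L_CAR, BUFFER = 0.0136049, 17.0, 10.0

def speed(rho):
    """Average safe speed s(rho) in mph at density rho in cars/mi."""
    gap = 5280.0 / rho - L_CAR - BUFFER  # the remaining gap is d = k*v^2
    return math.sqrt(gap / K)

def trip_time(rho, length_mi):
    """Trip time t(rho) = L / s(rho), in hours."""
    return length_mi / speed(rho)

# Jam density: as v -> 0, each car occupies only l + b = 27 ft.
print(round(5280.0 / (L_CAR + BUFFER), 1))  # 195.6 cars/mi, as in Figure 1
print(round(speed(83.0)))                   # 52 mph
print(round(trip_time(83.0, 117.0), 2))     # 2.26 h on I-26 (cf. Figure 2)
```

The density 83 cars/mi used in the check is the one-lane optimum derived later in the paper.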

However, the real problem is not simply evacuating one group of cars; it is quickly evacuating a large number of cars. Our goal poses an interesting dilemma: If we evacuate in a stream of low car density, the cars travel very fast but the evacuation takes an extremely long time. If we evacuate in a stream of very high car density (which is what happened during Hurricane Floyd), many people move at once but they move very slowly. We must seek the middle ground.

We express the total evacuation time as a function of car density; then taking the minimum gives the optimum car density.

Unfortunately, the concept of average car density doesn't work as well over large distances. The problem with traffic flow is that it tends to clump, creating high car densities and thus low speeds. However, the instantaneous car density won't vary much over a small distance, such as a mile, compared to the average car density over the entire highway. To look ahead to the problem: Making I-26 one-way will certainly help facilitate evacuation, but it won't help nearly as much as staggering the evacuation flow. Staggering is the only way to realistically create a constant traffic flow and thus an average car density that is more or less constant over the entire length of the highway.

Suppose that we take the $N$ cars that need to be evacuated and subdivide them into groups, each consisting of as many cars as occupy 1 mi of road at the given density. Call these groups packets. Each packet is 1 mi long.

We look at two cases, one where we send only one packet and another where we send more than one. For the first case, we assume that all $N$ cars fit in one packet, so $0 < N < 196$, where 196 is the maximum car density for our values of buffer zone and average car length.
Everyone is not technically evacuated until the end of the packet arrives safely, so we need to add to $t(\rho)$ the additional time for the end of the packet to arrive:

$$
e(\rho) = t(\rho) + \frac{1}{s(\rho)}.
$$

We call this expression $e$ because it expresses total evacuation time as a function of car density.

For the second case, we can say that the packets travel like a train, since we cannot release a packet until the one ahead of it is a mile down the road. Evacuation time is the sum of the time for the first packet to arrive plus the time until the last packet arrives. Since the packets arrive in order, and they are all one mile long, the time it takes the last packet to arrive is equal to the number of packets, $N/\rho$, times the time it takes a packet to move 1 mi. The equation is

$$
e(\rho) = t(\rho) + \frac{1}{s(\rho)} \frac{N}{\rho}.
$$

For one packet, this second equation simplifies to the first.

Writing $t(\rho) = L/s(\rho)$ and combining the two terms over the common denominator $\rho\, s(\rho)$, we arrive at

$$
e(\rho) = \frac{L\rho + N}{\rho\, s(\rho)}.
$$

The total evacuation time $e$ is simply a function of three variables: the length of the highway $L$, the number of cars $N$, and the car density $\rho$. The speed $s(\rho)$ is itself a function of $\rho$ and three other constants (the braking constant $k$, the average car length $l$, and the buffer zone $b$). Setting $L = 117$ and $N = 65,000$ produces the graph in Figure 3, which has a clear minimum.

# Application of the Model

The model should first deal with I-26. We assume that evacuation along this road takes the longest. We limit our consideration to Charleston, Columbia, and I-26 between them.

# Modeling I-26 Traffic Flow

The problem states that the evacuation consists of 500,000 residents from Charleston, 200,000 from Myrtle Beach, and 250,000 others. However, the model need not evacuate all 950,000. The evacuation rate of $64\%$ for Hurricane Floyd was one of the highest evacuation rates ever seen for a hurricane.

![](images/6cd7d221bb9595f8e2f2543c6b1a1a39eacfe81ebb3f612038ab1477c6015e8d.jpg)
Figure 3. Total evacuation time as a function of density.

Many individuals, whether elderly, financially unable to make the trip, or just darn stubborn, remain close to home rather than join the throng of frantic drivers fleeing for their lives. Further, only about $20\%$ of those who left did so along Interstate 26. Taking $20\%$ of $64\%$ of 950,000 gives 122,000 as the number of people that we can expect to evacuate on I-26.

We must convert number of people to cars, the unit in our model. In South Carolina there are 3,716,645 people and 1,604,000 households, or 2.3 people per household. Further research allowed us to find the average number of cars taken by each household: Dow and Cutter [2001] include the data reproduced in Table 2.

Table 2. Cars taken by each household in the 1999 Hurricane Floyd evacuation [Dow and Cutter 2001].
| Cars | % of households |
|------|-----------------|
| 0    | 3%              |
| 1    | 72%             |
| 2    | 21%             |
| 3+   | 4%              |

Since households taking more than three cars are only a fraction of the $4\%$ taking three or more, we assume that a household takes 0, 1, 2, or 3 cars. Thus, we find a weighted average of 1.26 cars/household.

We now calculate the average number of people per car:

$$
\frac{\text{people}}{\text{car}} = \frac{\text{people/household}}{\text{cars/household}} = \frac{2.3}{1.26} = 1.83.
$$

We divide the evacuation population by the average number of people per car to find that 66,655 cars need to be evacuated.

Both $s(\rho)$ and $t(\rho)$ are independent of the number of cars to evacuate: $s(\rho)$ is based on highway information that stays constant throughout the model, and $t(\rho)$ is based on the length of the highway. The only function dependent on the number of cars is total evacuation time, $e(\rho)$. Substituting values for $b$, $l$, $N$, and $L$, we arrive at

$$
e_{l}(\rho) = \frac{117\rho + 66655}{\rho \sqrt{\frac{1}{0.0136049}\left(\frac{5280}{\rho} - 27\right)}}.
$$

After finding the minimum value, we arrive at

$$
\min \rho_{l} = 83\ \text{cars/mi}, \quad s = 52\ \text{mph},
$$

$$
t = 2\ \mathrm{h}\ 15\ \mathrm{min}, \quad e_{l} = 18\ \mathrm{h}.
$$

Our model does not evacuate people very quickly, but there is a significant decrease in average trip time $t(\rho)$ for an individual car.

Our model applies to only one lane of traffic. If cars on I-26 were allowed to travel in only one lane but at optimal density, the total evacuation time would be approximately $18\ \mathrm{h}$, with each car making the journey in just over $2\ \mathrm{h}$ at an average speed of $52\ \mathrm{mph}$.

We assume that adding another highway lane halves the number of cars per lane and find

$$
\min \rho_{l} = 73\ \text{cars/mi}, \quad s = 58\ \text{mph},
$$

$$
t = 2\ \mathrm{h}\ 0\ \mathrm{min}, \quad e_{l} = 10\ \mathrm{h}.
$$

Although the average trip time slightly decreased and the speed slightly increased, the most striking result of opening another lane is the halving of the total evacuation time.

Making the entire I-26 one-way turns it into, in effect, a pair of one-way two-lane highways rather than a single four-lane highway; adding this second pair of lanes again halves the number of cars to be evacuated per lane:

$$
\min \rho_{l} = 58\ \text{cars/mi}, \quad s = 68\ \text{mph},
$$

$$
t = 1\ \mathrm{h}\ 40\ \mathrm{min}, \quad e_{l} = 6\ \mathrm{h}.
$$

# Conclusions from the I-26 Model

The problem explicitly mentions four means by which traffic flow may be improved:

- turning I-26 one-way,
- staggering evacuation,
- turning smaller highways one-way, or
- limiting the number or type of cars.

# Staggering

Contrary to the emphasis of the South Carolina Emergency Preparedness Division (SCEPD), the primary proposal should not be the reversal of eastbound traffic on I-26 but rather the establishment of a staggering plan. One glance at the graph of total evacuation time vs. car density reveals the great benefits gained from maintaining a constant and moderate car density on the highway. The majority of the problems encountered during Hurricane Floyd, such as the 18-hour trips to Columbia or incidents of cars running out of gas on the highway, would be solved if a constant car density existed on the highway.

However, while our model assumes a certain constant car density, it does not provide a method for producing such density. Therefore, the SCEPD should produce a plan that staggers the evacuation to maintain a more constant car density. One proposal was a county-by-county stagger. The SCEPD should take the optimal car density and multiply it by the optimal speed to arrive at an optimal value of cars/hour. The SCEPD should then arrange the stagger so that the dispersal of cars per hour is as close as possible to the optimal value.
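
The optimal density and the staggering target can be reproduced with a few lines of Python. This is a sketch of ours, not the authors' code; the grid search, function names, and rounding are our additions, applied to the paper's constants and one-lane values ($L = 117$ mi, $N = 66{,}655$ cars):

```python
import math

K, L_CAR, BUFFER = 0.0136049, 17.0, 10.0  # the paper's constants

def speed(rho):
    """s(rho): safe speed in mph at density rho (cars/mi)."""
    return math.sqrt((5280.0 / rho - L_CAR - BUFFER) / K)

def evac_time(rho, length_mi, n_cars):
    """e(rho) = (L*rho + N) / (rho * s(rho)), in hours."""
    return (length_mi * rho + n_cars) / (rho * speed(rho))

def optimal_density(length_mi, n_cars):
    """Grid search (0.1 cars/mi steps) for the density minimizing e."""
    grid = (r / 10.0 for r in range(10, 1950))
    return min(grid, key=lambda r: evac_time(r, length_mi, n_cars))

# One lane of I-26, Charleston to Columbia.
rho = optimal_density(117.0, 66655)
print(f"{rho:.1f} cars/mi")                      # ~83 cars/mi, as in the paper
print(round(speed(rho)), "mph")                  # 52 mph
print(round(evac_time(rho, 117.0, 66655)), "h")  # 18 h
# The staggering target is the product of the two optima:
print(round(rho * speed(rho), -2), "cars/h per lane")
```

The curve $e(\rho)$ is very flat near its minimum (Figure 3), so small deviations from the target density cost little time, which is what makes a practical stagger feasible.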

# Making I-26 One-Way

In addition to the staggering plan, making I-26 one-way reduces the total evacuation time from $10\ \mathrm{h}$ to $6\ \mathrm{h}$. Thus, while staggering should always be implemented, reversal of traffic on I-26 should supplement the staggering plan when the SCEPD desires a shorter evacuation time.

# The Other Options

This leaves two more strategies for managing traffic flow: turning smaller highways one-way and limiting the number or types of cars taken per household. Both of these can be implemented easily in our model. Turning smaller highways one-way would encourage evacuees to take back roads instead of I-26, making the percentage of evacuees taking I-26 less than $20\%$ and reducing our value for $N$. Restricting the number of cars per household would also reduce $N$. Likewise, disallowing large vehicles would reduce $l$, the average length of vehicles.

Considering that the optimal speed calculated after making I-26 one-way is $68\ \text{mph}$, and that adding further measures would raise the optimal average speed above the lawful limit of $70\ \text{mph}$, additional strategies are unnecessary.

# Adding Myrtle Beach

The next route needing consideration is 501/I-20 leaving Myrtle Beach, with 200,000 people. Unfortunately, we lack statistics on how many people took the 501/I-20 evacuation route. We can, however, make a guess using a ratio. We assume that residents of Myrtle Beach evacuate along their main route in the same proportion as in Charleston, so that the number taking the highway leaving a city is directly proportional to the city's size. This suggests that 49,000 people leave Myrtle Beach in $49,000 / 1.84 = 26,600$ cars, or 13,300 cars/lane.
Combining this with $L = 150$ mi, the distance between Myrtle Beach and Columbia, we can apply our model to arrive at

$$
\min \rho_{l} = 46\ \text{cars/mi}, \quad s = 80\ \text{mph},
$$

$$
t = 1\ \mathrm{h}\ 50\ \mathrm{min}, \qquad e_{l} = 5\ \mathrm{h}\ 28\ \mathrm{min}.
$$

We cap the speed at $70\ \mathrm{mph}$, which corresponds to a car density of 56 cars/mi. This figure produces a new set of values:

$$
\min \rho_{l} = 56\ \text{cars/mi}, \quad s = 70\ \text{mph},
$$

$$
t = 2\ \mathrm{h}, \quad e_{l} = 5\ \mathrm{h}\ 30\ \mathrm{min}.
$$

This calculation also confirms one of our basic assumptions, that I-26 between Charleston and Columbia is the limiting factor: Evacuation from Myrtle Beach using two lanes still takes less time (5½ h) than evacuation using four lanes on I-26 from Charleston (6 h). From this we conclude that making the roads leading from Myrtle Beach one-way would be unnecessary. We assume that applying our model to other smaller highways will lead to similar results.

# Adding Intersections and I-95

To simplify the model, we assume that intersecting routes of equal density contribute traffic to the adjoining route so that the difference in densities always seeks a balance. Intersections between roads of similar relative density do not cause unexpected spikes in density on either road. Their effect is therefore negligible.

In the case of two interstates of unequal traffic densities, the volume of traffic on the busier route may cause a substantial change in density on the adjoining route. We assume a normalizing tendency at interchanges, so drivers are not likely to change routes without the immediate benefit of higher speed. We also assume that on interstates over long distances with only intermittent junctions, there is a normalizing tendency of traffic to distribute itself.
Therefore, though I-95 may cause congestion problems, in our model the traffic on I-95 has a negligible effect on the overall evacuation traffic.

# Adding Columbia and the Rest of the World

We view Columbia as a distribution center. Columbia accepts a certain number of cars per unit of time, dispenses another number of cars per unit of time to the rest of the world, and retains a certain amount of the traffic that it receives. In the case of an evacuation, this retention reflects evacuees who stay with families or find hotel rooms.

For example, staggered one-way traffic on I-26 yields an optimal car density of 58 cars/mi and an optimal speed of 68 mph. Multiplying these yields a flow of 3,944 cars/h per lane into Columbia.

If the highways leading to the rest of the world can handle a large traffic flow in cars per unit time, evacuation time will not be affected. If backups outside of Columbia are a problem, building more shelters would help because Columbia would retain more of the incoming traffic. However, if the highways leading elsewhere can handle the flow, shelters are unnecessary.

# Strengths and Weaknesses

One strength of the model is its formulaic practicality. With reliable measurements of traffic density and speed, the evacuation volume predictions should be useful.

A second and more important strength of the model is its use for comparison. Its prediction can be considered a reference point for experimentation. After all, we are looking for improvements but not necessarily exact results. The model offers a range of possible values, and this is an advantage.

The simplifications and approximations taken in this model introduce obvious weaknesses, particularly in our concept of car density. In simplifying traffic flow, we assumed homogeneity over distance, but this isn't likely to happen. Much of our model and its derivation hinge on the assumption that traffic density can be controlled.
+ +With unconvincing generality, we calculate ranges of average speeds and assume that all drivers drive as fast and as safely as possible. However, it is likely that traffic will clump and cluster behind slower drivers. + +A potential source of congestion, intersections, was simplified by assuming that relative traffic densities seek a balanced level. In reality, different routes + +are preferred over others and imbalances in traffic density are likely to occur. + +The only way to test the model is to collect data during very heavy traffic. However, traffic as great as during Hurricane Floyd is rare. + +In short, the weaknesses of the model are primarily related to its simplicity. However, it is that same simplicity that is its greatest strength. + +# References + +Cutter, Susan L., Kirsten Dow, Robert Oldendick, and Patrice Burns. South Carolina's Evacuation Experience with Hurricane Floyd. http://www.cla.sc.edu/geog/hr1/Floyd_evac.html. Accessed February 10, 2001. +Dean, Ryan, Drew Morgan, and David Ward. Braking distance of a moving car. http://199.88.16.12/~team99b/tech.html. Accessed February 10, 2001. +Dow, Kirsten, and Susan L. Cutter. South Carolina's Response to Hurricane Floyd. http://www.colorado.edu/hazards/qr/qr128/qr128.html. Accessed February 10, 2001. +Locke, John. [1690]. Second Treatise On Government. http://www.swan.ac.uk/poli/texts/locke/lockcont.htm. Accessed February 12, 2001. +Weiss, Neil, and Matthew Hassett. 1982. Introductory Statistics. Reading, MA: Addison-Wesley. + +![](images/c1db44e45ed2ab8bb0a12a7567ede136961543bb596750294a35b68633702909.jpg) +Dr. Ann Watkins, President of the Mathematical Association of America, congratulating MAA Winner Adam Dickey after he presented the team's model at MathFest in Madison, WI, in August. [Photo courtesy of Ruth Favro.] + +# Mathematical Commission Streamlines Hurricane Evacuation Plan + +COLUMBIA, SC, FEB. 
12—As long as the western coast of Africa stays where it is, global climate patterns will continue to have it in for South Carolina. That proverbial "it" is the hurricane season. Some may point their fingers at God and curse His ways, but cooler heads will eventually surrender—shutter their windows, pack their cars, and drive to Columbia for the next few days, mood unsettled.

However, a report issued from the governor's office earlier this week announced that scientists have designed a new plan for coastal emergency evacuation.

Prompted by public disapproval of evacuation tactics used during the mandatory evacuation ordered during Hurricane Floyd, the new study sought to find the source of the congestion problems that left motorists stranded on I-26 for up to 18 hours, wondering why they even attempted evacuation at all.

"The government should have known that we didn't have the roads to get everybody out," one Charleston resident commented, "and we just had the speed limits raised, so I would've expected a quicker escape route."

Governor Jim Hodges commissioned the evacuation study last Friday, expecting a quick response. So far, the results seem plausible and practical.

The private commission developed a mathematical model to describe the traffic flow that caused the backups responsible for the evacuation problems. Then, by manipulating the model and combining it with statistical survey data collected after the evacuation, the commission developed a report evaluating the various current suggestions for alleviating congestion.

The findings of the commission suggest that the best way to avoid evacuation backups is to stagger and sequence the county and metropolitan evacuation orders so that main routes do not exceed the critical traffic density that caused the slowdown.

As explained by one member of the commission, "the model does not get into the nitty-gritty details of complicated traffic modeling, but it does present a useful framework for evaluating crisis plans and traffic routing."

The solution garnered some skeptical criticism among mathematics researchers statewide. "The commission is missing the point here," commented one researcher at the Hazards Research Lab in the Department of Geography at the University of South Carolina. "The problem with Hurricane Floyd was an anomalously high rate of evacuation among coastal residents—this is a sociopsychological problem and not a mathematical one."

Despite the cool reception, the commission is confident in its findings. According to the report, a general staggering of traffic will far exceed the benefits of other methods of congestion control that have been suggested, such as reversing the eastbound lanes of I-26 and some secondary highways.

"We find that advance planning with a mind to reduce the traffic surge associated with quickly ordered mandatory evacuations is the most useful way to improve evacuations," said a spokesman for the commission.

Despite the current concerns, the proving grounds for the findings of the commission will not surface until this fall, when the tropical swells off the coast of Africa begin heading our way again.

—Corey R. Houmand, Andrew D. Pruett, and Adam S. Dickey in Winston-Salem, NC.

# Judge's Commentary: The Outstanding Hurricane Evacuation Papers

Mark Parker

Department of Mathematics, Engineering, and Computer Science

Carroll College

Helena, MT 59625

mparker@carroll.edu

http://web.carroll.edu/mparker

# Introduction

Once again, Problem B proved to be quite challenging—both for the student teams and for the judges!
The students were challenged by a multifaceted problem with several difficult questions posed, and the judges were challenged to sort through the wide range of approaches to find a small collection of the best papers. It is worth reminding participants and advisors that Outstanding papers are not without weaknesses and even mathematical or modeling errors. It is the nature of judging such a competition that we must trade off the strengths, both technical and expository, of a given paper with its weaknesses, and make comparisons between papers the same way. + +The approaches taken by this year's teams can be divided into two general categories: + +Macroscopic: Traffic on a particular highway or segment of highway was considered to be a stream, and a flow rate for the stream was characterized. Among the successful approaches in this category were fluid dynamics and network flow algorithms. + +Microscopic: These can be considered car-following models, where the spacing between and the speeds of individual vehicles are used to determine the flow. Among the successful approaches were discrete event simulations (including cellular automata) and queuing systems. + +By far, the most common approach was to determine that the flow $q$ , or flux, is a function of the density $\rho$ of cars on a highway and the average speed $v$ of those cars: $q = \rho v$ . Successful approaches identified the following characteristics of the basic traffic flow problem: + +- When the vehicle density on the highway is 0, the flow is also 0. +- As density increases, the flow also increases (up to a point). +- When the density reaches its maximum, or jam density $\rho_0$ , the flow must be 0. +- Therefore, the flow initially increases, as density does, until it reaches some maximum value. Further increase in the density, up to the jam density, results in a reduction of the flow. 
At this point, many teams either derived from first principles or used one of the many resources available on traffic modeling (such as Garber and Hoel [1999]) to find a relationship between the density and the average speed. Three of the common macroscopic models were:

- a linear model developed by Greenshield:

$$
v = v_{0} \left(1 - \frac{\rho}{\rho_{0}}\right), \qquad \text{so} \qquad q = \rho v_{0} \left(1 - \frac{\rho}{\rho_{0}}\right);
$$

- a fluid-flow model developed by Greenberg:

$$
v = v_{0} \log \frac{\rho_{0}}{\rho}, \qquad \text{so} \qquad q = \rho v_{0} \log \frac{\rho_{0}}{\rho};
$$

or

- a higher-order model developed by Jayakrishnan:

$$
v = v_{0} \left(1 - \frac{\rho}{\rho_{0}}\right)^{a}, \qquad \text{so} \qquad q = \rho v_{0} \left(1 - \frac{\rho}{\rho_{0}}\right)^{a},
$$

where $v_{0}$ represents the speed that a vehicle would travel in the absence of other traffic (the speed limit). By taking the derivative of the flow equation with respect to speed (or density), teams then found the optimal speed (or density) to maximize flow.

Many teams took the optimal flow from one of the macroscopic approaches and used it as the basis for a larger model. One of the more common models was simulation, to determine evacuation times under a variety of scenarios.

To make it beyond the Successful Participant category, teams had to find a way to regulate traffic density realistically to meet these optimality conditions. Many teams did this by stipulating that ramp metering systems (long term) or staggered evacuations (short term) could be used to control traffic density.
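The optimization step that most teams performed on Greenshield's linear model can be sketched in a few lines of Python. The free-flow speed and jam density below are illustrative values only, not figures from any team's paper:

```python
# Greenshield's linear speed-density model:
#   v(rho) = v0 * (1 - rho/rho0),  so  q(rho) = rho * v0 * (1 - rho/rho0).
# Setting dq/drho = 0 gives the flow-maximizing density rho0/2
# and the peak flow v0 * rho0 / 4.

def greenshield_flow(rho, v0, rho0):
    """Traffic flow q (vehicles/hour) at density rho (vehicles/mile)."""
    return rho * v0 * (1 - rho / rho0)

v0 = 60.0     # free-flow speed in mph (illustrative value)
rho0 = 265.0  # jam density in vehicles per mile per lane (illustrative value)

rho_opt = rho0 / 2                           # density that maximizes flow
q_max = greenshield_flow(rho_opt, v0, rho0)  # equals v0 * rho0 / 4

print(rho_opt, q_max)  # 132.5 3975.0
```

Holding density near $\rho_0/2$ is precisely what the ramp-metering and staggered-evacuation proposals try to do; any density above that value moves traffic onto the falling branch of the flow curve, toward the jam.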
There were a number of mathematically rigorous papers that started with a partial differential equation, derived one of the macroscopic formulas, determined appropriate values for the constants, calculated the density giving the optimal flow, and incorporated this flow value into an algorithm for determining evacuation time. In spite of the impressive mathematics, if no plan was given to regulate traffic density, the team missed an important concept of the MCM: the realistic application of a mathematical solution to a real-world problem.

One key to successful model building is to adapt existing theory or models properly to the problem at hand, so judges see little difference between deriving these equations from first principles and researching them from a book. Whether derived or researched, it is imperative to demonstrate an understanding of the model you are using.

# The Judging

No paper completely analyzed all 6 questions, so the judges were intrigued by which aspects of the problem a team found most important and/or interesting. We were similarly interested in determining which aspects of the problem a team found least relevant and how they divided their effort among the remaining questions. To be considered Outstanding, a paper had to meet several minimum requirements:

- the paper must address all 6 questions,
- all required elements (e.g., the newspaper article) must be included, and
- some sort of validation of the model must be included.

We were also particularly interested in how teams modeled the I-26/I-95 interchange and the congestion problem in Columbia. Many teams chose to treat Columbia as the terminal point of their model and assumed that all cars arriving there would be absorbed without creating backups.

To survive the cut between Honorable Mention and Meritorious, a paper had to have a unique aspect on some portion of the problem.
Two examples that come to mind are a unique modeling approach or some aspect of the problem analyzed particularly well. Thus, papers that failed to address all questions or had a fatal weakness that prevented their model from being extended could still be considered Meritorious. The Meritorious papers typically had very good insight into the problem, but deficiencies as minor as missing parameter descriptions or model implementation details prevented them from being considered Outstanding.

# The Outstanding Papers

The six papers selected as Outstanding were recognized as the best of the submissions because they:

- developed a solid model which allowed them to address all six questions, and to analyze at least one very thoroughly;
- made a set of clear recommendations;
- analyzed their recommendations within the context of the problem; and
- wrote a clear and coherent paper describing the problem, their model, and their recommendations.

Here is a brief summary of the highlights of the Outstanding papers.

The Bethel College team used a basic car-following model to determine an optimal density, which maximized flow, for individual road segments. They then formulated a maximum-flow problem, with intersections and cities as vertices and road segments as arcs. The optimal densities were used as arc capacities, the numbers of vehicles to be evacuated from each city were used as the sources, and cities at least 50 miles inland were defined to be sinks. Each city was then assigned an optimal evacuation route, and total evacuation times under the different scenarios were examined.

The Duke team also used a basic car-following model from the traffic-modeling literature. This model provided the foundation of a one-dimensional cellular automata simulation.
They did a particularly good job of defining evacuation performance measures (maximum traffic flow and minimum transit time) and of analyzing traffic mergers and bottlenecks, aspects of the problem ignored by many other teams.

What discussion of Outstanding papers would be complete without a Harvey Mudd team? Of the teams that utilized literature-based models, this team did the best job of considering advanced parameters, including road grade, non-ideal drivers, and heavy-vehicle modification. They also did a very good job of comparing their model with the new South Carolina evacuation plan, recognizing the bottleneck problem in Columbia, and analyzing the impact of extra drivers from Florida and Georgia on I-95. Their entry was a nice example of a simple model that was well analyzed and thoroughly explained.

The Virginia Governor's School team began their analysis by reviewing the current South Carolina evacuation plan, a baseline against which to compare their model. They researched the literature to find traffic-flow equations and then used a genetic algorithm to assign road orientations and evacuation start times for cities. They did an exceptionally good job of analyzing the sensitivity of their model to changes in parameter values.

The INFORMS prizewinner, from Lawrence Technical University, combined Greenshield's model with a discrete event simulation. The judges saw this entry as a solid paper with logical explanations and a good analysis. The team's model handled bottlenecks, and the team used a simulation of the actual 1999 evacuation to validate their model.

The MAA and SIAM winner, from Wake Forest University, derived a car-following model from first principles, which was then incorporated in a cellular automata type model. Like many of the best approaches, the parameters for their model came from the 1999 evacuation. They provided a thoughtful, not necessarily mathematical, analysis of intersections and I-95.
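For readers unfamiliar with the cellular-automaton traffic models that several teams used, here is a minimal single-lane sketch in the spirit of the well-known Nagel-Schreckenberg rules. The road length, car count, and braking probability are illustrative assumptions, not parameters from any team's paper:

```python
import random

# Minimal single-lane cellular-automaton traffic model (Nagel-Schreckenberg
# style): the road is a ring of cells, each empty or holding one car with an
# integer speed 0..VMAX. Each step: accelerate, brake to avoid the car ahead,
# randomly dawdle, then move all cars in parallel.

ROAD_LEN = 100   # cells on the ring (illustrative)
N_CARS = 25      # vehicles (illustrative)
VMAX = 5         # maximum speed in cells per step
P_BRAKE = 0.3    # probability of random slowdown (illustrative)

def step(positions, speeds, rng):
    """Advance all cars one time step; returns new (positions, speeds)."""
    order = sorted(range(len(positions)), key=lambda i: positions[i])
    new_pos, new_spd = list(positions), list(speeds)
    for k, i in enumerate(order):
        ahead = order[(k + 1) % len(order)]                 # next car on ring
        gap = (positions[ahead] - positions[i] - 1) % ROAD_LEN
        v = min(speeds[i] + 1, VMAX)                        # accelerate
        v = min(v, gap)                                     # avoid collision
        if v > 0 and rng.random() < P_BRAKE:
            v -= 1                                          # random dawdling
        new_spd[i] = v
        new_pos[i] = (positions[i] + v) % ROAD_LEN
    return new_pos, new_spd

rng = random.Random(1)
positions = sorted(rng.sample(range(ROAD_LEN), N_CARS))
speeds = [0] * N_CARS
for _ in range(200):                 # let start-up transients die out
    positions, speeds = step(positions, speeds, rng)

flow = sum(speeds) / ROAD_LEN        # mean flux: cars passing a cell per step
print(round(flow, 3))
```

Even this toy version reproduces the qualitative flow-density behavior described earlier: raising `N_CARS` toward `ROAD_LEN` first increases and then collapses the measured flow as spontaneous jams appear.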
# Advice

At the conclusion of our judging weekend, the judges as a whole offered the following comments:

# Follow the instructions

- Answer all required parts.
- Make a precise recommendation.
- Don't just copy the original problem statement, but provide us with your interpretation.

# Readability

- Make it clear in the paper where the answers are.
- Many judges find it helpful to include a table of contents.
- Pictures and graphs can help demonstrate ideas, results, and conclusions.
- Use discretion: If your paper is excessively long (we had a paper this year that was over 80 pages, not including computer program listings!), you should probably reconsider the relevance of all the factors that you are discussing. Depending on the round of judging in which your paper is being read, judges typically have between 5 and 30 minutes to read it.

# Computer Programs

- Make sure that all parameters are clearly defined and explained.
- When using simulation, you must run it enough times to have statistically significant output. A single run isn't enough!
- Always include pseudocode and/or a clear verbal description.

# Reality Check

- Why do you think your model is good? Against what baseline can you compare/validate it?
- How sensitive is your model to slight changes in the parameters you have chosen? (sensitivity analysis)
- Complete the analysis circle: Are your recommendations practical in the problem context?

Before the final judging of the MCM papers, a first (or triage) round of judging is held. During triage judging, each paper is skimmed by two or three judges, who spend between 5 and 10 minutes each reading the paper. Typically, when you send your paper off to COMAP, you have about a $43\%$ chance of being ranked higher than Successful Participant. If, however, you survive the triage round, you have about an $80\%$ chance of being ranked higher than Successful Participant.
Head triage judge Paul Boisen offers the following advice to help you survive triage.

# Triage Judge Tips

- Your summary is a key component of the paper; it needs to be clear and contain results. A long list of techniques can obscure your results; it is better to provide only a quick overview of your approach. The Lawrence Technical University paper is a good example of a clear and concise summary.
- Your paper needs to be well organized—can a triage judge understand the significance of your paper in 6 to 10 minutes?

# Triage Judge Pet Peeves

- Tables with columns headed by Greek letters or acronyms that cannot be immediately understood.
- Definitions and notation buried in the middle of paragraphs of text. A bullet form is easier for the frantic triage judge!
- Equations without variables defined.
- Elaborate derivations of formulas taken directly from a text. It is better to cite the book and perhaps briefly explain how the formula is derived. It is most important to demonstrate that you know how to use the formulas properly.

# Reference

Garber, Nicholas J., and Lester A. Hoel. 1999. Traffic and Highway Engineering. Pacific Grove, CA: Brooks/Cole Publishing Company.

# About the Author

After receiving his B.A. in Mathematics and Computer Science in 1984, Mark Parker spent eight years working as a systems simulation and analysis engineer in the defense industry. After completing his Ph.D. in Applied Mathematics at the University of Colorado-Denver in 1995, he taught mathematics and computer science at Eastern Oregon University for two years. He then spent three years on the faculty at the U.S. Air Force Academy teaching mathematics and operations research courses. He now shares a teaching position with his wife, Holly Zullo, at Carroll College in Helena, MT, and spends as much time as he can with his three-year-old daughter Kira.
![](images/f72b55d9542dca4828eb972e117605f325d9e508fa3f290cf136f273d5787ec1.jpg)

# The UMAP Journal

Vol. 23, No. 1

Publisher

COMAP, Inc.

Executive Publisher

Solomon A. Garfunkel

ILAP Editor

Chris Arney

Dean of the School of

Mathematics and Sciences

The College of Saint Rose

432 Western Avenue

Albany, NY 12203

arneyc@mail.strose.edu

On Jargon Editor

Yves Nievergelt

Department of Mathematics

Eastern Washington University

Cheney, WA 99004

ynievergelt@ewu.edu

Reviews Editor

James M. Cargal

Mathematics Dept.

Troy State University

231 Montgomery St.

Montgomery, AL 36104

jmcargal@sprintmail.com

Chief Operating Officer

Laurie W. Aragon

Production Manager

George W. Ward

Director of Ed. Tech.

Roland Cheyney

Production Editor

Pauline Wright

Copy Editors

Seth A. Maislin

Timothy McLean

Distribution Manager

Kevin Darcy

Graphic Designer

Daiva Kiliulis

# Editor

Paul J. Campbell

Campus Box 194

Beloit College

700 College St.

Beloit, WI 53511-5595

campbell@beloit.edu

# Associate Editors

Don Adolphson

David C. "Chris" Arney

Ron Barnes

Arthur Benjamin

James M. Cargal

Murray K. Clayton

Courtney S. Coleman

Linda L. Deneen

James P. Fink

Solomon A. Garfunkel

William B. Gearhart

William C. Giauque

Richard Haberman

Charles E. Lienert

Walter Meyer

Yves Nievergelt

John S. Robertson

Garry H. Rodrigue

Ned W. Schillow

Philip D. Straffin

J.T. Sutcliffe

Donna M. Szott

Gerald D. Taylor

Maynard Thompson

Ken Travers

Robert E.D.
"Gene" Woolsey

Brigham Young University

The College of St. Rose

University of Houston-Downtown

Harvey Mudd College

Troy State University Montgomery

University of Wisconsin—Madison

Harvey Mudd College

University of Minnesota, Duluth

Gettysburg College

COMAP, Inc.

California State University, Fullerton

Brigham Young University

Southern Methodist University

Metropolitan State College

Adelphi University

Eastern Washington University

Georgia College and State University

Lawrence Livermore Laboratory

Lehigh Carbon Community College

Beloit College

St. Mark's School, Dallas

Comm. College of Allegheny County

Colorado State University

Indiana University

University of Illinois

Colorado School of Mines

# MEMBERSHIP PLUS FOR INDIVIDUAL SUBSCRIBERS

Individuals subscribe to The UMAP Journal through COMAP's Membership Plus. This subscription includes print copies of quarterly issues of The UMAP Journal, our annual collection UMAP Modules: Tools for Teaching, our organizational newsletter Consortium, on-line membership that allows members to search our on-line catalog, download COMAP print materials, and reproduce them for use in their classes, and a $10\%$ discount on all COMAP materials.

(Domestic) #2220 \$75

(Outside U.S.) #2221 \$85

# INSTITUTIONAL PLUS MEMBERSHIP SUBSCRIBERS

Institutions can subscribe to the Journal through either Institutional Plus Membership, Regular Institutional Membership, or a Library Subscription. Institutional Plus Members receive two print copies of each of the quarterly issues of The UMAP Journal, our annual collection UMAP Modules: Tools for Teaching, our organizational newsletter Consortium, on-line membership that allows members to search our on-line catalog, download COMAP print materials, and reproduce them for use in any class taught in the institution, and a $10\%$ discount on all COMAP materials.

(Domestic) #2270 \$395

(Outside U.S.)
#2271 \$415

# INSTITUTIONAL MEMBERSHIP SUBSCRIBERS

Regular Institutional members receive only print copies of The UMAP Journal, our annual collection UMAP Modules: Tools for Teaching, our organizational newsletter Consortium, and a $10\%$ discount on all COMAP materials.

(Domestic) #2240 \$165

(Outside U.S.) #2241 \$185

# LIBRARY SUBSCRIPTIONS

The Library Subscription includes quarterly issues of The UMAP Journal, our annual collection UMAP Modules: Tools for Teaching, and our organizational newsletter Consortium.

(Domestic) #2230 \$140

(Outside U.S.) #2231 \$160

To order, send a check or money order to COMAP, or call toll-free 1-800-77-COMAP (1-800-772-6627).

The UMAP Journal is published quarterly by the Consortium for Mathematics and Its Applications (COMAP), Inc., Suite 210, 57 Bedford Street, Lexington, MA, 02420, in cooperation with the American Mathematical Association of Two-Year Colleges (AMATYC), the Mathematical Association of America (MAA), the National Council of Teachers of Mathematics (NCTM), the American Statistical Association (ASA), the Society for Industrial and Applied Mathematics (SIAM), and The Institute for Operations Research and the Management Sciences (INFORMS). The Journal acquaints readers with a wide variety of professional applications of the mathematical sciences and provides a forum for the discussion of new directions in mathematical education (ISSN 0197-3622).

Second-class postage paid at Boston, MA and at additional mailing offices.

Send address changes to:

The UMAP Journal

COMAP, Inc.

57 Bedford Street, Suite 210, Lexington, MA 02420

Copyright 2002 by COMAP, Inc. All rights reserved.

# Table of Contents

# Guest Editorial

An Open Letter

Project INTERMATH Staff 1

# INTERMATH Forum

A Model for Academic Change

Donald Small and Chris Arney 3

Mathematical Modeling as a Thread

Richard D.
West 9

# Special Section on the ICM

Results of the 2002 Interdisciplinary Contest in Modeling

Chris Arney and John H. "Jack" Grubbs 11

Where's the Scrub? Aye, There's the Rub

Victoria L. Chiou, Andrew Carroll, and Jessamyn J. Liu 25

Cleaning Up the Scrub: Saving the Florida Scrub Lizard

Nicole Hori, Steven Krumholtz, and Daniel Lindquist 37

Judges' Commentary: The Outstanding Scrub Lizard Papers

Gary Krahn and Marie Vanisko 47

Author's Commentary: The Outstanding Scrub Lizard Papers

D. Grant Hokit 51

# Articles

Classroom Scheduling Problems: A Discrete Optimization Approach

Peh H. Ng and Lora M. Martin 57

The Optimal Positioning of Infielders in Baseball

Alan Levine and Jordan Ludwick 67

# ILAP Module

Who Falls Through the Healthcare Safety Net?

Marie Vanisko 75

Reviews 83

Guide for Authors 91

![](images/e7bb1ff686fa37a725db3d6a0e20d1d4db211151fced598e59970362ceb1a626.jpg)

# Guest Editorial

# An Open Letter

Project INTERMATH Staff

The letter below is being sent to engineering deans and chairs of mathematics departments, together with a brochure; however, the information is intended for all math-science-engineering faculty and therefore is published here in adapted form.

Dear Dean or Chair:

The National Science Foundation is engaged in a systemic initiative entitled Mathematics Across the Curriculum (MATC). Among the goals of the initiative is the desire to change the culture in which undergraduate mathematics, science, and engineering curricula are designed and presented. From the perspective of the mathematics departments, the desire is for the science and engineering departments to become partners rather than clients in determining what happens on a daily basis in the mathematics classroom.

Project INTERMATH is one of the projects funded under the MATC initiative. During the past 6 years, various initiatives have been created, tested, and adapted for use across the country.
Three of the schools in the 15-school consortium, the United States Military Academy, Carroll College (Montana), and Harvey Mudd College, have developed comprehensive 2-year, 15-credit mathematics experiences in response to ABET's Engineering Criteria 2000. Each of these curricula integrates discrete mathematics and includes dynamical systems, linear algebra, probability and statistics, calculus through vector integral calculus, and differential equations while achieving the outcomes suggested in ABET 2000. Recently, one of the schools, Carroll College, underwent an ABET visit. The visitors were quite impressed with the interaction among math, science, and engineering in the curriculum at Carroll. Subsequently, ABET awarded the ABET Innovation Award to Carroll:

For the adoption of student goals for mathematics majors that embrace the principles of ABET's Engineering Criteria 2000 and for development of an innovative, cross-disciplinary curriculum tailored to the needs of mathematics and other disciplines.

Presently, a consortium is being formed to apply for NSF funds to adapt and implement the curricula mentioned above. Course guides are available for each of the three curricula to help schools adapt the model programs to their needs. The consortium envisions having advisors available from the three projects to assist as needed with the adaptation. The enclosed brochure describes the curricula in more detail. While the current curricula are taught from existing textual materials, a follow-on project is preparing a unified set of materials from which the curriculum can be taught. The project also seeks field testers for the unified materials. For additional information on the Consortium being formed, please contact Frank Giordano at FrankCOMAP@aol.com.

Additionally, Macalester College has developed a scientific programming course for freshman year and a collection of scientific programming projects (with databases available).
The course introduces the student to the solution of problems too large to be solved by conventional means. Finally, all INTERMATH schools have worked with their science and engineering departments to design an impressive collection of interdisciplinary projects. Many of these projects can be introduced simultaneously in freshman mathematics and science courses and revisited later in engineering science and engineering courses. All projects are available on Project INTERMATH's website and many are available in hard copy.

If you would like more information on Project INTERMATH, please visit our website at www.ProjectIntermath.com or www.ProjectIntermath.org. For copies of the brochure concerning a specific INTERMATH Project, or a copy of the Tools for Teaching volume featuring Project INTERMATH and including representative interdisciplinary projects [Campbell 2001], please contact COMAP at (800) 772-6627 or visit the website www.comap.com.

Sincerely,

Project INTERMATH Staff

# References

Campbell, Paul J. (ed.). 2001. Tools for Teaching 2000: ILAP Modules, Interdisciplinary Lively Applications Projects. Lexington, MA: COMAP.

# Editor's Note

Quick responses from the Contest Director, the authors of the Outstanding papers, and the commentators allow publication in this issue of the results of the 2002 Interdisciplinary Contest in Modeling.

# INTERMATH Forum

# A Model for Academic Change

Donald Small

Department of Mathematical Sciences

United States Military Academy

West Point, NY 10996

ad5712@usma.edu

Chris Arney

School of Mathematics and Sciences

The College of Saint Rose

Albany, NY 12203

arneyc@mail.strose.edu

# Introduction

Project INTERMATH, through the utilization of interdisciplinary lively application projects (ILAPs), promotes the development of integrated and interdisciplinary-based mathematics courses, programs, and curricula. To do this, faculty and departments must implement academic change.
This article is based on our experiences (and, hopefully, insights) in implementing these kinds of academic changes and on the current status of Project INTERMATH. We provide a model to describe the people and organizations involved in academic change processes. For some of the details of the dynamics, methods, and results of this type of academic change for Project INTERMATH, we refer you to West [2002] (in this issue) and to earlier articles in Campbell [2001].

# Types of Curriculum Change

Types of curriculum change can be partitioned into two broad categories: evolutionary and revolutionary. Evolutionary change usually involves placing more or less emphasis on a concept. Decreasing the emphasis on integration techniques, for example, is evolutionary change, as is increasing the emphasis on numerical or graphical representation of functions. Evolutionary change can be thought of as continuous change, whereas revolutionary change is discontinuous change. Replacing a standard calculus-dominated core program with a modeling-dominated program that integrates the treatment of data analysis, discrete dynamical systems, matrix algebra, calculus, and differential equations is an example of discontinuous change. Another example is to reform a traditional college algebra course by replacing the focus on symbolic manipulations and abstract drill exercises with a focus on elementary data analysis, functions, modeling, and problem solving in context.

# The Model

The following change model and comments are based on experience gained from a decade and a half of conducting faculty development and dissemination workshops. The first workshops were focused on integrating the use of computer algebra systems (CAS) into instruction and were followed by calculus reform dissemination workshops, then INTERMATH workshops, and finally college algebra reform workshops. The change model that we describe applies to implementing discontinuous change.
It is adapted from a business model for marketing, called the Technology Adoption Life Cycle, described in Moore and McKenna [1999]. They describe the business model pictorially by a bell-shaped curve:

The divisions in the curve are roughly equivalent to where standard deviations would fall. That is, the early majority and the late majority fall within one standard deviation of the mean, early adopters and the laggards within two, and way out there, at the very onset of a new technology, about three standard deviations from the norm are the innovators.

Although the division points in the curriculum-change model may not correspond to standard deviations, the groups and their order are the same. The makeup of the groups spans both academic ranks and years of experience. Typically, a department contains representatives from two or three different groups.

# Applying the Model

# Characterizing the Faculty

A first step in discussing the application of this model to curriculum change is to describe the characteristics of the faculty in each of the groups.

- Innovators (change agents) are visionaries who have studied a particular program and believe that they can develop a much better program. Their goals are usually in clear contrast to the results of the existing curriculum. They usually argue for their vision on a philosophical level, while citing the shortcomings of the present curriculum on a practical level. Visionaries are not content with evolutionary change but want breakthroughs—strategic leaps forward guided by their vision. They are revolutionaries in their field and are willing to devote tremendous efforts in order to translate their visions into curriculum materials.

- Early adopters are risk-takers. They are seriously concerned about the shortcomings of the present curriculum and are willing to take a risk on a new, untried curriculum.
They are often recruited by one or more innovators and do not wait to be convinced by assessment studies or formal, established programs.
- Early-majority faculty are conservative and are driven by a strong sense of practicality. They are content to wait and see how the new curriculum works out in schools similar to theirs. They want to hear reports from well-respected references before making a decision to change but are willing to make small adaptations to make the change work in their local environment. They don't expect perfection in the course materials, but they do expect quality in the content of the materials.
- Late-majority faculty wait until the new curriculum becomes well established, particularly at several large schools. They are interested only in a "tried and true" curriculum that does not require any adaptation on their part. They expect thoroughly edited, high-quality, carefully presented textbooks and materials to be in place before they make the change.
- Laggards are people who do not want to change for any reason and will change only when there is no other choice available.

# The Start-Up

Grant funding plays a critical role in developing and implementing curriculum change. Although there have been exceptions in which missionary zeal and idealism have sustained innovators to proceed without outside funding, the norm is for innovators to be supported by grant funding. An example is the large number of National Science Foundation (NSF) pilot grants followed by a few multi-year development grants that helped initiate and sustain the calculus reform movement for several years.

Most of the responsibility for moving from the innovators stage to the early-adopters stage rests with the innovators. They are the ones who recruit early adopters and who publicize their new programs through sessions at professional meetings, newsletters, and journal articles. Their primary recruiting device is the dissemination workshop.
These workshops also serve to support existing adopters and to further publicize the new curriculum. Supporting recruits through additional faculty development workshops, personal visits, and ongoing communication is essential to helping a recruit become an early adopter. The early-adopters stage may last for five or more years, during which time curriculum materials are class-tested, refined, expanded to provide more instructional help, and, hopefully, picked up by a commercial publisher. This stage is characterized by departments offering experimental sections based on the new program materials.

# Crossing the Chasm

The largest challenge, or "chasm," to use Moore and McKenna's term, is to move a curriculum from the early-adopters stage to the early-majority stage. In general, innovators are not successful in orchestrating this move. Whereas innovators and early adopters often share a similar entrepreneurial spirit, the same is not true of innovators and early-majority faculty. The conservative and pragmatic early-majority faculty are not risk-takers. They demand convincing evidence, not just the results of special sections. Early-majority faculty are often looking for reasons not to adopt the new curriculum, in contrast to the early adopters, who are looking for reasons to try out the new program. An example is a department chairman of a prestigious college who discovered a mistake in a beta copy of a new computer algebra system (CAS) during a dissemination workshop. He announced that his department would not use any CAS until it had been proven to be completely accurate, which he later translated to never. This type of resistance to some academic changes is widespread.

Another source of difficulty in crossing the chasm is the change-diversity of the faculty. This is particularly true in large departments.
When departments attempt to change collectively, not all faculty members are necessarily in the same change group, so the timing and implementation of collective change for large programs and courses are even more difficult. For example, several years ago we were helping a department adopt and implement a new reform calculus text that made use of technology. The faculty with whom we spoke acted like early-majority faculty in regard to reform: they had learned how the book was used at a couple of other schools and were willing to take on a few special adaptations to ensure local success. However, when the implementation didn't go well, we discovered that the majority of the teaching faculty in the project were in fact late-majority faculty. Collectively, they were not willing to stray beyond the established boundaries of a textbook, preferring instead to wait until all the difficulties and new ideas had been resolved. When there are different levels of commitment within a group, it takes more time to resolve those differences before the change can be fully and successfully implemented.

Crossing the chasm into the early majority, and then into the late majority, requires strong efforts by the sales forces of commercial publishers and strong administrative support from department chairs, deans, and provosts. Faculty need to understand their institution's commitment to the change and the value assigned to the extra faculty time required. Unfortunately, the limited profit potential of a new curriculum restricts commercial sales efforts, while the pressure of budgets and student opinion often restricts the necessary administrative support.

# Leadership

Leadership is essential in guiding change through the different stages. Innovators provide this leadership for moving from the innovator stage to the early-adopters stage but usually are not successful in leading departments across the chasm.
Innovators and early-majority faculty have different goals and agendas; early-majority faculty do not usually participate in dissemination workshops; and early-majority faculty demand assessment studies that usually do not exist. Crossing the chasm then rests on the combined efforts of people, such as administrators and commercial salespeople, who may have had no involvement with the development of the new curriculum and thus do not have the personal commitment of an innovator or early adopter. The result is that most discontinuous change programs fail to cross the chasm and thus never achieve their goal of revolutionary change.

# Applying the Model to INTERMATH

Like all reform projects, Project INTERMATH began with several high-energy innovators, who built a bold new integrated and interdisciplinary mathematics program at the United States Military Academy (using ILAPs) and made it work well enough that early adopters at places like Carroll College (Montana) and Harvey Mudd College took the risks to build their own versions of integrated, interdisciplinary curricula.

INTERMATH, now with ILAP publications by COMAP and the MAA and more textual and course materials being published (see the References), seeks early-majority departments and faculty who would like to develop their students through more integrated and interdisciplinary curricular methods. Interested faculty should contact either Donald Small (ad5712@usma.edu) or Gary Krahn (ag2609@usma.edu) at the United States Military Academy for more information.

# References

Arney, David C. 1997. *Interdisciplinary Lively Application Projects (ILAPs)*. Washington, DC: Mathematical Association of America.

________, and Small, Donald. 2002. *Changing Core Mathematics*. MAA Notes Series. In press. Washington, DC: Mathematical Association of America.

Campbell, Paul J. (ed.). 2001. *Tools for Teaching 2000: ILAP Modules, Interdisciplinary Lively Applications Projects*. Lexington, MA: COMAP.
COMAP website for ILAPs: http://www.projectintermath.org/products/listing/.

Moore, Geoffrey A., and Regis McKenna. 1999. *Crossing the Chasm: Marketing and Selling Technology Products to Mainstream Customers*. Revised ed. New York: HarperBusiness. 2001. E-book ed. for Microsoft Reader.

West, Richard D. 2002. Mathematical modeling as a thread. *The UMAP Journal* 23 (1): 9-10.

# About the Authors

![](images/9b9b1bae5835327011a62bf141222b5c642af7676b3bfd1a92e9a0de2e407ad0.jpg)

Don Small graduated from Middlebury College and received his Ph.D. degree in mathematics from the University of Connecticut. He taught mathematics at Colby College for 23 years before joining the Mathematics Department at the U.S. Military Academy in 1991. He is active in the calculus and college algebra reform movements: developing curricula, authoring texts, and leading faculty development workshops. Active in the Mathematical Association of America (MAA), Don served a term as Chairman and two terms as Governor of the Northeast Section and has been a long-term member of the MAA's CRAFTY committee. His interests are in developing curriculum that focuses on student growth while meeting the needs of partner disciplines, society, and the workplace. He is also a member of AMATYC and the AMS.

![](images/e7ce2ecc2e97cc604c74b885044806a0e3229f3fa82dfd44ac9446e022ac4288.jpg)

Chris Arney has an undergraduate degree from the United States Military Academy (USMA) and a Ph.D. from Rensselaer Polytechnic Institute. He taught mathematics at USMA for many years and is the author of several mathematics textbooks and laboratory manuals. Recently, he edited Arney [1997]. His areas of research interest include applied mathematics, numerical analysis, and the history of mathematics. His teaching interests include using computers, writing, and interdisciplinary applications in the mathematics and science curricula.

# Mathematical Modeling as a Thread

Richard D. West

Dept.
of Mathematics

Francis Marion University

Florence, SC 29501

rwest@fmarion.edu

As part of the national movement toward improving mathematics education, intense curriculum reform has taken place at the college level since about 1989. One of the major purposes of this ongoing reform has been to change how students are taught mathematics, with the primary focus being to empower students. At the institutions where this type of reform has been successful, a cultural change around mathematics has taken place. One institution where the culture has changed is the United States Military Academy at West Point.

This may seem odd, as West Point is steeped in tradition and has four large (over 1,000 students) standardized courses in mathematics that all students must take. Yet the leaders of this curriculum change looked at these supposed obstacles as an opportunity and sought to create an integrated mathematics program out of the four distinct courses. To accomplish this metamorphosis, outcome goals and intermediate course objectives were set along five educational threads:

- mathematical modeling,
- scientific computing,
- mathematical reasoning,
- communication, and
- history of mathematics.

The key to the success of reforming the four courses into a four-semester program that empowers students and faculty was the mathematical modeling thread. In the words of many faculty and students, "Modeling made the mathematics relevant and put life into the courses." As a result, students retained more mathematics and made progress toward becoming confident, aggressive problem solvers.

The courses at West Point have always been applied, as all students must be prepared for an engineering minor. So, many problems used in mathematics class routinely come from other disciplines.
The stated goal of the mathematical modeling thread is to get students to

- recognize opportunities for quantitative modeling,
- apply the modeling process in solving relevant problems, and
- test and analyze their models for sensitivity of variables and assumptions.

As the curriculum changed and the faculty endeavored to grow students along these threads, the need for more significant problems, or projects, became obvious. Since 1992, each of the four courses has used interdisciplinary projects developed in conjunction with other disciplines. These projects are called Interdisciplinary Lively Application Projects, or ILAPs. To keep these projects "lively" and relevant, they are changed each year. Also, one of the reform ideas was to compress the curriculum (make it leaner). The faculty found that the ILAPs enabled instructors to become more efficient in covering material and empowered students as modelers and better problem solvers.

As a participant in the reform process at West Point, I developed a teaching style that involves modeling as a thread and integrates projects throughout my courses. Teaching in a public university the past two years, I have adapted this teaching style to all of my new courses.

New interdisciplinary projects have become the norm, even in large standardized college algebra courses. The integration of mathematical modeling has the same benefits at many different levels, from college algebra and calculus courses to graduate mathematics content courses for teachers. In all cases, the success of this curricular change is evident in greater student retention and improved attitudes toward the learning of mathematics.

The purposes and results may differ from one institution to the next, but establishing mathematical modeling as a thread through the use of ILAPs appears to be a successful method to motivate students toward becoming confident, aggressive problem solvers, a goal of many reform projects.
# About the Author

![](images/0ab8da37bf5881dc2702213f99a0fafe28794aee8cbaa97a1e2ef709863d796a.jpg)

After spending 14 years as a mathematics professor at the United States Military Academy at West Point, Rich West retired from the Army to Florence, SC, in 1999. There he continues to teach full-time at Francis Marion University. He received his Ph.D. in Mathematics Education from New York University in 1995. He is interested in curriculum reform, program assessment, and teaching at the college level. He has served as the Managing Director of Project INTERMATH for the past seven years.

# Modeling Forum

# Results of the 2002 Interdisciplinary Contest in Modeling

Chris Arney, Co-Director

Dean of the School of Mathematics and Sciences

The College of Saint Rose

432 Western Avenue

Albany, NY 12203

arneyc@mail.strose.edu

John H. "Jack" Grubbs, Co-Director

Dept. of Civil and Environmental Engineering

Tulane University

New Orleans, LA 70112

jgrubbs@tulane.edu

# Introduction

A total of 106 teams of undergraduates, from 71 institutions in 5 countries, spent the second weekend in February working on an applied mathematics problem in the 4th Interdisciplinary Contest in Modeling (ICM).

This year's contest began at 8:00 P.M. on Friday, Feb. 8, and ended at 8:00 P.M. on Monday, Feb. 11. During that time, teams of up to three undergraduates or high-school students researched and submitted their solutions to an open-ended interdisciplinary modeling problem involving environmental science. After a weekend of hard work, solution papers were sent to COMAP.

The two papers that were judged to be Outstanding appear in this issue of The UMAP Journal. Results and winning papers from the first three contests were published in special issues of The UMAP Journal in 1999 through 2001.

In addition to the ICM, COMAP also sponsors the Mathematical Contest in Modeling (MCM), which runs concurrently with the ICM.
Information about the two contests can be found at

www.comap.com/undergraduate/contest/icm
www.comap.com/undergraduate/contest/mcm

The ICM and the MCM are the only international modeling contests in which students work in teams to find a solution.

Centering its educational philosophy on mathematical modeling, COMAP uses mathematical tools to explore real-world problems. It serves the educational community as well as the world of work by preparing students to become better informed and better prepared citizens, consumers, and workers.

This year's problem, which involved understanding and managing the habitat of the Florida scrub lizard, proved to be particularly challenging. It contained various data sets to analyze, had several challenging requirements needing scientific and mathematical connections, and also had the ever-present requirements to use creativity, precision, and effective communication. The author of the problem, environmental scientist Grant Hokit, was one of the final judges, and his commentary appears in this issue.

All the competing teams are to be congratulated for their excellent work and dedication to scientific modeling and problem solving. This year's judges remarked that the quality of the papers was extremely high, making it difficult to choose the two Outstanding papers.

In 2002, the ICM continued to grow as an online contest, in which teams registered, obtained contest instructions, and downloaded the problem through COMAP's ICM Website.

# Problem: The Scrub Lizard Problem

![](images/b2139cf49df45388e945e80c9f00290e7a25707a664348254f16a38c01d70e6b.jpg)
Figure 1. Florida scrub lizard. Photo by Grant Hokit.

# If We SCRUB Our Land Too Much, We May Lose the LIZARDS

The Florida scrub lizard is a small gray or gray-brown lizard that lives throughout upland sandy areas in the Central and Atlantic coast regions of Florida. The Florida Committee on Rare and Endangered Plants classified the scrub lizard as endangered.
You will find a fact sheet on the Florida Scrub Lizard at http://www.comap.com/undergraduate/contests/icm/2002problem/scrublizard.pdf. [EDITOR'S NOTE: We do not reproduce that document here.]

The long-term survival of the Florida scrub lizard depends upon preservation of the proper spatial configuration and size of scrub habitat patches.

# Task 1

Discuss factors that may contribute to the loss of appropriate habitat for scrub lizards in Florida. What recommendations would you make to the state of Florida to preserve these habitats? Discuss obstacles to the implementation of your recommendations.

# Task 2

Utilize the data provided in Table 1 to estimate the values of $F_{a}$ (the average fecundity of adult lizards), $S_{j}$ (the survivorship of juvenile lizards between birth and the first reproductive season), and $S_{a}$ (the average adult survivorship).

# Table 1.

Summary data for a cohort of scrub lizards captured and followed for 4 consecutive years. Hatchling lizards (age 0) do not produce eggs during the summer they are born. Average clutch size for all other females is proportional to body size according to the function $y = 0.21(\mathrm{SVL}) - 7.5$, where $y$ is the clutch size and SVL is the snout-to-vent length in mm.
| Year | Age | Total number living | Number of living females | Avg. female size (mm) |
|------|-----|---------------------|--------------------------|-----------------------|
| 1    | 0   | 972                 | 495                      | 30.3                  |
| 2    | 1   | 180                 | 92                       | 45.8                  |
| 3    | 2   | 20                  | 11                       | 55.8                  |
| 4    | 3   | 2                   | 2                        | 56.0                  |
# Task 3

It has been conjectured that the parameters $F_{a}$, $S_{j}$, and $S_{a}$ are related to the size and amount of open sandy area of a scrub patch. Utilize the data provided in Table 2 to develop functions that estimate $F_{a}$, $S_{j}$, and $S_{a}$ for different patches. In addition, develop a function that estimates $C$, the carrying capacity of scrub lizards for a given patch.

# Table 2.

Summary data for 8 scrub patches, including vital-rate data for scrub lizards. Annual female fecundity $(F_{a})$, juvenile survivorship $(S_{j})$, and adult survivorship $(S_{a})$ are presented for each patch, along with patch size and the amount of open sandy habitat.
| Patch | Patch size (ha) | Sandy habitat (ha) | $F_a$ | $S_j$ | $S_a$ | Density (lizards/ha) |
|-------|-----------------|--------------------|-------|-------|-------|----------------------|
| a | 11.31  | 4.80  | 5.6  | .12 | .06 | 58  |
| b | 35.54  | 11.31 | 6.6  | .16 | .10 | 60  |
| c | 141.76 | 51.55 | 9.5  | .17 | .13 | 75  |
| d | 14.65  | 7.55  | 4.8  | .15 | .09 | 55  |
| e | 63.2   | 42.12 | 9.7  | .17 | .11 | 80  |
| f | 132.35 | 54.14 | 9.9  | .18 | .14 | 82  |
| g | 8.46   | 1.67  | 5.5  | .11 | .05 | 40  |
| h | 278.26 | 84.32 | 11.0 | .19 | .15 | 115 |
# Task 4

Many animal studies indicate that food, space, shelter, or even reproductive partners may be limited within a habitat patch, causing individuals to migrate between patches. There is no conclusive evidence on why scrub lizards migrate. However, about $10\%$ of juvenile lizards do migrate between patches, and this immigration can influence the size of the population within a patch. Adult lizards apparently do not migrate. Utilizing the data provided in the histogram in Figure 2, estimate the probability of lizards surviving the migration between any two patches $i$ and $j$.

![](images/3ca70bbc48eab906a38a41792179fae40e51800e716e930709cc8c4cf70591fe.jpg)
Figure 2. Migration data for juvenile lizards marked, released, and recaptured up to 6 months later. Surveys for recapture were conducted up to $750~\mathrm{m}$ from release sites.

# Task 5

Develop a model to estimate the overall population size of scrub lizards for the landscape given in Table 3. Also, determine which patches are suitable for occupation by scrub lizards and which patches would not support a viable population.

# Table 3.

Patch size and amount of open sandy habitat for a landscape of 29 patches located on the Avon Park Air Force Range. See Figure 3 for a map of the landscape.
| Patch identification | Patch size (ha) | Sandy habitat (ha) |
|----------------------|-----------------|--------------------|
| 1  | 13.66 | 5.38  |
| 2  | 32.74 | 11.91 |
| 3  | 1.39  | 0.23  |
| 4  | 2.28  | 0.76  |
| 5  | 7.03  | 3.62  |
| 6  | 14.47 | 4.38  |
| 7  | 2.52  | 1.99  |
| 8  | 5.87  | 2.49  |
| 9  | 22.27 | 8.44  |
| 10 | 19.25 | 7.58  |
| 11 | 11.31 | 4.80  |
| 12 | 74.35 | 19.15 |
| 13 | 21.57 | 7.52  |
| 14 | 15.50 | 2.82  |
| 15 | 35.54 | 11.31 |
| 16 | 2.93  | 1.15  |
| 17 | 47.21 | 10.73 |
| 18 | 1.67  | 0.13  |
| 19 | 9.80  | 2.23  |
| 20 | 39.31 | 7.15  |
| 21 | 2.23  | 0.78  |
| 22 | 3.73  | 1.02  |
| 23 | 8.46  | 1.67  |
| 24 | 3.89  | 1.89  |
| 25 | 1.33  | 1.11  |
| 26 | 0.85  | 0.79  |
| 27 | 8.75  | 5.30  |
| 28 | 9.77  | 6.22  |
| 29 | 13.45 | 4.69  |
+ +# Task 6 + +It has been determined from aerial photographs that vegetation density increases by about $6\%$ a year within the Florida scrub areas. Please make a recommendation on a policy for controlled burning. + +![](images/5a23a1f9be6c08e354d88a00434cd1d62a43963879fbe85da30dc57d04c207e1.jpg) +Figure 3. Map of landscape of 29 patches located on the Avon Park Air Force Range. + +# The Results + +Solution papers were coded at COMAP headquarters so that names and affiliations of authors would be unknown to the judges. Each paper was read preliminarily by two "triage" judges at the U.S. Military Academy at West Point, NY. At the triage stage, the summary and overall organization are the basis for judging a paper. If the judges' scores diverged for a paper, the judges conferred; if they still did not agree on a score, a third judge evaluated the paper. + +Final judging took place at the United States Military Academy, West Point, NY. The judges classified the papers as follows: + +
|              | Outstanding | Meritorious | Honorable Mention | Successful Participation | Total |
|--------------|-------------|-------------|-------------------|--------------------------|-------|
| Scrub Lizard | 2           | 16          | 28                | 60                       | 106   |
The two papers that the judges designated as Outstanding appear in this special issue of The UMAP Journal, together with commentaries. We list those teams and the Meritorious teams (and advisors) below; the list of all participating schools, advisors, and results is in the Appendix.

# Outstanding Teams

"Where's the Scrub? Aye, There's the Rub"
Maggie L. Walker Governor's School, Richmond, VA
Advisor: John Barnes
Team members: Victoria L. Chiou, Andrew Carroll, Jessamyn J. Liu

"Cleaning Up the Scrub: Saving the Florida Scrub Lizard"
Olin College of Engineering, Needham, MA
Advisor: Burt Tilley
Team members: Nicole Hori, Steven Krumholtz, Daniel Lindquist

# Meritorious Teams (16 teams)

Beijing University of Posts & Telecommunications, Beijing, China (He Zuguo)

Carroll College, Helena, MT (Sam R. Alvey)

Central South University, Changsha, China (Zhang Hongyan and Zheng Zhoushun)

Dickinson College, Carlisle, PA (Brian S. Pedersen)

Elon University, Elon, NC (Crista Coles) (two teams)

Fudan University, Shanghai, China (Cao Yuan)

Harvey Mudd College, Claremont, CA (Michael E. Moody)

Monmouth College, Monmouth, IL (Christopher Fasano)

Northwestern Polytechnical University, Xi'an, China (Xiao Hua Yong)

Tsinghua University, Beijing, China (Hu Zhiming)

United States Air Force Academy, USAF Academy, CO (Jim West)

University of Missouri, Rolla, MO (Mohamed Ben Rhouma)

University of Science and Technology of China, Hefei, China (Tao Dacheng)

University of Science and Technology of China, Hefei, China (Zhang Hong)

Youngstown State University, Youngstown, OH (Scott Martin)

# Awards and Contributions

Each participating ICM advisor and team member received a certificate signed by the Contest Directors and by the Head Judge. Additional awards were presented to the Maggie L. Walker Governor's School team from the Institute for Operations Research and the Management Sciences (INFORMS).
+ +# Judging + +Director + +Chris Arney, Dean of the School of Mathematics and Sciences, The College of Saint Rose, Albany, NY + +Associate Directors + +Michael Kelley, Dept. of Mathematical Sciences, U.S. Military Academy, West Point, NY + +Gary W. Krahn, Dept. of Mathematical Sciences, U.S. Military Academy, West Point, NY + +Judges + +Richard Cassidy, Dept. of Industrial Engineering, University of Arkansas, Fayetteville, AR + +Grant Hokit, Dept. of Biology, Carroll College, Helena, MT + +Marie Vanisko, Dept. of Mathematics, Carroll College, Helena, MT + +Triage Judges + +Darryl Ahner, Eric Drake, Alex Heidenberg, D. Jacobs, Alan Johnson, Gary Krahn, E. Lesinski, Joe Myers, Mike Phillips, K. Romano, Kathi Snook, B. Stewart, Ani Velo, and Brian Winkel, all of the U.S. Military Academy, West Point, NY. + +# Source of the Problem + +The Scrub Lizard Problem was contributed by Grant Hokit, Dept. of Biology, Carroll College, Helena, MT. + +# Acknowledgments + +Major funding for the ICM is provided by a grant from the National Science Foundation through COMAP. Additional support is provided by the Institute for Operations Research and the Management Sciences (INFORMS). + +We thank: + +- the ICM judges and ICM Board members for their valuable and unflagging efforts, and +- the staff of the Dept. of Mathematical Sciences, U.S. Military Academy, West Point, NY, for hosting the triage judging and the final judging. + +# Cautions + +To the reader of research journals: + +Usually a published paper has been presented to an audience, shown to colleagues, rewritten, checked by referees, revised, and edited by a journal editor. Each of the student papers here is the result of undergraduates working on a problem over a weekend; allowing substantial revision by the authors could give a false impression of accomplishment. So these papers are essentially au naturel. 
Light editing has taken place: minor errors have been corrected, wording has been altered for clarity or economy, style has been adjusted to that of The UMAP Journal, and the papers have been edited for length. Please peruse these student efforts in that context. + +To the potential ICM Advisor: + +It might be overpowering to encounter such output from a weekend of work by a small team of undergraduates, but these solution papers are highly atypical. A team that prepares and participates will have an enriching learning experience, independent of what any other team does. + +# Appendix: Successful Participants + +KEY: + +$\mathrm{P} =$ Successful Participation + +$\mathrm{H} =$ Honorable Mention + +$\mathbf{M} =$ Meritorious + +$\mathrm{O} =$ Outstanding (published in this special issue) + +
| INSTITUTION | CITY | ADVISOR | RESULTS |
|---|---|---|---|
| **ARIZONA** | | | |
| McClintock | Tempe | Ivan Barkdoll | P |
| **CALIFORNIA** | | | |
| Harvey Mudd College | Claremont | Michael Moody | M, H |
| Sonoma State University | Rohnert Park | Elaine McDonald | P |
| **COLORADO** | | | |
| Colorado State University | Fort Collins | Michael Kirby | P |
| United States Air Force Academy | USAF Academy | Jim West | M |
| **GEORGIA** | | | |
| Georgia Southern University | Statesboro | Laurene Fausett | P |
| **ILLINOIS** | | | |
| Monmouth College | Monmouth | Christopher Fasano | M |
| **INDIANA** | | | |
| Earlham College | Richmond | Mic Jackson | H |
| **KENTUCKY** | | | |
| Asbury College | Wilmore | David Couliette | H, P |
| Northern Kentucky University | Highland Heights | Gail Mackin | P |
| **MASSACHUSETTS** | | | |
| Babson College | Wellesley | Steven Eriksen | P |
| Olin College of Engineering | Needham | Burt Tilley | O |
| **MICHIGAN** | | | |
| East Grand Rapids Public Schools | Grand Rapids | Mary Elderkin | P |
| Lawrence Technological University | Southfield | Howard Whitston | H |
| | | Ruth Favro | P |
| **MINNESOTA** | | | |
| St. Cloud State University | St. Cloud | Dominic Naughton | P |
| **MISSOURI** | | | |
| University of Missouri-Rolla | Rolla | Mohamed Ben Rhouma | M |
| **MONTANA** | | | |
| Carroll College | Helena | Sam Alvey | M |
| Montana Tech of the Univ. of Montana | Butte | Richard Rossi | H, P |
| **NEW JERSEY** | | | |
| Rowan University | Glassboro | Hieu Nguyen | P |
| | | Samuel Lofland | P |
| **NEW YORK** | | | |
| U.S. Military Academy | West Point | Mike Huber | H |
| | | Mike Johnson | H |
| **NORTH CAROLINA** | | | |
| Elon University | Elon | Crista Coles | M, M |
| Piedmont Community College | Roxboro | Lisa Cooley | P |
| **OHIO** | | | |
| Ohio Wesleyan University | Delaware | Richard Linder | P, P |
| Youngstown State University | Youngstown | Angela Spalsbury | H |
| | | Scott Martin | M |
| **OREGON** | | | |
| Eastern Oregon University | La Grande | Jeffrey Woodford | P |
| Franklin High School | Portland | David Hamilton | P, P |
| **PENNSYLVANIA** | | | |
| Bloomsburg University | Bloomsburg | Kevin Ferland | P |
| Clarion University of Pennsylvania | Clarion | Andrew Turner | H |
| Dickinson College | Carlisle | Brian Pedersen | M |
| Lafayette College | Easton | Thomas Hill | H |
| **TEXAS** | | | |
| Texas A&M University | College Station | Jay Walton | H |
| **VIRGINIA** | | | |
| Maggie L. Walker Governor's School | Richmond | John Barnes | O, P |
| | | Crista Hamilton | P |
| **WASHINGTON** | | | |
| Pacific Lutheran University | Tacoma | Mei Zhu | H |
| **WISCONSIN** | | | |
| Beloit College | Beloit | Paul J. Campbell | P |
| **CANADA** | | | |
| York University | Toronto, ON | Morton Abramson | P |
| **CHINA** | | | |
| Anhui University | Hefei | Cheng Junsheng | H |
| | | Wang Dapeng | P |
| Beijing Union University | Beijing | Ren Kailong | P |
| Beijing Univ. of Chemical Technology | Beijing | Yan Cheng | H |
| Beijing Univ. of Posts & Telecomm. | Beijing | He Zuguo | M, P |
| Central South University | Changsha | Chen Xiaosong | H |
| | | Zhang Hongyan and Zheng Zhoushun | M |
| Chongqing University Inst. of Math. & Phys. | Chongqing | Qu Gong | P |
| | | He Renbin | P |
| Dalian University of Technology | Dalian, Liaoning | Yu Hongquan | P, P |
| East China Univ. of Science and Technology | Shanghai | Ni Zhongxin | H, P |
| Experimental High School of Beijing Normal University | Beijing | Wang Jiangci | P |
| Fudan University | Shanghai | Cai Zhijie | H |
| | | Cao Yuan | M |
| Hangzhou Univ. of Commerce | Hangzhou | Zhu Ling | H |
| Harbin Engineering University | Harbin | Luo Yuesheng | P |
| | | Zhang Xiaowei | P |
| Harbin Institute of Technology | Harbin | Shang Shouting | P |
| | | Zheng Tong | P |
| Harbin Univ. of Science and Technology | Harbin | Chen Dongyan | H |
| | | Li Dongmei | P |
| Hefei University of Technology | Hefei | Su Huaming | P |
| | | Du Xueqiao | P |
| Jiamusi University College of Mathematics | Jiamusi City | HeiLong and Ji Bai Shan | P |
| Jilin Institute of Technology | Changchun | Lu Jin | H |
| | | Bai Ping | P |
| | | Li Yan | P |
| | | Huang Qingdao | P |
| Jilin University | Changchun | Zhang Kuiyuan | P |
| Jinan University | Guangzhou | Hu Daiqiang | P |
| | | Zhang Lin | P |
| Nanjing University of Science and Technology | Nanjing | Qian Ping | P |
| | | Wu Min | P |
| Nankai Institute of Mathematics | Tianjin | Fu Lei | H |
| Northwestern Polytechnical University | Xi'an | Feng Nie | H |
| | | Xiao Yong Hua | M |
| Peking University | Beijing | Liu Yulong | H, P |
| Shanxi University | Taiyuan | Wang Guang | P |
| | | Ding Juntang | P |
| South China Univ. of Technology | Guangzhou | Liang Fa | H |
| | | Hong Yi | P |
| Tsinghua University | Beijing | Hu Zhiming | M |
| | | Ye Jun | P |
| University of Science and Technology of China | Hefei | Zhang Hong | M |
| | | Tao Dacheng | M |
| Xi'an Jiaotong University | Xi'an | He Xiaoliang | H, P |
| Zhejiang University | Hangzhou | Yang Qifan | P |
| | | Yong He | P |
| Zhongshan University | Guangzhou | Chen Zepeng | P |
| | | Tang Mengxi | P |
| **FINLAND** | | | |
| Päivölä College | Tarttila | Merikki Lappi | H |
| **IRELAND** | | | |
| University College Dublin | Dublin | Michael Mackey | H, P |
+ +# Editor's Note + +For team advisors from China, we have endeavored to list family name first, with the help of Zheng Rong. + +# Where's the Scrub? Aye, There's the Rub + +Victoria L. Chiou + +Andrew Carroll + +Jessamyn J. Liu + +Maggie L. Walker Governor's School + +for Government and International Studies + +Richmond, VA + +Advisor: John A. Barnes + +# Abstract + +We use data from eight patches inhabited by scrub lizards in logistic regressions to predict from the area of sandy habitat the average fecundity, juvenile and adult survivorship, and total population of a patch. + +From the viewpoint of evolutionary biology, we analyze the marginal benefit and risk for an individual lizard migrating. The probability of dying during migration is $30\%$ , with a $0.3\%$ marginal risk per meter migrated. + +We determine which patches at Avon Park Air Force Base are self-sustaining and which are sustained by migration; our model is $76\%$ accurate in predicting whether a patch is occupied. + +We recommend removing encroaching vegetation through roller-cutting as opposed to controlled burning, due to the high intensity of a fire required to burn scrub and due to the public discomfort with controlled burning. + +# Introduction + +Because of the immense diversity within the Florida sand pine scrub ecosystem, the World Wildlife Organization has granted the Florida scrub "ecoregion" status; at a mere $3900\mathrm{km}^2$ , it is among the smallest ecoregions of the contiguous United States. + +This ecoregion is a "naturally fragmented archipelago of habitat islands" [Branch et al. 1999]. Isolated light-colored patches of sandy soil, obscured by litter and lichens, are surrounded by areas of dense scrub thicket. + +The Florida scrub is rapidly deteriorating due to human development and replacement of scrub by citrus groves, pasturelands, and pine plantations. 
Extensive human disturbance and development of scrub areas has increased the fragmentation and isolation of scrub patches, and led to fire suppression. + +Florida scrub must be maintained by periodic intense fires. Scrub patches burn naturally every 15 to 100 years [Harper and MacAllister 1998]. Because of human development, fires have been suppressed for the past 80 years; this suppression has led to a decrease in the number of available scrub patches, reduction of scrub patch size, decline of habitat quality, and increased patch isolation [Branch et al. 1999]. + +Conservation efforts have involved proposals for prescribed burning and buying up scrub lands for consolidation. Scientists and conservationists should combine their efforts to provide the public with critical information on the needs of imperiled, threatened, and endangered species, particularly those endemic to the Florida scrub area. While much government money has been directed towards wetland conservation, the Florida scrub contains more vulnerable species than the wetlands for which the state is known [Harper and MacAllister 1998]. If appropriate measures are not taken to protect habitat, the imperiled Florida scrub lizard will become endangered or even extinct. Before further policies on prescribed burning or mechanical methods of vegetation clearance can be implemented, the public must understand the benefits of such policies. + +# Food for the Brood: Lizard Fecundity and Survivorship + +# Assumptions + +- The only factors that contribute to change in population are fecundity and survivorship. +- There are numerous definitions and levels of fecundity. We use annual female fecundity, the total number of offspring per female in one full year. +- We do not consider age a determinant of sexual maturity, except that lizards do not reproduce in the same season in which they were born, regardless of size [Antonio 2000]. 
Fecundity is affected by the size and age of the lizard, available food and nutrition sources, sex ratio, environmental fluctuations, and the temperature and humidity of the area. Sex ratios of lizard populations are typically about one-to-one.

Clutches range from two to eight eggs [Branch and Hokit 2000]. The major factor affecting clutch size is the size of the lizard, which is proportional to snout-to-vent length (SVL), with clutch size given by the function

$$
y = 0.21(\mathrm{SVL}) - 7.5. \tag{1}
$$

Body size is critical because lizards require stored energy (in the form of fat reserves) to produce eggs; body size increases with age.

Lizards lay from three to five clutches in one reproductive season; this number is the clutch frequency. Since direct data collection is nearly impossible, clutch frequency is often estimated as the duration of the active season divided by the time to produce a clutch. This estimate may be inaccurate, due both to variability in the time to produce a clutch and to the reproductive season being shorter than the active season.

The incubation time is 30 days; so, with a reproductive season of late March through June, clutch frequency is approximately three. This agrees with researchers who determined that there are not enough data to calculate clutch size or clutch frequency and who thus assumed an average of four eggs per clutch and three clutches per season [Branch et al. 1999].

Survivorship is the ratio of the number of lizards alive at age $x$ to the number alive at age $x - 1$; it is generally measured by sequential sampling of a marked cohort of individuals. Losses due to emigration are small compared to those from mortality and tend to be balanced by gains from immigration.
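As a quick check, the survivorship ratios just defined can be recomputed directly from the cohort counts in Table 1 of the problem statement. The Python sketch below (not from the paper; the cohort numbers are hard-coded from the table) does so, and also applies the clutch-size function (1). The paper's fecundity figure additionally folds in clutch frequency and cohort weighting, which this sketch does not attempt.

```python
# A sketch recomputing survivorship from the published cohort counts
# in Table 1 of the problem statement (not the paper's own code).

# Rows: (age, total living, living females, avg. female SVL in mm)
cohort = [
    (0, 972, 495, 30.3),
    (1, 180,  92, 45.8),
    (2,  20,  11, 55.8),
    (3,   2,   2, 56.0),
]

def clutch_size(svl):
    """Eggs per clutch as a function of snout-to-vent length, eq. (1)."""
    return 0.21 * svl - 7.5

# Juvenile survivorship: fraction of the age-0 cohort alive at age 1.
S_j = cohort[1][1] / cohort[0][1]

# Adult survivorship: mean of the year-to-year survival ratios
# among lizards that have already reached age 1.
ratios = [cohort[i + 1][1] / cohort[i][1] for i in range(1, len(cohort) - 1)]
S_a = sum(ratios) / len(ratios)

print(round(S_j, 3), round(S_a, 3))   # 0.185 0.106
print(round(clutch_size(45.8), 2))    # 2.12 eggs for an average age-1 female
```

The two survivorship values match the ones reported in the paper, which suggests this is essentially the calculation the authors performed.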
Using Table 1 of the problem statement and applying (1), with the assumption of three clutches of four eggs each per season, we find:

- $F_{a} = 5.33$, the average annual fecundity;
- $S_{j} = 0.185$, the juvenile survivorship rate from age 0 to 1 (between birth and the first reproductive season); and
- $S_{a} = 0.106$, the average adult survivorship rate.

# Modeling Female Lizard Growth

Female reptile growth can be split into three periods:

- growth until sexual maturity (period 1),
- growth after sexual maturity until optimal size (period 2), and
- growth after optimal size (period 3).

Growth is generally rapid until reaching sexual maturity and much slower thereafter [Heatwole 1976].

The growth rate in period 1 may be estimated from the average hatchling size, lizard size at sexual maturity, and time necessary to reach maturity. The lizard is 21 mm at hatching but reaches 45 mm by sexual maturity in 10 to 11 months [Gans and Pough 1982; Branch et al. 1999]; hence the growth rate is 2.2 to 2.4 mm/month.

After sexual maturity, the lizard continues to grow at a lower rate to optimal size. If the lizard is still alive after this point, its growth tapers to a still lower rate that continues for the rest of its life. Since most scrub lizards do not live past two years of age [Branch et al. 1999], we assume that the growth between ages 1 and 2 years in Table 1 of the problem statement is period-2 growth (0.83 mm/month, on average), and the growth between ages 2 and 3 years is period-3 growth (0.02 mm/month, on average).

# Scrub, Sand, and Survivorship: Modeling Lizard Carrying Capacity, Fecundity, and Survivorship

Much of the variation in annual female fecundity, juvenile survivorship, adult survivorship, and density is explained by patch size and amount of sandy habitat.
However, $97\%$ of the variation in sandy habitat area is explained by the size of the patch; so in our regression analyses, we use only one of those two variables (whichever has the higher correlation with the variable of interest).

The area of the sandy habitat has a large impact on average fecundity ($r^2 = .77$), with predicted fecundity varying from 5.9 to 11.7 from the smallest to the largest patches. Area of sandy habitat also greatly affects ($r^2 = .81$) adult survival rate, with predicted values ranging from .07 to .16.

However, survivorship of juvenile lizards is less related to patch size ($r^2 = .66$), varying from .14 to .20. For juveniles, the probability of successful emigration may play a significant role in survivorship.

Juvenile survivorship and adult survivorship are closely linked ($r^2 = .96$). This is expected, since juveniles do not differ greatly from adults in structure, metabolism, predators, habitat, or food sources.

The proportion of the patch occupied by sandy habitat is an extremely poor predictor of average fecundity, survivorship, or density, with the highest $r^2$ being .04.

We use a logistic model to predict average fecundity, juvenile survivorship, and adult survivorship from the area of sandy habitat. We choose a logistic model because as the area of sandy habitat increases, fecundity and survivorship do not continue to increase without bound, as would occur with a linear model, but instead tend toward a maximum.

The regression reveals the following relationships with area covered by sandy habitat $(x)$:

$$
\text{annual fecundity} = \frac{10.3}{1 + 1.42 e^{-0.096x}}
$$

$$
\text{juvenile survivorship} = \frac{0.179}{1 + 0.89 e^{-0.169x}} \tag{2}
$$

$$
\text{adult survivorship} = \frac{0.139}{1 + 1.93 e^{-0.123x}}.
$$

To model carrying capacity, we calculated the number of lizards in each patch by multiplying density by patch size, using the data in Table 1 of the problem statement. We regressed population density on patch size $z$ ($r^2 = .87$) and predicted the number of lizards in a patch as the predicted population density times patch size, arriving at

$$
\text{number of lizards} = 0.227z^{2} + 51.2z, \tag{3}
$$

with $r^2 = .999$. Does this equation also give a good estimate for the carrying capacity of a patch? Lizard populations tend to be quite stable, fluctuating only mildly from carrying capacity [Gans and Tinkle 1977]. That the actual lizard populations correspond so closely to the predicted values suggests that the populations are at carrying capacity.

# Lizard Migration Motivation

It is difficult to determine the probability of a lizard dying during migration based on the proportion of lizards recaptured at various distances from the location of initial capture. That the proportion recaptured decreases with increasing distance could be the result of lizards dying between each of the recapture sites or of lizards ceasing to migrate after having traveled a certain distance.

We could use the average speed of dispersal (2.5 m/day) to derive the mortality rate for each day migrated [Branch et al. 1999]. However, doing so assumes that no lizard reaches its destination: All are killed en route to an ideal location that is theoretically an infinite distance away. To calculate accurately the probability of dying during migration, we need to analyze the cause of migration.

There are no conclusive data on why lizards migrate. Other animals migrate based on the availability of food, space, shelter, or reproductive partners, but apparently a universal $10\%$ of juvenile scrub lizards migrate regardless of any environmental attribute so far measured.

An individual's movement to a new patch brings genetic material.
Can this influx of genes be shown to benefit the lizard population and the individual lizard?

For any species to survive, it must use a reproductive strategy that allows rapid adaptation relative to changes in the environment. Less-evolved species rely heavily on genetic diversity, natural selection, and learned behavior to maintain adaptability, as well as on producing far more offspring in a short span of time than can survive. Because so few lizards leave their natal patch, the population of lizards in a patch exhibits little genetic diversity.

The individual lizard must derive a benefit from being the parent of the only offspring in the patch with a genetic advantage; otherwise, evolution would not select for lizards that emigrate at a rate of $10\%$. The potential benefits of migration are moving to an area with

- greater fecundity,
- greater survivorship, or
- an advantage for progeny over other lizards.

The potential risks are

- moving to an area with lesser fecundity,
- moving to an area with lesser survivorship, and
- dying en route.

Due to the strong correlation ($90\%$) between area of sandy habitat and number of lizards in a patch, most lizards live in patches with fecundity and survivorship rates above those of the average patch. Thus, the only advantage of emigrating is introducing genes with a selective advantage into the new patch.

By moving to a different patch, the predicted number of offspring falls from 10.1, the average fecundity of all lizards, to 7.8, the average fecundity of all patches. (We assume that males have the same fecundity rate as females.) The decrease of 2.3 indicates a penalty of producing $23\%$ fewer offspring by migrating.

Since $10\%$ of juveniles emigrate, the net benefit of successful emigration should theoretically be $10\%$ as well. The average distance traveled by migrating lizards was $105.5 \mathrm{~m}$, so the marginal benefit of traveling 1 m successfully should be $10\% / 105.5 \mathrm{~m} \approx 0.095\% / \mathrm{m}$.
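These cost-benefit figures are easy to reproduce; a minimal sketch (variable names are ours):

```python
# Reproducing the migration penalty and marginal-benefit figures above.

avg_fecundity_lizards = 10.1   # average fecundity over all lizards
avg_fecundity_patches = 7.8    # average fecundity over all patches

# Fecundity penalty for moving to an average patch:
penalty = (avg_fecundity_lizards - avg_fecundity_patches) / avg_fecundity_lizards
print(round(100 * penalty, 1))      # 22.8 (% fewer offspring)

# A net benefit of 10% spread over the average migration distance:
migration_benefit = 10.0   # %, equal to the 10% emigration rate
avg_distance = 105.5       # m
marginal_benefit = migration_benefit / avg_distance
print(round(marginal_benefit, 4))   # 0.0948 %/m
```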
Beyond $400\mathrm{~m}$, no lizards are recaptured. We take this as the distance beyond which no lizards emigrate; at this distance, the net benefit of migration is $-100\%$.

Since the migration penalty in fecundity does not vary with distance traveled, we can relate marginal benefit per meter to the average probability of dying en route:

$$
b = 0.0948d - rd - 22.8,
$$

where

$d =$ the distance traveled (m),

$b =$ the net benefit of migration (%),

$r =$ the marginal risk per meter of dying en route (%/m), and

22.8 is the percentage fecundity penalty of migrating to an average patch.

At a distance of $400\mathrm{~m}$, the net benefit is $-100\%$. We put $d = 400$, $b = -100$, and solve for $r$, finding $r = 0.288\%$ deaths/meter.

We estimate the average death rate $D$ for all emigration based on the average distance traveled (105.5 m): $D = 0.288\%/\mathrm{m} \times 105.5 \mathrm{~m} = 30.4\%$. The average mortality rate for the juvenile population due to migration equals $D$ ($30.4\%$) times the propensity to migrate ($10\%$), or about $3\%$.

# Patch Occupation and Viable Population

Previous lizard studies at Avon Park Air Force Base use a measure of isolation $S_{i}$ of a patch $i$:

$$
S_{i} = \sum p_{j} e^{-d_{ij}} A_{j},
$$

with the sum taken over all patches $j$ with $j \neq i$ [Branch et al. 1999]. Thus, isolation is a function of

- $d_{ij}$, the distance between patches $i$ and $j$; and
- $A_{j}$, the area of patch $j$.

The value of $p_j$ is 1 if patch $j$ is occupied, 0 otherwise.

The distance between patches determines the difficulty of movement between patches, while the area of the patch determines the possible number of migrants, since area has a strong correlation with patch population.

Branch et al. [1999] determine that the probability that a patch is occupied is given by

$$
P_{i} = \frac{\exp(0.61A_{i} + 0.05S_{i} - 5.22)}{1 + \exp(0.61A_{i} + 0.05S_{i} - 5.22)},
$$

where for patch $i$ we have

$P_{i} =$ the probability that the patch is occupied,

$A_{i} =$ the area of the sandy habitat of the patch, and

$S_{i} =$ the isolation parameter for the patch.

This equation predicted patch occupancy for scrub patches within the Avon Park Air Force Range with $89\%$ accuracy [Branch et al. 1999]. However, it can be used only if it is known which patches are occupied; yet the goal is to predict patch occupation without knowing which patches are occupied.

To accomplish this latter goal, we use our logistic regressions (2). We formulate the ability of each patch to sustain its population by comparing average fecundity to the average number of deaths through the equation

$$
\mathrm{Sustainability} = 1 + \left(F_{a} - \left[\frac{1}{S_{j}} + \frac{1}{S_{j} S_{a}} + \frac{1}{S_{j} S_{a} S_{a}} + \dots + \frac{1}{S_{j} S_{a}^{n}}\right]\right).
$$

This equation gives the sustainability of a patch: the average number of lizards that a single lizard will yield each year from the patch. The number of lizards in the patch will grow, stay the same, or decrease, depending on whether sustainability is above 1, equal to 1, or below 1.

According to our logistic regressions, only patches 2, 9, 12, 15, and 17 have sustainability greater than 1. They are thus the only patches capable of maintaining a population, apart from migration.

# Migration

To factor in the effects of migration, we use a long-term approach. We assume that each patch is at carrying capacity, so excess lizards that are produced in patches with sustainability greater than 1 migrate to other patches. For each patch, we estimate the number of lizards generated above replacement level each year by multiplying the predicted population, from (3), by the sustainability.
We assume that migrating lizards distribute themselves uniformly among adjacent patches; if an inhabited patch is adjacent to two uninhabited patches, half of the migrating lizards in the inhabited patch attempt to migrate to one uninhabited patch and the other half attempt to migrate to the other. We calculate the number of lizards that die between patches using the previously calculated marginal risk of dying en route, $r = 0.288\%/\mathrm{m}$.

We apply these formulas in a series of "rounds" that move the number of offspring above the equilibrium number from the inhabited patches to the uninhabited ones. In each round, the effect of migration is first calculated between adjacent inhabited patches; then the effect of migration to uninhabited patches is taken into account.

After each round, patches at equilibrium are classified as inhabited. Patches that changed from a yearly deficit of lizard production to a yearly surplus, due to migration, are placed into the next round as occupied patches that generate migration into unoccupied adjacent patches. After six rounds, all the patches are either at equilibrium or have a yearly deficit of lizards even with migration.

Our model predicts that patches 2, 3, 9, 10, 11, 12, 15, 17, 21, 22, and 23 are occupied. This accurately predicts the occupancy status of 22 of the 29 patches (76%). Furthermore, the model is not systematically biased: four unoccupied patches (2, 3, 9, and 10) are predicted as occupied, and three occupied patches (5, 13, and 23) are predicted as unoccupied.

Assuming that the population in each inhabited patch is at carrying capacity, the total number of lizards in the Range is 17,679.

# A Policy for Controlled Burning

Florida scrub must be maintained by periodic intense fires: Flora and fauna of the scrub require fire to disperse seeds, regenerate, and clear dense brush.
As vegetation becomes increasingly dense, sandy patches experience fragmentation and may disappear [Harper and MacAllister 1998]. Natural burns occur every 15 to 100 years. The U.S. Army Corps of Engineers recommends prescribed fires every 8 to 20 years [Harper and MacAllister 1998].

Prescribed fires are a heatedly debated remedy, particularly since scrub lands have a high real-estate value. Nearby homeowners fear that prescribed fires may get out of control, as happened with recent prescribed fires in Texas and California that destroyed more than 200 homes.

# Part 1: Vegetation Model

# Assumptions

- The $6\%$ increase in vegetation density per year noted in the problem statement decreases sandy habitat, and it applies both to scrub areas in their entirety and to all Florida scrub areas.
- The rate of increase of vegetation density remains constant in subsequent years.

We use a spreadsheet to simulate overgrowth of vegetation. Using Table 3 of the problem statement, we calculate the percentage of sandy habitat per patch; the average is $39.2\%$. Per our assumption, we apply this average to the whole Florida scrub ecoregion.

The initial sandy habitat area, that is, the sandy habitat area directly prior to the establishment of the $6\%$ vegetation density growth rate, is

$$
S_{0} = 390{,}000(0.392) = 152{,}773.
$$

The amount of remaining sandy habitat in subsequent years, given a $6\%$ vegetation density growth rate, is calculated from

$$
S_{t} = (1 - 0.06)\,S_{t-1},
$$

whose solution is

$$
S(t) = 152{,}773\, e^{-0.0619t}
$$

for time $t$ in years.

# Assessing the Model

Our model relies strongly on statistical analyses of experimental data and on evolutionary theory to create equations and theories that apply to all scrub lizard populations.
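As a numerical check of the Part 1 vegetation model: the year-by-year recurrence and the exponential closed form agree closely, since ln 0.94 is about -0.0619. A minimal sketch (the 10-year horizon is an arbitrary choice):

```python
# Sandy-habitat decay under 6% annual vegetation-density growth:
# recurrence S_t = 0.94 * S_{t-1} versus closed form S(t) = S0 * exp(-0.0619 t).
import math

S0 = 152_773.0   # initial sandy habitat from the model above

def sandy_recurrence(years: int) -> float:
    s = S0
    for _ in range(years):
        s *= 0.94   # a 6% vegetation-density increase removes 6% of sandy habitat
    return s

def sandy_closed_form(years: float) -> float:
    return S0 * math.exp(-0.0619 * years)

# The decay constant is just -ln(0.94), so the two agree to within
# the rounding of 0.0619:
print(round(sandy_recurrence(10)))
print(round(sandy_closed_form(10)))
```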
This is necessary because of the scarcity of documented and quantified relationships between vital attributes of scrub lizards (such as food, shelter, and space requirements, predatory and density limitations, the influence of temperature and rainfall, or why scrub lizards migrate) and scrub lizard fecundity and survivorship. As a result, our model goes a long way with few concrete data, predicting such diverse attributes as the marginal risk of dying per meter migrated and the number of years that the population of a patch can survive without encroaching vegetation being cleared.

Because we use few constants in our equations and rely more upon logistic relationships between data and basic evolutionary principles, our model should be easily adaptable to most species that live in patches. Only a few data about fecundity, survivorship, relationship to habitat, population density, and tendency to migrate are required to predict which patches are inhabited, which patches are necessary to sustain a population throughout the region, the net benefit of migrating, and the relationship between size and fecundity. Although our model analyzes the population dynamics of the scrub lizard, it could just as easily apply to the scrub jay.

Another advantage of our model is the speed and ease with which it can be run and adapted. Our model requires only a spreadsheet program, a calculator that can perform logistic regressions, and minimal data-entry time.

Although it would have been possible to relate patch size, sandy habitat area, fecundity, survivorship, and density with a multiple regression, we believe that a logistic regression better represents the diminishing returns of increases in patch size and sandy habitat on survivorship and fecundity.

A weakness of this approach is that our model is not very robust.
Because there are so few data, our assumptions are flawed, and so the only accurate piece of our model is the set of logistic equations, which are not by themselves useful for predicting which patches are inhabited. However, all our assumptions are grounded in basic principles of biology and evolution. Also, our model is at greater risk than most if the data are inaccurate, because it relies on so few data points.

# Our Proposal

The risks of controlled burning, and public opposition to it, outweigh the support of conservationists. Controlled burning incurs tremendous risks to human life and property; for example, the voluminous amounts of noxious smoke would be detrimental to air quality and public health [Harper and MacAllister 1998]. Inappropriate smoke management would severely reduce visibility for vehicle operators and pose a health risk to those with respiratory problems.

Alternatives include numerous upland management strategies, such as scraping, chaining, cabling, railing, rollerchopping, shredding, and rotobeating. The U.S. Army Corps of Engineers has found that many scrub flora species respond nearly equally to fire and mechanical methods. Other studies indicate that mechanical methods stimulate seed germination of some scrub species [Harper and MacAllister 1998].

We recommend that mechanical methods such as rollerchopping be implemented in place of controlled burning. Rollerchopping involves a tractor or bulldozer pulling a steamroller drum with chopper blades through the brush [Payne and Bryant 1994]. Rollerchopping has resulted in reduction of coarse woody debris, increased open sandy habitat, increased stand quality—and higher lizard density.

Consolidation of scrub patches would likely have a positive effect on lizard populations [Branch et al. 1999]. The U.S. Army Corps of Engineers recommends creation of larger scrub patches [Harper and MacAllister 1998], which can be achieved through restoration of surrounding degraded scrub patches.
Sand roads should be used to connect patches, to facilitate migration, to improve gene flow, and to allow recolonization of patches [Harper and MacAllister 1998]. Disturbances such as road creation and extensive development should be avoided. Roads and construction act as barriers that increase the fragmentation of existing scrub patches.

# References

Antonio, A.L. 2000. Sceloporus woodi species account. Animal Diversity Web. http://animaldiversity.ummz.umich.edu/accounts/sceloporus/s._woodi\$narrative.html.

Branch, L.C., B.M. Stith, and D.G. Hokit. n.d. Effects of landscape structure on the Florida scrub lizard. Retrieved February 9, 2002, from http://enr.ifas.ufl.edu/publications/NFR_98/oral_up2.htm.

Branch, L.C., et al. 1999. The effects of landscape dynamics on endemic scrub lizards: An assessment with molecular genetics and GIS modeling. Retrieved February 9, 2002, from http://wld.fwc.state.fl.us/cptps/PDFs/Reports/Branch-lizards.pdf.

Branch, L.C., and D.G. Hokit. 2000. Scrub Lizard Fact Sheet WEC 139. Florida Cooperative Extension Service. Gainesville: Institute of Food and Agricultural Sciences, University of Florida.

Brewer, R. 1979. *Principles of Ecology*. Philadelphia: Saunders.

Finney, M.A., and U.S. Department of Agriculture and Forest Services. 1998. FARSITE: Fire Area Simulator Model Development and Evaluation (March 1998). Retrieved February 9, 2002, from http://firelab.org/pdf/fbp/finney/fireareato.pdf.

Florida Department of Environmental Protection. 2001. State of our lands. Retrieved February 9, 2002, from http://www.afn.org/~ese/endang.txt, http://www.dep.state.fl.us/lands/div/newsletter.

Florida Natural Areas Inventories. 2001. Florida scrub lizard. Retrieved February 9, 2002, from http://www.fnai.org/FieldGuide/pdf/Sceloporus_woodi.pdf.

Franklin, S.E. 2001. *Remote Sensing for Sustainable Forest Management*. Boca Raton: Lewis Publishers.

van Gadow, K. (ed.). 2001. *Risk Analysis in Forest Management*. Dordrecht: Kluwer Academic Publishers.
Gans, C., and R.B. Huey (eds.). 1988. *Biology of the Reptilia: Defense and Life History*, vol. 16, Ecology B. New York: Alan R. Liss.

Gans, C., and F.H. Pough (eds.). 1982. *Biology of the Reptilia: Physiological Ecology*, vol. 13, Physiology D. London: Academic Press.

Gans, C., and D.W. Tinkle (eds.). 1977. *Biology of the Reptilia: Ecology and Behavior*, vol. 7. London: Academic Press.

Giles, R.H., Jr. 1978. *Wildlife Management*. San Francisco: W.H. Freeman.

Gurney, W.S.C., and R.M. Nisbet. 1998. *Ecological Dynamics*. New York: Oxford University Press.

Harper, M.G., and B.A. MacAllister. 1998. Management of Florida scrub for threatened and endangered species. Retrieved February 9, 2002, from http://www.cecer.army.mil/techreports/Tra_scrrb.lln/TRA_SCRB.LLN.post.pdf.

Heatwole, H. 1976. *Reptile Ecology*. St. Lucia, Q.: University of Queensland Press.

Huey, R.B., E.R. Pianka, and T.W. Schoener (eds.). 1983. *Lizard Ecology: Studies of a Model Organism*. Cambridge: Harvard University Press.

Jorgensen, S.E., B. Halling-Sorensen, and S.N. Nielsen (eds.). 1996. *Handbook of Environmental and Ecological Modeling*. Boca Raton: Lewis.

Knight, C.B. 1965. *Basic Concepts of Ecology*. New York: Macmillan.

May, R.M. (ed.). 1981. *Theoretical Ecology: Principles and Applications*. 2nd ed. Oxford: Blackwell Scientific.

Myers, R.L., and J.J. Ewel (eds.). 1990. *Ecosystems of Florida*. Orlando: University of Central Florida Press.

Newman, E.L. 2000. *Applied Ecology and Environmental Management*. 2nd ed. Oxford: Blackwell Science.

Orr, R.T. 1961. *Vertebrate Biology*. Philadelphia: Saunders.

Payne, N.F., and F.C. Bryant. 1994. *Techniques for Wildlife Habitat Management of Uplands*. New York: McGraw-Hill.

Pianka, E.R. 1986. *Ecology and Natural History of Desert Lizards: Analyses of the Ecological Niche and Community Structure*. Princeton, N.J.: Princeton University Press.

Spellerberg, I.F., and S.M. House. 1982.
Relocation of the lizard *Lacerta agilis*: An exercise in conservation. *British Journal of Herpetology* 6(7): 245-248.

Wenger, K.F. (ed.). *Forestry Handbook*. 2nd ed. New York: Wiley.

Woolfenden, G.E., and J.W. Fitzpatrick. 1984. *The Florida Scrub Jay: Demography of a Cooperative-Breeding Bird*. Princeton, N.J.: Princeton University Press.

# Cleaning Up the Scrub: Saving the Florida Scrub Lizard

Nicole Hori

Steven Krumholtz

Daniel Lindquist

Olin College of Engineering

Needham, MA

Advisor: Burt Tilley

# Introduction

The Florida scrub lizard is a victim of human development and of detrimental human involvement in the environment. This lizard lives with its "family" of 13 other animals in the Florida scrublands (Figure 1). Many lizards have found that their houses of open sand are being invaded by increasing human-aided dominance of flourishing scrub. This dominance has left many lizards homeless.

Our goal is to provide information that can help save the scrub lizards by modeling many different aspects of their life and their environment, and by locating abundant safe places for occupation.

# Preserving Scrub Lizard Habitat

Human development of land is the largest factor in the loss of habitat for the Florida scrub lizard (*Sceloporus woodi*). In addition to converting lizard habitat to human habitat in the form of roads, homes, and citrus fields, development prevents natural lightning-sparked fires from sweeping freely across the landscape [Smith 1999]. For decades, fires have been seen by humans as destructive rather than beneficial, and so have been suppressed. Human prevention of such natural fires has led to overgrowth and increased shading and leaf litter, gradually shrinking the open sandy areas in which Florida scrub lizards live.

Though there are no clear data regarding extinctions and recolonizations of lizards in the scrub, the distribution of the taxa suggests that both are frequent and may be especially common in small patches [Branch et al.
1999, 3, 22].

![](images/edb99105ed1d878f736d4820b38092699222f7478db1a5e340f3117cfff77a58.jpg)
Figure 1. The Florida scrub family. From top left to bottom right: blue-tailed mole skink, southeastern five-lined skink, eastern diamondback rattlesnake, sand skink, Chuck-Will's-Widow, scarlet kingsnake, short-tailed snake, Florida scrub lizard, eastern coachwhip, silver-backed argiope, gopher frog, Florida worm lizard, Florida gopher tortoise, Florida scrub jay.

Human development has resulted in fragmentation, which creates barriers between patches of scrub that prevent lizards from migrating to repopulate areas and exchange genetic information [Branch and Hokit 2000]. Lizards in small scrub patches in urban areas in Titusville and Naples were far less genetically diverse than those in the Jonathan Dickinson State Park, which has about 1,900 ha of continuous scrub [Branch et al. 1999, 52].

Fires are an integral component of the natural scrub ecosystem and without human intervention would occur in a given area approximately once every 6-20 years. In their absence, shrubs and trees become overgrown and many species are displaced, including scrub lizards. Because of alterations made to the environment, merely ending fire suppression is insufficient for full scrubland recovery. Instead of burning thousands of acres, naturally started fires run into concrete or asphalt "firebreaks" or are extinguished to prevent damage to property, so controlled fires must take their place. Controlled burning allows the amount of fuel to be lowered to safe levels [Smith 1999] and can create areas of scrub in different stages of growth alongside one another, so that there will be refuges to which small animals and insects may return [Fire in the Florida scrub 2000].

Controlled burning must be done carefully, as accumulation of fuel may cause fires to become uncontrollable. Furthermore, scrub oaks grown to the size of trees will survive if they are not cut back first.
Under natural conditions, scrub oaks would be killed before full growth by above-ground fire and sprout up again from their root systems. + +The fragmentation of scrub patches has caused even more problems for scrub lizards. Lizards are much more vulnerable to local extinction in small patches; while these patches may provide good stepping stones for lizard movement between preserves, larger patches must be kept intact to sustain a stable population. The precise reasons for different survivorship, density, and recruitment rates in small and large patches are unclear [Branch et al. 1999, 71]. Some of the vulnerability experienced by small patches may be attributable to stochastic demographic processes: In the smallest patches, there are fewer than a dozen individual lizards, and they may be more susceptible to predation since there is a higher ratio of perimeter to sandy area. + +In addition to controlled burning, conservation measures should include habitat preserves whose spatial distribution corresponds to the characteristics of the scrub lizards. Although an assortment of small reserves may protect as many vertebrate species as a single large reserve, the distribution of these small reserves will have a tremendous impact on individual species. The most stable populations of scrub lizards occur in patches with large amounts of bare sand that are close to other stable patches. Scrub lizards are more vulnerable than race-runners, a similar species of lizard, because they have a lower ability to disperse and are more habitat-specific, being unable to live in areas with dense grass cover, mesic flatwoods, old fields, dry depression marshes, and very barren areas [Branch et al. 1999, 24]. While race-runners have a home range of up to $13,000\mathrm{m}^2$ , home ranges for scrub lizards are $800\mathrm{m}^2$ and $400\mathrm{m}^2$ for males and females, respectively. 
Genetic diversity correlates strongly with geographic distribution, since scrub lizards have extremely limited ranges and tend to stay in the patches in which they hatch (only $10\%$ migrate). Lizards from the five largest scrub ridges have distinct mtDNA (mitochondrial DNA), and a representation of each should be preserved for the sake of genetic diversity. The portion of total genetic diversity observed among populations within ridges was $17.5\%$, and the portion that occurred within local populations was $10.4\%$ [Branch et al. 1999, 53].

# Estimating $F_{a}, S_{j}$, and $S_{a}$

To determine fecundity, we use the data provided, as well as additional background information on scrub lizards. Measuring fecundity—the number of hatchlings one female lizard can produce in one year—first requires knowing how many clutches of eggs a female can lay. Female lizards are capable of 3-5 clutches per year. Furthermore, mature females become sexually active earlier in the season than younger females. Therefore, we estimate that young females (age 1) lay an average of 3.5 clutches per season, while mature females (ages 2 and 3) lay an average of 4 clutches per season.

We determine the number of eggs per female per age group by using the equation provided in the problem statement (clutch size $= 0.21l - 7.5$) to determine clutch size, then multiplying clutch size by the estimated number of clutches per season (Table 1).

Table 1. Number of eggs laid per female per season, by age group.
| Age (years) | Number of eggs |
|-------------|----------------|
| 1-2         | 7.4            |
| 2-3         | 16.9           |
| 3-4         | 17.0           |
To determine how many total eggs are laid per season, we multiply the values for eggs per female per age group by the number of females in that age group and sum over age groups. The sum (901.7) is divided by the total number of females (105) to get the number of eggs laid per female (8.6).

On average, $95\%$ of eggs survive to become hatchlings. Therefore, to determine fecundity, the eggs/female ratio is multiplied by 0.95, resulting in a fecundity of 8.2 hatchlings/female.

To determine the survival rate of juvenile lizards, the number of age-1 lizards (180) is divided by the number of age-0 lizards (972). The resulting quotient is $180 \div 972 = .185$; that is, $18.5\%$ of lizards survive their first year.

Determining the survival rate of adult lizards is similar. By dividing the number of age-2 lizards (20) by the number of age-1 lizards (180), we find that the survival rate of young adult lizards is $11.1\%$. For the survival rate of older "senior" lizards, the number of age-3 lizards (2) is divided by the number of age-2 lizards (20), resulting in a survival rate of $10\%$. We assume that no age-3 lizard lives to be 4 years of age.

To determine the overall survival rate of adults for this sample, the survival rates of young adults and of senior adults are weighted and then averaged. To weight the survival rates, the rate for each age group is multiplied by the number of members of that age group, as in Table 2; the resulting average adult survival rate is $11\%$.

Table 2. Calculation of overall survival rate.
| | Survival rate | No. of members | Weight | Weighted survival rate |
|:---|:---:|:---:|:---:|:---:|
| young adults | 0.111 | 20 | 2.22 | |
| senior adults | 0.100 | 2 | 0.20 | |
| Total | | 22 | 2.42 | 2.42/22 = 0.11 |
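The vital-rate arithmetic above can be collected into a short script (a sketch in Python, not from the team's paper; it uses only the sample counts and totals quoted in the text):

```python
# Sketch: reproduce the vital-rate estimates from the sample data.

# Counts of lizards by age in the sample, as given in the text.
n_age = {0: 972, 1: 180, 2: 20, 3: 2}

# Juvenile survival: fraction of age-0 lizards reaching age 1.
s_j = n_age[1] / n_age[0]                      # 180/972, about 0.185

# Adult survival, weighted as in Table 2: each rate is weighted by the
# number of members of its age group (20 young adults, 2 seniors).
s_young = n_age[2] / n_age[1]                  # 0.111
s_senior = n_age[3] / n_age[2]                 # 0.100
s_a = (s_young * n_age[2] + s_senior * n_age[3]) / (n_age[2] + n_age[3])

# Fecundity: 901.7 eggs over 105 females, times the 95% hatch rate.
f_a = 0.95 * (901.7 / 105)

print(round(s_j, 3), round(s_a, 2), round(f_a, 1))  # 0.185 0.11 8.2
```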
# Developing Functions for $F_{a}, S_{j}, S_{a}$, and $C$

Fecundity and survivorship appear to depend both on patch size $A$ and on area $h$ of sandy habitat. But patch size and sandy habitat are related via

$$
h = 0.3165 A + 2.31, \tag{1}
$$

with correlation .986. We use area of sandy habitat as the better predictor; it makes more sense to model the lizard population by the area in which it lives than by the area that surrounds its living space.

Since density is measured in lizards/hectare, we must consider patch size and use (1) to convert to area of sandy habitat.

Since fecundity, survivorship of juveniles, and survivorship of adults all have upper bounds (levels at which physical biology imposes limits), we model these quantities by logistic regressions:

$$
F_{a} = \frac{10.33}{1 + 1.421 e^{-0.0957 h}}, \quad S_{j} = \frac{0.179}{1 + .89 e^{-0.169 h}}, \quad S_{a} = \frac{0.139}{1 + 1.93 e^{-0.123 h}}, \tag{2}
$$

where $F_{a}$ is the fecundity, $S_{j}$ is the survival rate of juveniles (ages 0-1), $S_{a}$ is the survival rate of adults (ages 1-3), and $h$ is the sandy habitat area in hectares.

We also regress the carrying capacity of a scrub patch on the area of sandy habitat. To do so, we make three assumptions:

- The measured density $D$ is in terms of lizards/hectare of scrub, not in terms of lizards/hectare of open sandy habitat.
- Since the scrub patches have existed for multiple years, each scrub patch is currently at its carrying capacity, as demonstrated by the provided density data.
- There is an upper bound to density.

Because of the third assumption, a logistic model would be best; unfortunately, there is no way of calculating or extrapolating from the information provided the order of magnitude of such an upper bound.
Unlike the vital statistics, where there are clear limits to how many eggs a female can lay and how long lizards can live, density has no clear limit. A logistic model of the given data would create an upper bound of about 80 lizards/hectare, a figure that could certainly be higher.

Therefore, for a better model we use power regression, getting for the density $D$ of the scrub patch, in lizards/hectare,

$$
D = 36.93 h^{0.221}.
$$

This regression has a high correlation (.937). Since carrying capacity is measured in total number of lizards, the area of each patch must be multiplied by the density equation to determine the carrying capacity $C$ for each patch:

$$
C = DA = 36.93 A h^{0.221}.
$$

This model can help determine whether certain patches of scrub are suitable for lizard "transplantation," or whether these patches are already over their capacity and should not have new lizards introduced.

# Probability of Surviving During Migration

The data include a probability distribution of distances traveled by surviving lizards. That histogram gives the probability of a lizard traveling $d$ meters, given that it survived, or $P(d \mid S)$. Then

$$
P(d \text{ and } S) = P(S) \times P(d \mid S), \tag{3}
$$

where $P(S)$ is the probability of a lizard surviving and $d$ is distance in meters.

Using release/recapture data from the Florida Game and Fresh Water Fish Commission, we calculate the overall survival rate of the $10\%$ of lizards who migrate:

$$
P(S) = \frac{\text{lizards recovered}}{\text{lizards released}} = \frac{71}{227} = .3128 = 31.3\%.
$$

Using this probability in (3), we arrive at the entries in Table 3.

Table 3. Probability of survival as a function of distance traveled.
| Distance traveled (m) | P(d and S) |
|:---:|:---:|
| 50 | 0.1314 |
| 100 | 0.0782 |
| 150 | 0.0563 |
| 200 | 0.0376 |
| 250 | 0.0063 |
| 300 | 0 |
| 350 | 0.0031 |
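Table 3 can be approximately reproduced from (3). In the sketch below, the conditional histogram values $P(d \mid S)$ are assumptions back-derived from the table entries, since the original histogram is not reproduced here:

```python
# Sketch: rebuild Table 3 from equation (3), P(d and S) = P(S) * P(d|S).
p_s = 71 / 227   # overall survival rate of migrating lizards, ~0.313

# Histogram values P(d|S). These are assumptions back-derived from the
# table (the original histogram is not shown here); they sum to 1.
p_d_given_s = {50: 0.42, 100: 0.25, 150: 0.18, 200: 0.12,
               250: 0.02, 300: 0.00, 350: 0.01}
assert abs(sum(p_d_given_s.values()) - 1.0) < 1e-9

for d, p in p_d_given_s.items():
    print(f"{d:3d} m   {p_s * p:.4f}")
```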
We can now use regression to model the probability of a lizard surviving a journey of $d$ meters. Since lizards cannot have a negative survival rate, a logistic regression seems best. We obtain

$$
S = \frac{0.341}{1 + 0.873 e^{0.0125 d}}. \tag{4}
$$

We can find the probability of a lizard surviving the migration between patch $i$ and patch $j$ by calculating the distance $d$ between the two patches and substituting that value into (4).

# Determining Total Landscape Population and Suitability of Patches for Inhabitation

The landscape at the Avon Park Air Force Range contains a wide range of different-sized patches, not all of which can sustain lizards. Before making the distinction, however, we first create a model to estimate the landscape's current population.

We assume that each patch is at its carrying capacity. We find the density for each patch by determining $D$ in the equation

$$
D = 36.93 h^{0.221},
$$

where $h$ is the size of the sandy area (in hectares). To determine population, we multiply this density by the total patch size: $P = DA$, where $P$ is the population and $A$ is the area.

Using this approach on each patch, we estimate the total population to be 25,200 individuals.

We estimate the fecundity $F_{a}$, the survival rate of juveniles $S_{j}$, and the survival rate of adults $S_{a}$ using the earlier regression equations (2).

To determine whether a scrub patch is suitable for occupation by lizards, it is important to know whether the population of the patch is increasing or decreasing. A patch with a declining population is most likely not a good destination for relocated lizards, while a patch with an increasing population is flourishing, showing that the environment is suited to lizards.
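The fitted relations (1) and (2), the power-law density model, and the migration-survival curve (4) are straightforward to evaluate. The sketch below (Python) applies them to a hypothetical 50-ha patch, an arbitrary illustration rather than one of the data patches:

```python
import math

def sandy_habitat(A):
    """Equation (1): sandy habitat area h (ha) from patch size A (ha)."""
    return 0.3165 * A + 2.31

def fecundity(h):
    """Equation (2): hatchlings per female."""
    return 10.33 / (1 + 1.421 * math.exp(-0.0957 * h))

def survival_juvenile(h):
    return 0.179 / (1 + 0.89 * math.exp(-0.169 * h))

def survival_adult(h):
    return 0.139 / (1 + 1.93 * math.exp(-0.123 * h))

def density(h):
    """Power regression: lizards per hectare of scrub."""
    return 36.93 * h ** 0.221

def migration_survival(d):
    """Equation (4): probability of surviving a journey of d meters."""
    return 0.341 / (1 + 0.873 * math.exp(0.0125 * d))

# Example: a hypothetical 50-ha patch.
A = 50.0
h = sandy_habitat(A)
C = density(h) * A          # carrying capacity = density * patch area
print(f"h = {h:.1f} ha, Fa = {fecundity(h):.2f}, "
      f"Sj = {survival_juvenile(h):.3f}, Sa = {survival_adult(h):.3f}, "
      f"C = {C:.0f} lizards")
```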
With the fecundity (birthrate) and the survival rates of each generation of lizards, we can create a Leslie matrix for each of the 29 patches:

$$
\mathcal{L} = \left[ \begin{array}{llll}
0 & F_{a} & F_{a} & F_{a} \\
S_{j} & 0 & 0 & 0 \\
0 & S_{a} & 0 & 0 \\
0 & 0 & S_{a} & 0
\end{array} \right].
$$

In this matrix, the birthrates are in the top row, with each column representing one year of age. Going diagonally down to the right are the survival rates. Using MATLAB, we determined the eigenvalues of each of the individual matrices.

The eigenvalues serve as projections of population change. An eigenvalue greater than one indicates an increasing population, whereas an eigenvalue less than one shows a decreasing population that, without external influences, would eventually die off. Most of the patches have eigenvalues less than one and will thus eventually have no lizards. However, we must also take into account immigration.

We know that $10\%$ of all juveniles in a given patch tend to migrate, though our results show that no lizards survive past $400\mathrm{m}$ of travel. For simplicity, we assume that the lizards emigrating from each patch distribute evenly among all patches within $400\mathrm{m}$ of the original patch. To find the number of lizards emigrating, we use the equation for the juvenile population $j$ in terms of the total population $P$:

$$
j = \frac{P - j}{2} F_{a},
$$

which when solved for $j$ yields

$$
j = \frac{P F_{a}}{2 + F_{a}}.
$$

Since the number of lizards that emigrate is one-tenth the total juvenile population, the total number $E$ of emigrants from a patch is

$$
E = \frac{P F_{a}}{10 (2 + F_{a})}.
$$

To determine where these lizards emigrate, we need to determine which patches are within $400\mathrm{m}$ of one another; how many survive en route depends on the distance to the other patch.
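The patch-viability test can be sketched as follows (pure Python in place of the MATLAB eigenvalue computation the team used; the vital rates shown are the earlier landscape-wide sample estimates, used only as placeholders, since each patch would get its own $F_a$, $S_j$, and $S_a$ from (2)):

```python
# Sketch: dominant eigenvalue of a Leslie matrix by power iteration.
# Rates below are the pooled sample estimates, not any particular patch.
Fa, Sj, Sa = 8.2, 0.185, 0.11

L = [[0.0, Fa,  Fa,  Fa],
     [Sj,  0.0, 0.0, 0.0],
     [0.0, Sa,  0.0, 0.0],
     [0.0, 0.0, Sa,  0.0]]

def dominant_eigenvalue(M, iters=1000):
    """Power iteration: returns the largest-magnitude eigenvalue."""
    v = [1.0] * len(M)
    lam = 0.0
    for _ in range(iters):
        w = [sum(M[i][k] * v[k] for k in range(len(M))) for i in range(len(M))]
        lam = max(abs(x) for x in w)
        v = [x / lam for x in w]
    return lam

lam = dominant_eigenvalue(L)
print(f"dominant eigenvalue = {lam:.3f}")
print("population", "grows" if lam > 1 else "declines")
```

With the pooled rates the eigenvalue exceeds one; patch-specific rates from (2) are smaller for small patches, which is what drives most individual patches below one.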
The results of our measurements between patches are shown in Figure 2.

Figure 2 also gives a rough model of how the population distribution would play out. The green (gray) patches have an increasing population, based on the eigenvalue; they need no immigrants to sustain a population. The yellow (white) patches are less than $400\mathrm{m}$ away from one or more green patches and thus have a steady influx of immigrants from those patches. The red (dark) patches are at least $400\mathrm{m}$ from every green patch and thus receive few immigrants. Thus, the lizard population will become concentrated almost entirely in the patches on the west side of the landscape.

# Recommendation: Controlled Burning

We recommend controlled burning. Fires are an integral component of the natural scrub ecosystem and would occur in a given area approximately once every 6-20 years if allowed to spread. When an open area has been restored through controlled burning, lizards from nearby patches can migrate to the freshly burned area and repopulate it; the highest densities of scrub lizards would then be found in areas in the early stages of recovery from fire or other disturbances. As each patch of scrub matures, scrub lizards are expected to migrate to more open and sandy areas [Branch et al. 1999, 71].

Excess vegetation growth can be controlled with a combination of mechanical cutting (where the scrub oaks or other shrubs have grown too large to burn safely) and controlled burning. Some risk is involved, but millions of acres are intentionally burned each year in the United States [Cannell 1999] and the protocol is well developed.

Not only would prescribed burning increase the amount of habitat suitable for native species, it would also reduce the possibility of a wildfire like the one that swept through 500,000 acres in Florida in the summer of 1998.

![](images/ea094cb8faab4d678bd55dfad71bdf133c5440ab1f817206383b017956b236be.jpg)
Figure 2.
Landscape at Avon Park Air Force Range, with distances between patches (in hundreds of meters).

# References

Antonio, A.L. 2000. Sceloporus woodi species account. Animal Diversity Web. http://animaldiversity.ummz.umich.edu/accounts/sceloporus/s._woodi\$narrative.html.
Branch, L., et al. 1999. Effects of Landscape Dynamics on Endemic Scrub Lizards: An Assessment with Molecular Genetics and GIS Modeling. Tallahassee: Florida Game and Fresh Water Fish Commission.
Branch, L., and Grant Hokit. 2000. Scrub Lizard Fact Sheet WEC 139. Florida Cooperative Extension Service. Gainesville: Institute of Food and Agricultural Sciences, University of Florida.
Breininger, D., et al. 1995. Landscape Patterns of Florida Scrub Jay Habitat Use and Demographic Success. Kennedy Space Center: Dynamac.
Cannell, Michael. 1999. Fighting fire with fire: Prescribed burning can prevent wildfires. Science World (22 February 1999).
Christman, S. 1997-2000. Animals of the Florida scrub. http://216.203.152.232/main_fr.cfm?state=Track&viewsrc=tracks/scrub/fs_1.htm.
Florida Natural Areas Inventory. 2001. Field Guide to the Rare Animals of Florida. http://www.fnai.org/fieldguide/.
Fire in the Florida Scrub. 2000. http://www.archbold-station.org/discoveringflscrub/fire/fire.html.
Fire and Forest Protection. 2000. http://flame.fl-dof.com/Env/fire.html. State of Florida Division of Forestry.
Schmalzer, Paul A. 2001. Scrub habitat. Kennedy Space Center: Dynamac. http://www.nbbd.com/godo/ef/scrub/. Last updated 5 May 2001.
Smith, R.B. 1999. Gopher tortoises. Kennedy Space Center: Dynamac. http://www.nbbd.com/godo/ef/gtortoise/index.html.

# Judges' Commentary: The Outstanding Scrub Lizard Papers

Gary Krahn

Dept. of Mathematical Sciences

United States Military Academy

West Point, NY 10996

ag2609@usma.edu

Marie Vanisko

Dept.
of Mathematics, Engineering, and Computer Science

Carroll College

Helena, MT 59625

mvanisko@carroll.edu

# Introduction

The papers were assessed on

- the breadth and depth of the analysis of each portion of the posed problems,
- the validity and creativity of the proposed models, and
- the clarity and presentation of solutions.

Virtually all papers demonstrated a significant amount of work and thoughtful analysis by the team members. The judges were impressed with the quality of attentive research undertaken by the students on the science involving survival of scrub lizards and were pleased to see a variety of innovative attempts to solve the problems. Making sense of the ecological factors affecting the scrub lizard population was essential for successful papers, but the heart of the contest problem was developing a mathematical model that might accurately determine the factors that could contribute to or detract from survival of the scrub lizards.

# The Problem

The Florida scrub lizard is a small gray or gray-brown lizard that lives throughout upland sandy areas in the Central and Atlantic coast regions of Florida. The Florida Committee on Rare and Endangered Plants and Animals classified the scrub lizard as endangered. The long-term survival of the Florida scrub lizard depends on preservation of the proper spatial configuration and size of scrub habitat patches.

The problem was written by Prof. Grant Hokit of the Dept. of Natural Sciences at Carroll College in Helena, Montana. He and his colleagues from the University of Florida have conducted extensive research on scrub lizards and their habitat, some of the results of which appear in this problem.

# The Data

It is conjectured that fecundity and survival rates of the scrub lizard are related to the size and amount of open sandy area of the scrub patch where they live. Students were asked to deduce from small samples of data a pattern for fecundity and survival rates.
The students were also provided the positions of scrub patches relative to one another and were asked to describe the impact of lizard migration on the survival rate of lizards in these patches. [Note: It is still not known why the scrub lizard migrates.]

# The Science

Students with a background in ecology recognized that the plight of the scrub lizard is very similar to that of other endangered species. Research into the various factors that affect the habitat of the lizard was essential, because the maintenance of a livable habitat is just as important as an understanding of the impact that the habitat has on the survivability of the lizards. One of the problems posed related to the maintenance of the sandy areas by occasional controlled burning. After researching the habitat dynamics of the scrub lizards, students were asked to make recommendations to preserve the habitats and to discuss obstacles they might encounter to their recommendations.

Next came the need to understand reproductive rates within different age groups and the survivability of offspring, youths, and adults under different conditions. The term "fecundity" lent itself to more than one interpretation. The migration of juvenile lizards introduced a factor that complicated the population model.

# The Model

The teams used a variety of modeling techniques. To estimate parameters such as fecundity and survival rates, some students extrapolated from the given data and some accessed additional data. To predict long-term survivability, some teams conducted simulations and others used Leslie matrices to determine which patches could sustain the lizards.

Interesting and viable probability models, as well as informative simulations, were used to analyze the migration of lizards from one patch to another. The geometry of the migration required complex modeling, taking into account the positions and sizes of patches relative to one another.
An essential part of the modeling process is clearly stating the underlying assumptions. It was enjoyable and informative when teams interpreted the results of their model with respect to the simplifying assumptions. Often, students believe that the judges know the correct answer and have absolute knowledge about the model development, so that there is no need to fill in the details of the modeling process. This belief is wrong—good papers must carefully provide these details.

Students wrestled with their responsibility to transform an ill-defined problem into a well-defined problem. The migration component of this problem provided only ideas, and the students had nearly a clean slate to begin the analysis. The motivation and dynamics of scrub lizard migration are almost completely unknown. Therefore, the modeling process was limited to an empirical rather than an explanatory model.

Other interesting perspectives on modeling were seen in papers that suggested burning schemes or other ways to keep the scrub patches from being overrun and becoming uninhabitable for the lizards.

# The Analysis

Analysis distinguished the Meritorious and Outstanding papers from the others, and the thoroughness with which that analysis was done distinguished the Outstanding papers from the Meritorious. Some teams used modeling that was less sophisticated but verified their model with simulations. This is acceptable as long as they describe their modeling process and show reasonable results.

Other teams used classic models, such as developing Leslie matrices for the patches and then basing their conclusions on the eigenvalues of the matrices. Many used the exponential distribution to describe the survival pattern of the lizards.

# Presentation

There were great variations in the quality of the write-ups. Thoroughness is essential, and conciseness is necessary for the one-page summary.
Some papers revealed great potential from the modeling perspective but were difficult to follow and therefore problematic to assess. Others developed good models but failed to interpret the models in the context of the issues raised.

On the other hand, some papers had page after page of well-written perspectives on the issues but failed to do adequate mathematical modeling. A qualitative approach must be accompanied by a quantitative analysis.

Failure to document sources properly kept papers from rising to the top. Papers that revealed a comprehensive review of available resources and documented where those resources were referenced showed an intellectual maturity that was appreciated and valued by the judges.

# Conclusion

Reading and judging the ICM papers was an enjoyable experience. It was clear that many students worked very hard on the project during the four-day period, and the judges were impressed. The interdisciplinary nature of this problem opens the door for creative solutions from many perspectives, and problems of this type enlighten students to the broader challenges associated with biodiversity and survival of endangered species.

# About the Authors

Gary Krahn is the Head of the Department of Mathematical Sciences at the U.S. Military Academy at West Point. His interests include the study of generalized de Bruijn sequences for communication and coding applications. He enjoys his role as a judge and Associate Director of the ICM.

Marie Vanisko is in her 31st year of teaching undergraduate mathematics at Carroll College in Helena, Montana, and has been active in Project INTERMATH. She is interested in seeking out useful applications of mathematics to share with her students and in developing technology modules to enrich the mathematics classroom. Having served as a judge for the MCM for many years, she found it very interesting to judge the ICM for the first time this year.
# Author's Commentary: The Outstanding Scrub Lizard Papers

D. Grant Hokit

Dept. of Natural Sciences

Carroll College

Helena, MT 59625

ghokit@carroll.edu

# Introduction

I recall watching the astronauts take the first steps on the Moon and the contagious euphoria that swept the country after such a remarkable technological achievement. Technology has not been idle in the last three decades, as we have witnessed many innovations that have truly changed the world.

Despite such achievements, human civilization is still completely dependent on natural systems. The anthropogenic systems that provide life support for our people in space are no substitute for the natural systems that sustain the billions on Earth. The natural ecosystems that provide clean air and water, food, and shelter are unlikely to be replaced by technological systems in the foreseeable future. These natural systems must be maintained to preserve our existence.

Anthropogenic systems have one major advantage over natural systems: Their status is easily monitored with calibration instruments that we know and understand because we built them. Thermostats, carbon monoxide detectors, and computer-controlled fuel injectors are commonplace on modern automobiles. Such calibration instruments are difficult to recognize for natural systems. We must rely on our incomplete understanding of natural systems to identify when these systems are endangered. Consequently, we often find ourselves creating costly environmental problems.

# The Fragmented Landscape of Southern Florida

As one example, human activities have endangered the fresh water resources in Southern Florida. Upland habitats provide a vital function in water regulation and purification. The summer rains supply fresh water to Southern Florida and percolate through the upland soils on their way to the wetlands and swamps that act as natural reservoirs.
However, these uplands are also ideal locations for urban development and citrus agriculture. Consequently, urbanization and citrus agriculture have converted the major portion of upland habitats. Now the summer rains are augmented by agricultural runoff and especially by runoff from heavily fertilized lawns and golf courses. The downstream reservoirs, both natural and human-made, are suffering from eutrophication effects such as oxygen depletion and changes in microbial activities.

Florida and federal taxpayers are now paying hundreds of millions of dollars to design and implement a water-management system that saves the fresh water needed by Floridians and saves national treasures such as the Florida Everglades. Although the urban and agricultural development has undoubtedly been an economic boon for Florida, we are now paying for the unaccounted cost of disturbing a vital natural system.

Had we an instrument to assess the impact, we might avoid further damage to this natural water system and the consequent cost of rectifying the damage. Also, such an instrument might have prevented the problem by indicating how much development was possible before significant damage occurred.

Biodiversity may provide a barometer for assessing the impact to natural systems. Biodiversity can be considered the sum total of all the genes, species, and ecosystems that exist on our planet. Because everyone can relate to what a species is, biodiversity is often thought of in terms of species diversity. It logically follows that if all species are persisting in a natural system, then the system must be stable. Unhealthy systems experience a decrease in biodiversity.

# The Florida Scrub System

The Florida scrub system is part of the mosaic of ecosystems that cover the peninsula of Florida. Scrub is a xeric (dry) upland system surrounded by the more hydric (wet) systems that dominate the South Florida landscape.
Scrub is patchily distributed across the Florida landscape, occurring only on well-elevated, well-drained areas such as ancient sand dunes. Florida scrub contains the highest number of endemic species (species found in no other type of habitat) of any terrestrial ecosystem in the Southeastern United States and thus has high biodiversity. Unfortunately, because of the destruction of scrub habitat, many of these species are listed as endangered or potentially endangered. The Florida scrub lizard (Sceloporus woodi) is one such species.

Because of our poor understanding of ecological processes, we do not know how much scrub habitat is required to maintain scrub biodiversity. In fact, it is difficult for us to predict how much habitat is necessary to maintain even a single species. We are only beginning to understand the processes that influence species distribution patterns. We hope that by focusing on one or a few species, we can ultimately piece together an understanding of the processes that influence overall biodiversity. If we can successfully model and predict trends in scrub lizard populations, we can better understand the processes important to the persistence of scrub lizards. More important, by understanding the habitat distribution needs of the Florida scrub lizard, we can better appreciate the needs of all scrub organisms.

Such a problem requires an integrative approach. The scrub lizard thrives in habitat that exists in patches across the Florida landscape. This spatial component requires researchers to understand not only ecological processes inside scrub patches but also processes that influence the dispersal of lizards between scrub patches. To achieve success, population biology, landscape ecology, computer modeling, and mathematics have to be integrated. This was the problem presented to students in this year's Interdisciplinary Contest in Modeling.
# Formulation of the Contest Question

In 1994, the Department of Defense provided funds to conduct amphibian and reptile surveys on the Avon Park Air Force Range (APAFR). This area is set aside for bombing practice for military aircraft and ironically contains some of the best-preserved scrub habitat in Florida.

Scrub organisms are fire-adapted. In fact, many scrub organisms cannot persist without periodic fires that thin dense, senescent vegetation and open up areas of open sandy habitat. Not only are fires allowed to burn on APAFR, but the natural resource staff also initiates periodic controlled burns to help manage scrub habitat. Because of private-land issues elsewhere in the state, prescribed burning is seldom employed and the scrub habitat suffers.

My colleagues (Lyn Branch and Brad Stith of the University of Florida) and I were awarded funds to survey for rare and endangered species on APAFR. We immediately recognized the potential to collect valuable demographic, dispersal, and habitat data concerning the Florida scrub lizard.

We mapped all 95 scrub patches on APAFR: We used a geographic information system (GIS) and infrared aerial photos to delineate the boundaries of the patches, calculate patch areas, measure vegetation density, and construct digitized maps of the landscape. We conducted surveys in each patch to determine the presence of scrub lizards and to establish a baseline for comparing occupancy patterns with patterns predicted by mathematical models. We also established eight trapping grids, each one hectare in size, in eight different patches that ranged in size from 11 to 278 hectares. These trapping grids were visited every month for two years; mark/recapture techniques allowed us to estimate density, survivorship, and fecundity (birth rate) for lizards in all eight patches.

We also conducted dispersal studies.
Although radio telemetry can provide direct measurements of dispersal behavior, scrub lizards are too small to burden with typical radio transmitters. Smaller transmitters cost too much to afford the hundreds necessary to get large sample sizes, and the batteries last for only a few weeks. We assessed dispersal indirectly. We simply marked hundreds of lizards, released them, and walked transects to recapture lizards up to months after their release date. The distance to the release site was recorded for each recaptured lizard. In this manner, we could estimate how distance from the release site was associated with recapture rates. We also tested lizards in enclosures to assess how effectively they moved through different types of vegetation and across water barriers. + +This initial research was the source for all of the data provided in the Contest, and the students were assigned the task of modeling a metapopulation (group of populations connected by dispersal) of scrub lizards on the north end of the APAFR. + +Although my colleagues and I had published a logistic regression model using the same data [Hokit et al. 1999], the model was static and did not include dynamic demographic and dispersal processes. We recently published the results of two dynamic models [Hokit et al. 2001], but both models include very general assumptions about population demographics. For example, one model assumes that vital rates (survivorship and fecundity) are equivalent for different patches. This simplifying assumption makes the modeling easier but does not incorporate what we know from other analyses: Patch size is positively correlated with survivorship, fecundity, and density. + +Thus, it was up to the students to design a spatially explicit (specific for a particular landscape), dynamic landscape-scale metapopulation model that incorporates patch specific vital rates and dispersal. 
Such a model has yet to be published for any species on any landscape, so the Contest was truly an original challenge for the students. Furthermore, students were required to address policy and management issues concerning the scrub lizard and Florida scrub habitat.

# Response to Student Solutions

I was genuinely impressed with the student solutions to the problem. The creativity and range of approaches were remarkable. I was amazed at how different approaches resulted in well-thought-out and highly accurate solutions. I could gauge the accuracy of a modeling solution by testing the model predictions against known occupancy patterns for the APAFR landscape. Many models were within one or two patches of "predicting" the actual occupancy patterns on the landscape.

Many papers introduced me to new perspectives and approaches for such modeling problems; as a result, I'm motivated to learn new modeling strategies. Some papers utilized a traditional Leslie matrix coupled with dispersal models. Others used an incidence-function approach. Still others incorporated neural-net modeling and polygonal representations of the actual landscape. The polygons were then used to model not only dispersal rates but also the probabilities associated with the direction of dispersal.

Including dispersal dynamics was one of the more challenging aspects of the problem. Given the crude nature of the dispersal data (e.g., recapture rates vs. distance from release site), it was a challenge to estimate survival probabilities for lizards moving between patches. Though the assumption may seem simplistic, many papers concluded that survival probabilities were probably correlated with recapture probabilities. Many animal studies have demonstrated just such an association between recapture and survival probabilities. Currently, we can only assume that the same is true for scrub lizards.
+ +The best papers integrated policy and management options with their metapopulation model, resulting in prescribed treatments for specific habitat patches. These papers not only predicted which patches could support scrub lizard populations but also created a schedule of controlled burns to enhance and maintain scrub habitat. This approach combined the best science, math, and policy to arrive at a truly integrative and interdisciplinary solution. + +# Conclusions + +The problem faced by the Florida scrub lizard is not unique. Many species are endangered due to habitat destruction and fragmentation. Although not as newsworthy as global climate change, ozone depletion, or acid rain, habitat destruction is by far the leading threat to biodiversity (although the former factors may lead to habitat destruction). Some estimates project that without careful management of habitat destruction, $10\%$ to $20\%$ of extant species will go extinct within the next few decades. Extinction balanced by complementary speciation (evolution of new species) presents no great risk to species diversity. However, an extinction rate of $10\%$ to $20\%$ over a few decades rivals major extinction events of the past, including the one that saw the demise of the dinosaurs. Thus, it is the rate of extinction, not extinction itself, which is problematic. A high rate of extinction will jeopardize biodiversity. If biodiversity is an accurate barometer of ecosystem health, we may be jeopardizing more than the scrub lizard's future. + +There is much work to be done before we can be confident that our modeling strategies are accurate, robust, and generally applicable to many species. We are only beginning to understand the subtlety of the processes that act across spatial and temporal scales to influence the distribution of species and the functioning of natural systems. 
With such talented and well-motivated students, we may reach sufficient understanding to allow for the continued maintenance of our life-support system. + +# Acknowledgments + +I would like to thank Chris Arney for directing such a respectable contest as the COMAP Interdisciplinary Contest in Modeling. I'm very grateful to Gary Krahn for his help in writing the problem. I also thank my colleagues from the University of Florida, Lyn Branch and Brad Stith, and all the field technicians without whom the data would not be available. Finally, I thank the natural resource staff at Avon Park Air Force Range, who provided funding opportunities, logistical support, and access to the best scrub habitat in Florida. + +# References + +Hokit, D.G., B.M. Stith, and L.C. Branch. 1999. Effects of landscape structure in Florida scrub: A population perspective. Ecological Applications 9: 124-134. + +______ 2001. Comparison of two types of metapopulation models in real and artificial landscapes. Conservation Biology 15: 1102-1113. + +# About the Author + +D. Grant Hokit is Associate Professor of Biology at Carroll College (Montana), where he has been since 1996. He has a B.S. (1986) from Colorado State University in Wildlife Biology and a Ph.D. (1994) in Zoology from Oregon State University, where he did amphibian research in behavioral and population ecology, including research on UV-b radiation and amphibian declines. He did a post-doc in Wildlife Ecology and Conservation at the University of Florida from 1994 to 1996, where he engaged in scrub lizard landscape ecology research. + +# Classroom Scheduling Problems: A Discrete Optimization Approach + +Peh H. Ng + +Division of Science and Mathematics + +University of Minnesota-Morris + +Morris, MN 56267 + +pehng@mrs.umn.edu + +Lora M. Martin + +Associate Software Engineer + +UNISYS Corporation + +St. 
Paul, MN 55164-0942

# Introduction

Every year, colleges and universities face the problem of assigning classrooms to satisfy the needs of courses, faculty, and students. Classrooms and space are limited, and certain conflicts must be avoided; more often than not, a solution cannot be found to satisfy everyone's requirements. Our main objective was to find an optimal solution to satisfy the majority of people involved at our campus, the University of Minnesota-Morris (UMM). In the long run, our mathematical model can benefit many secondary schools, vocational schools, colleges and universities; and it could be extended to other types of scheduling problems such as airline flights and manufacturing systems (see Kolen et al. [1987], Dondeti and Emmons [1986], and Mangoubi and Mathaisel [1985]).

At the University of Minnesota-Morris, not all courses can be scheduled at the times requested by professors. The university first needs to find a systematic way to allocate available rooms and time periods to courses in the "best" possible way. Second, each department or discipline needs to assign professors to the courses in the "best" possible way based on constraints provided by the professor or the course.

A typical classroom scheduling problem can be modeled as an integer linear programming problem (ILP) (Carter [1989], Carter and Tovey [1992], Ferland and Roy [1985], Garey and Johnson [1979], Glassey and Mizrach [1986], and Sierksma [1996]). An ILP is an optimization problem that consists of a linear objective function, linear constraints, and discrete variables. Mathematically, an ILP can be written as

$$
\left\{ \text{maximize } \vec{c} \cdot \vec{x} : A\vec{x} \leq \vec{b},\ \vec{x} \geq \vec{0},\ \vec{x} \text{ integer} \right\},
$$

where $\vec{x}$ is the vector of decision variables, and $\vec{c},\vec{b}$, and the constraint matrix $A$ are given data.
Thus, the objective function of the problem is a linear function whose value we want to optimize subject to the given constraints.

The constraints for any scheduling problem may vary from one institution to another, and the differences will be reflected in the ILP. A few of the general constraints of a classroom scheduling problem include time availability or lack thereof, the size of the room, how well equipped the room is, the projected enrollment of the students in the courses, and availability of faculty members in terms of appropriate courses. Therefore, the University of Minnesota-Morris's Classroom Scheduling Problem (UMM-CSP) is defined as finding the best assignment of classrooms at a given time that meets the availability of the faculty members and the requirements of the courses.

There are two major parts to this paper. First, we show formulations of two ILPs that provide mathematical models to solve the (UMM-CSP). Then we solve the ILPs using data from the Mathematics Department at the University of Minnesota-Morris and present the results.

# Integer Linear Programming Model

We describe two ILPs that we then use to solve the (UMM-CSP). To formulate these ILPs as mathematical models, we derive a list of linear inequalities and an objective function that correspond to the real constraints of the scheduling problem in (UMM-CSP).

# Time Periods, Mathematics Courses, and Classrooms

When UMM switched to the semester system in Fall 1999, a typical course was designated with 4 credit hours, meaning that in a week the total amount of class time should be about 200 minutes. Thus, classes usually meet for 65-minute periods on MWF or for 100-minute periods on TTh. Although most (about $90\%$) of the courses are 4 credit hours, a few meet TTh for 50 minutes each and carry 2 credits. For efficiency, two 2-credit courses are scheduled during the 100-minute time period on TTh.

We use the time period indices in Table 1.
A few rooms around campus are usually scheduled with mathematics courses. In addition, there is a set of courses that are offered during both the Fall and Spring semesters. Tables 2 and 3 describe the indices for the classrooms and courses.

Table 1. Indices for time periods.
| Index | Meeting Days | Actual Meeting Times |
|-------|--------------|----------------------|
| 1 | MWF | 8:00–9:05 |
| 2 | MWF | 9:15–10:20 |
| 3 | MWF | 10:30–11:35 |
| 4 | MWF | 11:45–12:50 |
| 5 | MWF | 1:00–2:05 |
| 6 | MWF | 2:15–3:20 |
| 7 | MWF | 3:30–4:35 |
| 8 | TTh | 8:00–9:40 |
| 9 | TTh | 10:00–11:40 |
| 10 | TTh | 12:00–1:40 |
| 11 | TTh | 2:00–3:40 |
| 12 | TTh | 4:00–5:40 |
Table 2. Indices for classrooms.

| Index | Actual Classrooms | Comments (capacity of room) |
|-------|-------------------|-----------------------------|
| 1 | MRC 10 | Computer classroom for Calculus (37) |
| 2 | SCI 1020 | General (70) |
| 3 | SCI 1030 | General (40) |
| 4 | SCI 1040 | General (20) |
| 5 | SCI 2185 | General (24) |
| 6 | SCI 2200 | General (46) |
| 7 | SS 136 | General (50) |
| 8 | SS 245 | General (70) |
# The Room-Time-Course ILP

We exhibit the ILP model for which a feasible solution assigns math courses to time periods and to classrooms based on certain criteria or constraints. The constraints are:

1. Certain math courses (Calculus 1 and 2) have to be in the computer classroom MRC 10.
2. For courses with multiple sections, the sections must be offered at different times.
3. No course with more than 20 students as its maximum can be assigned to rooms such as SCI 1040.
4. Every course must be assigned to exactly one room at some time period.
5. At most one course can be assigned to a room at any time.
6. Courses such as Linear Algebra or Differential Equations are not to be offered during time period 4 because that is when most large science lecture classes are held.

Table 3. Indices for mathematics courses for Fall and Spring.
**Fall**

| Course | Sections | Index |
|--------|----------|-------|
| (Calc 1) Math 1101 | (5) | 1, 2, 3, 4, 5 |
| (Calc 2) Math 1102 | (2) | 6, 7 |
| (Pre-Calc) Math 1011 | (2) | 8, 9 |
| (Intro. Stat) Math 1601 | (4) | 10, 11, 12, 13 |
| (Basic Alg) Math 0901 | (1) | 14 |
| (Survey Calc) Math 1021 | (1) | 15 |
| (Calc 3) Math 2101 | (1) | 16 |
| (Linear Alg.) Math 2111 | (1) | 17 |
| (Pure Math 1) Math 2201 | (1) | 18 |
| (Diff. Eq.) Math 2401 | (1) | 19 |
| (Prob.) Math 2501 | (1) | 20 |
| (Stats Mthd) Math 2601 | (1) | 21 |
| (Geom.) Math 3211 | (1) | 22 |
| (Discrete) Math 3411 | (1) | 23 |
| (Data Analy.) Math 3601 | (1) | 24 |
| (Real-Complex) Math 4201 | (1) | 25 |
| (Topics in Stats) Math 4650 | (1) | 26 |

**Spring**

| Course | Sections | Index |
|--------|----------|-------|
| (Calc 1) Math 1101 | (3) | 1, 2, 3 |
| (Calc 2) Math 1102 | (4) | 4, 5, 6, 7 |
| (Pre-Calc) Math 1011 | (1) | 8 |
| (Intro. Stat) Math 1601 | (4) | 9, 10, 11, 12 |
| (Survey Math) Math 1001 | (1) | 13 |
| (Calc 3) Math 2101 | (1) | 14 |
| (Linear Alg.) Math 2111 | (1) | 15 |
| (Hist. Math) Math 2211 | (1) | 16 |
| (Math Stats) Math 2611 | (1) | 17 |
| (Pure Math 2) Math 3201 | (1) | 18 |
| (Op. Res.) Math 3401 | (1) | 19 |
| (Mgmt Sci) Math 3501-3502 | (1) | 20 |
| (Data Analy.) Math 3611 | (1) | 21 |
| (Abst.-Topics) Math 4231 | (1) | 22 |
| (Biostat) Math 4601 | (1) | 23 |
7. A few courses may not be scheduled back-to-back.
8. Certain lower-level mathematics courses must be taught during the MWF time periods.

To formulate an ILP, we need to define decision variables. Without loss of generality, we illustrate the case for the Fall semester. Since for the first part we are deciding on which room and what time period to assign the math courses to, we define the decision variables as

$$
x_{t,c,r} = \left\{ \begin{array}{ll} 1, & \text{if course $c$ is taught at period $t$ in classroom $r$;} \\ 0, & \text{otherwise,} \end{array} \right.
$$

for each time period $t = 1, \ldots, 12$, for each course $c = 1, \ldots, 26$, and for each room $r = 1, \ldots, 8$.

Each constraint corresponds to a linear inequality or an equality. We translate the constraints as follows:

1. Every Calculus 1 and 2 section (course index $1, \ldots, 7$) has to be scheduled in the computer classroom MRC 10 (room index 1):

$$
\sum_{t=1}^{12} x_{t,c,1} = 1 \quad \text{for } c = 1, \dots, 7.
$$

2. For courses with multiple sections, the sections must be offered at different times:

$$
\sum_{\substack{c:\ c \text{ is part of a course} \\ \text{with multiple sections}}} x_{t,c,r} \leq 1 \quad \text{for } t = 1, \ldots, 12;\ r = 1, \ldots, 8.
$$

3. Every course has to be assigned to exactly one room at some time period:

$$
\sum_{t=1}^{12} \sum_{r=1}^{8} x_{t,c,r} = 1 \quad \text{for } c = 1, \ldots, 26.
$$

4. At most one course can be assigned to a room at any time:

$$
\sum_{c=1}^{26} x_{t,c,r} \leq 1 \quad \text{for } t = 1, \dots, 12;\ r = 1, \dots, 8.
$$

5. That a specific course $\bar{c}$ cannot be offered at some time $\bar{t}$ translates as:

$$
\sum_{r=1}^{8} x_{\bar{t},\bar{c},r} = 0.
$$

6.
That two courses $\bar{c}$ and $\tilde{c}$ cannot be offered back to back becomes:

$$
\sum_{r=1}^{8} \left( x_{t,\bar{c},r} + x_{t+1,\tilde{c},r} \right) \leq 1 \quad \text{for } t = 1, \dots, 6,\ 8, \dots, 11
$$

(so that periods $t$ and $t+1$ fall on the same meeting days).

7. That a lower-level math course $\bar{c}$ cannot be scheduled on a TTh schedule can be represented as:

$$
\sum_{t=8}^{12} \sum_{r=1}^{8} x_{t,\bar{c},r} = 0.
$$

8. As in any general ILP model, the bounds and the integer constraints of the variables must be included:

$$
0 \leq x_{t,c,r} \leq 1, \quad \text{integer}, \quad \text{for } t = 1, \dots, 12;\ c = 1, \dots, 26;\ r = 1, \dots, 8.
$$

The objective function for room-course-time scheduling is not that crucial because the main purpose is to obtain a feasible solution that satisfies all the constraints. Thus, we choose to maximize the linear objective function

$$
\sum_{t=1}^{12} \sum_{c=1}^{26} \sum_{r=1}^{8} x_{t,c,r}.
$$

That every course has to be assigned to exactly one room at some time period implies an optimal value of 26 if a feasible solution exists. The more constraints or restrictions, the greater the possibility that the model is infeasible. For the (UMM-CSP), there is indeed a feasible solution.

# The Professor Assignment ILP

At UMM, the class schedule for the academic year is determined during the fall of the previous year. However, when the schedule is being prepared, many departments do not know exactly who will be on leave, let alone who the new adjunct faculty will be the next academic year. Thus, in the Mathematics Department we usually start on the classroom-time-courses scheduling with few professors assigned, if any. Once the classroom-time-courses schedule is determined, we find the best way to allocate professors.

Another computational reason for splitting the project into two parts is to minimize the number of variables in each of the two models.
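To see concretely why problem size matters, here is a sketch (with a made-up toy instance of 3 courses, 3 periods, and 2 rooms, not the authors' actual model or solver) of solving a scaled-down room-time-course assignment by exhaustive search:

```python
# Sketch only: a toy room-time-course assignment solved by brute force.
# The instance below is hypothetical; the real model has 12 x 26 x 8
# binary variables and is solved with an ILP solver, not by enumeration.
from itertools import product

T, C, R = 3, 3, 2        # time periods, courses, rooms
COMPUTER_ROOM = 0        # toy analogue of "Calculus must use MRC 10"
CALC_SECTIONS = {0}      # courses that must sit in the computer room

def feasible(assign):
    """assign[c] = (t, r); each course gets exactly one slot by construction."""
    # Designated courses must be in the computer classroom.
    if any(assign[c][1] != COMPUTER_ROOM for c in CALC_SECTIONS):
        return False
    # At most one course per (time, room) slot.
    slots = list(assign.values())
    return len(slots) == len(set(slots))

slots = list(product(range(T), range(R)))
solution = next(
    (dict(enumerate(choice)) for choice in product(slots, repeat=C)
     if feasible(dict(enumerate(choice)))),
    None,
)
print(solution)  # → {0: (0, 0), 1: (0, 1), 2: (1, 0)}
```

Exhaustive search visits all $(T \cdot R)^C$ assignments, which is hopeless at full scale; this is exactly why the problem is split into two smaller models and handed to an ILP solver.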
Garey and Johnson [1979] showed that solving ILPs is NP-hard, meaning that in the worst case the computation time grows exponentially with the number of input (decision) variables. The first model, on time-room-course assignment, has $12 \times 26 \times 8$ variables; the second, on professor assignment, has $11 \times 26$ variables. Partitioning into two problems makes the computations more manageable.

Based on the outcome of the classroom-time-courses ILP model, we define

$$
A = \left\{ \{t, c, r\} : x_{t,c,r} = 1 \text{ in the time-course-room ILP model} \right\}.
$$

We have 11 faculty members. We request that each faculty member submit four preferred courses and three preferred times. For this ILP, to maximize satisfaction among all faculty, the objective function is crucial.

Many faculty put a higher weight on course preferences than on time preferences. We incorporate objective-function coefficients that reflect those preferences, with the objective function maximizing the overall happiness factor of the entire department. For assignment $a = \{t, c, r\}$ of time $t$, course $c$, and room $r$ to professor $p$, the coefficient is:

$$
k_{p,a} = \left\{ \begin{array}{ll} 10, & \text{if course $c$ and time $t$ are requested by $p$;} \\ 5, & \text{if course $c$ but not time $t$ is requested by $p$;} \\ 1, & \text{if course $c$ is not requested, but time $t$ is, by $p$;} \\ 0, & \text{if neither course $c$ nor time $t$ is requested by $p$.} \end{array} \right.
$$

We maximize

$$
\sum_{a \in A} \sum_{p=1}^{11} k_{p,a} x_{p,a},
$$

where the decision variables are

$$
x_{p,a} = \left\{ \begin{array}{ll} 1, & \text{if $p$ is assigned course $c$ at time $t$;} \\ 0, & \text{otherwise,} \end{array} \right.
$$

for each professor $p = 1, \ldots, 11$ and for each $a \in A$.
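The coefficient scheme $k_{p,a}$ can be sketched directly; the preference sets below are hypothetical, not actual faculty requests:

```python
# Sketch of the happiness coefficients k_{p,a} described in the text.
def happiness(course, time, pref_courses, pref_times):
    """Objective coefficient for assigning (time, course) to one professor."""
    if course in pref_courses and time in pref_times:
        return 10
    if course in pref_courses:
        return 5
    if time in pref_times:
        return 1
    return 0

# Hypothetical professor: requested courses {1, 6, 16, 17}, periods {2, 3, 9}.
prefs = ({1, 6, 16, 17}, {2, 3, 9})
print([happiness(c, t, *prefs) for c, t in [(16, 3), (16, 5), (20, 9), (20, 5)]])
# → [10, 5, 1, 0]
```

Summing these coefficients over all chosen assignments gives the department's total "happiness," which the ILP maximizes.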
For the constraints, we have

1. Each professor can teach at most one course at each time period:

$$
\sum_{a = \{t, c, r\} \in A} x_{p,a} \leq 1 \quad \text{for } p = 1, \dots, 11 \text{ and each time period } t.
$$

2. Each professor is assigned to at least two but at most three courses:

$$
\sum_{a \in A} x_{p,a} \leq 3 \quad \text{for } p = 1, \dots, 11;
$$

$$
\sum_{a \in A} x_{p,a} \geq 2 \quad \text{for } p = 1, \dots, 11.
$$

3. Each course is taught by one and only one professor:

$$
\sum_{p=1}^{11} x_{p,a} = 1 \quad \text{for } a \in A.
$$

4. If a particular professor $\bar{p}$ is prohibited from teaching course $\bar{c}$, then we add the constraint:

$$
x_{\bar{p},a} = 0 \quad \text{for all } a = \{t, \bar{c}, r\} \in A.
$$

5. If one of three professors (the statisticians, $\bar{p}$, $\tilde{p}$, $\hat{p}$) must teach a certain course $\bar{c}$, then we add:

$$
x_{\bar{p},a} + x_{\tilde{p},a} + x_{\hat{p},a} = 1 \quad \text{for all } a = \{t, \bar{c}, r\} \in A.
$$

6. The bounds and integer constraints of the decision variables are:

$$
0 \leq x_{p,a} \leq 1, \quad \text{integer}, \quad \text{for } a \in A;\ p = 1, \dots, 11.
$$

# Results and Analysis

We solved the ILP models using the optimization software CPLEX, whose name combines the simplex algorithm with the C language in which it is written. The simplex algorithm is well known for solving linear programming problems [Sierksma 1996]. It searches from one basic feasible solution (a solution at a corner of a polyhedron) to another until it finds a solution that cannot be improved by any of its neighboring corner points; it also detects when no feasible solution exists. CPLEX can also solve integer linear programming problems, although at a much slower speed, by incorporating the branch-and-bound algorithm [Sierksma 1996].
Basically, the branch-and-bound algorithm repeatedly either partitions a problem into two smaller subproblems and solves their individual linear relaxations (which are ordinary linear programming problems), or proves that a subproblem cannot yield a better optimal value and discards it.

# Results

The results for the times-courses-rooms assignment problems (Fall and Spring semesters) are given in Tables 5 and 6 in the Appendix, and results for the professors-courses assignments are in Tables 7 and 8. [EDITOR'S NOTE: We omit these tables, since their results pertain to the specific constraints at the University of Minnesota-Morris.]

# Analysis of the Results

There are not many rooms left for the other 5 departments in just the Division of Science and Math, let alone other departments (about 20) across the campus (and each department offers 10 to 26 sections of courses each semester)!

# Conclusions

We found an optimal and feasible solution to the classroom scheduling problem that satisfied the majority of the people involved.

It is conceivable that not all professors got their preferred times. Indeed, since we solved the room-time-course model first, the times of the courses were already fixed when we attacked the professor assignment problem. In addition, our main concern in the latter problem was assigning the professors to the preferred courses.

An advantage of solving classroom scheduling problems by this mathematical model or other ILP formulations is that it allows the users the freedom to control or add constraints and adjust objective-function coefficients. For example, if a department wanted to require that a professor cannot teach two classes in consecutive time periods, then additional mathematical constraints (inequalities) could be added.
Also, the relative weight placed on satisfying the "course" preference versus the "time" preference can be changed through the objective-function coefficients. Further customization is possible.

We hope that this project will benefit the University of Minnesota-Morris and other institutions by giving a practical, readily applicable solution to the scheduling of classrooms according to the requests of professors and students. In addition, the results provide a foundation on which other types of scheduling problems can be solved or understood better.

# References

Carter, M.W. 1989. A Lagrangian relaxation approach to the classroom assignment problem. INFOR 27: 230-246.
_____, and C.A. Tovey. 1992. When is the classroom assignment problem hard? Operations Research 40: 28-39.
Dondeti, V.R., and H. Emmons. 1986. Resource requirements for scheduling with different processor sizes—Part I. Technical Memorandum 577, Department of Operations Research, Case Western Reserve University, Cleveland, Ohio.
Ferland, J.A., and S. Roy. 1985. Timetabling problem for university assignment of activities to resources. Computers & Operations Research 12: 207-218.
Garey, M.R., and D.S. Johnson. 1979. Computers and Intractability: A Guide to the Theory of NP-Completeness. San Francisco: W.H. Freeman.
Glassey, C.R., and M. Mizrach. 1986. A decision support system for assigning classes to rooms. Interfaces 16 (5): 92-100.
Gosselin, K., and M. Truchon. 1986. Allocation of classrooms by linear programming. Journal of the Operational Research Society 37: 561-569.
Kolen, A., J.K. Lenstra, and C. Papadimitriou. 1987. Interval scheduling problems. Unpublished working paper, Centre of Mathematics and Computer Science, C.W.I., Amsterdam.
Mangoubi, R.S., and D.F.X. Mathaisel. 1985. Optimizing gate assignments at airport terminals. Transportation Science 19: 173-188.
Mulvey, J.M. 1982. A classroom/time assignment model.
European Journal of Operational Research 9: 64-70.
Sierksma, G. 1996. Linear and Integer Programming. New York: Marcel Dekker.

# Acknowledgment

Research partially supported by the Undergraduate Research Opportunities Program (UROP) of the University of Minnesota.

# About the Authors

![](images/54cea9c5ce4bac9558862df404c35d365734d894c0832976137efb28e5ac14bc.jpg)

Peh Ng received a B.S. in Mathematics and Physics from Adrian College, Michigan; an M.S. in applied mathematics from Purdue University; and a Ph.D. in Operations Research and Combinatorial Optimization, also from Purdue University. She is currently a University of Minnesota Morse-Alumni Distinguished Teaching Professor of Mathematics and an Associate Professor at the University of Minnesota-Morris. Her areas of publication include operations research, discrete optimization, and graph theory. During the last seven years, she has also worked with students on research projects supported by university-wide undergraduate research programs.

![](images/67f21527d6ebc95e472bf6ec79eae259a1c473a0d65ec851f80d29562fd852e3.jpg)

Lora Martin graduated with a B.A. in Computer Science and a minor in Mathematics from the University of Minnesota-Morris in May 1998. She currently enjoys working on a Java Virtual Machine Development Team as a software engineer for the Unisys Corporation in Roseville, MN. Her presentation and research experience proved valuable during team meetings and company functions. Future career plans include graduate coursework in a management-related field.

# The Optimal Positioning of Infielders in Baseball

Alan Levine

Jordan Ludwick

Mathematics Dept.

Franklin and Marshall College

Lancaster, PA 17604

a_levine@email.fandm.edu

# Introduction

When asked in the late 19th century to explain his success at the plate, baseball Hall-of-Famer "Wee Willie" Keeler responded quite simply, "... I hit 'em where they ain't."
While we will always associate this famous quote with Keeler, it is the objective of all batters in the game of baseball to "hit 'em where they ain't." But who exactly are "they"? Evidently, "they" are the nine fielders on the opposing team. In each trip to the plate, a batter attempts to put the ball into play so that it will not be caught or otherwise intercepted by one of the opposing fielders. Conversely, it is the goal of the nine fielders to do just that—to catch or at least get their gloves on a batted ball. + +The fielders seek to position themselves so that their collective likelihood of reaching a batted ball is maximized. When facing a right-handed batter known to pull the ball down the third-base line, for instance, the fielders tend to shift to the left.1 Similarly, when a left-handed pull hitter steps to the plate, the fielders most likely shift to the right. + +We develop a mathematical model that uses elementary probability and calculus to determine the optimal positioning of each of the four infielders—the third baseman, the shortstop, the second baseman, and the first baseman—as a function of the distribution of the batter's hits. + +# The Model + +The first step in developing the model is to establish a method for quantifying the "location" of a fielder or a batted ball in the infield. Let $H$ represent the location of home plate. We assume that any batted ball travels along a straight line through $H$ . Let $\theta$ be the angle between this line and the third-base line (measured in degrees) and set $x = \theta / 90$ . Thus, $x = 0$ represents the third-base line, $x = 1$ represents the first-base line, and $x = 0.5$ represents any point on the line from home plate through the center of second base. Since we are concerned only with fair balls, the range of values of $x$ is limited to [0, 1]. + +Let $X$ be a continuous random variable representing the location of a batted ball and let $f(x)$ be the probability density function of $X$ . 
While a number of different density functions might be appropriate models, we adopt one that is particularly simple—the piecewise-linear distribution, defined by:

$$
f(x) = \left\{ \begin{array}{ll} \frac{2x}{k}, & 0 \leq x \leq k; \\ \frac{2(1-x)}{1-k}, & k \leq x \leq 1, \end{array} \right. \tag{1}
$$

where $k$ is a constant. Note that this density function is unimodal with mode $k$ (see Figure 1). This means that the batter is most likely to hit the ball to position $x = k$ and that the likelihood of hitting the ball to position $x = j$ decreases as $|j - k|$ increases. Furthermore, since the area under any density function must equal 1, $f(k) = 2$ for every $k$.

![](images/5cae14d755040dcc1b37df37876c3bf9a9904c776e00f81ef10cd36a68baace7.jpg)
Figure 1. Piecewise-linear density function.

From the definition of density functions, the probability that a ball is hit in the sector between $x = a$ and $x = b$ is the area under the graph of $f$ over the interval $[a, b]$; in other words, $P(a \leq X \leq b) = \int_{a}^{b} f(x) \, dx$.

We assume that for each infielder there exists an interval of $x$-values such that the infielder can, with probability 1, reach any ball hit in that interval. (This doesn't mean that the infielder will make the play and retire the batter.) We define the range of the infielder as the length of that interval. We accept the conventional infield positioning in the sense that the third baseman is always the leftmost infielder, the shortstop is always to the right of the third baseman, the second baseman is always to the right of the shortstop, and the first baseman is always the rightmost infielder.3

To distinguish between infielders, we use subscripts corresponding to the standard position numbers used in baseball scoring. Specifically, the first baseman is denoted by "3", the second baseman by "4", the third baseman by "5", and the shortstop by "6".
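The density in (1) is easy to sanity-check numerically; a short sketch (with an arbitrary choice of mode, $k = 0.27$) confirms that it integrates to 1 and peaks at $f(k) = 2$:

```python
# Sanity check (a sketch) of the piecewise-linear density in equation (1).
def f(x, k):
    """Piecewise-linear density with mode k on [0, 1]."""
    return 2 * x / k if x <= k else 2 * (1 - x) / (1 - k)

k = 0.27                       # arbitrary mode for the check
n = 100_000                    # midpoint-rule panels
area = sum(f((i + 0.5) / n, k) for i in range(n)) / n
assert abs(area - 1.0) < 1e-3  # total probability is 1
assert f(k, k) == 2.0          # peak height is 2 for every k
```

The midpoint rule is exact on each linear piece, so only the panel containing the kink at $x = k$ contributes any error.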
Let $\vec{x} = (x_{5}, x_{6}, x_{4}, x_{3})$ be a vector representing the location of the four infielders, where $x_{j}$ is the right end of the interval that infiender $j$ can cover. Let $\vec{r} = (r_{5}, r_{6}, r_{4}, r_{3})$ be a vector representing the ranges of the infielders. Thus, infiender $j$ can cover the interval from $x_{j} - r_{j}$ to $x_{j}$ (Figure 2). + +![](images/31bbc8e370d87487b3fe738786357072796fb2fd52f802ff0fba41536f6cf186.jpg) +Figure 2. Positioning of infielders and their ranges. + +We say that $\vec{x}$ is feasible if: + +1. $0 \leq x_{5} - r_{5}$ +2. $x_{5}\leq x_{6} - r_{6},$ +3. $x_{6}\leq x_{4} - r_{4},$ + +4. $x_{4}\leq x_{3} - r_{3},$ and +5. $x_{3}\leq 1$ + +These five inequalities ensure that all four infielders and their entire ranges remain in fair territory and that the ranges of no two infielders overlap, although their ends may coincide at a point. + +We say that $\vec{x}$ is gapless if the inequalities in (2), (3), and (4) are replaced by equal signs. This means that the right end of the third baseman's range coincides with the left end of the shortstop's range, the right end of the shortstop's range coincides with the left end of the second baseman's range, and the right end of the second baseman's range coincides with the left end of the first baseman's range. The positioning in Figure 2 is feasible but not gapless, as there is a gap between the first and second basemen. + +# Optimal Positioning + +Let $L(\vec{x}, \vec{r})$ represent the probability that a set of infielders whose locations are given by $\vec{x}$ and whose ranges are given by $\vec{r}$ will reach a batted ball. Equivalently, $L(\vec{x}, \vec{r})$ is the probability that the batted ball will be in one of the four intervals of the form $[x_j - r_j, x_j]$ , for $j = 3, 4, 5, 6$ . Hence, for any feasible $\vec{x}$ , we have + +$$ +L (\vec {x}, \vec {r}) = \sum_ {j = 3} ^ {6} \int_ {x _ {j} - r _ {j}} ^ {x _ {j}} f (x) d x. 
$$

If $\vec{x}$ is gapless, then

$$
L(\vec{x}, \vec{r}) = \int_{x_5 - r_5}^{x_3} f(x)\, dx = \int_{x_b}^{x_b + R} f(x)\, dx,
$$

where $x_b = x_5 - r_5$ is the left end of the third baseman's interval and $R = \sum_{j=3}^{6} r_j$ is the total range of all of the fielders.

Let

$$
L^{*} = \max \{ L(\vec{x}, \vec{r}) : \vec{x} \text{ is gapless} \}
$$

be the maximum probability that can be attained by a gapless location of infielders. We claim that $L^{*}$ is also the maximum probability that can be attained by any feasible location of infielders. In other words, the optimal location of infielders is always a continuous block—that is, with no gaps between any of them.

To see why this is true, observe that if $\vec{x}$ is gapless, then $L(\vec{x},\vec{r})$ is the area of the "double trapezoid" in Figure 3. (We are assuming that the optimal solution should have the mode of the distribution somewhere in the range that can be covered by the infielders.) This can be most easily computed by adding the areas of the two triangles outside the range and subtracting from 1.

![](images/e8b9756e95be863feb6ca4097dbe6094c5055adfa9f79ee2141c36f0aa31709d.jpg)
Figure 3. Area covered by gapless location of infielders.

Thus,

$$
L(\vec{x}, \vec{r}) = 1 - \frac{1}{2} x_b f(x_b) - \frac{1}{2} (1 - x_b - R) f(x_b + R) = 1 - \frac{x_b^2}{k} - \frac{(1 - x_b - R)^2}{1 - k}.
$$

Differentiating gives

$$
L'(x_b) = \frac{-2 x_b}{k} + \frac{2(1 - x_b - R)}{1 - k}.
$$

Upon setting this equal to 0, we find that $L$ is maximized when $x_b = k(1 - R)$. Hence, the optimal gapless location of infielders covers the interval

$$
[k(1 - R),\ k(1 - R) + R].
$$

Furthermore, using (1), we find that the optimal probability is

$$
L^{*} = 1 - k(1 - R)^2 - \left[ (1 - R)^2 - k(1 - R)^2 \right] = 2R - R^2,
$$

which, not surprisingly, is independent of $k$. Most important, however, is the fact that $f(x_b) = f(x_b + R) = 2(1 - R)$ and that $f(x) \geq 2(1 - R)$ for all $x \in [x_b, x_b + R]$. Thus, the function values at the two ends of the optimal interval are the same, and the function values in the interior of the optimal interval are bigger than those at the ends.

Now imagine that we place a gap of length $d > 0$ between two of the fielders, say the first and second basemen. This will cause the first baseman to move a distance $d_r$ to the right and the other fielders to move a distance $d_l$ to the left, where $d_l + d_r = d$. The probability that the ball is hit in the interval covered by the fielders is

$$
L = L^{*} - \frac{1}{2} d \big[ f(a) + f(b) \big] + \frac{1}{2} d_r \big[ f(x_b + R) + f(x_b + R + d_r) \big] + \frac{1}{2} d_l \big[ f(x_b) + f(x_b - d_l) \big],
$$

where $a$ and $b$ are the endpoints of the gap. We know that $f(x_b) > f(x_b - d_l)$, $f(x_b + R) > f(x_b + R + d_r)$, and both $f(a)$ and $f(b)$ are greater than $2(1 - R)$. Therefore,

$$
L < L^{*} - \frac{1}{2} d \big[ f(a) + f(b) \big] + 2d(1 - R) < L^{*}.
$$

So inserting the gap causes a decrease in the probability that one of the fielders can reach the batted ball.4

# Estimating $k$

The piecewise-linear density function $f$ in (1) that we've chosen to model the distribution of hits depends on one parameter, the mode $k$. For our results to be useful, we have to estimate $k$ based on some data collected about a batter's past performance. In particular, suppose that we have a sample $t_1, t_2, \ldots, t_n$ of $n$ locations at which balls were hit in the infield.
We use these data to determine the maximum likelihood estimate of $k$. (We use the letter $t$ to distinguish from the $x$'s used previously to denote the location of the fielders.)

Without loss of generality, we assume that the data are in increasing numerical order, so that $t_1$ is the smallest observation and $t_n$ is the largest. Then there exists some $j$ such that $t_j \leq k < t_{j+1}$. It follows that the likelihood function for $k$ is

$$
H(k) = \left[ \prod_{i=1}^{j} \left( \frac{2}{k} \right) t_i \right] \left[ \prod_{i=j+1}^{n} \left( \frac{2}{1-k} \right) (1 - t_i) \right] = \frac{C}{k^j (1-k)^{n-j}},
$$

for $t_j \leq k < t_{j+1}$, where $C = 2^n \left[ \prod_{i=1}^j t_i \right] \left[ \prod_{i=j+1}^n (1 - t_i) \right]$ is a constant independent of $k$. Note that to make this definition complete, we define $t_0 = 0$ and $t_{n+1} = 1$, and all vacuous products are also equal to 1.

For example, if $n = 2$, we have:

$$
H(k) = \left\{ \begin{array}{ll} \frac{4(1 - t_1)(1 - t_2)}{(1 - k)^2}, & 0 \leq k \leq t_1; \\ \frac{4 t_1 (1 - t_2)}{k(1 - k)}, & t_1 \leq k \leq t_2; \\ \frac{4 t_1 t_2}{k^2}, & t_2 \leq k \leq 1. \end{array} \right.
$$

Observe that $H$ is a continuous function of $k$.

Our goal is to determine the value of $k$ that maximizes $H(k)$. This is equivalent to minimizing the denominator $g(k) = k^j (1-k)^{n-j}$. Then $g'(k) = k^{j-1} (1-k)^{n-j-1} (j - kn)$. Setting $g'(k) = 0$ and solving for $k$, we see that $k = j/n$ is a critical point of $g$. There is no guarantee, however, that $t_j \leq j/n < t_{j+1}$. If indeed that is true, then $k = j/n$ is a local maximum of $g$, not a local minimum. Consequently, $H(k)$ may (but does not necessarily) achieve a local minimum on $[t_j, t_{j+1}]$. Never does it achieve a local maximum there.
So, for example, in the case $n = 2$, if $t_1 \leq t_2 < 0.5$ or $0.5 < t_1 \leq t_2$, then $H$ is monotonic on each of the intervals $[0, t_1]$, $[t_1, t_2]$, and $[t_2, 1]$. On the other hand, if $t_1 \leq 0.5 \leq t_2$, then $H$ is monotonic on $[0, t_1]$ and $[t_2, 1]$ and achieves a local minimum on $[t_1, t_2]$ at $k = 0.5$.

In all cases, we are left to conclude that the global maximum of $H(k)$ must occur at an endpoint of some interval $[t_j, t_{j+1}]$. In other words, the maximum-likelihood estimate of $k$ is one of the data points. Thus, all we need do is evaluate $H(t_j)$ for all $j$ and select the one that is greatest.

Example: Suppose that a particular batter has hit 7 balls through the infield during the last two games. Examination of videotape reveals the location of these hits, in the context of our model, as: .21, .27, .30, .33, .36, .40, and .64. Furthermore, assume that the fielders' ranges are given by $\vec{r} = (.16, .20, .18, .12)$. This means that the shortstop has the widest range, followed by the second baseman and the third baseman. The first baseman has the smallest range, as is often the case, especially if there is a runner on first base.

Table 1 gives the values of $H(t_{j})$ for $j = 1,2,\ldots ,7$.

Table 1. Values of $H(t_{j})$ for $j = 1,\ldots ,7$ for the example.
| $t_j$ | $H(t_j)$ |
| --- | --- |
| .21 | 24.922 |
| .27 | 31.136 |
| .30 | 31.108 |
| .33 | 27.847 |
| .36 | 22.559 |
| .40 | 15.156 |
| .64 | 1.506 |
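As a check, the entries in Table 1 can be recomputed directly from the likelihood function. The following Python sketch is ours, not part of the paper (in particular, the helper name `likelihood` is our own); it evaluates $H$ at each observed hit location and selects the largest value.

```python
# Maximum-likelihood estimation of the mode k for the piecewise-linear
# hit density f(x; k) = (2/k) x on [0, k] and (2/(1-k)) (1-x) on [k, 1].

def likelihood(k, hits):
    """H(k): the product of the density values f(t_i; k) at the observed hits."""
    H = 1.0
    for t in hits:
        H *= 2 * t / k if t <= k else 2 * (1 - t) / (1 - k)
    return H

# Hit locations from the example (already in increasing order).
hits = [0.21, 0.27, 0.30, 0.33, 0.36, 0.40, 0.64]

# The global maximum of H occurs at one of the data points,
# so it suffices to evaluate H(t_j) at each t_j and take the largest.
values = [(t, likelihood(t, hits)) for t in hits]
k_hat, H_max = max(values, key=lambda pair: pair[1])

for t, H in values:
    print(f"t_j = {t:.2f}   H(t_j) = {H:.3f}")
print(f"MLE: k = {k_hat}")
```

Each printed $H(t_j)$ agrees with Table 1 to rounding, and the maximum is attained at the second data point, $.27$.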
Since $H(t_{j})$ assumes its greatest value when $j = 2$, the maximum-likelihood estimate of $k$ is $t_{2} = .27$. We can now define our density function $f(x)$:

$$
f (x) = \left\{ \begin{array}{l l} 7.41 x, & 0 \leq x \leq .27; \\ 2.74 (1 - x), & .27 \leq x \leq 1. \end{array} \right.
$$

Our goal, however, is to determine the location of our optimal "block." We know that $k = .27$ and that $R = \sum_{j=3}^{6} r_j = .66$. Thus, the left end of the block is at $x_b = k(1 - R) = .09$ and the right end of the block is at $x_b + R = .75$. The third baseman covers the interval [.09, .25], the shortstop covers [.25, .45], the second baseman covers [.45, .63], and the first baseman covers [.63, .75]. Presumably, each infielder is able to range some given distance to the left and some given distance to the right. The optimal positioning of the infielder within the designated interval must be consistent with this ability to move to either side.

# Further Analyses

There are a number of variations to the model described here to explore. First, we might try to adopt a different density function for the distribution of hits. One possibility is a beta distribution $f(x) = cx^{\alpha - 1}(1 - x)^{\beta - 1}$, for some constant $c$. This density function depends on two parameters, $\alpha$ and $\beta$, so we have more control over the shape of the distribution. It remains true that the optimal positioning of infielders is gapless, but calculating the optimal position and estimating the parameters are more difficult problems than in the piecewise-linear case. We could also try density functions that are not unimodal, although there are probably not many batters whose hit distribution would be described by such a function.

Our model maximizes the probability that one of the infielders will get to a batted ball.
It does not take into account the possibility that even though the infielder gets to the ball, doing so may not actually retire the batter (i.e., get the batter out). So we could create a function $p(x,y)$ , which is the probability that an infielder located at position $x$ can retire a batter who hits the ball to location $y$ . Presumably, $p(x,y)$ would be very close to 1 if $x = y$ (meaning that the ball is hit directly at the infielder) and decrease as $|x - y|$ increases. The objective would then be to maximize the collective probability that some infielder retires the batter. + +# About the Authors + +![](images/04c1c49754f05171d91eceff43861dc8278f505757f22a4a38eb9a16f759ce2c.jpg) + +Alan Levine received his Ph.D. in Applied Mathematics from the State University of New York at Stony Brook in 1983. Since then, he has taught in the Dept. of Mathematics at Franklin and Marshall College. Although he has never fulfilled his lifelong ambition to be the statistician for the New York Mets, he did teach a first-year seminar entitled "Math and Sports" in the fall of 2000. + +![](images/9b15c742c22501dd28f16e4e2b75be7234bd584792a4e8f7880676b0f5e3336d.jpg) + +Jordan Ludwick is a December 2000 graduate of Franklin and Marshall College, where he majored in mathematics. He was the preceptor for the Math and Sports course taught by Prof. Levine, and this paper is the result of an independent study project that he did in conjunction with the course. He currently works in the banking industry. + +![](images/4d243d68e5cd7f7437f329b9ca242089e37ed4b90718f5aac6dff2df01a2c5c7.jpg) + +INTERDISCIPLINARY LIVELY APPLICATIONS PROJECT + +# AUTHOR: + +Marie Vanisko + +(Mathematics, Engineering, + +Physics, & Computer + +Science) + +mvanisko@carroll.edu + +Carroll College, Helena, MT + +# EDITORS: + +Chris Arney, + +Kathleen Snook, + +and Steve Horton + +Dept. of Mathematical + +Sciences + +U.S. Military Academy + +West Point, NY + +# Who Falls Through the Healthcare Safety Net? 
+ +# MATHEMATICS CLASSIFICATIONS: + +This project is appropriate for a mathematics course intended to serve students majoring in fine arts and the humanities, frequently referred to as a "liberal arts" mathematics course. + +# DISCIPLINARY CLASSIFICATIONS: + +Health Information Management, Business, Ethics, Sociology, Political Science + +# PREREQUISITE SKILLS: + +An understanding of basic statistical terms and charts + +# PHYSICAL CONCEPTS EXAMINED: + +Actual data sets from the Census Bureau are analyzed concerning those without health insurance, first in general, then relative to those in poverty, and finally broken down by state. The emphasis is on interpreting the statistics and explaining them to others. + +# COMPUTING REQUIREMENT: + +Either a graphing calculator or a computer with spreadsheet, computer algebra system, and/or statistical package. + +The UMAP Journal 23 (1) (2002) 75-82. © Copyright 2001 by COMAP, Inc. All rights reserved. Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice. Abstracting with credit is permitted, but copyrights for components of this work owned by others than COMAP must be honored. To copy otherwise, to republish, to post on servers, or to redistribute to lists requires prior permission from COMAP. + +# Contents + +1. Introduction +2. Instructions +3. Requirements + +Instructors' Comments and Solutions + +About the Author + +# 1. Introduction + +Many of us take our health insurance for granted; but according to the U.S. Census Bureau report for 1999, more than $15\%$ of the people in the United States have no health insurance, public or private. That is more than 42 million people, many of whom are children. 
With increasing medical costs, those without insurance frequently have to forego medical treatment or must give up their life savings to pay for it.

This Interdisciplinary Lively Application Project (ILAP) explores groups of people without health insurance in an effort to analyze the problem. All data are from the U.S. Census Bureau.

# 2. Instructions

Each group must provide detailed written reports, as instructed in the requirements, and be prepared to discuss the results in class.

This ILAP, with periodically updated data, is at http://web.carroll.edu/mvanisko/default.htm. Select Carroll ILAPs on the menu page and go to the ILAPs under Distribution of Wealth. The tables can be copied and pasted into an Excel spreadsheet. If direct pasting does not put the data into columns:

- Select (highlight) the first column (already highlighted after pasting) and deselect the other columns.
- Under the Data menu item, select Text to Columns (the default is Fixed, and that frequently works, but you can check and see).
- Click on Finish—and all the data are in columns. There may be a couple of details that you have to clean up, but they are minor.

# 3. Requirements

# Requirement 1

Table 1 gives a breakdown of individuals not covered by health insurance in 1999. Develop a profile for those without insurance. Show the results graphically if appropriate. Provide a rationale for why the results turned out as they did.

# Requirement 2

Table 2 gives a breakdown of 1999 health insurance coverage status for persons living below the poverty level by selected characteristics. This group represents $11.8\%$ of the United States population, and some people in this group do not qualify for Medicaid (government health insurance for the poor) or Medicare (government health insurance for the elderly). Develop a profile for those without insurance in this group. Discuss why the results turned out as they did. Be sure to look at the percentage who are under 18.
Compare and contrast these results to those in Requirement 1. How does the profile of those in poverty differ from that of the general population?

# Requirement 3

Go to the Internet site http://www.census.gov and examine other Census Bureau data concerning poverty. Determine how "poor" is defined by the Census Bureau. Write a two- to three-page paper on what you have found and be prepared to present your results to the class.

# Requirement 4

After completing the first three requirements, summarize your results in the form of a newspaper article that is approximately two pages in length. The goal is to see how effective you can be in translating the numbers into meaningful rhetoric. After all the reports are in, you will select one group's report to submit to the school and/or the city newspaper.

# Requirement 5

Table 3 provides a breakdown regarding those without health insurance in each state. Categorize each state in terms of its location. Use Table 3 to construct boxplots and scatterplots to explore differences in statistics among the states relative to their location and to their population. Compute the mean and median for the group as a whole and for each region of the country, and compare the measures of central tendency within and across regions. Provide plausible explanations for variations you see.

# Requirement 6

How could the information given about health insurance coverage be used to convince someone that a change in national health policy is needed? How could the same information be used to convince someone that a change is not needed? What does this tell you about this type of information?

# Table 1.

All Persons Not Covered by Health Insurance, by Selected Characteristics: 1999.

Source: U.S. Census Bureau, Current Population Surveys, March 1999 and 2000.
Numbers in thousands.

| Characteristic | Total Number | Number Not Covered | Percent Not Covered |
| --- | --- | --- | --- |
| All | 274,087 | 42,554 | 15.5 |
| Sex | | | |
| Male | 133,933 | 22,073 | 16.5 |
| Female | 140,154 | 20,481 | 14.6 |
| Age | | | |
| Under 18 years | 72,325 | 10,023 | 13.9 |
| 18 to 24 years | 26,532 | 7,688 | 29.0 |
| 25 to 34 years | 37,786 | 8,755 | 23.2 |
| 35 to 44 years | 44,805 | 7,377 | 16.5 |
| 45 to 64 years | 60,018 | 8,288 | 13.8 |
| 65 years and over | 32,621 | 422 | 1.3 |
| Race and Hispanic Origin | | | |
| White | 224,806 | 31,863 | 14.2 |
| White, not of Hisp. Origin | 193,633 | 21,363 | 11.0 |
| Black | 35,509 | 7,536 | 21.2 |
| Asian / Pacific Islander | 10,925 | 2,272 | 20.8 |
| Hispanic origin¹ | 32,804 | 10,951 | 33.4 |
| Education (persons aged 18 and over) | | | |
| No high school diploma | 34,087 | 9,111 | 26.7 |
| High school graduate only | 66,141 | 11,619 | 17.6 |
| Some college, no degree | 39,940 | 6,051 | 15.2 |
| Associate degree | 14,715 | 1,902 | 12.9 |
| Bachelor's degree or higher | 46,880 | 3,848 | 8.2 |
| Work Experience (persons aged 18 to 64) | | | |
| Worked during year | 139,218 | 24,187 | 17.4 |
| Worked full-time | 115,973 | 18,984 | 16.4 |
| Worked part-time | 23,245 | 5,204 | 22.4 |
| Did not work | 29,923 | 7,921 | 26.5 |
| Nativity | | | |
| Native | 245,708 | 33,089 | 13.5 |
| Foreign-born | 28,379 | 9,465 | 33.4 |
| Naturalized citizen | 10,622 | 1,900 | 17.9 |
| Not a citizen | 17,758 | 7,565 | 42.6 |
| Household Income | | | |
| Less than $25,000 | 64,628 | 15,577 | 24.1 |
| $25,000–$49,999 | 77,119 | 13,996 | 18.2 |
| $50,000–$74,999 | 56,873 | 6,706 | 11.8 |
| $75,000 or more | 75,467 | 6,275 | 8.3 |
¹ Persons of Hispanic origin may be of any race.

Table 2.
Poor Persons Not Covered by Health Insurance, by Selected Characteristics: 1999.
Source: U.S. Census Bureau, Current Population Surveys, March 1999 and 2000.
Numbers in thousands.

| Characteristic | Total Number | Number Not Covered | Percent Not Covered |
| --- | --- | --- | --- |
| All | 32,258 | 10,436 | 32.4 |
| Sex | | | |
| Male | 13,813 | 4,830 | 35.0 |
| Female | 18,445 | 5,606 | 30.4 |
| Age | | | |
| Under 18 years | 12,109 | 2,825 | 23.3 |
| 18 to 24 years | 4,603 | 2,088 | 45.4 |
| 25 to 34 years | 3,968 | 2,059 | 51.9 |
| 35 to 44 years | 3,733 | 1,672 | 44.8 |
| 45 to 64 years | 4,678 | 1,686 | 36.0 |
| 65 years and over | 3,167 | 107 | 3.4 |
| Race and Hispanic Origin | | | |
| White | 21,922 | 7,271 | 33.2 |
| White, not of Hisp. Origin | 14,875 | 4,158 | 28.0 |
| Black | 8,360 | 2,347 | 28.1 |
| Asian / Pacific Islander | 1,163 | 485 | 41.7 |
| Hispanic origin¹ | 7,439 | 3,254 | 43.7 |
| Education (persons aged 18 and over) | | | |
| No high school diploma | 7,888 | 2,876 | 36.5 |
| High school graduate only | 6,810 | 2,611 | 38.3 |
| Some college, no degree | 3,162 | 1,278 | 40.4 |
| Associate degree | 836 | 324 | 38.8 |
| Bachelor's degree or higher | 1,452 | 521 | 35.9 |
| Work Experience (persons aged 18 to 64) | | | |
| Worked during year | 8,649 | 4,104 | 47.5 |
| Worked full-time | 5,582 | 2,654 | 47.5 |
| Worked part-time | 3,066 | 1,450 | 47.3 |
| Did not work | 8,333 | 3,400 | 40.8 |
| Nativity | | | |
| Native | 27,507 | 7,817 | 28.4 |
| Foreign-born | 4,751 | 2,619 | 55.1 |
| Naturalized citizen | 968 | 347 | 35.9 |
| Not a citizen | 3,783 | 2,271 | 60.0 |
¹ Persons of Hispanic origin may be of any race.

Table 3.
Number of Persons Covered and Not Covered by Health Insurance by State in 1999.
Source: U.S. Census Bureau, Current Population Surveys, March 1999 and 2000.
| State | Total (thousands) | Covered (thousands) | Not Covered (thousands) | Not Covered (percent) |
| --- | --- | --- | --- | --- |
| United States | 272,691 | 230,424 | 42,267 | 15.5 |
| Alabama | 4,370 | 3,745 | 625 | 14.3 |
| Alaska | 620 | 501 | 118 | 19.1 |
| Arizona | 4,778 | 3,765 | 1,013 | 21.2 |
| Arkansas | 2,551 | 2,176 | 375 | 14.7 |
| California | 33,145 | 26,417 | 6,728 | 20.3 |
| Colorado | 4,056 | 3,375 | 681 | 16.8 |
| Connecticut | 3,282 | 2,960 | 322 | 9.8 |
| Delaware | 754 | 668 | 86 | 11.4 |
| District of Columbia | 519 | 439 | 80 | 15.4 |
| Florida | 15,111 | 12,210 | 2,901 | 19.2 |
| Georgia | 7,788 | 6,534 | 1,254 | 16.1 |
| Hawaii | 1,185 | 1,054 | 132 | 11.1 |
| Idaho | 1,252 | 1,013 | 239 | 19.1 |
| Illinois | 12,128 | 10,418 | 1,710 | 14.1 |
| Indiana | 5,943 | 5,301 | 642 | 10.8 |
| Iowa | 2,869 | 2,631 | 238 | 8.3 |
| Kansas | 2,654 | 2,333 | 321 | 12.1 |
| Kentucky | 3,961 | 3,387 | 574 | 14.5 |
| Louisiana | 4,372 | 3,388 | 984 | 22.5 |
| Maine | 1,253 | 1,104 | 149 | 11.9 |
| Maryland | 5,172 | 4,561 | 610 | 11.8 |
| Massachusetts | 6,175 | 5,527 | 648 | 10.5 |
| Michigan | 9,864 | 8,759 | 1,105 | 11.2 |
| Minnesota | 4,776 | 4,393 | 382 | 8.0 |
| Mississippi | 2,769 | 2,309 | 460 | 16.6 |
| Missouri | 5,468 | 4,998 | 470 | 8.6 |
| Montana | 883 | 719 | 164 | 18.6 |
| Nebraska | 1,666 | 1,486 | 180 | 10.8 |
| Nevada | 1,809 | 1,435 | 375 | 20.7 |
| New Hampshire | 1,201 | 1,079 | 123 | 10.2 |
| New Jersey | 8,143 | 7,052 | 1,091 | 13.4 |
| New Mexico | 1,740 | 1,291 | 449 | 25.8 |
| New York | 18,197 | 15,212 | 2,984 | 16.4 |
| North Carolina | 7,651 | 6,473 | 1,178 | 15.4 |
| North Dakota | 634 | 559 | 75 | 11.8 |
| Ohio | 11,257 | 10,018 | 1,238 | 11.0 |
| Oklahoma | 3,358 | 2,770 | 588 | 17.5 |
| Oregon | 3,316 | 2,832 | 484 | 14.6 |
| Pennsylvania | 11,994 | 10,867 | 1,127 | 9.4 |
| Rhode Island | 991 | 922 | 68 | 6.9 |
| South Carolina | 3,886 | 3,202 | 684 | 17.6 |
| South Dakota | 733 | 647 | 87 | 11.8 |
| Tennessee | 5,484 | 4,853 | 631 | 11.5 |
| Texas | 20,044 | 15,374 | 4,670 | 23.3 |
| Utah | 2,130 | 1,827 | 302 | 14.2 |
| Vermont | 594 | 521 | 73 | 12.3 |
| Virginia | 6,873 | 5,904 | 969 | 14.1 |
| Washington | 5,756 | 4,847 | 910 | 15.8 |
| West Virginia | 1,807 | 1,498 | 309 | 17.1 |
| Wisconsin | 5,250 | 4,673 | 578 | 11.0 |
| Wyoming | 480 | 402 | 77 | 16.1 |
+ +Title: Who Falls Through the Healthcare Safety Net? + +# Instructors' Comments and Solutions + +This ILAP requires minimal computations. The focus here is to raise the mathematical literacy of the intended audience, so that they might become a more "informed citizenry" (in the words of Thomas Jefferson). A "liberal arts" mathematics course is generally intended for students in English, fine arts, history, etc.; such individuals generally avoid mathematics and having to sift through numbers to see what the story is behind the numbers. + +This ILAP provides current information about the state of the nation regarding those without health insurance. The focus of the first four Requirements is summed up in Requirement 4. Students must study the data and write intelligently about where the problem might be. In a similar way, Requirement 6 asks students to interpret the data pertaining to individual states. + +Requirement 5 has formal mathematical content and suggested solutions follow in Table S1 and Figure S1. + +Table S1. +State summary statistics. + +
| Statistic | % Uninsured |
| --- | --- |
| Mean | 14.4 |
| Median | 14.2 |
| Std. Deviation | 4.3 |
| Minimum | 6.9 |
| Maximum | 25.8 |
| 25th percentile | 11.1 |
| 50th percentile | 14.2 |
| 75th percentile | 17.1 |
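The summary figures in Table S1 can be recomputed from the "Not Covered percent" column of Table 3. The Python sketch below is ours, not part of the ILAP; the list is our transcription of the percentages for the 50 states and the District of Columbia (where the printed digits were ambiguous, we used the value implied by the state's covered and not-covered counts).

```python
# Summary statistics for the "Not Covered percent" column of Table 3
# (50 states plus the District of Columbia; the United States row is excluded).
import statistics

pct_uninsured = [
    14.3, 19.1, 21.2, 14.7, 20.3, 16.8, 9.8, 11.4, 15.4, 19.2,   # AL - FL
    16.1, 11.1, 19.1, 14.1, 10.8, 8.3, 12.1, 14.5, 22.5, 11.9,   # GA - ME
    11.8, 10.5, 11.2, 8.0, 16.6, 8.6, 18.6, 10.8, 20.7, 10.2,    # MD - NH
    13.4, 25.8, 16.4, 15.4, 11.8, 11.0, 17.5, 14.6, 9.4, 6.9,    # NJ - RI
    17.6, 11.8, 11.5, 23.3, 14.2, 12.3, 14.1, 15.8, 17.1, 11.0,  # SC - WI
    16.1,                                                        # WY
]

print("Mean:     ", round(statistics.mean(pct_uninsured), 1))   # 14.4
print("Median:   ", statistics.median(pct_uninsured))           # 14.2
print("Std. dev.:", round(statistics.stdev(pct_uninsured), 1))  # 4.3
print("Minimum:  ", min(pct_uninsured))                         # 6.9
print("Maximum:  ", max(pct_uninsured))                         # 25.8
```

The sample standard deviation is used here; it agrees with the 4.3 reported in Table S1 after rounding.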
![](images/83b4c89d28e0ebb0f3b067f22b8ab753ac716e6d959e6b3de878e725c14e4d50.jpg)
Figure S1. Histogram of state percentages.

# About the Author

Marie Vanisko is a professor of mathematics at Carroll College in Helena, Montana, where she has taught for more than 30 years. As a co-director of the NSF Project INTERMATH at Carroll, she has been a primary mover in initiating the writing of interdisciplinary projects (ILAPs), and she has taken a lead role in instituting curricular reform in the undergraduate mathematics program. Marie is co-author of the technology supplement that accompanies the 10th edition of Thomas's Calculus (2001) and has served as a judge in COMAP's Mathematical Contests in Modeling at both the undergraduate and the high school levels. In Spring 2002, she was a visiting professor at the Department of Mathematical Sciences at the U.S. Military Academy at West Point.

# Reviews

Krantz, S. 1999. Handbook of Complex Variables. New York: Birkhäuser. xxiv + 290 pp, $69. ISBN 0-8176-4011-8.

The author is a well-known expositor; he has written about ten books in addition to the one under review, as well as being Editor of the Notices of the American Mathematical Society. In addition, he is a spirited participant in public issues affecting mathematics education, including the impact of technology and software. This book is modeled in many ways on his text on complex analysis, written with Robert E. Greene [1997]. The author is a well-known worker in several complex variables, and his text is a standard reference in that subject; however, the focus of this book is a single complex dimension.

This "handbook" is written in a different spirit than a traditional textbook. There are almost no proofs, and the material is presented in short, snappy paragraphs which are designed to be read somewhat independently.
Thus, while not suitable as a textbook, it is a useful reference for students, teachers, or scientists who need a convenient source to check matters such as contour integrals, numerical methods in conformal mapping, basic transforms (Laplace, Fourier, $Z$ ), approximation theory (Mergelyan, Runge), potential theory (harmonic and subharmonic functions), and an overview of five computer packages (e.g., Mathematica, Maple, Matlab) that are used for computation or sketching. There are also drawings of elementary conformal mappings. Some of these topics are seldom in the textbook literature. + +In addition, the book has a 37-page "Glossary" of mathematical terms at the end, with references to sections in the text in which they are used. There are also tables listing the power series expansions of the most familiar functions, the computation of Laurent coefficients, and standard definite integrals in terms of residues. Other useful features include material on numerical techniques of conformal mapping and an intuitive introduction to homotopy (in the context of analytic continuation) and Riemann surfaces (with lots of pictures). Most proofs are omitted, and although there is a list of textbooks at the end of the book—divided into the categories "classical," "modern," and "applied"—there are few specific references for proofs. + +However, there are some slip-ups and inconsistencies that should be mentioned. This is not a complete list; Mitrovic [2001] has others, and Prof. R. Burckel's list appears below, with his permission. The need to compress material of interest to a wide audience into about 200 pages forces many arbitrary choices, and any reviewer will have other preferences. + +Some situations indicate a disconnect between the author's sophistication and what he reveals to his audience. 
One occurs at the end of the chapter on complex line integrals and the Cauchy theory: If $f$ is analytic inside and on a simple closed curve $\gamma$ and if $z_0$ is inside the curve, then Cauchy's formula asserts that

$$
2 \pi i f (z _ {0}) = \int_ {\gamma} \frac {f (z)}{z - z _ {0}} d z.
$$

The expression on the right is the Cauchy integral of $f$. In fact, the Cauchy integral is an analytic function of $z_0$ when $z_0$ does not lie on $\gamma$, even when $f(z)$ is replaced in the integral by any continuous function $g(z)$; call the resulting function $G(z_0)$. In his "coda" to Chapter 2, the author observes that the relation between $g$ and $G$ is "rather subtle." That said, we are referred to one of his other texts (no page citation), with the barest hint of the relation.

Another example: The chapter on residue integration is very extensive, and even includes the useful case of rational functions on the positive axis $0 \leq x < \infty$ (thus exploiting the multivaluedness of the logarithmic function). The section devoted to

$$
\int_ {- \infty} ^ {\infty} \frac {x ^ {1 / 3}}{1 + x ^ {2}} d x
$$

(which also uses multivaluedness to advantage) has in its heading a reference to making the "spurious part" of the integral disappear. It would seem that this refers to the contribution to the integral near $z = 0$, but the author does not reveal what the term "spurious" means; indeed, the behavior at $z = \infty$ is just as "spurious." In contrast, the preceding section considers

$$
\int_ {- \infty} ^ {\infty} \frac {\sin x}{x} d x,
$$

where the singularity at zero is not "spurious." (The answer depends on the nature of the singularity at $z = 0$; the function $z^{-1}e^{iz}$ has a nonzero residue at the origin. This matter reappears on p. 124 when the author refers to an "integrable singularity," although this term is not defined.)
The careful reader will be confused about whether a sequence of constants $z_{n} = a$, $n = 1,2,\ldots$, has $a$ as a limit point—according to p. 37, the answer is "no," but p. 110 suggests that the answer is "yes." The definition of accumulation point in the glossary is simply wrong; according to any other text, the sequence $z_{n} = (-1)^{n}$ has $\pm 1$ as accumulation points.

Perhaps the topic that causes the greatest confusion to students is the matter of multiple-valued "functions," such as the complex logarithm. The (multivalued) function $\arg z$ is defined at the outset, but the logarithm has to wait until the chapter on analytic continuation. This leads to a discussion of homotopy, and afterward (p. 135) we are told that the multivaluedness of "functions" such as $\sqrt{z}$ (or $\log z$) can be "explained" by the monodromy theorem, since the region $D = \{0 < |z| < 1\}$ is not simply-connected. That is not the full story, since the function $g(z) = z^{-n}$ for any positive integer $n \geq 2$ is holomorphic in $D$ but is single-valued (and does not extend to be holomorphic at 0).

The book encompasses a large amount of material very efficiently. Prof. Burckel suggests Herz [1996] (in German) as having a similar orientation.

# References

Greene, Robert E., and Krantz, Steven G. 1997. Function Theory of One Complex Variable. New York: Wiley. Mathematical Reviews 98d: 30001.
Herz, Andreas. 1996. Repetitorium der Funktionentheorie (mit Staatsexamenaufgaben). Wiesbaden, Germany: Vieweg. Zentralblatt 885.30002.
Mitrovic, D. 2001. Review of Handbook of Complex Variables by Steven G. Krantz. Mathematical Reviews 2001a: 30001.
Timmann, Steffen. 1998. Repetitorium der Funktionentheorie. Springe, Germany: Binomi Verlag. ISBN 3-923923-56-2.

David Drasin, Math Dept., Purdue University, West Lafayette, IN 47907; drasin@math.purdue.edu.

# Mini-List of Errata

p. 7, l. 7: Polynomials have zeros; equations and numbers have roots.
p. 22, last sentence in 2.1.6: Makes no sense: The derivative of $f$ appears in formula (2.1.6.1).
p. 29, l. -4: $F \longrightarrow f$
p. 36: Nonemptiness of $U, V$ is not enough; they must each have nonempty intersection with $W$.
p. 58, 4.5.5: "spurious"? Strange use of the word.
p. 64, l. 5: The subtle linguistic distinction that you were at pains to point out in def. 4.6.2 is lurking here, too: $S$ is not a "singular set" (sic); it is rather a "singularity set" (i.e., comprised of singularities).
p. 87, l. 6: "deformed to a point": meaning that the deformation never leaves $U$?
p. 87, l. 8: "the complement of $U$ has only one component"? The simply-connected region $\mathbb{C} \setminus \{ x \in \mathbb{R} : |x| \geq 1 \}$ fails this criterion.
p. 88, l. -1: The slit-plane result is valid without finite-connectivity.
p. 89: Why $\equiv$ in (7.1.2.1) but simple $=$ in (7.1.2.2)? I should think either both $\equiv$ or only the second (being a definition).
p. 96, l. 7: $\zeta \longrightarrow x$
p. 104, 8.1.4: Not the customary definition. This is called local uniform convergence. A series $\sum x_{n}$ in a normed linear space is normally convergent if $\sum \|x_{n}\| < \infty$. Your definition does not have the desirable feature that a normally convergent series is absolutely convergent.
p. 110, 8.2.4: For a constant sequence, this would seem to be false; that is, if $\{a_{n} : n \in \mathbb{N}\}$ is a single number, there is then no accumulation point per the definition on p. 37.
p. 113, (8.3.6.1): Is $a_{l}^{j}$ a power, like $(z - \alpha_{j})^{l}$ adjacent to it?
p. 117, 9.1.1: Spelling.

p. 122, 9.3.6: "finite order" seems irrelevant: $f$ can be any nonpolynomial entire function.
p. 124, (10.1.2.2.2): "... the singularity will be integrable ..."? Was this defined?
p. 135, l. 5: "... we can now understand ..." How does the Monodromy Theorem, which provides (only) a sufficient condition for analytic continuability, explain non-continuability?
p. 145, Ex. 11.1.4.2: The "uniform" clause violates the Maximum Principle.
p. 150: Paul Koebe never wrote his name as "Köbe."
p. 162, 13.3.3: Why not mention the beautiful proof of D.J. Newman (Mathematical Reviews 82h: 10056, 98j: 11069, 2000b: 11110)?
p. 168, (14.2.2.10): The right-hand side is not a function of $(x,y)$.
p. 170, Figure 14.6: The wrong triangle is shaded.
p. 187: The top figure can't be correct, because (1) the image set is unbounded and (2) the image set is conjugate-closed.
p. 195, l. 5: Among "delightful" (and very accessible) texts you should mention T.W. Körner's Fourier Analysis, Cambridge University Press, Cambridge, 1988 (Mathematical Reviews 89f: 42001).
p. 231: Accumulation point seems rather to describe convergence.
p. 247, l. -3: "alternate" should be "alternative"—cf. p. 251, l. -2.
p. 247, ll. 7-8: What if $x = 0$ and $y = i$? Better define the real part of $z$ as $(z + \bar{z}) / 2$. Similar problem on p. 252, l. 10, and in the definition of complex conjugate itself.

R.B. Burckel, Math Dept., Kansas State University, Cardwell Hall, Manhattan, KS 66506-2602; burckel@math.ksu.edu.

Colley, Susan Jane. Vector Calculus. 2nd ed. Upper Saddle River, NJ: Prentice-Hall, 2002; xiv + 558 pp, $93.33. ISBN 0-13-041531-6.

According to the author, this second edition differs from the first in the addition of exercises, a new final Chapter 8 on differential forms, and expanded discussion of several topics. The first seven chapters are devoted to vectors, differentiation, vector-valued functions, maxima and minima, multiple integrals, line integrals, and surface integrals.

The book has a number of strikingly good qualities. The author has made a strenuous effort to treat each topic thoroughly, to anticipate points that can cause difficulty for the student, to provide abundant illustrations (including those from applied fields), to point out computer techniques where relevant, and to supply a large body of exercises.
There are also some novel ideas drawn from the literature, such as a test for constrained local extrema due to D. Spring and a discussion of Bézier curves. + +For the difficult task of presenting the underlying theory, the author provides carefully stated propositions (which are highlighted), discusses their significance, and gives proofs or proof sketches for many but not all results. For example, an intuitive proof is given for the anticommutativity of the vector product, and the basic theorems on limits and continuity are stated without proof; but the differentiability of a function having continuous partial derivatives is proved, as is the theorem on interchange of order of partial derivatives. + +For theorems for which no proof is given, the reviewer wishes that references to other sources had been provided, so that a curious student could have a complete understanding of the theorems. Very few such references are given. However, at the end of the book, there is a short list of works for further reading. + +The first four chapters follow traditional approaches to the subjects. The later chapters concern integration, and for them the author has made some novel choices in treating the theory. For the double integral, the author gives a fairly standard definition of the Riemann integral for a function over a rectangular region in an $xy$ -plane. The reviewer is not satisfied with the definition, since it refers to a limit as "all $\Delta x_i$ , all $\Delta y_j$ approach 0," and this has no clear meaning. The standard use of a mesh would clarify this. The theorem that the integral exists if the function is continuous is then stated but not proved. This is followed by the theorem that the integral exists if the function is bounded and its set of discontinuities has zero area; again there is no proof, and "zero area" is not defined. 
Next comes a theorem called Fubini's theorem, which is in fact a very weak form of that theorem, but a form very adequate for most applications; a proof is given. Four basic properties of the double integral are listed and one of them is proved. Next, an "elementary region" in the $xy$ -plane is defined (suitable for forming an iterated integral), and the integral over such a region is defined to be the integral over an enclosing rectangle of the function extended to equal zero outside the region. It is then proved that the integral just defined equals the appropriate iterated integral. The proof, occupying 4 pages, is difficult. Triple integrals are treated similarly. The theorem on change of variables for double integrals is stated only for the case of a mapping of an elementary region onto an elementary region, and a sketch of a proof is given. + +It would be difficult to present this exposition of the theory of multiple integrals to students. Typically, at this stage of their mathematical education, their greatest need is for practice with many examples and for guidance in gaining an intuitive feeling for the subject. The fine points of theory, even very well presented, usually escape them. + +The next chapter, on line integrals, starts by defining the integral over a smooth path in the $xy$ -plane with respect to arclength. The proof that the integral defined is the limit of appropriate sums is not complete. The vector line integral is then defined and shown to have a value independent of parametrization of the path. The author then introduces the concept of a "curve" as the image set of a path and defines line integrals over such curves. The reviewer finds this development unnecessary and confusing; it would be far better to define a (directed) curve as an equivalence class of paths, and then the theorems already proved provide meaning for line integrals over such curves. 
+ +The word "region" appears at various points in the later chapters but apparently is not defined, and the term "connected" appears for the first time on p. 403. Noting these facts, the reviewer was led to examine the domains of functions throughout the book and was surprised to find that for the most part the domain is an arbitrary open set in $n$ -space. Thus, even for functions of one variable, the domain is allowed to be an arbitrary open set on the real + +line, hence a union of disjoint open intervals. It seems strange to expect a student just beginning to learn about partial derivatives and multiple integrals to be thinking in terms of such sets. The tradition for one-variable calculus is to consider functions defined on intervals, in many cases closed intervals; for many-variable calculus, one considers functions defined on regions, which may be open or closed or occasionally formed of an open region plus part of its boundary. The examples in this book are all of this sort, so nothing is gained by allowing disconnected domains. The unnecessary generality can even lead to errors; for example, on p. 241 equation (2) gives a remainder theorem that is false if the domain is disconnected. + +The final chapter, on differential forms, is well written, with many examples. The smoothness of the mapping functions needs clarification; on p. 489 it is suggested that they are of class $C^k$ , but $k$ is never specified and $k$ is used in another context later in the chapter. One wonders how often an instructor will include this topic in a course. However, it is good to have it in the book, and it may well encourage a student to learn about Stokes's Theorem in all its generality and to pursue more advanced mathematics. + +Wilfred Kaplan, Math Dept., University of Michigan, 2072 East Hall, 525 E. University Ave., Ann Arbor, MI 48109-1109; wilkap@umich.edu. + +Beatrous, Frank and Casper Curjel. Multivariable Calculus: A Geometric Approach. 
Upper Saddle River, NJ: Prentice-Hall, 2002; xii + 456 pp, $88. ISBN 0-111-22222-3. + +This work is directed at students with a very weak background who want to gain some feeling for calculus beyond the first course without having to master difficult mathematical theory. Very little is proved, and no linear algebra is assumed or even mentioned. Concepts are introduced by examples, with no attempt to provide a clear and precise formulation of the theory. The term "geometric approach" in the title refers to frequent appeals to geometric intuition, with the aid of a large number of drawings presenting 2-dimensional and 3-dimensional figures. Euclidean geometry is hardly referred to; even the Pythagorean theorem is applied without being mentioned. Analytic geometry is treated in a superficial manner, with no discussion of conic sections or quadric surfaces. The text does not refer to computer aids, except for a discussion in the preface of a computer lab, in which problems are treated graphically with the aid of computer programs. + +The first chapter treats vectors and curves in the plane and in space, taking 65 pages to cover vector operations through the cross product. Chapter 2 covers partial derivatives, directional derivatives, local, global and constrained extrema. Chapter 3 treats double and triple integrals, with an intuitive approach to the limit processes. The remaining four chapters are concerned with vector fields, line and surface integrals, and the familiar theorems. + +A good instructor can certainly convey a substantial amount of mathematics with the aid of good figures and intuitive reasoning based on them. Thus, the authors' goal makes sense and the book can well meet the needs of an appropriate body of students. For those wanting to go further in mathematics, the book is inadequate, since its absence of rigor prevents it from laying the foundations for advanced courses. 
This reviewer found some minor flaws, which are noted in the following paragraphs. + +Very explicit instructions on graphing are provided (pp. 40-41): The unit of distance is always to be 1 centimeter. The angle between two vectors is defined (p. 31) without mention of the case where one or both vectors are the zero vector; hence, there may be zeros in denominators on subsequent pages. There is similar difficulty for the cross product (p. 60). + +The theorem on interchange of order of partial derivatives is stated (p. 122) with a footnote indicating that it is valid only for "certain functions" (unspecified). On p. 125, a problem on interest payments involves differentiating a function defined only for integer values. Generally, there is little attention paid to where functions are defined. In particular, intervals, open regions, and closed regions are not discussed except for the statement (p. 187) that "filled in figures are called regions or domains"; later, the term "of finite extent" is used for bounded sets, with no explanation. In evaluating double integrals, the authors use what they call "x-slices" and "y-slices," with no discussion of the sets for which the method is applicable. + +The authors present a "curl test" for independence of path of a line integral; but, in the absence of a discussion of open regions, connectedness, and simple-connectedness, the reader is left with a very incomplete statement of validity of the test (pp. 318-320). Surface integrals are developed before a discussion of surface area. + +Wilfred Kaplan, Math Dept., University of Michigan, 2072 East Hall, 525 E. University Ave., Ann Arbor, MI 48109-1109; wilkap@umich.edu. + +Snieder, Roel. A Guided Tour of Mathematical Methods for the Physical Sciences. Cambridge, U.K.: Cambridge University Press, 2001; xi + 429 pp, $30 (P). ISBN 0-521-78751-3. + +This book is fun. It introduces the topics of vector analysis in an easy, discursive style with engaging examples. 
It is ideal for self-study or seminar-study or for the instructor who is looking for great applications of the mathematical ideas. Where else can one find out:

- How fast is the Earth growing by the accumulation of cosmic dust?
- Why is life not possible in a five-dimensional world?
- Where does lightning start?
- What connects quantum mechanics and hydrodynamics?
- What causes the explosion of a nuclear bomb?
- Is the Earth's mantle convecting?
- How to design a frequency filter?
- How to predict the motion of a particle in syrup?
- Why is pressure in a fluid isotropic?

Most of the book is devoted to vector analysis, but there also is one chapter each on linear algebra (which gets as far as singular value decomposition), the Dirac delta function, Green's function, Fourier analysis, perturbation theory, and two chapters on complex analysis. The bulk of the examples draw on gravitational fields, electricity, and magnetism; but as the list given in the first paragraph illustrates, there is also a playfulness about the chosen illustrations. The book is a trove of interesting and richly-referenced applications to spark up mathematics classes. The first example of the first chapter takes an old chestnut—Given the rate of change of the volume of a sphere, how fast is its radius changing?—and gives it new life by asking: Given that cosmic dust is estimated to add $4 \times 10^{7} \mathrm{~kg}$ per annum to the earth's mass and that the density of a meteor is $2.5 \times 10^{3} \mathrm{~kg} / \mathrm{m}^{3}$, how fast is the radius of the Earth increasing?

What makes this book special is that it is designed to be read. The introduction of the mathematical technique and the description of the applications are done in a conversational tone. Whole paragraphs—occasionally, whole pages—go by without a displayed equation.
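(A quick check of that first exercise, assuming a mean Earth radius of about $6.4 \times 10^{6}$ m, a figure not supplied in the review: spreading the added mass uniformly over the sphere's surface area $4\pi R^{2}$ gives

$$\frac{dR}{dt} = \frac{\dot{M}}{4\pi R^{2}\rho} = \frac{4 \times 10^{7}\ \mathrm{kg/yr}}{4\pi\,(6.4 \times 10^{6}\ \mathrm{m})^{2}\,(2.5 \times 10^{3}\ \mathrm{kg/m^{3}})} \approx 3 \times 10^{-11}\ \mathrm{m/yr},$$

about three-hundredths of a nanometer per year.)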
But the author makes sure that the reader has pen and paper nearby, for he constantly interrupts his narrative to ask the reader to check something, to perform a calculation, to graph a function, to try out the theory in a particular case. + +Something has to be sacrificed: This book would be difficult to use as a text in a traditional course. Nothing is labeled as a theorem or as a definition; there are no formal proofs. As a reference book, it would be awkward and frustrating, because most definitions and principal results are embedded within the text. There are no practice problems at the ends of the chapters. But the author is up-front about this drawback and suggests suitable supplementary texts with a more conventional approach. + +The level is described as advanced undergraduate to beginning graduate. That may be a little high for mathematics or physics majors. I could see using the book in the junior or senior year. The author assumes that the reader is familiar with partial derivatives, multiple integrals, basic linear algebra, and vector algebra. With this preparation and a willingness to read with pen and paper at hand, any student is equipped to embark on a delightful journey. + +David Bressoud, Mathematics and Computer Science Dept., Macalester College, 1600 Grand Ave., St. Paul, MN 55105-1899; bressoud@macalester.edu. + +# Guide for Authors + +# Focus + +The UMAP Journal focuses on mathematical modeling and applications of mathematics at the undergraduate level. The editor also welcomes expository articles for the On Jargon column, reviews of books and other materials, and guest editorials on new ideas in mathematics education or on interaction between mathematics and application fields. Prospective authors are invited to consult the editor or an associate editor. 
+ +# Understanding + +A manuscript is submitted with the understanding—unless the authors advise otherwise—that the work is original with the authors, is contributed for sole publication in the Journal, and is not concurrently under consideration or scheduled for publication elsewhere with substantially the same form and content. Pursuant to U.S. copyright law, authors must sign a copyright release before editorial processing begins. Authors who include data, figures, photographs, examples, exercises, or long quotations from other sources must, before publication, secure appropriate permissions from the copyright holders and provide the editor with copies. The Journal's copyright policy and copyright release form appear in Vol. 18 (1997) No. 1, pp. 1-14 and at ftp://cs.beloit.edu/math-cs/Faculty/Paul Campbell/Public/UMAP. + +# Language + +The language of publication is English (but the editor will help find translators for particularly meritorious manuscripts in other languages). The majority of readers are native speakers of English, but authors are asked to keep in mind that readers vary in their familiarity with vocabulary, idiomatic expressions, and slang. Authors should use consistently either British or American spelling. + +# Format + +Even short articles should be sectioned with carefully chosen (unnumbered) titles. An article should begin by saying clearly what it is about and what it will presume of the reader's background. Relevant bibliography should appear in a section entitled *References* and may include annotations, as well as sources not cited. Authors are asked to include short biographical sketches and photos in a section entitled *About the Author(s)*. + +# Style Manual + +On questions of style, please consult current Journal issues and The Chicago Manual of Style, 13th or 14th ed. (Chicago, IL: University of Chicago Press, 1982, 1993). + +# Citations + +The Journal uses the author-date system. 
References cited in the text should include between square brackets the last names of the authors and the year of publication, with no intervening punctuation (e.g., [Kolmes and Mitchell 1990]). For three or more authors, use [Kolmes et al. 1990]. Papers by the same authors in the same year may be distinguished by a lowercase letter after the year (e.g., [Fjelstad 1990a]). A specific page, section, equation, or other division of the cited work may follow the date, preceded by a comma (e.g., [Kolmes and Mitchell 1990, 56]). Omit "p." and "pp." with page numbers. Multiple citations may appear in the same brackets, alphabetically, separated by semicolons (e.g., [Ng 1990; Standler 1990]). If the citation is part of the text, then the author's name does not appear in brackets (e.g., "... Campbell [1989] argued ...").

# References

Book entries should follow the format (note placement of year and use of periods):

Moore, David S., and George P. McCabe. 1989. Introduction to the Practice of Statistics. New York, NY: W.H. Freeman.

For articles, use the form (again, most delimiters are periods):

Nievergelt, Yves. 1988. Graphic differentiation clarifies health care pricing. UMAP Modules in Undergraduate Mathematics and Its Applications: Module 678. The UMAP Journal 9 (1): 51-86. Reprinted in UMAP Modules: Tools for Teaching 1988, edited by Paul J. Campbell, 1-36. Arlington, MA: COMAP, 1989.

# What to Submit

Number all pages, put figures on separate sheets (in two forms, with and without lettering), and number figures and tables in separate series. Send three paper copies of the entire manuscript, plus the copyright release form, and—by email attachment or on diskette—formatted and unformatted ("text" or ASCII) files of the text and a separate file of each figure. Please advise the computer platform and names and versions of programs used. The Journal is typeset in LaTeX using EPS or PICT files of figures.
# Refereeing

All suitable manuscripts are refereed double-blind, usually by at least two referees.

# Courtesy Copies

Reprints are not available. Authors of an article each receive two copies of the issue; the author of a review receives one copy; authors of a UMAP Module or an ILAP Module each receive two copies of the issue plus a copy of the Tools for Teaching volume. Authors may reproduce their work for their own purposes, including classroom teaching and internal distribution within their institutions, provided copies are not sold.

# UMAP Modules and ILAPs

A UMAP Module is a teaching/learning module, with precise statements of the target audience, the mathematical prerequisites, and the time frame for completion, and with exercises and (often) a sample exam (with solutions). An ILAP (Interdisciplinary Lively Application Project) is a student group project, jointly authored by faculty from mathematics and a partner department. Some UMAP Modules and ILAPs appear in the Journal, others in the annual Tools for Teaching volume. Authors considering whether to develop a topic as an article, a UMAP Module, or an ILAP should consult the editor.

# Where to Submit

Reviews, On Jargon columns, and ILAPs should go to the respective associate editors, whose addresses appear on the Journal masthead. Send all other manuscripts to

Paul J. Campbell, Editor

The UMAP Journal

Campus Box 194

Beloit College

700 College St.

Beloit, WI 53511-5595

USA

voice: (608) 363-2007 fax: (608) 363-2718 email: campbell@beloit.edu

# The UMAP Journal

Publisher

COMAP, Inc.

Executive Publisher

Solomon A. Garfunkel

ILAP Editor

David C.
"Chris" Arney + +Dean of the School of + +Mathematics and Sciences + +The College of Saint Rose + +432 Western Avenue + +Albany, NY 12203 + +arneyc@mail.strose.edu + +On Jargon Editor + +Yves Nievergelt + +Department of Mathematics + +Eastern Washington University + +Cheney, WA 99004 + +ynievergelt@ewu.edu + +Reviews Editor + +James M. Cargal + +Mathematics Dept. + +Troy State Univ. Montgomery + +231 Montgomery St. + +Montgomery, AL 36104 + +jmcargal@sprintmail.com + +Chief Operating Officer + +Laurie W. Aragon + +Production Manager + +George W. Ward + +Director of Educ. Technology + +Roland Cheyne + +Production Editor + +Pauline Wright + +Copy Editor + +Timothy McLean + +Distribution + +Kevin Darcy + +John Tomicek + +Graphic Designer + +Daiva Kiliulis + +# AP Journal + +Vol. 23, No. 3 + +# Editor + +Paul J. Campbell + +Campus Box 194 + +Beloit College + +700 College St. + +Beloit, WI 53511-5595 + +campbell@beloit.edu + +# Associate Editors + +Don Adolphson + +David C. "Chris" Arney + +Ron Barnes + +Arthur Benjamin + +James M. Cargal + +Murray K. Clayton + +Courtney S. Coleman + +Linda L. Deneen + +James P. Fink + +Solomon A. Garfunkel + +William B. Gearhart + +William C. Giauque + +Richard Haberman + +Charles E. Lienert + +Walter Meyer + +Yves Nievergelt + +John S. Robertson + +Garry H. Rodrigue + +Ned W. Schillow + +Philip D. Straffin + +J.T. Sutcliffe + +Donna M. Szott + +Gerald D. Taylor + +Maynard Thompson + +Ken Travers + +Robert E.D. "Gene" Woolsey + +Brigham Young University + +The College of St. Rose + +University of Houston-Downtown + +Harvey Mudd College + +Troy State University Montgomery + +University of Wisconsin—Madison + +Harvey Mudd College + +University of Minnesota, Duluth + +Gettysburg College + +COMAP, Inc. 
+ +California State University, Fullerton + +Brigham Young University + +Southern Methodist University + +Metropolitan State College + +Adelphi University + +Eastern Washington University + +Georgia College and State University + +Lawrence Livermore Laboratory + +Lehigh Carbon Community College + +Beloit College + +St. Mark's School, Dallas + +Comm. College of Allegheny County + +Colorado State University + +Indiana University + +University of Illinois + +Colorado School of Mines + +# MEMBERSHIP PLUS FOR INDIVIDUAL SUBSCRIBERS + +Individuals subscribe to The UMAP Journal through COMAP's Membership Plus. This subscription includes print copies of quarterly issues of The UMAP Journal, our annual collection UMAP Modules: Tools for Teaching, our organizational newsletter Consortium, on-line membership that allows members to search our on-line catalog, download COMAP print materials, and reproduce for use in their classes, and a $10\%$ discount on all COMAP materials. + +(Domestic) #2220 \$75 + +(Outside U.S.) #2221 $85 + +# INSTITUTIONAL PLUS MEMBERSHIP SUBSCRIBERS + +Institutions can subscribe to the Journal through either Institutional Pus Membership, Regular Institutional Membership, or a Library Subscription. Institutional Plus Members receive two print copies of each of the quarterly issues of The UMAP Journal, our annual collection UMAP Modules: Tools for Teaching, our organizational newsletter Consortium, on-line membership that allows members to search our on-line catalog, download COMAP print materials, and reproduce for use in any class taught in the institution, and a $10\%$ discount on all COMAP materials. + +(Domestic) #2270 $395 + +(Outside U.S.) #2271 $415 + +# INSTITUTIONAL MEMBERSHIP SUBSCRIBERS + +Regular Institutional members receive only print copies of The UMAP Journal, our annual collection UMAP Modules: Tools for Teaching, our organizational newsletter Consortium, and a $10\%$ discount on all COMAP materials. 
+ +(Domestic) #2240 $165 + +(Outside U.S.) #2241 $185 + +# LIBRARY SUBSCRIPTIONS + +The Library Subscription includes quarterly issues of The UMAP Journal and our annual collection UMAP Modules: Tools for Teaching and our organizational newsletter Consortium. + +(Domestic) #2230 $140 + +(Outside U.S.) #2231 $160 + +To order, send a check or money order to COMAP, or call toll-free 1-800-77-COMAP (1-800-772-6627). + +The UMAP Journal is published quarterly by the Consortium for Mathematics and Its Applications (COMAP), Inc., Suite 210, 57 Bedford Street, Lexington, MA, 02420, in cooperation with the American Mathematical Association of Two-Year Colleges (AMATYC), the Mathematical Association of America (MAA), the National Council of Teachers of Mathematics (NCTM), the American Statistical Association (ASA), the Society for Industrial and Applied Mathematics (SIAM), and The Institute for Operations Research and the Management Sciences (INFORMS). The Journal acquaints readers with a wide variety of professional applications of the mathematical sciences and provides a forum for the discussion of new directions in mathematical education (ISSN 0197-3622). + +Second-class postage paid at Boston, MA + +and at additional mailing offices. + +Send address changes to: + +The UMAP Journal + +COMAP, Inc. + +57 Bedford Street, Suite 210, Lexington, MA 02420 + +Copyright 2002 by COMAP, Inc. All rights reserved. + +# Vol. 23, No. 3 2002 + +# Table of Contents + +# Publisher's Editorial + +Roots + +Solomon A. Garfunkel 185 + +# Modeling Forum + +Results of the 2002 Contest in Mathematical Modeling 187 + +Frank Giordano + +Simulating a Fountain + +Lyric P. Doshi, Joseph Edgar Gonzalez, and Philip B. Kidd . . . . . 209 + +The Fountain That Math Built + +Alex McCauley, Josh Michener, and Jadrian Miles. 221 + +Wind and Waterspray + +Tate Jarrow, Colin Landon, and Mike Powell. 235 + +A Foul-Weather Fountain + +Ryan K. Card, Ernie E. Esser, and Jeffrey H. Giansiracusa . 
251 + +Judge's Commentary: The Outstanding Wind and Waterspray Papers + +Patrick J. Driscoll 267 + +Things That Go Bump in the Flight + +Krista M. Dowdey, Nathan M. Gossett, and Mark P. Leverentz . 273 + +Optimal Overbooking + +David Arthur, Sam Malone, and Oaz Nir 283 + +Models for Evaluating Airline Overbooking + +Michael P. Schubmehl, Wesley M. Turner, and Daniel M. Boylan. 301 + +Overbooking Strategies, or + +"Anyone Willing to Take a Later Flight?!" + +Kevin Z. Leder, Saverio E. Spagnolie, and Stefan M. Wild . . . . . 317 + +ACE is High + +Anthony C. Pecorella, Crystal T. Taylor, and Elizabeth A. Perez . 339 + +The Airline Overbooking Problem + +John D. Bowman, Corey R. Houvard, and Adam S. Dickey . . . 351 + +Author-Judge's Commentary: The Outstanding Airline Overbooking Papers + +William P. Fox 365 + +![](images/6737061072c40ec4556b0da9c61404b65c1d4ec639fe235f3c11cb83c5a2e5e8.jpg) + +# Publisher's Editorial Roots + +Solomon A. Garfunkel + +Executive Director + +COMAP, Inc. + +57 Bedford St., Suite 210 + +Lexington, MA 02420 + +s.garfunkel@mail.comap.com + +In the beginning... Well, to be honest, I no longer remember the beginning very well. But when COMAP began in late 1980, we had a rather simple mission. We believed, then as now, in teaching mathematics through modeling and applications. We believed, and continue to believe, that students and teachers need to have persuasive answers to "What will we ever use this for?" + +We set out to create a body of curriculum materials—in instructional modules, journals, newsletters and texts; in print, video, and electronic formats—that embodied this approach, presenting mathematics through its contemporary applications. And, over the years, we have worked at every educational level, from elementary school to graduate school, because we believe that students need to see mathematics as part of their personal experience from their first encounters with number and pattern. + +But in the beginning... 
In the beginning, we worked at the undergraduate level. We—and here I include the first authors, field-testers, and users—were college mathematics faculty. Many of us received our degrees in the late 1960s—and we were legion. Remember, universities were granting approximately 1,400 doctorates per year in the mathematical sciences then (1,100 today), and over $90\%$ were U.S. citizens (about $50\%$ today)—all looking for jobs at the same 20 universities at the same time. Not surprisingly, we were flung far and wide; and many found themselves at colleges without a legacy of mathematical research and with consequently high teaching loads.

But we had the energy of youth and the idealism of the late 1960s. We would change the way mathematics was taught in the college mathematics departments where we worked. And with a great deal of simplification, that is how the UMAP Module series and this Journal were born.

As I'm sure you know, in the last several years we have had a number of projects that focused on the high school level; readers of this Journal certainly don't need a sermon on the importance of K-12 mathematics instruction. And, importantly for COMAP, we have greatly expanded our experiences and, I hope, grown with these efforts. But in this editorial, I want to describe a new project we are undertaking that takes us back to our undergraduate roots.

Mathmodels.org—the Mathematical Modeling Forum, or just the Modeling Forum—is a new program recently funded by the National Science Foundation. Its purpose is to help teachers and students learn and participate in the modeling process. One of COMAP's most successful endeavors has been the Mathematical Contest in Modeling (MCM). But one failing of the contest format is that we cannot provide students with feedback on their papers. Were they on the right track? Did they not take into account a crucial variable? Also, contests are designed to select winning papers, not to promote cooperation between teams.
Mathmodels.org is designed to do better.

The idea is simple: Have a Web site where modeling problems are posted and where students, individually and in teams, can present whole or partial solutions. The student work will be monitored by experienced faculty, who will give feedback at regular intervals as well as encourage cooperation with other students and teams. Students and faculty will communicate through threaded discussions. Prof. Pat Driscoll (U.S. Military Academy) is the project director, and I'll do some work on this as well. We see this project as a natural extension of the MCM—the forum can clearly help students prepare for the contest. But more importantly, it can give them rich experiences with a wide variety of modeling problems without time constraints. For the faculty, the forum will provide an important source of problems and examples. We hope that you will join us as site mentors, problem sources, or just enthusiastic members of our modeling forum. We are very excited about the opportunity that this grant will afford us to strengthen our ability to meet our core mission as we strengthen our roots in the undergraduate mathematics community.

And speaking of the mathematics community, how many of each year's 1,100 new Ph.D.'s in mathematics know about COMAP or about this Journal?

College mathematics faculty are the main readers and subscribers to this Journal, which was founded for them and for their students. But COMAP cannot be sustained into the future by the same college mathematics faculty who contributed ideas, articles, and Modules 15, 20, or 25 years ago. We and they will pass; before then, however, we must pass the torch.
Hence we ask you to

- show this and other issues of this Journal and other COMAP materials to young members in your department,
- suggest that they consider joining COMAP (information is on the back side of the title page of this issue), and
- urge them to contact me, editor Paul Campbell, any of the associate editors of this Journal, or any of the coaches of MCM teams listed in this issue about how to become active as an author, a reviewer of manuscripts, a book reviewer, or coach of an MCM or ICM team.

# Modeling Forum

# Results of the 2002 Mathematical Contest in Modeling

Frank Giordano, MCM Director

COMAP, Inc.

57 Bedford St., Suite 210

Lexington, MA 02420

f.giordano@mail.comap.com

# Introduction

A total of 525 teams of undergraduates, from 282 institutions in 11 countries, spent the second weekend in February working on applied mathematics problems in the 18th Mathematical Contest in Modeling (MCM).

The 2002 MCM began at 8:00 P.M. EST on Thursday, Feb. 7 and officially ended at 8:00 P.M. EST on Monday, Feb. 11. During that time, teams of up to three undergraduates were to research and submit an optimal solution for one of two open-ended modeling problems. Students registered, obtained contest materials, downloaded the problems at the appropriate time, and entered completion data through COMAP's MCM Web site.

Each team had to choose one of the two contest problems. After a weekend of hard work, solution papers were sent to COMAP on Monday. Ten of the top papers appear in this issue of The UMAP Journal.

Results and winning papers from the first seventeen contests were published in special issues of Mathematical Modeling (1985-1987) and The UMAP Journal (1985-2001). The 1994 volume of Tools for Teaching, commemorating the tenth anniversary of the contest, contains all of the 20 problems used in the first ten years of the contest and a winning paper for each.
Limited quantities of that volume and of the special MCM issues of the Journal for the last few years are available from COMAP.

This year's Problem A was about controlling the wind-blown spray that an ornamental fountain, located in the midst of a plaza surrounded by buildings, deposits on passersby. The water flow is controlled by a mechanism linked to an anemometer located on top of an adjacent building. Students were asked to design a control algorithm that would provide a balance between an attractive spectacle and a soaking.

Problem B focused on the challenges associated with the airline practice of overbooking flight reservations. Students were asked to determine an optimal overbooking strategy in light of operational constraints evolving from the events of September 11, 2001.

In addition to the MCM, COMAP also sponsors the Interdisciplinary Contest in Modeling (ICM) and the High School Mathematical Contest in Modeling (HiMCM). The ICM, which runs concurrently with the MCM, offers a modeling problem involving concepts in mathematics, environmental science, environmental engineering, and/or resource management. Results of this year's ICM are on the COMAP Web site at http://www.comap.com/undergraduate/contests; results and Outstanding papers appeared in Vol. 23 (2002), No. 1. The HiMCM offers high school students a modeling opportunity similar to the MCM. Further details about the HiMCM are at http://www.comap.com/highschool/contests.

# Problem A: Wind and Waterspray

An ornamental fountain in a large open plaza surrounded by buildings squirts water high into the air. On gusty days, the wind blows spray from the fountain onto passersby. The water flow from the fountain is controlled by a mechanism linked to an anemometer (which measures wind speed and direction) located on top of an adjacent building.
The objective of this control is to provide passersby with an acceptable balance between an attractive spectacle and a soaking: The harder the wind blows, the lower the water volume and the height to which the water is squirted, hence the less spray falls outside the pool area.

Your task is to devise an algorithm that uses data provided by the anemometer to adjust the water-flow from the fountain as the wind conditions change.

# Problem B: Airline Overbooking

You're all packed and ready to go on a trip to visit your best friend in New York City. After you check in at the ticket counter, the airline clerk announces that your flight has been overbooked. Passengers need to check in immediately to determine if they still have a seat.

Historically, airlines know that only a certain percentage of passengers who have made reservations on a particular flight will actually take that flight. Consequently, most airlines overbook—that is, they take more reservations than the capacity of the aircraft. Occasionally, more passengers will want to take a flight than the capacity of the plane, leading to one or more passengers being bumped and thus unable to take the flight for which they had reservations.

Airlines deal with bumped passengers in various ways. Some are given nothing, some are booked on later flights on other airlines, and some are given some kind of cash or airline ticket incentive.

Consider the overbooking issue in light of the current situation:

- fewer flights by airlines from point A to point B,
- heightened security at and around airports,
- passengers' fear, and
- loss of billions of dollars in revenue by airlines to date.
+ +Build a mathematical model that examines the effects that different overbooking schemes have on the revenue received by an airline company, in order to find an optimal overbooking strategy—that is, the number of people by which an airline should overbook a particular flight so that the company's revenue is maximized. Ensure that your model reflects the issues above and consider alternatives for handling "bumped" passengers. Additionally, write a short memorandum to the airline's CEO summarizing your findings and analysis. + +# The Results + +The solution papers were coded at COMAP headquarters so that names and affiliations of the authors would be unknown to the judges. Each paper was then read preliminarily by two "triage" judges at Southern Connecticut State University (Problem A) or at the U.S. Military Academy (Problem B). At the triage stage, the summary and overall organization are the basis for judging a paper. If the judges' scores diverged for a paper, the judges conferred; if they still did not agree on a score, a third judge evaluated the paper. + +Final judging took place at Harvey Mudd College, Claremont, California. The judges classified the papers as follows: + +
| | Outstanding | Meritorious | Honorable Mention | Successful Participation | Total |
|---|---|---|---|---|---|
| Wind and Waterspray | 4 | 48 | 60 | 167 | 279 |
| Airline Overbooking | 6 | 38 | 61 | 138 | 246 |
| Total | 10 | 86 | 121 | 305 | 525 |
The ten papers that the judges designated as Outstanding appear in this special issue of The UMAP Journal, together with commentaries. We list those teams and the Meritorious teams (and advisors) below; the list of all participating schools, advisors, and results is in the Appendix.

# Outstanding Teams

# Wind and Waterspray Papers

"Simulating a Fountain"
Maggie L. Walker Governor's School, Richmond, VA
Advisor: John A. Barnes
Team: Lyric P. Doshi, Joseph E. Gonzalez, Philip B. Kidd

"The Fountain That Math Built"
North Carolina School of Science and Mathematics, Durham, NC
Advisor: Daniel J. Teague
Team: Alex McCauley, Josh Michener, Jadrian Miles

"Wind and Waterspray"
U.S. Military Academy, West Point, NY
Advisor: David Sanders
Team: Tate Jarrow, Colin Landon, Mike Powell

"A Foul-Weather Fountain"
University of Washington, Seattle, WA
Advisor: James Allen Morrow
Team: Ryan K. Card, Ernie E. Esser, Jeffrey H. Giansiracusa

# Airline Overbooking Papers

"Things That Go Bump in the Flight"
Bethel College, St. Paul, MN
Advisor: William M. Kinney
Team: Krista M. Dowdey, Nathan M. Gossett, Mark P. Leverentz

"Optimal Overbooking"
Duke University, Durham, NC
Advisor: David P. Kraines
Team: David Arthur, Sam Malone, Oaz Nir

"Models for Evaluating Airline Overbooking"
Harvey Mudd College, Claremont, CA
Advisor: Michael E. Moody
Team: Michael B. Schubmehl, Wesley M. Turner, Daniel M. Boylan

"Probabilistically Optimized Airline Overbooking Strategies, or 'Anyone Willing to Take a Later Flight?'"
University of Colorado at Boulder, Boulder, CO
Advisor: Anne M. Dougherty
Team: Kevin Z. Leder, Saverio E. Spagnolie, Stefan M. Wild

"ACE Is High"
Wake Forest University (Team 69), Winston-Salem, NC
Advisor: Edward E. Allen
Team: Anthony C. Pecorella, Elizabeth A. Perez, Crystal T. Taylor

"Bumping for Dollars: The Airline Overbooking Problem"
Wake Forest University (Team 273), Winston-Salem, NC
Advisor: Frederick H. Chen
Team: John D. Bowman, Corey R. Houmard, Adam S. Dickey

# Meritorious Teams

# Wind and Waterspray Papers (48 teams)

Asbury College, Wilmore, KY, USA (Kenneth P. Rietz)
Beijing Institute of Technology, Beijing, P.R. China (Yao Cui Zhen)
Beijing University of Chemical Technology, Beijing, P.R. China (Yuan WenYan)
Beijing University of Posts and Telecommunication, Beijing, P.R. China (He Zuguo) (two teams)
Beijing University of Posts and Telecommunication, Beijing, P.R. China (Sun Hongxiang)
Bethel College, St. Paul, MN (William M. Kinney)
Boston University, Boston, MA (Glen R. Hall)
California Polytechnic State University, San Luis Obispo, CA (Thomas O'Neil)
Central South University, Changsha, Hunan, P.R. China (Xuanyun Qin)
The College of Wooster, Wooster, OH (Charles R. Hampton)
East China University of Science and Technology, Shanghai, P.R. China (Lu Yuanhong)
Goshen College, Goshen, IN (David Housman)
Hangzhou University of Commerce, Hangzhou, Zhejiang, P.R. China (Zhao Heng)
Hangzhou University of Commerce, Hangzhou, Zhejiang, P.R. China (Zhu Ling)
Humboldt State University, Arcata, CA (Roland H. Lamberson)
Jacksonville University, Jacksonville, FL (Robert A. Hollister)
James Madison University, Harrisonburg, VA (Caroline Smith)
Lafayette College, Easton, PA (Thomas Hill)
Lawrence Technological University, Southfield, MI (Scott D. Schneider)
Lawrence Technological University, Southfield, MI (Howard E. Whitston)
Luther College, Decorah, IA (Reginald D. Laursen) (two teams)
Magdalen College, Oxford, Oxfordshire, United Kingdom (Byron W. Byrne)
Massachusetts Institute of Technology, Cambridge, MA (Daniel H. Rothman)
Nankai University, Tianjin, P.R. China (Huang Wuqun)
North China Electric Power University, Baoding, Hebei, P.R. China (Gu Gendai)
Northern Jiaotong University, Beijing, P.R. China (Wang Bingtuan)
Southern Oregon University, Ashland, OR (Kemble R. Yates)
State University of West Georgia, Carrollton, GA (Scott Gordon)
Trinity University, San Antonio, TX (Jeffrey K. Lawson)
Trinity University, San Antonio, TX (Hector C. Mireles)
University College Cork, Cork, Ireland (Donal J. Hurley)
University of Colorado at Boulder, Boulder, CO (Anne M. Dougherty)
University of Colorado at Boulder, Boulder, CO (Michael Ritzwoller) (two teams)
University of Elec. and Sci. Technology, Chengdu, Sichuan, P.R. China (Qin Siyi)
University of New South Wales, Sydney, NSW, Australia (James W. Franklin)
University of North Carolina, Chapel Hill, NC (Jon W. Tolle)
University of Washington, Seattle, WA (James Allen Morrow)
Wright State University, Dayton, OH (Thomas P. Svobodny)
Xavier University, Cincinnati, OH (Michael Goldweber)
Youngstown State University, Youngstown, OH (Angela Spalsbury)
Zhejiang University, Hangzhou, Zhejiang, P.R. China (Yang Qifan)

# Airline Overbooking Papers (38 teams)

Albertson College of Idaho, Caldwell, ID (Mike P. Hitchman)
Asbury College, Wilmore, KY (Kenneth P. Rietz)
Beijing Institute of Technology, Beijing, P.R. China (Zhang Bao Xue)
China University of Mining and Technology, Xuzhou, Jiangsu, P.R. China (Zhu Kaiyong)
Chongqing University, Chongqing, Shapingba, P.R. China (Yang Xiaofan)
Colgate University, Hamilton, NY (Warren Weckesser)
College of Sciences of Northeastern University, Shenyang, Liaoning, P.R. China (Han Tie-min)
Fudan University, Shanghai, P.R. China (Cai Zhijie)
Gettysburg College, Gettysburg, PA (James P. Fink)
Harbin Institute of Technology, Harbin, Heilongjiang, P.R. China (Wang Xuefeng)
Harvey Mudd College, Claremont, CA (Michael E. Moody)
Harvey Mudd College, Claremont, CA (Ran Libeskind-Hadas) (two teams)
Institut Teknologi Bandung, Bandung, Jabar, Indonesia (Edy Soewono)
Juniata College, Huntingdon, PA (John F. Bukowski)
Lipscomb University, Nashville, TN (Gary Clark Hall)
Maggie L. Walker Governor's School, Richmond, VA (John A. Barnes)
Maggie L. Walker Governor's School, Richmond, VA (Crista Hamilton)
Massachusetts Institute of Technology, Cambridge, MA (Martin Zdenek Bazant)
Nankai University, Tianjin, P.R. China (Ruan Jishou)
North Carolina State University, Raleigh, NC (Dorothy Doyle)
Northern Jiaotong University, Beijing, P.R. China (Wang Xiaoxia)
NUI Galway, Galway, Ireland (Niall Madden)
Pacific Lutheran University, Tacoma, WA (Zhu Mei)
School of Mathematics and Computer Science, Nanjing Normal University, Nanjing, Jiangsu, P.R. China (Zhu Qunsheng)
Shanghai Jiading No. 1 High School, Shanghai, P.R. China (Chen Li)
Shanghai Jiaotong University, Shanghai, P.R. China (Song Baorui)
South China University of Technology, Guangzhou, Guangdong, P.R. China (Lin Jian Liang)
South China University of Technology, Guangzhou, Guangdong, P.R. China (Zhuo Fu Hong)
Stetson University, DeLand, FL (Lisa O. Coulter)
Tianjin University, Tianjin, P.R. China (Rong Xin)
Tsinghua University, Beijing, P.R. China (Hu Zhiming)
U.S. Military Academy, West Point, NY (Elizabeth Schott)
University of South Carolina, Columbia, SC (Ralph E. Howard)
University of Washington, Seattle, WA (Timothy P. Chartier)
Xidian University, Xi'an, Shaanxi, P.R. China (Zhang Zhuo-kui)
Youngstown State University, Youngstown, OH (Angela Spalsbury)
Youngstown State University, Youngstown, OH (Stephen Hanzely)

# Awards and Contributions

Each participating MCM advisor and team member received a certificate signed by the Contest Director and the appropriate Head Judge.
INFORMS, the Institute for Operations Research and the Management Sciences, gave a cash prize and a three-year membership to each member of the teams from North Carolina School of Science and Mathematics (Wind and Waterspray Problem) and Wake Forest University (Team 69) (Airline Overbooking Problem). Also, INFORMS gave free one-year memberships to all members of Meritorious and Honorable Mention teams.

The Society for Industrial and Applied Mathematics (SIAM) designated one Outstanding team from each problem as a SIAM Winner. The teams were from the University of Washington (Wind and Waterspray Problem) and Duke University (Airline Overbooking Problem). Each team member was awarded a $300 cash prize, and the teams received partial expenses to present their results at a special Minisymposium of the SIAM Annual Meeting in Philadelphia, PA in July. Their schools were given a framed, hand-lettered certificate in gold leaf.

The Mathematical Association of America (MAA) designated one Outstanding team from each problem as an MAA Winner. The teams were from the U.S. Military Academy (Wind and Waterspray Problem) and Harvey Mudd College (Airline Overbooking Problem). With partial travel support from the MAA, both teams presented their solutions at a special session of the MAA Mathfest in Burlington, VT in August. Each team member was presented a certificate by MAA President Ann E. Watkins.

# Judging

Director
Frank R. Giordano, COMAP, Lexington, MA

Associate Directors
Robert L. Borrelli, Mathematics Dept., Harvey Mudd College, Claremont, CA
Patrick Driscoll, Dept. of Mathematical Sciences, U.S. Military Academy, West Point, NY

Contest Coordinator
Kevin Darcy, COMAP Inc., Lexington, MA

# Wind and Waterspray Problem

Head Judge
Marvin S. Keener, Executive Vice-President, Oklahoma State University, Stillwater, OK

Associate Judges
William C. Bauldry, Appalachian State University, Boone, NC
Kelly Black, Mathematics Dept., University of New Hampshire, Durham, NH (SIAM)
Courtney Coleman, Mathematics Dept., Harvey Mudd College, Claremont, CA
Gordon Erlebacher, School of Computational Science and Information Technology, Florida State University, Tallahassee, FL
J. Douglas Faires, Youngstown State University, Youngstown, OH (MAA)
Ben Fusaro, Mathematics Dept., Florida State University, Tallahassee, FL
Mario Juncosa, RAND Corporation, Santa Monica, CA
John Kobza, Texas Tech University, Lubbock, TX (INFORMS)
Deborah Levinson, Compaq Computer Corp., Colorado Springs, CO
Veena Mendiratta, Lucent Technologies, Naperville, IL
Mark R. Parker, Mathematics Dept., Carroll College, Helena, MT
John L. Scharf, Carroll College, Helena, MT
Daniel Zwillinger, Newton, MA

# Airline Overbooking Problem

Head Judge
Maynard Thompson, Mathematics Dept., Indiana University, Bloomington, IN

Associate Judges
James Case, Baltimore, MD (SIAM)
Lisette De Pillis, Harvey Mudd College, Claremont, CA
William P. Fox, Francis Marion University, Florence, SC (MAA)
Jerry Griggs, University of South Carolina, Columbia, SC
Don Miller, Dept. of Mathematics, St. Mary's College, Notre Dame, IN (SIAM)
Lee Seitelman, Glastonbury, CT (SIAM)
Dan Solow, Mathematics Dept., Case Western Reserve University, Cleveland, OH (INFORMS)
Robert Tardiff, Salisbury State University, Salisbury, MD
Michael Tortorella, Lucent Technologies, Holmdel, NJ
Marie Vanisko, Carroll College, Helena, MT (MAA)
Larry Wargo, National Security Agency, Ft. Meade, MD (Triage)

# Triage Sessions

# Wind and Waterspray Problem

Head Triage Judge
Patrick Driscoll, Dept. of Mathematical Sciences, U.S. Military Academy, West Point, NY

Associate Judges
Steve Horton, Michael Jaye, and Doug Matty, all of the U.S. Military Academy, West Point, NY

# Airline Overbooking Problem

Head Triage Judge
Larry Wargo, National Security Agency, Ft. Meade, MD

Associate Judges
James Case, Baltimore, MD
Paul Boisen and 7 others from the National Security Agency, Ft. Meade, MD

# Sources of the Problems

The Wind and Waterspray Problem was contributed by Tjalling Ypma, Mathematics Dept., Western Washington University, Bellingham, WA. The Airline Overbooking Problem was contributed by William P. Fox and Richard D. West, Mathematics Dept., Francis Marion University, Florence, SC.

# Acknowledgments

Major funding for the MCM is provided by the National Security Agency and by COMAP. We thank Dr. Gene Berg of NSA for his coordinating efforts. Additional support is provided by the Institute for Operations Research and the Management Sciences (INFORMS), the Society for Industrial and Applied Mathematics (SIAM), and the Mathematical Association of America (MAA). We are indebted to these organizations for providing judges and prizes.

We thank the MCM judges and MCM Board members for their valuable and unflagging efforts. Harvey Mudd College, its Mathematics Dept. staff, and Prof. Borrelli were gracious hosts to the judges.

# Cautions

To the reader of research journals:

Usually a published paper has been presented to an audience, shown to colleagues, rewritten, checked by referees, revised, and edited by a journal editor. Each of the student papers here is the result of undergraduates working on a problem over a weekend; allowing substantial revision by the authors could give a false impression of accomplishment. So these papers are essentially au naturel. Editing (and sometimes substantial cutting) has taken place: minor errors have been corrected, wording has been altered for clarity or economy, and style has been adjusted to that of The UMAP Journal. Please peruse these student efforts in that context.
To the potential MCM Advisor:

It might be overpowering to encounter such output from a weekend of work by a small team of undergraduates, but these solution papers are highly atypical. A team that prepares and participates will have an enriching learning experience, independent of what any other team does.

COMAP's Mathematical Contest in Modeling and Interdisciplinary Contest in Modeling are the only international modeling contests in which students work in teams. COMAP centers its educational philosophy on mathematical modeling, using mathematical tools to explore real-world problems. It serves the educational community as well as the world of work by preparing students to become better-informed and better-prepared citizens.

# Appendix: Successful Participants

KEY:

P = Successful Participation
H = Honorable Mention
M = Meritorious
O = Outstanding (published in this special issue)

A = Wind and Waterspray Problem
B = Airline Overbooking Problem
INSTITUTION | CITY | ADVISOR | A | B
ALABAMA
Huntingdon CollegeMontgomeryVyacheslav V. RykovP,P
ARKANSAS
Hendrix CollegeConwayDuff Gordon CampbellHH
ARIZONA
McClintock High SchoolTempeIvan BarkdollP
CALIFORNIA
Calif. Institute of TechnologyPasadenaDarryl H. YongH
Calif. Polytechnic State Univ.San Luis ObispoThomas O’NeilM,P
Jennifer M. S. WitkesP
California State University at Monterey BaySeasideHongde HuP
California State UniversityBakersfieldMaureen E. RushP
Claremont McKenna CollegeClaremontMario U. MartelliP
Harvey Mudd CollegeClaremontMichael E. MoodyO,M
Ran Libeskind-HadasM,M
Humboldt State UniversityArcataRoland H. LambersonM
Pomona CollegeClaremontAmi E. RadunskayaH
Sonoma State UniversityRohnert ParkElaine T. McDonaldHP
University of San DiegoSan DiegoJeffrey H. WrightP
University of Southern Calif.Los AngelesRobert J. SackerH
Geoffrey R. SpeddingH
COLORADO
Colorado CollegeColorado SpringsPeter L. StaabP
Colorado State UniversityFort CollinsMichael J. KirbyP
Mesa State CollegeGrand JunctionEdward K. Bonan-HamadaP
Regis UniversityDenverLinda L. DuchrowP
U.S. Air Force AcademyUSAF AcademyGerald E. SohanP
James S. RolfH
J. GerkenH
James E. WestP
University of ColoradoBoulderAnne M. DoughertyMO
Michael RitzwollerM,M
Univ. of Southern ColoradoPuebloBruce N. LundbergP
CONNECTICUT
Sacred Heart UniversityFairfieldPeter LothP,P
Southern Connecticut State Univ.New HavenRoss B. GingrichP
Therese L. BennettP
DELAWARE
University of DelawareNewarkLouis F. RossiP,P
FLORIDA
Embry-Riddle Aeronautical Univ.Daytona BeachGreg Scott SpradlinP
Florida Gulf Coast UniversityFort MyersCharles LindseyP
Florida State UniversityTallahasseeMark M. SussmanP
Jacksonville UniversityJacksonvilleRobert A. HollisterM,P
Stetson UniversityDeLandLisa O. CoulterM
University of Central FloridaOrlandoHeath M. MartinP,P
Florida Institute of TechnologyMelbourneMichael O. GonsalvesP,P
GEORGIA
Georgia Southern UniversityStatesboroJacalyn M. HubardP
Laurene V. FausettH
State University of West GeorgiaCarrolltonScott GordonM,P
IDAHO
Albertson College of IdahoCaldwellMike P. HitchmanM
Boise State UniversityBoiseJodi L. MeadP
Idaho State UniversityPocatelloRob Van KirkP
ILLINOIS
Greenville CollegeGreenvilleGalen R. PetersPH
McKendree CollegeLebanonRaymond E. RobbP
Monmouth CollegeMonmouthChristopher G. FasanoP
Wheaton CollegeWheatonPaul IsiharaP
INDIANA
Earlham CollegeRichmondCharlie PeckH
Goshen CollegeGoshenDavid HousmanMH
Indiana UniversityBloomingtonMichael S. JollyH
Rose-Hulman Institute of Tech.Terre HauteDavid J. RaderPH
Saint Mary's CollegeNotre DameJoanne R. SnowP,P
IOWA
Grinnell CollegeGrinnellMark MontgomeryPH
Royce WolfH,P
Iowa State UniversityAmesStephen J. WillsonH
Luther CollegeDecorahReginald D. LaursenM,M
Mt. Mercy CollegeCedar RapidsK.R. KnoppPP
Simpson CollegeIndianolaMurphy WaggonerPP
Werner S. KollnPH
Wartburg CollegeWaverlyMariah BirgenP
KANSAS
Kansas State UniversityManhattanKorten N. AucklyP
KENTUCKY
Asbury CollegeWilmoreKenneth P. RietzMM
Bellarmine UniversityLouisvilleWilliam J. HardinPP
Northern Kentucky UniversityHighland HeightsGail S. MackinP
Shamanthi Marie FernandoP
MAINE
Colby CollegeWatervilleJan HollyP
MARYLAND
Salisbury UniversitySalisburyMichael J. BardzellP
Goucher CollegeBaltimoreRobert LewandP
Hood CollegeFrederickBetty MayfieldP
Mt. Saint Mary's CollegeEmmitsburgJohn J. DroppP
John E. AugustPP
Towson UniversityTowsonMike P. O'LearyP
Washington CollegeChestertownEugene P. HamiltonP
MASSACHUSETTS
Boston UniversityBostonGlen R. HallM
Greenfield Community CollegeGreenfieldPeter R. LetsonP
Massachusetts Institute of Tech.CambridgeDaniel H. RothmanM,P
Martin Zdenek BazantM
Olin College of EngineeringNeedhamBurt S. TilleyP
Simon's Rock CollegeGreat BarringtonAllen B. AltmanPH
Michael BergmanHH
University of MassachusettsAmherstEdward A. ConnorsP
University of MassachusettsLowellJames Graham-EagleH
Western New England CollegeSpringfieldLorna B. HanesP
WorcesterArthur C. HeinricherP
MICHIGAN
Calvin CollegeGrand RapidsThomas L. JagerP
Eastern Michigan UniversityYpsilantiChristopher E. HeeH
Hillsdale CollegeHillsdaleJohn P. BoardmanP
Hope CollegeHollandAaron C. CinzoriP
Lake Superior State UniversitySault Sainte MarieJohn H. JaromaH
Lawrence Technological Univ.SouthfieldHoward E. WhitstonM
Ruth G. FavroH
Scott D. SchneiderM
Valentina TobosP
Siena Heights UniversityAdrianToni CarrollP,P
MINNESOTA
Augsburg CollegeMinneapolisRebekah N. DupontH
Bemidji State UniversityBemidjiColleen G. LivingstonH
Bethel CollegeSt. PaulWilliam M. KinneyMO
College of Saint Benedict, St. John's UniversityCollegevilleRobert J. HesseP,P
Gustavus Adolphus CollegeSt. PeterThomas P. LoFaroPH
Macalester CollegeSt. PaulDaniel T. KaplanPP
Elizabeth ShoopP
Winona State UniversityWinonaBarry A. PerattP
MISSOURI
Missouri Southern State CollegeJoplinPatrick CassensP
Northwest Missouri State UniversityMaryvilleRussell N. EulerP
Southeast Missouri State UniversityCape GirardeauRobert W. SheetsP
Truman State UniversityKirksvilleSteve J. SmithP
Washington UniversitySt. LouisHiro MukaiP
MONTANA
Carroll CollegeHelenaHolly S. ZulloHH
NEBRASKA
University of Nebraska-LincolnLincolnGlenn W. LedderH
NEW JERSEY
Rowan UniversityGlassboroHieu D. NguyenP
Paul J. LaumakisP
William Paterson UniversityWayneDonna J. Cedio-FengyaH
NEW MEXICO
New Mexico State UniversityLas CrucesCaroline SweezyP
Western New Mexico UniversitySilver CityThomas P. GruszkaH
NEW YORK
Colgate UniversityHamiltonWarren WeckesserM
Ithaca CollegeIthacaJim ConklinP
Keuka CollegeKeuka ParkCatherine A. AbbottP
Manhattan CollegeRiverdaleKathryn WeldH
Rensselaer Polytechnic InstituteTroyPeter R. KramerP
Roberts Wesleyan CollegeRochesterGary L. RadunsH
St. Bonaventure UniversitySt. BonaventureAlbert G. WhiteP
U.S. Military AcademyWest PointDavid SandersO
Elizabeth SchottM
Gregory S. ParnellH
Ray EasonH
Wells CollegeAuroraThomas A. StiadleP
Westchester Community CollegeValhallaSheela L. WhelanP
NORTH CAROLINA
Appalachian State UniversityBooneAlan T. ArnholtPP
Eric S. MarlandP,P
Brevard CollegeBrevardClarke WellbornPP
Duke UniversityDurhamDavid KrainesPO
Meredith CollegeRaleighCammey E. ColeP
Mount Olive CollegeMount OliveOllie J. RosePP
North Carolina School of Science and MathematicsDurhamDaniel J. TeagueO,P
John KolenaH
North Carolina State UniversityRaleighThomas L. HoneycuttH,P
Dorothy DoyleM,P
University of North CarolinaChapel HillJon W. TolleM
Wake Forest UniversityWinston-SalemFrederick H. ChenO
Miaohua JiangP
Edward E. AllenO
OHIO
Hiram CollegeHiramBrad Scott GubserP
Malone CollegeCantonDavid W. HahnPP
Miami UniversityOxfordDoug E. WardP
College of WoosterWoosterCharles R. HamptonMP
Wright State UniversityDaytonThomas P. SvobodnyM
Youngstown State UniversityYoungstownAngela SpalsburyMM
Bob KramerP
Stephen HanzelyM
University of DaytonDaytonMuhammad N. IslamP
Xavier UniversityCincinnatiMichael GoldweberM
OKLAHOMA
Southeastern Oklahoma State UniversityDurantChristopher MorettiP
OREGON
Eastern Oregon UniversityLa GrandeAnthony A. TovarP,P
Robert L. BrandonP
Lewis and Clark CollegePortlandRobert W. OwensPH
Portland State UniversityPortlandGerardo A. LafferriereH
Southern Oregon UniversityAshlandKemble R. YatesMH
University of PortlandPortlandMichael W. AkermanP
PENNSYLVANIA
Bloomsburg UniversityBloomsburgKevin K. FerlandP,P
Bucknell UniversityLewisburgKarl KnaubP
Sally KoutsoliotasH,P
Clarion UniversityClarionKaren D. BolingerP
Richard M. SmabyP
John W. HeardP
Gettysburg CollegeGettysburgJames P. FinkM
Indiana University of PennaIndianaFrederick A. AdkinsP
Juniata CollegeHuntingdonJohn F. BukowskiM
Lafayette CollegeEastonThomas HillMP
Messiah CollegeGranthamLamarr C. WidmerP
Westminster CollegeNew WilmingtonBarbara T. FairesP,P
SOUTH CAROLINA
Charleston Southern UniversityCharlestonStan PerrineP
University of South CarolinaColumbiaRalph E. HowardM
SOUTH DAKOTA
SD School of Mines and Tech.Rapid CityKyle RileyPP
TENNESSEE
Austin Peay State UniversityClarksvilleNell K. RayburnPP
Lipscomb UniversityNashvilleGary Clark HallM
University of the SouthSewaneeCatherine E. CavagnaroPP
TEXAS
Abilene Christian UniversityAbileneDavid HendricksP
Angelo State UniversitySan AngeloRobert L. HamiltonP
Baylor UniversityWacoFrank H. MathisP
Southwestern UniversityGeorgetownTherese N. SheltonH,P
Texas Christian UniversityFort WorthGeorge T. GilbertP
Trinity UniversitySan AntonioHector C. MirelesM
Jeffrey K. LawsonM
Kenneth L. NelsonH
VERMONT
Johnson State CollegeJohnsonChristopher A. AubuchonP
VIRGINIA
Chesterfield County Mathematics and Science High SchoolMidlothianDiane LeightyP
Godwin High School Science and Mathematics CenterRichmondAnn W. SebrellP
James Madison UnivHarrisonburgJoseph W. RudminP
Caroline SmithM
Dorn W. PetersonP
Paul G. WarneP
Maggie L. Walker Governor's SchoolRichmondCrista HamiltonM
John A. BarnesOM
Roanoke CollegeSalemJeffrey L. SpielmanP
University of RichmondRichmondKathy W. HokeP
University of Virginia's College at WiseWiseGeorge W. MossP,P
Virginia TechBlacksburgCatherine A. StephensP
Laura J. SpielmanH
Virginia Western Comm. CollegeRoanokeSteven T. HammerP
Ruth A. ShermanP
WASHINGTON
Central Washington UniversityEllensburgStuart F. BoersmaP,P
Pacific Lutheran UniversityTacomaMei ZhuHM
University of Puget SoundTacomaJohn RiegseckerP
Michael Scott CaseyH,P
University of WashingtonSeattleAnne GreenbaumH
James Allen MorrowO,M
Timothy P. ChartierM
Western Washington UniversityBellinghamTjalling J. YpmaPP
WISCONSIN
Beloit CollegeBeloitPaul J. CampbellP
Ripon CollegeRiponDavid W. ScottP,P
University of WisconsinRiver FallsKathy A. TomlinsonP
AUSTRALIA
University of New South WalesSydneyJames W. FranklinM,H
CANADA
Brandon UniversityBrandon, ManitobaDoug A. PickeringH
Memorial Univ. of NewfoundlandSt. John's NFAndy FosterH
Dalhousie UniversityHalifax NSDorothea A. PronkH
University of Western OntarioLondon ONPeter H. PooleH
University of SaskatchewanSaskatoon SKJames A. BrookeP
CHINA
Anhui UniversityHefeiCai QianP
Yang ShangjunH
Baoding Teachers' CollegeBaodingJing ShuangyanP,P
Wang XinzheP,P
Yuan ShaoqiangPP
Beijing Institute of TechnologyBeijingChen Yi HongP
Cui Xiao DiH
Zhen Yao CuiM
Zhang Bao XueM
Beijing Polytechnic UniversityBeijingXue YiP
Beijing Union UniversityBeijingZeng QingliPP
Beijing Univ. of Aero. and Astro.BeijingPeng LinpingH
Liu HongyingP
Wu SanxingP
Beijing Univ. of Chemical TechnologyBeijingJiang GuangfengH
Weiguo LinP
Jiang DongqingP
Liu DaminP
Yuan WenYanM
Beijing Univ. of Posts & Telecom.BeijingHe ZuguoM,M
Luo ShoushanH,H
Sun HongxiangM
Central South UniversityChang ShaChen XiaosongH
Qin XuanyunMP
ChangChun UniversityChangchunYuan ShuaiP
China University of Mining and TechnologyXuzhouWu ZongxiangH
Zhang XingyongH
Zhou ShengwuP
Zhu KaiyongM
Chongqing UniversityChongqingHe ZhongshiH
Li ChuandongP
Wen LuoshengP
Yang XiaofanM
Dalian Nationalities UniversityDalianWang JinzhiP,P
Dalian UniversityDalianTan XinxinH
Dalian University of TechnologyDalianHe MingfengP,P
Zhao LizhongPP
Wang YiP,P
Dong Hua UniversityShanghaiHu LiangjianP
Du YugenH
East China University of Science and Techn.ShanghaiSu ChunjieH,P
Lu YuanhongM,H
Educational AdministrationShenyangZhang ShujunP,P
Fudan UniversityShanghaiHu JinjinP
Cao YuanP
Cai ZhijieM
Guangdong Commercial CollegeGuangzhouXiang ZiguiP
Hangzhou University of CommerceHangzhouDing ZhengzhongH
Hua Jiu KunH
Zhao HengM
Zhu LingM
Harbin Engineering UniversityHarbinGao ZhenbinP
Luo YueshengP
Shen JihongP
Zhang XiaoweiP
Harbin Institute of TechnologyHarbinShao JiqunP
Shang ShoutingPH
Wang XuefengPM
Harbin University of Science and TechnologyHarbinChen DongyanP
Li DongmeiP
Tian GuangyueP
Ni XiaoyingP
Hefei University of TechnologyHefeiSu HuamingH
Du XueqiaoH
Zhou YongwuP
Huang YouduP
Huazhong University of Science & TechnologyWuhanWang YizhiH
Hunan UniversityChangshaYi KunnanP
Han LuoPH
Jiamusi UniversityJiamusiWei FanP
Zhi Gu LiP
Jiamusi University College of SciencesJiamusiShan Bai FengP
Liu TongfuP
Jiangxi Normal UniversityNanChangWu Gen XiuP
Jilin Institute of TechnologyChangchunChun Sun ChangP
Yue Xi TingP
Jilin UniversityChangchunLv XianruiP
Fang PeichenP
Song LixinP
Zou YongkuiH
Jinan UniversityGuangzhouHu DaiqiangH
Fan SuohaiP
Ye ShiqiP,P
Nanjing Normal UniversityNanjingFu ShitaiH,P,P
Nanjing Normal University
School of Mathematics and Computer ScienceNanjingZhu QunshengM,H
Nanjing University of Science and TechnologyNanjingZhang ZhengjunP
Xu YuanH
Xu ChungenP
Huang Zhen YouP
Nankai UniversityTianjinHuang WuqunM
Ruan JishouM,H
Nankai University School of MathematicsTianjinRuan JishouH
Yang QingzhiP
National Univ. of Defence Tech.ChangshaWu MengdaP
Duan XiaoLongP,P
Wu MengdaH
North China Electric Power UniversityBaodingGu GendaiM
Xie HongH
North China Inst. of TechnologyTaiyuanXue Ya-kuiH
Lei Ying-jieH
Yong BiP
Northeastern University
College of SciencesShenyangSun PingPP
Han Tie-minM,P
Information Science and EngineeringShenyangHao PeifengPH
Inst. of Artificial Intelligence & RoboticsShenyangCui JianjiangPH
Northern Jiaotong UniversityBeijingWang BingtuanM
Wu FaenP
Wang BingtuanP
Wang XiaoxiaM
Northwest UniversityXi'anHe Rui-chanB,B
Northwestern Polytechnical UniversityXi'anZhang Li NingP
Hua Peng GuoH
Sun HaoP
Xu WeiH
Peking UniversityBeijingLian GuijunPP
Wang MingPP
Zhang MingquanP
Shu YoushengP
Liu YulongPP
Qufu Normal UniversityQufuWang FengPP
Shandong UniversityJinanLuan JunfengP
Shanghai Foreign Language SchoolShanghaiWan BaiheP
Pan LiQunH,P
Shanghai Jiading No. 1 High SchoolShanghaiXie XiLinH
Chen LiM
Shanghai Jiaotong UniversityShanghaiSong BaoruiM,H
Huang JianguoH,P
Shanghai Normal UniversityShanghaiZhu DetongP
Zhang JizhouP
Guo ShenghuanP
Shanghai UniversityShanghaiWang YuandiH
Shanghai Univ. of Finance and EconomicsShanghaiYang Xiao BinPP
Shanghai XiangMing Senior High SchoolShanghaiWang DarenP
Shanxi UniversityTaiyuanYang AiminP
Li JihongP
Zhao AiminPP
South China Normal UniversityGuangzhouWang LiminH,P
South China Univ. of Tech.GuangzhouZhuo Fu HongM
Lin Jian LiangM
Tao Zhi SuiH
Feng Zhu FengH
South-west Jiao Tong UniversityChengduZhao Lian WenH,H
Han YangP,P
Tianjin UniversityTianjinRong XiminM,H
Liu ZeyiHH
Tsinghua UniversityShanghaiHu ZhimingM,H
Ye JunP,P
Univ. of Elec. Sci. Tech.ChengduDu HongfeiPP
Qin SiyiMP
Univ. of Sci. & Tech. of ChinaHefeiDou DouP
Sun GuangzhongH
Li YuP
Yang ZhouwangH
Wuhan University of Tech.WuhanPeng SijunHP
Huang ZhangcanH,P
Xi'an Jiaotong UniversityXi'anZhou YicangP
Wu Xiang ZhongP
Dai YonghongH
Xi'an University of Tech.Xi'anCao MaoshengP
Xidian UniversityXi'anChen Hui-chanH
Liu Hong-weiH
Ye Ji-minH
Zhang Zhuo-kuiM
Zhejiang UniversityHangzhouYang QifanMH
Yong HePP
Zhongshan UniversityGuangzhouChen ZepengP
Tang MengxiP
Yuan ZhuojianH,H
FINLAND
Päivölä CollegeTarttilaMerikki LappiH
HONG KONG
Hong Kong Baptist Univ.KowloonTong Chong-szeH
Shiu Wai-cheeP
INDONESIA
Institut Teknologi BandungBandungEdy SoewonoM
Kuntjoro Adji SidartoH
IRELAND
National Univ. of IrelandGalwayNiall MaddenM,P
Trinity College DublinDublinTimothy G. MurphyP
University College CorkCorkDonal J. HurleyM
James J. GrannellH
University College CorkCorkSupratik RoyP
University College DublinBelfieldTed CoxH,P
Maria G. MeehanP,P
University College DublinDublinPeter DuffyM
Maria G. MeehanP,P
LITHUANIA
Vilnius UniversityVilniusRicardas KudzmaP
SINGAPORE
National Univ. of SingaporeSingaporeVictor TanB
SOUTH AFRICA
University of StellenboschMatielandJan H. Van VuurenHP
UNITED KINGDOM
Magdalen CollegeOxford, EnglandByron W. ByrneM
# Editor's Note

For team advisors from China and Singapore, we have endeavored to list family name first, with the help of Susanna Chang, Beloit College '03.

# Simulating a Fountain

Lyric P. Doshi
Joseph Edgar Gonzalez
Philip B. Kidd

Maggie L. Walker Governor's School for Government and International Studies
Richmond, VA

Advisor: John A. Barnes

# Introduction

We establish the mathematical behavior of water droplets emitted from a fountain and apply this behavior in a computer model to predict the amount of splash and spray produced by a fountain under given conditions. Our goal is a control system that creates the tallest fountain possible while limiting water spillage to a specified level.

We combine height and volume of the fountain spray, making both functions of the speed at which water exits the fountain nozzle. We simulate water droplets launched from the fountain, using basic physics to model the effects of drag, wind, and gravity. The simulation tracks the flight of droplets in the air and records their landing positions, for wind speeds from 0 to $15\mathrm{m/s}$ and water speeds from 5 to $30\mathrm{m/s}$. It calculates the amount of water spilled outside of a pool around the fountain, for pool radii from 0 to $40\mathrm{m}$.

We design an algorithm for a programmable logic controller, located inside an anemometer, to do a table search to find allowable water speeds for given pool radius, acceptable water spillage, and wind velocity. We test the control system with simulation, subjecting a fountain with a 4-m pool radius to wind speeds from 0 to $3\mathrm{m/s}$ with an allowable spillage of $5\%$. We also test the model for accuracy and for sensitivity to changes in the base variables.

# Problem Analysis

# Wind

The anemometer measures two main wind factors that affect the fountain: speed, which affects the force exerted on the water, and direction.
# Fountain

The main components of the fountain are the pool and the nozzle. The factors associated with the pool are its radius, which remains constant within a trial, and the acceptable level of spillage, which describes the percentage of water that may acceptably fall outside of the fountain.

# Nozzle

Major aspects of the nozzle are the radius of the opening, the angle relative to the vertical (normal) axis, and the spread and speed of the water passing through it. The angle of the nozzle relative to the vertical axis determines the initial trajectory of the water. The spread, described in standard deviations from the angle of the nozzle, determines the extent to which the initial trajectory of droplets differs from the angle of the nozzle. For a given water speed and nozzle radius, the flow of water through the nozzle may be determined from

$$
f = \pi r^{2} v,
$$

where $f$ is flow, $v$ is the water launch speed, and $r$ is the radius of the nozzle. The radius is constant, so the flow and consequent volume are functions of the speed, the dominant controllable factor affecting the height of the stream.

# Assumptions

# ... about Fountains

- The fountain is composed of a single nozzle located at the center of a circular pool.
- The ledge of the pool is sufficiently high to collect the splatter produced by particles impacting the surface of the water.
- Fountains with higher streams are more attractive than those with lower streams.

# ... about the Nozzle

- The nozzle has a fixed radius, but the speed of the water through it can be controlled.
- The nozzle is perpendicular to the ground.
- The nozzle responds rapidly to input from the anemometer.
- The nozzle produces a normally distributed spread of droplets with a low standard deviation.

# ... about Water Droplets

- Because the droplets are small and roughly spherical, they may be treated as spherical.
- The radii of droplets are normally distributed.
- The density of water is unaffected by conditions and therefore remains constant among and within droplets.
- The only outside forces exerted on a water droplet are gravity and the force exerted by the surrounding air, including drag and wind.
- Acceleration due to gravity is the same for all droplets.
- The effect of air perturbations produced by droplets on other droplets is insignificant.
- All droplets share the same constant drag coefficient.
- Droplet interactions and collisions neither increase the overall energy of the system nor significantly increase the distance traveled by droplets.

# ... about the Anemometer and Control System

- The anemometer and control system can rapidly evaluate the wind speed, apply a basic formula, and adjust the nozzle in changing wind conditions.

# ... about the Wind

- The wind speed is uniform regardless of altitude.
- Wind blows parallel to the ground without turbulence or irregularities.

# Basic Description of Model

Water droplets are emitted from the nozzle and follow trajectories affected by wind and drag. The particles are tracked until they land, including recalculations of trajectories in case of changes in conditions, such as wind. The landing distance from the center of the fountain is recorded. Since the fountain pool is circular, only radial distance is important.

The model ignores wind direction (it does not affect a circular fountain pool) and turbulence (insignificant and too complicated to model accurately).

We tested droplet collisions and found that they do not greatly affect the distance that droplets land from the center of the pool, so we ruled out incorporating complex interactions into the model. Further physical analysis supported that decision: Since energy and momentum are conserved, a droplet could not travel significantly farther after a collision.
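The table search described in the Introduction — look up the current wind speed, pool radius, and acceptable spillage, and return the largest allowable water speed — can be sketched as a nearest-entry lookup. The array names, grid spacing, and the `max_water_speed` helper below are illustrative assumptions, not the authors' code; only the grid ranges come from the paper.

```python
import numpy as np

# Hypothetical miniature of the 3-D reference table:
# spill[i, j, k] = fraction of water landing beyond radius radii[k]
# when the water speed is water_speeds[i] and the wind is wind_speeds[j].
water_speeds = np.arange(5.0, 30.1, 2.5)   # m/s, 11 values (paper's grid)
wind_speeds = np.arange(0.0, 15.1, 1.0)    # m/s, 16 values
radii = np.arange(0.0, 40.1, 0.1)          # m

def max_water_speed(spill, wind, pool_radius, allowed_spill):
    """Return the largest water speed whose predicted spillage outside
    the pool stays within the allowed fraction, or None if none does."""
    j = int(np.argmin(np.abs(wind_speeds - wind)))    # nearest wind entry
    k = int(np.argmin(np.abs(radii - pool_radius)))   # nearest radius entry
    ok = [v for i, v in enumerate(water_speeds) if spill[i, j, k] <= allowed_spill]
    return max(ok) if ok else None
```

Because the table is precomputed offline, this is the only work the programmable logic controller has to do when the anemometer reports a new wind speed.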
Finally, we combined fountain height and volume into speed of the water out of the nozzle, because they are directly determined by the speed.

Our simulation tries all combinations of 11 different water speeds, from 5 to $30\mathrm{m / s}$ (at intervals of $2.5\mathrm{m / s}$ ), with 16 wind speeds, from 0 to $15\mathrm{m / s}$ (at intervals of $1\mathrm{m / s}$ ). Each combination is run for five trials of 10,000 droplets. Spillage is logged for radii from 0 to $40\mathrm{m}$ (at intervals of $0.1\mathrm{m}$ ). The five trials are then averaged to construct an entry in a three-dimensional reference table with axes of radial distance from nozzle, wind speed, and water speed. The control system uses the current wind speed, the fountain's radius, and the acceptable level of spillage to look up the corresponding maximum water speed in the table.

# The Underlying Mathematics

The simulation uses basic physics equations to model the flight of water droplets through the air.

Each droplet is acted on by three forces: gravity, drag, and wind. Drag is calculated from the following equation [Halliday et al. 1993]:

$$
D = \frac {1}{2} C \rho A v ^ {2},
$$

where

$D$ is the drag force;

$C$ is the drag coefficient, an empirically determined constant dependent mainly on the shape of an object;

$\rho$ is the density of the fluid through which the object is traveling, in this case air;

$A$ is the cross-sectional area of the object; and

$v = |\vec{v}|$ is the speed of the object relative to the wind.

The drag coefficient of a raindrop is 0.60 and the density of air is about $1.2\mathrm{kg} / \mathrm{m}^3$ [Halliday et al. 1993]. 
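The drag law can be expressed directly in code. This sketch uses the raindrop drag coefficient and air density quoted above; the function and constant names are ours:

```python
import math

C_DRAG = 0.60    # drag coefficient of a raindrop [Halliday et al. 1993]
RHO_AIR = 1.2    # density of air, kg/m^3 [Halliday et al. 1993]

def drag_force(droplet_radius_m, rel_speed_ms):
    # D = (1/2) * C * rho * A * v^2, with A the droplet's
    # cross-sectional area and v the speed relative to the wind
    area = math.pi * droplet_radius_m ** 2
    return 0.5 * C_DRAG * RHO_AIR * area * rel_speed_ms ** 2
```

Note the quadratic dependence on relative speed: doubling the speed of a droplet relative to the wind quadruples the drag force on it.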
Drag acts directly against velocity, so the acceleration vector from drag can be found from Newton's law $\vec{F} = m\vec{a}$ as

$$
\vec {a} = \frac {- D}{m} \frac {\vec {v}}{| \vec {v} |} = - \frac {\frac {1}{2} C \rho A | \vec {v} | ^ {2}}{m} \frac {\vec {v}}{| \vec {v} |} = - \frac {\frac {1}{2} C \rho A | \vec {v} |}{m} \vec {v},
$$

where $\vec{a}$ is the acceleration vector and $m$ is mass.

We factor in gravity by subtracting the acceleration $g$ of gravity at Earth's surface, $9.8 \, \mathrm{m/s}^2$ , from the vertical component of the acceleration vector:

$$
\vec {a} _ {z} = - \frac {\frac {1}{2} C \rho A | \vec {v} |}{m} \vec {v} _ {z} - g.
$$

Next, we use the acceleration to find velocity, beginning with the expression

$$
\frac {d \vec {v}}{d t} = - \frac {\frac {1}{2} C \rho A | \vec {v} |}{m} \vec {v} = \vec {a}.
$$

To circumvent the difficulties of solving a differential equation for each component of the velocity vector, we use Euler's method to approximate the velocity at a series of discrete points in time:

$$
\frac {d \vec {v}}{d t} = \vec {a}, \qquad \Delta \vec {v} \approx \Delta t \, \vec {a}, \qquad \vec {v} _ {1} \approx \vec {v} _ {0} + \Delta t \, \vec {a} _ {0}.
$$

We use a similar process to find the position of the droplet, resulting in

$$
\vec {x} _ {1} \approx \vec {x} _ {0} + \Delta t \, \vec {v} _ {0}.
$$

With $\Delta t = 0.001\mathrm{s}$ , error from the approximation is virtually zero.

Now that we have equations describing the droplet in flight, we generate its initial position and velocity. First, we randomly select a value $z$ from a standard Gaussian (normal) distribution (mean 0, standard deviation 1). We calculate the angle from a set mean $\mu$ and standard deviation $\sigma$ of the distribution of possible angles as

$$
\phi = z \sigma + \mu .
$$

We randomly select another angle $\theta$ between 0 and $2\pi$ radians to be the angle between the velocity vector and the $x$ -axis. 
Thus, the initial velocity vector of the droplet in spherical coordinates is $(v, \theta, \phi)$ , where $v$ is the magnitude of the velocity. Conversion to rectangular coordinates yields $(v \sin \phi \cos \theta, v \sin \phi \sin \theta, v \cos \phi)$ .

We also randomly select a starting location within the nozzle (whose diameter is $1\mathrm{cm}$ ) and create a radius for the droplet using a similar sampling from a normal distribution. The mass of the droplet is then

$$
m = \frac {4}{3} \pi r ^ {3} \rho ,
$$

where $\rho$ is the density of water, $998.2\mathrm{kg} / \mathrm{m}^3$ at $20^{\circ}\mathrm{C}$ [Lide 1995]. In the basic simulation, the $\phi$ distribution has a mean of 0 and a standard deviation of $\pi /60$ radians, and the radius distribution has a mean of $0.0015\mathrm{m}$ and a standard deviation of $0.0001\mathrm{m}$ .

In the basic simulation, the nozzle points straight up; however, we also test the effect of tilting the nozzle into the wind. The program first rotates the nozzle a set angle away from the $z$ -axis ( $\pi/18$ , $\pi/9$ , or $\pi/6$ radians). The initial position and velocity vectors are changed by the formula for rotating a point $t$ radians about the $x$ -axis, from $z$ towards negative $y$ [Dollins 2001]:

$$
\left[ \begin{array}{l} x ^ {\prime} \\ y ^ {\prime} \\ z ^ {\prime} \end{array} \right] = \left[ \begin{array}{c c c} 1 & 0 & 0 \\ 0 & \cos t & - \sin t \\ 0 & \sin t & \cos t \end{array} \right] \left[ \begin{array}{l} x \\ y \\ z \end{array} \right].
$$

Next, the program rotates the nozzle around the $z$ -axis to point directly away from the wind (in spherical coordinates, the $\theta$ of the nozzle is equal to that of the wind vector). 
The formula to rotate a point $t$ radians about the $z$ -axis, from $x$ towards $y$ [Dollins 2001] is

$$
\left[ \begin{array}{c} x ^ {\prime} \\ y ^ {\prime} \\ z ^ {\prime} \end{array} \right] = \left[ \begin{array}{c c c} \cos t & - \sin t & 0 \\ \sin t & \cos t & 0 \\ 0 & 0 & 1 \end{array} \right] \left[ \begin{array}{c} x \\ y \\ z \end{array} \right].
$$

# Design of Program

We developed a program to simulate the fountain. The program component Simulator.class manages interactions among the other components of the program. Particle.class describes a water droplet in terms of position, velocity, radius, and mass. Vector3D.class creates and performs functions with vectors, including setting vector components, adding and subtracting vectors, multiplying vectors by scalars, finding the angle between vectors, and finding the magnitude of a vector.

Emitter.class creates a fountain by spraying droplets. It considers the nozzle radius, direction, and angle orientations and generates launch angle $\phi$ and launch location on the nozzle according to the prescribed distributions.

Launch speed is determined by Anemometer.class, which takes the wind-speed reading from the anemometer and sends that plus fountain radius and tolerable spillage percentage to FindingVelocity.class. This latter class does a table lookup and returns the maximum droplet speed for the spillage percentage. Anemometer.class then sets the droplet emission speed.

Once a droplet is emitted, its trajectory is updated every iteration using Physics.class, which checks Wind.class (which contains a vector of the current wind) in each iteration in calculating an updated trajectory. Then Physics.class iterates through the entire collection of particles and computes new velocities and positions based on the forces acting on them.

The Analyzer.class checks to see if any particles have hit the ground; their locations are recorded and they are removed from consideration. 
It then relays this information back to Simulator.class, where it is written to disk. + +# Results + +A program run takes 5 min to model 2 sec of spray (10,000 droplets). + +Scatterplots showing where droplets land appear uniform and radially symmetric (Figure 1); a side profile of the points appears uniformly distributed along a line and bilaterally symmetric (Figure 2). + +![](images/976be161ff08d110d22864c4050ba580f82907966b5b7d61d1e6ee3f953e112d.jpg) +Figure 1. Fountain from overhead: launch speed $10\mathrm{m / s}$ , no wind. + +![](images/91042f7d0302455fc166e15a3f6d4277c282d115066c43dc8aad188f4d53e52b.jpg) +Figure 2. Fountain from the side: launch speed $10\mathrm{m / s}$ , no wind. + +We then introduced wind in the positive $x$ -direction. As expected, the landing plot and the side profile plot are skewed horizontally (Figure 3). + +![](images/5fb02fbe16cc4822174359b508ef262033d5b3aaa22e55f467399a81615f8a07.jpg) +Figure 3. Fountain from the side: launch speed $10\mathrm{m / s}$ , wind of $5\mathrm{m / s}$ + +Figures 1-3 conform very well to the actual appearance of fountains, indicating that our model creates an accurate portrait of a real fountain. + +We used a pool radius of $4\mathrm{m}$ and an acceptable spillage of $5\%$ to generate a table of water speeds. We then simulated control of the fountain by a theoretical anemometer using the table. The anemometer was subjected to sinusoidal wind ranging from 0 to $3\mathrm{m / s}$ . There was $7.6\%$ spillage, a success since the extra loss above $5\%$ is from droplets carried farther by an increase of wind after launch. + +# Analysis of Results + +We tested the model for accuracy and sensitivity. We did some useful analysis of the physics of the model by creating a miniature version of the simulation on an Excel spreadsheet to track the trajectory of a single particle. + +Our first test was of the accuracy of the Euler's method approximation. 
Continuous equations for the motion of a flying droplet can be easily developed if drag and wind are ignored, so we chose this scenario to test our approximation. We considered a particle with a speed of $10\mathrm{m / s}$ and a launch angle of $\pi /60$ radians. We calculated its trajectory using

$$
x = (v _ {i} \sin \phi) t, \qquad y = (v _ {i} \cos \phi) t - \frac {1}{2} g t ^ {2},
$$

where

$x$ is the position along the horizontal axis,

$y$ is the position along the vertical axis,

$v_{i}$ is the magnitude of the initial velocity,

$t$ is time,

$g$ is the acceleration of gravity, and

$\phi$ is the launch angle, measured from the vertical axis towards the horizontal.

We compared that trajectory with the one calculated by Euler's method. The two were indistinguishable, showing that the Euler's method approximation results in virtually no error.

We also used the spreadsheet model to examine the effects of wind and drag on individual particle trajectories. Figure 4 compares trajectories of particles with and without drag; and Figure 5 compares the trajectories of two droplets, one with a $5\mathrm{m / s}$ wind and the other with no wind. Drag has a major effect and cannot be ignored.

# Sensitivity

We tested the effect of changing some base factors in the model, using an initial water speed of $10 \mathrm{~m} / \mathrm{s}$ . Fountain pool radii were chosen to highlight general trends in the data, either stability or sensitivity.

![](images/3dabdd6ea40f00c19b3f397b691917ba69b135e95dcad6cf7d7532268c8177fa.jpg)
Figure 4. Droplet trajectories with and without drag.

![](images/8adbd8c2b40e41f64e5c41c796e8ec78fc4c04c70ffe294dbeca9f29700e3719.jpg)
Figure 5. Droplet trajectories with and without wind.

# Nozzle angle

We ran the simulation at a wind speed of $5\mathrm{m / s}$ with the nozzle tilted 0, $\pi /18$ , $\pi /9$ , or $\pi /6$ radians in the same direction as the wind vector. 
For a pool with a radius of $6\mathrm{m}$ , no water fell outside when the nozzle was pointed straight up and virtually none with a tilt of $\pi /18$ radians. With a tilt of $\pi /9$ radians, $47\%$ of the water fell outside; for $\pi /6$ radians, $99.9\%$ fell outside. The data suggest that tilting the nozzle into the wind could be used to prevent spillage.

# Nozzle radius

With no wind and a pool radius of $2\mathrm{m}$ , virtually no water was spilled for nozzle radii of 0.25, 0.5, or $1\mathrm{cm}$ . With a $5\mathrm{m/s}$ wind, virtually all of the water was spilled at all three radii. The radius of the nozzle thus has virtually no effect on the percentage spilled, supporting our decision to use a percentage measure so as to allow the model to apply to fountains with different flow rates.

# Water droplet size

In a fountain with a pool radius of $3.5\mathrm{m}$ , droplet radii of 0.75, 1.5, and $3\mathrm{mm}$ resulted in $94\%$ , $53\%$ , and $6\%$ spillage. The sensitivity to droplet radius is a reflection of real-world behavior rather than a weakness of the model: Small particles, because of their low mass, are greatly affected by wind and drag.

# Variability of launch angle

With a $3.5\mathrm{m}$ pool, a $5\mathrm{m/s}$ wind produced $15\%$ , $45\%$ , and $49\%$ spillage for standard deviations of $\pi/180$ , $\pi/20$ , and $\pi/12$ radians. Thus, results are sensitive to the launch angles of the droplets, dictating that the angle be measured carefully before the model is used.

# Strengths

As intended, the model controls the fountain height and volume according to conditions. It creates the tallest and therefore most interesting fountain possible while maintaining the set spillage level. At low spillage-level settings, no passersby get drenched nor is much water wasted. 
The model is easy to adapt by changing parameters, including nozzle size, mean launch angle and its standard deviation, and mean droplet size and its standard deviation.

Graphs of the droplets in midair show that the programmed fountain accurately depicts a real fountain.

Use of a table means that the radius or spill percentage can be changed without requiring recalculation. Since the control system does not do any calculation, it can respond almost instantaneously.

# Weaknesses

A major problem occurs when wind speed increases quickly: Water droplets already emitted cannot be slowed down and will be carried away on the wind. However, any fountain system will suffer from this dilemma. To give the fountain a small buffer, the radius entered into the fountain control system can be set lower than the actual radius of the pool.

We model the wind as moving parallel to the ground with uniform speed. Real wind may vary with altitude and may blow from above or below the droplets. We also neglect wind turbulence.

We ignore droplet collisions. Some droplets may combine and then separate, causing slightly more splatter or mist; or the droplets' collisions may cause more of them to fall short of their expected trajectories, reducing spillage.

# References

Dollins, Steven C. 2001. Handy mathematics facts for graphics. http://www.cs.brown.edu/people/scd/facts. Dated 6 November 2001; accessed 9 February 2002.
Halliday, David, Robert Resnick, and Jearl Walker. 1993. Fundamentals of Physics. 4th ed. New York: Wiley.
Lide, David R., ed. 1995. CRC Handbook of Chemistry and Physics. Boca Raton, FL: CRC Press.
Yates, Daniel, David Moore, and George McCabe. 1999. The Practice of Statistics. New York: W.H. Freeman.

# The Fountain That Math Built

Alex McCauley

Josh Michener

Jadrian Miles

North Carolina School of Science and Mathematics

Durham, NC

Advisor: Daniel J. 
Teague

# Introduction

We are presented with a fountain in the center of a large plaza, which we wish to make as attractive as possible without splashing passersby on windy days. Our task is to design an algorithm that controls the flow rate of the fountain, given input from a nearby anemometer.

In calm weather, the fountain sprays out water at a steady rate. When the wind picks up, the flow should be attenuated so as to keep the water within the fountain's pool; in this way, we strike a balance between esthetics and comfort.

We consider the water stream from the fountain as a collection of different-sized droplets that initially leave the fountain nozzle in the shape of a perfect cylinder. This cylinder is broken into its component droplets by the wind, with smaller droplets carried farther. In the reference frame of the air, a droplet is moving through stationary air and experiencing a drag force as a result; since the air is moving with a constant velocity relative to the fountain, the force on the droplet is the same in either frame of reference.

Modeling this interaction as laminar flow, we arrive at equations for the drag forces. From these equations, we derive the acceleration of the droplet, which we integrate to find the equations of motion for the droplet. These allow us to find the time when the droplet hits the ground and—assuming that it lands at the very edge of the pool—the time when it reaches its maximum range from the horizontal position equation. Equating these and solving for the initial flow rate, we arrive at an equation for the optimal flow rate at a given constant wind speed. Since the wind speeds are not constant, the algorithm must make its best prediction of wind speed and use current and previous wind speed measurements to damp out transient variations. 
Our final solution is an algorithm that takes as its input a series of wind speed measurements and determines in real time the optimal flow rate to maximize the attractiveness of the fountain while avoiding splashing passersby excessively. Each iteration, it adds the incoming wind speed to a buffer of previous measurements. If the wind speed is increasing sufficiently, the last 0.5 s of the buffer are considered; otherwise, the last 1 s is. The algorithm computes a weighted average of these wind speeds, weighting the most recent value slightly more than the oldest value considered. It uses this weighted velocity average in the equation that predicts the optimal flow rate under constant wind. The result is the optimal flow rate under variable wind, knowing only current and previous wind speeds.

A list of relevant variables, constants, and parameters is in Table 1.

Table 1. Relevant constants, variables, and parameters.
| Symbol | Description | Value / Units |
|---|---|---|
| **Physical constants** | | |
| $\eta_a$ | Viscosity of air | $1.849 \times 10^{-5}$ kg/(m·s) [Lide 1999] |
| $\rho_w$ | Density of water | 1000 kg/m³ |
| $\rho_a$ | Density of air | 1.2 kg/m³ |
| **Situational constants** | | |
| $A$ | Cross-sectional area of fountain nozzle | m² |
| $f_{\max}$ | Maximum flow rate of fountain's pump | m³/s |
| $R_p$ | Radius of fountain pool | m |
| $r$ | Radius of smallest uncomfortable water droplet | m |
| $dt$ | Sampling interval of anemometer | s |
| $k$ | $k = 9\eta_a / 2\rho_w r^2$ | s⁻¹ |
| **Situational variables** | | |
| $v_a$ | Instantaneous wind speed | m/s |
| $f$ | Instantaneous flow rate of water from the fountain | m³/s |
| $n$ | $n = g/k + f/A$ | m/s |
| **Dynamic variables** | | |
| $x(t), y(t)$ | Droplet's horizontal and vertical positions | m |
| $v_x(t), v_y(t)$ | Droplet's horizontal and vertical speeds | m/s |
| $a_x(t), a_y(t)$ | Droplet's horizontal and vertical accelerations | m/s² |
| **Situational parameters** | | |
| $\tau_d$ | Default sample wind velocity buffer time | s |
| $\tau_i$ | Buffer time for quickly increasing sample wind velocities | s |
| $K$ | Weight constant | dimensionless |
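For concreteness, the derived constant $k = 9\eta_a / 2\rho_w r^2$ from Table 1 can be evaluated numerically. This is a sketch of ours; the droplet radius in the usage note is an illustrative choice, not a value from the paper:

```python
ETA_AIR = 1.849e-5   # viscosity of air, kg/(m*s) [Lide 1999]
RHO_WATER = 1000.0   # density of water, kg/m^3

def k_constant(droplet_radius_m):
    # k = 9 * eta_a / (2 * rho_w * r^2), with units of 1/s
    return 9.0 * ETA_AIR / (2.0 * RHO_WATER * droplet_radius_m ** 2)

# For a droplet of radius 0.5 mm, k is about 0.33 per second.
k = k_constant(5e-4)
```

Because $k$ scales as $1/r^2$, halving the droplet radius quadruples $k$, which is why small droplets are so strongly coupled to the wind.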
# Assumptions

- Passersby find a higher spray more attractive.
- Avoiding discomfort is more important to passersby than the attractiveness of the fountain.
- The water stream can be considered a collection of spherical droplets, each of which has no initial horizontal component of velocity.
- Every possible size of sufficiently small water droplet is represented in the water stream in significant numbers.
- Water droplets remain spherical.
- The interaction between the water droplets and wind can be described as non-turbulent, or "laminar," flow.
- There exists a minimum uncomfortable water droplet size; passersby find it acceptable to be hit by any droplets below this size but by none above.
- When the wind enters the plaza, its velocity is entirely horizontal.
- The wind speed is the same throughout the plaza at any given time.
- The pool and the area around it are radially symmetric, so there is no preferred radial direction.
- We can neglect any buoyant force on the water due to the air, since the error introduced by this approximation is equal to the ratio of densities of the fluids involved, on the order of $10^{-3}$ , which is negligible.
- The anemometer reports wind speeds at discrete time intervals $dt$ .

# Analysis of the Problem

For a water stream viewed as a collection of small water droplets blown from a core stream, the interaction between the droplets and the air moving past them can best be described in the inertial reference frame of the moving air. In this frame, the air is stationary while the droplet moves horizontally through the air with a speed equal to the relative speed of the droplet and wind, $v_{r} = v_{a} - v_{x}$ . In the vertical direction, $v_{r} = v_{y}$ , since the wind blows horizontally.

In the air's frame of reference, the water droplet experiences a drag force opposing $v_{r}$ . Assuming that the air moves at a constant velocity, this force is the same in both frames of reference. 
In the frame of the fountain, then, the droplet is being blown in the direction of the wind. The smaller water droplets are carried farther, so we need only consider the motion of the smallest uncomfortable water droplets, knowing that bigger droplets do not travel as far.

The water droplet initially has a vertical velocity $v_{y}(0)$ that is directly related to the flow rate of water through the nozzle of the fountain. This initial vertical velocity component can be controlled by changing the flow rate. The droplet's motion causes vertical air resistance, slowing the droplet and affecting how long $(t_{w})$ the droplet is in the air.

Since the vertical and horizontal components of a water droplet's motion are independent, $t_w$ is determined solely by the vertical motion. Knowing this time allows us to find the horizontal distance traveled, which we wish to constrain to the radius of the pool.

When the wind is variable, however, we cannot determine exactly the ideal flow rate for any given time. We must instead act on the current reading but also rely on previous measurements of wind speed, in order to restrain the model from reacting too severely to wind fluctuations. We need to react faster to increases in wind speed, since they result in splashing, which we weight more heavily than attractiveness.

# Design of the Model

For our initial model, we assume that $v_{a}$ is constant for time intervals on the order of $t_{w}$ , so that any given droplet experiences a constant wind speed.

We model the water stream as a collection of droplets that are initially cohesive but are carried away at varying velocities by the wind. The distances that they travel depend on the wind speed $v_{a}$ and the initial vertical velocity of the water stream through the nozzle, $v_{y}(0)$ . Since the amount of water flowing through the nozzle per unit time is $f = v_{y}(0)A$ , we have $v_{y}(0) = f / A$ . The dynamics of the system, then, is fully determined by $f$ and $v_{a}$ . 
First, we find the equations of motion for the droplet.

# Equations of Motion for a Droplet

For laminar flow, a spherical particle of radius $r$ traveling with speed $v$ through a fluid medium of viscosity $\eta$ experiences a drag force $F_{D}$ such that

$$
F _ {D} = (6 \pi \eta r) v \qquad \text{[Winters 2002]}.
$$

Since a spherical water droplet has a mass given by

$$
m = \rho_ {w} \left(\frac {4}{3} \pi r ^ {3}\right),
$$

the acceleration felt by the droplet is given by Newton's Second Law as the total force over mass. Since there are no other forces acting in the horizontal direction, the horizontal acceleration $a_{x}$ is given by

$$
a _ {x} (t) = \frac {d ^ {2} x}{d t ^ {2}} = \left(\frac {9 \eta_ {a}}{2 \rho_ {w} r ^ {2}}\right) v _ {r} = k \left(v _ {a} - v _ {x}\right), \tag {1}
$$

where $k = 9\eta_{a} / 2\rho_{w}r^{2}$ .

The droplet experiences both air drag and gravity in the vertical direction, so the vertical acceleration is

$$
a _ {y} (t) = - \left[ \left(\frac {9 \eta_ {a}}{2 \rho_ {w} r ^ {2}}\right) v _ {y} + g \right] = - k \left(v _ {y} + \frac {g}{k}\right).
$$

With constant $v_{a}$ , we use separation of variables and integrate to find $v_{x}(t)$ and $v_{y}(t)$ , using the facts that $v_{x}(0) = 0$ and $v_{y}(0) = f / A$ . The results are

$$
v _ {x} (t) = v _ {a} \left(1 - e ^ {- k t}\right), \qquad v _ {y} (t) = n e ^ {- k t} - \frac {g}{k},
$$

where $n = g / k + f / A$ .

Integrating again, and using $x(0) = y(0) = 0$ , we have

$$
x (t) = \frac {v _ {a}}{k} \left(k t + e ^ {- k t} - 1\right), \qquad y (t) = \frac {1}{k} \left(n \left(1 - e ^ {- k t}\right) - g t\right).
$$

# Determining the Flow Rate

Because $f$ is the only parameter that the algorithm modifies, we wish to find the flow rate that would restrict the smallest uncomfortable water droplets to ranges within $R_{p}$ , so that they would land in the fountain's pool. 
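The closed-form solutions above can be sanity-checked numerically. This sketch of ours verifies by finite differences that the quoted $x(t)$ and $y(t)$ differentiate to $v_x(t)$ and $v_y(t)$, and that the initial conditions hold; the parameter values are illustrative:

```python
import math

G = 9.8  # m/s^2

def stokes_motion(t, k, va, n):
    # Closed-form positions and velocities of a droplet under Stokes
    # drag and gravity, with constant horizontal wind speed va
    e = math.exp(-k * t)
    vx = va * (1.0 - e)
    vy = n * e - G / k
    x = (va / k) * (k * t + e - 1.0)
    y = (n * (1.0 - e) - G * t) / k
    return x, y, vx, vy

# Check dx/dt = v_x and dy/dt = v_y at t = 0.5 s by central differences
k, va, n = 0.33, 5.0, 35.0
h = 1e-6
x1, y1, _, _ = stokes_motion(0.5 - h, k, va, n)
x2, y2, _, _ = stokes_motion(0.5 + h, k, va, n)
_, _, vx, vy = stokes_motion(0.5, k, va, n)
```

The check confirms that the position formulas are consistent, term by term, with the velocity formulas they were integrated from.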
+ +After a time $t_w$ , the droplet has fallen back to the ground. Thus, $y(t_w) = 0$ . This equation is too difficult to solve exactly, so we use the series expansion for $e^{-kt}$ and truncate after the quadratic term: $e^{-kt} \approx 1 - kt + (kt)^2 / 2$ . Solving $y(t_w) = 0$ , we find + +$$ +t _ {w} \approx \frac {2}{k} \left(1 - \frac {g}{n k}\right). +$$ + +We know that the maximum horizontal distance $x(t_w)$ must be less than or equal to $R_p$ , with equality holding for the smallest uncomfortable droplet. For that case, using the same expansion for $e^{-kt}$ as above, + +$$ +R _ {p} = x (t _ {w}) \approx \frac {v _ {a}}{k} \left(k t _ {w} - 1 + 1 - k t _ {w} + \frac {(k t _ {w}) ^ {2}}{2}\right) = \frac {v _ {a} k}{2} t _ {w} ^ {2}. +$$ + +Solving for $t_w$ and equating it to the earlier expression for $t_w$ , we get + +$$ +\sqrt {\frac {2 R _ {p}}{v _ {a} k}} = t _ {w} = \frac {2}{k} \left(1 - \frac {g}{n k}\right). +$$ + +Recalling that in this equality only $n$ is a function of $f$ , we substitute for $n$ and solve for $f$ . The result is + +$$ +f \left(v _ {a}\right) = \frac {A g}{\sqrt {\frac {2 v _ {a} k}{R _ {p}}} - k}. \tag {2} +$$ + +As $v_{a} \rightarrow kR_{p} / 2$ , this equation becomes singular (see Figure 2). At lower values of $v_{a}$ , it gives a negative flow rate. These wind speeds are very small; at such speeds, the droplets would not be deflected significantly by the wind. Since (2) assumes that the flow rate can be made arbitrarily high, it is unrealistic and invalid in application. To make the model more reasonable, we modify (2) to include the maximum flow rate achievable by the pump, $f_{\mathrm{max}}$ : + +$$ +F \left(v _ {a}\right) = \left\{ \begin{array}{l l} \min \left(\frac {A g}{\sqrt {\frac {2 v _ {a} k}{R _ {p}}} - k}, f _ {\max }\right), & v _ {a} > k R _ {p} / 2; \\ f _ {\max }, & v _ {a} \leq k R _ {p} / 2. \end{array} \right. 
\tag {3}
$$

An algorithm can use the given constants and a suitable minimal droplet size to determine the appropriate flow rate for a measured $v_{a}$ . However, (3) assumes that the wind speed is constant over the time scale $t_{w}$ for any given droplet. A more realistic model must take into account variable wind speed.

# Variable Wind Speed

When wind speed varies with time, the physical reasoning used above becomes invalid, since the relative velocity of the reference frames is no longer constant. Mathematically, this is manifested in the equation for velocity-dependent horizontal acceleration; integrating is now not so simple, and we must resort to numerical means to find the equations of motion. Additionally, the algorithm can rely only on past and present wind data to find the appropriate flow rate. Our model needs to incorporate these wind data to make a reasonable prediction of the wind's velocity over the next $t_w$ and determine an appropriate flow rate using (3).

A gust is defined to be a sudden wind speed increase on the order of $5\mathrm{m / s}$ that lasts for no more than 20 s; a squall is a similarly sudden wind speed increase that lasts longer [Weather Glossary 2002]. Our model should account for gusts and squalls, as well as for "reverse" gusts and squalls, in which the wind speed suddenly decreases. Since wind speeds can change drastically and unpredictably over the flight time of a droplet, our model will behave badly at times and there is no way to completely avoid this—only to minimize its effects.

The model's reaction to wind speed is not fully manifested until the droplet lands, after a time $t_w$ (approximately 2 s). By the time our model has reacted to a gust or reverse gust, therefore, the wind speed has stopped changing. 
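Equation (3) translates directly into a small routine. This is a sketch of ours; the parameter values in the usage note are illustrative only:

```python
import math

def target_flow(va, k, A, Rp, f_max, g=9.8):
    # Equation (3): the constant-wind flow rate of equation (2),
    # capped at the pump's maximum; at or below the singular wind
    # speed k * Rp / 2 the pump simply runs flat out.
    if va <= k * Rp / 2.0:
        return f_max
    return min(A * g / (math.sqrt(2.0 * va * k / Rp) - k), f_max)
```

With $k = 1\ \mathrm{s^{-1}}$, a 1-cm nozzle, $R_p = 1.2$ m, and $f_{\max} = 7.5$ L/s, the returned flow decreases as the wind picks up, and winds at or just above $kR_p/2$ leave the pump at $f_{\max}$.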
Without some type of buffer, in a gust our model would react by suddenly dropping flow rate as the wind peaked and then increasing it again as the wind decreased; the fountain would virtually cut off for the duration of any gust, which would release less water and thus seem very unattractive to passersby. Additionally, the water released just before the onset of the gust would be airborne as the wind speed picked up, splashing passersby regardless of any reaction by our model.

We exhibit an algorithm for analyzing wind data that makes use of (3). Because velocity now varies within times on the order of $t_w$ , we do not want to input directly the current wind speed but rather a buffered value, so that the model does not react too sharply to transient wind changes. The model should react more quickly to sudden increases in wind than to decreases, because increases cause splashing, which we weight more heavily than attractiveness.

The model, therefore, has two separate velocity buffer times: one, $\tau_{d}$ , the default, and another, $\tau_{i}$ , for when the wind increases drastically. We also weight more-recent values in the buffer more heavily, since we want the model to react promptly to wind speed changes but not to overreact. We weight each value in the velocity buffer with a constant value $K$ plus a weight proportional to its age: Less-recent velocities are considered but given less weight than more recent ones. The weight of the oldest value in the buffer is $K$ and that of the most recent is $K + 1$ , with a linear increase between the two. With the constraint that the weights are normalized (i.e., they sum to 1), the equation for the $i$ th weight factor, for $i = 0, \ldots, \tau / dt - 1$ , is

$$
w _ {i} = \frac {\left(K + \frac {i \, d t}{\tau - d t}\right) d t}{\left(K + \frac {1}{2}\right) \tau}.
$$

The speeds are multiplied by their respective normalized weights and summed. This sum, $v^{*}$ , is then used in (3) to find the appropriate flow rate for the fountain at a given time. 
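The weighting scheme can be sketched as follows (the function name is ours). With $\tau = 1$ s, $dt = 0.1$ s, and $K = 10$, the weights sum to 1 and the newest-to-oldest ratio is $(K+1)/K = 1.1$:

```python
def buffer_weights(tau, dt, K):
    # Normalized linear weights for a buffer of tau/dt samples:
    # the oldest value gets raw weight K, the most recent K + 1
    n = round(tau / dt)
    raw = [K + (i * dt) / (tau - dt) for i in range(n)]
    total = sum(raw)
    return [w / total for w in raw]
```

The buffered value $v^*$ is then just the dot product of these weights with the stored wind speeds, oldest first.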
We use $\tau_{i}$ rather than $\tau_{d}$ when the wind speed increases sufficiently over a recent interval, but not when it increases slightly or fluctuates rapidly. We switch from $\tau_{d}$ to $\tau_{i}$ whenever the wind speed increases over two successive 0.2 s intervals and by a total of at least 1 m/s over the entire 0.4 s interval. + +Our algorithm follows the flowchart in Figure 1 in computing the current flow rate. We wrote a $C++$ program to implement this algorithm, the code for which is included in an appendix. [EDITOR'S NOTE: We omit the code.] + +![](images/95532ee41de650477e5a92951b5fa4dc8aa0bad7ae1b92b97bff349fd0fdc09f.jpg) +Figure 1. Flow chart for computing flow rate with variable wind speed. + +# Testing and Sensitivity Analysis + +# Sensitivity of Flow Equation + +In our equation for flow rate, two variables can change: minimal droplet size and wind speed. While the minimal droplet size will not change dynamically, its value is a subjective choice that must be made by the owner of the fountain. The wind speed, however, will change dynamically throughout the problem, and the purpose of our model is to react to these changes. + +We examined (3) for varying minimal drop sizes (Figure 2) and wind speeds (Figure 3). We used a fountain with nozzle radius $1\mathrm{cm}$ , maximum flow rate $7.5\mathrm{L/s}$ , and pool radius $1.2\mathrm{m}$ . (This maximum flow rate is chosen for illustrative purposes and is not reasonable for such a small fountain.) + +![](images/3654e977f8d609282f9c35d6e81aeec7759cb5b2bbe8c6838714c4ab3fb3e9e1.jpg) +Figure 2. Graphs of flow rate $f$ vs. wind speed $v_{a}$ for several values of radius $r$ of smallest uncomfortable droplet. + +At any wind speed, as the acceptable droplet radius decreases, the flow rate decreases. At higher wind speeds, this difference is less pronounced; but at lower speeds, acceptable size has a significant impact on the flow rate. 
At very low wind speeds, the fountain cannot shoot the droplets high enough to allow the wind to carry them outside the pool, regardless of drop size. Our cutoff, $f_{\mathrm{max}}$ , reflects that the fountain pump cannot generate the extreme flow needed to get the droplets to the edge of the pool in these conditions. + +![](images/15bea9e0706421e528018154348d499e1aec213ee2800f9fa3f12ca496451c1e.jpg) +Figure 3. Graphs of flow rate $f$ vs. radius $r$ of smallest uncomfortable droplet for several values of wind speed $v_{a}$ . + +For any droplet size, as the wind speed increases, the flow rate must decrease to keep the droplets in the pool. For large $r$ , a change in wind speed requires a greater absolute change in flow rate than for small $r$ . For very small droplets, the drag force dominates the force of gravity, and an increase in flow also increases the drag force to such an extent that the particle spends no more time in the air. This behavior is readily apparent in (1) as $r$ approaches zero. These extremely small values of $r$ , though, describe droplets that are unlikely to discomfort passersby and thus are not significant to our model. + +# Sensitivity of Flow Algorithm + +The results of the algorithm depend on the parameters $\tau_{i},\tau_{d}$ , and $K$ , which determine the size of the buffer and weights of the velocities in the buffer. To test sensitivity to these parameters and to find reasonable values for them, we + +created the set of simulated wind speeds shown in Figure 4, including small random variations, on which to test our algorithm. This data set does not reflect typical wind patterns but includes a variety of extreme conditions. + +![](images/0937d35c19e9eb223d53b5ed118c3b82030fdf06d53ffd48a8a1b02da85a8b3c.jpg) +Figure 4. 
Simulation of wind speed for $3\mathrm{min}$ .

We wish to create a quantitative estimate of the deviation of our flow algorithm from ideal performance and then test the algorithm with different combinations of parameters to find the set that produces the smallest deviation under simulated wind conditions.

To measure how "bad" a set of flow choices is, we consider only the droplets that fall outside the pool. The "badness" is the sum over the run of the distances outside the pool at which droplets land.

To determine the distance, we need to know how droplets move through the air in varying wind speeds. Describing this motion in closed form is mathematically impossible without continuous wind data, so we approximate the equations of motion with an iterative process.

Since the time that a particle spends in the air, $t_w$ , is not affected by the wind speed, we know $t_w$ for each particle. We step through the time $t_w$ in intervals of $dt$ , computing the particle's acceleration, velocity, and position as

$$
a _ {i} = k (v _ {a, i} - v _ {i}), \qquad a _ {0} = k v _ {a, 0};
$$

$$
v _ {i} = v _ {i - 1} + a _ {i - 1} d t, \qquad v _ {0} = 0;
$$

$$
x _ {i} = x _ {i - 1} + v _ {i} d t, \qquad x _ {0} = 0.
$$

When we reach $t_w$ , the droplet has hit the ground, and we compare its horizontal position to the radius of the pool. We do this for each droplet, keeping track of both the largest absolute difference and the average difference.

To test the flow algorithm, we ran our program on the simulated wind data with each combination of parameters. The parameter values that produced the least deviation were $\tau_{i} = 0.5$ , $\tau_{d} = 1$ , and $K = 10$ . These values imply that only fairly recent wind speed measurements should be held in the buffer, with the most recent velocity having a weight of $(K + 1) / K = 1.1$ relative to the oldest.
Lowering $K$ below this value increases the deviation from the ideal, while increasing it further makes no difference. Similarly, increasing $\tau_{i}$ or $\tau_{d}$ increases the deviation, because the algorithm cannot respond quickly to changes in wind speed. Decreasing $\tau_{i}$ below 0.5 makes no difference, while decreasing $\tau_{d}$ would make the model too sensitive to short fluctuations in wind speed.

![](images/10fff09f9b9344b7b658daa3286635a4e891eb41f41485c80b6ad29f5eb0e4c8.jpg)
Figure 5. Range of droplets over the simulation overlaid with scaled wind speeds.

# Justification

# Validity of the Laminar Flow Assumption

Our model is based on a drag force proportional to $v_{r}$ , which is not necessarily correct. For higher speeds or larger droplet sizes, the drag becomes proportional to $v_{r}^{2}$ . We thus need to determine whether reasonable physical scenarios allow us to model the drag force as proportional to $v_{r}$ rather than to $v_{r}^{2}$ .

For a sphere of radius $r$ moving through the air with speed $v_{r}$ , the Reynolds number $R$ is defined to be

$$
R = \frac {2 \rho_ {a} v _ {r} r}{\eta_ {a}} \qquad \text{[Winters 2002]}.
$$

When $R < 10^{3}$ , there is little turbulence and laminar flow dominates, so air resistance is roughly proportional to $v_{r}$ . If $R > 10^{3}$ , the flow is turbulent and the drag force is proportional to $v_{r}^{2}$ [Winters 2002]. Using a physically reasonable relative speed of $4.5\mathrm{m / s}$ (corresponding to a wind speed of roughly $10\mathrm{mph}$ ), we obtain $R = (5.8\times 10^{5})r$ , which gives predominantly laminar flow when $r < 1.7 \mathrm{mm}$ . Water droplets are uncomfortable only at diameters of $3\mathrm{mm}$ or more, so the smallest uncomfortable droplet (radius $1.5\mathrm{mm}$ ) lies within the laminar range.
Because these smaller droplets bound the larger droplets in how far they go from the fountain (see below), all of our analysis is concerned with droplets whose sizes are within the allowed range for laminar flow.

# Bounding the Droplet Range

For either laminar or turbulent flow, the acceleration due to drag scales as $F / m \propto r^{-n}$ , where $1 \leq n \leq 2$ . Larger droplets therefore experience a lower horizontal acceleration due to drag, while acceleration in the vertical direction is dominated by gravity ( $k < 0.1g$ ); so the time that a particle spends in the air is roughly the same for droplets of varying radius. The heavier droplets have less horizontal acceleration, so in the same amount of time they travel a shorter horizontal distance than smaller droplets. The ranges are therefore shorter for larger droplets, so we can bound the range of every uncomfortably sized droplet by the range of the smallest such droplet.

# Initial Shape of the Water Stream

We assume that the water coming out of the fountain nozzle has no initial horizontal velocity; that is, the stream is a perfect cylinder with the same radius as the nozzle. In fact, the stream is closer to the shape of a steep cone, and the droplets have some horizontal velocity. In the absence of wind, this assumption has a significant impact on where the droplets land, since without wind the algorithm predicts a horizontal range of zero. However, in these cases, the flow rate is bounded by $f_{\mathrm{max}}$ regardless of initial velocity, so the natural spread of the fountain is irrelevant. In higher wind, the initial horizontal velocity is quickly dominated by the acceleration due to the wind and thus makes a negligible contribution to the total range.

# Exclusively Horizontal Wind

We assume that the wind is exclusively horizontal. Since the anemometer measures only horizontal wind speed, that is the only component that we can consider in our model.
Additionally, the buildings around the plaza would tend to act as a wind tunnel and channel the wind horizontally.

# Quadratic Approximation of $e^{-kt}$

Because the series for $e^{-kt}$ is alternating, the error from truncating after the second-order term is no greater than the first omitted term, $(kt)^3 / 6$ . The relative error is $\frac{(kt)^3 / 6}{e^{-kt}} \approx 0.001$ for reasonable values of $k$ and $t$ , so our approximation introduces very little error.

# Conclusions

Our final solution is an algorithm that takes as its input a series of wind speed measurements and determines in real time the optimal flow rate to maximize the attractiveness of the fountain while avoiding splashing passersby excessively. It takes each incoming wind speed and adds it to a buffer of previous measurements. If the wind speed is increasing sufficiently, the last 0.5 s of the buffer are considered; otherwise, the last 1 s is. The algorithm computes a weighted average of these wind speeds, weighting the most recent value $10\%$ more heavily than the oldest value considered. It then uses this weighted average in the equation that predicts the optimal flow rate under constant wind. The result is the optimal flow rate under variable wind, knowing only current and previous wind speeds.

# Strengths and Weaknesses

# Strengths

- Given reasonable values for the characteristics of the fountain and for wind behavior, our model returns values that satisfy the goal of maintaining an attractive fountain without excessively splashing passersby.
- The model can compute optimal flow rates in real time. Running one cycle of the algorithm takes a time on the order of $0.001 \, \text{s}$ , so the fountain's pump could be adjusted as fast as physically possible.
- The values for the parameters that determine the behavior of the algorithm, $\tau_{d}, \tau_{i}$ , and $K$ , are not arbitrary but instead are the values that perform best under simulation.
- Our algorithm is very robust; it works well under extreme conditions and can be readily modified for different situations or fountains.

# Weaknesses

- A primary assumption is that the droplets coming from the fountain nozzle have no horizontal velocity. In reality, the nozzle sprays a cone of water, rather than a perfect cylinder; but this difference does not have a significant impact on the results.
- Another important assumption is laminar flow. The water droplets are of a size to experience a combination of laminar and turbulent flow, but describing such a combination of regimes is mathematically difficult and is known only through experimentation. A more rigorous representation of the drag force would increase the accuracy of our simulation, but doing so would markedly increase the complexity of the algorithm and thus make real-time computation more difficult.
- We have ignored the abundances of droplet sizes in considering discomfort. If one droplet would spray passersby, we assume that enough droplets would spray passersby to make them uncomfortable. In fact, it is only significant numbers of droplets that discomfort passersby; but we do not know how many droplets would be released nor how many would be needed to be discomforting.

# References

Goldstein, S., ed. 1965. Modern Developments in Fluid Dynamics. Vol. 2. New York: Dover.
Hughes, W.F., and J.A. Brighton. 1967. Fluid Dynamics. New York: McGraw-Hill.
Lide, David R., ed. 1999. CRC Handbook of Chemistry and Physics. 80th ed. Boca Raton, FL: CRC Press.
Winters, L. 2002. Theory of velocity dependent drag forces. http://courses.ncssm.edu/ph220/labs/vlab1/theory.pdf. Accessed February 2002.
Weather Glossary. 2002. http://www.weather.com/glossary/. Accessed February 2002.

# Wind and Waterspray

Tate Jarrow
Colin Landon
Mike Powell
U.S.
Military Academy
West Point, NY

Advisor: David Sanders

# Introduction

Given anemometer readings from a nearby building, the task is to devise an algorithm that controls the height of a fountain in an open square. Our mission is to keep passersby dry and yet have the fountain look as impressive as possible. With ever-changing winds, we must devise a scheme to regulate the flow of water through the fountain to ensure that the bulk of the water shot into the air falls back to the ground within the fountain basin boundary.

Our model considers many factors and is divided into five basic parts:

- The conversion of wind speed on top of the building to wind speed at ground level, based on height and the force of drag.
- The determination of initial velocity, maximum height, and time of flight from fountain nozzle characteristics, using Bernoulli's equation and the rate-of-flow equation of continuity.
- The assessment of the displacement effects of the wind on the water's ascent.
- The assessment of the displacement effects of the wind on the water's descent.
- The calculation of the optimal flow rate by comparing the water's total horizontal displacement to the radius of the fountain basin.

After creating this model in a MathCAD worksheet, we expressed every function in the model in terms of the water flow rate. The worksheet takes as input several variables, such as the nozzle radius, the maximum flow rate the fountain can handle, the dimensions of the building on which the anemometer is placed, and the dimensions of the fountain. From the inputs, the model finds the maximum flow rate that keeps the water in the fountain basin. As wind speed and direction vary, the model reacts to produce the optimal flow rate.

Testing the model shows that while the results are reasonable, the main source of error results from our drag calculations due to the interaction between wind and the buildings.
To reduce this error, measurements should be taken at both the building roof and the fountain itself. Although future work would resolve this issue and improve the model, our current model still provides realistic results.

We provide in Table 1 a list of symbols used.

# Problem Approach

We break the overall problem down into several smaller pieces, solve the pieces separately, and put the pieces together to find the overall solution.

- How the wind is affected as it flows around the buildings:
  - How the wind varies with height off the ground.
  - How the buildings slow the wind.
- How the wind affects the water from the fountain:
  - How the wind affects the water on the way up.
  - How the wind affects the water on the way down.
- How to contain the total displacement within the basin.

# Assumptions

# Overall Assumptions

- The plaza has a fountain in the center with four surrounding buildings. Other arrangements can be handled with slight modifications.
- The buildings are rectangular and have the same dimensions. Most buildings are rectangular; for same-size buildings, we can use a single constant drag coefficient.
- The distances from each building to the fountain are the same, so each building has the same effect on the fountain water.
- The acceptable splash area is the radius of the fountain basin. A basin surrounds the water jet, and people walking outside the fountain do not want to get wet.

Table 1. Table of symbols.
| Symbol | Meaning (units) |
|---|---|
| $R$ | rate of flow of the fountain (m³/s) |
| $Re$ | Reynolds number |
| $v$ | flow speed (m/s) |
| $d$ | a relevant dimension (m) |
| $\nu$ | kinematic viscosity of the fluid |
| $F_D$ | force of drag (N) |
| $\rho$ | density of the wind (kg/m³) |
| $v_{\mathrm{bh}}$ | speed of the wind before the building at height $h$ (m/s) |
| $C_d$ | drag coefficient |
| $A$ | surface area interacting with the wind (m²) |
| $v_z$ | wind speed measured by the anemometer at the height $z$ (m/s) |
| $h$ | height above ground (variable) (m) |
| $z$ | height of the building (m) |
| $\alpha$ | terrain constant, $= 0.105$ |
| $h_{\max}$ | maximum height that the water reaches, a function of $R$ (m) |
| $K_i$ | kinetic energy of the wind-building system before the wind hits the building (J) |
| $K_f$ | kinetic energy of the wind-building system after the wind passes the building (J) |
| $W_{\mathrm{NC}}$ | work done by nonconservative forces, drag of the building times the length over which it is applied (J) |
| $d$ | distance over which drag acts, length and width of the building (m) |
| $b$ | width or half the length of one of the buildings (m) |
| $v_h$ | speed of the wind after it passes the building at a height $h$ (m/s) |
| $m$ | mass of the air that interacts with the building in 1 s if the speed $v_{\mathrm{bh}}$ were constant over the face of the building (kg) |
| $\theta$ | angle at which the wind strikes the building (°) |
| $A_p$ | cross-sectional area of the pipe at the nozzle tip (m²) |
| $v_f$ | speed of the water as it leaves the nozzle (m/s) |
| $r_p$ | radius of the pipe at the nozzle tip (m) |
| $g$ | acceleration due to gravity, 9.803 m/s² |
| $r_c$ | radius of the column of water at a time $t$ after leaving the nozzle with a rate of flow $R$ (m) |
| $P$ | pressure on the water caused by the wind (N/m²) |
| $A_c$ | surface area of the column of ascending water (m²) |
| $\rho$ | density of air (kg/m³) |
| $m_T$ | total mass of the water in the air at a flow rate $R$ (kg) |
| $t_{\mathrm{total}}$ | total time that the water spends in the air with a flow rate $R$ (s) |
| $\rho_{\mathrm{water}}$ | density of water (kg/m³) |
| $a_c$ | horizontal acceleration of the water in the column with a flow rate of $R$ and a wind of speed $v_h$ (m/s²) |
| $F_c$ | force on the column of water from the wind of speed $v_h$ (N) |
| $x_c$ | horizontal displacement of the ascending column of water with a flow rate $R$ and wind speed $v_h$ at a time $t$ (m) |
| $P_D$ | pressure on a drop of water from wind of speed $v_h$ (N/m²) |
| $F_d$ | force on the drop from wind of speed $v_h$ (N) |
| $A_d$ | area of a drop (m²) |
| $m_d$ | mass of a drop of water (kg) |
| $a_d$ | horizontal acceleration of the drop of water as a function of rate of flow $R$ and time in air $t$ (m/s²) |
| $a_{\mathrm{avg}}$ | average horizontal acceleration of a drop during its descent at a rate of flow $R$ and wind of speed $v_h$ (m/s²) |
![](images/501c2ee9923145a214e6c5d433093f44ea3fa70144777f5f0bf55a649dc518d4.jpg)
Figure 1. The fountain in the center of four buildings.

- The fountain does not squirt water higher than the buildings, although shooting water over the roofs would indeed be spectacular.
- The fountain shoots water straight into the air. This is important for our model so that we can predict how the water will flow up, how it will fall, and where it will fall.
- The fountain nozzle creates a single sustained stream of water. This assumption enables us to neglect drag as the water reaches its peak height. Furthermore, most fountains have a continuous flow of water.

# Wind

- The pertinent wind flow is around the sides of the buildings, not over them. Since the fountain does not exceed the height of the buildings, it does not interact with wind that passes over the tops of the buildings. This assumption is important in calculating the drag caused by the buildings.
- The flow of the wind continues in the same direction across the entire plaza. The wind flows through the plaza in a constant direction, goes around obstacles, and resumes the same direction of motion. The wind does not get stuck in the plaza nor react to cars, people, doors, or windows in the plaza.
- Wakes caused by buildings are not factors. The wake that results when wind hits a building and goes around it does not change the velocity after the wake, so the wake force does not influence the wind's speed or direction.
- The fountain is not in the wake of the buildings. With this assumption, there is no need to worry about wake in our model. This is important because wake is too complex to be modeled.
- The change in wind velocity is due solely to drag. The wind slows before and after hitting the building because of drag. This assumption allows us to use the law of conservation of energy to predict the change in velocity.
- The anemometer measures wind speed and direction at the top of the building before any effects of drag. The anemometer must be at the top of the building on the windward side, elevated above the height of the building so as not to measure any of the effects of the building. To simplify, we assume that it is at the height of the building.
- The wind pattern is the same across the entire plaza as measured at the anemometer. If the pattern changed, the anemometer reading would be invalid.
- The fountain is in a city or urban area. This assumption allows us to determine the effect of the ground on wind speed at a given height.
- The drag applied to wind at a certain height is equal to the average effect of drag, that is, to the total drag caused by the building at the velocity at that height divided by the height of the building. This is slightly inaccurate but still produces a reasonable model.

# Water Height

- Water has laminar flow. Water has a constant velocity at any fixed point, regardless of the time. A fluid may actually have various internal flows that complicate the model, but we consider the flow as the jet of water ascends to be constant so that we can model it as an ideal fluid.
- Water has nonviscous flow. The water experiences no viscous drag force in the pipe or in the air. The outer edge of the column of water actually interacts with the air and loses some energy due to the viscosity of both fluids; but since air and water both have low viscosity, this loss is negligible.
- Water is incompressible. The density of water is constant and does not change as the water moves up into the air and back down again.

# Water Movement Sideways

- The upward jet of water flows as a cylinder. Since the surface tension of the water holds it together unless it is acted upon by a force, the water should retain the dimensions of the nozzle from which it emerges.
- The pressure of the wind is a force per area on the water column and on water drops.
Wind and water are both fluids, so the interaction between them is a complex relationship of their viscosities; however, we also know that wind creates a pressure difference that we can model. We model the force on the water as the pressure caused by a certain velocity of wind multiplied by the surface area of the body of water.

- The largest particle of water that we want to contain is the size of an average drop of water, $0.05 \mathrm{~mL}$ . The column of water breaks into smaller particles at the peak of its ascent, and they descend individually. We estimate that particles smaller than this size would be acceptable to bystanders hit by them. Any larger particle would have more mass, hence a higher mass-to-surface-area ratio, so the pressure could not push it as far.

- A water drop behaves as a rigid body. Since a drop is small, internal currents have very little effect. Additionally, the pressure acts over the entire surface area of the drop and should accelerate it as a single body.

# Model Design

# Effects of Buildings on Wind Velocity

Because buildings surround the fountain, the wind velocity at the anemometer on top of a building is different from that at fountain level. Buildings disrupt wind currents, slow the wind, and change its direction [Liu 1991, 62]. Buildings create areas of increased turbulence, as well as a wake—an area of decreased pressure—behind the building. Thus, the behavior of wind after it passes a building is so complex as to be almost impossible to model. Hence, we assume that the fountain is located outside of the wakes of the buildings.

# Wind Speed Reduction

The wind inside a group of buildings is less than that outside of the group; the interaction between the wind and the buildings causes a decrease in speed. The drag between the building and the wind decreases the kinetic energy of the wind and hence its speed.
Since the fountain is squirting water into the air in a symmetrical shape, the wind affects where the water lands in the same way regardless of the wind's direction; so there is no need to find the wind direction after it hits the building.

# Drag

Nevertheless, wind direction before the wind hits the building is an important factor. The angle at which the wind hits the building changes the surface area that the wind interacts with, and drag changes with area. The drag force $\vec{F}_d$ is given by

$$
\vec {F} _ {d} = \frac {1}{2} \rho v _ {\mathrm {b h}} ^ {2} C _ {d} A,
$$

where $\rho$ is the density of air, $v_{\mathrm{bh}}$ is the speed of wind at height $h$ , $C_d$ is the drag coefficient, and $A$ is the surface area interacting with the wind. Therefore, we must know from which angle the wind approaches the building and how this affects the surface area perpendicular to the direction of the wind.

For a rectangular building with the narrow face to the wind, $C_d = 1.4$ [Macdonald 1975, 80].

Figure 2 diagrams the plaza and fountain. No matter which way the wind blows, it interacts with a narrow edge of a building. Wind from due east or west creates a problem for this model because of a discontinuity in the drag coefficient. Instead, we assume that the coefficient remains constant.

![](images/76903516ec7fe3dceb6872b7d55f96fa84acdf88b1e9721d2ca83c44ea6658e6.jpg)
Figure 2. The plaza.

# Wind Speed at Differing Heights

The speed of wind changes with height from the ground because there is an additional force on the wind due to surface friction (dependent on the surface characteristics of the ground). The effect of this friction decreases as the wind speed is measured at a greater distance from the ground, creating faster speeds at greater heights.

Wind speed also varies because the temperature varies with height and location.
However, if we assume that temperature and ground roughness are constant, a mean speed at a certain height can be modeled by

$$
v _ {\mathrm {b h}} = v _ {z} \left(\frac {h}{z}\right) ^ {\alpha}, \tag {1}
$$

where $v_{\mathrm{bh}}$ is the speed of the wind before it hits the building, $v_{z}$ is the wind speed measured by the anemometer at the height $z$ of the building, $h$ is the variable height of the water, and $\alpha$ is the terrain constant. We use $\alpha = 0.105$ , the value for ground roughness of a city center [Macdonald 1975, 48].

We assume that the maximum height $h_{\mathrm{max}}$ that the water reaches does not exceed the height of the building, so we can neglect the drag from the building's roof (since the wind that goes over the building does not interact with or affect the water in the fountain).

# Converting Drag to Work

We need to convert the drag force into a form that will enable us to determine the actual loss of speed. Since drag is a nonconservative force (energy is lost during its application), we can use conservation of energy in the form that says that the initial kinetic energy $K_{i}$ minus the work $W_{\mathrm{NC}}$ done by the nonconservative force equals the final kinetic energy $K_{f}$ , or

$$
K _ {i} = K _ {f} + W _ {\mathrm {N C}}. \tag {2}
$$

For the $K$ terms, we use the kinetic energy equation $K = \frac{1}{2} mv^2$ . For $K_{i}$ , we have $v_{\mathrm{bh}}$ ; for $K_{f}$ , we have $v_{h}$ .

Work is the dot product of the force and the displacement over which the force acts, or

$$
W _ {\mathrm {N C}} = \vec {F} _ {d} \cdot \vec {d}.
$$

The work done is the drag force exerted by the building on the wind multiplied by the distance that the wind travels along the sides of the building.

With substitution, we find

$$
W _ {\mathrm {N C}} = \frac {1}{2} \rho v _ {\mathrm {b h}} ^ {2} C _ {d} A d.
\tag {3}
$$

The drag coefficient $C_d$ is for the entire building. However, we cannot have the entire building's drag force acting on the speed at a specific height, or we will overestimate the influence of the drag. Instead, we find the average drag per meter of the building. To do this, we divide (3) by the height $z$ of the building, then substitute the result into (2):

$$
\frac {1}{2} m v _ {\mathrm {b h}} ^ {2} = \frac {1}{2} m v _ {h} ^ {2} + \frac {\frac {1}{2} \rho v _ {\mathrm {b h}} ^ {2} C _ {d} A d}{z}.
$$

Using (1), we can find $v_{\mathrm{bh}}$ at any height $h$ ; but the equation still has several unknowns that stop us from solving for $v_{h}$ : the mass $m$ , the area $A$ , and the distance $d$ .

# Mass of Air

The mass of wind that interacts with the building in time $t$ at height $h$ is

$$
m = v _ {\mathrm {b h}} A \rho t.
$$

For convenience, we use the mass of air that crosses the building face in $t = 1$ s.

# Surface Area Interacting with Wind

As shown in Figure 3, the surface area as it relates to the drag due to wind is the cross section of the building perpendicular to the wind.

![](images/2f20764c89944e040d2b7b6f77e6842a0069fb414a78ef0347126fa9efc153e6.jpg)
Figure 3. Orientation of wind to building.

Therefore, the surface area of the building based on the angle $\theta$ at which the wind strikes the building of width $b$ is found using trigonometry:

$$
A = (b | \cos \theta | + 2 b | \sin \theta |) z,
$$

where $z$ is the height of the building. We take the absolute value of the cosine and sine because we use the direction of the wind measured by the anemometer in terms of a $360^{\circ}$ compass.

# Distance

The distance $d$ that the wind goes over the building is $3b$ , the length of one side ( $2b$ ) plus the width ( $b$ ), because the wind curves around the building.

# Combining the Equations

Combining, solving for $v_{h}$ , and using $\alpha = 0.105$ gives the speed $v_{h}$ at height $h$ .
[EDITOR'S NOTE: We do not reproduce the complicated expression.]

# Height of the Fountain

We find a function for the maximum height $h_{\max}(R)$ of the fountain in terms of the rate of flow $R$ . We assume that the water acts as an ideal fluid and that the fountain shoots water straight into the air in a single sustained stream.

# Volume Flow Rate and Bernoulli's Equation

We have from Halliday et al. [2001, 334]

$$
R = A _ {p} v _ {f}, \quad \mathrm {or} \quad v _ {f} (R) = \frac {R}{A _ {p}} = \frac {R}{\pi r _ {p} ^ {2}},
$$

where $R$ is the rate of flow, $v_{f}$ is the speed of the water leaving the nozzle, $A_p$ is the cross-sectional area of the pipe, and $r_{p}$ is the radius of the pipe.

Based on the effect that we want the fountain to have, we give the water column (the pipe at the tip of the nozzle) a 6-cm diameter, hence a radius of $0.03\mathrm{m}$ .

We use Bernoulli's equation [Halliday et al. 2001, 336], which relates forms of energy in a fluid, to calculate the maximum height of the water as it shoots into the air:

$$
p _ {1} + \frac {1}{2} \rho v _ {1} ^ {2} + \rho g y _ {1} = p _ {2} + \frac {1}{2} \rho v _ {2} ^ {2} + \rho g y _ {2},
$$

where $p_1$ and $p_2$ are the pressures of the water (both are zero, since we are looking only at the water in the air) and $g$ is the acceleration due to gravity. At the initial point, we take the height of the nozzle to have zero gravitational potential energy, so the term $\rho g y_1$ equals zero, and the speed $v_1$ is the exit speed $v_f(R)$ . At the endpoint, the water has height $h_{\max}$ and the kinetic energy is zero. Substituting and simplifying gives

$$
h _ {\mathrm {m a x}} (R) = \frac {\left(\frac {R}{\pi r _ {p} ^ {2}}\right) ^ {2}}{2 g}.
$$

With the radius $r_p$ constant, the height of the top of the water stream varies directly with the square of the rate of flow $R$ .
Figure 4 shows the heights for values of $R$ between 0 and $0.04\mathrm{m}^3/\mathrm{s}$ of water. Whatever mechanism pumps the water must be able to vary the flow rate by small amounts, particularly for large $R$ , to maintain the maximum height allowable for the wind conditions.

![](images/2c731eee52a12f8e284a752f4dd041ba54401bc476ddabaa511e0a5cffd3376b.jpg)
Figure 4. The effect of rate of flow on height of the fountain.

# The Effect of Wind on the Water Ascent

# Radius Change in Ascent

Photos of fountains show that the water ascends as a slowly widening column until it reaches its maximum height, then falls back on itself and scatters. We can derive an expression that shows the change in the radius as the cylinder of water ascends; but since the change is very small, on the order of $1\mathrm{mm}$ , we use the initial radius at the nozzle, $r_p$ , in our calculations.

# Wind Effects in Ascent

The other contributor to the water's horizontal movement is the wind, whose force can be determined from pressure. Pressure is force exerted over an area, so pressure multiplied by area gives the force:

$$
P = \frac {F _ {c}}{A _ {c}},
$$

where $P$ is pressure, $F_{c}$ is force, and $A_{c}$ is area. The cylinder of water has height $h_{\mathrm{max}}$ and width twice the radius $r_p$ , so $A_{c} = 2r_{p}h_{\mathrm{max}}$ .

We find pressure in terms of wind speed using

$$
P = \frac {1}{2} \rho v _ {h} ^ {2},
$$

where $\rho$ is the density of air and $v_{h}$ is the speed of wind at height $h$ . So we have

$$
\frac {1}{2} \rho v _ {h} ^ {2} = \frac {F _ {c}}{A _ {c}} = \frac {F _ {c}}{2 r _ {p} h _ {\mathrm {m a x}}}.
$$

Solving for $F_{c}$ gives

$$
F _ {c} (R) = \rho r _ {p} h _ {\mathrm {m a x}} v _ {h} ^ {2} = \rho r _ {p} \frac {\left(\frac {R}{\pi r _ {p} ^ {2}}\right) ^ {2}}{2 g} v _ {h} ^ {2}.
$$

Since $F = ma$ , finding the mass will lead us to the acceleration.
The mass equation is pleasantly simple: multiplying the flow rate by the time that the water spends ascending, $t_{\mathrm{total}}/2$, gives the volume of water in the air, and multiplying that volume by water's density gives the total mass, $m_T$:

$$
m_{T}(R) = R\, \frac{t_{\mathrm{total}}(R)}{2}\, \rho_{\mathrm{water}},
$$

where $\rho_{\mathrm{water}} = 1000\ \mathrm{kg/m^3}$. We solve for the acceleration:

$$
a_{c}(R) = \frac{F_{c}(R)}{m_{T}(R)}.
$$

We use kinematics yet again to find how far the center of the water cylinder shifts, $x_{c}$, by the time it reaches the top of its ascent:

$$
x_{c}(R) = \frac{1}{2} a_{c}(R) \left( \frac{t_{\mathrm{total}}(R)}{2} \right)^{2}.
$$

# The Effect of Wind on Water Descent

The water's surface tension holds it in a very cylinder-like column during its ascent; but when the water reaches the top of its path, it runs out of kinetic energy and begins falling. Modeling the erratic behavior of the fall is somewhat difficult. At that point, turbulence caused by the competition between the gravitational force and the momentum of the ascending water overcomes the water's surface tension, and smaller bodies of water descend individually.

Since we are concerned about the bystanders' level of dryness, we want the fountain to shoot to a height that will keep within the fountain's basin all particles with the potential to dampen the onlookers. Anything smaller than a drop from a common eyedropper, about $0.05\ \mathrm{mL}$, will not considerably moisten a person. Therefore, we want to hold the fountain to a height that will not let the wind carry a drop of this size outside the fountain's basin.

Anything larger than such a drop has a greater mass-to-surface-area ratio, so it does not accelerate as much nor travel as far. So we need to model the flight of only such a drop.

We assume that the drop behaves as a rigid body.
This assumption neglects the forces that act internally in the fluid and thereby overestimates the effect of the wind. Thus, this assumption may lower the maximum height of the water but will not result in any excessive water hitting bystanders.

Since the drop behaves as a rigid body, Newton's second law applies: The sum of all the external forces equals the mass of the drop of water, $m_{d}$, times the net acceleration:

$$
\sum \vec{F}_{d} = m_{d} \vec{a}_{d}.
$$

The only significant forces are the wind (parallel to the ground) and gravity (vertical). The wind force $F_{d}$ we know from $P = F_{d}/A_{d}$; we can calculate the surface area $A_{d}$ of the drop, and we know the pressure of the wind at height $h$. So we can calculate the force on the drop as a function of height. Dividing that force by the mass of the droplet gives its acceleration $a_{d}$ parallel to the ground as a function of height $h$:

$$
a_{d}(h) = \frac{P(h)\, A_{d}}{m_{d}}.
$$

Unfortunately, this acceleration depends on height, which is a function of time $t$ in the air and the rate of flow $R$; so we cannot use the constant-acceleration kinematics equations. Also, the nature of the equation for acceleration makes integration with respect to time an unwieldy task. We can, however, get the average acceleration by integrating the acceleration from the time at the peak to the total time in the air and dividing by half of the time in air:

$$
a_{\mathrm{avg}} = \frac{\int_{t_{\mathrm{total}}/2}^{t_{\mathrm{total}}} a_{d}\big(h(t, R)\big)\, dt}{t_{\mathrm{total}}/2}.
$$

Using $a_{\mathrm{avg}}$ as a constant, we can find the displacement $x_{d}$ of the drop in the horizontal direction. We know that

$$
x - x_{0} = v_{0} t + \frac{1}{2} a t^{2}.
$$

Applying this equation to the motion of the drop, we see that

$$
x_{d}(R,t) = \left[ \frac{d}{dt} r_{c}(R, t_{\mathrm{total}}/2) + \frac{d}{dt} x_{c}(R, t_{\mathrm{total}}/2) \right] t + \frac{1}{2} a_{\mathrm{avg}}(R) \left( \frac{t}{2} \right)^{2} + x_{c}(R, t_{\mathrm{total}}/2) + r_{p}.
$$

Combined with the $y$ position of the droplet, we get the flight path of Figure 5.

![](images/717a0d3e9835b8256c184a5d8c522fac4ceeb9cf688745ee1462dc2458c89cee.jpg)
Figure 5. Water path from the fountain due to wind.

The initial speed is the sum of the rates of change of two position functions, $r_c(R,t)$ (the radius of the column) and $x_{c}(R,t)$ (the displacement of the column due to the wind), evaluated at the moment the drop separates from the column. In addition, the equation shows that the initial displacement, $x_0$, is the initial radius of the column, $r_p$, plus the distance that the wind pushes the center of the cylinder, $x_{c}(R,t)$, over the time that it takes a particle of water to reach its peak, $t_{\mathrm{total}}/2$.

The displacement depends on the rate of flow. This is useful, since we must moderate the rate of flow to control the amount of water that escapes the basin. We can now set the displacement $x_{d}(R,t)$ equal to the maximum allowable displacement—the radius of the basin—and solve for the rate of flow.

# The Optimal Rate of Flow

Our computer algebra system choked on solving for $R$ exactly in terms of the other parameters. Instead, we adopted an incremental approach with a simple program in MathCAD. The maximum value for $R$ could be anything; but an available off-the-shelf industrial pump has a maximum value of $0.04\ \mathrm{m}^3/\mathrm{s}$ [Fischer Process Industries 2002]. We set that as the upper limit for $R$.
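The incremental search can be sketched as follows; here `displacement` is a hypothetical monotone stand-in for the drop-displacement function $x_d(R)$, not our full model:

```python
def find_flow_rate(displacement, basin_radius, R_max=0.04, dR=0.001):
    """Largest flow rate R (m^3/s), stepped in increments of dR,
    whose drop displacement still lands inside the basin."""
    best = None
    n = 1
    while n * dR <= R_max + 1e-12:   # avoid float drift at the pump limit
        R = n * dR
        if displacement(R) > basin_radius:
            break                    # water would escape the basin
        best = R
        n += 1
    return best

# hypothetical stand-in: displacement growing with the square of R
print(round(find_flow_rate(lambda R: 2000 * R**2, basin_radius=2.0), 3))  # → 0.031
```

Any monotone displacement function could be substituted; the search simply returns the last step before the basin radius is exceeded.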
We set $R = 0.001\ \mathrm{m}^3/\mathrm{s}$ and increment it in steps of $0.001\ \mathrm{m}^3/\mathrm{s}$ until the displacement is greater than the radius of the basin.

# Results and Discussion

We discuss how well our model handles each of the six variables that affect the solution:

- the fountain nozzle radius,
- the height of the building,
- the wind speed,
- the angle between the wind and the building,
- the building width, and
- the fountain radius.

# Wind Speed

As the wind speed increases, the flow rate must decrease to keep the water inside the fountain basin. Since the flow rate decreases, the height should also decrease, because less water is forced through the nozzle, causing a lower initial speed. Does our model reflect these phenomena? Yes, it does.

# Angle Between Wind and Building

The angle has no apparent effect on the solution our model produces. How is this possible? It is possible because our fountain is surrounded by buildings. The way we calculated the buildings' effect on the wind created similar effects at any angle.

Variations were indeed present when we calculated how much the wind was slowed by the building depending on the angle; however, these variations were too small to affect the fountain's setting.

# Nozzle Radius

A smaller radius at a given flow rate means a higher speed. As the radius increases, so does the flow rate, until the maximum rate is reached.

What does this do to the height of the fountain? Height increases as the radius increases (because the rate of flow increases as well) until the maximum rate is reached. If the radius keeps increasing, the flow remains constant through a larger opening, causing a lower speed and therefore a lower height.

# Height of Building

As the height of the building increases, the height of the fountain could increase as well. Our model doesn't accurately reflect this. The problem most likely lies in our drag calculations, the only place where building height shows up.
Both the wind angle and the building height depend on the accuracy of our drag assumptions, and both have produced questionable results.

# Building Width

We finally find some data suggesting that our drag equation is at least partially correct. As the width of the building increases, more surface area is created for wind/building interaction. The increased surface area leads to more drag, a lower wind speed by the time the wind reaches the fountain, and therefore a higher flow rate and a greater height of the fountain.

# Fountain Radius

Our goal is to keep the water contained in the fountain basin. If the basin radius increases, we can shoot the water higher up into the wind and still have it land in the basin: both the allowable rate of flow and the height increase with the basin radius.

# Summary and Conclusions

Our task was to develop a model to take inputs of wind speed and direction measured on a rooftop and use them to regulate flow through a nearby fountain.

By breaking the problem down into parts, we developed a model that produces believable results; and we have shown how our model responds to different inputs.

The effects of wake formation and wind interaction against a building are the two biggest problems. We assumed that the wake has no effect, and we dealt with the building interaction—but our findings raise questions. Clearly, the best test would be a fountain in a wind tunnel.

It would make sense to install an anemometer in the fountain structure and use wind-speed readings from the fountain itself. Direction and surroundings would be insignificant; only the wind speed at the fountain would matter.

We also assume that wind gusts do not occur, so there must somehow be a warning to the fountain that a gust is coming. Perhaps the rooftop anemometer could gauge the change in wind speed and send a signal to the fountain to reduce flow until the wind speed returns to normal.

# References

Bertin, John J. 1984.
Engineering Fluid Mechanics. Englewood Cliffs, NJ: Prentice-Hall.

Fischer Process Industries. 2002. http://64.44.169.181:8080/examples/jsp/fpvcom/fpvresults.jsp. Accessed 11 February 2002.

Floating Fountains. 1998. http://www.atlanticmountains.com/floatingfountains.htm. Accessed 11 February 2002.

Halliday, David, Robert Resnick, and Jearl Walker. 2001. Fundamentals of Physics. 6th ed. New York: Wiley.

King, Steven. 1999. Air Movement. http://fridge.arch.uwa.edu.au/topics/thermal/airflow/airflow.html. Accessed 11 February 2002.

Liu, Henry. 1991. Wind Engineering: A Handbook for Structural Engineers. Englewood Cliffs, NJ: Prentice-Hall.

Macdonald, Angus. 1975. Wind Loading on Buildings. New York: Wiley.

Mooney, Douglas, and Randall Swift. 1999. A Course in Mathematical Modeling. Washington, DC: Mathematical Association of America.

Tritton, D.J. 1977. Physical Fluid Dynamics. New York: Van Nostrand Reinhold.

Yuan, Jinchao. 2002. Wind Simulation Around Building Clusters by Numerical Methods. http://www.arche.psu.edu/courses/ae597J/Jinchao.pdf. Accessed 11 February 2002.

# A Foul-Weather Fountain

Ryan K. Card

Ernie E. Esser

Jeffrey H. Giansiracusa

University of Washington

Seattle, WA

Advisor: James Allen Morrow

# Introduction

We devise a fountain control algorithm to monitor wind conditions and ensure that a fountain at the center of a plaza fires water high enough to be dazzling while not drenching the pedestrian areas surrounding the fountain.

We construct a model of a fountain based on the physics of falling water droplets considered as a particle system. We examine the behavior of a fountain under various wind conditions through computer simulation. Using complex analytic techniques, we model the wind flow through the plaza and estimate how anemometer readings from a nearby rooftop relate to plaza conditions.
+ +We construct four algorithms—two intelligent algorithms, a conservative approach, and an enthusiastic system—to control the fountain. + +We devise a measure of unacceptable spray levels outside the fountain and use this criterion to compare performance. First, we examine the behavior of these algorithms under general abstract wind conditions. Then we construct a wind signal generator that simulates the conditions of several major cities from meteorological database data, and we compare the performance of our control systems in each city. + +Simulations show that the Conservative and Enthusiastic algorithms both perform unacceptably in realistic conditions. The Weighted Average Algorithm works best in gusty cities such as Chicago, but the Averaging Algorithm is superior in calmer cities such as Los Angeles and Seattle. + +The control algorithm cannot possibly respond to changes in conditions at anything below the 10 s scale, since wind is highly variable and the response of the anemometer is somewhat slow [Industrial Weather Products 2002]. The goal is therefore to design the algorithm to operate on a time scale of 10 s up to a couple of hours and adapt the height of the fountain to a maximum safe level. + +# Model of the Water Jet + +We model the spray from the fountain as a particle system. As water droplets spew forth from the nozzle, they are subjected to forces (gravity, air drag, turbulence, etc.). We formulate a simplified differential equation governing the motion and then numerically integrate to find the trajectory for each droplet. This equation is based on a physically realistic model of small droplets (around $1\mathrm{mm}$ radius) and we scale it up to an effective model for larger clumps of water (up to $10\mathrm{cm}$ across) because the physics of turbulence and viscosity at the larger scale cannot be computed accurately. 
+ +We need the following assumptions: + +- The drag force is proportional to the square of the speed and to the square of the radius [NASA 2002]. +- Droplets break into smaller droplets when subjected to wind. Breakup rate is proportional to relative wind speed and surface area [Nobauer 1999]. +- When a droplet breaks, turbulence causes the new droplet fragments to move slightly away from their initial trajectory. + +# Modeling a Single Droplet + +We formulate the motion of a water droplet as + +$$ +m \frac {d \vec {v}}{d t} = - m g \hat {z} + \eta | w | ^ {2} \hat {w} r ^ {2}, +$$ + +where $\vec{v}$ is the velocity, $\vec{w}$ is the wind velocity relative to the motion of the droplet (wind vector minus velocity vector), $m$ and $r$ are the droplet's mass and radius, and $\eta$ is a constant of proportionality. According to the Virtual Science Center Project Team [2002], a raindrop with radius $1\mathrm{mm}$ falls at a terminal velocity of $7\mathrm{m / s}$ ; so we determine that $\eta = 0.855\mathrm{kg / m^3}$ . Large drops fall quickly; very tiny drops fall very slowly, mimicking a fine mist that hangs in the air for a long time. + +We assume droplet breakup is a modified Poisson process, with rate + +$$ +\lambda_ {\mathrm {b r e a k u p}} = \lambda_ {0} | w | r ^ {2}. +$$ + +If the breakup rate did not depend on variable parameters $|w|$ and $r^2$ , the process would be a standard Poisson process. We determine $\lambda_0$ by fitting the water stream of our fountain to the streams of two real fountains: the Jet D'Eau of Geneva, Switzerland, and the Five Rivers Fountain of Lights in Miami, Florida. + +When a breakup occurs, we split the droplet into two new droplets and divide the mass randomly, using a uniform distribution. Air turbulence tends to impart to the two new droplets a small velocity component perpendicular to the relative wind direction $\vec{w}$ . 
This effect causes a tight stream of water to spread + +out as it travels, even under zero-wind conditions. We let this velocity nudge have magnitude $2\%$ of the particle's speed relative to the air and a random perpendicular direction. We give the two drops equal and opposite nudges. + +# Putting Water Drops Together to Make a Fountain + +We define the water jet as a stream of large water drops. Their size is roughly the size of the nozzle, and they leave with an initial velocity equal to the nozzle's output velocity (Figure 1). + +![](images/f1727001403134a330a35024e8df504c1d31da950f62cd5164284157e7ae4988.jpg) +Figure 1. A continuous water jet is approximated by a discrete stream of water blobs. + +The water blobs leave at a rate such that the flux of water is equal to the flux given by a nozzle-sized cylindrical stream moving at the same speed. + +To model the turbulence in the jet as the water leaves the nozzle, we give each water blob a normal distribution of radius and initial speed: + +- The standard deviation of blob radii is $10\%$ of the nozzle size. +- The standard deviation of initial speeds is $5\%$ of the initial speed. +- The blobs leave with an angular spread of $3^{\circ}$ , consistent with industrial high-pressure nozzles [Spray Nozzles 2002]. + +Wind drag in particle streams is significantly reduced for particles following one another closely (NASCAR drivers and racing cyclists are intimately familiar with this phenomenon). These effects are already incorporated into the dynamics of large water blobs (which can be thought of as representing many small drops moving together). We therefore consider this to be an effective model for large drops rather than a realistic interaction model. + +# Fitting the Fountain + +The Five Rivers Fountain of Lights in Daytona, Florida, is one of the largest fountains in the world. It consists of several water jets, and on low-wind days each propels a water stream $60\mathrm{m}$ high and $120\mathrm{m}$ out. 
The Jet D'Eau in Geneva, Switzerland, another impressive fountain, shoots a 30-cm-diameter stream of water at $60\ \mathrm{m/s}$ straight up. The water reaches a height of $140\ \mathrm{m}$ and on an average breezy day (wind speed $5\ \mathrm{m/s}$) returns to earth approximately $35\ \mathrm{m}$ downwind from the nozzle [Micheloud & Cie 2002] (Figure 2).

![](images/f3b5003884441c48f0d4e3700154d808b407275c2c146992e2e9649b83132f78.jpg)
Figure 2. The Jet D'Eau and the Fountain of Lights.

To determine $\lambda_0$, we first match our geometry to the Five Rivers Fountain of Lights. We fix $\lambda_0$ so that, with an initial velocity such that the stream reaches a height of $60\ \mathrm{m}$, it returns to the ground at a distance of just over $100\ \mathrm{m}$. Too large a $\lambda_0$ results in the water breaking up too quickly into tiny droplets, which have a much lower terminal velocity and thus fail to reach the desired distance; too small a value produces an unrealistically small amount of spray, and the water blob travels too far. The results are summarized in Table 1.

We set $\lambda_0 = 5000$. The results are highly insensitive to this parameter; varying $\lambda_0$ by a factor of 2 causes only a $15\%$ change in the distances. Therefore, even though our method for determining this parameter is fairly rough, the important behavior is much more strongly affected by other parameters.

Table 1. Comparison between real fountains and our model.
|  | Jet D'Eau (real) | Jet D'Eau (model) | Five Rivers Fountain (real) | Five Rivers Fountain (model) |
|---|---|---|---|---|
| Height (m) | 140 | 121 | 60 | 62 |
| Distance (m) | 35 | 30 | 120 | 100 |

We conclude from this comparison that our model reproduces the spray patterns of extreme fountains to an accuracy of about $15\%$. We expect that for a plaza-sized fountain, our model will be more accurate, since our formulas for breakup and drag force are derived under less extreme conditions.

# Wind Flow Through the Plaza

Buildings and other structures in an urban environment can cause significant disturbances to wind flow patterns; rooftop and street-level conditions can often be quite different, so readings from a rooftop anemometer could be biased. To model the plaza wind, we assume:

- There are no significant structures between the buildings bordering the plaza.
- The plaza is large, so effects caused when wind flow leaves the plaza are negligible at the plaza center; the significant effects are caused entirely at the inward boundary passage.
- The air flow is smooth enough that turbulent vortices are negligible.

# Formulation

We approximate the geometry of the plaza as in Figure 3 and use complex analytic flow techniques [Fisher 1990, 225].

![](images/4d214261ee35cf81e31f4d35387848da43a13252c5377ac5dc930427b45b6e66.jpg)
Figure 3. Schematic representation of the relevant features of the plaza.

With a Schwarz-Christoffel mapping of a smooth horizontal flow from the upper half of the complex plane onto the region above the plaza, we obtain a flow function for the wind as it enters the plaza area:

$$
\Gamma_{c}(t) = \frac{h_{0}}{\pi} \left\{ \left[ (t + ic)^{2} - 1 \right]^{1/2} + \log\left( t + ic + \left[ (t + ic)^{2} - 1 \right]^{1/2} \right) \right\},
$$

where $t$ parametrizes a streamline for each value of $c$. These streamlines are plotted in Figure 4, where the acceleration of the wind as it passes over the building edge and the decreased velocity in the plaza are both clearly visible.
+ +The flow velocity $\vec{v}$ is inversely proportional to the streamline spacing, so the horizontal component of it is + +$$ +v _ {x} = \operatorname {I m} \left[ \frac {\partial \Gamma_ {c}}{\partial c} \right]. +$$ + +![](images/7bf5fbf99e41724b57dde2051558af342f9ec1fe42e343a813cd707320965c48.jpg) +Figure 4. Streamlines for wind flow entering the plaza; decreased wind speed at the plaza level is apparent. Note the highly increased wind speed near the edge of the building. + +The horizontal velocity profile for a streamline that passes about $3\mathrm{m}$ above the building roof (corresponding to $c = 0.6$ ) is plotted in Figure 5; $3\mathrm{m}$ is a reasonable height for an anemometer mounting. From these graphs, one can see that the wind speed through the plaza center (at a distance of 30 to $40\mathrm{m}$ from edge) is approximately half of the rooftop wind speed. + +![](images/38652124c2136429427645dae2921a6034babf2f18ee68be760ce37221edc28f.jpg) +Figure 5. Horizontal velocity profile for the streamline corresponding to $c = 0.6$ . This streamline passes above the building's roof at a height of $3\mathrm{m}$ , a reasonable anemometer mounting height. + +This calculation is validated by its excellent agreement with the findings of Santamouris and Dascalaki [2000], who report that in flows perpendicular to a street the ground-level speeds are between zero and $55\%$ of the free-stream speeds. + +# Results + +We conclude from this flow model: + +- Placement of the anemometer is important! It should be mounted near the center of the rooftop to minimize disturbances from the roof's edge. +- The anemometer reports a wind speed that is highly biased! Plaza-level wind moves approximately half as fast as the roof-level wind. +- Wind speeds are spatially constant within the plaza airspace. If the fountain is not significantly higher than the surrounding buildings, then spatial wind variation can be safely ignored. 
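The flow map above is straightforward to evaluate numerically. The sketch below computes points on a streamline and approximates the horizontal component $v_x = \operatorname{Im}[\partial \Gamma_c / \partial c]$ by a central difference; the value of $h_0$ and the sample points are our own illustrative choices, not from the paper:

```python
import cmath

def gamma(t, c, h0=30.0):
    """Flow map Gamma_c(t): a point on the streamline labeled c."""
    z = t + 1j * c
    s = cmath.sqrt(z * z - 1)            # [(t + ic)^2 - 1]^(1/2)
    return (h0 / cmath.pi) * (s + cmath.log(z + s))

def v_x(t, c, dc=1e-6):
    """Horizontal wind component Im[dGamma/dc] by central difference."""
    return ((gamma(t, c + dc) - gamma(t, c - dc)) / (2 * dc)).imag

# sample the c = 0.6 streamline (the one passing ~3 m above the roof)
ts = [0.5, 1.0, 1.5, 2.0]
streamline = [gamma(t, 0.6) for t in ts]
speeds = [v_x(t, 0.6) for t in ts]
```

Principal branches of the complex square root and logarithm suffice for streamlines above the roof; tracing streamlines that pass close to the building edge would require more care with branch cuts.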
# Modeling Wind Variation Over Time

The control system must be able to handle a range of weather conditions, from calm to strongly gusty. We abstract the wind patterns into three generalized types of increasing complexity:

- Type 1: A low-intensity constant breeze of a few m/s, meant to test the algorithm's ability to judge the proper height for a given wind speed.
- Type 2: A breeze varying smoothly over a timescale of a couple of minutes. We use a sinusoidal oscillation in magnitude and direction, with a constant term to reflect the prevailing wind direction of the hour. This type tests the algorithm's ability to adapt to slowly changing conditions.
- Type 3: Sudden unexpected wind gusts, of a few seconds' duration and very high intensity. We model the occurrence of a gust as a Poisson process and distribute the gust durations and intensities normally. The mean and variance are chosen to produce reasonable results. This is perhaps the most important test, since the gusty scenario can easily fool a naive algorithm.

# Generating a Realistic Wind Signal

We parametrize the wind profile of a location by four numbers:

- the mean steady wind $\mu_{\mathrm{steady}}$,
- the mean gust strength $\mu_{\mathrm{gust}}$, where a gust is defined to be variation on the sub-15-s timescale,
- the mean gust duration $t_{\mathrm{gust}}$, and
- the gust deviation $\sigma_{\mathrm{gust}}$.

From WebMET data [2001], we estimate these characteristic numbers for some major U.S. cities (Table 2).

We construct realistic wind signals from these characteristic numbers to correspond to our types:

Table 2. Characteristic parametrization of several major U.S. cities. These parameters specify the plaza wind conditions, which are slightly milder than the free-stream conditions.
|  | $\mu_{\mathrm{steady}}$ (m/s) | $\mu_{\mathrm{gust}}$ (m/s) | $t_{\mathrm{gust}}$ (s) | $\sigma_{\mathrm{gust}}$ (m/s) |
|---|---|---|---|---|
| Seattle, WA | 1.2 | 2.25 | 6.0 | 0.7 |
| Chicago, IL | 2.0 | 4.0 | 3.0 | 4.0 |
| Boston, MA | 2.3 | 4.2 | 4.0 | 2.2 |
| Los Angeles, CA | 1.7 | 2.0 | 3.0 | 0.7 |
| Washington, DC | 1.3 | 3.4 | 3.0 | 1.0 |
- Type 1: constant wind of strength $\frac{2}{3}\mu_{\mathrm{steady}}$,
- Type 2: sinusoidal oscillations of amplitude $\frac{2}{3}\mu_{\mathrm{steady}}$, and
- Type 3: a gust signal with mean amplitude $\mu_{\mathrm{gust}}$, amplitude standard deviation $\sigma_{\mathrm{gust}}$, duration mean $t_{\mathrm{gust}}$, and duration deviation $\frac{1}{2} t_{\mathrm{gust}}$.

Figure 6 shows a comparison of wind signals for Seattle and Chicago; the extreme gustiness of the "Windy City" is apparent.

![](images/de0ed8009e73dcf1ebb6116d513d23e2b794cd7b60aae88b812ab678d9e9fdef.jpg)
Figure 6. Wind signals for Seattle and Chicago.

We also create a "Hurricane Floyd" wind profile by multiplying a Chicago wind signal by a factor that damps the wind to zero early on (the calm before the storm) and then amplifies it to hurricane level over a period of 10 min.

# Fountain Control Algorithms

The goal of the control algorithm is to respond to the anemometer data by maximizing the height of the fountain while minimizing the probability of the plaza area outside the fountain pool being drenched. The control algorithm has access to anemometer readings and direct control over the nozzle speed.

The control system must have some knowledge of how the water-spray travel distance relates to nozzle speed and wind speed. We develop two complementary measures of spray distance and tabulate the relationship between them and nozzle/wind conditions. The algorithms that we develop combine this table with an estimate of possible future wind speed (based on the current wind and a stored recent history) to decide on a good nozzle speed.

# Measures of Water Spray Distance

Our measures of spray distance are

- the radius within which $99\%$ of sprayed water lands, and
- the radius outside which the density of landing water falls below a threshold for acceptable wetness, corresponding roughly to a light rain: $1\ \mathrm{cm}$ in $10\ \mathrm{h}$ ($2.8 \times 10^{-4}\ \mathrm{mm/s}$).
+ +In simulations over a suitably long time period, we find that these two measures agree to within $1\%$ . + +We evaluate the performance of our control algorithm by measuring how the spray distance compares to the actual radius of the pool. If the spray radius goes beyond the pool radius, then people might become unacceptably wet. However, if this radius is significantly less than the pool radius, then we are not getting as much height out of the fountain as we could. + +# Constructing the Control System + +We begin with a few useful assumptions: + +- Variation in wind direction can be safely ignored. We use the triangle inequality: If the wind pushes a drop first in one direction and then in another, it will necessarily land nearer to the fountain than if it had been pushed in one direction continuously. +- The algorithm has access to real-time anemometer data averaged over 10-s intervals as well as at least a 10-min history of measurements. Even if the anemometer responds faster than 10 s, it is nonsensical to vary the fountain any faster than this, because the water requires approximately this much time to complete its flight. + +- For concreteness, we focus on the plaza configuration of Figure 7. Most importantly, the fountain is at the center of a circular pool of radius $5\mathrm{m}$ . + +![](images/daee2ddfa0038ea09923009f8e52425ec20e9ab50c9db7f9c9c915ac5caf702a.jpg) +Figure 7. The layout of our hypothetical model plaza. + +We display the spray distance as a function of wind speed and nozzle speed in Figure 8. + +An estimate of how far a water droplet can travel starting at height $z_0$ , falling at its terminal velocity $v_t$ , and moving at horizontal wind speed $w$ is + +$$ +\mathrm {d i s t a n c e} \approx \frac {z _ {0} w}{v _ {t}}. +$$ + +The smallest droplets that our simulations produce have radii of about $1\mathrm{mm}$ with corresponding terminal velocity $7\mathrm{m / s}$ . 
For specific heights and winds, we find that this rough estimate is usually within $30\%$ of the corresponding minimum safe distance shown in Figure 8, a good indication that our simulations produce reasonable results.

# The Control Algorithms

We formulate four control algorithms:

- Averaging Algorithm: This algorithm considers an average of the previous 10 min of wind data and the sample variance. The worst-case scenario is estimated to be a wind strength of one standard deviation above the average.
- Weighted Average Algorithm: The key feature of this algorithm is that the data of the last 10 min are weighted linearly according to recentness. The current measurement gets the highest weight.
- Conservative Algorithm: This algorithm uses the maximum wind speed measured over the last 10 min to predict the worst-case wind. This is the most conservative approach—it will always err towards safety.
- Enthusiastic Algorithm: This algorithm ignores previous wind-data history and puts the fountain to the maximum safe height given immediate conditions. No precaution is taken with regard to possible future wind behavior.

![](images/222dd4f89a6547f9076ce56cc14bee7a15e81de0d34dca3a48f26fc170f433e0.jpg)
Figure 8. Linearly interpolated table of spray distance as a function of wind speed and nozzle speed. Each data point represents a burst of 5 nozzle-size water blobs.

# Results

# Comparing the Algorithms

We test each algorithm against the following gamut of wind conditions:

- Type 1: constant wind,
- Type 2: smoothly varying wind,
- Type 3: highly variable gusty wind,
- real wind data from Seattle, Chicago, Boston, Los Angeles, and Washington, DC, and
- Hurricane Floyd-type winds!

We run several simulations of the fountain, each for 3 min, under the control of each algorithm—long enough to capture relevant wind features and give statistical significance to the results.
For consistency, we run each algorithm under an identical wind signal (to remove random variation). We use the following criteria for comparing the performance of the algorithms: + +- The average height of the water spout over the time of the simulation. +- The percentage of the total water contained within the pool. +- The ratio of the highest density of water landing outside the pool area to the maximum acceptable spray density $(2.8 \times 10^{-4} \mathrm{~mm} / \mathrm{s})$ . + +The results of our simulations (Table 3) indicate that the performance of the algorithms depends significantly on the wind data provided. + +# Strengths and Weaknesses + +All of the algorithms perform equally well under constant wind conditions, but each has unique strengths and weaknesses. + +- The Enthusiastic Algorithm consistently achieves the most spectacular fountain heights but at a cost. Since it considers only the current wind reading, it is always caught by surprise by sudden gusts or any increase in wind speed. Except in the constant-wind case, the algorithm systematically results in too much water being sprayed outside the fountain. +- The Conservative Algorithm always has the most paranoid estimate of how bad the wind could get, and all the water is usually contained in the fountain except in rare cases when sudden gusts greatly surpass the maximum recorded wind speed before the next measurement is made. However, the fountain height is often disappointingly low compared to the other algorithms, especially when a large gust of wind was recorded in the wind speed history. +- The Weighted Average Algorithm performs about as well as the Averaging Algorithm. Both contain most of the water but are often surprisingly conservative. 
In the Gusty Wind simulations, the Weighted Average Algorithm is even more conservative than the Conservative Algorithm; since both averaging algorithms consider the standard deviation of previous wind speed data, they become more conservative when recent wind speeds are highly variable. But if wind speeds change suddenly, as in the Hurricane Floyd case, the Weighted Average Algorithm reacts slightly faster than the Averaging Algorithm.

Table 3. Comparisons of algorithm performance. When too much water spills out of the fountain, water densities become too computationally intensive to compute (denoted by $*$), and the fountain is operating well outside of acceptable parameters.
|  | Weighted Average | Average | Conservative | Enthusiastic |
|---|---|---|---|---|
| **Type 1: Constant Wind** |  |  |  |  |
| Average height | 10.7 m | 10.6 m | 10.7 m | 10.6 m |
| % contained | 100% | 100% | 100% | 100% |
| Density ratio | 0 | 0 | 0 | 0 |
| **Type 2: Smooth Wind** |  |  |  |  |
| Average height | 12.0 m | 12.4 m | 12.1 m | 20.2 m |
| % contained | 100% | 100% | 100% | 100% |
| Density ratio | 0.9 | 0 | 0 | 10321 |
| **Type 3: Gusty Wind** |  |  |  |  |
| Average height | 11.7 m | 12.5 m | 12.0 m | 19.9 m |
| % contained | 100% | 100% | 100% | 99% |
| Density ratio | 0 | 0 | 0 | 1357 |
| **Hurricane Floyd-type wind!** |  |  |  |  |
| Average height | 3.3 m | 3.5 m | 3.3 m | 3.3 m |
| % contained | 99% | 98% | 99% | 98% |
| Density ratio | 1 | 242 | 34 | 505 |
| **Seattle** |  |  |  |  |
| Average height | 10.4 m | 10.6 m | 5.0 m | 20.7 m |
| % contained | 99% | 99% | 100% | 75.6% |
| Density ratio | 6 | 125 | 0 | * |
| **Chicago** |  |  |  |  |
| Average height | 10.3 m | 7.7 m | 5.0 m | 20.9 m |
| % contained | 99% | 99% | 99% | 62% |
| Density ratio | 1357 | 2467 | 22 | * |
| **Boston** |  |  |  |  |
| Average height | 7.6 m | 7.9 m | 2.4 m | 21.0 m |
| % contained | 98% | 97% | 100% | 95% |
| Density ratio | 1964 | 11000 | 0 | * |
| **Los Angeles** |  |  |  |  |
| Average height | 7.6 m | 10.5 m | 5.9 m | 10.2 m |
| % contained | 99% | 99% | 100% | 91% |
| Density ratio | 2196 | 0.4 | 0 | * |
| **Washington, DC** |  |  |  |  |
| Average height | 8.7 m | 10.2 m | 7.7 m | 20.8 m |
| % contained | 99% | 99% | 100% | 92% |
| Density ratio | 3.3 | 618 | 0 | * |
# Possible Extensions

# Tiltable Nozzles

Water jets with directional control exist (firefighters use them extensively!). So, with a steady wind, aiming the fountain slightly into the wind may allow for a higher water stream without additional water spraying outside the pool.

For a range of constant wind speeds, we simulate the fountain at various tilt angles and find the angle that maximizes fountain height without unacceptable spray landing outside the pool (Table 4). For each run, we fire enough blobs (10) so that results are statistically significant.

Table 4. Results of tilting the fountain into the wind.
| Wind speed (m/s) | Maximum height, no tilt (m) | Maximum height, tilt (m) | Tilt angle |
|---|---|---|---|
| 2 | 16.4 | 31.0 | 37.5° |
| 5 | 10.8 | 22.5 | 8.5° |
| 7 | 5.9 | 12.7 | 32.0° |
The fountain can be made nearly twice as high by directing the nozzle into the wind. This would appear very encouraging indeed, were it not for two important points:

- The spray distance is extremely sensitive to the tilt angle. Variations of a single degree cause unacceptable amounts of water outside the pool area.
- Real wind is rarely so constant.

We therefore consider it infeasible to use tilting to increase the fountain height.

# Multiple Nozzles

Our model can be extended to handle multiple nozzles by superposition, provided that the stream-stream interaction is not significant.

# Alternative Pool Geometries

We can handle fountains with noncircular pools, measuring the percentage of water that lands outside of the pool and requiring that no region gets too wet. If the fountain is in a city with wind predominantly in one direction, then an elliptical pool with major axis parallel to the wind direction may work better, though variation in wind direction can no longer be ignored by the model.

# Other Considerations

- There are parameters that we did not incorporate in our model that may have an effect in real life, such as temperature and barometric pressure.
- If a storm is approaching, the fountain should be turned off.
- At low temperatures, we might set the algorithms to be more conservative, because it is very unpleasant to be wet in cold weather and ice formation can be dangerous.
- If the buildings around the plaza are significantly closer to the fountain than the $40~\mathrm{m}$ considered in our simulations, then the dynamics of the wind near the fountain may be altered with the addition of eddies and other turbulence.
- For fountains that reach heights significantly higher than the nearby buildings, the magnitude of the wind will grow stronger farther above the plaza.
- A longer wind history could be incorporated into the algorithm.
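The four control algorithms are described above only in prose. As a minimal sketch (the function names, the list-of-readings interface, and the exact weighted-variance formula are our assumptions, not the paper's), the worst-case wind predictions they compute might look like:

```python
import statistics

def averaging(history):
    # Mean of the last 10 min of wind readings plus one standard deviation.
    return statistics.mean(history) + statistics.pstdev(history)

def weighted_average(history):
    # Weight readings linearly by recentness; the newest gets the largest weight.
    n = len(history)
    weights = list(range(1, n + 1))
    total = sum(weights)
    mean = sum(w * v for w, v in zip(weights, history)) / total
    var = sum(w * (v - mean) ** 2 for w, v in zip(weights, history)) / total
    return mean + var ** 0.5

def conservative(history):
    # Worst case = maximum wind speed recorded in the window.
    return max(history)

def enthusiastic(history):
    # Ignore the history entirely; trust only the current reading.
    return history[-1]
```

Each predictor maps a wind-speed history (oldest reading first) to a worst-case wind estimate, which the controller would then convert into a maximum safe fountain height via the spray-distance table of Figure 8.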
# Recommendations and Conclusions

If keeping the water spray contained in the pool is a much larger concern than shooting the fountain high into the air, then the Conservative Algorithm may be the best choice. Conversely, if water spray outside the fountain is not an overriding concern, then the Enthusiastic Algorithm may be best.

For a reasonable balance between safety and dazzle, the Conservative Algorithm and the Enthusiastic Algorithm are both totally inadequate:

Use either the Weighted Average Algorithm or the Averaging Algorithm.

The Weighted Average Algorithm responds faster to sharp changes in wind speed and performs better in places like Chicago, where wind gusts are more frequent. However, if wind variations are fairly smooth, as in Los Angeles, then the Averaging Algorithm is the best choice.

# References

Fisher, Stephen D. 1990. Complex Variables. 2nd ed. New York: Dover.

Industrial Weather Products Catalog. 2002. www.scientificsales.com. Accessed 10 February 2002.

Micheloud & Cie. 2002. www.switzerland.isyours.com. Accessed 10 February 2002.

NASA. 2002. Re-Living the Wright Way. wright.wrc.nasa.gov. Accessed 11 February 2002.

National Climatic Data Center. 2002. www.ncdc.noaa.gov. Accessed 10 February 2002.

Nobauer, Gerhard Thomas. 1999. Droplet Motion and Break-up. Oxford, England: University of Oxford Computing Laboratory-Mathematical Institute.

Ross, Sheldon M. 2000. Introduction to Probability Models. 7th ed. New York: Academic.

Santamouris, Matheos, and Elena Dascalaki. 2000. Wind speed in the urban environment. www.brunel.ac.uk/research/solvent/pdf/report3.pdf. Accessed 10 February 2002.

Spray Nozzles. 2002. www.mrpressurewasher.com/spraynozzles.html. Accessed 9 February 2002.

Virtual Science Center Project Team. 1997. ScienceNet. Accessed 9 February 2002.

WebMET: The Meteorological Resource Center. 2001. www.webmet.com. Accessed 11 February 2002.
# Judge's Commentary: The Outstanding Wind and Waterspray Papers

Patrick J. Driscoll
Department of Systems Engineering
U.S. Military Academy
West Point, NY 10996
fp5543@exmail.usma.army.mil

# Introduction

As is so often the case with events that iterate on an annual basis, many of the same lessons learned carry over from year to year, never losing their relevancy. Certainly, the MCM this year is no exception to the trend.

In an attempt to maintain some degree of economy in this commentary, I will resist the temptation to reiterate many of those lessons here and instead point the interested reader to MCM commentaries previously appearing in this Journal.

However, several notable modeling issues clearly surfaced in consideration of the Wind and Waterspray Problem, had an impact on the quality of the papers, and are worth mentioning to assist teams in future competitions. In this vein, the following comments represent a compendium of observations from the final judging session, taken in no particular order of preference or priority.

# Style and Economy

As to style and clarity of the papers, it is probably sufficient to state that teams should bear in mind that they are writing for a population of modeling experts from both academia and industry who will spend a limited amount of time reading their paper. During this period, judges must assess the quality of a team's approach, the validity of their results, and the paper's completeness with regard to the modeling process. Contrast this with the hours and sometimes days available for a professor to grade a similar project of this type, and it is apparent that teams must choose a writing style that maximizes clarity and gets across their modeling work in the most effective manner possible.
Using concise and properly labeled tables and graphics to illustrate the trends and results of experimental trials that are commented on in the body of the report goes a long way towards achieving this goal.

# The Specific Challenges of This Problem

The stated challenge of the Wind and Waterspray Problem was to develop an algorithm that uses data provided by an anemometer to adjust the water flow from a fountain as wind conditions change.

In a most general sense, an algorithm can be succinctly defined as a "method for the solution of all problems of a given class ... whose purpose is to describe a process in such a way that afterwards [it] can be imitated or governed by a machine" [Gellert et al. 1977, 340]. A basic characteristic of an algorithm is that it transforms given quantities (input) into other quantities (output) on the basis of a system of transformation rules. The input quantities (anemometer data) and output quantities (water flow characteristics) for the problem were clear from the problem description. The particular transformation rules for this problem were unspecified and left up to the individual teams to decide upon.

Formulating these transformation rules constituted the heart of each approach used to model the water flow and spray patterns associated with the fountain. The approaches that predominated were based on Newton's second law of motion, Bernoulli's formula, continuity equations, fuzzy membership sets, Poiseuille's equation, or the Navier-Stokes equations, largely depending on the assumptions that teams were willing to make.

The better papers walked the reader through the application of the approach chosen, clearly explaining exactly how each variable and parameter applied to the problem, and then used the known results of the specific approach directly.
# How to Make Assumptions

Most technical report formats advise students to list and explain all their assumptions in one concise location, typically in the front portion of the report. While this advice is sound for constructing a technical report, it is worth noting that it contrasts with how assumptions actually arise chronologically during a modeling process. For the MCM, useful assumptions typically arise in one of two settings:

- either a team needs specific information concerning the problem that they do not have (and cannot get in the time allotted) and hence must make an assumption in order to carry on; or
- a team decides to make an assumption that simplifies some detail(s) of the problem in order to use the mathematics they are familiar with, or else risk not being able to complete their modeling effort in the time allotted.

Both of these situations arise naturally in the chronological flow of attacking a problem, and not during a single brainstorming effort at the onset.

When a paper contains a long list of assumptions, many of which are neither used nor justified in the modeling that follows, it is a clear indication that the team does not quite understand the roles that the assumptions play in the overall modeling effort. Such papers typically possess a very shallow or missing "Strengths and Weaknesses" section, which is supposed to constitute an analysis of one's model and results in light of the assumptions that were included by necessity. If a team does not know why they need a particular assumption, chances are that they will do a poor job of explaining why they made it!

The lesson here is that teams should struggle mightily to make only the assumptions they need when they need them, thereby minimizing the diluting effect on model fidelity caused by an excessive number of assumptions.
# The Importance of Model Validation

When all is said and done, a paper introducing a proposed algorithm must resolve the question, "Does the author provide me with sufficient evidence that it works?" While occasionally provided by way of convergence proofs, this type of evidence more commonly appears in MCM papers by way of computational testing. For the MCM, at least three categories of testing come to mind that support model validation:

- Once the team is convinced that their base model produces reasonable results, special cases of interest (e.g., no wind, no spread angle, etc.) should be tested.
- Recognizing that model parameters contain some amount of uncertainty, high, most likely, and low values of important parameters used in the base model should be examined by systematically altering these values and rerunning the model to see if the output results remain reasonable. For this MCM problem, these parameters might be drag coefficients, shapes of water droplets, wind speed and direction, and so on. This process essentially constitutes what is commonly referred to as sensitivity analysis of the parameters.
- The effects of relaxing a select number of simplifying assumptions made during the course of developing the model should be examined. However, it is fair to stress that this last category is safely performed only when time permits, because it generally requires substantial model modifications to examine the desired effects. A good example of this third category for the Wind and Waterspray problem would be adding the influence of surrounding buildings on wind speed and direction after they were previously assumed away. Such a change would be nontrivial and might consume more time than what is available.

Teams must link their computational results back to the problem that they are trying to solve. Tell the reader what to conclude from the results! This is what is referred to as analyzing the results.
Never, ever, ever leave this task to the reader!

When the conclusions of these analyses remain the same despite changes in parameters such as those noted, it is appropriate to conclude that the model results are robust. These analyses also highlight any limitations of the model, which then provide a basis for recommending ways the model could be enhanced or improved in the future.

# The Summary

The summary that the MCM asks for is a standalone object that should not be identical to the introduction to the paper. The summary should briefly

- state the problem,
- describe the approach taken to modeling the problem,
- state the most important results and conclusions the reader should remember, and
- mention any recommendations directly relevant to the problem.

The summary should not include a statement such as "read inside for results" or its equivalent. A good test a team can use to assess the quality of their summary is to ask, "If someone read only the summary without the rest of the report available, would it clearly tell the big-picture story of what the problem was, what we did, what we concluded, and what we recommend?" As a note, most equations, code, and derivations belong somewhere else as well.

# Advance Planning

With regard to time management, something that teams can do ahead of the contest is to decide

- what document-writing environment they intend to use;
- how equations will be entered and labeled;
- the outline format of the paper;
- how tables, figures, and graphics are going to look;
- how captions are going to be stated for all tables, figures, and graphics; and
- who will be responsible for what task in the final write-up.

Human nature being what it is, a sloppy or haphazard paper that looks as if it was put together 15 min before it had to be postmarked almost assuredly will be downgraded in the mind of a judge, independent of the specific results obtained.
# Use of Sources

Finally, the trend continues that teams are becoming increasingly selective with regard to the Web sites that they will trust for credible information. I also encourage teams to maintain their effort to properly document the sources used to support their work. This practice explicitly recognizes the intellectual property and work of others while strengthening the quality of their paper at the same time.

# Reference

Gellert, W., H. Kustner, M. Hellwich, and H. Kastner. 1977. The VNR Concise Encyclopedia of Mathematics. New York: Van Nostrand Reinhold.

# About the Author

![](images/fb198bc16b9f2a2c787d1e544716a3856ef94f01df1e6b8e914b3746f7432195.jpg)

Pat Driscoll is Professor of Operations Research in the Department of Systems Engineering at the United States Military Academy. He holds an M.S. in both Operations Research and Engineering Economic Systems from Stanford University, and a Ph.D. in Industrial and Systems Engineering from Virginia Tech. His research focuses on mathematical programming, systems design for reliability, and information modeling. Pat is the INFORMS Head Judge for the MCM.

# Things That Go Bump in the Flight

Krista M. Dowdey
Nathan M. Gossett
Mark P. Leverentz
Bethel College
St. Paul, MN

Advisor: William M. Kinney

# Introduction

We develop a risk assessment model that allows an airline to specify certain parameters and receive recommendations for compensation policy for bumped passengers and for how much to overbook each flight. The basis is the potential cost of each bumped passenger compared to the potential revenue from booking an extra passenger. Our model allows an airline to compare quickly the likely results of different compensation and overbooking strategies.

To demonstrate how our model works, we apply it to Vanguard Airlines. Publicly available data provide all of the needed parameters for our model.
Our software package reaches an overbooking policy by calculating and comparing the expected revenues for all possible situations and compensation policies.

# Terms and Definitions

We set out terminology, taking much of it from Delta Airlines [2000].

- Available seat miles (ASM): A measure of capacity, calculated by multiplying the total number of seats available for transporting passengers by the total number of miles flown during a reporting period.
- Revenue passenger mile (RPM): One revenue-paying passenger transported one mile. RPM is calculated by multiplying the number of revenue passengers by the number of miles they are flown for the reporting period.
- Load factor (LF): A measure of aircraft utilization for a reporting period, calculated by dividing RPM by ASM.
- Cost per available seat mile (CASM): Operating cost per available seat mile during a reporting period; also referred to as unit cost.
- Revenue per available seat mile (RASM): Total revenue for a reporting period divided by available seat miles; also referred to as unit revenue.
- "No-show": A person who purchased a ticket but does not attempt to board the intended flight.
- Bumping: The practice of denying boarding to a ticket holder due to lack of sufficient seating on the flight.
- Voluntary bumping: When passengers who purchased tickets for a flight give up their seats for some compensation offered by the airline.
- Involuntary bumping: When not enough passengers voluntarily give up their seats, the airline chooses whom to bump against their will.
- Revenue: Money gained by the airline from a flight, minus penalties paid to bumped passengers. (This is not the standard definition of revenue, "inflow of assets as result of sales of goods and/or services" [Porter 2001, 146]; we use this different definition to highlight the effect of bumping practices.)
- Flight leg: A direct flight from one airport to another with no stops.
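These definitions reduce to simple ratios. A minimal sketch (the function names are ours; the numerical example uses the Vanguard figures quoted later in the paper):

```python
def load_factor(rpm, asm):
    # LF = revenue passenger miles / available seat miles.
    return rpm / asm

def rasm(total_revenue, asm):
    # Unit revenue: total revenue per available seat mile.
    return total_revenue / asm

def casm(operating_cost, asm):
    # Unit cost: operating cost per available seat mile.
    return operating_cost / asm

# Vanguard figures quoted later: RPM = 817,330; ASM = 1,225,942.
lf = load_factor(817_330, 1_225_942)  # about 0.667, within the industry's 60-80% range
```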
# Assumptions

- Passenger airline traffic is returning to normal, so yearly industry statistics can be used. Airline traffic trends are returning to the levels before the terrorist attacks of September 11 [Airline Transport Association 2001], so statistics from before that date are still valid.
- We model U.S. flights only. International flights have different policies.
- The "no-show" rate is about $10\%$ ["More airline passengers ..." 2000].
- Ticket prices may be represented by calculated averages.
- The number of passengers on the plane does not affect the cost of the flight to the airline. The most significant part of the operating costs for a flight consists of fixed costs that are not affected by the number of passengers.
- The flight schedule is static. The schedule of flights is outside the scope of our problem statement. Thus, we make recommendations only about the overbooking strategy, not about changes to the schedule.
- Airlines must follow the DOT "Fly-Rights" regulations. These regulations outline the minimum compensation required for passengers when bumping occurs [U.S. Department of Transportation 1994].
- Compounded overbooking takes care of itself (i.e., goes away naturally). Consistent industry-wide statistics establish a $60\%$ to $80\%$ load factor [Airline Transport Association 2002], which naturally combats the waterfall effect of one overbooked flight causing another to be even more overbooked.
- There is sufficient demand for at least some flights to warrant overbooking.
- No-shows do not generate revenue. No-shows are given a refund or (if the original ticket was nonrefundable) a ticket voucher.
- Taxes paid by a passenger are nonrefundable.

# Statement of Purpose

- Our first priority is to maximize revenue for the airline.
- Our second priority is to maximize customer service in the form of providing as much compensation to bumped passengers as is financially feasible.
# Naive Model

The naive approach is to assume that since not all ticket buyers show up for the flight, we can simply overbook the flight so that on average the plane fills to capacity. If on average $90\%$ show up, we book to 100/90 of capacity.

However, the $90\%$ is only an average; for some flights, more than $90\%$ will show up, resulting in bumped passengers and a penalty paid by the airline to the bumped passengers. Since the penalty is often more than the potential revenue for one more passenger, the airline could pay more in penalties than the extra revenue received. We need a way to factor the risk of penalties into our model.

# Risk Assessment Model

We maximize revenue on each individual flight leg, which we regard as independent of other flight legs. Thus, optimizing the revenue of one flight does not adversely affect potential revenue from other flights.

Since an airline incurs an increased penalty the longer that a bumped passenger is delayed, an airline minimizes the penalty by transporting the passenger to their destination as quickly as possible. Therefore, bumped passengers are usually booked on the next flight or series of flights to their destination.

# Expected Revenue of a Flight

Let a flight have capacity $c$, and suppose that we book $b$ passengers. Let $r$ be the potential revenue from a passenger and $p$ the potential penalty cost of a bumped passenger. Finally, let $x$ be the percentage of ticket holders who show up for the flight. The revenue generated by the flight is

$$
\operatorname{revenue}(x, b) = \begin{cases} xbr, & \text{if } xb \leq c; \\ cr - (xb - c)p, & \text{if } xb > c. \end{cases}
$$

The percentage $x$ of passengers who show up follows some probability distribution with density function $f(x)$ and an appropriate mean (in our case, 0.9).
We find the value of $b$ that maximizes the expected revenue for $b$ booked passengers:

$$
\operatorname{expected\_revenue}(b) = \int_{0}^{1} f(x) \cdot \operatorname{revenue}(x, b)\, dx.
$$

Repeat this process for all flights, and we have a complete recommendation for an overbooking policy.

# Examining Compensation Policies

We can adjust our model even further by examining the effects of different compensation policies. Airlines have several forms of compensation at their disposal, from food to hotel stays to vouchers. The cost of the compensation policy is the penalty paid to a bumped passenger ($p$ in our formulas above). By rerunning our expected revenue calculations for each compensation policy, we can see how each policy affects the maximum expected revenue of a flight.

# Key Overbooking Flights

An airline can determine from historical data the "key" overbooking flights, the ones most likely to require overbooking. It can then use a compensation policy that concentrates on maximizing expected revenue for those flights.

# From Theory to Reality: Vanguard Airlines

We illustrate our ideas by a case study of Vanguard Airlines, using the publicly available information below [Vanguard Airlines 2001]. We assume that the January 2001 through September 2001 statistics provide an accurate picture of the airline:

- RASM = $0.073/seat-mile.
- RPM = 817,330 passenger-miles.
- ASM = 1,225,942 seat-miles.
- Operating expenses per ASM = $0.090/seat-mile.
- A full flight (Boeing 737-200 or MD-80 aircraft) holds $c = 130$ passengers.
- $95\%$ of bumped passengers are volunteers [U.S. Department of Transportation 2001].

# Applying the Model

We created a software package parameterized for adaptation to any airline.

Vanguard's Web site [2002] gives a list of flight legs, along with source cities, destination cities, departure times, and arrival times.
All flight legs are flown daily, except for four; to keep our example simple, we ignore these exceptions and treat all flights as daily.

The potential revenue $r$ per passenger is the average ticket price for the flight leg; we calculate it as flight-leg distance times revenue earned per passenger-mile. The latter is total revenue (RASM × ASM) divided by passenger-miles flown (RPM). So we have

$$
r = \frac{(\text{distance})(\text{RASM})(\text{ASM})}{\text{RPM}}.
$$

We could not locate good data on the distribution of how many ticket buyers show up for the flight. In lieu of a real distribution, we use a truncated normal distribution with mean 0.9 and appropriately small standard deviation (0.05):

$$
f(x) = \frac{1.023}{0.05\sqrt{2\pi}}\, e^{-200(x - 0.9)^{2}}.
$$

Penalty costs depend on how long the passenger is delayed, so we search the flight schedule for the quickest alternative route for each flight leg. We require at least 30 min between connecting flights.

# Compensation Policies

There are three main forms of compensating bumped passengers:

- Cash Payment vs. Ticket Voucher
  - Bumped passengers who arrive at their destination within one hour of their originally scheduled arrival receive no compensation.
  - Those who arrive between one and two hours after their originally scheduled arrival are eligible for compensation in the amount of their full ticket cost, up to $200.
  - A passenger who arrives two or more hours late is eligible for compensation in the amount of double their ticket cost, up to $400.

Compensation is required only for passengers involuntarily bumped, but common practice is to offer similar amounts to attract volunteers for bumping. We assume that $95\%$ of all "bumped" passengers are voluntary, and we offer them vouchers in place of cash. We calculate that a $1.00 voucher costs the airline $0.82.
Incorporating that $5\%$ of bumped passengers receive cash, this plan costs (voucher_value) $\times 0.831$ per bumped passenger.

- Meal Compensation: In our software, a passenger sitting in an airport through particular intervals gets compensation for a meal: 6 A.M. to 9 A.M., breakfast ($10); 11 A.M. to 1 P.M., lunch ($10); 5 P.M. to 8 P.M., dinner ($15). This compensation is not mandated, so it serves only as customer service.
- Providing Lodging: A quick survey of airport motels in Kansas City (the hub for Vanguard) showed that $50 is reasonable to cover a motel room along with transportation to and from the motel. Our plan offers overnight accommodation to a passenger stranded in an airport for at least 6 hrs including midnight who has a flight leaving after 4 A.M. This compensation is not mandated, so it serves only as customer service.

# Choosing a Compensation Policy

We compare the impacts of the following policies:

- Meal compensation, hotel compensation, and cash
- Hotel compensation and cash
- Meal compensation, hotel compensation, and voucher
- Hotel compensation and voucher
- Meal compensation and cash
- Meal compensation and voucher

We tabulate penalties for each flight leg and each compensation policy and calculate an optimal number of passengers to book on each flight leg depending on the compensation policy. To ensure that bumping is no more likely than not needing to bump, we impose a maximum booking level of 10/9. We then calculate the expected revenue for each flight leg at the optimal booking level for each policy and rank the policies for each flight leg by expected revenue. [EDITOR'S NOTE: We omit the authors' extensive tables giving results for specific flights.]

An important consideration in choosing a compensation package is customer service. While there is little short-term impact on revenue from good or bad customer service, there can be significant long-term impact. We should give some preference to policies that offer greater customer satisfaction. When two or more policies produce the same revenue, our model chooses one that maximizes customer service.

# The Best Compensation Policy for Vanguard

To determine its best compensation policy, Vanguard would need to examine historical data to determine the flights most likely to require overbooking.

# Responding to the Current Situation

We turn to issues currently facing the airline industry. Here we demonstrate how our model deals with unexpected circumstances.

# Fewer Flights

The airline sets its schedule; our model adapts to it. In any case, flight traffic is increasing back to the level before September 11 (Figure 1).

![](images/5a74ce11f188f952afb7d715c69dc0ca24dfd6184a6c70523654623e141c321d.jpg)
Figure 1. Domestic available seat miles (ASM) by month [Airline Transport Association 2002].

# Heightened Security

Since the change in security policies at airports nationwide, both checking in for a flight and layover gate changes could slow down passengers. Our model adjusts for that by factoring in 30 min for a layover.

# Passengers' Fear

Passengers' fear could reduce no-shows (because those who purchase tickets are more serious about needing to fly) or increase them; there are no statistics to verify either effect. In either case, any effect of passengers' fear of flying seems to be declining [Airline Transport Association 2002].

# Airlines' Losses

Revenue losses will likely make airlines cautious about taking on too much risk yet anxious to maximize revenue. Our model takes both goals into account, including enhancing revenue by dropping some customer service aspects.

Revenue loss also could cause an airline to schedule fewer flights to reduce costs. Our model gives an optimal recommendation adapted to the schedule.
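The expected-revenue maximization at the heart of the model can be sketched numerically. Here the fare `r` and penalty `p` are hypothetical placeholders (the paper derives them per flight leg), while the capacity, show-rate density, and 10/9 booking cap come from the text:

```python
import math

C = 130  # seats on a full flight (Boeing 737-200 / MD-80)

def f(x):
    # Truncated normal show-rate density: mean 0.9, sd 0.05,
    # normalized on [0, 1] via the paper's constant 1.023.
    return (1.023 / (0.05 * math.sqrt(2 * math.pi))) * math.exp(-200 * (x - 0.9) ** 2)

def revenue(x, b, r, p):
    # Piecewise revenue: fares for everyone who fits, penalties for the overflow.
    shows = x * b
    return shows * r if shows <= C else C * r - (shows - C) * p

def expected_revenue(b, r, p, steps=2000):
    # Midpoint-rule integration of f(x) * revenue(x, b) over x in [0, 1].
    h = 1.0 / steps
    return sum(f((i + 0.5) * h) * revenue((i + 0.5) * h, b, r, p) * h
               for i in range(steps))

# Hypothetical fare and penalty; search booking levels up to the 10/9 cap.
r, p = 100.0, 150.0
best_b = max(range(C, int(C * 10 / 9) + 1), key=lambda b: expected_revenue(b, r, p))
```

Repeating the search with the penalty `p` implied by each compensation policy is how the software ranks policies per flight leg.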
# Other Recommendations

- If two compensation packages have the same revenue benefits, choose the package that benefits the customer the most.
- Use vouchers instead of cash for compensation, because a voucher costs less yet carries comparable perceived value for the customer.
- Give gate attendants some power to negotiate with angry customers, possibly including additional food vouchers.
- Upgrade bumped passengers to first class on their next flight when possible. This has no added cost in the case of an empty first-class seat, yet has high value to the customer.
- Whenever possible, bump volunteers first, followed by passengers flying only one flight leg. This reduces the risk of further complicating a passenger's schedule.
- Ensure that the compensation policy is comparable to other airlines'.

# Analysis of Our Model

# Strengths

The fundamental strengths of our model are its robustness and flexibility. All of the data are fully parameterized, so the model can be applied to any airline. An airline can easily create probability distributions that accurately reflect not only average no-show percentages but also historical or per-flight trends. Although the industry may face constantly changing situations, our model adjusts to give the best recommendation possible.

# Opportunities for Further Development

- The Vanguard implementation of our model ignores exceptions in flight schedules, assuming that all flights are daily. The incorporation of flight-schedule exceptions into our implementation would be straightforward.
- Proprietary data would improve the accuracy of our Vanguard example, including the probability distribution of no-shows, average ticket prices, the current cost of various forms of compensation, and which flights are high-demand flights.

# References

Airline Rules and Policies. 2002. http://www.onetravel.com/advisor/AR规则菜单.cfm. Accessed 9 February 2002.

Airline Transport Association. 2002.
http://www.airlines.org/public/industry/.
Delta Airlines. 2000. Glossary of Terms. http://www.delta.com/inside/investors/annual_reports/2000b_annual/06_glossary.html. Accessed 9 February 2002.
More airline passengers get "bumped". 2000. USA Today (6 October 2000). http://www.usatoday.com/life/travel/business/1999/bt002.htm.
Porter, Gary A., and Curtis L. Norton. 2001. *Financial Accounting*. New York: Harcourt College Publishers.
U.S. Department of Transportation. 1994. Fly-Rights Regulations. http://www.dot.gov/airconsumer/flyrights.htm#overbooking. September 1994. Accessed 9 February 2002.
______. 2001. Air Travel Consumer Report. http://www.dot.gov/airconsumer/0112atcr.doc. December 2001. Accessed 11 February 2002.
Vanguard Airlines. 2001. Vanguard Airlines 10-Q Filing. Securities and Exchange Commission. http://www.sec.gov/Archives/edgar/data/1000578/000100057801500016/r10q111401.htm. Filed 11 November 2001. Accessed 11 February 2002.
______. 2002. http://www.flyvanguard.com.
WebFlyer MileMarker. 2002. Air Distance Calculator. http://www.webflyer.com/milemarker/getmileage.cgi. Accessed 9 February 2002.
Memorandum
February 11, 2002

To: Scott Dickson,

Chairman, CEO and President of Vanguard Airlines

From: Team 229

Airline Yield Management Research

Subject: Policy Changes for Optimal Overbooking and Compensation

After careful analysis of our company's current flight schedule, revenue per additional passenger, overbooking strategy, and several different compensation schemes for bumped passengers, we have the following recommendations to maximize revenue on high-demand flights.

1) Since our historical records indicate that flights 101b, 251a, 325b, 451a, 552a, and 902a are the flights in highest demand, we recommend that our airline adopt a compensation package consisting of the following policies:
a) Provision of overnight accommodations for those stranded during applicable times.
b) Ticket vouchers for all sufficiently delayed passengers who will accept them in lieu of cash.
c) In addition, we recommend against the use of meal compensation. This policy ensures the least cost to the airline in the case of an overbooked passenger while maintaining the highest possible level of customer satisfaction at that cost.
2) While using this compensation package, we also advise you to overbook flights using the following numbers of allowable bookings for each high-demand flight. A complete reference of allowable flight booking levels is available.
| Flight | Allowable Bookings |
|--------|--------------------|
| 101b   | 142 |
| 207a   | 140 |
| 325b   | 142 |
| 451a   | 142 |
| 552a   | 142 |
| 902a   | 142 |
# Optimal Overbooking

David Arthur

Sam Malone

Oaz Nir

Duke University

Durham, NC

Advisor: David P. Kraines

# Introduction

We construct several models to examine the effect of overbooking policies on airline revenue and costs in light of the current state of the industry, including fewer flights, increased security, passengers' fear, and billions in losses.

Using a plausible average ticket price, we model the waiting-time distribution for flights and estimate the average cost per involuntarily bumped passenger.

For ticketholders bumped voluntarily, the interaction between the airline and ticketholders takes the form of a least-bid auction in which winners receive compensation for forgoing their flights. We discuss the precedent for this type of auction and introduce a highly similar continuous auction model that allows us to calculate a novel formula for the expected compensation required.

Our One-Plane Model models expected revenue as a function of overbooking policy for a single plane. Using this framework, we examine the relationship between the optimal (revenue-maximizing) overbooking strategy and the arrival probability for ticketholders. We extend the model to consider multiple fare classes; doing so does not significantly alter the optimal overbooking policy.

Our Interactive Simulation Model takes into account estimates for average compensation costs. It simulates the interaction among 10 major U.S. airlines with a market base of 10,000 people, factoring in passenger arrival probability, flight frequency, compensation for bumping, and the behavior of rival airlines. Thus, we estimate optimal booking policy in a competitive environment. Simulations of this model with likely parameter values before and after September 11 give robust results that corroborate the conclusions of the One-Plane Model and the compensation-cost formula.
Overall, we conclude that airlines should maintain or decrease their current levels of overbooking.

# Terms

- Ticketholders: People who purchased a ticket.
- Contenders: Ticketholders who arrive in time to board their flight.
- Boarded passengers: Contenders who board successfully.
- Bumped passengers: Contenders who are not given seating on their flight.
- Voluntarily bumped passengers: Bumped passengers who opt out of their seating in exchange for compensation.
- Involuntarily bumped passengers: Bumped passengers who are denied boarding against their will.
- Compensation costs: The total value of money and other incentives given to bumped passengers.
- Flight capacity: The number of seats on a flight.
- Overbooking: The practice of selling more tickets than flight capacity.
- Waiting time: The time that a bumped passenger would have to wait for the next flight to the destination.
- Load factor: The ratio of the number of seats filled to the capacity.

# Assumptions and Hypotheses

- Flights are domestic, direct, and one-way.
- The waiting time between flights is the amount of time until the scheduled departure time of the next available flight to a given destination.
- The ticket price is $140 [Airline Transport Association 2002], independent of when the ticket is bought, except when we consider multiple fares.
- Pre-September 11, the average probability of a ticketholder checking in for the flight (and thus becoming a contender) was $85\%$ [Smith et al. 1992, 9].
- The pre-September 11 average load factor was $72\%$ [Bureau of Transportation Statistics 2000].

# Complicating Factors

Each of our models attempts to take into account the current situation facing airlines:

The Traffic Factor

On average, there are fewer flights between any given pair of locations.

The Security Factor

Security in and around airports has been heightened.
+ +The Fear Factor + +Passengers are more wary of the dangers of air travel, such as possible terrorist attacks, plane crashes, and security breaches at airports. + +The Financial Loss Factor + +Airlines have lost billions of dollars in revenue due to decreased demand for air travel, increased security costs, and increased industry risks. + +# The Traffic Factor + +Because there are fewer flights, it is likely that the demand for any given flight will increase. Flights are likely to be fuller; the average waiting time between flights to a destination is likely to increase, so bumped passengers will demand higher compensation. + +# The Security Factor + +The increase in security will likely lead to an increase in the number of ticketholders who arrive at the airport but—due to security delays—do not arrive at their departure gates in time. + +Successful implementation of security measures may lead to an improvement in the public perception of the airline industry and an increase in demand for air travel. + +# The Fear Factor + +Increased fear of flying decreases demand for air travel, so security delays may not be as serious. + +On the other hand, if a higher percentage of ticketholders are flying out of necessity, then the probability that a ticketholder becomes a contender may increase because of decreased cancellations and no-shows. However, fewer ticketholders are likely to agree to be bumped voluntarily at any price, so the percentage of involuntarily bumped passengers may increase. + +# The Financial Loss Factor + +Because companies may seek to increase short-term profits in the face of recent losses, some airlines may implement more aggressive overbooking, which could induce an overbooking war between airlines [Suzuki 2002, 148]. The likely increase in the number of bumped passengers would lead to a rise in compensation costs that would partially offset increased revenue. 
+ +Decreasing the number of bumped passengers would improve the airlines' image and might spur demand, which would bolster future revenue. + +# One-Plane Model + +# Introduction and Motivation + +We first consider the optimal overbooking strategy for a single flight, independent of all other flights. We will see later that its results are a good approximation to the results of the full-fledged Interaction Simulation Model. + +# Development + +Let the plane have a capacity of C identical seats and let a ticket cost T = $140 independent of when it is bought. Let the airline's overbooking strategy be to sell up to B tickets, if possible (B > C). We analyze this strategy in the case when all B tickets are sold. + +We model the number of contenders for the flight with a binomial distribution, where a ticketholder becomes a contender with probability $p$ . The average $p$ for flights from the ten leading U.S. carriers is $p = 0.85$ [Smith et al. 1992]. The value of $p$ for a particular flight depends on a host of factors—flight time, length, destination, whether it is a holiday season—so we carry out our analysis for a range of possible $p$ values. + +With our binomial model, the probability of exactly $i$ contenders among the $B$ ticket-holders is $\binom{B}{i}p^{i}(1-p)^{B-i}$ . + +We assume that each bumped passenger is paid compensation $(1 + k)T = 140(1 + k)$ , for some constant $k$ . Translated into everyday terms, this means that a bumped passenger receives compensation equal to the ticket price $T$ plus some additional compensation $kT > 0$ . Later, we relax the assumption that compensation cost is the same for each passenger, when we consider involuntary vs. voluntary bumping. + +We define the compensation cost function $F(i, C)$ to be the total compensation the airline must pay if there are exactly $i$ contenders for a flight with seating capacity $C$ : + +$$ +F (i, C) = \left\{ \begin{array}{l l} 0, & i \leq C; \\ (k + 1) T (i - C), & i > C. 
\end{array} \right. +$$ + +We calculate expected revenue $R$ as a function of $B$ : + +$$ +\begin{array}{l} R (B) = \sum_ {i = 1} ^ {B} {\binom {B} {i}} p ^ {i} (1 - p) ^ {B - i} (B T - F (i, C)) \\ = 1 4 0 B - 1 4 0 (k + 1) \sum_ {i = C + 1} ^ {B} {\binom {B} {i}} p ^ {i} (1 - p) ^ {B - i} (i - C) \\ \end{array} +$$ + +We use a computer program to determine, for given $C$ , $p$ , and $k$ , the overbooking strategy $B_{\mathrm{opt}}$ that maximizes $R(B)$ . However, it is also possible to produce a close analytic approximation, which we now derive. + +The revenue for a bumped passenger, $T - (k + 1)T = -kT$ , has magnitude $k$ times that for a boarded passenger, $T$ . Thus, the optimal overbooking strategy is such that the distribution of contenders is in some sense "balanced," with $1 / (k + 1)$ of its area corresponding to bumped passengers and the remaining $k / (k + 1)$ corresponding to boarded passengers. + +We approximate the binomial distribution of contenders with a normal distribution: + +$$ +\frac {C - B p}{\sqrt {B p (1 - p)}} \approx \Phi^ {- 1} \left(\frac {k}{k + 1}\right), +$$ + +where $\Phi$ is the cumulative distribution function of the standard normal distribution. Clearing denominators and solving the resulting quadratic in $\sqrt{B}$ gives + +$$ +B _ {o p t} ^ {\prime} = \left(\frac {- \Phi^ {- 1} \left(\frac {k}{k + 1}\right) \sqrt {p (1 - p)} + \sqrt {\Phi^ {- 1} \left(\frac {k}{k + 1}\right) ^ {2} p (1 - p) + 4 p C}}{2 p}\right) ^ {2} \tag {1} +$$ + +as an analytic approximation to $B_{\mathrm{opt}}$ . For $k = 1$ , we get $B_{\mathrm{opt}}' = C / p$ . + +This analytic approximation is always within 1 of the optimal overbooking strategy for $.80 \leq p \leq .90$ and $1 \leq k \leq 3$ . + +# Results and Interpretation + +The airline should be able to obtain good approximations to $p$ and $k$ empirically. 
Thus, it can take our computer program, insert its data for $C, T, p,$ and $k$, and obtain the optimal overbooking strategy $B_{\mathrm{opt}}$. Figure 1 plots expected revenue $R(B)$ vs. $B$ for $C = 150, k = 1, p = 0.85,$ and $T = \$140$.

At $B = 177$, the airline can expect revenue $R(177) = \$24{,}200$, which is more than $15\%$ in excess of the expected revenue $R(150) = \$21{,}000$ from a policy of no overbooking.

Operating at a less-than-optimal overbooking strategy can have serious consequences. For example, American Airlines has an annual revenue of $20 billion [AMR Corporation 2000]. An overbooking policy $B$ outside the range [173, 183] implies an expected loss of more than $1 billion over a 5-year period compared with the expected revenue at $B_{\mathrm{opt}} = 177$.

![](images/9937cdb6571479d62ecf4df16ebd2ddbb8e272b9732e69e1d631b82f81bdb7ee.jpg)
Figure 1. Revenue $R$ vs. overbooking strategy $B$ for $C = 150, k = 1, p = 0.85,$ and $T = \$140$.

# Limitations

The single-plane model

- fails to account for bumped passengers' general dissatisfaction and propensity to switch airlines;
- assumes a simple constant-cost compensation function for bumped passengers;
- ignores the distinction between voluntary and involuntary bumping;
- assumes that all tickets are identical—that is, everyone flies coach;
- assumes that all $B$ tickets that the airline is willing to sell are actually sold.

Even so, the model successfully analyzes revenue as a function of overbooking strategy, plane capacity, the probability that ticketholders become contenders, and compensation cost. Later, we develop a more complete model.

# The Complicating Factors

First, though, we use the basic model to make preliminary predictions for the optimal overbooking strategy in light of market changes due to the complicating factors post-September 11.
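The brute-force maximization of $R(B)$ described above is straightforward to reproduce. The following sketch is our own illustration (not the authors' program); the function and variable names are ours, and the parameter defaults follow the text ($C = 150$, $p = 0.85$, $k = 1$, $T = \$140$).

```python
from math import comb

def expected_revenue(B, C=150, p=0.85, k=1.0, T=140.0):
    """R(B): revenue from B tickets sold, minus expected compensation.
    The number of contenders is Binomial(B, p); each bumped passenger
    (contender beyond capacity C) costs the airline (k + 1) * T."""
    compensation = sum(
        comb(B, i) * p**i * (1 - p)**(B - i) * (k + 1) * T * (i - C)
        for i in range(C + 1, B + 1)
    )
    return B * T - compensation

# Brute-force search over booking levels; the text reports B_opt = 177
# for these parameters, in line with the approximation C/p = 176.5.
B_opt = max(range(150, 211), key=expected_revenue)
```

A no-overbooking policy ($B = C = 150$) yields expected revenue \$21,000, matching the figure quoted above.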
Of the four complicating factors, only two are directly relevant to this model: the security factor and the fear factor. The primary effect of the security factor is to decrease the probability $p$ of a ticketholder reaching the gate on time and becoming a contender. On the other hand, the primary effect of the fear factor is that a greater proportion of those who fly do so out of necessity; since such passengers are more likely to arrive for their flights than more casual flyers, the fear factor tends to increase $p$.

Figure 2 plots the optimal overbooking strategy $B_{\mathrm{opt}}$ vs. $p$ for fixed $k = 1$ and $C = 150$.

![](images/b008c5621cf1bbf80d60efcf1ad6bbcd6d92365736d84c9b908d24651e3e3ab3.jpg)
Figure 2. Optimal overbooking strategy vs. arrival probability $p$.

It is difficult to assess the precise change in $p$ resulting from the security and fear factors. However, airlines can determine this empirically by gathering statistics on their flights, then use our graph or computer program to determine a new optimal overbooking strategy.

# One-Plane Model: Multifare Extension

# Introduction and Motivation

Most airlines sell tickets in different fare classes (most commonly first class and coach). We extend the basic One-Plane Model to account for multiple fare classes.

# Development

For simplicity, we consider a two-fare system, with $C_1$ first-class seats and $C_2$ coach seats. We assume that a first-class ticket costs $T_1 = \$280$ and that a coach ticket costs $T_2 = \$140$. We consider an overbooking strategy of selling up to $B_1$ first-class tickets and up to $B_2$ coach tickets, where the two types of sales are made independently of one another.

We assume that a first-class ticketholder becomes a first-class contender with probability $p_1$ and that a coach ticketholder becomes a coach contender with probability $p_2$. We use two independent binomial distributions as our model.
First-class ticketholders are more likely to become contenders than coach ticketholders, since they have made a larger monetary investment in their tickets; that is, $p_1 > p_2$. Thus, the probabilities of exactly $i$ first-class contenders and exactly $j$ coach contenders are

$$
\left( \begin{array}{c} B_1 \\ i \end{array} \right) p_1^i (1 - p_1)^{B_1 - i}, \qquad \left( \begin{array}{c} B_2 \\ j \end{array} \right) p_2^j (1 - p_2)^{B_2 - j}.
$$

We model compensation costs as constant per bumped passenger but dependent on fare class, with $(k_1 + 1)T_1$ as compensation for a bumped first-class passenger and $(k_2 + 1)T_2$ for a bumped coach passenger. We define the compensation cost function:

$$
F(i, j, C_1, C_2) = \left\{ \begin{array}{ll} 0, & i \leq C_1,\ j \leq C_2; \\ T_1 (k_1 + 1)(i - C_1), & i > C_1,\ j \leq C_2; \\ \max\{T_2 (k_2 + 1)\big((j - C_2) - (C_1 - i)\big), 0\}, & i \leq C_1,\ j > C_2; \\ T_1 (k_1 + 1)(i - C_1) + T_2 (k_2 + 1)(j - C_2), & i > C_1,\ j > C_2. \end{array} \right.
$$

The justification for the third case is that an excess of coach contenders is allowed to spill over into any available first-class seats, so only the excess coach contenders beyond the $C_1 - i$ spare first-class seats must be compensated. On the other hand, excess first-class contenders cannot be seated in any available coach seats; this fact is reflected in the second case.

We model expected revenue $R$ as a function of the overbooking strategy $(B_1, B_2)$:

$$
\begin{array}{l} R(B_1, B_2) = \displaystyle\sum_{i = 1}^{B_1} \sum_{j = 1}^{B_2} \binom{B_1}{i} \binom{B_2}{j} p_1^i (1 - p_1)^{B_1 - i} p_2^j (1 - p_2)^{B_2 - j}
\\ \left(B_1 T_1 + B_2 T_2 - F\big(i, j, C_1, C_2\big)\right) \\ \end{array}
$$

# Results and Interpretation

For fixed $C_i, T_i, p_i$, and $k_i$ ($i = 1, 2$), we can find $(B_{1,\mathrm{opt}}, B_{2,\mathrm{opt}})$ for which $R(B_1, B_2)$ is maximal by adapting the computer program used to solve the one-fare case.

For example, for a plane with $C_1 = 20$ first-class seats, $C_2 = 130$ coach seats, ticket costs of $T_1 = \$280$ and $T_2 = \$140$, and compensation constants $k_1 = k_2 = 1$, we obtain the optimal overbooking strategies listed in Table 2.

The optimal strategy involves relatively little overbooking of first-class passengers, since there is a much higher compensation cost. However, the total

Table 2. Two-fare optimal overbooking strategies for selected arrival probabilities.
| $p_1$ | $p_2$ | $B_{1,\mathrm{opt}}$ | $B_{2,\mathrm{opt}}$ |
|------|------|------|------|
| 0.85 | 0.80 | 23 | 165 |
| 0.90 | 0.80 | 22 | 165 |
| 0.95 | 0.80 | 20 | 166 |
| 0.85 | 0.85 | 23 | 155 |
| 0.90 | 0.85 | 22 | 155 |
| 0.95 | 0.85 | 20 | 155 |
| 0.90 | 0.90 | 22 | 146 |
| 0.95 | 0.90 | 21 | 145 |
+ +number of passengers (coach plus first-class) overbooked in an optimal two-fare situation is virtually the same as the total number overbooked in the one-fare situation. The upshot is that the effect of multiple fare classes on the optimal overbooking strategy is not very significant; so, when we construct our more general model, we do not take into account multiple fares. + +# Compensation Costs + +The key element that separates different schemes for compensating bumped ticketholders is the degree of choice for the passenger. Airlines often hold auctions for contenders in which the lowest bids are first to be bought off of a flight. + +We construct a model for involuntary bumping costs that is based on DOT regulations and takes into account the waiting time distribution for flights. Then we discuss auction methods for voluntary bumping and derive novel results for expected compensation cost for a continuous auction that matches actual ticket auctions fairly well. + +# Involuntary Bumping: DOT Regulations + +The Department of Transportation (DOT) requires each airline to give all passengers who are bumped involuntarily a written statement describing their rights and explaining how the airline decides who gets on an overbooked flight and who does not [Department of Transportation 2002]. Travelers who do not get to fly are usually entitled to an "on-the-spot" payment of denied boarding compensation. The amount depends on the price of their ticket and the length of the delay: + +- Passengers bumped involuntarily for whom the airline arranges substitute transportation scheduled to get to their final destination within one hour of their original scheduled arrival time receive no compensation. + +- If the airline arranges substitute transportation scheduled to arrive at the destination between one and two hours after the original arrival time, the airline must pay bumped passengers an amount equal to their one-way fare, with a $200 maximum. 
- If the substitute transportation is scheduled to get to the destination more than two hours later, or if the airline does not make any substitute travel arrangements for the bumped passenger, the airline must pay an amount equal to the lesser of $200\%$ of the fare price and $400.
- Bumped passengers always get to keep their tickets and use them on another flight. If they choose to make their own arrangements, they are entitled to an "involuntary refund" for their original ticket.

These conditions apply only to domestic flights and not to planes that hold 60 or fewer passengers.

The function for the compensation cost for an involuntarily bumped passenger is

$$
C(T, F) = \left\{ \begin{array}{cl} 0, & \text{if } 0 < T \leq 1; \\ \min(2F, F + 200), & \text{if } 1 < T \leq 2; \\ \min(3F, F + 400), & \text{if } 2 < T, \end{array} \right.
$$

where $T$ is the waiting time in hours and $F$ is the fare price. We assume that all flights to a given location are direct and have the same flight duration. Thus, the waiting time between flights equals the difference in departure times, and the waiting time $T$ is the time until the next flight to the destination departs. We assume that involuntarily bumped passengers always ask for a refund of their fare.

# Involuntary Bumping: The Waiting Time Model

To use the compensation cost function to determine the average compensation (per involuntarily bumped passenger), we would need to know the joint distribution of fare prices and waiting times. Because this information would be extremely difficult to obtain, we opt instead for practical compromises:

- We restrict our attention to determining the expected compensation cost for the average ticket price, $140 [Airline Transport Association 2000].
- We specify a workable model for the distribution of waiting times that allows us to calculate this cost directly.
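As a side illustration (ours, not part of the original paper), the DOT compensation schedule $C(T, F)$ above translates directly into code; the function name is our own.

```python
def dot_compensation(T, F):
    """Total cost C(T, F) of involuntarily bumping a passenger, per the
    DOT schedule above: the fare refund F plus denied-boarding compensation
    of 100% of the fare (capped at $200) for a 1-2 hour delay, or 200% of
    the fare (capped at $400) for a longer delay.
    T = waiting time in hours, F = fare price in dollars."""
    if T <= 1:
        return 0                      # within an hour: no compensation owed
    elif T <= 2:
        return min(2 * F, F + 200)    # refund F + min(F, 200)
    else:
        return min(3 * F, F + 400)    # refund F + min(2F, 400)
```

For the $140 average fare, this yields $0, $280, or $420 depending on the length of the delay.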
Our model for the distribution of waiting times is the exponential distribution, a common distribution for waiting times. Let $T$ be a random variable representing waiting time between flights; then

$$
\Pr(T \leq t) = 1 - e^{-\lambda t}
$$

and $E(T) = \tau = 1/\lambda$, where $\tau$ is the mean waiting time for the next available flight.

The expected cost of compensating an involuntarily bumped passenger who purchased a ticket of price $P$ can be evaluated directly and is

$$
\min(2P, P + 200)\left[ e^{-\lambda} - e^{-2\lambda} \right] + \min(3P, P + 400)\left[ e^{-2\lambda} \right].
$$

From examining airline booking sites, we estimate the average daytime waiting time $\tau$ to be $2.6\mathrm{h}$, not including the time between the last flight of the day and the first flight of the next day. If we include these night-to-next-day waiting times in our calculations, we obtain $\tau \approx 4.8\mathrm{h}$; this value corresponds to five flights per 24-hour period, which is fairly typical. Using the smaller, strictly daytime value $\tau = 2.6\mathrm{h}$, we obtain an expected compensation cost of $255.

# Voluntary Bumping: Auction Methods

In 1968, J.L. Simon proposed an auction among ticketed passengers. Each ticketed passenger contending for a seat on a flight would submit a sealed-envelope bid of the smallest amount of money for which the contender would give up the seat and wait until the next available one. The airline would compensate the passengers who required the least money and require that they give up their seats. Passengers would never get bumped without suitable compensation, and airlines could raise their overbooking level much higher than they could otherwise. After Ralph Nader successfully sued Allegheny Airlines for bumping him, variants on this scheme have gradually become standard throughout the industry.

There are two reasonable ways to attempt an auction.
- Per Simon, force every contender to choose a priori a price for which they would give up their ticket. The airline could arrange all bumpings immediately.
- The actual practice by most airlines is to announce possible compensation prices at discrete time intervals. Customers can then accept any offer they wish.

The first is attractive to the airlines because it is instant and minimizes compensation. The second, however, can be started well before a flight departs; and if offers are increased gradually enough, the difference in cost is negligible. The methods should generate similar results, so for simplicity we concentrate on the second, though with continuous compensation offerings.

# Voluntary Bumping: Continuous-Time Auction

In the literature, it is common to assume that if $m$ passengers are compensated through an auction, the total cost for the airline should be linear in $m$, although some authors (such as Smith et al. [1992]) recognize that the function should be nonlinear and convex but do not analyze it further. In fact, we can say a great deal more with only a few basic assumptions. Indeed, suppose that

- $n$ ticketholders check in for a flight with capacity $C$, with $n > C$.
- Each contender has a limit price, the smallest compensation for which the contender is willing to give up the seat.
- An airline can always rebook a ticketholder on one of its own later flights at no cost (i.e., it does not have to pay for a ticket on a rival airline).

In an ideal auction, the airline offers successively higher compensation prices; whenever the offer exceeds a contender's limit price, the contender gives up the ticket voluntarily. Suppose that ticketholders $(\Gamma_1, \Gamma_2, \ldots, \Gamma_n)$ are ordered so that $\Gamma_i$'s limit price is less than $\Gamma_j$'s limit price for $i < j$. Define:

- $D(x) =$ the probability that a randomly selected ticketholder gives up the seat for a price $x$.
- $Y_m =$ the compensation that the airline must pay $\Gamma_m$ to give up the ticket.
- $X_m =$ the total compensation that the airline must pay to have $m$ contenders give up their seats.

We have $X_m = \sum_{i=1}^{m} Y_i$. To determine $E[X_m]$, we determine $E[Y_i]$ for $i \leq m$. To do this, we need the following result:

$$
E[Y_m] = \sum_{i = 0}^{m - 1} \binom{n}{i} \int_0^\infty \big(D(x)\big)^i \big(1 - D(x)\big)^{n - i}\, dx.
$$

[EDITOR'S NOTE: We omit the authors' proof.]

Very little can be done beyond this point without further knowledge about the nature of $D(x)$. There is not much recent data on this; but when airlines were first considering moving to an auction-based system, K.V. Nagarajan [1978] polled airline passengers on their limit price. Although he performed little analysis, we find that the cumulative distribution function of this limit price fits exponential curves of the form $1 - e^{-Ax}$, for fixed $A$, very closely (Figure 3).

With $D(x) = 1 - e^{-Ax}$ for some constant $A$, then

$$
E[X_m] = \frac{1}{A}\left[ m - (n - m)\left(\frac{1}{n} + \frac{1}{n - 1} + \frac{1}{n - 2} + \ldots + \frac{1}{n - m + 1}\right) \right].
$$

[EDITOR'S NOTE: We omit the authors' proof.]

Using the approximation

$$
\frac{1}{1} + \frac{1}{2} + \ldots + \frac{1}{n} \approx \ln n,
$$

![](images/b74c87b282badf90155b2c2c385066d86bea34b692247cc350a549b959b5985a.jpg)

![](images/2690bfaca18dd1bed1f9df16651efa5339c17a4163c204bbb3a39a58b01f930d.jpg)
Figure 3. Polled distribution of ticketholder limit price, with best-fit curves $1 - e^{-0.046x}$ for a 2-hour wait and $1 - e^{-0.0175x}$ for a 6-hour wait (data from [Nagarajan 1978, 113]).

this becomes

$$
E[X_m] \approx \frac{1}{A}\left[ m - (n - m)\ln\left(\frac{n}{n - m}\right) \right].
$$

There is no reason to believe that the value of $A$ is constant across all scenarios.
For example, contenders will certainly accept a smaller compensation if the next flight is departing soon. For our purposes, however, we assume that $A$ is constant over all situations; and we estimate that on a flight with capacity $C = 150$ and only a small number of overbooked passengers, $\Gamma_1$ has a limit price of $100. Then we have $\frac{1}{A} \cdot \frac{1}{150} \approx \$100$, so $\frac{1}{A} \approx \$15{,}000$.

Hence, the expected compensation required to bump $m$ out of $n$ ticketholders via auction is approximately

$$
\$15{,}000 \left[ m - (n - m)\ln\left(\frac{n}{n - m}\right) \right],
$$

compared to a cost of $255m (plus ill will) for involuntarily bumping the same number of ticketholders.

# Effects of Overbooking on Market Share

# Constructing the Model

We focus on the 10 largest U.S. airlines (Alaska, America West, American, Continental, Delta, Northwest, Southwest, Trans World, United, US Air), which comprise $90\%$ of the market. We use 1997-1998 statistics on their flight frequency and market share. [EDITOR'S NOTE: We omit the data table.]

Flights are modeled as identical in all respects except for market interest. The market is simulated as a group of initially 10,000 people, each loyal to one airline, who independently buy tickets on their airline with a fixed probability and show up for their reservations with a fixed probability. Each member of the market independently chooses to stay with an airline or change airlines based on the treatment received on each flight.

Each company chooses a number $r$, which specifies its overbooking strategy: On a flight of capacity $C$, the company will sell up to $B = Cr$ tickets.

In each time period, precisely one flight is offered. The chance that a given airline will offer that flight is proportional to the number of flights that it offers per year. We also determine a constant $k$ that indicates the level of interest in this flight. Each flight has capacity $C = 150$ seats, each sold at $140.
The exact size of the market should have little effect on the result. We assume that the total market is initially made up of 10,000 independent people, each loyal to one carrier. The relative sizes of the company market shares are initialized according to 1997-98 industry data. We assume that each person in the market flies on average the same number of times in a year.

We assume that each person in a company's market has probability $k$ of wanting to buy a ticket for a flight by the company. We have $k$ follow a normal distribution with mean fixed so that the average load factor on all flights is the industry average of 0.72 [Bureau of Transportation Statistics 2002].

Industry data prior to September 11 indicate a probability of .85 that a ticketholder will check in for the flight.

If necessary, each airline bumps some passengers voluntarily and some involuntarily, according to its strategy. The immediate cost of bumpings is set to the values that we derived in the previous section. We surmise that voluntarily bumped passengers are relatively happy and thus leave the airline with probability only .05, whereas involuntarily bumped passengers are furious and leave with probability .8.

A person who leaves an airline stays within the market with probability .9 (.95 if bumped voluntarily) and simply switches to another airline; otherwise, the person leaves the market altogether. People trickle into the market fast enough to compensate for the loss of people due to dissatisfaction, thus allowing the market to grow slowly.

# Simulation Results, Pre-September 11

We investigate the effect of different overbooking rates on profit. For each overbooking rate, we calculate net profit over 500 time periods (ensuring that the same random events occur regardless of the strategy tested). The strategy that maximizes profit for that time period is then determined and tabulated. We repeat this 40 times for each airline.
This leaves open the question of what strategies the companies not being tested should use. To determine this, we initially assume that each company would overbook by 1.17 (as computed in the single-plane model), run the program to get a first estimate of a good strategy, and use the optimal results from that preliminary run to set the default overbooking rates of each company in a final run. Finally, we use the industry figure that $5\%$ of all bumped passengers are bumped involuntarily to set the company compensation strategies.

The optimal overbooking rate for all companies other than Alaska is between 1.165 and 1.176, close to but a little less than the results from the One-Plane Model. This is reasonable, since the most significant improvement that this simulation makes over the One-Plane Model is the consideration of lost customers, whose effect should slightly reduce the optimal overbooking rate.

The program generates very consistent answers on each run for every airline except Alaska. Alaska has far fewer passengers per flight than its competitors and rarely fills any plane entirely, so its overbooking policy has a negligible effect on its overall profit. Thus, the simulation is almost certainly too coarse to generate useful data on Alaska.

# Adjusting the Model Due to September 11

We estimate the effects that the complicating factors after September 11 have on the simulation parameters:

- Arrival probability $p$ increases from 0.85 to 0.90.
- Flight frequency decreases by $20\%$ [Parker 2002].
- Total market size decreases by $15\%$. Fourth-quarter data from 2001 are not yet available, so we make an estimate. Our own experience is that flights are more crowded now, which suggests that the percentage decrease in market size is smaller than the percentage decrease in flight frequency. Thus, we estimate that market size has decreased by $15\%$.
- Market return rate doubles.
The market size has decreased due to the fear factor, but Parker [2002] anticipates that demand will return to pre-September 11 levels by mid-2002. Moreover, public perception of airline safety is improving due to the security factor. Thus, the market return rate should be substantially higher than its pre-September 11 level.
- Market exit rate decreases by $50\%$. The market composition is now more heavily weighted towards those who fly only out of necessity; such fliers are much less likely than casual fliers to leave the market.
- Percentage of bumps that are voluntary decreases from $95\%$ to $90\%$. There are fewer flights, hence the waiting time between flights is greater. Since passengers are more likely to be flying of necessity, they are much less interested in giving up a seat for compensation.
- Compensation cost of voluntary bumping increases by $20\%$.
- Compensation cost of involuntary bumping increases by $20\%$. Bumped passengers face longer waiting times; because of DOT regulations, average involuntary compensation costs must rise.
- Competitors increase their overbooking levels from $r$ to $r + 0.02$. Due to financial losses, an airline can expect its competitors to focus more heavily on short-term profits than previously.

# Simulation Results, Post-September 11

Using the parameter changes outlined, we ran the simulation again to estimate the effect of the events of September 11 on optimal overbooking strategies. The results are shown in Table 3.

There is again a strong correlation between the simulation results for these parameters and the corresponding results from the One-Plane Model.

Table 3. Optimal overbooking rates, from simulation results.

| Airline | Pre-September 11 | Post-September 11 |
|---|---|---|
| Alaska | 1.319 | 1.260 |
| America West | 1.169 | 1.094 |
| American | 1.171 | 1.094 |
| Continental | 1.170 | 1.096 |
| Delta | 1.173 | 1.095 |
| Northwest | 1.174 | 1.095 |
| Southwest | 1.173 | 1.095 |
| Trans World | 1.176 | 1.096 |
| United | 1.168 | 1.094 |
| US Air | 1.165 | 1.092 |

From Table 3, it is clear that the events of September 11 have indeed had a significant effect on optimal overbooking rates. Indeed, for a company the size of American Airlines, the $7\%$ change in these rates could easily lead to a difference in profits on the order of \$1 billion.

Thus, if our estimates of parameter changes due to September 11 are reasonable, all major airlines should significantly decrease their overbooking rates.

# References

Air Transport Association. 2001. http://www.air-transport.org/public/industry/display1.asp?nid=1138 .
AMR Corporation. 2000. http://www.amrcorp.com/ar2000/fin_highlights.html .
Bureau of Transportation Statistics. 2000. http://www.bts.gov/oai/aviation_industry/BlueBook_Dec200.pdf .
Department of Transportation. 2002. http://www.dot.gov/airconsumer/flyrights.htm#overbooking .
Infoplease. 2002. http://www.infoplease.com/ipa/A077823.html .
Nagarajan, K.V. 1979. On an auction solution to the problem of airline overbooking. *Transportation Research* A13: 111-114.
Parker, James D. 2001. Growth airline outlook even more favorable in new airline environment. http://www.raymondjames.com/invbrf/airline.htm .
Smith, Barry C., John F. Leimkuhler, and Ross M. Darrow. 1992. Yield management at American Airlines. *Interfaces* 22 (1) (January-February): 8-31.
Suzuki, Yoshinori. 2002. An empirical analysis of the optimal overbooking policies for U.S. major airlines. *Transportation Research* E38: 135-149.

# Memorandum

Attn: Don Carty, CEO, American Airlines

From: MCM Team 180

Subject: Overbooking Policy Assessment Results

We completed the preliminary assessment of overbooking policies that you requested. There is a great deal of money at stake here, both from ticket sales and also from compensation that must be given to bumped passengers. Moreover, if too many passengers are bumped, there will be a loss of good will, and many regular customers could be lost to rival airlines. In fact, we found that the profit difference for American Airlines between a good policy and a bad policy could easily be on the order of \$1 billion a year.
Using a combination of mathematical models and computer simulations, we considered a wide variety of possible strategies for confronting this problem. We naturally considered different levels of overbooking, but we also looked at different ways in which airlines could compensate bumped passengers. Regarding the second question, we find that the current scheme of auctioning off compensation for tickets, combined with certain calculated forced bumpings, is still ideal, regardless of changes to the market state.

Although we were forced to work without much recent data, we were nevertheless able to achieve reliable and consistent results for the optimal overbooking rate. In particular, we found that prior to September 11, American Airlines stood to maximize profits by selling approximately 1.171 times as many tickets as seats available.

We next considered how this number would likely be affected by the current state of the market. In particular, we focused on four consequences of the events of September 11: all airlines are offering fewer flights, there is heightened security in and around airports, passengers are afraid to fly, and the industry has already lost billions of dollars. Analyzing each of these in turn, we found that they did indeed have a significant effect on the market. In particular, American Airlines should lower its overbooking rate to 1.094 tickets per available seat.

In conclusion, we found that there is indeed a tremendous need to re-evaluate the current overbooking policy. According to our current data, we believe that the rate should be dropped significantly. It would be valuable, however, to supplement our calculations with some of the confidential data that American Airlines has access to, but that we do not.

# Models for Evaluating Airline Overbooking

Michael P. Schubmehl

Wesley M. Turner

Daniel M. Boylan

Harvey Mudd College

Claremont, CA

Advisor: Michael E.
Moody

# Introduction

We develop two models to evaluate overbooking policies.

The first model predicts the effectiveness of a proposed overbooking scheme, using the concept of expected marginal seat revenue (EMSR). This model solves the discount seat allocation problem in the presence of overbooking factors for each fare class and evaluates an overbooking policy stochastically.

The second model takes in historical flight data and reconstructs what the optimal seat allocation should have been. The percentage of overbooking revenue obtained in practice serves as a measure of the policy's value.

Finally, we examine the overbooking problem in light of the recent drastic changes to the airline industry and conclude that increased overbooking would bring short-term profits to most carriers. However, the long-term ill effects that have traditionally caused airlines to shun such a policy would be even more pronounced in a post-tragedy climate.

# Literature Review

There are two major ways that airlines try to maximize revenues: overbooking (selling more seats than are available on a given flight) and seat allocation (price discrimination). These measures are believed to save major airlines as much as half a billion dollars each year, in an industry with a typical yearly profit of about \$1 billion [Belobaba 1989].

Beckman [1958] models booking and no-shows in an effort to find an optimal overbooking strategy. He ignores advance cancellations, assuming that all cancellations are no-shows [Rothstein 1985]. A method that is easy to implement but sophisticated enough to allow for cancellations and group reservations was developed by Taylor [1962]. Versions of this model were implemented at Iberia Airlines [Shlifer and Vardi 1975], British Overseas Airways Corporation, and El Al Airlines [Rothstein 1985].

None of these approaches considers multiple fare classes.
Littlewood [1972] offers a simple two-fare allocation rule: A discount fare should be sold only if the discount fare is greater than or equal to the expected marginal return from selling the seat at full fare. This idea was generalized by Belobaba [1989] to include any number of fare classes and allow the integration of overbooking. We use expected marginal seat revenue in making predictions about overbooking schemes.

There is a multitude of work on the subject [McGill 1999]—according to Weatherford and Bodily [1992], there are more than 124,416 classes of models for variations of the yield management problem, though research has settled into just a few of these. Several authors have tried to improve on Belobaba's [1987] heuristic in the presence of three or more fare classes (for which it is demonstrably suboptimal) [Weatherford and Bodily 1992]; generally, these adaptive methods for obtaining optimal booking limits for single-leg flights rely on dynamic programming [McGill 1999].

After deregulation in 1978, airlines were no longer required to maintain a direct-route system to major cities. Many shifted to a hub-and-spoke system, and network effects began to grow more important. To maximize revenue, an airline may want to consider a passenger's full itinerary before accepting or denying a ticket request for a particular leg. For example, an airline might prefer to book a discount fare rather than one at full price if the passenger is continuing on to another destination (and thus paying an additional fare).

The first implementations of the origin-destination control problem considered segments of flights. The advantage of this was that a segment could be blacked out to a particular fare class, lowering the overall complexity of a booking scheme. Another method, virtual nesting, combines fare classes and flight schedules into distinct buckets [McGill 1999]. Inventory control on these buckets would then give revenue-increasing results.
Finally, the bid-price method deterministically assigns a value to different seats on a flight leg. The legs in an itinerary are then summed to establish a bid-price for that itinerary; a ticket request is accepted only if the fare exceeds the bid-price [McGill 1999].

The most realistic yield management problem takes into account five price classes. The ticket demands for different fare classes are randomized and correlated with one another to allow for sell-ups and the recapture of rejected customers on later flights. Passengers can no-show or cancel at any time. Group reservations are treated separately from individuals—their cancellation probability distribution is likely different. Currently, most work assumes that passengers who pay full fare would not first check for availability of a lower-class ticket; a more realistic model would allow buyers of a higher-class ticket to be diverted by a lower fare. A full accounting of network effects would consider the relative value of what Weatherford and Bodily [1992] term displacement—denying a discount passenger's ticket request to fly a multileg itinerary in favor of leaving one of the legs open to a full-fare passenger.

Unfortunately, while the algorithms for allocating seats and setting overbooking levels are highly developed, little work has been done on the problem of evaluating how effective these measures actually are. Our solution applies industry-standard methods to find optimal booking levels, then examines the actual booking requests for a given flight to determine how close to an optimal revenue level the scheme actually comes.

# Factors Affecting Overbooking Policy

# General Concerns

Overbooking is important largely because of the existence of multiple fare classes. With only one fare class, it would be easier for airlines to penalize customers for no-shows.
However, while most airlines offer nonrefundable discount tickets, they prefer not to penalize those who pay full fare, like business travelers, because these passengers account for most of the profits. + +The overbooking level of a plane is dictated by the likelihood of cancellations and of no-shows. An overbooking model compares the revenue generated by accepting additional reservations with the costs associated with the risk of overselling and decides whether additional sales are advisable. In addition, the "recapture" possibility can be considered, which is the probability that a passenger denied a ticket will simply buy a ticket for one of the airline's other flights. Since a passenger is more valuable to the airline buying a ticket on a flight that has empty seats to fill than on one that is already overbooked, a high recapture probability reduces the optimal overbooking level [Smith et al. 1992]. + +No major airline overbooks at profit-maximizing levels, because it could not realistically handle the problems associated with all the overloaded flights. This gives the overbooking optimization problem some important constraints. The total flight revenue is to be maximized, subject to the conditions that only a certain portion of flights have even one passenger denied boarding (one oversale), and that a bound is placed on the expected total number of oversales. Dealing with even one oversale is a hassle for the airline's staff, and they are not equipped to handle such problems on a large scale. Additionally, some research indicates that increased overbooking levels would most likely trigger an "overbooking war" [Suzuki 2002], which would increase short-term profits but would probably engender enough consumer resentment that the industry as a whole would lose business. + +While the overbooking problem sets a limit for sales on a flight as a whole, proper seat allocation sets an optimal point at which to stop selling tickets for individual fare levels. 
For example, a perfectly overbooked plane, loaded exactly to capacity, could be flying at far below its optimal revenue level if too many discount tickets were sold. The more expensive tickets are not for first-class seats and involve no additional luxuries above the discount tickets, apart from more lenient cancellation policies and the ability to buy the tickets a shorter time before the flight's departure.

# September 11, 2001

Since the September 11 terrorist attacks, there have been significant changes in the airline business. In addition to the forced cancellation of many flights in the immediate aftermath of the attacks and the extreme levels of cancellations and no-shows by passengers after that, passenger traffic has dropped sharply in general. The huge downturn in passenger levels has led to large reductions in service by most carriers.

In terms of the booking problem, there are fewer flights for passengers to spill over onto, which could increase the loss by an airline if it bumps a passenger from a flight. On the other hand, since passenger levels have fallen so far, it is less likely that an airline will overfill any given flight. The heightened security at airports will likely increase the passenger no-show rate somewhat, as passengers get delayed at security checkpoints. At the very least, it should almost completely remove the problem of "go-shows," passengers who show up for a flight but are not in the airline's records.

On the whole, optimal booking strategies have become even more vital as airlines have already lost billions of dollars, and some teeter on the brink of failure. Some overbooking tactics previously dismissed as too harmful in the long run might be worthwhile for companies in trouble. For example, an airline near failure might increase the overbooking rate to the level that maximizes revenue, without regard to the inconvenience and possible future resentment of its customers.
# Constructing the Model

# Objectives

A scheme for evaluating overbooking policies needs to answer two questions: how well should a new overbooking method perform, and how well is a current overbooking scheme already working? The first is best addressed by a simple model of an airline's booking procedures; given some setup for allocating seats to fare classes, candidate overbooking schemes can be laid on top and tested by simulation. This approach has the advantage that it provides insight into why an overbooking scheme is or is not effective and helps to illuminate the characteristics of an optimal overbooking approach.

The second question is, in some respects, a simpler one to answer. Given the actual (over)booking limits that were imposed on each fare class, and all available information on the actual requests for reservations, how much revenue might have been gained from overbooking, compared to how much actually was? This provides a very tangible measure of overbooking performance but very little insight into the reasons for the results.

The enormous number of factors affecting the design and evaluation of an overbooking policy forces us to make simplifying assumptions to construct models that meet both of these goals.

# Assumptions

- Fleet-wide revenues can be near-optimized one leg at a time.

Maximizing revenue involves complicated interactions between flights. For instance, a passenger purchasing a cheap ticket on a flight into a major hub might actually be worth more to the airline than a business-class passenger, on account of connecting flights. We assume that such effects can be compensated for by placing passengers into fare classes based on revenue potential rather than on the fare for any given leg. This assumption effectively reduces the network problem to a single-leg optimization problem.

- Airlines set fares optimally.

Revenue maximization depends strongly on the prices of various classes of tickets.
To avoid getting into the economics of price competition and supply/demand, we assume that airlines set prices optimally. This reduces revenue maximization to setting optimal fare-class (over)booking limits.

- Historical demand data are available and applicable.

The model needs to estimate future demand for tickets on any given flight. We assume that historical data are available on the number of tickets sold any given number of days $t$ before a flight's departure. In some respects, this assumption is unrealistic because of the problem of data censorship—that is, the failure of airlines to record requests beyond the booking limit for a fare class [Belobaba 1989]. On the other hand, statistical methods can be used to reconstruct this information [Boeing Commercial Airline Company 1982, 7-16; Swan 1990].

- Low-fare passengers tend to book before high-fare ones.

Discount tickets are often sold under advance purchase restrictions, for the precise reason that it enables price discrimination. Because of restrictions like these, and because travelers who plan ahead search for cheap tickets, low-fare passengers tend to book before high-fare ones.

# Predicting Overbooking Effectiveness

Disentangling the effects of overbooking, seat allocation, pricing schemes, and external factors on revenues of an airline is extremely complicated. To isolate the effects of overbooking as much as possible, we want a simple, well-understood seat allocation model that provides an easy way to incorporate various overbooking schemes. In light of this objective, we pass up several methods for finding optimal booking limits on single-leg flights detailed in, for example, Curry [1990] and Brumelle [1993], in favor of the simpler expected marginal seat revenue (EMSR) method [Belobaba 1989].
EMSR was developed as an extension of the well-known rule of thumb, popularized by Littlewood [1972], that revenues are maximized in a two-fare system by capping sales of the lower-class ticket when the revenue from selling an additional lower-class ticket is balanced by the expected revenue from selling the same seat as an upper-class ticket. In the EMSR formulation, any number of fare classes are permitted, and the goal is "to determine how many seats not to sell in the lowest fare classes and to retain for possible sale in higher fare classes closer to departure day" [Belobaba 1989].

The only information required to calculate booking levels in the EMSR model is a probability density function for the number of requests that will arrive before the flight departs, in each fare class and as a function of time. For simplicity, this distribution can be assumed to be normal, with a mean and standard deviation that change as a function of the time remaining. Thus, the only information an airline would need is a historical average and standard deviation of demand in each class as a function of time. Ideally, the information would reflect previous instances of the particular flight in question. Let the means and standard deviations in question be denoted by $\mu_i(t)$ and $\sigma_i(t)$ for each fare class $i = 1, 2, \ldots, k$. Then the probability that demand is greater than some specified level $S_i$ is given by

$$
\bar{P}_i(S_i, t) \equiv \frac{1}{\sqrt{2\pi}\,\sigma_i(t)} \int_{S_i}^{\infty} e^{-(r - \mu_i(t))^2 / 2\sigma_i(t)^2}\, dr.
$$

This spill probability is the likelihood that the $S_i$th ticket would be sold if offered in the $i$th category.
If we further allow $f_i(t)$ to denote the expected revenue resulting from a sale to class $i$ at a time $t$ days prior to departure, we can define

$$
\mathrm{EMSR}_i(S_i, t) = f_i(t) \cdot \bar{P}_i(S_i, t),
$$

or simply the revenue for a ticket in class $i$ times the probability that the $S_i$th seat will be sold. The problem, however, is to find the number of tickets $S_j^i$ that should be protected from the lower class $j$ for sale to the upper class $i$ (ignoring other classes for the moment). The optimal value for $S_j^i$ satisfies

$$
\mathrm{EMSR}_i\left(S_j^i, t\right) = f_j(t), \tag{1}
$$

so that the expected marginal revenue from holding the $S_j^i$th seat for class $i$ is exactly equal to (in practice, slightly greater than) the revenue from selling it immediately to someone in the lower class $j$. The booking limits that should be enforced can be derived easily from the optimal $S_j^i$ values by letting the booking limit $B_j$ for class $j$ be

$$
B_j(t) = C - S_j^{j+1} - \sum_{i < j} b_i(t), \tag{2}
$$

that is, the capacity $C$ of the plane, less the protection level of the class above $j$ from class $j$ and less the total number of seats already reserved. Sample EMSR curves, with booking limits calculated in this fashion, are shown in Figure 1.

![](images/d9f49e9a5599a8276a95b4b43fa8f0067a806effb421636297dfcf5cc9f3f878.jpg)
Figure 1. Expected marginal seat revenue (EMSR) curves for three class levels, with the highest-revenue class at the top. Each curve represents the revenue expected from protecting a particular seat to sell to that class. Also shown are the resulting booking limits for each of the lower classes—that is, the levels at which sales to the lower class should stop to save seats for higher fares.
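Under the normal-demand assumption, condition (1) can be inverted in closed form: $\mathrm{EMSR}_i(S, t) = f_j(t)$ means $\bar{P}_i(S, t) = f_j/f_i$, so $S_j^i = \mu_i + \sigma_i\,\Phi^{-1}(1 - f_j/f_i)$, where $\Phi^{-1}$ is the inverse standard normal CDF. A minimal sketch (ours, not the authors' code; the fare and demand numbers are only examples):

```python
from statistics import NormalDist

def protection_level(mu, sigma, f_upper, f_lower):
    """Seats S to protect for the upper class: solves condition (1),
    f_upper * P(demand > S) = f_lower, for normally distributed demand."""
    return mu + sigma * NormalDist().inv_cdf(1 - f_lower / f_upper)

def booking_limit(capacity, protected, already_booked):
    """Equation (2): capacity, less seats protected for the class above,
    less seats already reserved in higher classes."""
    return capacity - protected - sum(already_booked)

# Example: protect full-fare ($250) seats against discount ($150) demand,
# with illustrative demand mean 20 and standard deviation 5.
S = protection_level(mu=20, sigma=5, f_upper=250, f_lower=150)
B = booking_limit(capacity=109, protected=S, already_booked=[])
```

In practice $S$ would be rounded so that, as the text notes, the expected marginal revenue of the last protected seat is slightly greater than the lower fare.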
This formulation does not account for overbooking; if we allow each fare class $i$ to be overbooked by some factor $OV_i$, the optimality condition (1) becomes

$$
\mathrm{EMSR}_i\left(S_j^i, t\right) = f_j(t) \cdot \frac{OV_i}{OV_j}. \tag{3}
$$

This can be understood in terms of an adjustment to $f_i$ and $f_j$; the overbooking factors are essentially cancellation probabilities, so we use each $OV_i$ to deflate the expected revenue from fare class $i$. Then

$$
\bar{P}_i(S_j^i, t) \cdot \frac{f_i(t)}{OV_i} = \frac{f_j(t)}{OV_j},
$$

which is equivalent to (3). Note that the use of a single overbooking factor for the entire cabin (that is, $OV_i = OV$) causes the $OV_i$ and $OV_j$ in (3) to cancel. Nonetheless, the booking limits for each class are affected, because the capacity of the plane $C$ must be adjusted to account for the extra reservations, so now

$$
C^* = C \cdot OV
$$

and the booking limits in (2) are adjusted upward by replacing $C$ with $C^*$.

The EMSR formalism gives us the power to evaluate an overbooking scheme theoretically by plugging its recommendations into a well-understood, stable model and evaluating them. Given the EMSR booking limits, which can be updated dynamically as booking progresses, and the prescribed overbooking factors, a simulated string of requests can be handled. Since the EMSR model involves only periodic updates to establish limits that are fixed over the course of a day or so, a set of $n$ requests can be handled with two lookups each (booking limit and current booking level), one subtraction, and one comparison; so all $n$ requests can be processed in $\mathcal{O}(n)$ time. An EMSR-based approach would thus be practical in a real-world real-time airline reservations system, which often handles as many as 5,000 requests per second.
Indeed, systems derived from EMSR have been adopted by many airlines [McGill 1999].

# Evaluating Past Overbookings

The problem of evaluating an overbooking scheme that has already been implemented is somewhat less well studied than the problem of theoretically evaluating an overbooking policy. One simple approach, developed by American Airlines in 1992, measures the optimality of overbooking and seat allocation separately [Smith et al. 1992]. Their overbooking evaluation process assumes optimal seat allocation and, conversely, their seat-allocation evaluation scheme assumes optimal overbooking. Under this assumption, an overbooking scheme is evaluated by estimating the revenue under optimal overbooking in two ways:

- If a flight is fully loaded and no passenger is denied boarding, the flight is considered to be optimally overbooked and to have achieved maximum revenue.
- If $n$ passengers are denied boarding, the money lost due to bumping these passengers is added back in, and the $n$ lowest fares paid by passengers for the flight are subtracted from revenue.
- On the other hand, if there are $n$ empty seats on the plane, the $n$ highest-fare tickets that were requested but not sold are added to create the maximum revenue figure.

Their seat-allocation model estimates the demand for each flight by calculating a theoretical demand for each fare class and then setting the minimum flight revenue (by filling the seats lowest-class first) and the maximum flight revenue (by filling the seats highest-class first). To estimate demand, we use the information on the flight's sales up to the point where each class closed. By assuming that demand is increasing for each class, we can project the number of requests that would have occurred had the booking limits been disregarded.

Given these projected additional requests and the actual requests received before closing, it is straightforward to compute the best- and worst-case overbooking scenarios.
The worst-case revenue $R_-$ is determined by using no booking limits and taking reservations as they come, and the best-case revenue $R_+$ is determined by accommodating high-fare passengers first, giving the leftovers to the lower classes. The difference between these two figures is the revenue to be gained by the use of booking limits. Thus, the performance of a booking scheme that generates revenue $R$ is

$$
p = \frac{R - R_-}{R_+ - R_-} \cdot 100\%, \tag{4}
$$

representing the percentage of the possible booking revenue actually achieved.

We select this method for evaluating booking schemes after the fact.

# Analysis of the Models

# Tests and Simulations

The EMSR method requires information on demand as a function of time. Although readily available to an airline, such information is not widely published in detailed form. Li [2001] provides enough data to construct a rough piecewise-linear picture of demand remaining as a function of time, shown in Figure 2.

This information can be input to the EMSR model to produce optimal booking limits that evolve in time. A typical situation near the beginning of ticket sales was shown in Figure 1, while the evolution of the limits themselves is plotted in Figure 3.

The demand information in Figure 2 can also be used to simulate requests for reservations. By taking the difference between the demand remaining at day $t$ and at day $(t-1)$ before departure, the expected demand on day $t$ can be determined. The actual number of requests generated on that day is then given by a Poisson random variable with parameter $\lambda$ equal to the expected number of sales [Rothstein 1971]. The requests generated in this manner can be passed to a request-handling simulation that looks at the most current booking limits and then accepts or denies ticket requests based on the number of reservations already confirmed and the reservations limit.
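The request-generation and handling loop just described can be sketched as follows. This is our own illustration, not the authors' code: the daily demand profile is a placeholder, the class limits are held fixed rather than updated dynamically, and the Poisson sampler is Knuth's classic method (the Python standard library does not provide one).

```python
import math
import random

def poisson(lam, rng):
    """Sample a Poisson(lam) variate via Knuth's multiplication method."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def simulate_bookings(daily_demand, limits, rng):
    """daily_demand[d][i]: expected class-i requests on day d.
    limits[i]: booking limit for class i (held fixed here).
    Each request is accepted iff its class is below its booking limit."""
    booked = [0] * len(limits)
    for day in daily_demand:
        for i, lam in enumerate(day):
            for _ in range(poisson(lam, rng)):
                if booked[i] < limits[i]:
                    booked[i] += 1
    return booked

rng = random.Random(0)
booked = simulate_bookings(
    daily_demand=[[0.2, 1.5, 3.0]] * 20,  # placeholder demand profile
    limits=[11, 41, 57],                  # limits matching the Figure 4 totals
    rng=rng,
)
```

Per request this is two lookups, one comparison, and one increment, consistent with the $\mathcal{O}(n)$ cost noted above.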
An example of this booking process is illustrated in Figure 4.

![](images/029812c6742a99794033dc7fb5d38dfaaa1cf094af750a256636f31f77fe027c.jpg)
Expected Remaining Demand as Flight Time Approaches
Figure 2. Demand remaining as a function of time for each of three fare classes, with the highest fare class on top. The curves represent the fraction of tickets that have yet to be purchased. Note that, for example, demand for high-fare tickets kicks in much later than low-fare demand. (Data interpolated from Li [2001].)

The results of the booking process provide an ideal testbed for the revenue opportunity model employed to evaluate overbooking performance. The simulation conducted for Figure 4 had demand values of $\{11, 41, 57\}$ for classes 1, 2, and 3, respectively, before ticket sales were capped. A linear forward projection of these sales rates indicates that they would have reached $\{18, 49, 69\}$ had the classes remained open. Given fares of $\{\$250, \$150, \$100\}$, the minimum revenue would be

$$
R_- = \$100(69) + \$150(40) + \$250(0) = \$12{,}900
$$

and the maximum revenue would be

$$
R_+ = \$250(18) + \$150(49) + \$100(42) = \$16{,}050.
$$

The actual revenue according to the EMSR formalism was

$$
R = \$100(57) + \$150(41) + \$250(11) = \$14{,}600,
$$

so the efficiency is

$$
p = \frac{R - R_-}{R_+ - R_-} \cdot 100\% = \frac{\$14{,}600 - \$12{,}900}{\$16{,}050 - \$12{,}900} \cdot 100\% = 54\%
$$

without the use of a complicated overbooking scheme. This is not close to the efficiencies reported in Smith et al. [1992], which cluster around $92\%$. This relative inefficiency is to be expected, however, from a simplified booking scheme given incomplete booking request data.
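The worked example above can be reproduced mechanically. This sketch (ours) fills the 109 seats lowest-fare-first for $R_-$ and highest-fare-first for $R_+$, then applies equation (4):

```python
def bound_revenue(demands, fares, capacity, best):
    """Fill seats highest-fare-first (best=True) or lowest-fare-first
    (best=False), honoring the projected demand in each class."""
    order = sorted(range(len(fares)), key=lambda i: fares[i], reverse=best)
    remaining, revenue = capacity, 0
    for i in order:
        seats = min(demands[i], remaining)
        revenue += fares[i] * seats
        remaining -= seats
    return revenue

fares = [250, 150, 100]     # classes 1, 2, 3
projected = [18, 49, 69]    # projected demand had classes stayed open
actual = [11, 41, 57]       # tickets actually sold; fills the plane exactly
capacity = sum(actual)      # 109 seats

R_minus = bound_revenue(projected, fares, capacity, best=False)
R_plus = bound_revenue(projected, fares, capacity, best=True)
R = sum(f * s for f, s in zip(fares, actual))
p = (R - R_minus) / (R_plus - R_minus) * 100  # equation (4)
```

Running this reproduces the figures in the text: $R_- = \$12{,}900$, $R_+ = \$16{,}050$, $R = \$14{,}600$, and $p \approx 54\%$.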
![](images/27b28a3ee1a99794033... "")

![](images/27b28a3ee3532056c5754a5083ab3babb7e4cd27c45ca53e4a.jpg)
Evolution of Booking Limits by the EMSR Method
Figure 3. Booking limits for each class are dynamically adjusted to account for tickets already sold. For illustrative purposes, the number of tickets already sold is replaced here with the number of tickets that should have been sold according to expectations. In this case, the booking limits estimated at the beginning of the process are fairly accurate and require relatively little updating.

![](images/6d67b5fd847cdf385e38f17d9fb0f8c9cb5cc056940655a7c65c021b8e51f476.jpg)
Total Bookings by Fare Class: First Sale to Flight Time
Figure 4. The EMSR-based booking limits are used to decide whether to accept or reject a sequence of ticket requests. These requests follow a Poisson distribution where the parameter $\lambda$ varies with time to match the expected demand. Each fare class reaches its booking limit, as desired, so the flight is exactly full. Incorporating overbooking factors shifts the limits up accordingly.

# Strengths and Weaknesses

# Strengths

- Applies widely accepted, industry-standard techniques.

Although more advanced (and optimal) algorithms exist and are in use, EMSR and its descendants are still widely used in industry and can come close to optimality. The EMSR scheme, tested as-is on a real airline, produced revenue gains of as much as $15\%$ [Belobaba 1989].

Our method for determining the optimality of a scheme after the fact is also based on tried-and-true methods developed by American Airlines [Smith et al. 1992].

- Simplicity.

Since it does not take into account as many factors as other booking models, EMSR is easier to deal with computationally. While a simple model may not be able to model a major airline with complete accuracy, an optimal pricing scheme can be made using only three fare classes [Li 2001].
# Weaknesses

- Neglects network effects.

We treat the problem of optimizing each flight as if it were an independent problem, although it is not.

- Ignores sell-ups.

In considering the discount seat allocation problem, we treat the demands for the fare classes as constants, independent of one another. This is not the case, because of the possibility of sell-ups. If the number of tickets sold in a lower fare class is restricted, then there is some probability that a customer requesting a ticket in that class will buy a ticket at a more expensive fare. This means it is possible to convert low-fare demand into high-fare demand, which would suggest protecting a higher number of seats for high fares than calculated by the model that we use. Sell-ups would be straightforward to incorporate into EMSR, but doing so would require additional information [Belobaba 1989].

- Discounts the possibility of recapture.

Similar to sell-ups, the recapture probability is the probability that a passenger unable to buy a ticket at a certain price on a given flight will buy a ticket on a different flight. Depending on the recapture probability for each fare class, more or fewer seats might be allocated to discount fares.

# Recommendations on Bumping Policy

In 1999, an average of only 0.88 passengers per 10,000 boardings was involuntarily bumped. Airlines are not required to keep records of the number of voluntary bumps, so it is impossible to determine a general bump rate.

Before bumping passengers involuntarily, the airline is required to ask for volunteers. Because there are no regulations on compensation for voluntary bumps, this is often a cheaper and more attractive method for airlines anyway. If too few people volunteer, the airline must pay those denied boarding $200\%$ of the sum of the values of the passengers' remaining flight coupons, with a maximum of $400.
This maximum is decreased to $200 if the airline arranges for a flight that will arrive less than 2 hours after the original flight. The airline may also substitute the offer of free or reduced fare transportation in the future, provided that the value of the offer is greater than the cash payment otherwise required. Alternatively, the airline may simply arrange alternative transportation if it is scheduled to arrive less than an hour after the original flight. + +Auctions in which the airline offers progressively higher compensation for passengers who give up their seats are both the cheapest and the most common practice. As long as the airline does not engage in so much overbooking that it cannot find suitable reroutes for passengers bumped from their original itineraries, no alternatives to this policy need to be considered. + +# Conclusions + +The two models presented in this paper work together to evaluate overbooking schemes by simulating their effects in advance and by quantifying their effects after implementation. + +The expected marginal seat revenue (EMSR) model predicts overbooking scheme effectiveness. It determines the correct levels of protection for each fare class above the lowest—that is, how many seats should be reserved for possible sale at later dates and higher fares. Overbooking factors can be specified separately for each fare class, so the model effectively takes in overbooking factors and produces booking limits that can be used to handle ticket requests. + +The revenue opportunity model attempts to estimate the maximum revenue from a flight under perfect overbooking and discount allocation. This is accomplished by estimating the actual demand for seats, then calculating the revenue that these seats would generate if sold to the highest-paying customers. Simple calculations produce the ideal overbooking cap and the optimal discount allocation for the flight. 
Thus, this model effectively represents how the airline would sell tickets if it had perfect advance knowledge of demand.

After the September terrorist attacks and their subsequent catastrophic effects on the airline industry, heightened airport security and fearful passengers will increase no-show and cancellation rates, seeming to dictate increasing overbooking levels to reclaim lost profits.

Airlines considering such action should be cautioned, however, that the negative effects of increased overbooking could outweigh the benefits. With reduced airline service, finding alternative transportation for displaced passengers could be more difficult. The effect of denying boarding to more passengers, along with the greater inconvenience of being bumped, could seriously shake consumers' already-diminished faith in the airline industry. With airlines already losing huge numbers of customers, it would be a mistake to risk permanently losing them to alternatives like rail and auto travel by alienating them with frequent overselling.

# References

Beckmann, M.J. 1958. Decision and team problems in airline reservations. *Econometrica* 26: 134-145.
Belobaba, Peter. 1987. Airline yield management: An overview of seat inventory control. *Transportation Science* 21: 63-73.
_____. 1989. Application of a probabilistic decision model to airline seat inventory control. *Operations Research* 37: 183-197.
Biyalogorsky, Eyal, et al. 1999. Overselling with opportunistic cancellations. *Marketing Science* 18 (4): 605-610.
Boeing Commercial Airplane Company. 1982. *Boeing Promotional Fare Management System: Analysis and Research Findings*. Seattle, WA: Boeing.
Bowen, B.D., and D.E. Headley. 2000. Airline Quality Rating 2000. http://www.uomaha.edu/~unoai/research/aqr00/.
Brumelle, S.L., and J.I. McGill. 1993. Airline seat allocation with multiple nested fare classes. *Operations Research* 41: 127-137.
Chatwin, Richard E. 2000.
Optimal dynamic pricing of perishable products with stochastic demand and a finite set of prices. *European Journal of Operational Research* 125: 149-174.
Curry, R.E. 1990. Optimal airline seat allocation with fare classes nested by origins and destinations. *Transportation Science* 24: 193-204.
Davis, Paul. 1994. Airline ties profitability to yield management. *SIAM News* 27 (5); http://www.siam.org/siamnews/mtc/mtc694.htm.
Li, M.Z.F. 2001. Pricing non-storable perishable goods by using a purchase restriction with an application to airline fare pricing. *European Journal of Operational Research* 134: 631-647.
Littlewood, K. 1972. Forecasting and control of passenger bookings. *AGIFORS Symposium Proceedings* 12.
McGill, J.I., and G.J. van Ryzin. 1999. Revenue management: Research overview and prospects. *Transportation Science* 33 (2): 233-256.
Rothstein, M. 1971. An airline overbooking model. *Transportation Science* 5: 180-192.
Rothstein, Marvin. 1985. OR and the airline overbooking problem. *Operations Research* 33 (2): 237-247.
Ross, Robert. 1998. Embry-Riddle experts reveal airfare secrets. http://comm.db.erau.edu/leader/fall98/priced.html.
Shlifer, R., and Y. Vardi. 1975. An airline overbooking policy. *Transportation Science* 9: 101-114.
Swan. 1990. Revenue management forecasting biases. Working paper. Seattle, WA: Boeing Commercial Aircraft.
Smith, Barry C., et al. 1992. Yield management at American Airlines. *Interfaces* 22 (1): 8-31.
Suzuki, Yoshinori. 2002. An empirical analysis of the optimal overbooking policies for U.S. major airlines. *Transportation Research Part E* 38: 135-149.
Taylor, C.J. 1962. The determination of passenger booking levels. *AGIFORS Symposium Proceedings*, vol. 2. Fregene, Italy.
Weatherford, L.R., and S.E. Bodily. 1992. A taxonomy and research overview of perishable-asset revenue management: Yield management, overbooking, and pricing. *Operations Research* 40: 831-844.
![](images/6632bbf957dd26e4f7768582eabb1077053872c5152ad603eeb6790bb6d33649.jpg)

Presentation by Richard Neal (MAA Student Activities Committee Chair) of the MAA award to Daniel Boylan and Wesley Turner of the Harvey Mudd College team (Michael Schubmehl could not come), after their presentation at the MAA Mathfest in Burlington, VT, in August. On the right is Ben Fusaro, Founding Director of the MCM. Photo by Ruth Favro, Lawrence Tech University.

# Letter to the CEO of a Major Airline

Airline overbooking is just one facet of a revenue management problem that has been studied extensively in the operations research literature. Airlines have been practicing overbooking since the 1940s, but early models of overbooking considered only the most rudimentary cases. Most importantly, they did not take into account the revenue-maximizing potential of price discrimination: charging different fares for identical seats. In order to maximize yield, it is particularly critical to price discriminate between business and leisure travelers. That is, when filling the plane, book as many full-fare passengers and as few discount-fare passengers as possible.

The implementation of a method of yield management can have dramatic effects on an airline's revenue. American Airlines managed its seat inventory to a $1.4 billion increase in revenue from 1989 to 1992, about 50% more than its net profit for the same period. Controlling the mix of fare products can translate into revenue increases of $200 million to $500 million for carriers with total revenues of $1 billion to $5 billion.

Though several decision models of airline booking have been developed over the years, comparing one scheme to another remains a difficult task. We have taken a two-pronged approach to this problem, both simulating and measuring a booking scheme's profitability.
In order to simulate a booking scheme's effect, we used the expected marginal seat revenue (EMSR) model proposed by Belobaba [1989] to generate near-optimal decisions on whether to accept or deny a ticket request in a given fare class. The EMSR model accepts as input overbooking levels for each of the fare classes that compose a flight, so different policies can be plugged in for testing.

Our approach to measuring a current scheme's profitability is similar to one used at American Airlines [Smith et al. 1992]. We compare the actual revenue generated by a flight with an ideal level calculated with the benefit of hindsight, as well as with a baseline level that would have been generated had no yield management been used. By calculating the percentage of this spread earned by a flight employing a particular scheme, we are able to gauge the effectiveness of different booking schemes.

It is our hope that these models will prove useful in evaluating your airline's overbooking policies. Simulations should provide insight into the properties of an effective scheme, and measurements after the fact will help to provide performance benchmarks. Finally, while it may be tempting to increase overbooking levels in order to compensate for lost revenues in the post-tragedy climate, our results indicate that doing so will probably hurt long-term profits more than it will help.

Cordially,

Michael P. Schubmehl, Wesley M. Turner, and Daniel M. Boylan

# Probabilistically Optimized Airline Overbooking Strategies, or “Anyone Willing to Take a Later Flight?!”

Kevin Z. Leder
Saverio E. Spagnolie
Stefan M. Wild
University of Colorado
Boulder, CO

Advisor: Anne M. Dougherty

# Introduction

We develop a series of mathematical models to investigate relationships between overbooking strategies and revenue.

Our first models are static, in the sense that passenger behavior is predominantly time-independent; we use a binomial random variable to model consumer behavior.
We construct an auction-style model for passenger compensation. + +Our second set of models is more dynamic, employing Poisson processes for continuous time-dependence on ticket purchasing/cancelling information. + +Finally, we consider the effects of the post-September 11 market on the industry. We consider a particular company and flight: Frontier Airlines Flight 502. Applying the models to revenue optimization leads to an optimal booking limit of $15\%$ over flight capacity and potentially nets Frontier Airlines an additional $2.7 million/year on Flight 502, given sufficient ticket demand. + +# Frontier Airlines: Company Overview + +Frontier Airlines, a discount airline and the second largest airline operating out of Denver International Airport (DIA), serves 25 cities in 18 states. Frontier offers two flights daily from DIA to LaGuardia Airport in New York. We focus on Flight 502. + +# Technical Considerations and Details + +We discuss regulations for handling bumped passengers, airplane specifications, and financial interests. + +# Overbooking Regulations + +When overbooking results in overflow, the Department of Transportation (DOT) requires airlines to ask for volunteers willing to be bumped in exchange for compensation. However, the DOT does not specify how much compensation the airlines must give to volunteers; in other words, negotiations and auctions may be held at the gate until the flight's departure. A passenger who is bumped involuntarily is entitled to the following compensation: + +- If the airline arranges substitute transportation such that the passenger will reach his/her destination within one hour of the original flight's arrival time, there is no obligatory compensation. 
- If the airline arranges substitute transportation such that the passenger will reach his/her destination between one and two hours after the original flight's arrival time, the airline must pay the passenger an amount equal to the one-way fare for the flight to the final destination.
- If the substitute transportation is scheduled to arrive any later than two hours after the original flight's arrival time, or if the airline does not make any substitute travel arrangements, the airline must pay an amount equal to twice the cost of the fare to the final destination.

# Aircraft Information

Frontier offers only one class of service to all passengers. Thus, we base our overbooking models on single-class aircraft.

# Financial Considerations

Airline booking considerations are frequently based on the break-even load-factor, the percentage of airplane seat capacity that must be filled for a particular flight to incur neither loss nor profit. The break-even load-factor for Flight 502 in 2001 was $57.8\%$.

# Assumptions

We need concern ourselves only with the sale of restricted tickets. Frontier's are nonrefundable, save for the ability to transfer to another Frontier flight for $60 [Frontier 2001]. Restricted tickets represent all but a very small percentage of all tickets, and many ticket brokers, such as Priceline.com, sell only restricted tickets.

- Ticketholders who don't show up at the gate spend $60 to transfer to another flight.
- Bumped passengers from morning Flight 502 are placed, at the latest, 4 h 35 min later on Frontier's afternoon Flight 513 to the same destination. Frontier Airlines first attempts to place bumped passengers on other airlines' flights to the same destination. If it can't do so, Frontier bumps other passengers from the later Frontier flight to make room for the originally bumped passengers.
- The annual effects/costs associated with bumping passengers involuntarily are negligible in comparison to the annual effects/costs of bumping passengers voluntarily. According to statistics provided by the Department of Transportation, $4\%$ of all airline passengers are bumped voluntarily, while only 1.06 passengers in 10,000 are bumped involuntarily. With a maximum delay for bumped passengers of $4 \mathrm{~h~} 35 \mathrm{~min}$, the average annual cost to Frontier of involuntary bumping is on the order of $100,000, negligible compared to the costs of voluntary bumping.

# The Static Model

Our first model for optimizing revenues is static, in the sense that passenger behavior is predominantly time-independent: All passengers (save no-shows) arrive at the departure gate independently. This model does not account for when passengers purchase their tickets. This system may be modeled by the following steps:

- Introduce a binomial random variable for the number of passengers who show up for the flight.
- Define a total profit function dependent upon this random variable.
- Apply this function to various consumer behavior patterns.
- Compute (for each behavioral pattern) an optimal number of passengers to overbook.

# A Binomial Random Variable Approach

We let the binomial random variable $X$ be the number of ticketholders who arrive at the gate after $B$ tickets have been sold; thus, $X \sim \operatorname{Binomial}(B, p)$. Numerous airlines consistently report that approximately $12\%$ of all booked passengers do not show up at the gate (due to cancellations and no-shows) [Lufthansa 2000], so we take $p = .88$:

$$
\Pr\{i \text{ passengers arrive at the gate}\} = \Pr\{X = i\} = \binom{B}{i} p^{i} (1 - p)^{B - i}.
$$

# Modeling Revenue

We define the following per-flight total profit function and subsequently present a detailed explanation.
+ +$$ +\begin{array}{l} T _ {p} (X) = (B - X) R + \\ \left\{ \begin{array}{l l} \operatorname {A i r f a r e} \times X - \operatorname {C o s t} _ {\text {F l i g h t}}, & X \leq C _ {\S}; \\ \operatorname {A i r f a r e} - \operatorname {C o s t} _ {\text {A d d}} \times (X - C _ {\S}), & C _ {\S} < X \leq C; \\ \operatorname {A i r f a r e} - \operatorname {C o s t} _ {\text {A d d}} \times (X - C _ {\S}) - \operatorname {B u m p} (X - C), & X > C, \end{array} \right. \\ \end{array} +$$ + +where + +$R =$ transfer fee for no-shows and cancellations, + +$B =$ total number of passengers booked, + +Airfare = a constant + +$\mathrm{Cost}_{\mathrm{Flight}} = \text{total operating cost of flying the plane}$ + +$\mathrm{Cost}_{\mathrm{Add}} = \mathrm{cost}$ to place one passenger on the flight + +Bump = the Bump function (to be defined) + +$C_{\overline{\mathbb{S}}}$ = number of passengers required to break even on the flight + +$C =$ the full capacity of the plane (number of seats) + +For Airfare, we use the average cost of restricted-ticket fare over a one-week period in 2002: $316.$ CostFlight is based on the break-even load-factor of $57.8\%$ ; for Flight 502, we take $\text{CostFlight} = \\(24,648$ [Frontier Airlines 2001]. The average cost associated with placing one passenger on the plane is $\text{CostAdd} \approx \$ 16 \). The break-even occupancy is determined from the break-even load-factor; since Flight 502 uses an Airbus A319 with 134 seats, we take $C = 134$ and $C_{\S} = 78$ . + +# The Bump Function + +We consider various overbooking strategies, the last three of which translate directly into various Bump functions. + +No Overbooking +- Bump Threshold Model We assign a "Bump Threshold" (BT) to each flight, a probability of having to bump one or more customers from a flight given $B$ and $p$ : + +$$ +P r \{X > \text {f l i g h t c a p a c i t y} \} < \mathrm {B T}. +$$ + +We take $\mathrm{BT} = 5\%$ of flight capacity. 
The probability that more than $N$ ticketholders arrive at the gate, given $B$ tickets sold, is

$$
\Pr\{X > N\} = 1 - \Pr\{X \leq N\} = 1 - \sum_{i = 0}^{N} \binom{B}{i} p^{i} (1 - p)^{B - i}.
$$

This simplistic model is independent of revenue and produces (through simple iteration) an optimal number of ticket sales $B$ such that bumping is expected to occur on less than $5\%$ of flights.

- Linear Compensation Plan This plan assumes that there is a fixed cost associated with bumping a passenger, the same for each passenger. The related Bump function is

$$
\operatorname{Bump}(X - C) = B_{\$} \times (X - C),
$$

where $(X - C)$ is the number of bumped passengers and $B_{\$}$ is the cost of handling each.

- Nonlinear Compensation Plan Steeper penalties must be considered, since there is a chain reaction of expenses incurred when bumping passengers from one flight causes future bumps on later flights. Here we assume that the Bump function is exponential. Assuming that flight vouchers are still adequate compensation at an average cost of $2 \times \text{Airfare} + \$100 = \$732$ when there are 20 bumped passengers, we apply the cost equation

$$
\operatorname{Bump}_{\mathrm{NL}}(X - C) = B_{\$}(X - C) e^{r(X - C)},
$$

where $B_{\$}$ is the compensation constant and $r = r(B_{\$})$ is the exponential rate, chosen to fit the curve to the points $(0, 316)$ and $(20, 732)$.

- Time-Dependent Compensation Plan (Auction) The primary shortcoming of the nonlinear compensation plan is that it does not deal with flights with too few voluntarily bumped passengers, where the airline must increase its compensation offering. We now approximate the costs of an auction-type compensation plan.

This plan assumes that the airline knows the number of no-shows and cancellations one-half hour prior to departure. The following auction system is employed.
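The bump-threshold iteration can be sketched directly from the binomial tail probability; this is a Python reconstruction under the Flight 502 parameters ($C = 134$, $p = .88$), not the authors' own program:

```python
from math import comb

def prob_overflow(B, capacity=134, p=0.88):
    """P(X > capacity) for X ~ Binomial(B, p): probability of bumping."""
    return 1.0 - sum(comb(B, i) * p**i * (1 - p)**(B - i)
                     for i in range(capacity + 1))

def max_bookings(threshold=0.05, capacity=134, p=0.88):
    """Largest B whose overflow probability stays below the bump threshold."""
    B = capacity                        # selling only C tickets never overflows
    while prob_overflow(B + 1, capacity, p) < threshold:
        B += 1
    return B

print(max_bookings())
```

For these parameters the search lands at the paper's figure of $B = 145$ tickets.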
At 30 min before departure, the airline offers flight vouchers to volunteers willing to be bumped, equivalent in cost to the original airfare. This offer stands for 15 min, at which time the offer increases exponentially, up to the equivalent of $948 by departure time. We chose this number as twice the original airfare (which is the maximum obligatory compensation for involuntarily bumped passengers forced to wait more than 2 h), plus one more airfare, in the hope that treating the customers so favorably will result in future business from the same customers. These specifications are enough to determine the corresponding time-dependent Compensation function, plotted in Figure 1.

$$
\operatorname{Compensation}(t) = \begin{cases} 316, & 0 \leq t \leq 15 \text{ min}; \\ 105.33\, e^{0.07324 t}, & 15 \text{ min} < t \leq 30 \text{ min}. \end{cases}
$$

![](images/7e3deca2b731ce0ad4214e552ee0f9db96b599ad127671f8e44d9aac4fe77c51.jpg)
Figure 1. Auction offering (compensation).

Consideration of passenger behavior suggests that we use a Chebyshev weighting distribution for offer-acceptance times (shown in Figure 2): a significant number of passengers will take flight vouchers as soon as they become available. We simulate this random variable, which has probability density function

$$
f(s) = \frac{1}{\pi \sqrt{1 - s^{2}}}, \qquad s \in [-1, 1],
$$

and cumulative distribution function

$$
F(\tau) = \int_{-1}^{\tau} \frac{1}{\pi \sqrt{1 - \eta^{2}}} \, d\eta = \frac{1}{2} + \frac{1}{\pi} \sin^{-1}(\tau),
$$

where $\eta$ is a dummy variable. Inverting the cumulative distribution function results in a method for generating random variables with the Chebyshev distribution [Ross 1990]:

$$
F^{-1}(U) = \sin\left[\pi \left(U - \frac{1}{2}\right)\right],
$$

![](images/f7e5401fd0432147d77a3a4b03965653f4205ce81a73f8e4a2ac0d9175581373.jpg)
Figure 2.
Chebyshev weighting function for offer acceptance

where $U$ is a uniform random variable on $[0, 1]$.

With a linear transformation from the Chebyshev domain $[-1, 1]$ to the time interval $[0, 30]$ via $t = 15\tau + 15$, we obtain a random variable $t$ that takes on values from 0 to 30 according to the density function $f(s)$. Figure 3 shows the results of using this process to generate 100,000 time values. We use this random variable to assign times for compensation-offer acceptance under the auction plan.

The total cost of bumping $(X - C)$ passengers is $\sum_{i=1}^{X-C} \operatorname{Compensation}(t_i)$.

# Optimizing Overbooking Strategies

Our goal is to maximize the expected value of the total profit function, $E[T_P(X)]$, given the variability of the bump function and the probabilistic passenger arrival model.

There are competing dynamic effects at work in the total profit function. Ticket sales are desirable, but there is a point at which the cost of bumping becomes too great. Also, the variability of the number of passengers who show up affects the dynamics. The expected value of the total profit function is

$$
E[T_{P}(X)] = \sum_{i = 0}^{B} T_{P}(i) \binom{B}{i} p^{i} (1 - p)^{B - i}.
$$

![](images/27f22e0d5ce1e060962ae362fff5d97540b46795ce63035d04816d6eb6491579.jpg)
Figure 3. Histogram of 100,000 draws from the Chebyshev distribution.

We optimize revenue by finding the most appropriate booking limit $B$ for any bump function. Solving such a problem analytically is unrealistic; any solution would require the inversion of a sum of factorial functions. Therefore, we turn to computation for our results. We wrote and tested MATLAB programs that solve for $B$ over a range of bump functions.

# Results of Static Model Analysis

# No Overbooking

If Frontier Airlines does not overbook its flights, it suffers a significant cost in terms of loss of opportunity.
If the number of people booked $(B)$ equals plane capacity $(C)$, the expected value of $X$ (the number of passengers who arrive at the gate) is $pB = pC = .88 \times 134 \approx 118$ passengers. Assuming (as in the total profit function) that each passenger beyond the 78th is worth $300 in profit, the expected profit is nearly

$$
(134 - 118) \times \$60 + \$300 \times (118 - 78) = \$12{,}960
$$

per flight. This is only an estimate, since a smaller or larger proportion than $88\%$ of ticket-holding passengers may arrive at the gate. The profit is sizeable, but there are still (on average) 16 empty seats! The approximate lost-opportunity cost is $300 × 16 = $4,800! Thus, not overbooking sends Flight 502 on its way with only 63% of its potential profitability.

# Bump Threshold Model

Using a 0.05 bump threshold, we compute an optimal number of passengers to book on Flight 502. Given the Airbus A319 capacity of 134 passengers and a passenger arrival probability of $p = .88$, the optimal number of tickets to sell to guarantee that bumping occurs less than $5\%$ of the time is $B = 145$, or $108\%$ of flight capacity.

# Linear Compensation Plan

Table 1 shows the expected profit for various linear bump functions.

Table 1. Linear bump functions compared.
| Bump cost per passenger | Optimal # to book | Expected profit per flight |
|---|---|---|
| $200 | | |
| $316 | 162 | $17,817 |
| $400 | 156 | $17,394 |
| $500 | 153 | $17,121 |
| $600 | 152 | $16,940 |
| $700 | 151 | $16,799 |
| $800 | 151 | $16,692 |
| $900 | 150 | $16,601 |
| $1000 | 150 | $16,526 |
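The expected-profit optimization behind this table can be sketched as follows. This is an independent Python reconstruction of $E[T_P(X)]$ under the stated parameters (the authors used MATLAB), so its optima and profit values approximate, but need not match, the table exactly:

```python
from math import comb

def expected_profit(B, bump_cost, airfare=316, fee=60, cost_flight=24648,
                    cost_add=16, c_break=78, capacity=134, p=0.88):
    """E[T_P(X)] for X ~ Binomial(B, p) under a linear bump function."""
    total = 0.0
    for i in range(B + 1):
        if i <= c_break:
            t = airfare * i - cost_flight
        else:
            t = (airfare - cost_add) * (i - c_break)
            if i > capacity:
                t -= bump_cost * (i - capacity)
        weight = comb(B, i) * p**i * (1 - p)**(B - i)
        total += ((B - i) * fee + t) * weight
    return total

def best_booking_limit(bump_cost):
    """Booking limit in [C, 200) that maximizes expected profit."""
    return max(range(134, 200), key=lambda B: expected_profit(B, bump_cost))

print(best_booking_limit(500))
```

Because the linear bump cost exceeds the airfare here, the expected profit has an interior maximum; with a bump cost below $316, the same search would simply run to the top of its range, illustrating the unbounded regime discussed below.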
If Frontier were to compensate bumped passengers less than the cost of airfare, bumping passengers would always cost less than the revenue gained from ticket sales. Thus, assuming it could sell as many tickets as it wanted, Frontier would realize an unbounded profit on each flight! Obviously, the linear compensation plan is not realistic in this regime, and we must wait for subsequent models to see increased real-world applicability. These results agree with the result of using a simple bump threshold above and indicate an average profit of approximately $17,000. In comparison with using no overbooking strategy at all, Frontier gains additional profit of $4,000 per flight!

The actual dynamics of the problem may be seen in Figure 4, where competing effects produce an optimal number of tickets to sell $(B)$ when Frontier assumes a sizeable enough average compensation. We can also see the unbounded profit available in the unrealistic regime.

# Nonlinear Compensation Plan

Numerical results for the more realistic nonlinear model paint a more reasonable picture.

Table 2 recommends booking limits similar to (though slightly higher than) previous models. The dynamics may be seen in Figure 5.

![](images/4d7e5f27633337f1feef2bb6c3297f330bce47ea774fef322955d0bac1646770.jpg)
Figure 4. Per-flight profit vs. booking limit $(B)$ for different bump costs (Linear Compensation Plan).

Table 2. Nonlinear bump functions compared.
| Bump function | Optimal number to book | Profit per flight |
|---|---|---|
| $50(X-C)e^{0.134(X-C)}$ | 160 | $18,700 |
| $100(X-C)e^{0.100(X-C)}$ | 158 | $18,240 |
| $200(X-C)e^{0.065(X-C)}$ | 156 | $17,722 |
| $316(X-C)e^{0.042(X-C)}$ | 154 | $17,363 |
All nonlinear bump functions that we investigated result in a maximum realizable profit, as expected.

![](images/6c55255ac62b517dcc65cfe8fa29cda8ccba210d49d5f1bc3335e98a7fbd3022.jpg)
Figure 5. Per-flight profit vs. booking limit $(B)$ for different bump functions (Nonlinear Compensation Plan).

# Time-Dependent Compensation Plan

The histogram of 1,000 runs using the time-dependent compensation plan in Figure 6 shows that the optimal booking limit is most frequently $B = 154$. Figure 7 is a graph of expected total profit versus the optimal booking limit for 15 trials, displaying the randomness due to the Chebyshev draws at higher values of $B$. If $B$ is too low, then all models have the same profit behavior, because the randomness from the overbooking scheme is not introduced until customers are bumped. This graph also shows that, regardless of random effects, profitability is maximized around $B = 160$.

# The Dynamic Model

Many of the assumptions in the binomial-based models are loosened in this dynamic setting. Continuous time allows for more detailed analysis of the order of events in the airline booking problem. Keeping track of the order of reservation requests, ticket bookings, and cancellations results in a model that attempts to recommend what ticketing agents should do at a given time. In the "Firesale Model," we attempt to increase revenue by selling the tickets of cancellations to customers who would otherwise be denied tickets due to a fixed booking limit.

# Reservation Process

We simulate the booking/reservations process, which often begins weeks before departure and continues right up until departure (due, for example, to other airlines booking their bumped customers into Frontier's empty seats).
To model the stream of reservation requests, we employ a Poisson process $\{N(t), t \geq 0\}$: a counting process that begins at zero ($N(0) = 0$) and has independent increments, with the number of events in any interval of length $t$ Poisson-distributed with mean $\lambda t$ [Ross 2000].

![](images/491813593518e18f7a77b7d98f561811a4955072efe679d2782c77b5d68a9866.jpg)
Figure 6. Time-dependent compensation plan simulated 1,000 times.

![](images/977ffc0bbbfe925e61ad21c9319bc865630b15b3994c0a22479180561d0d67f4.jpg)
Figure 7. Fifteen time-dependent compensation plan simulations.

The interarrival times of a Poisson process are distributed according to an exponential distribution with rate parameter $\lambda$. Each reservation request comes with a variable number of tickets requested for that reservation. The number of tickets requested is generated from a specified batch distribution, BatchD, that we introduce later.

This arrangement results in a compound Poisson process (in this case, a "stuttering" process [McGill and van Ryzin 1999]), which provides a more reasonable fit to real-world reservation request data than simpler processes.

Simulating the first $T$ time units of a Poisson process using the method in Ross [1990] results in a vector $\mathbf{at}$ of arrival times for the $A = \operatorname{length}(\mathbf{at})$ reservation requests received.

Another vector, $\mathbf{Bnum}$, the number of tickets requested in each of the $A$ reservations, is also generated according to the batch distribution. The density BatchD is shown in Figure 9; it states that callers reserve anywhere from 1 to 4 tickets at a time, with varying probabilities for each number. The total number of tickets (potential fares) requested is then $\sum_{i} \mathbf{Bnum}(i)$.

![](images/4d4d313ecfcace6d4ff4f59f9cac173e30e9301381c6a6d47d76918c5724b422.jpg)
Figure 9. Density function for number of tickets in a batch of reservations.
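A stuttering Poisson generator of this kind can be sketched as follows; the batch probabilities here are assumed placeholders (their mean is 2.0), not the BatchD of Figure 9, whose mean is 1.9:

```python
import math
import random

# Assumed batch pmf standing in for the paper's BatchD (Figure 9).
BATCH_PMF = ((1, 0.4), (2, 0.3), (3, 0.2), (4, 0.1))

def reservation_stream(rate, horizon, batch_pmf=BATCH_PMF):
    """Arrival times (at) and batch sizes (Bnum) for a stuttering
    (compound) Poisson process over [0, horizon]."""
    at, bnum, t = [], [], 0.0
    while True:
        t += -math.log(1.0 - random.random()) / rate   # Exp(rate) gap
        if t > horizon:
            break
        at.append(t)
        u, cum = random.random(), 0.0                  # draw the batch size
        for size, prob in batch_pmf:
            cum += prob
            if u < cum:
                bnum.append(size)
                break
        else:
            bnum.append(batch_pmf[-1][0])              # guard float round-off
    return at, bnum

# Rate chosen so the process yields A_D tickets on average:
# lambda = A_D / (E_B * T).
A_D, T = 134, 30.0
E_B = sum(size * prob for size, prob in BATCH_PMF)
random.seed(0)
at, bnum = reservation_stream(A_D / (E_B * T), T)
```

Summing `bnum` over many simulated flights would reproduce a histogram like Figure 10, centered on the chosen average demand.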
The arrival rates for these reservation requests are derived by setting the expected value of the Poisson process over an interval of length $T$ equal to the average ticket demand $A_{D}$ that we expect. Then a rate of $\lambda = A_{D} / (E_{B} T)$ , where $E_{B}$ is the expected value of BatchD (1.9 in this case), will on average generate $A_{D}$ tickets. The histogram in Figure 10 shows the results of a simulation of 10,000 Poisson processes outputting the number of reservations requested when the average demand for tickets was $134$ ( $A_{D} = C$ ).

![](images/3654c9cfa8c4c5326afd33ce7775ddcb60e172affeb4261ba29197d3f69e220d.jpg)
Figure 10. Histogram of number of reservation requests for 10,000 flights with an average demand of 134 tickets.

# Cancellations and No-Shows

The binomial-based static models do not distinguish between cancellations (tickets voided before the flight departs) and no-shows (tickets not used or voided by flight departure); however, the dynamic model is well-suited for monitoring these events. We assume that $75\%$ of unused tickets are cancellations and $25\%$ are no-shows. Additionally, we assume that the time of cancellation for a set of tickets reserved together is uniformly distributed from the time that the tickets are granted to the time that the flight departs. This means that some cancellations occur almost immediately after the ticket(s) are granted (e.g., due to a typo on an online ticket service form), while some occur just before a plane is scheduled to depart (e.g., a last-minute change of plans). Lastly, we assume that multiple tickets in a single reservation behave equivalently (i.e., families act as unbreakable groups!).

To simulate this process, for each requested reservation a biased coin is flipped to determine with probability $p$ if the group will keep their tickets. If not, another biased coin is flipped to determine whether the unused tickets are cancellations or no-shows.
If a cancellation occurs, a cancellation time is drawn uniformly between that batch's arrival time and the flight departure time.

# Dynamic Booking

# Dynamic Test Model

We use the dynamic model to make the binomial-based models more realistic by eliminating some assumptions and introducing randomness. The Dynamic Test allows for "group tickets" (for both reservations and cancellations). The Dynamic Test requires that average ticket demand $A_{D}$ be specified, so as to confirm the expected effects of lower demand for tickets.

# Firesale Model

The Firesale Model uses cancellation times to sell all possible tickets. If the number of tickets requested (at time $t$ ) for a particular reservation plus $Tix(t)$ (the number of tickets approved and still held at time $t$ ) is less than the predetermined booking limit ( $B$ ), then a reservation request is approved. Conversely, if $Tix(t)$ is equal to the booking limit or if the sale of the multiple tickets requested in a reservation batch would push $Tix(t)$ over the booking limit, the request is rejected. Thus, for a process with no cancellations, reservation requests totaling less than the booking limit would be approved while subsequent requests would be rejected. The Firesale Model is highly dependent on the average demand (i.e., if demand is high enough, the airline would end up with an overwhelming majority of no-shows, as opposed to cancellations). The Firesale Model is the most realistic model developed in this paper.

# Results of Dynamic Model Analysis

The Firesale Model attempts to capture a scenario where all tickets of cancellations are sold as long as there are customers willing to buy them. If demand for tickets is high enough, we expect to sell all tickets of cancellations, resulting in a large number of bumped passengers.
However, because the airline profits $60 from each cancellation or no-show and because the numbers of both cancellations and no-shows continue to increase as more tickets are sold, reasonable results are expected for reasonable ticket demand.

Figure 11 plots expected profit as a function of booking limit as determined from 1,000 Firesale Model simulations. An average demand twice that of capacity ( $A_{D} = 268$ ) is used and a maximum profit is realized at a booking limit of 163. Most importantly, this figure displays how a small variation in booking limit could significantly alter profit. A change of 3 in either direction corresponds to a loss of more than $1,000 in profit.

![](images/d9dc9e115dab2f371a7917a5503fc317d57ecd8139c6fe869617c218e30344e7.jpg)
Figure 11. 1,000 simulations of the Firesale Model.

# Dynamic Testing of the Static Model

The dynamic model allows us to test the results from the static (binomial-based) models in a more realistic setting. The Dynamic Test allows tickets to be reserved in batches and introduces the randomness experienced in real-world airline booking.

In all testing, 10,000 simulations are performed for each booking limit $(B)$ and then expected profits are computed. Booking limit vs. profit (in dollars) is plotted for appropriate booking limit values. The average demand $(A_{D})$ used in this test is kept constant at twice the capacity of the airplane (so $A_{D} = 268$ ), to simulate a very large pool of customers so that the overbooking process could be tested.

# Linear Compensation Plan

We tested two bump costs ( $B_{\S} = \$316$ and $B_{\S} = \$600$ ) with different behaviors (as predicted by the static model).

Figure 12 shows that for this compensation plan, an optimal booking limit is $B = 155$ , an increase of 3 from the optimal value for the static model.
However, profit drops off steeply for booking limits over 155, indicating that a more conservative strategy might be to lower the booking limit to ensure that this steep decline is rarely reached.

![](images/03ea992344737e705f2c0bc4ddae15f8768d750bd3a374975e261afa74cea7be.jpg)
Figure 12. 10,000 simulations of the linear compensation plan with $B_{\S} = \$600$ .

Figure 13 corresponds to a bump cost of $316; the optimal booking limit is now 166, again an increase (from 162).

# Nonlinear Compensation Plan

We tested two nonlinear bump coefficients $(B_{\S} = 316$ and $B_{\S} = 100)$ with different behaviors (as predicted by the static model).

Figures 14 and 15 demonstrate the negative effect of too high a booking limit. For nonlinear bump coefficients $B_{\S} = 316$ and $B_{\S} = 100$ , the optimal booking limits from the static model are 154 and 160, with Dynamic Test result values of 154 and 158.

# Time-Dependent Compensation Plan

Figure 16 shows that the optimal booking limit for the time-dependent compensation plan is $B = 155$ , an increase of 1 from the static model. Profit appears to rise relatively steeply until the optimal booking limit is reached and then falls steeply. Thus, in our most realistic static model, a careful overbooking plan matters the most! If the booking limit were altered by 3, the profit would shrink by more than $1,000, similar to the result detailed in the Firesale Model.

![](images/e806cd70a34add9f5c39dcd037959375dbc9bde2eab928ccd059c2b7e5b4f6b9.jpg)
Figure 13. 10,000 simulations of the linear compensation plan with $B_{\S} = \$316$ .

![](images/5df89824643e95ecccc60db311909398f38b934a34e2888ff656edb7ebe3f9.jpg)
Figure 14. 10,000 simulations of the nonlinear compensation plan with $B_{\S} = 316$ .

![](images/37a661d497d8788e7943b28d8a798c20b69ff66e7e0372024a060b518e2311c7.jpg)
Figure 15. 10,000 simulations of the nonlinear compensation plan with $B_{\S} = 100$ .
![](images/bb960d68f12b23d617bdbc30ab3017d2d0eb75280875d0676309eb41236d7ab4.jpg)
Figure 16. 10,000 simulations of the time-dependent compensation plan.

# Post-September 11 Effects

Security checks (at Denver International Airport) add only 10 min to check-in ["Frontier operating at $80\%$ " 2001], which may be considered negligible.

The most significant post-September 11 effect that the airlines must consider is consumer fear. The individual probability of passenger arrival $p$ should not change drastically, since ticket-purchasing customers after September 11 are fully aware of the risks involved. A consequence of September 11 that is difficult to model is the decrease in average demand for flight reservations.

# Model Strengths and Weaknesses

# Strengths

- Time-dependent auction model for pre-flight compensation: When Frontier begins to offer compensation to voluntarily bumped passengers one-half hour before departure, our model allows consumer behavior to influence the financial results.
- Time-dependent decision process in the dynamic model: The dynamic model allows ticketing agents to decide whether or not to accept reservation requests based on the number of tickets sold by then and on the time until departure.
- Multiple considerations of consumer behavior via bump functions: The implementation of multiple bump functions allows for testing alternative strategies for compensation. Profit and customer satisfaction may then be balanced depending upon the company's short-term or long-term interests.
- Varying degrees of model complexity: Our early models are simple, making sizeable simplifying assumptions to exhibit the most basic dynamics inherent in the problem. We take small steps of increasing complexity towards a more realistic model. The intuitive relationships between the results from each step lead to increased confidence in the stability and applicability of the most involved models.
# Weaknesses

- Absence of a stability analysis: We lack an adequate mathematical understanding of the stability of our models. Varying parameters like $p$ could potentially alter our results.
- Infinite customer pool in the static model: In our static model, we assume that for any booking limit we set, all tickets will be sold.
- Insufficient data: The only operational data that we could get from Frontier Airlines was its quarterly report, which contains general information on how many people flew, operating costs, revenues, number of flights flown, and occupancy rates. Our model lacks information regarding cancellation rates, no-show rates, cost per flight, rates of reservation requests, and the ratio of restricted to unrestricted tickets sold. The lack of this information limits us because our parameters are not based on historical data, and therefore we cannot be confident in the accuracy of our rates.

# Conclusion and Recommendations

Our models are quite consistent in recommending similar booking limits: 154 passengers on 134-seat Flight 502, or $115\%$ of capacity. This limit results in an average profit of $17,000 per flight; so this one flight alone, by employing one of our overbooking strategies, nets the company an extra $2.7 million in profit per year, under the limiting assumption of an infinite demand pool.

# References

U.S. Department of Transportation. 2001. Air Travel Consumer Report. http://www.dot.gov/airconsumer/.
AlaskaAir. 2001. http://www2.alaskaair.com.
BBC News. 2001. Airlines receive $15bn aid boost. http://news.bbc.co.uk/hi/english/business/newsid_1558000/1558854.stm.
Boudreaux, Donald. 1998. Notes from FEE: Julian Simon Life Saver. http://www.fee.org/freeman/98/9804/notes.html.
Chatwin, Richard. 1999. Continuous-time airline overbooking with time-dependent fares and refunds. *Transportation Science* 33 (2) (May 1999).
Davis, Paul. 1994. Airline ties profitability yield to management.
*SIAM News* 27 (5) (May/June 1994).
Delta Airlines. 2002. http://www.delta.com/care/serviceplan/index.jsp.
Frontier Airlines reports fiscal third-quarter profit. 2002. *Business Wire* (2 February 2002).
Frontier Airlines Inc. 2001. Form 10-Q, Quarterly Report Pursuant to Section 13 or 15(d) of the Securities Exchange Act of 1934. Securities and Exchange Commission file number 0-24126, quarterly period ending September 30, 2001.
Frontier operating at $80\%$ . 2001. *Denver Business Journal* (20 September 2001). http://denver.bizjournals.com/denver/stories/2001/09/17/daily38.html.
Graves, Gary. 2001. Airline decline. CBC News Online (7 November 2001). http://www.cbc.ca/news/indepth/background/airlinedecline.html.
Gupta, Diwakar, and William Cooper. 2001. When is yield improvement really an improvement? Working paper, University of Minnesota, Department of Mechanical Engineering (4 September 2001).
Inse, Amy. 2001. Airport's a lonely place these days. *Rocky Mountain News* (19 September 2001). http://www.insidedenver.com/drnn/america_under_attack/article/0,1299,DRMN_664_828402,00.html.
Karaesmen, Itir, and Garrett van Ryzin. 1998. Overbooking with substitutable inventory classes. (20 September 1998). Columbia University. http://www.columbia.edu/~gjv1/subob1.pdf.
Kesmodel, David. 2002. Frontier expects profits for quarter. *Rocky Mountain News* (7 February 2002).
Lufthansa. 2001. Overbooking. http://cms.lufthansa.com/de/dlh/en/nws/0,1774,0-0-77892,00.html.
Masselli, Jennifer. 2001. IT's critical role in airlines. *Information Week* (12 November 2001). http://www.informationweek.com/story/IWK20011112S0001.
McGill, Jeffery, and Garrett van Ryzin. 1999. Revenue management: Research overview and prospects. *Transportation Science* 33 (2) (May 1999).
Ross, Sheldon M. 1990. *Simulation*. 2nd ed. San Diego, CA: Harcourt Academic Press.
___________. 2000. *Introduction to Probability Models*. 7th ed. San Diego, CA: Harcourt Academic Press.
Subramanian, J., S. Stidham, and C.J. Lautenbacher.
1999. Airline yield management with overbooking, cancellations, and no-shows. *Transportation Science* 33 (2) (May 1999). + +# ACE is High + +Anthony C. Pecorella + +Crystal T. Taylor + +Elizabeth A. Perez + +Wake Forest University + +Winston-Salem, NC + +Advisor: Edward E. Allen + +# Introduction + +We design a model that allows an airline to substitute its own values for ticket prices, no-show rates and fees, compensation for bumped passengers, and capacities to determine its optimal overbooking level. + +Our model is based on an equation that combines the two cases involved in overbooking: The first sums all cases in which the airline doesn't fill all seats with passengers, and the second sums all cases in which there is an overflow of passengers due to overbooking. The model includes the possibility of upgrading passengers from coach to first-class when there is overflow in coach. + +Furthermore, we use a binomial distribution of the probabilities of bumping passengers, given different overbooking percentages, to supply the airlines with useful information pertaining to customer relations. + +We apply our model with different values of the parameters to determine optimal overbooking levels in different situations. + +By using our model, an individual airline can find an optimal overbooking level that maximizes its revenue. A joint optimal overbooking strategy for all airlines is to agree to allow bumped passengers to fly at a discounted fare on a different airline. + +# Analysis of the Problem + +From January to September 2001, $0.19\%$ of passengers were bumped from flights due to overbooking. This seems like an inconsequential percentage, but it actually amounts to 730,000 people. Additionally, $4.4\%$ of those bumped, + +or 32,000 people, were denied their flights involuntarily [U.S. Department of Transportation 2002]. 
Since $10\%$ to $15\%$ of passengers who reserve a seat don't show up, airlines have little chance to fill their planes if they book only as many passengers as seats available. Overbooking by American Airlines helped save the airline $1.4 billion between 1989 and 1992.

We examine a fictional company to determine an optimal overbooking strategy that maximizes revenue. The goal is a model to increase revenue while maintaining favorable customer relations.

Our main model, the Expected Gain Model, provides a clear formula for what percentage of the seats to overbook. Based on sample no-show rates, ticket prices, and seat numbers, our Expected Gain Model shows that a $16\%$ overbooking rate is the most effective choice.

Our other model, the Binomial Distribution Model, calculates, for various overbooking levels, the probability that a passenger will be bumped.

# Assumptions

- There is no overbooking in first class (to maintain good relations with wealthy and influential passengers).
- Anyone bumped (voluntarily or involuntarily) is compensated with a refund of the ticket price plus an additional $100\%$ of the ticket price.
- There are only two flight classes, coach and first-class.
- The fare is constant regardless of how far in advance the ticket is purchased. Overbooked passengers are given seats on a first-come-first-served basis, as is often the case. Therefore, ticket prices will average out for both those bumped and those seated.
- Each passenger's likelihood of showing up is independent of every other passenger.
- First-class ticket-holders have unrestricted tickets, which allow a full refund in case of no-show; coach passengers have restricted tickets, which allow only a $75\%$ refund in case of no-show.
- There are no walk-ons.
- There are no flight delays or cancellations.
- The marginal cost of adding a passenger to the plane is negligible.
# The Model

# Equations

$$
\operatorname {p r o b} (x, y, r) = \left( \begin{array}{c} x \\ y \end{array} \right) r ^ {y} (1 - r) ^ {x - y}
$$

$$
P _ {1} (y) = \sum_ {k = S _ {f} + 1 - y} ^ {S _ {f}} \operatorname {p r o b} \left(S _ {f}, k, R _ {f}\right) \left[ \left(- B _ {c}\right) \left(y - \left(S _ {f} - k\right)\right) + F _ {c} \left(S _ {f} - k\right) \right]
$$

$$
P _ {2} (y) = \sum_ {k = 0} ^ {S _ {f} - y} \operatorname {p r o b} \left(S _ {f}, k, R _ {f}\right) F _ {c} y
$$

$$
M _ {1} (x) = \sum_ {i = 0} ^ {S _ {c}} \operatorname {p r o b} (x, i, R _ {c}) \left(F _ {c} i + N _ {c} (x - i)\right)
$$

$$
M _ {2} (x) = \sum_ {j = S _ {c} + 1} ^ {x} \operatorname {p r o b} (x, j, R _ {c}) \left[ S _ {c} F _ {c} + N _ {c} (x - j) + P _ {1} (j - S _ {c}) + P _ {2} (j - S _ {c}) \right]
$$

$$
M (x) = M _ {1} (x) + M _ {2} (x)
$$

# Parameters

$S_{f} =$ seating available for first-class

$S_{c} =$ seating available for coach

$R_{f} =$ show-up rate for first-class reservations

$R_{c} =$ show-up rate for coach reservations

$F_{c} =$ coach fare

$N_{c} =$ no-show fee for coach

$B_{c} =$ coach bump cost to airline

# Variables

$x =$ number of reservations

# Functions

$M(x) =$ expected gain with $x$ reservations

$\operatorname{prob}(x, y, r) = \text{probability of } y \text{ events happening in } x \text{ trials where } r \text{ is the chance of a single event happening}$

$P_{1}(y), P_{2}(y)$ : to be described later

# Binomial Distribution Model

We create ACE Airlines, a fictional firm, to understand better how to handle overbooking. We examine binomial distributions of ticket sales, so we call this the Binomial Distribution Model.

ACE features planes with 20 first-class seats and 100 coach seats. The no-show rate is $10\%$ for coach and $20\%$ for first-class.
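The equation set above can be evaluated directly. The following is a minimal Python sketch using the concrete ACE values given later in the text ($F_c = 200$, $N_c = 50$, $B_c = 200$); the `max(0, ...)` guard on the $P_1$ summation index is an implementation detail for overflow larger than the first-class cabin.

```python
from math import comb

def prob(x, y, r):
    """Binomial probability of y events in x trials."""
    return comb(x, y) * r**y * (1 - r)**(x - y)

# ACE parameter values (fares and fees are stated later in the paper)
S_f, S_c = 20, 100            # first-class and coach seats
R_f, R_c = 0.8, 0.9           # show-up rates
F_c, N_c, B_c = 200, 50, 200  # coach fare, no-show fee kept, bump cost

def P1(y):
    # overflow y exceeds the (S_f - k) first-class openings: bump the rest
    return sum(prob(S_f, k, R_f) * (-B_c * (y - (S_f - k)) + F_c * (S_f - k))
               for k in range(max(0, S_f + 1 - y), S_f + 1))

def P2(y):
    # enough openings to seat all y overflow passengers at coach fare
    return sum(prob(S_f, k, R_f) * F_c * y for k in range(S_f - y + 1))

def M(x):
    m1 = sum(prob(x, i, R_c) * (F_c * i + N_c * (x - i))
             for i in range(S_c + 1))                    # coach not full
    m2 = sum(prob(x, j, R_c) * (S_c * F_c + N_c * (x - j)
             + P1(j - S_c) + P2(j - S_c))
             for j in range(S_c + 1, x + 1))             # coach overflow
    return m1 + m2

best = max(range(100, 131), key=M)
print(best, round(M(best), 2))
```

Scanning $x$ over a range of reservation counts and taking the maximizer reproduces the optimum discussed below.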
Figure 1 compares various overbooking levels with the chance that there will be enough available seats in first-class to accommodate the overflow. The functions are

$$
\begin{array}{l} y = \sum_ {j = 0} ^ {100 + x} \binom {C} {j} (0.9) ^ {j} (0.1) ^ {C - j} \quad (\text{coach}), \\ y = \sum_ {j = 0} ^ {20 - x} \binom {20} {j} (0.8) ^ {j} (0.2) ^ {20 - j} \quad (\text{first class}), \\ \end{array}
$$

where $C$ reservations are made for coach and 20 are always made for first class.

Where the first-class line passes below the various overbooking lines indicates the probability at which we must start bumping passengers.

This simplistic model doesn't account for ticket prices, no-show fees, or refunds to bumped passengers, and it doesn't specifically deal with revenue either. Thus, it can act as a good reference for verifying the customer-relations aspect of any solution but can't give a good solution on its own. To be sure that ACE is receiving the most revenue it can, we must create a more in-depth model.

ACE coach fare is $200. We refund $150 on no-show coach tickets, thus gaining $50 on each. To keep good customer relations, when we are forced to bump a passenger from a flight, we refund the ticket price with an additional bonus of 100% of the ticket price (thus, we suffer a $200 loss).

We define $\operatorname{prob}(x, y, r)$ as the binomial probability of $y$ independent events happening in $x$ trials, with a probability $r$ of each event happening:

$$
\operatorname {p r o b} (x, y, r) = \left( \begin{array}{c} x \\ y \end{array} \right) r ^ {y} (1 - r) ^ {x - y}.
$$

# Model for Coach

We first ignore first class and maximize profit based solely on overbooking the coach section, via the Simple Expected Gain Model. This model is defined in two parts. The first looks at the chances of the cabin not filling— $i < 100$ people showing up.
ACE gets $200 for each of the $i$ passengers who arrive and fly and $50 from each of the $(x - i)$ no-shows. We multiply the probability of each outcome (determined by the binomial distribution) by the resulting

![](images/d00e8b4202908722b3ea77082cf7dae9d6e16b5821f768267d290ba93907f735.jpg)

![](images/81816bc84eeb8482dc48a1f00498de45f6dc0709189a7ac5b2702be070bb0423.jpg)
Figure 1. Probability of enough seats vs. overbooking level. The graph below is a close-up of the upper left corner of the graph above.

revenue and sum over all of these values of $i$ to find the expected gain, $M_{1}$ :

$$
M _ {1} (x) = \sum_ {i = 0} ^ {1 0 0} \operatorname {p r o b} (x, i, 0. 9) \left(2 0 0 i + 5 0 (x - i)\right).
$$

The second part of the model focuses on overflow in the coach section, when $j > 100$ . In this case, ACE is limited to $200 fare revenue on 100 passengers, plus $50 for each of the $(x - j)$ no-shows. However, for the $(j - 100)$ passengers who arrive but have no seats, ACE bumps them and thus loses $200 in compensation per passenger. We again multiply by the probability of each outcome and sum:

$$
M _ {2} (x) = \sum_ {j = 1 0 1} ^ {x} \operatorname {p r o b} (x, j, 0. 9) \left[ 2 0 0 (1 0 0) + 5 0 (x - j) - 2 0 0 (j - 1 0 0) \right].
$$

We add $M_{1}$ and $M_{2}$ to arrive at an expression $M$ for revenue. From the graph for $M$ , we discover that (independent of first class) for maximum revenue, ACE should overbook by about 11 people, expecting a net revenue from the coach section of $20,055 (Figure 2).

![](images/5d1202d1feba16c4f9f1db980b11ef15c33b4618be136a7908c07a81194084cb.jpg)
Figure 2. Simple Expected Gain Model: Revenue $M$ vs. number of coach reservations.

# Coach Plus First Class

When we add in consideration of first-class openings, ACE can overbook by even more while still increasing revenue, since it can upgrade coach overflow into first-class openings.
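Before extending to first class, the coach-only Simple Expected Gain Model of the previous section can be checked numerically. A short sketch (exact binomial terms via `math.comb`) that should reproduce the Figure 2 optimum of roughly 11 overbooked seats:

```python
from math import comb

def prob(x, y, r):
    """Binomial probability of y events in x trials."""
    return comb(x, y) * r**y * (1 - r)**(x - y)

def M_coach(x, seats=100, fare=200, fee=50, bump=200, show=0.9):
    """Expected coach revenue with x reservations (Simple Expected Gain Model)."""
    m1 = sum(prob(x, i, show) * (fare * i + fee * (x - i))
             for i in range(seats + 1))                  # cabin not full
    m2 = sum(prob(x, j, show) * (fare * seats + fee * (x - j)
             - bump * (j - seats))
             for j in range(seats + 1, x + 1))           # bump j - seats
    return m1 + m2

best = max(range(100, 126), key=M_coach)
print(best, round(M_coach(best)))   # optimum near 111 reservations
```

The parametrized signature lets other fares, fees, and capacities be substituted, in the spirit of the paper's computer program.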
The first part of the previous formula, $M_{1}$ , can still be used, since it deals with the cases in which the coach section isn't filled anyway. Since ACE will not overbook first-class, ACE should always book it fully. Thus, the fare for first-class is unimportant when considering how to maximize revenue. We further assume that ACE sells only unrestricted first-class tickets (there is no penalty to first-class no-shows).

The second part of the equation needs only a minor modification to adjust for seats made available by first-class no-shows. ACE still gets $200 for each of the 100 passengers who show up and gets $50 for each of the $(x - j)$ no-shows. The difference now is that instead of simply multiplying by -$200 for each passenger over 100, we check for first-class openings and multiply -$200 by just the number who end up bumped. Those upgraded to first-class still pay coach fare (thus, more than 100 coach passengers can pay that $200). This function, $P_{1}(y)$ , with $y = j - 100$ the number of overflow coach passengers, gives the expected net revenue for that much overflow. Similarly, $P_{2}(y)$ gives the expected net revenue when ACE can seat all of the overflow. Thus, the new version of the second part of the formula reads:

$$
M _ {2} (x) = \sum_ {j = 1 0 1} ^ {x} \operatorname {p r o b} (x, j, 0. 9) \left[ 2 0 0 (1 0 0) + 5 0 (x - j) + P _ {1} (j - 1 0 0) + P _ {2} (j - 1 0 0) \right].
$$

The form of the $P_{i}$ functions is similar to the two parts of the model already discussed. The probability of there being few enough first-class passengers is multiplied by $\$200$ (coach fare) times the number of extra coach passengers who can be seated ( $j - 100$ ). Recall that the show rate for first-class is 0.8:

$$
P _ {2} (y) = \sum_ {k = 0} ^ {2 0 - y} \mathrm {p r o b} (2 0, k, 0. 8) (2 0 0) y.
$$

The other case is when ACE can't seat all of the coach overflow.
This time, we multiply by the loss of revenue from the coach spillover $y$ : $\$200$ for each of the $y - (20 - k)$ bumped customers, offset by $\$200$ for each of the $(20 - k)$ passengers upgraded to first-class. The result is

$$
P _ {1} (y) = \sum_ {k = 2 1 - y} ^ {2 0} \operatorname {p r o b} (2 0, k, 0. 8) \left[ (- 2 0 0) \left(y - (2 0 - k)\right) + 2 0 0 (2 0 - k) \right].
$$

At this point, we have all of the pieces of the expected gain model. We plot the equation $M = M_{1} + M_{2}$ in Figure 3 and find the maximum for $x \geq 100$ .

The ideal overbooking level lies at 115 or 116 reservations, with a negligible difference in profit ($0.07) between them.

# Applying the Model

We implemented our model in a computer program in which the parameters can be varied, including seating capacities, ticket price, and no-show fees.

In the case of our example, the optimum is very broad around 116. When deciding optimal overbooking levels, the airlines must balance revenue against the chance of bumping. If ACE books 113 passengers instead, the revenue decreases by $105 per flight but the probability of no bumping rises to 73% from 53%. Similarly, if it books 114 passengers, it loses $35 per flight but there is a 67% probability that no one will be bumped.

![](images/13b7ae0977b8fe59ff2d49c8c74a47984b6d0c767315b0f7d77a2fd1d50c4ca3.jpg)
Figure 3. Expected Gain Model: Revenue $M$ vs. number of coach reservations.

# Fewer Flights

The decrease in air traffic by $20\%$ since September 11 means fewer flights. Due to more-detailed security checks, it is necessary for planes to have longer turnaround times between flights. Adding 15 extra minutes at each turnaround would cause an airline such as Southwest to need almost 100 additional planes to maintain its previous air traffic flow. Therefore, there are fewer flights.

How does this circumstance affect our model?
Federal regulations do not require compensating a bumped passenger scheduled to reach the destination within an hour of the original arrival time. But now the probability of accomplishing that is much smaller than before September 11th; we do not consider it likely and do not include it in the model.

Can bumped passengers be put onto a later flight to arrive within two hours of their original scheduled time? If this happens, federal regulations require an airline to compensate them for a ticket, essentially flying them for free. There is no loss or gain from this transaction, which is certainly more desirable than paying every bumped passenger $200 on top of refunding the ticket price.

If people are put onto later flights, ACE pays fewer passengers an extra $200. However, our Expected Gain Model attempts to maximize the number of people on the flight. Thus, the probability of a passenger being able to take a later flight is very low and the optimal overbooking level changes negligibly. For example, disregarding first class, our Expected Gain Model shows only a 7% chance of a coach seat available on the next flight. We conclude that the Expected Gain Model is just as effective and much simpler if we disregard the possibility of bumped passengers obtaining a seat on a later flight, so we assume that all bumped passengers are compensated with a refund of their ticket price and $200.

# Heightened Security and Passengers' Fear

Demand for flying is down, despite funding for additional security, which—while well-justified—causes problems for airlines and passengers.

ACE is concerned about passengers who miss flights because of security checks. These, along with passengers' fear, can increase the no-show rate, which ACE must consider in its overbooking strategy. Passengers may reserve a seat but then decide that in light of events they are too frightened to get on the plane. With higher no-show rates, a higher overbooking rate may become optimal.
In our Expected Gain Model with no-show rates of $20\%$ and $30\%$ for coach and first-class, the optimal overbooking level jumps to 130 or 135 seats.

# Dealing with Bumped Passengers

While most ways that airlines have dealt with bumping passengers are subtle and good business practice, some border on the absurd. For example, until 1978, United Airlines trained employees to bump people less likely to complain: the elderly and armed services personnel—two groups that perhaps instead should have priority in seating!

A strategy other than current compensation practices could be optimal for the airlines, but it depends on cooperation. If ACE could convince other airlines that they all should give bumped passengers discount tickets (usable on any of the airlines), then each airline would lose less money from compensating bumped passengers. This would create a mutually profitable situation for all airlines involved: The airline accepting bumped passengers would fill seats that would otherwise be empty; the airline bumping the passengers would cut the amount of compensation to the price of a discounted ticket.

Suppose that the amount of compensation is decreased by one-half. The optimal level of overbooking rises, as does revenue; but we cannot be sure that every bumped passenger can be placed on another flight.

# Strengths and Weaknesses of the Model

# Strengths

- Our model involves only basic combinatorics and elementary statistics.
- Because it is parametrized, the model can continue to be used as rates, seating capacities, and compensation amounts change.
- The model considers more than one class.
- An airline can attempt to find the balance between maximizing revenue and pleasing customers, depending on how much risk the airline chooses to take.

# Weaknesses

- We do not consider business class; including it would have risked the model being too complicated.
Business class should not have a large effect on revenue maximization, because no-show rates are lower and business people are more concerned with reaching their destination on time than surrendering their seats for compensation.
- Our model does not take into consideration how multiple flights affect each other. If putting passengers onto later flights were an option, revenue would increase slightly but doing so would also further complicate optimal overbooking levels on other flights.
- Because ACE is not bumping passengers to later flights, bumped passengers are left out in the cold with no flight and just a little bit of extra money—a resolution that does not provide positive customer relations.
- We allow no overbooking in first class. If ACE is willing to take the risk of downgrading or bumping first-class passengers, then revenue could increase slightly by overbooking first-class seats.
- In reality, anyone can buy a restricted or unrestricted ticket in either class. Therefore, a more complicated model would include the possibility of some coach no-shows receiving full refunds and some first-class no-shows paying a no-show fee.
- Our binomial distribution for showing up assumes independence among passengers. However, many people fly and show up in groups.

# References

Airline passengers to pay security fee next month. 2002. *USA Today* (3 January 2002). http://www.usatoday.com/news/washdc/jan02/2002-01-03-airlinesecurity.htm.
Alexander, Keith, and Frank Swoboda. 2002. US Airways posts record $1 billion quarterly loss. *Washington Post* (18 January 2002). http://www.washingtonpost.com/ac2/wp-dyn?pagename=article&node=&contentId=A64400-2002Jan17.
Bayles, Fred. 2001. New safety procedures likely to add costs to flying. *USA Today* (6 November 2001). http://www.usatoday.com/money/biztravel/2001-11-06-security-costs.htm.
Karaesmen, Itir, and Garrett van Ryzin. 1998. Overbooking with substitutable inventory classes. (20 September 1998).
Columbia University. http:// www.columbia.edu/~gjv1/subob1.pdf . + +More airline passengers get bumped. 2000. USA Today (6 October 2000). http: //www.usatoday.com/life/travel/business/1999/bt002.htm. +Schmeltzer, John. 2002. Weak economy, Sept. 11 continue to buffet carriers. Chicago Tribune (4 January 2002). http://chicagotribune.com/business/chi-0201040415jan04.story?coll=chi-business-hed>. +Spiegel, Murray R. 1988. Theory and Problems of Statistics. 2nd ed. New York: McGraw-Hill. +US airline chiefs seek assurances on finance and security. 2001. AIRwise News (18 September 2001). http://news.airwise.com/stories/2001/09/1000839754.html. +U.S. Department of Transportation. 2002. Air Travel Consumer Report. February 2002. http://www.dot.gov/airconsumer/atcr02.htm. + +# Memorandum + +Date: 02/11/2002 + +To: CEO of ACE Airlines + +From: Aviophobia University + +RE: Your Troubles Solved + +Today is your lucky day!! We know that airlines have been going through especially hard times recently and so we have come up with something that will solve your problems. + +You and I both know that overbooking occurs not because you cannot count the number of seats on your plane, but rather because it is a brilliant business strategy that can increase revenue. We have created a model that allows you to find your optimal overbooking strategy. + +Our model can consider your specific situation because it can account for different no-show rates and fees, seat capacities, ticket prices, and bumped passenger compensations. We have designed an easy-to-use computer program that allows you to quickly find your optimal overbooking strategy based on your figures. This program saves you time in a business where time is money. In addition, using our model will allow you to maximize your revenue without bringing in an expensive consultant. + +When designing our model, we even used data concerning your airline, so half of the work is done for you! 
For your planes, fares, and policies, our model shows that $16\%$ overbooking is optimal for maximizing revenue. However, we find that to reduce the probability of bumping too many passengers and still maintain a high revenue rate, $14\%$ or $15\%$ is ideal.

We hope that this information leads you to a profitable quarter and a stock increase, which we would both find profitable.

# Bumping for Dollars: The Airline Overbooking Problem

John D. Bowman

Corey R. Houvard

Adam S. Dickey

Wake Forest University

Winston-Salem, NC

Advisor: Frederick H. Chen

# Introduction

We construct a model that expresses the expected revenue for a flight in terms of the number of reservations, the capacity of the plane, the price of a ticket, the value of a voucher, and the probability of a person showing up for the flight. When values are supplied for every variable but the first, the function can be maximized to yield an optimal booking that maximizes expected revenue.

We apply the model to three situations: a single flight, two flights in a chain of flights, and multiple flights in a chain of flights. We conclude that fewer flights will increase the value of the penalty or voucher and thus decrease the optimal number of reservations. Heightened security also lowers the optimal number of reservations. An increase in passengers' fear decreases the probability that a person will show up for a flight and thus increases the optimal number of reservations. Finally, the loss of billions of dollars in revenue has no effect on the optimal number of reservations.

We model the probability of a given number of people showing up as a binomial distribution. We express the average expected revenue of a flight in terms of the number of bookings made.

Starting with the Single-Flight case, we derive a model and revenue function for a flight unaffected by previous flights.
From this situation, we expand the model to the Two-Flight case, in which the earlier flight affects the number of people who show up for the later flight. We generalize the model even further, letting the number of people showing up depend on many previous flights.

# The Model

In each of the three situations modeled, we derive two formulas. The first, $P(k)$, describes the probability that $k$ people show up for a flight. The second, $\text{Revenue}(b, c, r, x, p)$, describes the expected revenue for a flight as a function of the number of reservations. We verified these theoretical equations by a Monte Carlo simulation.

For the Single-Flight Model:

$$
P(k) = \binom{b}{k} p^{k} (1 - p)^{b - k},
$$

$$
\operatorname{Revenue}(b, c, r, x, p) = \sum_{k = 0}^{c + (b - c)} P(k) \left[ r \min(k, c) - x \max(k - c, 0) \right].
$$

For the Two-Flight Model:

$$
P_{2}(k) = P_{1}(k) \left[ 1 - \sum_{i = c + 1}^{b} P_{1}(i) \right] + \sum_{j = 1}^{b - c} P_{1}(k - j)\, P_{1}(c + j),
$$

$$
\operatorname{Revenue}_{2}(b, c, r, x, p) = \sum_{k = 0}^{c + 2(b - c)} P_{2}(k) \left[ r \min(k, c) - x \max(k - c, 0) \right].
$$

For the $n$-Flight Model:

$$
\begin{array}{l}
P_{n}(k) = P_{1}(k) \left[ 1 - \displaystyle\sum_{i = c + 1}^{c + (n - 1)(b - c)} P_{n - 1}(i) \right] + \displaystyle\sum_{j = 1}^{(n - 1)(b - c)} P_{1}(k - j)\, P_{n - 1}(c + j), \\[2mm]
\operatorname{Revenue}_{n}(b, c, r, x, p) = \displaystyle\sum_{k = 0}^{c + n(b - c)} P_{n}(k) \left[ r \min(k, c) - x \max(k - c, 0) \right].
\end{array}
$$

The variables are:

- $b$ = number of reservations (or bookings) per flight
- $c$ = capacity of the plane
- $r$ = price of a ticket
- $x$ = value of a voucher
- $p$ = probability that a passenger shows up

Given $p$, $c$, $r$, and $x$, the method finds the $b$ that maximizes revenue.

# Derivation of the Single-Flight Model

The binomial distribution applies to calculating the probability that a number of passengers shows up for a flight:

- The probability involves repeated events (each trial calculates the probability of one person showing up) with only two possible outcomes (either the person is a show or a no-show).
- We assume that people's actions do not influence one another; each person's chance of showing up is independent of another person's chance. This is not true in reality, as people often travel in groups; but this is a necessary and appropriate simplification.
- We assume that the probability of a person arriving remains constant for each person.

We use the binomial distribution to calculate expected revenue. Airlines overbook their flights, knowing that some people will not take the flight. Given a certain overbooking strategy $b$ (i.e., the maximum number of reservations taken for a particular flight, with $b > c$, the capacity of the plane), the expected revenue is

$$
\operatorname{Revenue}(b, r, p) = \sum_{k = 0}^{c + (b - c)} P(k)\, r \min(k, c).
$$

The function is incomplete, however, because it does not penalize the airline for the consequences of overbooking. The airline usually provides bumped passengers with either an airline ticket voucher or a cash reimbursement, valued at $x$ per bumped person:

$$
\operatorname{Revenue}(b, r, p) = \sum_{k = 0}^{c + (b - c)} P(k) \left[ r \min(k, c) - x \max(k - c, 0) \right].
$$

When $k \leq c$, the $x$ term is zero; when $k > c$, the airline is penalized for having to bump people.

The booking decision $b$ and the capacity $c$ of the plane are fixed before the model begins. This model considers just one flight in a complex network of flights; it does not allow for the possibility that passengers are bumped from a previous flight, since it assumes that the only passengers are those who made a reservation for this particular flight. The model also applies to just one flight: If the number of passengers who show up is greater than the capacity of the plane, those bumped passengers receive a voucher and—with a wave of the magic wand of assumption—disappear. Finally, regardless of the flight's destination (Hawaii or Death Valley), we assume that there is enough demand to fill the predetermined number of bookings.

Since $p$ is constant throughout our model, the Revenue function is really dependent only on the number of bookings, the capacity of the plane, the cost of a ticket, and the cost of the penalty.

# Application of the Single-Flight Model

We set $p = .9$. Since $b$ must be an integer, the revenue function is not continuous. Thus, the analytic method of maximizing the function (namely, differentiating and setting the derivative equal to zero) cannot be applied. Instead, we use Maple 6.

After setting values for the probability, plane capacity, and ticket and voucher values, we evaluate the function at $b = c$, then increment $b$ until a maximum for Revenue is found.

We used three plane capacities: 10, 30, and 100. The values of the ticket price $r$, the voucher $x$, and the arrival probability $p$ are held constant at $300, $300, and .9 for the examples in Table 1.

Table 1. Results for the Single-Flight Model.
| Capacity | Optimal overbooking | Revenue | Bump probability |
|---|---|---|---|
| 10 | 11 | $2,782 | 31% |
| 30 | 33 | $8,598 | 35% |
| 100 | 111 | $29,250 | 44% |
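The search just described (evaluate the expected revenue at $b = c$, then increment $b$ until a maximum is found) is easy to reproduce outside Maple. Below is a minimal Python sketch; the function names are ours, and the binomial is taken over the $b$ ticketed passengers:

```python
from math import comb

def expected_revenue(b, c, r, x, p):
    """Single-flight expected revenue: each of the b ticketed
    passengers shows up independently with probability p."""
    total = 0.0
    for k in range(b + 1):
        prob = comb(b, k) * p**k * (1 - p)**(b - k)
        total += prob * (r * min(k, c) - x * max(k - c, 0))
    return total

def optimal_booking(c, r, x, p, search_width=30):
    """Scan b = c, c+1, ... and keep the revenue-maximizing level."""
    return max(range(c, c + search_width + 1),
               key=lambda b: expected_revenue(b, c, r, x, p))

b_opt = optimal_booking(c=10, r=300, x=300, p=0.9)
print(b_opt, round(expected_revenue(b_opt, 10, 300, 300, 0.9)))  # -> 11 2782
```

For a 10-seat plane with $r = x = 300$ and $p = .9$, this reproduces the first row of Table 1.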
The probabilities of bumping are larger than the industry frequency of about $20\%$. Worse, the model ignores all the problems created by these bumped passengers. The model is further weakened in light of the post-September 11 issues proposed by the problem. Among the four issues—fewer flights, heightened security, passengers' fear, and losses—this model can account only for increased passenger fear (indicated by a change in probability that a passenger shows up). Clearly this Single-Flight Model is not a proper solution to the airline-overbooking problem.

# Derivation of the Two-Flight Model

The Two-Flight Model begins with updating both the probability and revenue functions. Unlike the Single-Flight Model, the new functions reflect the possibility that passengers bumped from one flight fill seats on the next. By this assumption, the probability function for the second flight, $P_{2}(k)$, changes, because $k$ may now also be expressed as a combination of people ticketed for the second flight and bumped passengers from the first flight. Since the revenue function depends on the probability function, it too must change.

$$
\begin{array}{l}
P_{2}(k) = \Pr(k \text{ people show up for flight 2}) \\
\quad = \Pr(k \text{ regular passengers arrive}) \Pr(\text{no passengers bumped}) \\
\quad\; + \Pr(k - 1 \text{ passengers arrive}) \Pr(1 \text{ passenger bumped}) + \dots \\
\quad\; + \Pr(k - j \text{ arrive}) \Pr(j \text{ passengers bumped}) + \dots \\
\quad\; + \Pr(k - (b - c) \text{ arrive}) \Pr(b - c \text{ passengers bumped}) \\
\quad = P_{1}(k) \left[ 1 - \displaystyle\sum_{i = c + 1}^{b} P_{1}(i) \right] + \displaystyle\sum_{j = 1}^{b - c} P_{1}(k - j)\, P_{1}(c + j).
\end{array}
$$

A maximum of $b - c$ people can be bumped from flight 1, since at most $b$ people show up for it and we assume that no passengers are carried over from any previous flight.
The probability that 1 passenger is bumped from flight 1 is exactly the probability that $c + 1$ people are present for it. Thus we have $\Pr(j \text{ passengers bumped}) = P_1(c + j)$. As long as $b$, $p$, and $c$ remain the same, the probability that new (prebooked, non-bumped) passengers arrive never changes; it is independent of the number of bumped passengers from a previous flight. (We assume that there is no way for a passenger to know how many people have been bumped onto his or her flight from a previous one.) Thus, $\Pr(k - j \text{ regular passengers arrive})$ will always be computed by $P_1(k - j)$, our original probability function for the Single-Flight Model.

In the second summation of $P_{2}(k)$, the term $k - j$ could be negative for small $k$. If so, we define the probability of a negative number of people showing up from a previous flight to be 0 (empty seats on a flight cannot be filled by passengers from later flights!).

We now express the second revenue function in terms of the second probability function:

$$
\operatorname{Revenue}_{2}(b, c, r, x, p) = \sum_{k = 0}^{c + 2(b - c)} P_{2}(k) \left[ r \min(k, c) - x \max(k - c, 0) \right].
$$

A passenger bumped from one flight is automatically booked on the next flight and seated before regular passengers, so as to have almost no chance of being bumped again. For the second flight, we assume that the number of people who show up is affected only by that flight and the previous flight, and that there is enough demand to fill the predetermined number of bookings.

The summation now has $c + 2(b - c)$ as its maximum value. The second flight must not only account for $b$ passengers but must also account for the number of people possibly bumped from the first flight.

# Application of the Two-Flight Model

By introducing a second flight, we more accurately model the situation.
The optimal overbooking strategy and maximum revenue should either remain the same or slightly decrease.

Using the Revenue function for the Two-Flight Model, we now calculate maximum revenue and the associated overbooking strategy for the same plane capacities as for the Single-Flight Model. Again, the values of the ticket price $r$, the voucher $x$, and the arrival probability $p$ are held constant at 300, 300, and .9. The results are in Table 2.

Table 2. Results for the Two-Flight Model.
| Capacity | Optimal overbooking | Revenue | Bump probability |
|---|---|---|---|
| 10 | 11 | $2,745 | 34% |
| 30 | 33 | $8,551 | 42% |
| 100 | 111 | $29,107 | 57% |
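As a consistency check on the Two-Flight Model, $P_2$ should still be a probability distribution over $0 \le k \le c + 2(b - c)$. A small Python sketch (our own names, not the authors' code) verifying this for one parameter set:

```python
from math import comb

def P1(k, b, p):
    """Binomial probability that k of b ticketed passengers show up."""
    if k < 0 or k > b:
        return 0.0
    return comb(b, k) * p**k * (1 - p)**(b - k)

def P2(k, b, c, p):
    """Probability that k people (ticketed plus bumped from flight 1)
    show up for flight 2."""
    no_bump = 1 - sum(P1(i, b, p) for i in range(c + 1, b + 1))
    carried = sum(P1(k - j, b, p) * P1(c + j, b, p)
                  for j in range(1, b - c + 1))
    return P1(k, b, p) * no_bump + carried

b, c, p = 12, 10, 0.9
total = sum(P2(k, b, c, p) for k in range(0, c + 2 * (b - c) + 1))
print(abs(total - 1) < 1e-12)  # -> True
```

Defining $P_1$ of a negative argument as 0, exactly as the text prescribes, is what makes the total come out to 1.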
In each case, the optimal booking strategy is the same as in the Single-Flight Model, but the maximum revenue is lower and the bump probability is higher. Since both flights are overbooked, the probability that someone is bumped should only increase.

# The $n$-Flight Model

We generalize to $n$ flights. We allow each flight to be influenced by the $(n - 1)$ flights before it. We still assume that a passenger bumped from one flight is given preferential seating on the next. However, giving seats to bumped passengers who are already at the airport decreases the number of seats for pre-booked passengers. The $n$-flight model allows this domino effect of bumping to ripple through $n - 1$ successive flights. As $n$ gets large, our model becomes a better and better approximation of the real case, in which every flight is affected by many previous flights. Our probability function becomes recursive:

$$
\begin{array}{l}
P_{n}(k) = P_{1}(k) \left[ 1 - \displaystyle\sum_{i = c + 1}^{c + (n - 1)(b - c)} P_{n - 1}(i) \right] + \displaystyle\sum_{j = 1}^{(n - 1)(b - c)} P_{1}(k - j)\, P_{n - 1}(c + j), \\[2mm]
\operatorname{Revenue}_{n}(b, c, r, x, p) = \displaystyle\sum_{k = 0}^{c + n(b - c)} P_{n}(k) \left[ r \min(k, c) - x \max(k - c, 0) \right].
\end{array}
$$

For the first summation, zero people show up from the previous flight, meaning that there are enough seats for everyone on that flight and anyone bumped from a previous flight. If the total possible number of people who can show up to the current flight is $b + (n - 1)(b - c)$ (as is explained in a moment), then the total number of people who can show up for the previous flight must be $b + (n - 2)(b - c)$, which we use as the upper limit of the summation.

For the second summation, we use $(n - 1)(b - c)$ instead of $(b - c)$, since now there can be at most $(n - 1)(b - c)$ passengers bumped from flight $n - 1$. This upper bound for bumped passengers can be proved by mathematical induction.
[EDITOR'S NOTE: We omit the authors' proof.]

The revenue function for the 2-flight model can also be extended to $n$ flights in a straightforward way. Note that at most $n(b - c)$ people can be bumped from the $n$th flight. We have:

$$
\operatorname{Revenue}_{n}(b) = \sum_{k = 0}^{c + n(b - c)} P_{n}(k) \big[ r \min(k, c) - x \max(k - c, 0) \big].
$$

We now consider booking strategies to optimize revenue.

# Computation of the $n$-Flight Model

# The Recursive Method

We can create documents in Maple to compute the probability and revenue functions. To compute $\mathrm{Revenue}_n(b)$, we must evaluate $P_n(k)$ a total of $b + (n - 1)(b - c)$ times. In turn, $P_n(k)$ must evaluate $P_{n-1}(k)$ a total of $(2n - 1)(b - c)$ times, $P_{n-1}(k)$ must evaluate $P_{n-2}(k)$ a total of $[2(n - 1) - 1](b - c) = (2n - 3)(b - c)$ times, and so on. Thus, without even accounting for all the evaluations of $P_1(k)$ in each iteration, we make

$$
\begin{array}{l}
[b + (n - 1)(b - c)]\,(2n - 1)(b - c)\,(2n - 3)(b - c) \cdots [2n - (2k + 1)](b - c) \cdots (1)(b - c) \\[1mm]
\quad = [b + (n - 1)(b - c)]\, \dfrac{(2n - 1)!}{2^{\,n-1}(n - 1)!}\,(b - c)^{n - 1}
\end{array}
$$

function calls. With $b = 105$ and $c = 100$, $\text{Revenue}_2(k)$ requires 1650 function calls, $\text{Revenue}_3(k)$ requires 86,250 calls, and $\text{Revenue}_4(k)$ requires more than 6.3 million function calls. The computation time is proportional to the number of function calls: $\text{Revenue}_2(105)$ takes less than 1 s, $\text{Revenue}_3(105)$ takes 13 s, and $\text{Revenue}_4(105)$ takes 483 s.

Of course, this is a very inefficient method. A more efficient method would be to store all probability values in an array, beginning with the values for $P_{1}(k)$ and working upwards to $P_{n}(k)$. However, Maple makes array manipulation difficult. Instead, we turn to another method.
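The array-based alternative the authors mention (tabulate $P_1$ through $P_n$ instead of re-deriving them on every call) is easy in a language with cheap memoization. A Python sketch under our own naming, which sidesteps the exponential growth in function calls counted above:

```python
from math import comb
from functools import lru_cache

def make_model(b, c, p):
    @lru_cache(maxsize=None)
    def P1(k):
        # Binomial probability for the prebooked passengers alone.
        if k < 0 or k > b:
            return 0.0
        return comb(b, k) * p**k * (1 - p)**(b - k)

    @lru_cache(maxsize=None)
    def P(n, k):
        """P_n(k): probability that k people show up for the n-th flight."""
        if n == 1:
            return P1(k)
        width = (n - 1) * (b - c)   # max passengers bumped from flight n-1
        no_bump = 1 - sum(P(n - 1, i) for i in range(c + 1, c + width + 1))
        carried = sum(P1(k - j) * P(n - 1, c + j) for j in range(1, width + 1))
        return P1(k) * no_bump + carried

    return P

P = make_model(b=12, c=10, p=0.9)
# Each P_n should remain a probability distribution on 0..c + n(b - c).
total = sum(P(4, k) for k in range(0, 10 + 4 * 2 + 1))
print(abs(total - 1) < 1e-9)  # -> True
```

Because every `P(n, k)` value is computed at most once, the cost grows polynomially in $n$ rather than like the factorial product above.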
# Monte Carlo Simulation

We develop a Monte Carlo computer simulation coded in Pascal that runs the $n$-flight model numerous times and determines the average revenue for a large number of trials. Instead of obtaining precise probabilities using the functions developed above, we flip an (electronic) weighted coin to determine whether each individual passenger shows up for the flight. We tell the program how many trials to run, give it values for $n$, $p$, $c$, $r$, and $x$, and tell it the largest value of $b$ to check. The program begins with $b = c$. It flips numerous weighted coins to determine how many passengers show up for the first flight. It bumps any excess passengers to the second flight and flips coins again to see how many prebooked passengers arrive. The excess is bumped to the third flight, and the process continues until the $n$th flight. Revenue is evaluated by adding an amount equal to the ticket price for each passenger who flies and deducting a penalty for each passenger who is bumped. The program iterates for successive values of $b$ until it reaches the preassigned upper bound.

The output includes, for each $b$ value, the mean revenue over all trials and the corresponding percentage standard error. Percentage standard error was usually less than $2\%$ and often less than $1\%$.

# Optimization Strategies for the $n$-Flight Model

We will never earn more than the ticket price $(r)$ times the number of seats $(c)$, so the gain from overbooking is limited—but the possible costs are not. At some point, the costs of overbooking outweigh the benefits; there should be a unique maximum for revenue.

To find the maximum revenue, we evaluate the revenue function at different booking values, beginning with $b = c$, until we find a $b$ with $\text{Revenue}(b - 1) < \text{Revenue}(b)$ and $\text{Revenue}(b + 1) < \text{Revenue}(b)$. This will be our $b_{\text{opt}}$.
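The authors' Pascal simulation can be sketched in Python: the weighted coin is a Bernoulli draw per ticketed passenger, and bumped passengers roll onto the next flight in the chain. The function name, trial count, and seed below are ours:

```python
import random

def simulate_chain(b, c, r, x, p, n_flights=50, trials=200, seed=1):
    """Average per-flight revenue over a chain of flights in which
    passengers bumped from one flight are seated first on the next."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        bumped = 0
        for _ in range(n_flights):
            # Weighted-coin flip for each of the b prebooked passengers,
            # plus whoever was bumped from the previous flight.
            shows = bumped + sum(rng.random() < p for _ in range(b))
            flown = min(shows, c)
            bumped = shows - flown
            total += r * flown - x * bumped
    return total / (trials * n_flights)

rev = simulate_chain(b=107, c=100, r=1, x=1, p=0.9)
print(rev)  # a bit under the 100-seat ceiling; cf. Table 3's per-trial figures
```

With $r = x = 1$ and 50 flights per trial, multiplying the per-flight mean by 50 gives numbers comparable to the Revenue column of Table 3.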
The obvious booking strategy is to book every flight with $b_{\mathrm{opt}}$ passengers. While this method maximizes flight revenue, it yields a high percentage of flights with bumped passengers. For a plane with 100 seats, the maximum revenue occurs at $b = 108$, with $34\%$ of flights bumping passengers. Because our model does not account for changes in demand due to the airline's behavior, this might not be the truly optimal value of $b$ in the long run. Bumping large numbers of passengers will drive customers away; decreased demand will depress the price that we can charge and reduce revenue in the long term. Similarly, an especially low percentage of bumped flights may increase demand, allow us to raise prices, and increase revenue. Thus, our model accounts only for short-term effects, not long-term ones.

Moving away from maximum revenue lowers expected revenue by a small amount but decreases the bump probability by a large amount. For convenience, we set both the price and the penalty to $1, to avoid large numbers. While $1 is unrealistic, the value does not change the optimal booking strategy from the case where price and penalty are both $300, because it is the ratio of price to penalty—and not their actual values—that changes the optimal booking. Our example considers a 50-flight sequence of planes with capacity 100 each; if everyone showed up and there was no overbooking, the revenue would be $5,000. At the optimal $b = 108$ for $p = .9$, the expected revenue is $4,806 with bump probability of $33\%$. If we move down just 1 to $b = 107$, the revenue is $4,791 and the bump probability drops to $21\%$.

Table 3. Results of simulation: for each number of bookings, 100 trials with 50 flights per trial.
| Bookings | Revenue | %Bump | Δ(%Bump) | Δ(%Rev) | Δ(%Bump)/Δ(%Rev) |
|---|---|---|---|---|---|
| 100 | $4,501 | 0.00% | 0.00% | 0.02% | 0.00 |
| 101 | $4,539 | 0.02% | 0.02% | 0.84% | 0.02 |
| 102 | $4,589 | 0.04% | 0.02% | 1.09% | 0.02 |
| 103 | $4,631 | 0.04% | 0.58% | 0.92% | 0.63 |
| 104 | $4,677 | 0.62% | 2.24% | 0.97% | 2.30 |
| 105 | $4,722 | 2.86% | 5.66% | 0.97% | 5.82 |
| 106 | $4,758 | 8.52% | 12.50% | 0.76% | 16.45 |
| 107 | $4,791 | 21.02% | 12.64% | 0.68% | 18.64 |
| 108 | $4,807 | 33.66% | 22.10% | 0.34% | 65.67 |
| 109 | $4,800 | 55.76% | 17.04% | -0.15% | -115.28 |
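The last column of Table 3 drives the back-off rule developed in the next paragraph: start at the revenue-maximizing booking level and step down while the ratio Δ(%Bump)/Δ(%Rev) still exceeds a tolerance $k$. A Python sketch with that column hard-coded (the function name is ours):

```python
# Ratio column Δ(%Bump)/Δ(%Rev) from Table 3, indexed by bookings b.
ratio = {103: 0.63, 104: 2.30, 105: 5.82, 106: 16.45,
         107: 18.64, 108: 65.67, 109: -115.28}

def walk_down(b_max_revenue, k):
    """Back off from the revenue-maximizing booking level while the
    marginal bump cost per unit of revenue still exceeds k."""
    b = b_max_revenue
    while b in ratio and ratio[b] > k:
        b -= 1
    return b

print(walk_down(108, k=20))  # -> 107, the text's new optimum for k = 20
```

With $k = 20$ the optimizer leaves $b = 108$ (ratio 65.67 > 20) and stops at $b = 107$ (ratio 18.64), trading 0.34% of revenue for a 12.64-point drop in bump probability.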
We adopt as a criterion to compare two values of $b$ the ratio of the relative change in the bump probability to the relative change in revenue:

$$
\frac{\Delta P_{\text{bump}} / P_{\text{bump}}}{\Delta \text{Revenue} / \text{Revenue}} = \frac{\Delta(\%\text{Bump})}{\Delta(\%\text{Revenue})}.
$$

The process goes: A maximum revenue is found, along with its high bump probability. The optimizer now considers a lower value of $b$ and looks at the ratio of the change in the bump probability to the change in revenue. If this ratio is above a certain value $k$, the optimizer accepts the lower $b$. The optimizer continues to do this until the ratio is no longer greater than the constant. In Table 3, with $k = 20$, the new optimum $b$ would be 107, because the ratio 18.64 is not greater than $k = 20$.

Table 4 shows three different optimization values for different plane capacities.

# Application of the $n$-Flight Model

The problem specifically mentions four issues to be addressed by our model: fewer flights, heightened security, passengers' fear, and revenue losses.

Why are airlines offering fewer flights? If the airlines had kept offering the same number of flights, the question of an optimal overbooking strategy would be moot, because the planes would not fill. The huge drop in demand

Table 4. Optimal bookings using different criteria $(p = .9, r = 1, x = 1)$. For each number of bookings, 100 trials with 50 flights per trial.
| $c$ | $b$ (max revenue) | Rev | Pbump (%) | $b$ ($k=20$) | Rev | Pbump (%) | $b$ ($k=1$) | Rev | Pbump (%) |
|---|---|---|---|---|---|---|---|---|---|
| 10 | 10 | 450 | 0 | 10 | 450 | 0 | 10 | 450 | 0 |
| 30 | 32 | 1390 | 45 | 31 | 1385 | 12 | 30 | 1350 | 0 |
| 50 | 54 | 2360 | 50 | 53 | 2360 | 25 | 51 | 2290 | 0.8 |
| 100 | 109 | 4810 | 50 | 107 | 4790 | 19 | 104 | 4670 | 0.7 |
| 150 | 163 | 7270 | 37 | 161 | 7220 | 14 | 157 | 7070 | 0.3 |
| 200 | 219 | 9740 | 47 | 215 | 9660 | 9 | 211 | 9490 | 0.4 |
| 280 | 307 | 13690 | 46 | 303 | 13610 | 13 | 299 | 13450 | 1.3 |
has reduced supply but could also result in slashed prices. Since the value of the compensation for involuntarily bumped ticket-holders is tied to the ticket price (though with a ceiling), changes in ticket prices should affect the optimum booking level little if at all.

However, the fewer flights, the longer people who are denied boarding must wait for the next flight; being denied boarding is less convenient. Since compensation is usually offered in a kind of auction to induce voluntary relinquishing of seats, the airline will have to offer more. Therefore, longer delays between flights will increase the ratio of compensation amount to ticket price, tending to decrease the optimal booking level.

How do heightened security measures affect our model? They mean more security checks, longer lines, longer waits, and an increased chance of missing a flight, particularly a connecting flight. Unfortunately, people who miss their connecting flight and thus are guaranteed a spot on the next flight are not included in our model explicitly; but they do have an implicit effect. If more people miss connecting flights, they put additional stress on the system: They increase the chance that the next and subsequent flights will have too many people. Therefore, in our booking strategy, we want a low bump probability. To attain it, we should decrease the ratio $k$, which decreases the optimal booking level $b$.

Passenger fear leads not only to decreased demand (which we have already considered above) but also to a decreased probability $p$ of a passenger showing up, which in turn increases the optimal booking level $b$.

However, the hardest issue to deal with is the huge revenue loss. Less profitable airlines may fold; but presumably if there is excess demand, other airlines will either add flights or raise the price. Hence, though the huge financial loss may change the industry as a whole, it doesn't affect the optimal booking strategy.
It merely leads to fewer flights (already addressed) and may change prices (which we argued would have no effect). + +We summarize these effects in Table 5. + +Table 5. Effects of post-September 11 factors. + +
| Factor | Direct effect | Effect on optimal booking level |
|---|---|---|
| Fewer flights | x/r ↑ | b_opt ↓ |
| Heightened security measures | ΔP_bump/ΔRevenue ↓ | b_opt ↓ |
| Passenger fear | p ↓ | b_opt ↑ |
| Financial losses | — | none |
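The passenger-fear row of Table 5 ($p$ down pushes $b_{\mathrm{opt}}$ up) can be checked numerically with the single-flight expected-revenue function. An illustrative Python sketch with our own function names (the authors worked in Maple):

```python
from math import comb

def expected_revenue(b, c, r, x, p):
    # Exact single-flight expected revenue under binomial show-ups.
    return sum(comb(b, k) * p**k * (1 - p)**(b - k)
               * (r * min(k, c) - x * max(k - c, 0))
               for k in range(b + 1))

def optimal_b(c, r, x, p):
    # Scan upward from b = c, as in the papers.
    return max(range(c, c + 21), key=lambda b: expected_revenue(b, c, r, x, p))

# Lower show-up probability (more passenger fear) raises the optimum.
print(optimal_b(10, 300, 300, 0.95), optimal_b(10, 300, 300, 0.80))
```

For a 10-seat plane with $r = x = 300$, the optimal booking level at $p = .80$ comes out strictly larger than at $p = .95$, consistent with the table.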
# Verification and Sensitivity of the Model

Since at least 100 trials were used per calculation, the Central Limit Theorem assures us that the distribution of the sample mean is approximately normal, so we can be $95\%$ confident that the true value we are approximating is within two standard errors of the sample mean. Often this means we cannot be completely sure of the optimal $b$, because the maximum revenue is within two standard errors of the revenues of the values for $b$ immediately above and below.

What convinces us is that for small $n$ and small $c$, the simulation provides values very close to those from the exact solutions processed in Maple. Because of the agreement, we are confident that our simulation is coded correctly and that the simulations are accurate, even for higher $n$ and $c$.

That the simulation may be off by 1 for the optimal value of $b$ is not much of a problem. For large $c$, though Maple may be too slow to calculate over a large range of values for $b$, Maple can be used to spot-check the value of $b$ from the simulation, along with the ones immediately above and below.

In fact, we need not be much concerned about $n \geq 5$. A bumped person affects a second flight and may also affect a third and possibly a fourth flight. But the effect diminishes, so while the effect on flights close by cannot be discounted, ignoring her effect on a tenth flight does no great damage.

One might expect that changes in both booking level $b$ and capacity $c$ would significantly change the behavior of the model. But around $b_{\mathrm{opt}}$, the revenue curve is fairly flat. For example, for $n = 50$, $c = 100$, and $r = x$, using $b_{\mathrm{opt}} + 1$ instead of $b_{\mathrm{opt}}$ decreases revenue by only $0.12\%$, whereas adopting $b_{\mathrm{opt}} - 1$ instead decreases revenue by only $0.21\%$. This insensitivity is important because one of our more limiting assumptions is constant $p$.
Since slightly changing $b$ only slightly changes revenue, the effect of varying $p$ should not be too detrimental.

What is sensitive to changes in $b$ is the bump probability. Using the same example as before, moving to $b_{\mathrm{opt}} + 1$ increases the bump probability by 15 percentage points, while moving to $b_{\mathrm{opt}} - 1$ decreases it 11 percentage points. While the smallest percentage changes in revenue are grouped around $b_{\mathrm{opt}}$, the largest percentage changes in bump probability are grouped there.

# Strengths, Weaknesses, and Extensions

# Strengths

- The strong correspondence between the Maple calculations and the data from the simulation is quite heartening.
- Around $b_{\mathrm{opt}}$, the revenue is insensitive compared to the bump probability. Variations on the $n$-flight model provide a small range of near-optimal $b$'s with similar results for revenue and a fairly wide range of bump probability. The range allows an airline a choice.

# Weaknesses

- The most obvious defect of our model is that many overbooking strategies are in use—and none of them is ours! Our model is very restrictive because it assumes a constant booking strategy, as well as constant levels of $p$ and $c$. In reality, most airlines use a dynamic system in which the overbooking level is not constant but instead is varied based on conditions that change from day to day and flight to flight.
- We replace the nation's vastly complicated network of intermeshing flights with a single flight path.
- We simplify the oligopoly of airlines to a single airline.
- We do not account for no-shows such as missed connections that are the fault of the airline or due to circumstances beyond its control (e.g., weather). In such circumstances, a flight's chance of being full is influenced by previous flights even if there is no overbooking.
- In assuming a binomial distribution, we assume that people do not travel in groups and thus that their arrivals are independent events.

# Potential Extensions

- The bump probability could affect revenue in a way that we have not allowed for, namely, in terms of price. An airline that consistently offers better service should be able to charge a higher price. A way to incorporate this effect is to make price a function of bump probability, perhaps inversely proportional to it.
- It might be desirable to make the compensation $x$ a function of the percentage of people that must be excluded from the plane. If $50\%$ of the ticket-holders had to be excluded, then the incentives would have to be greater than if only $5\%$ had to be excluded. At some point the airline would stop raising the incentive and resort to involuntary denied boarding, but this too would have costs in reduced customer satisfaction. One could experiment with setting $x$ equal to some constant times the ratio of those to be bumped, $m - c$, to the total number of people $m$.
- The probability function could easily be generalized to variable $p$; in that case, $P(k)$ would become $P(k, p_n)$. The equation could be generalized to planes having different values of $c$ and $b$ by changing the upper limit of the summations from $(n - 1)(b - c)$ to $\sum_{i=1}^{n-1}(b_i - c_i)$.

# References

About.com. 2000. Voluntary and involuntary ("bumped") denied boardings (Jan-Sep 2000). http://airtravel.about.com/library/stats/blrpt8.htm.

Airlines Overbooking Project. n.d. http://math.uc.edu/~brycw/classes/361/overbook.htm.

Air Transport Association. 2001. ATA Annual Report 2001. http://www.airlines.org/public/industry/bin/2001FactFig.pdf.

Conway, Joyce, and Sanderson Smith. 1997. Overbooking airline flights. http://www.keypress.com/tswt_projects/projects97/conwaysmith/airline_overbooking.html.

U.S. Department of Transportation, Office of Aviation Enforcement and Proceedings.
n.d. Passengers denied boarding by U.S. airlines. http://www.dot.gov/airconsumer/984yovs.htm. + +# Memo + +To: CEO, TopFlight Airways + +From: Models R Us + +Re: Optimal Overbooking Strategy + +Dear Sir/Madam: + +We have heard of your company's financial hardships in the wake of September 11. We offer you our assistance. We are a team of students who have dedicated four intense days to understand the problem of airline overbooking. While many have been working on this problem for years, we feel our approach will give your company the extra edge you are seeking. + +Because only $90\%$ of passengers arrive for their scheduled flights, an overbooking strategy is necessary to maximize revenue. However, there is a penalty for overbooking. As you know, airlines offer vouchers and other incentives to + +passengers to entice them to give up their seats. The airline is also responsible for finding bumped passengers a later flight. + +Our model incorporates these features. We consider the effect on a given flight of any number of preceding flights. If too many passengers arrive from a previous flight, they can set off a domino effect; when these passengers are rescheduled on a later flight, they increase the chance that this flight, too, will be overbooked. + +Our model allows you to combat this effect by finding the optimal booking level for a plane of a given capacity. We did computations for the model in two different ways: once using the mathematical software package Maple and again using a Monte Carlo simulation developed in Pascal. We found the values for these two computational approaches to be in very close agreement. + +We also allow for the fact that maximizing revenue is not enough. If you maximize your revenue now but bump too many passengers, you could find demand for your services decreasing. You could be forced to charge a lower price, and your revenue might decrease in the long run. 
We offer you a way to establish a trade-off between revenue and percentage of flights with bumped passengers. You tell us how important it is to you to have few bumped flights, and we can tell you how many passengers to book. + +Even using three different optimization strategies to account for the effects of fluctuating demand, we find that optimal values fall in a very narrow range. For a 100-seat plane, this range is 104 to 108. + +We also evaluated the effect of the September 11th crisis on the airline industry. Our model predicts that, with a decreased number of flights, you should decrease the level of overbooking. If security delays many passengers from reaching their flights on time, you should also decrease the number of bookings. Increased passenger fear will decrease the probability that passengers show up for their flights, so in this case you should increase your booking number. + +We have given you only a taste of what our model can do. We hope you will agree that contracting for our services will be of the highest benefit to your esteemed company. + +Sincerely, + +Models R Us + +# Author-Judge's Commentary: The Outstanding Airline Overbooking Papers + +William P. Fox + +Dept. of Mathematics + +Francis Marion University + +Florence, SC 29501 + +bfox@fmarion.edu + +# Introduction + +Once again, Problem B proved to be a bigger challenge than originally considered, both for the students and the judges. + +The students had a wealth of information for the basic model from the Web and from other resources. Students could consider and refine the basic information to fit the proposed post-9-11 scenario. + +The judges had to read and evaluate many diverse (yet sometimes similar) approaches in order to find the "best" papers. Judges found mistakes—errors in modeling, assumptions, mathematics, and/or analysis—even in these "best" papers; so it is important to note that "best" does not mean perfect. 
The judges must read and apply their own subjective analysis to evaluate critically both the technical and expository solutions presented by the teams.

No paper analyzed every element or applied critical validation and sensitivity analysis to all aspects of its model. Judges found many papers with the exact same model (down to the exact same letters used for the variables), and none of these clearly cited the common source anywhere in the submission. The failure to properly credit the original source critically hurt these papers; it was obvious that their basic model was not theirs but came from a published source.

# Advice

At the conclusion of the judging, the judges offered the following comments:

- Follow the instructions
  - Clearly answer all parts.
  - List all assumptions that affect the model and justify your use of those assumptions.
  - Make sure that your conclusions and results are clearly stated.
  - In the summary, put the "bottom line and managerial recommendation" results, not a chronological description of what you did.
  - Restate the problem in your own words.

- A CEO memorandum
  - Be succinct.
  - Include "bottom line and managerial results" answers.
  - Do not include methods used or equations.

- Clarity and Style
  - Use a clear style and do not ramble.
  - A table of contents is very helpful to the judges.
  - Pictures, tables, and graphs are helpful, but you must explain them clearly.
  - Do not include a picture, table, or graph that is extraneous to your model or analysis.
  - Do not be verbose, since judges have only limited time to read and evaluate your paper.

- The Model
  - Develop your model; do not just provide a laundry list of possible models.
  - Start with a simple model and then refine it.

- Computer Programs
  - If a program is included, clearly define all parameters.
  - Always include an algorithm in the body of the paper for any code used. 
  - If running a Monte Carlo simulation, be sure to run it enough times to obtain statistically significant output.

- Validation
  - Check your model against some known baseline.
  - Check the sensitivity of your results to the parameters.
  - Check whether your recommendations and conclusions make common sense.
  - Use real data.
  - The model should represent human behavior and be plausible.

- Resources
  - All work needs to be original or referenced; a reference list at the end is not sufficient!
  - Teams may use only inanimate resources: no real people, and no people consulted over the Internet.
  - Surf the Web, but document the sites from which information is used.
  - This problem lent itself to a literature search, but few teams did one.

- Summary
  - The summary is the first piece of information read by a judge. It should be well written and contain the bottom-line answer or result.
  - It should motivate the judge to read your paper to see how you obtained your results.

# Judging

The judging is accomplished in two phases. Phase I, conducted at a different site, is "triage judging": generally 10-minute reads with a subjective scoring from 1 (worst) to 7 (best). Approximately the top $50\%$ of papers are sent on to the final judging.

Phase II is done with different judges and consists of a calibration round and another subjective round based on the 1-7 scoring system. The judges then collaborate to develop a 100-point scale that enables them to "bubble up" the better papers. Four or more longer rounds are conducted using this scale, followed by a lengthy discussion of the final group of papers.

# Reflections on Triage

- Lots of good papers made it to the final judging.
- The initial summary made a significant difference in the papers (results versus an explanation).
- The report to the CEO also made a significant difference. 
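The judges' advice about Monte Carlo run counts deserves emphasis: the standard error of a simulated estimate shrinks only like the square root of the number of runs. As a minimal sketch (the booking level, capacity, and show-up probability are hypothetical illustration values, not taken from any team's paper):

```python
import random

def bump_rate(runs, booked=110, capacity=100, p_show=0.9, seed=1):
    """Monte Carlo estimate of the probability that a flight is overbooked,
    together with the standard error of that estimate."""
    rng = random.Random(seed)
    overbooked = 0
    for _ in range(runs):
        # Each ticket-holder shows up independently with probability p_show.
        shows = sum(rng.random() < p_show for _ in range(booked))
        if shows > capacity:
            overbooked += 1
    p_hat = overbooked / runs
    std_err = (p_hat * (1.0 - p_hat) / runs) ** 0.5
    return p_hat, std_err

# Quadrupling the number of runs only halves the uncertainty.
for runs in (100, 1600, 25600):
    estimate, std_err = bump_rate(runs)
    print(f"{runs:6d} runs: bump probability ~ {estimate:.3f} +/- {std_err:.3f}")
```

With only 100 runs the estimate is good to just a few percentage points; a team reporting three significant figures from such a simulation invites exactly the criticism quoted above.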
# Triage and Final Judges' Pet Peeves

- Tables with columns headed by Greek letters or acronyms that are not immediately understood.
- Definition and variable lists that are embedded in a paragraph.
- Equations used without explaining the terms and what the equation accomplishes.
- Copying derivations from other sources; citing the reference and briefly explaining is a better approach.

# Approaches by the Outstanding Papers

Six papers were selected as Outstanding submissions because they:

- developed a workable, realistic model from their assumptions that could have been used to answer all elements;
- made clear recommendations;
- wrote a clear and understandable paper describing the problem, their model, and results; and
- handled all the elements.

The required elements, as viewed by the judges, were to

- develop a basic overbooking model that enabled one to find optimal values,
- consider alternative strategies for handling overbooked passengers,
- reflect on post-9-11 issues, and
- contain the CEO report of findings and analysis.

Most of the better papers did an extensive literature and Web search concerning overbooking by airlines and used this information in their model building.

The poorest section in all papers, including many of the Outstanding papers, was the section on assumptions and their justification. Many papers just skipped this section and went directly from the problem to model-building!

Most papers used a stochastic approach for their model. With interarrival times assumed to be exponential, a Poisson process was often used to model passengers. Teams moved quickly from the Poisson to a binomial distribution with $p$ and $1 - p$ representing "shows" and "no-shows" for ticket-holders.

Many teams started directly with the binomial distribution without loss of continuity. Some teams went on to use the normal approximation to the binomial. 
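The binomial shows/no-shows model described above leads directly to the kind of expected-value revenue calculation most teams used. As a minimal sketch (the fare, bump compensation, capacity, and show-up probability are hypothetical, not taken from any team's paper):

```python
from math import comb

def expected_revenue(n_booked, capacity=100, p_show=0.9,
                     fare=250.0, bump_cost=500.0):
    """Expected revenue when n_booked tickets are sold and each ticket-holder
    shows up independently with probability p_show (binomial model)."""
    total = 0.0
    for k in range(n_booked + 1):
        # Probability that exactly k ticket-holders show up.
        prob = comb(n_booked, k) * p_show**k * (1.0 - p_show)**(n_booked - k)
        boarded = min(k, capacity)      # seats actually filled
        bumped = max(k - capacity, 0)   # passengers denied boarding
        total += prob * (fare * boarded - bump_cost * bumped)
    return total

# Scan booking levels at or above capacity for the revenue-maximizing one.
best = max(range(100, 121), key=expected_revenue)
```

With no overbooking, expected revenue is simply `p_show * capacity * fare`; selling extra tickets raises it until the expected compensation for bumped passengers overtakes the marginal fare, which is what produces an interior optimum of the kind the teams reported.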
Revenues were generally calculated using some sort of "expected value" equation. Some teams built nonlinear optimization models, which was a nice, different approach.

Teams usually started with a simple example: a single plane with a fixed cost and capacity, one ticket price, and a reasonable value for no-shows based on historical data. This then became a model to which teams could add refinements, not only to their parameters but also to include changes based on post-9-11 conditions.

Teams often simulated these results using the computer and then made sense of the simulation by summarizing the results.

Wake Forest had two Outstanding papers. Team 69, with their paper entitled "ACE is High," was the INFORMS winner because of its superior analysis. Both papers began using a binomial approach as their base model. Team 273 developed a single-plane model, a two-plane model, and generalized to an $n$-plane model. Team 69 did a superb job in maximizing revenue after examining alternatives and varying their parameters.

The Harvey Mudd team, the MAA winner, had by far the best literature search. They used it to discuss existing models to determine if any could be used for post-9-11. Their research examined many of the current overbooking models that could be adapted to the situation.

The University of Colorado team used Frontier Airlines as their airline. They began with the binomial random variable approach, with revenues being expected values. They modeled both linear and nonlinear compensation plans for bumped passengers. They developed an auction-style model using Chebyshev's weighting distribution. They also considered time-dependency in their model.

The Duke University team, the SIAM winner, had an excellent mix of literature search material and development of their own models. They too began with a basic binomial model. They considered multiple fares and related each post-9-11 issue to parameters in their model. 
They varied their parameters and provided many key insights into the overbooking problem. This paper was the first paper in many years to receive an Outstanding rating from every judge who read it.

The Bethel College team built a risk assessment model. They used a normal distribution as their probability distribution and then put together an expected value model for revenue. Their analysis of Vanguard Airlines, with a plane capacity of 130 passengers, was done well.

Most papers found the "optimal" overbooking strategy to be overbooking by between $9\%$ and $15\%$, and they used these numbers to find "optimal" revenues for the airlines. Many teams tried alternative strategies for compensation, and some even considered the different classes of seats on an airplane.

All teams and their advisors are commended for their efforts on the Airline Overbooking Problem.

# About the Author

![](images/d10e14c46bdf3456307d5870ec75e030bb5a52c78110d75afe2033bf72e41d31.jpg)

Dr. William P. Fox is Professor and the Chair of the Department of Mathematics at Francis Marion University. He received his M.S. in operations research from the Naval Postgraduate School and his Ph.D. in operations research and industrial engineering from Clemson University. Prior to coming to Francis Marion, he was a professor in the Department of Mathematical Sciences at the United States Military Academy. He has co-authored several mathematical modeling textbooks and makes numerous conference presentations on mathematical modeling. He is a SIAM lecturer. He is currently the director of the High School Mathematical Contest in Modeling (HiMCM). He was a co-author of this year's airline overbooking problem.

Statement of Ownership, Management, and Circulation
1. Publication Title: The UMAP Journal
2. Publication Number: 0197-3622
3. Filing Date: 9/30/2002
4. Issue Frequency: quarterly
5. Number of Issues Published Annually: 4 + annual collection
6. Annual Subscription Price: \$75.00
7. Complete Mailing Address of Known Office of Publication: 57 Bedford Street, Suite 210, Lexington, MA 02420. Contact Person: Kevin Darcy. Telephone: 781-862-7878 x31
+ +8. Complete Mailing Address of Headquarters or General Business Office of Publisher (Not printer) + +same + +9. Full Names and Complete Mailing Addresses of Publisher, Editor, and Managing Editor (Do not leave blank) + +Publisher (Name and complete mailing address) + +Solomon Garfunkel, 57 Bedford Street, Suite 210, Lexington, MA 02420 + +Editor (Name and complete mailing address) + +Paul J Campbell, Beloit College, 700 College Street, Beloit, WI 53511 + +Managing Editor (Name and complete mailing address) + +Pauline Wright, 57 Bedford Street, Suite 210, Lexington, MA 02420 + +10. Owner (Do not leave blank. If the publication is owned by a corporation, give the name and address of the corporation immediately followed by the names and addresses of all stockholders owning or holding 1 percent or more of the total amount of stock. If not owned by a corporation, give the names and addresses of the individual owners. If owned by a partnership or other unincorporated firm, give its name and address as well as those of each individual owner. If the publication is published by a nonprofit organization, give its name and address.) + +
| Full Name | Complete Mailing Address |
| --- | --- |
| The Consortium for Mathematics and its Applications (COMAP, Inc.) | 57 Bedford Street, Suite 210, Lexington, MA 02420 |

11. Known Bondholders, Mortgagees, and Other Security Holders Owning or Holding 1 Percent or More of Total Amount of Bonds, Mortgages, or Other Securities. If none, check box.

| Full Name | Complete Mailing Address |
| --- | --- |
| (none listed) | |
12. Tax Status (For completion by nonprofit organizations authorized to mail at nonprofit rates) (Check one): The purpose, function, and nonprofit status of this organization and the exempt status for federal income tax purposes:

- Has Not Changed During Preceding 12 Months
- Has Changed During Preceding 12 Months (Publisher must submit explanation of change with this statement)
13. Publication Title: UMAP Journal
14. Issue Date for Circulation Data Below: 11/15/2002

| 15. Extent and Nature of Circulation | Average No. Copies Each Issue During Preceding 12 Months | No. Copies of Single Issue Published Nearest to Filing Date |
| --- | --- | --- |
| a. Total Number of Copies (Net press run) | 1340 | 1300 |
| b(1). Paid/Requested Outside-County Mail Subscriptions Stated on Form 3541 (Include advertiser's proof and exchange copies) | 1040 | 1080 |
| b(2). Paid In-County Subscriptions (Include advertiser's proof and exchange copies) | 0 | 0 |
| b(3). Sales Through Dealers and Carriers, Street Vendors, Counter Sales, and Other Non-USPS Paid Distribution | 75 | 70 |
| b(4). Other Classes Mailed Through the USPS | 0 | 0 |
| c. Total Paid and/or Requested Circulation [Sum of 15b(1)-(4)] | 1115 | 1150 |
| d(1). Free Distribution by Mail (Samples, complimentary, and other free): Outside-County as Stated on Form 3541 | | |
| d(2). Free Distribution by Mail: In-County as Stated on Form 3541 | 80 | 60 |
| d(3). Free Distribution by Mail: Other Classes Mailed Through the USPS | | |
| e. Free Distribution Outside the Mail (Carriers or other means) | 0 | 0 |
| f. Total Free Distribution (Sum of 15d and 15e) | 80 | 60 |
| g. Total Distribution (Sum of 15c and 15f) | 1195 | 1210 |
| h. Copies not Distributed | 145 | 90 |
| i. Total (Sum of 15g and 15h) | 1340 | 1300 |
| j. Percent Paid and/or Requested Circulation (15c divided by 15g times 100) | 93 | 95 |

16. Publication of Statement of Ownership: Publication required. Will be printed in the third issue of this publication.
17. Signature and Title of Editor, Publisher, Business Manager, or Owner: Ahn Aayhal. Date: 9/30/2002
I certify that all information furnished on this form is true and complete. I understand that anyone who furnishes false or misleading information on this form or who omits material or information requested on the form may be subject to criminal sanctions (including fines and imprisonment) and/or civil sanctions (including civil penalties).

# Instructions to Publishers

1. Complete and file one copy of this form with your postmaster annually on or before October 1. Keep a copy of the completed form for your records.
2. In cases where the stockholder or security holder is a trustee, include in items 10 and 11 the name of the person or corporation for whom the trustee is acting. Also include the names and addresses of individuals who are stockholders who own or hold 1 percent or more of the total amount of bonds, mortgages, or other securities of the publishing corporation. In item 11, if none, check the box. Use blank sheets if more space is required.
3. Be sure to furnish all circulation information called for in item 15. Free circulation must be shown in items 15d, e, and f.
4. Item 15h, Copies not Distributed, must include (1) newsstand copies originally stated on Form 3541 and returned to the publisher, (2) estimated returns from news agents, and (3) copies for office use, leftovers, spoiled, and all other copies not distributed.
5. If the publication had Periodicals authorization as a general or requester publication, this Statement of Ownership, Management, and Circulation must be published; it must be printed in any issue in October or, if the publication is not published during October, the first issue printed after October.
6. In item 16, indicate the date of the issue in which this Statement of Ownership will be published.
7. Item 17 must be signed.

Failure to file or publish a statement of ownership may lead to suspension of Periodicals authorization. 
# The UMAP Journal

Vol. 24, No. 2

Publisher

COMAP, Inc.

Executive Publisher

Solomon A. Garfunkel

ILAP Editor

David C. "Chris" Arney
Dean of the School of Mathematics and Sciences
The College of Saint Rose
432 Western Avenue
Albany, NY 12203
arneyc@mail.strose.edu

On Jargon Editor

Yves Nievergelt
Department of Mathematics
Eastern Washington University
Cheney, WA 99004
ynievergelt@ewu.edu

Reviews Editor

James M. Cargal
Mathematics Dept.
Troy State Univ. Montgomery
231 Montgomery St.
Montgomery, AL 36104
jmcargal@sprintmail.com

Chief Operating Officer

Laurie W. Aragon

Production Manager

George W. Ward

Director of Educ. Technology

Roland Cheyney

Production Editor

Pauline Wright

Copy Editor

Timothy McLean

Distribution

Kevin Darcy
John Tomicek

Graphic Designer

Daiva Kiliulis

# Editor

Paul J. Campbell
Campus Box 194
Beloit College
700 College St.
Beloit, WI 53511-5595
campbell@beloit.edu

# Associate Editors

- Don Adolphson, Brigham Young University
- David C. "Chris" Arney, The College of St. Rose
- Ron Barnes, University of Houston-Downtown
- Arthur Benjamin, Harvey Mudd College
- James M. Cargal, Troy State University Montgomery
- Murray K. Clayton, University of Wisconsin—Madison
- Courtney S. Coleman, Harvey Mudd College
- Linda L. Deneen, University of Minnesota, Duluth
- James P. Fink, Gettysburg College
- Solomon A. Garfunkel, COMAP, Inc.
- William B. Gearhart, California State University, Fullerton
- William C. Giauque, Brigham Young University
- Richard Haberman, Southern Methodist University
- Charles E. Lienert, Metropolitan State College
- Walter Meyer, Adelphi University
- Yves Nievergelt, Eastern Washington University
- John S. Robertson, Georgia College and State University
- Garry H. Rodrigue, Lawrence Livermore Laboratory
- Ned W. Schillow, Lehigh Carbon Community College
- Philip D. Straffin, Beloit College
- J.T. Sutcliffe, St. Mark's School, Dallas
- Donna M. Szott, Comm. College of Allegheny County
- Gerald D. Taylor, Colorado State University
- Maynard Thompson, Indiana University
- Ken Travers, University of Illinois
- Robert E.D. "Gene" Woolsey, Colorado School of Mines

# Membership Plus

Individuals subscribe to The UMAP Journal through COMAP's Membership Plus. This subscription also includes print copies of our annual collection UMAP Modules: Tools for Teaching, our organizational newsletter Consortium, on-line membership that allows members to download and reproduce COMAP materials, and a $10\%$ discount on all COMAP purchases.

(Domestic) #2320 \$90
(Outside U.S.) #2321 \$105

# Institutional Plus Membership

Institutions can subscribe to the Journal through either Institutional Plus Membership, Regular Institutional Membership, or a Library Subscription. Institutional Plus Members receive two print copies of each of the quarterly issues of The UMAP Journal, our annual collection UMAP Modules: Tools for Teaching, our organizational newsletter Consortium, on-line membership that allows members to download and reproduce COMAP materials, and a $10\%$ discount on all COMAP purchases.

(Domestic) #2370 \$415
(Outside U.S.) #2371 \$435

# Institutional Membership

Regular Institutional members receive print copies of The UMAP Journal, our annual collection UMAP Modules: Tools for Teaching, our organizational newsletter Consortium, and a $10\%$ discount on all COMAP purchases. 
(Domestic) #2340 \$180
(Outside U.S.) #2341 \$200

# Web Membership

Web membership does not provide print materials. Web members can download and reproduce COMAP materials, and receive a $10\%$ discount on all COMAP purchases.

(Domestic) #2310 \$39
(Outside U.S.) #2310 \$39

To order, send a check or money order to COMAP, or call toll-free 1-800-77-COMAP (1-800-772-6627).

The UMAP Journal is published quarterly by the Consortium for Mathematics and Its Applications (COMAP), Inc., Suite 210, 57 Bedford Street, Lexington, MA 02420, in cooperation with the American Mathematical Association of Two-Year Colleges (AMATYC), the Mathematical Association of America (MAA), the National Council of Teachers of Mathematics (NCTM), the American Statistical Association (ASA), the Society for Industrial and Applied Mathematics (SIAM), and The Institute for Operations Research and the Management Sciences (INFORMS). The Journal acquaints readers with a wide variety of professional applications of the mathematical sciences and provides a forum for the discussion of new directions in mathematical education (ISSN 0197-3622).

Periodical rate postage paid at Boston, MA and at additional mailing offices.

# Send address changes to: info@comap.com

COMAP, Inc., 57 Bedford Street, Suite 210, Lexington, MA 02420

Copyright 2003 by COMAP, Inc. All rights reserved.

# Vol. 24, No. 2 2003

# Table of Contents

# Editorial

Environmental Modeling: Not Just a Senior Elective Any More Paul J. Campbell 93

# Special Section on the ICM

Results of the 2003 Interdisciplinary Contest in Modeling David C. "Chris" Arney 97

Airport Baggage Screening: Optimizing the Implementation of EDS Machines Gary Allen Olson, Kylan Neal Johnson, and Joseph Paul Rasca 111

How I Learned to Stop Worrying and Find the Bomb Tara Martin, Gautam Thatte, and Michael Vrable 123

Advancing Airport Security through Optimization and Simulation Michelle R. Livesey, Carlos A. 
Diaz, and Terrence K. Williams 141

The Price of Security: A Cost-Benefit Analysis of Screening of Checked Baggage Michael Alan Powell, Tate Alan Jarrow, and Kyle Andrew Greenberg 153

Feds with EDS: Searching for the Optimal Explosive Scanning System Robert T. Haining, Dana M. Lindemann, and Neal P. Richardson 169

Judge's Commentary: The Outstanding Airport Screening Papers C. Richard Cassady 185

Authors' Commentary: Aviation Security Baggage Screening Strategies: To Screen or Not to Screen, That Is the Question! Sheldon H. Jacobson and John E. Kobza 189

![](images/6de243d8ac552a7e20a7e2b722628aee8a565c0e28857164f4f9709e17ad5dc8.jpg)

# Editorial

# Environmental Modeling: Not Just a Senior Elective Any More

Paul J. Campbell

Mathematics and Computer Science

Beloit College

Beloit, WI 53511

campbell@beloit.edu

How much do you believe in the usefulness of mathematics? How well do your students concur? How much does their level of belief affect whether they choose to major in mathematics? How can you raise their level of belief?

Students who arrive at college already bent on a career in engineering or physical science need no further convincing: they have already been converted to the "religion" of applied mathematics. The faith of other students, including those fascinated with or intrigued by mathematics as an art, may be much weaker. Calculus, commonly taught as pure mathematics, by pure mathematicians, may weaken faith in the applicability of mathematics rather than strengthen it.

Over the last 25 years, mathematics departments have introduced courses in mathematical modeling, usually as a senior elective and taken only by mathematics majors. Such courses tend to be annual tent revivals that attract the already-fervent. 
They are too little, too late; they miss major audiences whom we should strive to reach much earlier in their education, in particular

- students who take just one mathematics course in college,
- potential mathematics majors, and
- women.

COMAP's college text For All Practical Purposes [COMAP 2003] is aimed directly at the first group and has reached half a million students over the last 15 years. Meanwhile, over the entire 25 years, COMAP has developed an immense amount of material for demonstrating the applicability of mathematics and mathematical modeling at all levels, from grade school through college, including an entire high school mathematics text series. Much of that material has appeared in this Journal and been used by instructors with the second group above.

Yet the proportion of women in applied mathematics remains low. Why? A new longitudinal study suggests that "Girls shy away from careers in math not because they lack the skills. They just don't see math as useful." According to Jacquelynne Eccles, Professor of Psychology and Women's Studies at the University of Michigan's Institute for Research on Women and Gender,

Girls do tend to underestimate their math ability in high school... But that's not what pushes them away from mathematically based majors. There are two key factors in that decision: how much they believe in the ultimate utility of mathematics, and how much they value working with, and for, people.... Boys' beliefs and values are pulling them toward [math-based majors and careers] while girls' are pushing them in other directions. [Becker 2003].

How can we "evangelize" this group?

- To show how mathematics works for people, focus on environmental modeling.
- To increase the experience of working with people, teach the course in a "studio" format: have students work in teams on projects, as in COMAP's Interdisciplinary and Mathematical Contests in Modeling. 
- To emphasize the utility of mathematics, teach the mathematics involved in a "just in time" spirit. This is the opposite of the senior modeling elective, for which the students spend three years preparing before seeing where applications may lie.
- Teach mathematical modeling earlier. Courses in finite mathematics, liberal arts mathematics, and quantitative literacy can all be redirected toward a modeling spirit; and mathematics departments should consider a mainline mathematics course in modeling at the freshman or sophomore level. Apart from COMAP's own materials, there are now abundant others [Banks 1998, 1999; Fusaro and Kenschaft 2003; Giordano et al. 2003; Hadlock 1998; Kalman 1997; Mooney and Swift 1999]. With the help of dynamical modeling software, quite sophisticated modeling can be done without prior background in calculus or differential equations [Campbell 1996].

Mathematical modeling is a great way to show both that mathematics is useful and that students themselves can use it to answer questions of interest to them. Why deprive mathematics majors of this experience until their senior year, and most other students altogether?

# References

Banks, Robert B. 1998. Towing Icebergs, Falling Dominoes, and Other Adventures in Applied Mathematics. Princeton, NJ: Princeton University Press.
_______. 1999. Slicing Pizzas, Racing Turtles, and Further Adventures in Applied Mathematics. Princeton, NJ: Princeton University Press.
Becker, Anne. 2003. Why girls are bored with math. Psychology Today (27 May 2003). http://www.psychologytoday.com/htdocs/prod/PTOArticle/PTO-20030527-000003.asp.
Campbell, Paul J. 1996. Finite mathematics as environmental modeling. In *Mathematical Modeling in the Undergraduate Curriculum: Proceedings of the 1996 Conference*, University of Wisconsin-La Crosse Press, 1997, 67-80. Reprinted in *The UMAP Journal* 17 (4) (1996): 415-430.
Fusaro, B.A., and P.C. Kenschaft. 2003. 
*Environmental Mathematics in the Classroom*. Washington, DC: Mathematical Association of America.
Giordano, Frank R., Maurice D. Weir, and William P. Fox. 2003. A First Course in Mathematical Modeling. 3rd ed. Pacific Grove, CA: Thomson.
Hadlock, Charles R. 1998. Mathematical Modeling in the Environment. Washington, DC: Mathematical Association of America.
Kalman, Dan. 1997. Elementary Mathematical Models: Order Aplenty and a Glimpse of Chaos. Washington, DC: Mathematical Association of America.
Mooney, Douglas, and Randall Swift. 1999. A Course in Mathematical Modeling. Washington, DC: Mathematical Association of America.

# About the Author

![](images/cb606d8a8525807c50c9b79754978e40dd35ca7f2af891c50af3bd53a93be0a1.jpg)

Paul Campbell graduated summa cum laude from the University of Dayton and received an M.S. in algebra and a Ph.D. in mathematical logic from Cornell University. He has been at Beloit College since 1977, where he served as Director of Academic Computing from 1987 to 1990. He is Reviews Editor for Mathematics Magazine and author of the annual articles on mathematics for the Encyclopaedia Britannica yearbooks. He has been editor of The UMAP Journal since 1984.

# Modeling Forum

# Results of the 2003 Interdisciplinary Contest in Modeling

Chris Arney, Director

Dean of the School of Mathematics and Sciences

The College of Saint Rose

432 Western Avenue

Albany, NY 12203

arneyc@mail.strose.edu

# Introduction

A total of 146 teams of undergraduates, from 84 institutions in 6 countries, spent the second weekend in February working on an applied mathematics problem in the 5th Interdisciplinary Contest in Modeling (ICM).

This year's contest began at 8:00 P.M. (EST) on Friday, Feb. 6, and ended at 8:00 P.M. (EST) on Monday, Feb. 10. 
During that time, the teams of up to three undergraduates or high-school students researched and submitted their solutions to an open-ended interdisciplinary modeling problem involving the coordination and management of airport security. After a weekend of hard work, solution papers were sent to COMAP.

The five papers judged to be Outstanding appear in this issue of The UMAP Journal. Results and winning papers from the first four contests were published in special issues of The UMAP Journal in 1999 through 2002.

In addition to the ICM, COMAP also sponsors the Mathematical Contest in Modeling (MCM), which runs concurrently with the ICM. Information about the two contests can be found at

www.comap.com/undergraduate/contest/icm

www.comap.com/undergraduate/contest/mcm

The ICM and the MCM are the only international modeling contests in which students work in teams to find a solution.

Centering its educational philosophy on mathematical modeling, COMAP uses mathematical tools to explore real-world problems. It serves the educational community as well as the world of work by preparing students to become better-informed and better-prepared citizens, consumers, and workers.

This year's problem, the Airport Security Problem, involved understanding, analyzing, and managing baggage screening and flight scheduling at U.S. airports. It proved particularly challenging: it contained various data sets to be analyzed and several demanding requirements needing scientific and mathematical connections, along with the ever-present requirements of creativity, precision, and effective communication. The authors of the problem, operations research analysts and engineers Sheldon Jacobson and John Kobza, were members of the final judging team, and their commentary appears in this issue.

All the competing teams are to be congratulated for their excellent work and dedication to scientific modeling and problem solving. 
This year's judges remarked that the quality of the papers was extremely high, making it difficult to choose the five Outstanding papers.

In 2003 the ICM continued to grow as an online contest, where teams registered, obtained contest instructions, and downloaded the problem through COMAP's ICM Website.

# Problem: The Airport Security Problem

# Aviation Baggage Screening Strategies: To Screen or Not to Screen, That Is the Question

You are an analysis team in the Office of Security Operations for the Transportation Security Administration (TSA), responsible for the Midwest Region of the United States. New laws will soon mandate $100\%$ screening of all checked bags at the 429 passenger airports throughout the nation by explosive detection systems (EDSs; see Figure 1). EDSs use computed tomography (CT) technology to scan checked bags, similar to how CAT scans are used in hospitals. Using multiple x-rays of each bag, EDSs create three-dimensional images of a bag's contents showing the density of each item. This information is used to determine whether an explosive device is present. Experimentation with EDSs indicates that each device is operational about $92\%$ of the time and that each device can examine between 160 and 210 bags per hour.

The TSA has been actively purchasing EDSs and deploying them at airports throughout the nation. Given that these devices cost nearly \$1 million each, weigh as much as eight tons, and cost several thousand dollars to install in an airport, determining the correct number of devices to deploy at each airport and how best to use them (once operational) are important problems.

Currently, manufacturers are not able to produce the expected number of EDSs required to meet the federal mandate of $100\%$ screening of checked luggage. Because of the limited number of EDS machines available, the Director of Airport Security for the Midwest Region (Mr.
Sheldon) is not surprised that the TSA is requesting a detailed analysis of the estimated number of EDSs required at all airports. In addition, given the limited space and funds available for each airport, Mr. Sheldon believes that at some point a detailed analysis of emerging technologies will be needed. Promising technologies with more modest space and labor costs will emerge in the coming decade (e.g., x-ray diffraction, neutron-based detection, quadrupole resonance, millimeter-wave imaging, and microwave imaging).

# Task 1

You have been tasked by your Director, Mr. Sheldon, to develop a model to determine the number of EDSs required at two of the largest facilities in the region, Airports A and B, which are described in the Technical Information Sheet (TIS) in Appendix A. Carefully describe the assumptions that you make in designing the model and then use your model to recommend the number of EDSs required using the data provided in Table 1 of the TIS.

# Task 2

Prepare a short (one-page) position paper to accompany your model that describes the security-related objectives of the airlines and the constraints that the airlines must work within for the sets of flights described in Table 1 of the TIS.

# Task 3

Since security screening takes time and might delay passengers, the airport managers at Airports A and B request that you develop a model that can help the airlines determine how to schedule the departure of different types of flights within the peak hour. Carefully describe all the assumptions that you make in designing the model and use your model to produce a schedule for the two airports with the data provided in Table 1.

# Task 4

Based on your analysis, what can you recommend to Mr. Sheldon and the airlines about checked baggage screening for the flights during the peak hours at your two airports?

# Task 5

Mr.
Sheldon realizes that your work may have national impact and requests that you write a memo explaining how your models can be adapted to determine the number of EDSs and airline scheduling for all 193 airports in the Midwest Region. He will send the memo, along with the models and the analysis, to the Director of the Office of Security Operations (his boss) at the TSA and to all security directors of other airports in the region for their comment and possible implementation.

Additional security measures associated with higher risks may require that up to $20\%$ of the passengers have all their checked bags screened through both an EDS and an explosive trace detection (ETD) machine, even though an EDS is $98.5\%$ accurate in identifying explosive devices in checked bags. ETD machines use mass spectrometry technology to detect minute particles of explosive compounds. Each ETD machine costs \$45,000 to purchase; however, the labor cost to operate an ETD machine is approximately 10 times that of an EDS. ETD machines can process 40 to 50 bags per hour; they are operational $98\%$ of the time; and they are $99.7\%$ accurate in identifying explosive materials on checked bags. At this time, ETD machines have not been federally certified, but Mr. Sheldon believes that they will soon be an integral part of national airport security systems.

# Task 6

Modify your EDS models to incorporate the use of ETD machines, and determine how many ETD machines are needed for Airports A and B and whether the schedules need to be changed. Since this information may affect national-level decisions, write a memo to the Director of Homeland Security and the Director of the TSA with a technical analysis of this enhanced screening policy. Is the cost of such a policy justified in light of the value that it provides? Should the ETDs replace any of the EDS devices?

# Task 7

The Director of Homeland Security must also decide how best to fund future scientific research programs.
Use your EDS/ETD model to examine the possible effect of changes in the device technology, cost, accuracy, speed, and operational reliability. Include recommendations for the science, technology, engineering, and mathematics (STEM) research areas that will have the biggest impact on security system performance. Add your recommendation to the memo prepared in Task 6.

# Appendix A: Technical Information Sheet (TIS)

Although all the flights in Table 1 depart during a peak hour, their actual departure times are set by the airline when designing its flight schedule. A flight cannot depart until all its checked bags are screened using an EDS. The airline has the flexibility to schedule its flights during the peak hour to avoid undesirable flight delays due to unscreened bags.

Historical data indicate that flights with 85 or fewer seats typically fly with between $70\%$ and $100\%$ of their seats occupied. Flights with between 128 and

Table 1. Peak-hour flight departures for airports A and B. Note: On average, $2\%$ of flights are cancelled each day.
| Type | Seats/flight | Airport A | Airport B |
|------|--------------|-----------|-----------|
| 1 | 34 | 10 | 8 |
| 2 | 46 | 4 | 6 |
| 3 | 85 | 3 | 7 |
| 4 | 128 | 3 | 5 |
| 5 | 142 | 19 | 9 |
| 6 | 194 | 5 | 10 |
| 7 | 215 | 1 | 2 |
| 8 | 350 | 1 | 1 |
215 seats typically fly with between $60\%$ and $100\%$ of their seats occupied. Flights with 350 seats typically fly with between $50\%$ and $100\%$ of their seats occupied. Passengers typically arrive for their flight between forty-five minutes and two hours prior to their scheduled departure time. For flights other than shuttle service, airlines claim that $20\%$ of the passengers do not check any luggage, $20\%$ check one bag, and the remaining passengers check two bags.

Preliminary estimates indicate that it will cost \$100,000 to modify existing infrastructure (reinforced flooring, etc.) to install each EDS at Airport A and \$80,000 to install a device at Airport B.

![](images/8763282f8b6fe1bd0679d076b780e676fda1ba7cc3b5dc21c4623eb063cfb80e.jpg)
Figure 1. Explosive Detection System (EDS).

# The Results

Solution papers were coded at COMAP headquarters so that names and affiliations of authors would be unknown to the judges. Each paper was read preliminarily by two "triage" judges at the U.S. Military Academy at West Point, NY. At the triage stage, the summary and overall organization are the basis for judging a paper. If the judges' scores diverged for a paper, the judges conferred; if they still did not agree on a score, a third judge evaluated the paper.

Final judging took place at the United States Military Academy, West Point, NY. The judges classified the papers as follows:
| | Outstanding | Meritorious | Honorable Mention | Successful Participation | Total |
|---|---|---|---|---|---|
| Airport Security | 5 | 19 | 60 | 62 | 146 |
The five papers that the judges designated as Outstanding appear in this special issue of *The UMAP Journal*, together with commentaries by the authors and by one of the judges. We list those teams and the Meritorious teams (and advisors) below; the list of all participating schools, advisors, and results is in the Appendix.

# Outstanding Teams

"Airport Baggage Screening: Optimizing the Implementation of EDS Machines"
Carroll College, Helena, MT
Advisor: Mark R. Parker
Team members: Gary Allen Olson, Kylan Neal Johnson, Joseph Paul Rasca

"How I Learned to Stop Worrying and Find the Bomb"
Harvey Mudd College, Claremont, CA
Advisor: Hank Krieger
Team members: Tara Martin, Gautam Thatte, Michael Vrable

"Advancing Airport Security through Optimization and Simulation"
Humboldt State University, Arcata, CA
Advisor: Eileen M. Cashman
Team members: Michelle R. Livesey, Carlos A. Diaz, Terrence K. Williams

"The Price of Security: A Cost-Benefit Analysis of 100% Screening of Checked Baggage"
United States Military Academy, West Point, NY
Advisor: Michael J. Johnson
Team members: Kyle Andrew Greenberg, Tate Alan Jarrow, Michael Alan Powell

"Feds with EDS: Searching for the Optimal Explosive Scanning System"
Wake Forest University, Winston-Salem, NC
Advisor: Bob Plemmons
Team members: Robert T. Haining, Dana M. Lindemann, Neal P.
Richardson

# Meritorious Teams (19 teams)

Asbury College, Wilmore, KY (Duk Lee)
Beijing Northern Jiaotong University, China (Yingdong Liu)
Beijing University of Posts and Telecommunications, China (Shoushan Luo)
Chongqing University, China (Xiaofan Yang)
Elon University, Elon, NC (Crista Coles)
Harbin Institute of Technology, China (Kean Liu)
Harvey Mudd College, Claremont, CA (Hank Krieger)
Jinan University, China (Daiqiang Hu)
Maggie Walker Governor's School, Richmond, VA (Martha Hicks)
Olin College of Engineering, Needham, MA (Michael Moody)
School of Mathematical Sciences, Peking University, China (Yulong Liu)
Trinity University, San Antonio, TX (Allen Holder)
United States Military Academy, West Point, NY (Elizabeth Schott)
United States Military Academy, West Point, NY (Christopher Farrell)
University College Dublin, Ireland (Rachel Quinlan)
University of Science and Technology of China, Hefei (Hong Zhang)
University of Virginia, VA (Julian Noble)
Wake Forest University, Winston-Salem, NC (Hugh Howards)
Zhejiang University, China (Yong He)

# Awards and Contributions

Each participating ICM advisor and team member received a certificate signed by the Contest Directors and by the Head Judge. Additional awards were presented to the Humboldt State University team by the Institute for Operations Research and the Management Sciences (INFORMS).

# Judging

Director

Chris Arney, Dean of the School of Mathematics and Sciences, The College of Saint Rose, Albany, NY

Associate Directors

Michael Kelley, Dept. of Mathematical Sciences, U.S. Military Academy, West Point, NY

Gary W. Krahn, Dept. of Mathematical Sciences, U.S. Military Academy, West Point, NY

Judges

Richard Cassidy, Dept. of Industrial Engineering, University of Arkansas, Fayetteville, AR

John Kobza, Dept. of Industrial Engineering, Texas Tech University, Lubbock, TX

Sheldon Jacobson, Dept.
of Mechanical and Industrial Engineering, University of Illinois, Urbana, IL + +Frank Wattenberg, Dept. of Mathematical Sciences, U.S. Military Academy, West Point, NY + +Triage Judges + +Mike Arcerio, Gabe Costa, Eric Drake, Bill Felhman, Jeff Flemming, Andy Glen, Paul Goethals, Alex Heidenberg, Denise Jacobs, Alan Johnson, Gary Krahn, Rich Laverty, Tom Lainis, Barb Melendez, Chris Moseley, Joe Myers, Mike Phillips, Bart Stewart, Frank Wattenberg, Brian Winkel, Robbie Williams, and Shaw Yoshitani, all of the U.S. Military Academy, West Point, NY. + +# Source of the Problem + +The Airport Security Problem was contributed by Sheldon Jacobson (Dept. of Mechanical and Industrial Engineering, University of Illinois, Urbana, IL) and John Kobza (Dept. of Industrial Engineering, Texas Tech University, Lubbock, TX). + +# Acknowledgments + +Major funding for the ICM is provided by a grant from the National Science Foundation through COMAP. Additional support is provided by the Institute for Operations Research and the Management Sciences (INFORMS). + +We thank: + +- the ICM judges and ICM Board members for their valuable and unflagging efforts, and +- the staff of the Dept. of Mathematical Sciences, U.S. Military Academy, West Point, NY, for hosting the triage judging and the final judging. + +# Cautions + +# To the reader of research journals: + +Usually a published paper has been presented to an audience, shown to colleagues, rewritten, checked by referees, revised, and edited by a journal editor. Each of the student papers here is the result of undergraduates working on a problem over a weekend; allowing substantial revision by the authors could give a false impression of accomplishment. So these papers are essentially au naturel. Light editing has taken place: minor errors have been corrected, wording has been altered for clarity or economy, style has been adjusted to that of The UMAP Journal, and the papers have been edited for length. 
Please peruse these student efforts in that context.

# To the potential ICM Advisor:

It might be overpowering to encounter such output from a weekend of work by a small team of undergraduates, but these solution papers are highly atypical. A team that prepares and participates will have an enriching learning experience, independent of what any other team does.

# Editor's Note

As usual, some of the Outstanding papers were several times as long as we can accommodate in the Journal; so space considerations forced me to edit the Outstanding papers for length. The code and raw output of computer programs are omitted, the abstract is often combined with the summary, and usually it is not possible to include all of the many tables and figures.

For the Airport Security Problem, the memos of Tasks 2, 5, and 7 from most papers are largely omitted as such and their modeling content folded into the text. Although these memos provide valuable summaries, they do not contain modeling and tend to duplicate conclusions reached in other sections.

In all editing, I endeavor to preserve the substance and style of the paper, especially the approach to the modeling.

Paul J. Campbell, Editor

# Appendix: Successful Participants

KEY:

P = Successful Participation
H = Honorable Mention
M = Meritorious
O = Outstanding (published in this special issue)
| INSTITUTION | CITY | ADVISOR | RESULT(S) |
|---|---|---|---|
| **ALABAMA** | | | |
| Athens State University | Athens | M. Leigh Lunsford | P |
| **CALIFORNIA** | | | |
| California State Polytechnic University | Pomona | Jennifer Swithkes | P |
| Harvey Mudd College | Claremont | Arthur Benjamin | H |
| | | Hank Krieger | O, M |
| Humboldt State University | Arcata | Eileen M. Cashman | O |
| Sonoma State University | Rohnert Park | Elaine T. McDonald | P |
| **COLORADO** | | | |
| Regis University | Denver | Jim Seibert | H, P |
| University of Colorado | Boulder | Bengt Fornberg | H |
| **ILLINOIS** | | | |
| Monmouth College | Monmouth | Christopher G. Fasano | P |
| **INDIANA** | | | |
| Earlham College | Richmond | Mic Jackson | P |
| **KENTUCKY** | | | |
| Asbury College | Wilmore | David L. Couliette | H |
| | | Duk Lee | M, H |
| Bellarmine University | Louisville | William J. Hardin | H |
| Northern Kentucky University | Highland Heights | Phillip H. Schmidt | H |
| **MASSACHUSETTS** | | | |
| Olin College of Engineering | Needham | Michael E. Moody | M |
| **MICHIGAN** | | | |
| East Grand Rapids Public Schools | Grand Rapids | Mary Elderkin | P |
| Lawrence Technological University | Southfield | Howard Whitston | H |
| | | Ruth Favro | P |
| **MINNESOTA** | | | |
| Bemidji State University | Bemidji | Colleen G. Livingston | H |
| **MISSOURI** | | | |
| University of Missouri-Rolla | Rolla | Mohamed Ben Rhouma | M |
| **MONTANA** | | | |
| Carroll College | Helena | Holly S. Zullo | H |
| **NEVADA** | | | |
| Sierra Nevada College | Incline Village | Charles Levitan | P |
| **NEW JERSEY** | | | |
| Rowan University | Glassboro | Hieu D. Nguyen | P |
| **NEW YORK** | | | |
| Concordia College | Bronxville | John Loase | H, H |
| Nazareth College | Rochester | Nelson G. Rich | P |
| Saint Bonaventure College | Olean | Albert G. White | P |
| U.S. Military Academy | West Point | Christopher M. Farrell | M |
| | | Elizabeth W. Schott | M |
| | | Michael J. Johnson | O |
| **NORTH CAROLINA** | | | |
| Appalachian State University | Boone | Eric S. Marland | H |
| Elon University | Elon | Crista Coles | M, P |
| N.C. School of Science and Mathematics | Durham | Dot Doyle | P |
| Wake Forest University | Winston-Salem | Bob Plemmons | O |
| | | Edward E. Allen | P |
| | | Hugh N. Howards | M |
| **OHIO** | | | |
| Ohio Wesleyan University | Delaware | Richard S. Linder | H |
| Youngstown State University | Youngstown | J.D. Faires | H, P |
| | | Michael Crescimanno | P |
| **OREGON** | | | |
| Eastern Oregon University | La Grande | Jeffrey N. Woodford | H |
| Lewis and Clark College | Portland | Thomas Olsen | H |
| **PENNSYLVANIA** | | | |
| Lafayette College | Easton | Thomas Hill | H |
| **TEXAS** | | | |
| Brazoswood High School | Clute | Deborah E. Sitka | P |
| Trinity University | San Antonio | Allen G. Holder | M |
| **UTAH** | | | |
| University of Utah | Salt Lake City | Don H. Tucker | H |
| **VIRGINIA** | | | |
| Maggie Walker Governor's School | Richmond | Martha A. Hicks | M, H |
| University of Virginia | Charlottesville | Julian Victor Noble | M |
| **WASHINGTON** | | | |
| Washington State University | Pullman | V.S. Manoranjan | P |
| **WISCONSIN** | | | |
| Beloit College | Beloit | Paul J. Campbell | P |
| **CHINA** | | | |
| Anhui University | Hefei | Haixian Wang | P |
| BeiHang University | Beijing | Peng Linping | H |
| Beijing Institute of Technology | Beijing | Cui Xiao Di | H |
| | | Li Bing Zhao | H |
| | | Zhang Bao Xue | P |
| Beijing Northern Jiaotong University | Beijing | Liu Yingdong | M |
| Beijing University of Chemical Technology | Beijing | Liu Damin | H |
| | | Huang Jinyang | H |
| | | Xu Lanxi | P |
| Beijing University of Posts and Tel. | Beijing | Sun Hongxiang | M |
| | | Luo Shoushan | H |
| | | He Zuguo | H |
| Central South University | Changsha | Chen Xiaosong | P |
| | | Qin Xuanyun | H |
| China University of Mining and Technology | Xuzhou | Xue Xiuqian | H |
| | | Zhu Kaiyong | P |
| Chongqing University | Chongqing | He Renbin | H |
| | | Li Zhiliang | H |
| | | Yang Xiaofan | M |
| Dalian University | Dalian | Gang Jiatai | P |
| Dalian Univ. of Tech. | Dalian | Zhao Lizhong | H |
| | | Yu Hongquan | H |
| | | Yi Wang | H |
| Dong Hua University | Shanghai | Ying Mingyou | P |
| East China University of Sci. and Tech. | Shanghai | Ni Zhongxin | P, P |
| Fudan University | Shanghai | Yuan Cao | H |
| | | Cai Zhijie | P |
| Hangzhou University of Commerce | Hangzhou | Hua Jiukun | H |
| | | Zhu Ling | H, H |
| Harbin Engineering University | Harbin | Yu Tao | H |
| | | Luo Yuesheng | P, P |
| Harbin Institute of Technology | Harbin | Hong Ge | P |
| | | Kean Liu | M |
| | | Tong Zheng | P |
| Harbin University of Sci. and Tech. | Harbin | Chen Dongyan | H |
| Hefei University of Technology | Hefei | Su Huaming | P |
| | | Du Xueqiao | P |
| Institute of Math., Nankai Univ. | Tianjin | Lei Fu | P, P |
| Jiao Tong University | Shanghai | Liuqing Xiao | P |
| Jilin University | Changchun | Cao Chunling | P |
| | | Wang Shuyun | P |
| Jinan University | Guangzhou | Hu Daiqiang | M |
| | | Fan Suohai | P |
| Nanjing Univ. of Sci. & Tech. | Nanjing | Chen Peixin | P |
| | | Xu Chungen | H |
| Northern Jiaotong University | Beijing | Gui Wenhao | P |
| | | Wang Xiaoxia | P |
| Northwestern Polytechnical University | Xi'an | Lu Quanyi | H |
| | | Xiao Hua Yong | H |
| | | Zhao Xuanmmin | P |
| Peking University | Beijing | Zhi Li | P |
| Peking University, School of Math & Sci. | Beijing | Liu Yulong | M, H |
| Shanghai Jiaotong University | Shanghai | Gang Zhou | H, P |
| South China University of Technology | Guangzhou | Qin Yongan | H |
| | | Hao Zhifeng | P |
| | | Tao Zhisui | H |
| Southeast University | Nanjing | Zhang Leihong | P |
| | | Sun Zhizhong | H, P |
| Tianjin University | Tianjin | Liu Zeyi | H |
| | | Song Zhanjie | H |
| Tsinghua University | Beijing | Huang Hongxuan | P |
| | | Xi Deng | H |
| | | Zhe Zhou | H |
| University of Electronic Sci. & Tech. | Chengdu | Xu Quanzi | H |
| | | Yong Zhang | H, P |
| University of Sci. & Tech. of China | Hefei | Chao Meng | H |
| | | Hong Zhang | M |
| Wuhan University of Technology | Wuhan | Huang Zhangcan | P |
| | | Peng Sijun | P |
| | | Wang Weihua | H |
| Zhejiang University | Hangzhou | Yang Qifan | P |
| | | Yong He | M |
| | | Tan Zhiyi | H |
| Zhongshan University | Guangzhou | Li Caiwei | P |
| Zhongshan (Sun Yat-sen) University | Guangzhou | Yun Bao | H |
| **FINLAND** | | | |
| Päivölä College | Tarttila | Anne Kouhia | P, P |
| **INDONESIA** | | | |
| Institut Teknologi Bandung | Bandung | Edy Soewono | H, P |
| | | Sapto Wahyu Indratno | P |
| **IRELAND** | | | |
| University College Dublin | Belfield | Rachel Quinlan | M |
| | | Rachel Quinlan | P |
| | Dublin | Peter N. Duffy | P |
| **UNITED KINGDOM** | | | |
| Dulwich College | London | Jeremy Lord | H |
# Editor's Note

For team advisors from China, we have endeavored to list family name first.

# Airport Baggage Screening: Optimizing the Implementation of EDS Machines

Gary Allen Olson
Kylan Neal Johnson
Joseph Paul Rasca
Carroll College
Helena, MT

Advisor: Mark R. Parker

# Summary

As analysts for the Transportation Security Administration, we explore the effects of the new $100\%$ baggage screening law. Our first goal is to find the optimal number of Explosive Detection Systems (EDSs) that an airport will require to meet the new federal mandates. In addition, we develop a scheduling algorithm to minimize airport congestion. Lastly, we use an analysis of cutting-edge technology, including Explosive Trace Detection (ETD), to make recommendations concerning the future of airport security.

We develop three models to estimate the optimal number of EDS machines required for the two largest airports in our region. Our first model is a simple approximation; we then develop a more accurate multichannel queuing system model. Finally, we create an influx simulation to analyze minute-by-minute baggage arrivals. This model accurately examines passenger arrival dynamics, including the build-up of baggage throughout peak hours of operation.

For an optimum peak-hour schedule, we arrange the flights so that passengers are equally distributed among evenly spaced time intervals. This arrangement minimizes congestion in the airport and turmoil if delays occur. We find this optimal schedule for any given set of flights. Finally, by combining this model with our influx simulation, we find that airport A requires 23 EDS machines at a cost of \$25.3 million and airport B requires 24 EDS machines at \$25.9 million.

We formulate recommendations for security decision-makers and address their concerns, including our dismissal of ETDs as a supplement to EDSs.
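The summary's cost figures follow directly from a unit price of \$1 million per EDS plus the per-airport installation costs given in the problem statement (\$100,000 at airport A, \$80,000 at airport B). A minimal sketch of that arithmetic (the function and variable names are ours, not the team's):

```python
# Quick check of the summary's cost figures. Machine counts (23 and 24) are
# the paper's influx-simulation results; installation costs are from the
# problem statement.
EDS_UNIT_COST = 1_000_000  # purchase price per EDS, in dollars

def total_cost(machines: int, install_cost: int) -> int:
    """Total cost of buying and installing `machines` EDS units."""
    return machines * (EDS_UNIT_COST + install_cost)

cost_a = total_cost(23, 100_000)  # airport A
cost_b = total_cost(24, 80_000)   # airport B
print(cost_a, cost_b)  # 25300000 25920000, i.e., ~$25.3M and ~$25.9M
```

The second figure comes out to \$25.92 million, which the summary rounds to \$25.9 million.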
# EDS Modeling

# Model 1: The Hasty Model

To find a quick approximation for the number of EDSs needed, we first determine the total number of people who use the airport during the given peak hours.

The problem statement provides a range of probabilities for passenger turnout for each flight. Because the ranges are broad, we assume that:

- $85\%$ of people show up for flights with 85 or fewer seats;
- $80\%$ of people show up for flights with between 128 and 215 seats; and
- $75\%$ of people show up for flights with 350 seats.

The expected number of passengers who show up for a flight, $\mu_x$, is the number $n$ of passengers scheduled to be on the flight times the probability $p$ of showing up:

$$
\mu_x = np.
$$

An EDS can scan between 160 and 210 bags per hour; to account for the worst case, we assume 160 bags per hour. Let $B$ be the number of bags to be scanned and $x$ be the number of EDS scanners needed at the airport. Then

$$
x = \frac{B}{160t},
$$

where $t$ is the number of hours of operation of the scanners.

We assume that all passengers arrive and check their bags 2 hours before departure, so that $t = 2$.

Each EDS scanner costs \$1 million plus an installation cost $I$ (dependent on the airport), for a total cost of

$$
\text{Cost} = (1{,}000{,}000 + I)\,x.
$$

The number of bags to be scanned for a flight is the expected number of passengers times the average number of bags per person. According to the problem statement, the distribution is 0 bags: $20\%$; 1 bag: $20\%$; and 2 bags: $60\%$. The average is 1.4 bags per passenger, the bag rate. We have

$$
B = 1.4\,\mu_x.
$$

We assume that all bags are present at the beginning of the peak hour and that the scanners have the complete time to work on them at a constant rate, so that each scanner can process a total of 320 bags over the two-hour period.

Table 1.
Flights at airport A and their expected numbers of passengers. + +
| Type | Seats/flight | Flights | Occupancy | Expected passengers |
|---|---|---|---|---|
| 1 | 34 | 10 | 70-100%; use 85% | 289 |
| 2 | 46 | 4 | 70-100%; use 85% | 156 |
| 3 | 85 | 3 | 70-100%; use 85% | 217 |
| 4 | 128 | 3 | 60-100%; use 80% | 307 |
| 5 | 142 | 19 | 60-100%; use 80% | 2158 |
| 6 | 194 | 5 | 60-100%; use 80% | 776 |
| 7 | 215 | 1 | 60-100%; use 80% | 172 |
| 8 | 350 | 1 | 50-100%; use 75% | 263 |
| Totals | | 46 | | 4338 |
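The Hasty Model's chain of estimates (expected passengers, then bags, then scanners, then cost) can be sketched as follows. This is our illustration rather than the team's code; the flight counts come from Table 1 of the problem statement, the occupancy fractions are the assumed values above, and the installation costs are \$100,000 at A and \$80,000 at B:

```python
from math import ceil

SEATS     = [34, 46, 85, 128, 142, 194, 215, 350]
OCCUPANCY = [0.85, 0.85, 0.85, 0.80, 0.80, 0.80, 0.80, 0.75]
FLIGHTS_A = [10, 4, 3, 3, 19, 5, 1, 1]
FLIGHTS_B = [8, 6, 7, 5, 9, 10, 2, 1]

BAG_RATE = 1.4          # bags/passenger: 0.2*0 + 0.2*1 + 0.6*2
SCANNER_CAPACITY = 320  # bags per scanner: 160 bags/h over t = 2 h

def hasty(flights, install_cost):
    """Hasty Model: expected passengers -> bags -> scanners -> cost."""
    passengers = sum(n * f * p for n, f, p in zip(SEATS, flights, OCCUPANCY))
    bags = BAG_RATE * passengers
    scanners = ceil(bags / SCANNER_CAPACITY)
    cost = scanners * (1_000_000 + install_cost)
    return scanners, cost

print(hasty(FLIGHTS_A, 100_000))  # (19, 20900000): 19 scanners, $20.9M
print(hasty(FLIGHTS_B, 80_000))   # (21, 22680000): 21 scanners, $22.68M
```

The outputs match the machine counts and costs the paper reports for airports A and B below.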
# Airport A

Table 1 shows a breakdown of the flights at airport A and the expected number of passengers for each flight.

From the table, we can determine that in the peak hour, airport A will see about 4,338 people leave on 46 flights. We estimate the total number of bags to be $4{,}338 \times 1.4 \approx 6{,}072$; hence $\lceil 6072/320 \rceil = 19$ scanners are needed. For airport A, we have $I = \$100{,}000$; so the total cost of the scanners is \$20.9 million.

# Airport B

The calculations for airport B are similar. At the peak hour, 4,665 people leave on 48 flights with 6,531 bags, requiring 21 scanners. For airport B, we have $I = \$80{,}000$; the total cost of the scanners is \$22.68 million.

# The Extremes of Being Hasty

Our calculations are based on an average probability of passenger arrival. What about the extreme days of operation? Analysis of the highs and the lows of our model can yield both interesting and useful information as to the robustness and resilience of the model.

To estimate for low traffic, we arbitrarily reduce the probabilities of arriving to $70\%$, $60\%$, and $50\%$ for small, medium, and large planes, instead of $85\%$, $80\%$, and $75\%$. Our high extreme is, of course, $100\%$.

We find the numbers of machines corresponding to low, mean, and high traffic to be:

Airport A: 15, 19, 24;
Airport B: 16, 21, 26.

# Model 2: Multi-Channel Queuing Model

# Background Analysis

The arrival of airport passengers and baggage can be modeled by queuing theory. Because the EDS machines are not at the ticket counters, two queues form:

- People waiting to check in at the ticket counter. We assume that they arrive at a uniform rate according to a Poisson process.
- People waiting to have their bags checked. To determine how the bags arrive at each EDS machine, we analyze the layout of the airport and the logistics of placing the machines in the building.
Since each EDS machine is approximately 20 ft long and 4 ft wide, there will not be sufficient space to install the machines at the ticket counters [Domestic Flights Usage Guide 2003]. The most viable option is to install the machines in open lobby areas throughout the airport, evenly spacing them so passengers find close EDS machines regardless of where they enter the airport.

Airports have two options for dealing with baggage at the EDS machine:

- Require all passengers to remain with their baggage until it has passed through the EDS. This method would result in longer queues, as people would pile up in the queue along with baggage.
- Have the ticket agent stamp the luggage at check-in, allowing passengers simply to drop off baggage at the EDS machine. Passengers could then leave and allow the attendants to finish processing the bags. The baggage would then form a queue of its own as bags piled up waiting to be put through the machine.

We use the second option.

The baggage queue follows the same Poisson process as the queue for the counter: As people leave the counter queue, they arrive in the baggage queue. Baggage dropped off becomes the calling unit waiting in the queue and is serviced according to how fast the EDS machines can handle baggage [Render 1997, 662]. The input process for the baggage queue is first-come-first-served.

# Logistics of the Queue

To perform our queuing analysis, we first define parameters:

- $\lambda =$ average arrival rate (bags/h),
- $\mu =$ average service rate at each channel (bags/h), and
- $M =$ number of channels open (EDS machines).

# Average Arrival Rate

For this model, we examine each flight type separately. For each flight type, we use Mathematica to generate a random number in the given range of percentages of people who show up. We multiply this value by the total number of seats in that flight type.
We then determine the total number of passengers and the corresponding number of bags. From this we deduce the average arrival rate $\lambda$ of bags per hour. + +# Mean Service Rate + +The average service rate $\mu$ at each channel depends on: + +- the number of people staffing the machine and their experience with it, +- the protocol for dealing with flagged baggage (which slows down the processing), +- locked bags (they will have to be cut open and searched), +- machine reliability (a breakdown will temporarily stop the queue and create a backlog; according to the problem statement, each machine is operational $92\%$ of the time). + +We assume an average of 185 bags/h for an operating machine; taking into account that a machine is operational $92\%$ of the time, the mean service rate is $185 \times .92 = 170.2$ bags/h. + +# Number of Channels Open + +We want to determine the number $M$ of open channels that optimize the system and allow all of the baggage to be checked in time to prevent any delays in flight departures. + +# Advantages of the Queuing Model + +A queuing model allows us to determine the average number of units in the system at any given time and the average time that a unit spends in the waiting line or being serviced. Perhaps the most important advantage is the fact that we can also determine a utilization rate for the servers [Ecker 1988, 379]. From this information, we can aim to increase utilization in order to decrease costs and optimize our solution. + +# Airport A + +For each day of simulation, we determine the total number of bags and run them through our queue simulation in Mathematica. We also estimate the number of servers needed to process all of the bags within a 2-hour period. + +After a few guess-and-check trials, we determined that $M = 19$ servers will be adequate. 
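A standard M/M/c (Erlang C) computation reproduces the utilization the team reports. The sketch below is our reconstruction, not the team's Mathematica simulation, using their parameters for airport A: arrival rate λ = 2,999 bags/h, service rate μ = 170.2 bags/h per machine, and M = 19 machines:

```python
from math import factorial

def mmc_stats(lam: float, mu: float, c: int):
    """Utilization and mean queue length Lq for an M/M/c queue."""
    a = lam / mu   # offered load, in units of servers
    rho = a / c    # per-server utilization; the queue is stable only if rho < 1
    # Erlang C: probability that an arriving bag must wait for a scanner
    tail = a ** c / (factorial(c) * (1 - rho))
    p_wait = tail / (sum(a ** k / factorial(k) for k in range(c)) + tail)
    lq = p_wait * rho / (1 - rho)  # mean number of bags waiting in queue
    return rho, lq

rho, lq = mmc_stats(lam=2999, mu=170.2, c=19)
print(round(rho, 2), round(lq, 1))  # utilization ~0.93; Lq a handful of bags
```

The utilization of about 93% agrees with the paper's figure, and the resulting mean queue length is consistent with the roughly 9 waiting bags the team's simulation produced.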
Bags arrive at approximately $\lambda = 2{,}999$ bags/h, each bag spends about 0.52 min in the scanner, and with 19 EDS scanners an average of 9 bags are waiting in the queue at one time. On average, a bag waits 0.17 min in the queue, and the total time to get all of the bags scanned is 1.85 h—well within the limit of 2 h. The utilization rate is $93\%$, so each EDS machine is being used almost the entire time.

# Airport B

One run produces 6,633 bags, for which 20 machines will do. Approximately $\lambda = 3{,}317$ bags arrive per hour, and the average time spent scanning each bag is 0.95 min. The average number of bags waiting to be scanned is 33, while the average waiting time is 0.59 min. The entire queue takes approximately 1.95 h to run, with a utilization factor of $97\%$.

# Comparison

The results from this model (19 machines at A, 20 at B, at costs of \$20.9 million and \$21.6 million) agree closely with those of the Hasty Model (19 and 21 machines, at \$20.9 million and \$22.7 million).

# Model 3: Influx Simulation Model

In our previous models, we assumed a constant flow of arrivals. Realistically, different numbers of people arrive at the airport every minute, either dashing to the counter (if they are late) or walking patiently towards the EDS machine (if they are on time). The main drawback of our queuing model is that it handles arrivals as a whole and does not separate them into distinct flights and departure times. Our Influx Simulation Model accounts for this by using a separate Poisson process, to simulate people arriving, for each flight. The flights are analyzed individually, resulting in a minute-by-minute distribution.

# Arrival Rate

To account for peak traffic, we assume that $100\%$ of passengers show up for their flights, over the 1.25-h period between 120 min and 45 min before departure.
Therefore, we estimate that a flight will have an arrival rate equal to the number of passengers divided by the time interval in which those passengers arrive. For example, a flight with 128 passengers will have an arrival rate of $128/1.25 = 102.4$ passengers/h.

# Scanning Rate, Bag Rate

The scanning rate is 185 bags/h, and passengers average 1.4 bags/person.

# The Influx Simulation Model

We split the peak hour into 10 six-minute intervals, to provide a decent buffer between flights and give people an opportunity to have a couple of minutes leeway in case a flight is slightly delayed. We also chose this size interval to provide a small number of flights departing in an interval, which helps reduce possible waiting-line congestion. Our model can deal with multiple planes in large airports; however, smaller airports would have to choose a different process for scheduling, because they might not have the runway capacity to support multiple flights.

At airport B, with $100\%$ of passengers showing up for full flights, a total of 5,781 passengers arrive. We divide them into 10 "platoons" of 578 each according to the six-minute interval in which their plane departs. With approximately the same number of people departing in each time interval, we even out the congestion.

For a Poisson process, the following properties must hold [Lapin 1997, 229]:

- The number of events in one interval is independent of the number in any other interval.
- The mean process rate $\lambda$ must remain constant at all times.
- The number of events in any interval of length $t$ is Poisson distributed with mean $\lambda t$.
- As the interval size goes to zero, the probability of 2 or more occurrences in an interval approaches 0.

Under these conditions, the probability of $x$ arrivals in a single interval is

$$
P(x) = \frac{e^{-\lambda t}(\lambda t)^x}{x!}, \qquad x = 0, 1, 2, \ldots.
$$

The 578 people in a platoon arrive over a 1.25-h period. Figure 1 displays the graph of a continuous approximation to the discrete probability mass function of a Poisson process with arrival rate $\lambda = 578/(1.25 \times 60) = 7.7$ people/min.

![](images/a0e7717d92a0c455eaf26c9248cc3aaabc3f989fb873998a4c3d97229a080160.jpg)
Figure 1. Graph of continuous approximation to a Poisson distribution with arrival rate $\lambda = 7.7$ people/min. The peak of the curve is at approximately (7.5, 0.15).

We use the graph in Figure 1 to simulate the arrival of passengers. We start by generating random ordered pairs. The first coordinate is a random integer between 0 and 15, representing the number of passengers that arrive in one minute, and the second coordinate is a random number between 0 and 0.2 (above the peak of the curve in the figure). We check each ordered pair to determine whether or not it falls under the curve of the graph. If so, we consider the pair to represent passengers arriving into the queue. We repeat this process until we generate 75 points that fit under the curve. These 75 points represent how passengers for departures in this six-minute interval arrived at the airport in each minute of the 75 min in the 1.25-h arrival period. We also constrain the program to hit the target number of arrivals, i.e., 578 for airport B. For each airport, we generated a list of Poisson values for each of the 10 different time intervals of departure times. We organize the Poisson sequences in Figure 2.

![](images/5e0eaefcbafb77529af0da8e9829c9629b6f560bb0b7294290c621db040065ff.jpg)
Figure 2. Results of 10 simulations.

The dark bars represent intervals of 75 min; each corresponds to arrivals for a six-minute time interval of flight departures. The gray bars correspond to the remaining 45 min when the plane is loading luggage and passengers.
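The rejection-sampling procedure just described can be sketched in Python. This is a sketch under the paper's parameters (rate 7.7/min, bound 0.2, 75 accepted points), not the authors' actual program; `sample_minute_arrivals` is a hypothetical name:

```python
import math
import random

def poisson_pmf(k, lam):
    """P(k) = e^(-lam) * lam^k / k!, the mass function graphed in Figure 1."""
    return math.exp(-lam) * lam ** k / math.factorial(k)

def sample_minute_arrivals(lam=7.7, k_max=15, p_max=0.2, n_minutes=75):
    """Accept ordered pairs (k, u) whose height u falls under the pmf curve,
    until one arrival count has been accepted for each of the 75 minutes."""
    accepted = []
    while len(accepted) < n_minutes:
        k = random.randint(0, k_max)      # candidate arrivals in one minute
        u = random.uniform(0.0, p_max)    # random height below the 0.2 bound
        if u <= poisson_pmf(k, lam):      # point lies under the curve: keep it
            accepted.append(k)
    return accepted

arrivals = sample_minute_arrivals()
print(len(arrivals), sum(arrivals))  # 75 minutes; total near the 578 target
```

The paper additionally forces the total to hit the platoon size exactly (578 for airport B); one simple way is to resample until `sum(arrivals)` equals the target.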
We analyzed the minute-to-minute data in a spreadsheet, with a column for each six-minute departure interval and a row for each minute from 0 to 180. Processing the rows from top to bottom, the spreadsheet

- sums a row to yield the number of passengers arriving in a particular minute,
- multiplies that total by the baggage rate (1.4 bags/passenger) to get the number of arriving bags,
- adds those bags to any leftover from the previous minute to get Bag Total, and
- subtracts Bag Total from the number of scanners times 3.083 (the scanner rate in bags per minute).

If the difference is positive, the number of scanners was sufficient for that minute. If the difference is negative, not all bags in the queue could be scanned; these bags are carried over to the next minute, and the system begins to fall behind. As long as the machines can stay close to keeping up, flights will not be delayed.

Passengers cannot arrive for their flights less than 45 min before departure. However, baggage dropped off at the EDS can be processed and loaded on the plane up to 15 min after this cutoff, since planes start loading passengers approximately 30 min before departure. The extra 15-min leeway allows time for the EDS machines to catch up and for baggage to get loaded.

From the column for the number of bags in the queue, we can determine whether or not the machines keep up. If, 15 min after passengers are no longer allowed to board, the number of bags in the queue equals the total of the bags arriving for flights departing after the current flights, then all of the bags for the current flights have already been scanned. Therefore, when the flight leaves, the scanner may still be behind, but any backed-up bags are from flights not yet set to depart.

Figure 3 is a plot of every minute of the peak hours of airport A. The graph highlights the maximum queue population in the interval 54-75 min.
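The spreadsheet recurrence above can be expressed directly in code. The rates (1.4 bags/passenger, 3.083 bags/min per scanner) are those given in the text, while the arrival profile below is an invented illustration, not data from the paper:

```python
BAG_RATE = 1.4      # checked bags per passenger
SCAN_RATE = 3.083   # bags per minute per EDS machine (185 bags/h)

def backlog_by_minute(passenger_arrivals, n_scanners):
    """Return the leftover (unscanned) bags at the end of each minute,
    carrying any shortfall over to the next minute as in the spreadsheet."""
    leftover = 0.0
    backlog = []
    capacity = n_scanners * SCAN_RATE
    for passengers in passenger_arrivals:
        bag_total = passengers * BAG_RATE + leftover
        # A negative difference (capacity - bag_total) means the scanners
        # fell behind this minute; the excess bags carry over.
        leftover = max(0.0, bag_total - capacity)
        backlog.append(leftover)
    return backlog

# Invented load: 60 passengers/min for 30 min, then none for 30 min.
trace = backlog_by_minute([60] * 30 + [0] * 30, n_scanners=23)
print(max(trace), trace[-1])  # backlog builds up, then drains back to zero
```

With 23 scanners the capacity is about 70.9 bags/min, so this invented 84 bags/min load falls behind while it lasts and then clears within a few minutes, which is the "stay close to keeping up" behavior described above.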
The optimal number of scanners to use at airport A is 23, for a total cost of \$25.3 million. Airport B displays similar results, yielding 24 scanners at a cost of \$25.9 million.

# Developing a Flight Schedule

During the peak hour, 46 flights depart from airport A and 48 from airport B. We need optimal schedules for all passengers to have their baggage scanned in time for their departures.

Scheduling too many flights to depart around the same time leads to congestion in the EDS queue; additional machines would be needed to handle these extreme times but would be underutilized the rest of the day.

A hasty approach might be to schedule approximately the same number of flights to leave at the same time. However, because the flights have different numbers of passengers, there could still be massive congestion.

![](images/ba6a3497dd63e09fc589b7e463cfe100410831262f8ef6c3fe08678d23b9bc5c.jpg)
Figure 3. Minute-by-minute passenger report at airport A.

# Assumptions

- All passengers arrive in a Poisson process no more than 2 hours before, and no less than 45 min before, their departure.
- Passengers arriving later than 45 min before the flight cannot board.
- All baggage must be checked before passengers are allowed to board the plane.
- Passengers start boarding 30 min before gate departure.
- By the previous two assumptions, all baggage for any given flight must be scanned 30 min prior to departure.
- Checked baggage is scanned at a uniform rate.
- Carry-ons are not scanned by EDS.

# Equally Distributed Passengers

One way to avoid congestion is to ensure that large numbers of people are not required to arrive at the airport during the same time period. We accomplish this in the model by splitting the peak hour into 10 six-minute intervals, with the goal of spacing out the passengers equally among these 10 intervals.
We use the range of passengers per flight (given in the problem statement) to calculate the number of passengers departing during the hour. Assuming all flights are full and all passengers arrive for their flights, 5,396 passengers arrive for airport A (540 per interval) and 5,781 for airport B (578 per interval).

We distribute the flights into the 10 intervals so that approximately the desired number of passengers depart in each interval. Our algorithm (as implemented in a Mathematica program) works for any desired interval size and lists which flights should be scheduled to depart in the same time interval. After arranging the flights into intervals, scheduling becomes a matter of determining the order of departures of the small number of flights in an interval. Table 2 shows the schedule for airport A.

Table 2. Flight schedule for airport A.
| Flight interval | :00 | :06 | :12 | :18 | :24 | :30 | :36 | :42 | :48 | :54 |
|---|---|---|---|---|---|---|---|---|---|---|
| Specific flight capacity | 142 | 142 | 142 | 142 | 142 | 142 | 194 | 194 | 215 | 350 |
| | 142 | 142 | 142 | 142 | 142 | 142 | 142 | 194 | 194 | 194 |
| | 142 | 142 | 142 | 142 | 85 | 128 | 142 | 142 | 128 | |
| | 46 | 46 | 46 | 46 | 85 | 128 | 34 | | | |
| | 34 | 34 | 34 | 34 | 85 | | 34 | | | |
| | 34 | 34 | 34 | 34 | | | | | | |
| Totals | 540 | 540 | 540 | 540 | 539 | 540 | 546 | 530 | 537 | 544 |
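The paper's interval-assignment algorithm was written in Mathematica and is not reproduced here. A least-loaded greedy heuristic, sketched below in Python as an assumed reconstruction, yields interval totals very close to those in Table 2 (the capacity list is read off the table; 46 flights, 5,396 seats):

```python
import heapq

def assign_flights(capacities, n_intervals=10):
    """Place each flight, largest first, into the interval that currently
    has the fewest scheduled passengers (a greedy balancing heuristic)."""
    heap = [(0, i, []) for i in range(n_intervals)]  # (load, interval, flights)
    heapq.heapify(heap)
    for cap in sorted(capacities, reverse=True):
        load, i, flights = heapq.heappop(heap)       # least-loaded interval
        flights.append(cap)
        heapq.heappush(heap, (load + cap, i, flights))
    return sorted(heap, key=lambda entry: entry[1])  # back into time order

# Airport A: 46 full flights, capacities as in Table 2 (total 5,396 seats).
airport_a = ([350, 215] + [194] * 5 + [142] * 19 + [128] * 3
             + [85] * 3 + [46] * 4 + [34] * 10)
for load, interval, flights in assign_flights(airport_a):
    print(f":{interval * 6:02d} -> {load} passengers, flights {flights}")
```

The heuristic balances the interval loads to within a few dozen passengers of the 539.6 average, matching the spirit of the totals row in Table 2.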
During peak hours, the rate of passengers coming in continues to grow until the middle of the peak period. If delays were to occur during this time, large flights might be delayed, which could eventually also delay smaller flights because of runway congestion. To avoid this problem, we place the time periods that contain the larger flights near the end of the peak hour. This allows the passengers for the smaller planes to board and depart on time. If there is a delay or unexpected congestion towards the end of the peak hour, it mainly affects just the two larger flights.

# Recommendations

Install 23 EDS machines in airport A and 24 machines in airport B. With these numbers, $100\%$ baggage screening can be accomplished during the peak hours without delaying any departures, while also maintaining high utilization rates.

Implement optimal flight scheduling by distributing passengers evenly among a set number of time intervals. Such a schedule will reduce passenger congestion, help prevent takeoff delays, and reduce the additional congestion if a plane gets delayed.

# Device Technology

New technologies are promising enough to warrant research to perfect them. X-ray diffraction could equal the accuracy of EDS, and quadrupole resonance specializes in detecting potentially explosive materials such as phosphorus.

# Cost

Currently, the EDS can scan 3.1 bags per minute. If the rate could be raised to 4 bags per minute, the number of required scanners would decrease by at least one. The new technologies would be expensive, but eliminating even one scanner could decrease the total cost by over \$1 million.

# References

Domestic Flights Usage Guide. 2003. http://svc.ana.co.jp/eng/dms/others/information/main.html#D . Accessed 9 February 2003.
Ecker, Joseph. 1988. Introduction to Operations Research. Canada: John Wiley & Sons.
Heiney, Paul. 1996. What is x-ray diffraction? http://dept.physics.upenn.edu/~heiney/talks/hires/whatis.html#SECTION000100000000000000 . Accessed 10 February 2003.
Kranjc, Asja. 2000. Nuclear Quadrupole Resonance. http://kgb.ijs.si/~kzagar/fi96/Seminarji99/seminarska.doc . Accessed 10 February 2003.
Lapin, Lawrence L. 1997. Modern Engineering Statistics. Belmont, CA: Wadsworth.
Render, Barry. 1997. Quantitative Analysis for Management. Englewood Cliffs, NJ: Prentice-Hall.
Ross, Sheldon. 1983. Stochastic Processes. Canada: John Wiley & Sons.
X-ray diffraction. 2000. http://www.matter.org.uk/diffraction/x-ray/x_ray_diffraction.htm . Accessed 10 February 2003.

# How I Learned to Stop Worrying and Find the Bomb

Tara Martin

Gautam Thatte

Michael Vrable

Harvey Mudd College

Claremont, CA

Advisor: Hank Krieger

# Summary

We develop a queueing system model to determine the optimal number of Explosive Detection System (EDS) and Explosive Trace Detection (ETD) machines needed to implement $100\%$ baggage screening for airports A and B. We test the model with data from United Airlines at Denver International Airport.

The particular queue system implementation does not affect queue length but can affect the number of late bags and the length of delay. Our two-queue system model is $92\%$ as efficient as an optimal priority queue, so a complex queueing system is not required. If the system can handle peak-hour volumes, there will be no delays during the rest of the day.

We also compare three flight-scheduling algorithms for peak-hour flight departures and create flight schedules for airports A and B. Optimal scheduling of peak-hour flights does not significantly change the number of machines needed, although use of a greedy algorithm reduces late bags.

To meet the $100\%$ baggage screening requirement using EDSs, we recommend 10 for airport A, 11 for B, and 48 for United Airlines at Denver.
These conservative estimates account for breakdowns and a safety margin. To replace EDSs entirely, four times as many ETDs are needed.

Initial cost of implementation at airports A and B is \$22.9 million. This cost could be lowered by speeding the approval of cheaper and faster technologies such as dual-energy X-ray, multiview tomography, and quadrupole resonance.

![](images/f83ccd08fcfc880ca7f9f0816a254fb74b1d6bad4f296c6f23fc9caa76c69633.jpg)
Figure 1. Flight departures from Denver by United Airlines during a single day.

# Baggage Screening Queueing Models

We construct a queueing model of screening baggage for explosives and test it with many more bags than it was designed to handle. Sample loads include peak-hour traffic at airport A and at airport B and a flight schedule modeled after traffic patterns at Denver International Airport. The raw data for the Denver simulation, summarized in Figure 1, consist of 991 nonstop flights on a typical Monday, as taken from a United Airlines timetable [United Airlines 2003].

# Terminology

Queueing System. A system for storing bags that arrive before a screening machine can take them. The order in which the bags are removed depends on the type of queueing system. Queueing systems are described by their input, queue discipline, and service mechanism.

Queue. A system for storing bags that is first-in, first-out: bags that arrive first are the first to be screened. A single queueing system might be composed of multiple queues.

Input. The input describes how the bags enter the system. In our model, the rate at which bags arrive varies during the day.

Queue discipline. The queue discipline describes how arriving bags are served, such as first-in, first-out.

Service mechanism. The service mechanism tells how the bags are assigned to servers (screening machines) as they leave the queue.
Our model allows for many servers; in the case of multiple queues, the service mechanism specifies how machines are matched up with queues to process bags. + +# Formulation of the Model + +We compute a schedule for the arrival of passengers and baggage. This baggage arrival schedule is left fixed irrespective of changes we make to the baggage queueing system to determine whether bags make it to flights on time. Our goal is a model that determines how long each bag is delayed and hence suggests an appropriate number of machines required for a specified load. + +We make a number of simplifying assumptions: + +- The time required to screen a bag is short. Any delay in delivery of the bag is due entirely to waiting for screening, not to the screening itself. This assumption allows us to disregard many distinctions among different screening machines; only the rate of screening is important. +- Discretizing time does not introduce a large error. Our simulation proceeds in small discrete time steps. This time step, denoted $T$ (usually 2 min), is small in comparison to the time available for screening a bag, so rounding times to the nearest multiple of the time-step does not cause a large error. +- Screening of a bag must be completed by some fixed time before its flight departs; we use $10\mathrm{min}$ . A bag that does not meet this deadline is late. +- Baggage screening, not check-in or other processes, is the only bottleneck. Passengers do not encounter another bottleneck before baggage screening, such as a long line to check in, that affects the flow of bags into the screening system. This assumption allows us to consider the worst-case scenario of unlimited baggage inflow and to isolate the effects of the screening system from other airport influences. +- It is not necessary to consider multiple separate screening systems at an airport; if all are independent and approximately equally loaded, then the system behaves as a single system. 
- Baggage is processed at a constant rate. We do not allow for oversized baggage or other variations that affect processing time of bags but assume these are included in the averages.

Our model is a queueing system [Prabhu 1997]. The input is a list of bags that arrive at each time step; the bags are grouped according to how much time they are allowed before they must be finished with the screening process. A fixed number of servers each can process a fixed number of bags in any time step.

# General Analysis

Our queueing model can be described by several parameters:

- The service rate $S$ (bags/time step) is the rate at which machines can process bags at full efficiency.
- The input rate $\lambda(t)$ (bags/time step) is the number of bags added to the queueing system at time $t$.

Regardless of implementation, the number of bags in the queueing system at any time is determined only by $S$ and $\lambda(t)$. The implementation of the queueing system can affect the order in which bags are removed from the queueing system, not the number in it.

The total number of bags in the queueing system at time $t$, denoted $Q(t)$, is determined by

$$
Q(t + T) = \max\{0, Q(t) + \lambda(t) - S\}.
$$

If $\lambda(t) > S$, the number of bags in the queueing system increases; if $\lambda(t) < S$, the number of bags shrinks. Figure 2 shows the bag input rate $\lambda(t)$ at the Denver airport in our model. The dashed horizontal line shows the service rate $S$ for 36 EDS machines operating at 180 bags/h. The solid line shows the number of bags in the queueing system $Q(t)$, which increases when $\lambda(t) > S$. Approximately 52 EDS machines would be required to prevent a backlog of bags from ever building up.

# Queue Disciplines and Service Mechanisms

We analyze several mechanisms for controlling how bags are stored in the queueing system before screening and later removed from it.
These mechanisms have a large impact on the timely screening of bags, so choosing an appropriate mechanism is important.

# Naive Model

We first develop a simple model to give an upper-bound estimate on the number of EDS machines we need, using the assumptions:

- The hour before the peak hour has significant traffic.
- The minimum number of machines is the number needed to ensure that no flight is delayed.
- All bags arriving for a peak-hour flight are processed in one hour.
- Bags for a flight are computed using parameters in the problem statement.

![](images/f0a2de9f7590ceb3dafae2e642fa4f0a3d916c8b96f5e61ef96a8f1839bba416.jpg)
Figure 2. Bag arrival rate and number of bags in queueing system at Denver airport, assuming 36 EDS machines processing 180 bags/h.

Bags arriving for peak-hour flights must be processed within a 1-h time period. Our model suggests that 34 and 37 EDS machines are required for airports A and B, respectively, and 55 for Denver International Airport. We believe that these are upper bounds. Any optimization in the passenger arrival model or the organization of people at the airport would probably achieve the same $100\%$ success rate but with fewer machines.

# Optimal Queueing

We develop an optimal queueing system that bounds the performance of any queueing system and compare various queueing models to this optimum. We minimize the total amount of time by which bags are late.

Our optimal queueing system uses a priority queue: As bags arrive, they are added to a pile. When a bag is to be processed, we pick the bag that needs to be finished soonest.

In the Denver simulation with an optimal queue, 35 EDSs operating at 180 bags/h each are sufficient to process all bags before their deadlines. The queue fills with up to nearly 5,500 bags at one point (26 min of uninterrupted processing is required to screen all of these).
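This earliest-deadline-first discipline can be sketched with a heap. The sketch below is illustrative only: `simulate_edf` is a hypothetical name, and the machine count, processing rate, and arrival stream in the example are invented rather than taken from the Denver data:

```python
import heapq

def simulate_edf(arrivals, machines, bags_per_step):
    """Serve bags earliest-deadline-first; return total steps of lateness.

    arrivals maps a time step to the list of deadlines (in steps) of the
    bags arriving then. Each step, the machines process up to
    machines * bags_per_step bags from the top of the deadline heap."""
    heap, lateness = [], 0
    for t in range(max(arrivals) + 200):        # run well past the last arrival
        for deadline in arrivals.get(t, []):
            heapq.heappush(heap, deadline)
        for _ in range(machines * bags_per_step):
            if not heap:
                break
            deadline = heapq.heappop(heap)      # the bag needed soonest
            if t > deadline:
                lateness += t - deadline        # the quantity we minimize
    return lateness

# Invented load: 100 bags at step 0, all due by step 10; 2 machines, 6 bags/step.
print(simulate_edf({0: [10] * 100}, machines=2, bags_per_step=6))  # → 0
```

With 12 bags cleared per step, all 100 bags finish by step 8, before the step-10 deadline, so no lateness accrues.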
Practical implementation of such an optimal priority queue at an airport would be too complicated. Thus, we look at other, less complex queueing systems.

# Single Queue

In a single first-in, first-out queue, bags that arrive earlier are screened earlier. This scheme could be implemented with a single conveyor belt carrying bags from check-in to machines.

As long as bags can be screened quickly enough that a significant line never develops, this scheme works well. We find that 47 EDS machines at Denver suffice to deliver all bags on time; this is $34\%$ more than required by the optimal solution.

If bags must be finished screening at least 10 min before departure, then to guarantee that all bags arriving at least 30 min before the flight are processed in time, the wait must never grow to more than 20 min. In the Denver simulation, this can be done with 38 EDS machines; approximately $0.75\%$ of all bags arrive within 30 min of departure and are delivered late.

This single-queue system does not perform very well under load. As the queue increases in length, the chance of processing a bag late rises quickly. Although most bags arrive with more than an hour of slack, the few bags with less time available force the queue length to be kept small at all times. Many bags are processed much more quickly than necessary so that the few bags that need rapid processing are not late. This situation is not optimal, and it is improved by our next queueing model.

# Double Queue Model

Giving preferential treatment to some bags can produce a better queueing system. In particular, bags that arrive late should be processed more quickly. We propose a two-queue system consisting of two first-in, first-out queues for bags of different priority: A normal queue is used for bags that arrive sufficiently early and a rush queue for bags that do not arrive as early.
The total throughput of the system is not increased, but bags are much more likely to be processed before their deadline. In effect, time is borrowed from bags that have it (by placing them in a slower queue) and given to those that need it (by allowing them to jump ahead of bags in the normal queue), approximating the optimal queue discipline. The number of machines can be decreased, resulting in longer lines but without causing bags to be processed late, and also in significant cost savings.

The double queue model requires several implementation decisions:

The method for sorting bags into the two queues (the queue discipline). The cutoff may be fixed (e.g., all bags with less than 40 min to departure go into the rush queue) or vary with the lengths of the queues.

The service mechanism. At each time step, the number of bags to remove from each queue must be determined. A fixed number of machines can be assigned to each queue; but if one queue empties, this leaves machines idle. It is better to adjust dynamically the number of machines processing bags from each queue. We suggest increasing the number of machines processing the rush queue as the rush queue increases in size.

In the Denver simulation, 42 EDS machines are sufficient to get all bags delivered on time. With 38 EDS machines, the only late bags are those that arrive late (only $0.05\%$ of bags). This system requires $9\%$ more machines than the optimal solution.

# Evaluation

Adding more queues allows for more flexible scheduling of bag processing, which may help keep more bags from being late. However, more queues mean more parameters in the queue discipline and service mechanism, and a poor choice may harm performance. Additionally, adding queues adds complexity, with more potential for failures and higher labor cost. We believe that the benefits of a many-queue system are not worth the complexity incurred.
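One way to realize the two-queue service mechanism is sketched below in Python. The rule for shifting capacity toward the rush queue (a share proportional to its fraction of waiting bags, with any unused capacity falling through to the normal queue) is our own assumption for illustration; the paper does not fix a specific rule:

```python
from collections import deque

def serve_one_step(normal, rush, machines, bags_per_step):
    """Remove up to machines * bags_per_step bags for one time step.

    The rush queue's share of capacity grows with its share of the waiting
    bags (an assumed dynamic rule); capacity it does not use falls through
    to the normal queue, so no machine sits idle needlessly."""
    capacity = machines * bags_per_step
    waiting = len(normal) + len(rush)
    rush_share = 0 if waiting == 0 else capacity * len(rush) // waiting
    served = []
    for _ in range(min(max(rush_share, 1) if rush else 0, len(rush))):
        served.append(rush.popleft())          # rush bags jump ahead
    while len(served) < capacity and normal:
        served.append(normal.popleft())        # leftover capacity: normal bags
    return served

normal = deque(f"n{i}" for i in range(20))
rush = deque(["r0", "r1"])
print(serve_one_step(normal, rush, machines=2, bags_per_step=3))
```

With 2 rush bags among 22 waiting, the proportional rule grants the rush queue a single slot this step; as the rush queue grows relative to the normal queue, its share of the capacity grows with it, which is the dynamic adjustment suggested above.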
A double-queue system provides performance competitive with the optimal system; we recommend its use. With only a few more machines than the 35 of the optimal queue, only a few bags are delivered late; and with only $20\%$ more machines, no bags are late.

# Validation of the Model

We account for

- unfilled seats (ranging from $0\%$ to $50\%$, depending partially on the size of the flight),
- passengers ($35\%$) who transfer from another flight and do not have bags rescreened, and
- a distribution of checked bags from 0 to 2 per passenger.

In the Denver simulation, a total of 82,500 bags are screened in a day.

We validate our model by comparing its predictions with the numbers of EDS machines at actual airports. There are no statistics for the number of machines at Denver, but Dallas/Fort Worth processes 55,000 bags/day with 60 EDS machines [Douglas 2002]. Scaled to the number of bags processed by Denver in our model, Dallas/Fort Worth would use 90 EDS machines. This is larger than the number we predict is necessary. However, on initial testing, EDS machines were less than half as fast as predicted (72 bags/h vs. 180 bags/h) [Clark County Department of Aviation 2002]; combined with a safety margin, our results are in agreement with the Dallas/Fort Worth figure.

# Extensions to the Model

We present extensions to account for various modifications of the problem, with each change considered in isolation, not in combination with other extensions.

# Accounting for Error Rates

EDS machines have a false-positive rate of $30\%$ [Butler and Poole 2002]. The result of a false positive is that the bag must be examined more closely, causing delay for that bag and making some bags late that otherwise would not be, so more machines may be needed. We incorporate this false-positive rate into our model by randomly adding a fixed time (6 min) to $30\%$ of the bags.
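In simulation terms, this adjustment inflates 30% of the per-bag service times by the fixed 6-min penalty. A sketch, using the paper's 180 bags/h EDS rate as the base service time:

```python
import random

FALSE_POSITIVE_RATE = 0.30   # fraction of bags triggering a false alarm
EXTRA_MINUTES = 6.0          # fixed hand-inspection delay per false alarm
BASE_MINUTES = 60.0 / 180.0  # nominal EDS time per bag at 180 bags/h

def service_time():
    """Per-bag screening time, with a 30% chance of the 6-min penalty."""
    penalty = EXTRA_MINUTES if random.random() < FALSE_POSITIVE_RATE else 0.0
    return BASE_MINUTES + penalty

mean = sum(service_time() for _ in range(100_000)) / 100_000
print(round(mean, 2))  # near 1/3 + 0.30 * 6 = 2.13 min per bag on average
```

The expected per-bag time rises from 0.33 min to about 2.13 min, which is why a handful of additional machines is needed once false positives are modeled.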
At the Denver airport, the effect is to slow down screening enough that 40 EDS machines (instead of 38) are required to process all but late-arriving bags in a timely fashion. Doing so for all bags becomes nearly impossible, since some bags arrive with less than 16 min to departure.

# Incorporating ETD Machines

Although we developed our model for EDS machines, it is generic enough to study other devices. We identify two ways to incorporate ETD machines:

- ETD machines in series with EDS machines. The problem statement relates that up to $20\%$ of passengers may need to have bags screened through both an EDS and an ETD machine. We can account for this by giving $20\%$ of bags an extra delay of 4 min.

We assume that there is no queue between the EDS machines and the ETD machines following them; appropriately many ETD machines are purchased to match the processing speed of the EDS machines. In the Denver simulation, an increase to 39 EDS machines, instead of 38, allows all but late-arriving bags to be processed on time.

- ETD machines replace EDS machines. We can calculate the number of ETD machines necessary to obtain the same service rate as for EDS machines and compare the costs. Any mixture of the two machine types with the same overall service rate behaves the same in our model; but since the cost varies linearly as machines of one type are replaced with the other, the most cost-effective operation occurs at one of the extremes, either all EDS or all ETD machines.

Assuming a rate of 45 bags/h for an ETD machine (one-fourth the rate of an EDS machine), four times as many ETDs as EDSs are needed. According to Butler and Poole [2002], ETD machines cost less than one-fifth as much as EDS machines to operate.

# Strengths and Weaknesses of the Model

Our model succeeds in capturing the essence of the problem and allows for good predictions, as a result of its many strengths:

- Our model is based on real-world data.
Use of data from Denver makes it much more likely that the results from our model are realistic and not artifacts of an artificial flight distribution, such as the isolated peak hour of flights at airports A and B. Additionally, agreement with figures for EDS and ETD machines currently installed at airports gives us confidence that our model is accurate.
- Our model is flexible enough to handle other types of screening machines, passenger arrival schedules, etc. Our model's parameters can be varied to account for changes in screening machinery, training of baggage-screening personnel, and so on. Since our queueing simulation takes as input merely a list of arrival times for bags, it is also very easy to study airline flight schedules at any other location, or to modify the arrival behavior of passengers.
- Our model can predict the screening capacity needed as well as predict how the system will fail. Our model goes beyond merely predicting the number of baggage-screening machines needed to process all bags on time to give a complete model for the flow of bags through the system. The model can thus be used to see exactly how the baggage-screening system will begin to break down as it is pushed past its limits. This information will help airports evaluate what margin of safety they require.

At the same time, there are aspects of our model that could be improved:

- More detailed data for machine operation could be incorporated. Our model is rather simplistic in that all behavior is based only on the waiting time to process bags. Including the actual time to scan a bag (not just the queue wait time) may be better, especially for systems that are slower to screen bags.
- Queue scheduling could be optimized further. Our proposed two-queue system generally performs well, but we have not completed a detailed analysis of it nor systematically determined optimal values for its parameters.
# Recommendations

Based on simulations and an analysis of our model, we make a number of recommendations:

- A safety margin can make a significant difference.

The loss of just a small percentage of the capacity of the system can make the difference between no late bags and a significant fraction of late bags. This is shown in Figures 3a and 3b. After the number of machines in use drops by about $10\%$, the number of late bags rises dramatically, regardless of queueing algorithm.

Seemingly paradoxically, the optimal queueing algorithm has the highest fraction of late bags for some values; this is because it sacrifices the percentage of bags on time to decrease the average amount by which bags are late.

Since unpredictable slowdowns or large arrivals should be anticipated, planning to handle a larger than expected number of bags is necessary to avoid breakdown of the system. With EDS machines operational $92\%$ of the time, at least $8\%$ more EDS machines should be installed than our model predicts as necessary. We recommend a further margin of safety, perhaps $10\%$, to account for any other unexpected circumstance, such as unusually high traffic.

Based on these considerations, we recommend 48 EDS machines for Denver, 10 for airport A, and 11 for airport B.

- Backlogs should be avoided except at peak times. The processing capacity (bags/min) should be set higher than the arrival rate of bags at all but peak times. Further, peak times must be fairly well isolated (to an hour or so), or queue lengths will grow quickly to unmanageable levels. When a line develops for scanning, it can take a good deal of time to get back to a no-wait situation. While our model shows that a persistently long queue can sometimes be handled as long as it does not continue to grow, a long backlog of bags is unstable: any event that causes the queue to grow in length quickly causes many late bags.
Thus, a persistently long queue indicates insufficient screening capacity safety margin. +- Set stricter deadlines for passenger arrival before flights. We assume that airlines are fairly lenient about accepting bags from passengers up to the final deadline for placing them on a flight. An airline could establish a policy wherein bags that are not checked by a certain time before the flight—say, $30\mathrm{min}$ —are not guaranteed to make the flight. With such a policy, any time we have identified a strategy as handling all but "late bags," all bags would be handled in time—the "late bags" would have been rejected by the airline outright and would not delay the flight. +- Plan for future growth in aircraft travel. Historical data shows a growth of about $6\%$ per year in the number of airline passengers [Metropolitan Airports Commission 2003]. Since a screening system is a large investment, an airport should plan with an eye to future capacity. The dip in traffic since 2001 may be only temporary and airline traffic may return to its normal growth curve, with a corresponding larger-than-usual increase in traffic in the next year or two. + +![](images/d66a2f74314ca675b97081862f50d47ffb473aeb52a86c1cc680ed980c85e768.jpg) +Figure 3a. Effects of the removal of baggage-screening capacity on the percentage of bags screened on schedule, for various queueing algorithms. + +![](images/978a9caa7c5bcbe2277de60e345daf5f7379cd6aee2a7bb22855a48fdb5d53ab.jpg) +Figure 3b. Effects of the removal of baggage-screening capacity on the average delay for a bag, for various queueing algorithms. + +- Install a baggage screening system early, and ramp up use. Unexpected difficulties may arise with a new screening system. In addition, machine operators need to become proficient. If an airport installs a baggage screening system in advance of the federally mandated deadline, screening can begin below $100\%$ and increase to $100\%$ by the deadline as problems are dealt with. 
# Optimal Peak-Hour Scheduling

We develop three flight-scheduling models for the peak hour, each producing a distinct passenger arrival profile, and test each with two arrival concentration distributions. The following assumptions simplify the model without reducing the validity of the simulations.

# Assumptions

- On average, passengers arrive $1.5\mathrm{h}$ before departure. The problem statement says "between forty-five minutes and two hours"; although $1.5\mathrm{h}$ is not the middle of that range, it is close and makes for easier modeling.
- Passengers arrive according to a Gaussian distribution. We adopt a Gaussian arrival model from Clark County Department of Aviation [2002]; such a distribution encompasses realistic features, such as a peak in arrivals considerably before flight departure. We chose a mean of $90\mathrm{min}$ and tried standard deviations of $15\mathrm{min}$ and $30\mathrm{min}$ , implying that $95\%$ and $70\%$ of passengers, respectively, arrive between $2\mathrm{h}$ and $1\mathrm{h}$ before their flights.
- Flights scheduled to leave during the peak hour are uniformly spaced. This assumption accommodates a generic runway structure.

# Passenger Arrival Models

We apply three passenger arrival models to airports A and B. The peak-hour data given in the problem statement were processed both in isolation (no other flights during the day) and as part of a busier schedule that affects peak-hour departures.

# Random Placement Algorithm

A random placement of flights within the peak hour, according to a uniform distribution, makes different parts of the hour look approximately the same. We regard this algorithm as a baseline.

# Bimodal Distribution Algorithm

A bimodal distribution schedules the largest flights at the beginning and at the end of the hour in an attempt to reduce the peak in passenger arrival.
This method is useful only when the standard deviation of the arrival distributions is low (such as $\sigma = 15\mathrm{min}$ ). At higher standard deviations (such as $\sigma = 30\mathrm{min}$ ), the bimodal distribution converges to the distribution obtained with the greedy algorithm below.

# Greedy Algorithm

A greedy algorithm always makes the optimal local choice in the hope that the final solution will be globally optimal [Cormen et al. 2001]. Our greedy algorithm attempts to minimize the peak in the arrival distribution and thus reduce a major peak in passenger arrival for peak-hour flights. The following methodology is used:

- We consider the flights sequentially from largest to smallest.
- At each step, the center of the passenger arrival Gaussian distribution under consideration is assigned to the candidate center at which the total arrival distribution so far is lowest; each center can be used for at most one distribution.

# Simulation Results

We ran each of the passenger models through the optimal baggage screening model to determine which would be best suited for airports A and B. The $\sigma = 30$ cases outperformed the $\sigma = 15$ cases for all arrival distributions, which implies that having nearly all the passengers for peak-hour flights arrive at the same time backs up the queue significantly.

The procedure used to combine the given peak-hour data and the Denver data involved:

- The peak hour of the Denver data was identified as 10 A.M. to 11 A.M., with a maximum rate of baggage arrival of 160 bags/min.
- The peak-hour data for airports A and B were scaled up by a factor of 3.5 to approximate better the volume at Denver.
- The peak hour in the Denver data was entirely replaced by the airport A and B data in their respective simulations.

Both the embedded and the isolated peak data were processed using the optimal baggage screening algorithm.
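One plausible reading of the greedy placement above can be sketched as follows (our sketch, not the authors' implementation): flights are taken largest first, and each flight's Gaussian arrival profile is centered on the free slot where the running total of arrivals is currently lowest. The value of sigma, the time horizon, and the example sizes and slot centers are illustrative assumptions.

```python
import math

def gauss_profile(center, passengers, sigma=15.0, horizon=240):
    """Per-minute arrival counts for one flight, normalized to `passengers`."""
    w = [math.exp(-0.5 * ((t - center) / sigma) ** 2) for t in range(horizon)]
    s = sum(w)
    return [passengers * x / s for x in w]

def greedy_schedule(flight_sizes, slot_centers, sigma=15.0, horizon=240):
    """Assign each flight (largest first) to the least-loaded free slot;
    each slot center is used for at most one flight."""
    total = [0.0] * horizon
    free = list(slot_centers)
    placed = []
    for size in sorted(flight_sizes, reverse=True):
        best = min(free, key=lambda c: total[c])   # least-loaded free center
        free.remove(best)
        for t, v in enumerate(gauss_profile(best, size, sigma, horizon)):
            total[t] += v
        placed.append((size, best))
    return placed, max(total)

# Four flights, four candidate arrival-peak centers (minutes).
placed, peak = greedy_schedule([300, 200, 200, 100], [60, 80, 100, 120])
```

Placing the largest flights first gives them first choice of the emptiest slots, which tends to flatten the combined arrival profile relative to random placement.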
The greedy algorithm creates a schedule that performs up to $50\%$ better (in terms of total late time for bags) than the random schedule when

![](images/48ce6ef280cfc571ae04a1787f1675fe888728f5e9dea8808fd0aa7ccbc6d6b8.jpg)
Figure 4. Comparison of peaks of passenger arrival profiles, illustrating the superiority of the greedy schedule over the random and bimodal distributions.

the peak hour is embedded in the relatively busy day; in isolation, a greedy-algorithm schedule was about $30\%$ superior. The bimodal algorithm produced a schedule that was worse than the baseline; we therefore eliminate it.

At a load below the capacity of the machines, any scheduling algorithm will do. Above capacity, some methods perform better than others. The efficiency of a scheduling algorithm may be gauged by how long its operating capacity is exceeded and how backed up the queue becomes.

In Figure 5, notice that the bimodal profile exceeds its capacity first and continues to operate above capacity for the longest time. Even the intermediate decrease in queue backup is not enough to allow the bags to be processed faster than in either the random or the greedy profiles. On the other hand, although the random placement profile exceeds its capacity latest and drops below capacity earliest, its high peak leads to a significant queue backup that cannot be cleared as quickly as in the greedy profile. The greedy profile balances both factors, giving the best result.

We used the two better algorithms to develop schedules for airports A and B. [EDITOR'S NOTE: We omit the details of the schedules.] For both airports, the greedy algorithm generated a better schedule. Both methods resulted in the use of the same number of EDS machines at the airports, although the greedy schedule results in fewer late bags. Airports A and B require 8 and 9 EDS machines, respectively, for $100\%$ baggage screening and no delays due to the screening process.
# Recommendations

With normal or above-normal traffic during pre-peak hours, the scheduling of flights during the peak hour does not matter much, because passenger arrivals are spread out over 3 hours, reducing the impact of changes within the peak hour.

If the peak hour has significantly more traffic than pre-peak hours, then the greedy algorithm is better than either the random or the bimodal distribution.

# Review of Future Technologies

Current technology approved by the FAA is highly limited and extremely expensive.

EDS machines produce a three-dimensional image of the contents of a bag, allowing observation of hidden materials, zoom, and rotation of perspective to focus on suspicious objects. Unfortunately, EDSs use a powerful X-ray that requires shielding to protect operators, is very expensive, and—due to the high sensor rotation rate required to resolve images—is limited in speed.

ETD machines use mass spectrometry to detect trace levels of explosives. Sample collection takes much longer and has much higher labor requirements than the EDS, with a critically high false-negative rate of $30\%$ for a surface sample and $15\%$ for an open-bag sample. This poor detection rate is due to the uneven concentration of explosive residues within a bag [Butler and Poole 2002].

Few alternatives have been developed as fully as EDSs and ETDs, but some appear very promising:

- Coherent scatter is slower than EDS (60–240 bags/h), but with a near-perfect detection rate and an order of magnitude fewer false positives, it is still relatively efficient [Butler and Poole 2002].
- Dual-energy X-ray has a high false-alarm rate of $20\%$ [Singh and Singh 2003] but can process 1,500 bags/h. These systems are being installed in London and other European airports and await certification in the U.S. [Butler and Poole 2002].
- Stereoscopic tomography, slightly different from the computed tomography used in EDSs, scans 1,200–1,800 bags/h and is being tested for accuracy and false-alarm rates [Singh and Singh 2003].

- X-ray diffraction uses the unique diffraction patterns of scanned materials to determine their chemical composition. Current experiments show a nearly perfect detection rate and an extremely small false-alarm rate [Singh and Singh 2003]. Throughput rates and cost will likely be similar to those of normal X-ray scanners, making this a promising technology.
- Neutron-based detection is used in several developing techniques:

Thermal neutron analysis (TNA) can detect nitrogen levels particular to many plastic explosives but has limited sensitivity, has a high false-alarm rate due to background nitrogen levels, and is at least as expensive as an EDS, making it a less promising candidate.

Fast neutron analysis (FNA) is similar to TNA except that it can also detect oxygen, carbon, and hydrogen levels, allowing greater sensitivity and accuracy. However, the high-energy neutrons used create large amounts of noise, making information difficult to detect.

Pulsed fast neutron analysis (PFNA) solves the noise problem but requires a collimated, pulsed energetic neutron beam, which is hard to make and tends to be unsafe and expensive.

Pulsed fast thermal neutron analysis (PFTNA) uses a shorter pulse and measures both thermal and fast neutron information. Portable models for landmine, unexploded ordnance, and narcotics detection have very high accuracy levels [Singh and Singh 2003].

- Quadrupole resonance uses magnetic resonance techniques to identify the composition of the scanned object. Every material releases a unique signal; those corresponding to explosive compounds can be isolated and identified.
Machines using this technique are under construction; the manufacturer predicts that this technology will be faster (300 bags/h) and more accurate than both EDS and ETD [Quantum Magnetics 2002]. +- Millimeter wave imaging is a noninvasive technique that detects short wavelength electromagnetic radiation from scanned objects. While this appears to work well for locating weapons concealed about a person, it does not seem able to distinguish explosive materials from inert ones and is thus not useful for baggage scanning [Homeland Security Research 2002]. Microwave imaging is similar to millimeter wave imaging. + +# Conclusion + +Frankly, we've tried everything else . . . We've put up more metal detectors, searched carry-on luggage, and prohibited passengers from traveling with sharp objects. Yet passengers still somehow continue to find ways to breach security. Clearly, the passengers have to go. + +—The Onion (16 October 2002) + +Since excluding passengers is unrealistic, we study the more practical technique of scanning baggage. Our results are: + +- We develop a model that predicts the behavior of a queueing system for baggage in an airport security screening system and allows prediction of delays caused by the system. This model is then expanded to include multiple types of screening machines and false-positive results. +- We evaluate our model against real-world data for Denver International Airport and for the data given for airports A and B. +- Using our model, we predict the optimal number of Explosive Detection System (EDS) or Explosive Trace Detection (ETD) machines to use at several different airports and provide other recommendations for the implementation of a security screening system. For Denver, we recommend 48 EDS machines; at airports A and B we recommend 10 and 11 machines, respectively. We also compare these figures to actual figures for EDS use at the Dallas/Fort Worth Airport. 
+- We study how the distribution of flights during the peak hour of the day affects the efficiency of the system. We propose a greedy algorithm for optimally scheduling flights. +- We review promising technologies for future security screening machines. + +Our evaluation of the requirements for $100\%$ baggage screening suggests that such high security goals are cost-effective, so research into alternative technologies and screening systems is needed. + +# References + +107th Congress. 2001. Aviation and Transportation Security Act. http://www.tsa.dot.gov/interweb/assetlibrary/Aviation_and_Transportation_Security_Act_ATSA.Public_Law_107_1771.pdf. +Butler, Viggio, and Robert W. Poole, Jr. 2002. Re-thinking checked-baggage screening. July, 2002. Los Angeles, CA: Reason Public Policy Institute. www.rppi.org/baggagescreening.html. +Clark County Department of Aviation. 2002. EDS bag screening analysis. Presentation posted online. http://www.aci-na.org/docs/EDS%20Bag%20Screening%20Analysis.ppt. Revised 17 January 2002. +Cormen, Thomas H., Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. 2001. Introduction to Algorithms. 2nd ed. Cambridge, MA: MIT Press. +Devore, Jay L. 2000. Probability and Statistics for Engineering and the Sciences. 5th ed. Belmont, CA: Duxbury. + +Douglas, Jim. 2002. House votes down Jan. 1 airport-security deadline. http://www.wfaa.com/jdouglas/stories/wfaa020726_am_dfwdelays. 2b8745ed.html. +FAA considering passenger ban. 2002. The Onion (16 October 2002); http://www.theonion.com/soni3838/fea_pessenger_ban.html. +Homeland Security Research. 2002. Homeland Security Analyst—May 2002—Technology Focus. http://www.hsrc.biznewsletter/May_2002/newsletter_May_2002_techfocus.htm. +Metropolitan Airports Commission. 2003. 2002 MAC Annual Report. http://www.mspairport.com/MAC. +Prabhu, N.U. 1997. Foundations of Queueing Theory. Boston, MA: Kluwer. +Quantum Magnetics. 2002. Quadrupole resonance. 
http://www.qm.com/core_technology/quadrupole_resonance_body.htm. +Singh, Sameer, and Maneesha Singh. 2003. Explosives detection systems (EDS) for aviation security. Signal Processing 83: 31-55; http://www.dcs.ex.ac.uk/research/pann/pdf/pann_SS_084.PDF. +United Airlines. 2003. United.com Electronic Timetable. http://www.ual.com/page/middlepage/0,1454,1891,00.html. + +# Advancing Airport Security through Optimization and Simulation + +Michelle R. Livesey + +Carlos A. Diaz + +Terrence K. Williams + +Humboldt State University + +Arcata, CA + +Advisor: Eileen M. Cashman + +# Summary + +Our design team was tasked with developing optimization and simulation models to: + +- help the airlines optimally schedule all flight departures within peak hours at two large airports in the Midwest; and +- predict the number of explosives detection systems (EDSs) and explosives trace detection (ETD) machines required at the two airports to examine all passengers' bags departing during a peak hour. + +Our optimization model is linked with a genetic algorithm to schedule flight departures optimally for each airport. We use Monte Carlo simulation to generate random data sets for use in a transient stochastic simulation model developed to predict EDS and ETD needs. + +The optimization model yields near-optimal flight schedules for peak hours at the two airports. These flight schedules, along with various probabilities associated with passenger arrival, machine processing speeds, and flight seat distributions, were used by the simulation model to predict the number of EDS and ETD machines required: Airport A requires 30 EDS and 12 ETD machines, and airport B requires 34 EDS and 13 ETD machines. More machines would be needed to accommodate multiple peak hours in succession or increased travel in peak seasons. + +The UMAP Journal 24 (2) (2003) 141-152. ©Copyright 2003 by COMAP, Inc. All rights reserved. 
Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice. Abstracting with credit is permitted, but copyrights for components of this work owned by others than COMAP must be honored. To copy otherwise, to republish, to post on servers, or to redistribute to lists requires prior permission from COMAP. + +# Review of Literature + +# Queueing Theory + +A queueing model is essentially concerned with the input process and service mechanism of the system [Takacs 1962]. The input process at an airport is a combination of the time when passengers arrive before their departure and the number of bags that a passenger checks. The service mechanism is "first-come, first-served," so the order of bags checked is conserved through the screening process. + +# Markov Chains + +A Markov chain is concerned with discrete time and has the property that "if the present state of the system is known, the future of the system is independent of its past" [Kulkarni 1995]. The state of the system at time $(n + 1)$ depends on the system at time $n$ , which depends on the system at time $(n - 1)$ , and so on until at time zero, the starting point of the system. + +# Arrival Distributions + +Queueing models allow the input process to follow any probabilistic distribution. However, many examples in texts describe the arrival of people as a Poisson process [Takacs 1962; Devore 1995]. The assumption that people arrive following a Poisson process is widely used [Heyman and Sobel 1982]. When the arrival density parameter of the Poisson process is large, the distribution is approximately normal [Devore 1995]. + +# Simulation Models + +Simulation queueing models can show the behavior of systems over time [Solomon 1983]. 
They have been used in the airport industry recently to determine the number of instruments and staff for effective security [Crites 2003]. Simulation models can also take into account the variability of stochastic events, such as passenger arrival distributions and security-screening device operational reliability. + +# Genetic Algorithms + +A genetic algorithm (GA), through its stochastic nature, provides a robust and efficient method for solving difficult optimization problems with large nonlinear search spaces; it generally finds extremely good solutions since it is able to simultaneously search various points of the solution space [Dandy 2001]. + +Based on the mechanics of natural selection and genetics, GAs randomly generate solutions which are checked for fitness and then utilize genetic processes such as selection, crossover, and mutation to combine the most fit solutions into a new population of solutions. In this manner, highly fit and desirable traits are passed from one generation to the next, supplanting unfit traits in the process. The GA iteratively repeats this process over a number of generations until a near global optimum is achieved. + +# Methodology and Application + +To predict the number of EDS and ETD machines to deploy, we must understand the flow of passengers into the airport. To do so, we develop flight schedules discretizing the peak hour into time steps. Flight scheduling can then be achieved using an optimization model, whose objective is to minimize the variance between the total numbers of passengers departing in each time step while meeting the constraints of departing the correct number of flights of each type within the peak hour. + +# Scheduling Model + +We develop an optimization model to determine flight schedules. We discretize the peak hour into 20 time steps, thereby scheduling flights in 3-min intervals. 
The configuration and development of the model were tailored to genetic-algorithm software called Generator [New Light Industries 2001]. The multiobjective function, which minimizes the variance between the numbers of passengers departing in each time step and also assigns the correct number of flights of each flight type during the peak hour, is of the form

$$
\min z = \sum_{i=1}^{n} \frac{(x_{i} - \bar{x})^{2}}{n - 1} + \sum_{j} P_{j} (y_{j} - b_{j}),
$$

where

$x_{i} =$ the number of passengers departing in time step $i$,

$\bar{x} =$ the average number of passengers departing per time step,

$n =$ the total number of time steps in the peak hour,

$P_{j} =$ the penalty associated with not meeting the constraint for flight type $j$,

$y_{j} =$ the number of flights being scheduled for flight type $j$, and

$b_{j} =$ the actual number of flights of type $j$ leaving the airport.

The genetic algorithm and optimization model provide near-optimal flight schedules for both airports A and B, so that approximately the same number of passengers depart in any given time interval. The airport security simulation model incorporates the optimization model's output (the flight schedule) to predict the number of EDS and ETD machines required.
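Evaluating $z$ for one candidate schedule can be sketched directly (our sketch, not the authors' Generator configuration). We use the absolute difference `|y_j - b_j|` in the penalty term so that scheduling too few flights is penalized as well, an assumption; the formula above writes the plain difference.

```python
# Fitness of one candidate schedule: sample variance of per-step passenger
# counts plus penalties for missed flight-count constraints.

def fitness(passengers_per_step, scheduled_counts, required_counts, penalties):
    """z = variance term + sum of P_j * |y_j - b_j| over flight types."""
    n = len(passengers_per_step)
    mean = sum(passengers_per_step) / n
    variance = sum((x - mean) ** 2 for x in passengers_per_step) / (n - 1)
    miss = sum(p * abs(y - b)
               for p, y, b in zip(penalties, scheduled_counts, required_counts))
    return variance + miss

# A flat schedule meeting its flight-count constraint scores z = 0;
# a lumpy one with the same total passengers scores higher.
z_flat = fitness([50] * 20, [5], [5], [1000])            # -> 0.0
z_lumpy = fitness([100] * 10 + [0] * 10, [5], [5], [1000])
```

The genetic algorithm then searches for schedules minimizing this value, driving the peak-hour load toward the flat profile.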
# Simulation Model

The simulation model requires various randomly generated data sets to simulate peak hours at each airport:

- normally distributed passenger arrival times, varying from 45 to 120 min prior to departure of peak-hour flights;
- a normally distributed random variable consisting of the number of filled seats on each flight leaving in the peak hour;
- a normally distributed random variable consisting of the EDS and ETD instantaneous machine processing rates;
- a uniformly distributed discrete random variable that describes the number of checked bags per passenger; and
- a uniformly distributed discrete random variable that is used to determine which bags are selected for additional ETD screening.

The simulation model accesses a vector containing the flight schedule to determine the number of each type of flight leaving per time step during the peak hour. It then accesses random variables associated with the filled-seat distribution for each flight and sums these values:

$$
P_{ij} = \sum_{k = 1}^{\mathrm{Sched}_{ij}} \mathrm{FS}_{k},
$$

where

$P_{ij} =$ number of passengers on all flights of type $i$ leaving in time step $j$,

$\mathrm{Sched}_{ij} =$ the number of flights of flight type $i$ leaving in time step $j$,

$\mathrm{FS}_{k} =$ filled-seat random variable for flight $k$,

$i =$ flight type, and

$j =$ time step.

The total number of passengers departing during the time step is then calculated by summing the number of passengers on each flight type during that time period:

$$
P_{\mathrm{TOT}, j} = \sum_{i} P_{ij},
$$

where

$P_{\mathrm{TOT}, j} =$ total number of passengers departing in time step $j$,

$i =$ flight type, and

$j =$ time step.

The simulation model then randomly assigns passenger arrival times to all passengers leaving during the peak hour.
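The two summations above can be sketched as follows (an illustrative sketch, not the authors' code; the seat means, standard deviations, and the toy schedule are assumptions):

```python
import random

random.seed(1)

def passengers_per_step(schedule, seat_mean, seat_sd):
    """schedule[i][j] = number of flights of type i leaving in step j.
    Returns P_TOT: total passengers departing in each time step, summing
    a normally distributed filled-seat draw for every scheduled flight."""
    n_steps = len(schedule[0])
    totals = []
    for j in range(n_steps):
        p = 0
        for i, row in enumerate(schedule):
            for _ in range(row[j]):   # one filled-seat draw per flight
                p += max(0, round(random.gauss(seat_mean[i], seat_sd[i])))
        totals.append(p)
    return totals

# Two flight types over four time steps.
sched = [[1, 0, 2, 0],    # type 0: ~120-seat flights
         [0, 1, 0, 1]]    # type 1: ~240-seat flights
print(passengers_per_step(sched, [120, 240], [10, 20]))
```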
Assuming that $99.7\%$ of passengers arrive between 45 and $120\mathrm{min}$ before their departure, the approximately normal distribution of passenger arrivals has a mean of $82.5\mathrm{min}$ before departure time and a standard deviation of $12.5\mathrm{min}$ .

Each passenger is assigned a uniformly distributed discrete random variable between 1 and 5. A passenger who receives a 1 checks zero bags, a passenger who receives a 2 checks one bag, and the rest check two bags. The result is a random bag rate at each time step.

For each time step, normally distributed random variables are generated to represent the EDS machine processing speed; this is multiplied by the number of machines to predict the number of bags processed. If that number is greater than the number of bags arriving during that time step, then the bags are processed and the residual is zero bags. Otherwise, the residual is calculated and added to the number of bags arriving in the following time step.

The residual variable represents the number of bags queued by the security machines. A maximum allowable number of bags queued is established using the number of machines, the mean EDS bag-processing speed, and the maximum time allowed for processing a bag. A maximum time of $15\mathrm{min}$ was used in the simulation. If the number of bags queued ever exceeds the maximum allowable, a flight could be delayed.

The simulation model was run with ten data sets to simulate ten independent peak hours, and then run in series to simulate ten consecutive peak hours. In the independent peak-hour simulation, the number of bags queued is initially set to zero, assuming that peak hours are scheduled between periods of zero flight departures. In the multiple peak-hour simulation, the passenger and bag arrival phenomena are assumed to follow a Markov process.
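The residual (queue) update described above can be sketched as follows (our illustrative sketch, not the authors' Fortran code; the per-machine rate of 3 bags per 3-min step and the translation of the 15-min limit into five steps are assumptions):

```python
import random

def run_queue(bag_arrivals, n_machines, rate_mean, rate_sd, max_queue):
    """Return (queue_after_each_step, delay_possible): per step, capacity
    = machines x a normally distributed per-machine rate; bags beyond
    capacity carry over as the residual queue."""
    residual = 0
    history = []
    delay_possible = False
    for arriving in bag_arrivals:
        rate = max(0.0, random.gauss(rate_mean, rate_sd))
        capacity = round(n_machines * rate)
        residual = max(0, residual + arriving - capacity)
        history.append(residual)
        if residual > max_queue:      # a flight could be delayed
            delay_possible = True
    return history, delay_possible

# Max allowable queue = machines x mean rate x 15-min window (five steps).
machines, mean_rate = 20, 3.0
max_queue = machines * mean_rate * 5
hist, delayed = run_queue([100] * 5 + [0] * 5, machines, mean_rate, 0.5, max_queue)
```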
+ +# ETD Simulation + +The independent peak hour simulation was modified to incorporate ETD machines; $20\%$ of the bags processed by the EDSs are flagged for ETD scrutiny. The ETD machine processing speeds, bags queueing, and maximum bags + +queued are all calculated as described earlier for EDS machines. A maximum allowable queue time of 9 min was used; this implies that bags have 21 min to reach their flights after ETD scanning. + +We had hoped to develop the Markov model to incorporate the ETD machines, but time constraints and coding requirements proved prohibitive. + +# Model Assumptions + +- Normal distribution of: + +- passenger arrival time before departure, +- seats occupied on a plane (unless all full planes were specifically simulated), and +- detection systems processing rates. + +- Bags are processed on a first-come first-served basis. +- EDSs are accessible to all bag check-in locations. +- People who arrive less than $45\mathrm{min}$ before their plane departure time are turned away and their baggage is not checked. +- For people who arrive more than $120\mathrm{min}$ before their departure, their bags are not checked until exactly $120\mathrm{min}$ before their departure. +- The time needed to transfer EDS-screened bags to planes is less than $30\mathrm{min}$ . +- The time needed to transfer ETD-screened bags to planes is less than $21\mathrm{min}$ . +- The optimally scheduled time steps within the peak hour are interchangeable, and reorganizing these time steps will not change the outcome of the simulation. +- If the number of bags received during a time step is less than the processing rate for that time period, all of those bags are processed during that time period. +- Flight cancellation is not considered in this simulation; this is justified by the fact that some baggage destined for a canceled flight will have already been checked. This assumption also adds a conservative element. 
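The two arrival-window assumptions above can be captured in a small helper (a hypothetical function, for illustration): passengers arriving with less than 45 min are turned away, and bags of very early passengers are not checked until exactly 120 min before departure.

```python
def effective_checkin(minutes_before_departure):
    """Minutes before departure at which a passenger's bags enter the
    screening system, or None if the passenger is turned away."""
    if minutes_before_departure < 45:
        return None
    return min(minutes_before_departure, 120)
```

Clamping early arrivals to the 120-min mark keeps the simulated check-in window consistent with the normal arrival distribution used above.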
+ +# Results and Discussion + +# Flight Schedules + +Using the genetic algorithm, we determined optimal flight schedules for airports A and B. [EDITOR'S NOTE: We omit the details of the schedules.] + +# Number of Machines + +We used the optimization and simulation models to determine the number of EDS and ETD machines required at airports A and B (see Table 1). + +Table 1. Model prediction summary for EDS and ETD machines. + +
| Airport | Flight status | EDS machines | ETD machines |
|---------|---------------|--------------|--------------|
| A | 100% full | 37 | 15 |
| A | Varying % full | 30 | 12 |
| B | 100% full | 40 | 15 |
| B | Varying % full | 34 | 13 |
+ +# Passenger Arrivals + +Simulations of the various peak-hour data sets showed slight variations in passenger and checked baggage arrival distributions. Figure 1 shows superimposed passenger arrival distributions at airport A for all 10 peak hour data sets used in the simulation. + +![](images/903eac114b8c2832efb47649210c99347f3b80d5dd7941a8b4386512db124835.jpg) +Figure 1. Variations in people arrival distributions at airport A for 10 random data sets. + +The peak hour begins at 11:00 A.M., and passengers arrive between 45 and 120 min before their flight according to an approximately normal distribution. + +The same normal passenger arrival distribution was observed for airport B. Passenger arrival rates were also normally distributed for airport B, with a slightly higher mean and standard deviation, which explains assigning more machines to airport B. + +The simulation model uses passenger and bag arrival probabilities, and EDS and ETD machine processing rate probabilities, to simulate the operational performance of each machine type under peak-hour passenger flows at both airports. Figure 2 shows some operational performance characteristics of airport A's EDS machines for all ten data sets. + +![](images/d9e2bf5a3b11786cddcc9f86b6debdd50c088cc986cd35b56f29473135aa39a1.jpg) +Figure 2. Variability of model response to ten data sets with respect to bag throughput and bags queued by EDS machines in airport A. + +# EDS BAG Throughput + +EDS bag throughput is the number of bags examined and passed by all EDS machines in one time step. The bag throughput increases steadily as more passengers begin to arrive for peak-hour flights, until the machine's operational speed is overcome, at which point bags begin to queue up awaiting examination. 
The number of bags queued increases steadily as passenger and bag arrivals continue to exceed the processing rate of the EDS machines, but the queue never exceeds the upper limit, denoted by the 15-, 16-, and 18-minute lines in Figure 2. These lines correspond to the maximum allowable number of bags queued so that all arrive on time at their planes. When more time is allowed for bag queueing, the total number of bags allowed to queue increases, apparent in the stepwise increases shown in the graph. Therefore, by requiring passengers to arrive slightly earlier than the current $45\mathrm{min}$ deadline, the number of EDS and ETD machines required could be reduced.

The simulation model also generates system characteristics for the ETD machines at airport A. These results are shown in Figure 3.

![](images/18a1d433bc72e944108178f988e79cacb71fcc6805b7a759f8cbee099f20cd55.jpg)
Figure 3. Bag throughput and bags queued by ETD machines in airport A.

The simulation model also predicted the number of EDS and ETD machines required at airport B, with similar results. Since all queued bags are processed within the 15-min allowed time period, no delays will occur with this system, assuming bags can reach their planes within $30\mathrm{min}$ . Bags passing through ETD examination have only $21\mathrm{min}$ to reach their planes.

If time is an issue, passengers could be required to arrive earlier for flights, or additional personnel could be employed to ensure that ETD-examined bags reach their respective planes without delay.

# Multiple Peak Hours

In addition to simulating single peak-hour events at both airports A and B, we evaluated the effects of combining 10 peak-hour events in succession. This simulation is representative of days when air traffic does not slow down but remains heavy throughout the day. Since passenger arrivals do not slow down, as in the single peak-hour simulation, we expect to need more machines.
Our simulation of multiple peak hours predicts only the number of EDS machines required and does not consider ETD machines. Table 2 shows the results. + +The EDS system performance for multiple peak hours, in which planes' seating capacities vary, is shown in Figure 4. + +The bag arrival distribution seems to approach a steady-state value that is maintained throughout most of the day. Although the queued bags steadily increase throughout the day, none of these bags exceed the maximum allowable time in queue. Similar results were also obtained for airport B. + +Table 2. Model results for EDS machines for 10 consecutive peak hours. + +
| Airport | Flight status | EDS machines required |
|---------|---------------|-----------------------|
| A | 100% full | 38 |
| A | Varying % full | 31 |
| B | 100% full | 43 |
| B | Varying % full | 35 |
![](images/7930a0d51d9af545e2c5925523bb8d1a432a207a424d39db521301a68d026e6e.jpg)
Figure 4. Bag arrival and queued-bag distributions over a 10-peak-hour period at airport A.

# Conclusions and Recommendations

- Our optimization model, in conjunction with a genetic algorithm, proved invaluable in developing optimal flight schedules for airports.
- Increasing the number of successive peak hours increases the number of EDS machines required to prevent flight delays.
- Our simulation model analyzes tradeoffs between changes in technology and their effects on airport security.
- Both EDS and ETD technologies should be employed to provide improved airport security.
- Our optimization and simulation models could easily be applied to the remaining 193 airports in the Midwest region and elsewhere.

# The Price of Security: A Cost-Benefit Analysis of Screening of Checked Baggage

Michael Alan Powell
Tate Alan Jarrow
Kyle Andrew Greenberg

United States Military Academy
West Point, NY

Advisor: Michael J. Johnson

# Summary

This report examines a model constructed to optimize the number of EDS machines necessary to provide desired security at airports.
Based on this model, we recommend 16 EDS systems for airport A and 18 for airport B. Furthermore, we provide a set of security objectives for the airline, as well as an ideal flight-scheduling solution. We find that a three-level system of EDS and human inspection is best.

However, EDS is not a permanent solution to the security screening problem; it is too inefficient for the expense incurred. In addition, we currently see little reason to incorporate ETD systems into our security proposal. The best hope for the baggage screening problems that we face today lies in future technology; neutron-based detection and quadrupole resonance offer the most promising solutions.

# Problem Approach

The model of the baggage screening system can be broken down into three phases:

- Check-In Phase, which consists of:

- Arrival rate of passengers to the airport

Table 1. Symbols used.
| Symbol | Definition |
| --- | --- |
| $\mu$ | mean arrival time of passengers before departure (min) |
| $\sigma$ | standard deviation of the normal distribution of passenger arrivals at the airport (min) |
| $P_A$ | number of passengers arriving at the airport in a 10-min interval (passengers/10 min) |
| $r(t)$ | arrival rate per minute; a function of time and the normal distribution shown in Appendix B |
| $P_T$ | total number of passengers arriving at the airport during peak hours |
| $L_F$ | load factor (percentage of plane seats that are filled) |
| $C_P$ | percentage of passengers traveling on connecting flights |
| $l$ | queue length at each 10-min interval |
| $T$ | number of ticket agents |
| $T_r$ | service rate of ticket agents (passengers/min) |
| $\Delta T$ | number of ticket agents to be added, based on queue length |
| $l_O$ | optimal passenger line length (passengers) |
| $P_P$ | number of passengers processed through the ticket counter |
| $l_i$ | line length at the beginning of a 10-min interval (passengers) |
| $B_N$ | number of bags added to the system for inspection every 10 min |
| $\max I_R$ | maximum inspection rate of EDS (bags/min) |
| $\mathrm{EDS}$ | number of EDSs in the system |
| $\mathrm{EDS}_r$ | EDS inspection rate (bags/min) |
| $l_B$ | bag line length for EDS (bags) |
| $\Delta\mathrm{EDS}$ | number of EDSs to add to the system |
| $l_{BO}$ | optimal baggage line length for EDS (bags) |
| $\mathrm{EDS}_O$ | number of operational EDSs |
| $\max I_{RH}$ | maximum inspection rate of human-operated EDS (bags/min) |
| $\mathrm{EDS}_H$ | number of human-operated EDSs (machines) |
| $l_{BH}$ | bag line length for human-operated EDS (bags) |
| $\mathrm{EDS}_{rH}$ | human-operated EDS inspection rate (bags/min) |
| $\Delta\mathrm{EDS}_H$ | number of human-operated EDSs to add to the system |
| $l_{BOH}$ | optimal baggage line length for human-operated EDS (bags) |
| $\max I_{R\mathrm{Hand}}$ | maximum inspection rate of hand inspectors (bags/min) |
| $\mathrm{Hand}$ | number of hand inspectors available |
| $l_{B\mathrm{Hand}}$ | bag line length for hand inspectors (bags) |
| $\mathrm{Hand}_r$ | hand inspection rate (bags/min) |
| $l_{BO\mathrm{Hand}}$ | optimal baggage line length for hand inspectors (bags) |
| $\Delta\mathrm{Hand}$ | number of hand inspectors to add to the system |
- Passenger check-in rates to ticket counters

- Baggage Inspection Phase
- Movement Phase: movement of inspected baggage to the appropriate planes

Our initial approach was to implement the model in the simulation software system Arena [Rockwell Software 2000]. However, the version of the software available to us did not have the capacity that we needed: It allows only 100 entities in the system, instead of the 5,000 that we needed for proper testing. As a result, our second implementation uses Microsoft Excel.

# Assumptions

- The average amount of time that a passenger spends at a ticket counter is 105 to 150 s; we assume 120 s.
- The passenger arrival distribution at Las Vegas Airport is representative of airports throughout the country and in the Midwest.
- Passenger arrival is normally distributed, an assumption supported by analysis in later sections.
- Ticket counters are uniformly distributed throughout the airport, and all ticket counters work for all airlines.
- All airlines follow the same basic system: Passengers check in at a ticket counter, an agent checks bags, and the airline delivers them to the plane.
- There is no curbside check-in, which in fact is only a small part of overall baggage checking. Also, with the advent of new security measures, curbside check-in will have to be much more secure [Federal Aviation Administration 2001, 104], and we assume that most airlines will be unwilling to incur this additional cost.
- Passengers departing during the peak hours of flight operations are the only passengers we need to be concerned about, because the maximum number of bags are checked at peak hours; hence this is the most important time to consider.
- Airports A and B are single-terminal airports. The reason for this assumption is that EDS machines must be centrally located to ensure reliability and a rapid flow rate.
If the EDSs were spread out, there would be transportation time between the ticket counters and the EDS machines, and our model would not be valid. In multiple-terminal airports, the EDS machines should be positioned centrally in each individual terminal.
- When adding a new ticket agent, EDS, or inspector to the system, the change is instantaneous; there is no warm-up period and no transit time. Although this assumption is somewhat unrealistic, it allows for an easier representation of the data and a smoother analysis.
- Since we are modeling two of the largest airports in the Midwest, we base any additional information needed on Chicago O'Hare Airport, the largest airport in the region and the second largest in the world [Aviation Statistics 2002]. For example, the percentage of passengers on connecting flights (55%) is from Chicago O'Hare [Merringer 1996].
- The data given in the problem statement about the distribution of the number of bags that passengers check are accurate.
- We want to process every entity through the system with 30 min remaining, to allow for the movement stage. However, we recognize that this is not possible, because of extraneous factors that we do not control, such as late arrivals. Therefore, our model requires that 95% of the passengers and bags be processed before the 30-min window before departure.
- The EDS reliability rate (92%) and speed (160 to 210 bags/h) given in the problem statement are accurate.

# Model Design

# Check-In Phase

The key to the check-in phase is determining the distribution of passenger arrivals; in other words, we need to find the rate at which passengers arrive at the airport. The second part is determining the length of the queue, so that we can estimate the number of ticket agents and the time required for passengers to get through ticket lines to check their bags.

# Arrival Rate of Passengers

Data from Las Vegas Airport are given in Figures 1 and 2.
![](images/8cd2b4b215b2a33df709e7ea97e4f085a839feb33cbbc29a9c1a88feeffc5bf9.jpg)
Figure 1. Las Vegas Airport passenger arrival distribution: percent passenger arrivals per minute vs. minutes prior to departure [Leaving ... 2003].

We assume that Las Vegas Airport provides us with arrival information that is consistent with airports in the rest of the country. This passes the commonsense test, since people behave similarly throughout the country.

![](images/b3cb7476f509c20ba71e93aa5c4579d5da03fcd90db762585dd09887a39b183c.jpg)
Figure 2. Las Vegas Airport passenger arrival distribution: lower curve, probability density function; upper curve, cumulative distribution function [Leaving ... 2003].

The arrivals seem to follow a normal distribution. However, the graphs were created from discrete values, not from a function; we have the percentage of passengers who arrive in each 10-min interval from 190 min to 0 min prior to departure. For these data, the sample mean is 91.15 min, and 50% of passengers arrive between 70 min and 100 min prior to departure. To get a continuous function, we adjust the mean and standard deviation of a normal distribution to fit the data; the fitted distribution has $\mu = 99.8$ min and $\sigma = 21.2$ min (see Figure 3).

# Flow Rate

The second part of the check-in phase is the flow rate through the ticket counters. Using the information in the problem statement, we calculate the number of passengers arriving during peak hours by multiplying the number of seats in a flight by the number of flights with that many seats. The total over all flight types gives the total number of passengers: 5,396 passengers for airport A and 5,781 for airport B.
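The seats-times-flights tally can be sketched as follows; the flight mix here is hypothetical (the problem's actual schedule data are not reproduced in this paper's text), so only the form of the computation, not the totals, matches the paper.

```python
# Hypothetical flight mix: seats per flight -> number of such flights.
flight_mix = {34: 12, 85: 16, 128: 22, 215: 4}

# Total passengers (before the load-factor correction) is the sum over
# flight types of seats times flights.
total_seats = sum(seats * flights for seats, flights in flight_mix.items())
```

Applying the same sum to the actual flight schedules in the problem statement yields the 5,396 and 5,781 figures above.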
To determine the number of passengers who arrive in a particular 10-min interval before departure, we multiply the proportion of arrivals per minute

![](images/cc6941d69ae10016a61a2aab0a7518cf199a84c61263f84db632727c2314b8d1.jpg)
Figure 3. Hypothesized normal distribution of arrivals compared with data. Note: Labels for the distributions are reversed: The diamonds in fact are for the simulated distribution and the squares for the Las Vegas distribution. The correction could not be made by press time.

$r(t)$ (calculated from the normal distribution) by 10 min and then by the total number of passengers $P_T$ who arrive at the airport. We must also consider the load factor ($L_F$: the percentage of plane seats that are actually filled) and the percentage of passengers on connecting flights ($C_P$), who arrive but do not have to check in. The overall equation is
$$
P_A = r(t) \times 10 \times P_T \times L_F \times (1 - C_P).
$$

Since 55% of passengers are taking connecting flights, we have $C_P = 0.55$.

To find the overall load factor, we divide the total number of passengers by the total number of seats available: 80.4% for airport A and 80.7% for airport B.

For airport A, the final equation for the number of passengers arriving to check in is
$$
P_A = r(t) \times 5396 \times 10 \times 0.804 \times 0.45.
$$

We use Excel to simulate the dynamic arrival rate, using the normal distribution calculated earlier. To determine the queue length $l$ at every 10-min interval, we take the number of ticket agents $T$ available, multiply by the service rate $T_r$ and by 10 min, and subtract the result from the number of passengers who have checked in:
$$
l = P_A - 10 \times T \times T_r.
$$

We assume that the service rate is 0.5 passengers/min, or 2 min per passenger [EDS Bag Screening Analysis n.d.].
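The binned arrival computation above can be sketched in a few lines (a Python sketch of the paper's Excel calculation, using the airport A constants from the text; $t$ is measured in minutes before departure):

```python
import math

# Constants from the text (airport A): fitted normal arrival distribution,
# total peak-hour passengers, load factor, and connecting-flight fraction.
MU, SIGMA = 99.8, 21.2          # min before departure
P_T, L_F, C_P = 5396, 0.804, 0.55

def normal_cdf(x):
    """CDF of N(MU, SIGMA**2), computed via the error function."""
    return 0.5 * (1.0 + math.erf((x - MU) / (SIGMA * math.sqrt(2.0))))

def arrivals(t):
    """Expected check-in arrivals in the interval [t, t + 10] min before
    departure: the binned form of P_A = r(t) * 10 * P_T * L_F * (1 - C_P)."""
    frac = normal_cdf(t + 10) - normal_cdf(t)
    return frac * P_T * L_F * (1 - C_P)
```

Summing `arrivals(t)` over the 10-min bins from 0 to 190 min recovers essentially all of the $P_T \times L_F \times (1 - C_P) \approx 1{,}952$ locally originating check-in passengers, with the heaviest bins near the 99.8-min mean.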
At the start of every 10-min interval, there is an initial line length, which consists of the newly arriving passengers plus the final line length from the previous iteration. In other words, in each interval, new passengers are added at the end of the line; that is, we have first-in, first-out queueing.

A unique element of the model is that we allow for an increase in ticket agents if the queue grows. Based on the optimal line length $l_O$ (the acceptable length of the line as set by the airport), the model adds ticket agents ($\Delta T$). The maximum acceptable length of the queue is twice the service rate in a 10-min interval; in other words, the number of passengers who can be served in 20 min. If the actual line length $l$ is longer than the acceptable line length, then, based on how much longer it is, the model adds additional ticket agents, using the equation
$$
\Delta T = (l - l_O) \times 10 T_r.
$$

The model also removes ticket agents when the line is shorter than the optimal line length, since a negative $\Delta T$ is possible when $l < l_O$.

With these equations, the model tracks the time that it takes for all the passengers to get through the ticket counters. We find the number of passengers processed by taking the difference between the line length at the end of the interval ($l_f$) and the line length at the beginning of the interval ($l_i$). This difference is the number of passengers processed ($P_P$):
$$
P_P = l_f - l_i.
$$

# Inspection Phase

# Overall Picture

In the inspection phase, baggage is sent through the EDS machines and checked for explosive components, following the simple flowchart in Figure 4.

# Initial EDS Inspection

The number of passengers $P_P$ who get through check-in and deposit their bags in each 10-min interval comes from the Check-In Phase calculations.
These are the passengers who have bags to be checked and screened.

According to the problem statement, 20% of passengers check no bags, 20% check one, and 60% check two, for a weighted average of 1.4 bags per passenger. The total number of bags $B_N$ checked in a 10-min interval is
$$
B_N = 1.4 P_P.
$$

Also according to the problem statement, EDSs are operational only 92% of the time. Therefore, the number of operational EDSs ($\mathrm{EDS}_O$) is
$$
\mathrm{EDS}_O = 0.92\,\mathrm{EDS}.
$$

![](images/9822fbd61403126fe9ce894840fdf951967c3605b80616266f12c019b7a6a92c.jpg)
Figure 4. Flowchart of inspection stage.

The maximum number of bags that can be inspected in a 10-min interval depends on the number of EDSs in the system (EDS) and their inspection rate, which we take to be 180 bags/h, or 3 bags/min. The maximum inspection rate is
$$
\max I_R = (3\ \text{bags/min}) \times \mathrm{EDS}_O \times (10\ \text{min}).
$$

This model is similar to that for the ticket agents and the passengers. Using the same techniques, with only mild changes to the equations, we find the number of bags checked in each interval ($\mathrm{EDS}_O \times \mathrm{EDS}_r \times 10$) and then the bag line length $l_B$:
$$
l_B = B_N - \mathrm{EDS}_O \times \mathrm{EDS}_r \times 10.
$$

As with the passenger queue, we bring additional EDS machines into service based on need. When the queue grows longer than the optimal line length $l_{BO}$, we add the appropriate number of EDSs to the system ($\Delta\mathrm{EDS}$):
$$
\Delta\mathrm{EDS} = (l_B - l_{BO}) \times 10 \times \mathrm{EDS}_r.
$$

Conversely, we remove machines when the line length falls below the optimal length.
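The EDS queue update can be sketched as below. The per-interval bag arrivals are hypothetical, and we read the machine-adjustment rule as adding one machine per $10 \times \mathrm{EDS}_r$ bags of excess queue (each machine clears that many bags per interval); machine removal is omitted for brevity.

```python
import math

# Sketch of the EDS queue update, one 10-min interval at a time.
EDS_RATE = 3.0        # bags/min per machine (180 bags/h, from the text)
AVAILABILITY = 0.92   # fraction of machines operational
L_BO = 20             # optimal (maximum acceptable) bag line length

def step(queue, bags_in, machines):
    """Serve the queue for one interval, then adjust the machine count."""
    operational = AVAILABILITY * machines
    queue = max(0.0, queue + bags_in - operational * EDS_RATE * 10)
    if queue > L_BO:
        # One extra machine per 10 * EDS_RATE bags of excess queue
        # (our reading of the Delta_EDS rule; removal omitted here).
        machines += math.ceil((queue - L_BO) / (10 * EDS_RATE))
    return queue, machines

queue, machines = 0.0, 10
for bags in [150, 300, 450, 300, 150]:   # hypothetical arrivals per interval
    queue, machines = step(queue, bags, machines)
```

Running the sketch shows the intended behavior: the machine count ratchets up during the surge, and the queue drains back to zero by the end of the window.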
# Human-Operated EDS

Because of the 30% false-positive rate of the EDSs, we run positives through a human-controlled EDS, which provides a more thorough and accurate inspection [Butler and Poole 2002, 4]. This stage follows the same process and equations as the initial baggage screening on the EDSs. However, the flow rate is reduced to 1.2 bags/min, since it takes 50 s for a human-operated EDS to inspect a bag [Recommended Security Guidelines ..., 102]. So $\mathrm{EDS}_{rH} = 1.2$ and we have
$$
\max I_{RH} = (1.2\ \text{bags/min}) \times \mathrm{EDS}_H \times (10\ \text{min}).
$$

Since only 30% of bags screen as positive, the number of bags to inspect is now 30% of the original number:
$$
l_{BH} = 0.3 B_N - \mathrm{EDS}_H \times \mathrm{EDS}_{rH} \times 10.
$$

Human-operated machines are added and removed based on need and queue length:
$$
\Delta\mathrm{EDS}_H = (l_{BH} - l_{BOH}) \times 10 \times \mathrm{EDS}_{rH}.
$$

Bags that pass inspection are routed to their planes; bags that fail are routed to hand inspection.

# Hand Inspection

We pull off the 30% of bags that register positive with the human-operated EDS inspection (9% of all bags) to hand inspect, as an additional safety measure to double-check the machines and human operators for accuracy. A number $\mathrm{Hand}$ of trained inspectors inspect at a much slower rate $\mathrm{Hand}_r$: 0.286 bags/min, or 3.5 min/bag, the average of 2 and 5 min/bag [Butler and Poole 2002, 2]:
$$
\max I_{R\mathrm{Hand}} = (0.286\ \text{bags/min}) \times \mathrm{Hand} \times (10\ \text{min}).
$$

The length of the line of bags awaiting hand inspection is
$$
l_{B\mathrm{Hand}} = 0.09 B_N - \mathrm{Hand} \times \mathrm{Hand}_r \times 10.
$$

Again, inspectors are added or removed as needed, and the number of bags left in the system is tracked throughout the 190 min of peak time:
$$
\Delta\mathrm{Hand} = (l_{B\mathrm{Hand}} - l_{BO\mathrm{Hand}}) \times 10 \times \mathrm{Hand}_r.
$$

From here, bags that test negative go to planes, while bags with explosive devices or compounds go to Explosive Ordnance Disposal teams.

# Movement Phase

After a bag has been inspected and cleared, it is routed to its flight. The time that it takes a bag to get to its flight after inspection is the time to get to the plane (5 min, say) plus the time to be loaded into the plane (10 min, say). Hence we need to ensure that all bags are through inspection some 15 min before departure. However, we are not sure of this exact number and do not have any supporting data; so, to play it safe, we ensure that 95% of bags are finished being processed 30 min before departure, so that there will be no flight delays because of screening.

# Results and Discussion

Using Microsoft Excel, we simulate the model formed from the equations and theory described above for both airports A and B. The results are given below.

# Airport A

Using the initial conditions for airport A given in the problem statement, we calculate the optimal numbers of counter workers, automated EDS machines, human-operated EDS machines, and human inspectors required to meet the goal of 95% of passengers and baggage processed by 30 min prior to departure. The last initial conditions needed are the maximum acceptable line lengths shown in Table 2.

Table 2. Optimal line lengths for airport A.
| Line | Optimal length |
| --- | --- |
| Counter line | 10 people |
| Bag line | 20 bags |
| Human bag scan line | 12 bags |
| Hand search line | 2.86 bags |
For airport A, 16 EDSs are required to handle peak-hour traffic. In addition, 35 ticket agents, 7 EDS operators, and 8 human inspectors are needed.

Looking deeper at the average line lengths throughout the whole peak-hour process, we get Table 3, which confirms that the line lengths stay below the maximum acceptable lengths.

# Costs

The cost of installing the EDS machines at airport A is $17.6 million, and the worker cost is $1,639 per 190-min peak period; the annual worker cost will not exceed $4 million.

Table 3. Line lengths for airport A.
| Line | Ave. | Max. | Units |
| --- | --- | --- | --- |
| Counter line | 6.0 | 20 | people |
| Bag line | 17.5 | 21 | bags |
| Human bag scan line | 6.9 | 11 | bags |
| Hand search line | 1.6 | 2.86 | bags |
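As a rough aggregate sanity check on this configuration (a sketch; it ignores the time profile of arrivals, which is exactly what the interval-by-interval simulation captures), each stage's total capacity over the 190-min peak period comfortably exceeds its share of the roughly 2,733 checked bags:

```python
# Aggregate capacity check for the airport A configuration:
# 16 EDSs (92% operational), 7 human-operated EDSs, 8 hand inspectors.
PEAK_MIN = 190
bags = 5396 * 0.804 * 0.45 * 1.4   # checked bags entering inspection (~2733)

stages = [
    ("EDS",       16 * 0.92 * 3.0, 1.00 * bags),  # rate (bags/min), demand (bags)
    ("human EDS",  7 * 1.2,        0.30 * bags),  # 30% of bags screen positive
    ("hand",       8 * 0.286,      0.09 * bags),  # 30% of those go to hand search
]
for name, rate_per_min, demand in stages:
    capacity = rate_per_min * PEAK_MIN
    assert capacity > demand, name   # every stage has headroom in aggregate
```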
# Airport B

Similar analysis shows that airport B requires 18 EDSs, 38 ticket agents, 8 EDS operators, and 9 human inspectors to achieve the same maximum acceptable line lengths. The corresponding average line lengths are 5.9, 19.1, 9.0, and 1.4, and the costs are $19.8 million for equipment and $1,807 in worker costs per 190-min period.

# Flight Departure Scheduling Model

There are two major considerations for a departure schedule:

- We must distribute flight operations throughout the peak hours so that the runways are never more crowded at one time than at another.
- We must distribute the various flights and their sizes so that the number of people departing at any given time during the peak hours can be represented by a uniform distribution.

The time before departure at which passengers arrive is independent of the number of seats on the flight.

First, we build a matrix of possible flight departure times. We split each hour into six 10-min windows, each containing four smaller 2.5-min windows. Next, we record every flight's passenger load.

On a spreadsheet, we fill in the matrix, inserting flights into the schedule so that flights are evenly distributed by number and by size of passenger load. We accomplish this by inserting the largest flights first; after the largest flights are entered, we use the small flights to fill in passenger-load disparities. Finally, to balance flight loads and passenger loads, we can always swap flights once we have placed them. For both airports, we achieve the goal that every 10-min and 2.5-min interval has an equal flight-operation rate, and the number of passengers departing in each 10-min interval is nearly constant.

# Recommendations

We recommend against a combined EDS/ETD system.

Such a system provides meager security improvements over an all-EDS system, increases fixed costs, and increases the possibility of baggage delays. We give details of our reasoning.
**Re-screen positives.** Because of the high false-positive rate (30%) of EDS machines, bags that fail a first EDS test should be given a second one before any further screening.

**Use ETDs on a 10% sample.** Airports A and B can incorporate ETD machines into the proposed EDS model by replacing each human inspection point with an ETD machine. The ETD machines should inspect all bags that fail the first two EDS machine tests. Since ETDs use mass-spectrometry technology, as opposed to the computed-tomography technology that EDS machines use, the airports' security systems should also send 10% of all bags that pass the initial EDS machines into ETDs, to try to find explosives that EDSs normally will not find, such as minute traces of explosive residue on the lining of a bag.

**Open bags for ETD inspection.** ETDs detect explosives most accurately when security agents use the "open bag" form of trace detection. It takes additional time to open bags and prepare them to enter the ETD machine, but doing so reduces the machine's false-negative rate (the rate at which the machine fails to recognize explosive material) by nearly 50%. Open-bag ETD inspections take 2-2.5 min per bag, which is less than or equal to the time for a physical human inspection (2-5 min) [Butler and Poole 2002]. This means that it will not take more time to send the bags that fail the two EDS tests through an ETD machine than for a human to inspect each of these bags. It will, however, take more time to test the 10% of bags that pass the initial EDS machines.

**How many ETDs?** In addition to its 16 EDS machines, airport A will require 5 ETD machines; airport B, in addition to its 18 EDS machines, will require 4 ETD machines.

**Greater possibility of delays.** Both airports can still meet departure schedules even if they conduct the additional ETD inspections on 10% of bags that pass the EDS machines.
However, the length of the lines at an ETD machine suggests a greater possibility of delay.

**Actual costs.** According to Butler and Poole [2002, 7], installing 50,480 ETD machines would cost $3.0 billion, or $59,500 per machine. However, the same report indicates that the installation cost of 6,000 EDS machines is $6.0 billion, or $1 million per EDS machine. These costs include both the cost of the machines and the associated cost of placing them in the airport, and they substantially agree with those in the problem statement.

**Cost comparison.** One can argue that ETD machines cost nearly ten times as much to operate as EDS machines, because they inspect bags at one-tenth the rate of EDS machines. However, the ETD system that we suggest does not require any human inspectors. The costs (fixed and variable) of EDS and of EDS/ETD do not differ substantially, for either airport A or airport B.

**Security is not enhanced.** The security benefits of incorporating ETD machines into an all-EDS system do not appear significant. ETD machines are less accurate than EDS machines in detecting explosive materials [Kauvar et al. 2002] and fail to identify explosive materials in 15% of bags that actually contain them [Butler and Poole 2002].

# We recommend investment in development of new technologies.

We specifically suggest quadrupole resonance and neutron-based detection systems, which have the potential to lower costs while increasing security-system effectiveness. We describe various research opportunities.

**Quadrupole resonance:** Quantum Magnetics is conducting research on quadrupole resonance (QR) to detect explosives, contraband, and weapons. QR-based technology may be cheaper, faster, and more accurate than EDS and ETD machines. QR detection systems are very simple to operate: A red light appears if a bag contains hidden explosives or biochemical agents; a green light appears if it contains neither.
The simplicity of use reduces the possibility of human error, which can occur with EDS technology if the operator poorly judges the machine's X-ray images. QR technology may also reduce airports' variable costs if they do not have to compensate security personnel for the technical education that EDS operators receive [InVision Technologies 2002].

**Neutron-based detection:** Neutron-based devices can quickly detect hidden substances, such as liquid explosives hidden in a sealed container or plastic explosives stuffed inside a baseball. The HiEnergy Technologies Corporation has conducted tests in which neutron-based devices detected concealed explosives in less than 10 s, nearly half the time that it takes an EDS machine to inspect a bag. Like QR technology, neutron-based systems determine the chemical formula of hidden substances, which reduces the likelihood of the false positives that occur in EDS machines when the machine cannot accurately distinguish explosives from other objects with similar sizes and densities. Neutron-based technology also eliminates the need for drawn-out human interpretation [Fast neutron technology ... 2002].

**Elastic (coherent) X-ray scatter:** This technology is currently in use in Germany at the Cologne, Düsseldorf, and Munich airports. X-ray scatter detection systems can inspect only 60-240 bags/h [Butler and Poole 2002, 3], but they have a false-positive rate well below 1% [Automatic detection ... 1998], much more efficient than EDS machines with their 30% false-positive rate.

**Millimeter and microwave imaging:** This technology can improve overall airport security but is more applicable to inspecting passengers than baggage. Millimeter and microwave systems use temperature and emissivity to create images: the greater the contrast, the sharper the image. This is an excellent way to detect a passenger carrying a gun, since human bodies have high emissivity while metal objects have low emissivity [Murray 2001].
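A quick arithmetic check of the figures quoted in this section (a sketch; the installation costs are from Butler and Poole [2002], and the false-positive rates are those cited above for EDS and elastic X-ray scatter):

```python
# Per-machine installation costs implied by Butler and Poole [2002]:
etd_per_machine = 3.0e9 / 50_480   # about $59,400 per ETD machine
eds_per_machine = 6.0e9 / 6_000    # $1,000,000 per EDS machine

# Secondary inspections triggered per 1,000 explosive-free bags at the
# quoted false-positive rates (30% for EDS, under 1% for X-ray scatter):
eds_secondary = 1000 * 0.30   # 300 bags re-screened
xray_secondary = 1000 * 0.01  # at most 10 bags re-screened
```

The per-machine figures confirm the roughly 17-to-1 installation-cost gap, and the last two lines show why a sub-1% false-positive rate shrinks the downstream inspection load by more than an order of magnitude.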
**Recommendation for further research:** Our model shows that if manufacturers can make enough EDS systems, and if airports can install them and train enough operators, then airports should be able to screen all bags checked in at least 30 min before departure. However, screening all checked baggage does not guarantee detecting all explosives; EDS and ETD machines cannot be a permanent solution. Investing in research and development opportunities is the only way to ensure that airlines are safe while minimizing space and labor costs. Ultimately, "The common goal should be a fully functioning air transportation system that provides passengers with safe, efficient, and convenient means of carrying out the nation's business" [Kauvar et al. 2002, 7].

# References

Adding flexibility to the December 31, 2002, deadline to screen all checked baggage is a common sense necessity. 2002. American Association of Airport Executives and Airports Council International-North America: Legislative Affairs. http://www.airportnet.org/depts/federal/budget/mythfact.pdf. Accessed 9 February 2003.
Airlines and airports: Baggage and cargo handler. n.d. http://www.jobmonkey.com/airline/html/security_scrreener.html. Accessed 9 February 2003.
Airlines and airports: Security screener. n.d. http://www.jobmonkey.com/airline/html/security_screener.html. Accessed 9 February 2003.
Airlines and airports: Ticket agent. n.d. http://www.jobmonkey.com/airline/html/ticket_agent.html. Accessed 9 February 2003.
Federal Aviation Administration. 1998. Airport Advisory Circular 150/5360-13. http://www2.faa.gov/arp/pdf/5360-13.pdf. Accessed 7 February 2003.
Airport Diagram 03023: AL-166 (FAA) Chicago-O'Hare International (ORD). n.d. National Aeronautical Charting Office. http://www.naco.faa.gov/content/naco/online/airportdiagrams/00166AD.pdf. Accessed 8 February 2003.
Associate Administrator for Civil Aviation Security. 2001.
Recommended Security Guidelines for Airport Planning, Design and Construction. Washington, DC: Office of Civil Aviation Security, Policy and Planning, Federal Aviation Administration.
Aviation Statistics. 2002. Midwest Aviation Coalition (May 2002). http://www.midwestaviation.org/html/statistics.html. Accessed 8 February 2003.
Butler, Vigo, and Robert W. Poole, Jr. 2002. Rethinking checked-baggage screening. July 2002. Los Angeles, CA: Reason Public Policy Institute. www.rppi.org/baggagescreening.html. Accessed 8 February 2003.
EDS bag screening analysis. 2002. McCarran International Airport: Clark County Department of Aviation (revised 17 January 2002). http://aci-na.org/docs/EDS%20Bag%20Screening%20Analysis.ppt. Accessed 8 February 2003.
Fast neutron technology detects concealed liquid explosives. 2002. HiEnergy Technologies: News release, September 2002. http://www.hienergyinc.com/press/9-24-02.pdf. Accessed 8 February 2003.
Hilkevitch, Jon, and Rogers Worthington. 2002. Off-peak hours offer O'Hare wiggle room. Chicago Tribune (4 May 2002). http://www.thetracon.com/news/trib50400.htm. Accessed 8 February 2003.
InVision Technologies, Inc. subsidiary Quantum Magnetics demonstrates next-generation detection systems. 2002. InVision Technologies: News release, June 2002. http://www.invision.tech-com. Accessed 8 February 2003.
Kauvar, Gary, Bernard Tostker, and Russell Shaver. 2002. Safer skies: Baggage screening and beyond. White paper. Santa Monica, CA: RAND.
Leaving on a jet plane. n.d. McCarran International Airport: Clark County Department of Aviation. http://aci-na.org/docs/3. Accessed 8 February 2003.
Merringer, James L. 1996. A wing and a fare. Illinois Periodicals Online (April 1996). http://www.lib.niu.edu/ipo/ii960430.html. Accessed 8 February 2003.
Mooney, Douglas, and Randall Swift. 1999. A Course in Mathematical Modeling. Washington, DC: Mathematical Association of America.
Murray, Charles J. 2001.
Wanted: Next-gen tech for weapons detection. Microwave Engineering Online (September 2001). http://www.mwee.com/mwee_news/0EG20010917S0048. Accessed 9 February 2003.
Nice, Karim. n.d. How baggage handling works. How Stuff Works [database online]. http://www.howstuffworks.com/baggage-handling.htm. Accessed 8 February 2003.
Rockwell Software. 2000. Arena Basic [software]. Sewickley, PA.
Spielman, Fran, and Nancy Moffett. 2002. City has huge stake in airline. Chicago Sun-Times (31 October 2002): [newspaper online]. http://www.suntimes.com/specialSections/ual/cst-nws-united31.html. Accessed 8 February 2003.
Strecker, H. 1998. Automatic detection of explosives in airline baggage using elastic X-ray scatter. *Medica Mundi* 42 (2): 30-33. http://www.medical.philips.com/main/news/assets/docs/medicamundi/mm_vol42_no2/mm_vol42_no2_article_autmoatic_detector_of.pdf.
Uncle Sam wants you. 2002. CBSNews.com (4 March 2002). http://www.cbsnews.com/stories/2002/03/04/national/printable502912.shtml. Accessed 8 February 2003.
Winston, Wayne L. 1994. Operations Research: Applications and Algorithms. Belmont, CA: Duxbury Press.

# Feds with EDS: Searching for the Optimal Explosive Scanning System

Robert T. Haining

Dana M. Lindemann

Neal P. Richardson

Wake Forest University

Winston-Salem, NC

Advisor: Bob Plemmons

# Summary

A May 2002 Transportation Security Administration (TSA) press release describes pilot testing of different baggage screening programs at three airports [Melendez 2002]. One airport used all Explosive Trace Detection (ETD) machines, one used all Explosive Detection System (EDS) machines, and a third used half of each. We show that these pilot tests were unnecessary.

We focus on maximizing the productivity of the machines and the amount of time that they have to process the peak-hour surge in checked bags. We show the importance of proper flight-schedule planning and propose an ideal method for scheduling.
The implementation of the model's conclusions will save money on purchasing and installing machinery. Security will be paramount; minimizing passenger inconvenience will be the secondary concern, but under our model we eliminate, or at least minimize, expected delays.

By extending our model, we can also potentially find the optimal amount of time before takeoff by which passengers should be required to arrive at the airport. To minimize cost, this time may need to be increased or decreased, depending on experimental data.

# General Assumptions

- We assume all data as given in the problem statement.
- Flight delays due to baggage inspection are unsatisfactory. However, a 15-minute delay is considered on time, according to FAA policy [Mead 2002].
- The percentage of planes that are cancelled before baggage is checked is negligible.
- There are no extreme unforeseen circumstances, e.g., striking workers, that might affect baggage screening and flight departures.
- The number of passengers who check more than two bags is negligible.
- All airports have EDS or other scanning machines functional, so we do not need to rescan bags from connecting flights originating elsewhere.
- A system for queuing and prioritizing bags is in place.
- Prioritizing negates the benefits of passengers arriving earlier than the mandatory time.
- There is no significant delay from having to re-scan or hand-examine bags due to false positives.
- The throughput rate per EDS machine can be increased to 210 bags/h/machine by training the operators.
- We ignore the cost of replacing EDS or ETD machines due to defects and breakdowns. We also assume that performing scheduled maintenance on these machines reduces the chance of machine failure.
- We ignore potential lines at the airline check-in desk.
# The Model

$$
Q_{\mathrm{EDS}} = \left\lceil \frac{\phi \sum_{i=1}^{8} \left( t_i n_i P_{\text{seats filled}_i} \right)}{\Omega \ell (1 + \tau - \mu)} \right\rceil \tag{1}
$$

where

$Q_{\mathrm{EDS}} =$ number of EDSs needed;

$\ell =$ throughput rate of each machine (bags/h/machine);

$\tau =$ minimum early passenger arrival time (h), i.e., how long before departure the airline closes bag check-in;

$\mu =$ travel time of one bag between EDS and the plane (h);

$t_i =$ number of seats on a flight of type $i$;

$n_i =$ number of flights of type $i$ during the peak hour;

$P_{\text{seats filled}_i} =$ estimated percentage of seats filled on flights of type $i$;

$\phi =$ summation shift constant, defined below;

$\Omega =$ percentage of time that the EDS is operational (given as $92\%$).

# Derivation

We are dealing with a model of rates: $B_{\mathrm{peak}}$, the number of bags in the peak hour, equals the rate of bags processed multiplied by the amount of time $T$. The rate of bags per hour is $\ell$, the number of bags that one machine can process in one hour, times $Q_{\mathrm{EDS}}$, the number of EDSs. We combine these equations and solve, getting

$$
Q_{\mathrm{EDS}} = \frac{B_{\mathrm{peak}}}{\ell T}.
$$

Since each EDS is operational only a portion $\Omega$ of the time, we must discount the time by this constant, yielding

$$
Q_{\mathrm{EDS}} = \left\lceil \frac{B_{\mathrm{peak}}}{\Omega \ell T} \right\rceil .
$$

We add the ceiling brackets because the number of EDSs must be whole. We now derive formulas for $B_{\mathrm{peak}}$ and $T$.

$B_{\mathrm{peak}}$

The number of bags on one flight is the number of passengers times the average number of bags that each checks.
The average number of bags per passenger, $\bar{b}$, is $b_1 + 2b_2$, where $b_1$ and $b_2$ are the proportions of passengers who check one bag and two bags, respectively.

The problem statement lists seating capacities of eight flight types, but the number of passengers per flight depends on the probability that those seats are filled, $P_{\text{seats filled}_i}$. Multiplying the number of bags on one flight, $\bar{b} t_i P_{\text{seats filled}_i}$, by the number of flights of the same type departing in the peak hour, $n_i$, gives the total number of bags on all flights of type $i$. Summing over all eight flight types, we arrive at

$$
B_{\text{peak}} = \sum_{i=1}^{8} t_i P_{\text{seats filled}_i} \bar{b} n_i.
$$

However, a couple of other factors need consideration.

Flight cancellations. The problem statement says that $2\%$ of flights are cancelled daily. However, in our flying experience, a flight is generally not cancelled until after the bags have been checked and the passengers are waiting at the gate or perhaps already on the plane. When forced to, airlines tend to delay flights as long as possible, cancelling only after all other options have been exhausted. Thus, we assume that the cancellation of flights does not affect the number of checked bags to be scanned.

Connecting passengers. Since airports must scan all bags, and since the EDS machines are typically in the passenger check-in area, we assume that bags of connecting passengers do not need to be rescanned, in agreement with current FAA policy. We define the percentage of non-connecting passengers, i.e., those originating in our airport, as $P_{\mathrm{orig}}$.

Including these factors, we get

$$
B_{\text{peak}} = \sum_{i=1}^{8} t_i P_{\text{orig}} P_{\text{seats filled}_i} \bar{b} n_i.
$$

Defining the summation shift constant $\phi = \bar{b} P_{\mathrm{orig}}$, we have

$$
B_{\text{peak}} = \phi \sum_{i=1}^{8} t_i P_{\text{seats filled}_i} n_i.
$$

Substituting this into the formula for $Q_{\mathrm{EDS}}$, we get

$$
Q_{\mathrm{EDS}} = \left\lceil \frac{\phi \sum_{i=1}^{8} \left( t_i n_i P_{\text{seats filled}_i} \right)}{\Omega \ell T} \right\rceil,
$$

with $T$ yet to be shown to equal $(1 + \tau - \mu)$.

# The Cost Function Caveat

The ultimate goal is to minimize cost. This model's cost function (in thousands of dollars) for airport A is $Q_{\mathrm{EDS}}(1100 + \omega)$, and for airport B, $Q_{\mathrm{EDS}}(1080 + \omega)$, where $\omega$ is the operating cost per machine, and 1100 and 1080 are the costs to purchase and install a machine at each airport, according to data in the problem statement. To minimize cost, we minimize $Q_{\mathrm{EDS}}$, either by reducing $B_{\mathrm{peak}}$, increasing $\ell$, or increasing $T$.

Minimizing $B_{\text{peak}}$ would involve having passengers check fewer bags or else reducing the number of passengers flying during the peak hour, via either flight cancellation or rescheduling to non-peak times. Flight cancellation would involve lower airline revenue and fewer choices of flights for consumers and is clearly undesirable. Rescheduling to non-peak times seemingly would be desirable; but surely the airlines and airports have already tackled this issue, so further progress in rescheduling cannot be expected. Finally, requiring passengers to check fewer bags (which the threat of longer wait times might indirectly accomplish) would be unpopular; furthermore, merely suggesting that passengers bring less checked luggage cannot be relied upon.

Maximizing $\ell$, the number of bags per hour that each machine can process. We assume that the range between 160 and 210 bags/h depends on the competence of the operator.
Thus, by instituting a more comprehensive and extensive training regimen, we can hope to increase $\ell$. We also assume that the savings from needing fewer machines outweigh the costs of increased training. Acknowledging that other factors could limit the machines' output, we estimate $\ell$ to be a modest 190 bags/h/machine.

Maximizing $T$. All airlines have a time $\tau$ before departure after which a passenger may not check in and board. Taking into account data supplied in the problem statement, we have $\tau = 45$ min. By then, all bags will be present, so EDS operators can be guaranteed $\tau - \mu$ min to process bags for a flight, where $\mu$ is the time to load the bags onto the plane. As we have no data, we arbitrarily set $\mu = 6$ min, so EDS operators have at least 39 min (0.65 h) to process the bags. Our task is to maximize this amount of time.

If the peak hour were the only hour in which flights departed, EDS processing for the peak hour could begin 45 min before the first flight, and the last bag of the last flight would have to finish being processed 6 min before the end of the hour. Thus, we have at most 1 h 39 min to process all of the bags. Therefore, the total time is $T = 1 + \tau - \mu$.

To use this maximum time interval best, we need a steady supply of bags coming in, so that the machines can operate at maximum output for the entire interval. As we will show, we can come close to a constant flow.

We now revoke the assumption that the peak hour is the only hour of flights. The bags in the hours immediately before and after the peak, by definition fewer than $B_{\mathrm{peak}}$, can be processed in less time than needed to process $B_{\mathrm{peak}}$.
When the peak hour's first bags arrive 45 min before the peak hour begins, we cannot yet assume that the EDSs will be available to process them, because flights departing during the hour before the peak will have bags that still need to be processed. Similarly, we cannot assume that the EDSs can process our peak hour's bags all the way up to the last moment, since the bags of the next hour's first flight will likely require more than a few minutes to process. So, we should expect encroachments on the 1.65-hour maximum time interval. However, both the highest morning and evening peak hours are sufficiently greater in volume than the neighboring hours [Bureau of Transportation Statistics n.d.], so we can operate at the maximum time, 1.65 h, without fear of other periods' effects. So, we define $T = (1 + \tau - \mu)$ and arrive at (1).

# Solving for the Optimal $Q_{\mathrm{EDS}}$

# Calculating $B_{\mathrm{peak}}$

We examine each component of the equation

$$
B_{\text{peak}} = \sum_{i=1}^{8} t_i n_i P_{\text{seats filled}_i} \bar{b} P_{\text{orig}}.
$$

The problem statement tells us that $20\%$ of passengers check no bags, $20\%$ check just one bag, and $60\%$ check two bags. So, the average number of bags per passenger is $\bar{b} = (.2)(0) + (.2)(1) + (.6)(2) = 1.4$.

Using the given proportions of seats filled for the various types of flights, plus data from the T-100 Domestic Segment table in the Large Air Carriers database from the Intermodal Transportation Database [Bureau of Transportation Statistics n.d.], we calculate the averages for each flight type $i$:

$$
P_{\text{seats filled}_i} = \begin{cases} .8679, & 1 \leq i \leq 3; \\ .8194, & 4 \leq i \leq 7; \\ .7705, & i = 8. \end{cases}
$$

On average, $15\%$ of passengers are from connecting flights, so $P_{\mathrm{orig}} = .85$.

Our equation has now become

$$
B_{\text{peak}} = (.85)(1.4) \sum_{i=1}^{8} t_i n_i P_{\text{seats filled}_i} = 1.19 \sum_{i=1}^{8} t_i n_i P_{\text{seats filled}_i}.
$$

Substituting in the values of $t_i$ and $n_i$ for airports A and B (from the Technical Information Sheet) and our values for $P_{\text{seats filled}_i}$, we get

$$
B_{\text{peak at A}} = 5286 \text{ bags}, \quad B_{\text{peak at B}} = 5683 \text{ bags}.
$$

# Calculating $Q_{\mathrm{EDS}}$

An EDS is operational $\Omega = 92\%$ of the time. We use $\ell = 190$ bags/h/machine as an average throughput rate. We have $\tau = 0.75$ h and $\mu = 0.1$ h. Using these values and the respective values of $B_{\mathrm{peak}}$ for each airport, we arrive at

$$
Q_{\text{EDS for A}} = 19, \quad Q_{\text{EDS for B}} = 20.
$$

# Exploring $\phi$

During holidays, passengers are more likely to carry more bags. We examine the extreme of each passenger checking two bags, which raises $\bar{b}$ to 2 and hence $\phi = \bar{b} P_{\mathrm{orig}}$ to 1.7. Table 1 shows the effect on delays for airport A; results for airport B are similar.

# Table 1.

Delays (in min) for airport A, for various machine speeds $\ell$, values of $\phi$, and proportions of seats filled. The value $\phi = 1.7$ corresponds to each passenger checking two bags.
| $\ell$ | $\phi = 1.19$: max | est. | min | $\phi = 1.7$: max | est. | min |
|------|------|------|-----|------|------|-----|
| 160 | 39 | 14 | 0 | 98 | 63 | 21 |
| 190 | 17 | 0 | 0 | 67 | 37 | 2 |
| 210 | 6 | 0 | 0 | 51 | 24 | 0 |
As expected, delays are greater when each passenger checks two bags. In addition, more seats will probably be filled during such periods. However, since these busiest times of the year occur so rarely, we believe it is not worth buying extra machines to handle the overload. A possible response to increased baggage is to turn to more temporary measures, such as renting other portable screening devices or hiring extra workers or K-9 teams.

In the worst-case scenario, on the busiest day of the year at airport A or B, when every flight in the peak hour is full and the EDSs are operating at their highest rate ($\ell = 210$), there will be only about 50 min of delay. We believe this is acceptable.

# Scheduling Algorithm

We developed the following algorithm to schedule the departure of different flight types within the peak hour so that the number of passengers, and consequently the number of bags, is evenly distributed.

1. Obtain data on the number of flights and seats on each flight.
2. Modify the seat data to represent the average number of people on each flight. To do this, multiply by the estimated percentage of seats filled for the given flight's type.
3. Calculate the total number of people on all flights during the peak hour.
4. Determine the desired number of time intervals during the peak hour. We chose 6 as an appropriate number.
5. Determine the average number of people to fly during each time interval, i.e., the total number of people divided by 6. Allocate that many spaces to each interval.

6. Do the following $n$ times (where $n =$ total number of flights):

(a) Find the flight with the most people on it.
(b) Starting at the first interval and searching sequentially through to the last, find the time interval with the greatest number of spaces still available.
(c) Assign said flight to this time interval.
(d) Subtract the number of people on said flight from the interval's available spaces.

7.
Make sure there is a flight at :00 and one at :59, so as to maximize the time interval available for processing and allow the use of the machines at full capacity. To do this:

(a) For the first 30 minutes, start at the beginning of each time interval and evenly distribute the interval's assigned flights in order of decreasing flight capacity and increasing time.
(b) For the second half hour, start at the end of each time interval (:39, for instance) and evenly distribute the interval's assigned flights in order of decreasing flight capacity and decreasing time.

Essentially, we are evenly distributing the flights scheduled in this peak hour among six 10-minute intervals. The flights were modified to represent the average number of passengers per flight, rather than the number of seats per flight, since the former has more impact on the number of bags scanned. The manner in which the flights are distributed among those intervals is analogous to filling a jar with different-sized rocks. Begin by adding the largest rocks, then smaller rocks, then pebbles, then sand, and finally water. With each additional step, you are filling in gaps. If you start with water and fill up the jar, then there is no room left for anything else. Thus, we start with the larger-capacity flights and work our way down.

We wrote a computer program in C++ to implement the algorithm. [EDITOR'S NOTE: We omit the program code.]

Figure 1 shows the number of bags still left for the EDSs to process at airport A after each minute, according to our algorithm. For airport B, the results are similar.

![](images/3008b1899aada8b5d59609c97cac2704c560444944da5021e00b186c5c819c28.jpg)
Figure 1. Bags left to process at airport A, as a function of time, according to the flight-distribution algorithm.
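Since the program code is omitted, here is a minimal Python sketch of steps 1–6 of the algorithm (the greedy assignment of flights to intervals). It is not the authors' C++ program, and the flight list below is hypothetical, not the contest data.

```python
def schedule_flights(flights, intervals=6):
    """Greedy assignment of flights to equal time intervals (steps 1-6).

    flights: list of (seats, fill_rate) pairs.
    Returns (loads, assignment): loads[j] is the passenger load of
    interval j; assignment[i] is the interval given to flight i.
    """
    # Step 2: expected number of people on each flight.
    people = [round(seats * fill) for seats, fill in flights]
    # Steps 3-5: allocate an equal share of the total to each interval.
    remaining = [sum(people) / intervals] * intervals
    loads = [0] * intervals
    assignment = [None] * len(people)
    # Step 6: biggest flight first, into the interval with the most
    # space still available (the earliest such interval wins ties).
    for i in sorted(range(len(people)), key=lambda k: -people[k]):
        j = max(range(intervals), key=lambda k: remaining[k])
        assignment[i] = j
        remaining[j] -= people[i]
        loads[j] += people[i]
    return loads, assignment

# Hypothetical peak-hour flights: (seats, estimated proportion filled).
flights = [(300, 1.0), (250, 1.0), (200, 1.0), (150, 1.0),
           (120, 1.0), (100, 1.0), (90, 1.0), (80, 1.0),
           (70, 1.0), (60, 1.0), (50, 1.0), (40, 1.0)]
loads, assignment = schedule_flights(flights)
```

This is the "rocks before sand" idea in code: sorting by decreasing size places the large flights first, and the smaller ones fill the remaining gaps.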
# Cost Analysis of EDS and ETD

$$
C(\alpha, \omega, Z) = B_{\mathrm{peak}} \left( \frac{\alpha (1000 + c_i + \omega Z)}{\Omega_{\mathrm{EDS}} \ell_{\mathrm{EDS}} (1 + \tau - \mu)} + \frac{(1.2 - \alpha)(45 + 10 \omega Z)}{\Omega_{\mathrm{ETD}} \ell_{\mathrm{ETD}} (1 + \tau - \mu)} \right)
$$

where

$C =$ total cost of the recommended system;

$B_{\mathrm{peak}} =$ total number of bags during the peak hour;

$\alpha =$ proportion of $B_{\mathrm{peak}}$ that the EDSs will screen;

$\omega =$ annual operational cost of an EDS; an ETD machine's is 10 times this amount;

$Z =$ time horizon (years);

$c_i =$ installation cost of an EDS, dependent on airport (thousands of dollars);

$\ell =$ throughput rate of each machine (bags/h/machine);

$\Omega =$ percentage of time that the machines are operational;

$\tau =$ minimum early passenger arrival time (h);

$\mu =$ travel time of one bag between EDS and the plane (h);

1000, 45 = purchase costs of an EDS and an ETD machine, respectively (thousands of dollars).

We also assume that the installation cost of the ETDs is negligible.

# Deriving the Model

Requiring that $20\%$ of all bags be screened through both an EDS and an ETD machine increases the effective number of bags to screen by $20\%$. The number of bags that go through the EDSs, $B_{\mathrm{EDS}}$, plus the number of bags that go through ETD screening, $B_{\mathrm{ETD}}$, must equal this effective number of bags. Therefore,

$$
B_{\mathrm{eff}} = 1.2 B_{\mathrm{peak}} = B_{\mathrm{EDS}} + B_{\mathrm{ETD}}.
$$

The time to screen all these bags remains the same as in our previous model, and therefore $\tau$ and $\mu$ have the same values as given earlier. Likewise, the equation to determine the number of EDSs remains the same, and the number of ETD machines can be determined using the same equation with parameters for ETDs substituted.
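To see how this cost function behaves, the following Python sketch evaluates $C(\alpha, \omega, Z)$ for airport A. It assumes $B_{\mathrm{peak}} = 5286$, $c_i = 100$, an ETD throughput of 47 bags/h/machine with $98\%$ availability, and $\omega = \$50$K/year, values the analysis adopts; it is an illustration, not the authors' Maple computation.

```python
T = 1 + 0.75 - 0.1   # hours available to process the peak hour's bags

def cost(alpha, omega, Z, B_peak=5286, c_i=100):
    """Total cost C(alpha, omega, Z) in thousands of dollars."""
    avail_eds, rate_eds = 0.92, 190   # Omega_EDS, ell_EDS
    avail_etd, rate_etd = 0.98, 47    # Omega_ETD, ell_ETD (assumed)
    eds = alpha * (1000 + c_i + omega * Z) / (avail_eds * rate_eds * T)
    etd = (1.2 - alpha) * (45 + 10 * omega * Z) / (avail_etd * rate_etd * T)
    return B_peak * (eds + etd)

# Over a 3-month horizon, alpha = 0.2 (ETD-led) is cheaper; over a
# 10-year horizon, alpha = 1 (EDS-led) is far cheaper.
short_run = (cost(0.2, 50, 0.25), cost(1.0, 50, 0.25))
long_run = (cost(0.2, 50, 10), cost(1.0, 50, 10))
```

Note that the sketch uses fractional machine counts, whereas the purchased counts must be rounded up; the comparison between the two extremes of $\alpha$ is unaffected.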
# Cost

The initial cost per machine equals the machine cost plus installation cost. EDSs are given as costing \$1 million, while ETD machines cost only \$45K. Luckily, ETD machines are usually fairly small and portable, so we assume their installation costs to be negligible. However, the installation cost of an EDS, $c_i$, is substantial: \$100K for airport A and \$80K for airport B. The annual variable cost of operating the machinery is $\omega$ for an EDS and $10\omega$ for an ETD machine. We adopt a horizon of $Z$ years.

The total cost $C$ is the fixed cost plus the variable cost of each machine over the time horizon. All costs in the following equations are in thousands of dollars.

$$
C(\omega, Z) = Q_{\mathrm{EDS}} \left( 1000 + c_i + \omega Z \right) + Q_{\mathrm{ETD}} \left( 45 + 10 \omega Z \right).
$$

Substituting, we get

$$
C(\omega, Z) = \frac{B_{\mathrm{EDS}} (1000 + c_i + \omega Z)}{\Omega_{\mathrm{EDS}} \ell_{\mathrm{EDS}} (1 + \tau - \mu)} + \frac{B_{\mathrm{ETD}} (45 + 10 \omega Z)}{\Omega_{\mathrm{ETD}} \ell_{\mathrm{ETD}} (1 + \tau - \mu)}.
$$

However, the number of bags going through the EDSs is related to the number going through the ETD machines. In addition, the number of bags going through the EDSs is between $20\%$ and $100\%$ of the total number of peak-hour bags. We represent this relationship by the coefficient $\alpha$, with $0.2 \leq \alpha \leq 1$:

$$
B_{\mathrm{EDS}} = \alpha B_{\mathrm{peak}}.
$$

Substituting into the cost equation, we are left with

$$
C(\alpha, \omega, Z) = B_{\mathrm{peak}} \left( \frac{\alpha (1000 + c_i + \omega Z)}{\Omega_{\mathrm{EDS}} \ell_{\mathrm{EDS}} (1 + \tau - \mu)} + \frac{(1.2 - \alpha)(45 + 10 \omega Z)}{\Omega_{\mathrm{ETD}} \ell_{\mathrm{ETD}} (1 + \tau - \mu)} \right).
$$

Using Maple, we plot $C$ as a function of $\alpha$, keeping $\omega$ constant at an arbitrary \$50K. We can see in Figure 2 that over various numbers of years, the total cost can depend significantly on the number of bags that go through each kind of machine, which depends on $\alpha$.

![](images/870c0da92f4c757fa0c125bfe61730f3d72d3bd6402eaf27c28c862cc9a65f8e.jpg)
Figure 2. Costs over various time horizons for airports A and B.

![](images/13f9201d2ef69b8a4dc6450d5ca28c25bccefdc17b731e36804fbbef8f530ecb.jpg)

For any $Z$, the function $C$ is linear in $\alpha$. Except for the particular $Z$ value that makes the slope zero, only the extreme values $\alpha = 0.2$ and $\alpha = 1$ can yield minimum values for $C$. This means that there are two significant cases to study:

- the EDS-led system, in which EDSs are the first tier of baggage scanning, processing $100\%$ of the bags, and ETD machines are the fail-safe, scanning $20\%$ of the bags; or else, vice versa,
- the ETD-led system, in which ETD machines process $100\%$ and the EDSs scan $20\%$.

We show later that the case of $\alpha$ between these two extremes is undesirable.

Installing an ETD-led system (i.e., $\alpha = 0.2$) would be cost-effective only for a very short time horizon of a few months. This makes sense, since installing an all-EDS system is very expensive, while the high variable cost of operating the ETD machines accumulates little over a short duration. However, after a few months, it is optimal to have $\alpha = 1$, an EDS-led system, since this has minimum cost in the long run. The graphs assume that the annual cost of operating an EDS is $\omega = \$50$K, which may or may not be realistic. A different value of $\omega$ will affect the slopes of the graphs, thereby affecting when $\alpha = 1$ becomes optimal.
Therefore, by finding where the derivative of the cost function with respect to $\alpha$ is zero, we can find the critical turning point of our model at any $\omega$, such that after this time an EDS-led system is more desirable. We have

$$
\frac{\partial}{\partial \alpha} C(\alpha, \omega, Z) = B_{\mathrm{peak}} \left( \frac{1000 + c_i + \omega Z}{\Omega_{\mathrm{EDS}} \ell_{\mathrm{EDS}} (1 + \tau - \mu)} - \frac{45 + 10 \omega Z}{\Omega_{\mathrm{ETD}} \ell_{\mathrm{ETD}} (1 + \tau - \mu)} \right).
$$

Setting this expression equal to 0 and solving for $Z$, we find

$$
Z(\omega) = \frac{1}{\omega} \left( \frac{45\, \Omega_{\mathrm{EDS}} \ell_{\mathrm{EDS}} - (1000 + c_i)\, \Omega_{\mathrm{ETD}} \ell_{\mathrm{ETD}}}{\Omega_{\mathrm{ETD}} \ell_{\mathrm{ETD}} - 10\, \Omega_{\mathrm{EDS}} \ell_{\mathrm{EDS}}} \right).
$$

Notice that $Z$ is inversely proportional to $\omega$. Also, $B_{\mathrm{peak}}$ and $(1 + \tau - \mu)$ cancel out and so do not influence the critical cutoff time. Therefore, the only difference between airports A and B is the installation cost, whose effect is unnoticeable when plotted. So, just for airport A, we plot $Z$ as a function of $\omega$ in Figure 3.

![](images/82a0c8797ce3140ef0d475fb9eacbf288fdf07f7440c40f6d2acadae009ace18.jpg)
Figure 3. Time to equal cumulative cost as a function of annual operation cost of an EDS.

For $(\omega, Z)$ combinations below the curve, an ETD-led system is more cost-efficient; however, at any realistic operational cost per machine, an EDS-led system becomes cheaper in less than one year. Given not only a life expectancy of EDSs of around 10 years but also bureaucratic inertia, we cannot expect the EDS-ETD baggage inspection system to be replaced soon enough for an ETD-led system to minimize costs.
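A quick numeric check of this cutoff formula, in Python, using the airport A values (a sketch, not the authors' code):

```python
avail_eds, rate_eds = 0.92, 190   # Omega_EDS, ell_EDS
avail_etd, rate_etd = 0.98, 47    # Omega_ETD, ell_ETD
c_i = 100                         # EDS installation cost, airport A ($K)

def z_cutoff(omega):
    """Horizon Z (years) at which dC/d(alpha) = 0, from the closed form.
    omega is the annual EDS operating cost in thousands of dollars."""
    num = 45 * avail_eds * rate_eds - (1000 + c_i) * avail_etd * rate_etd
    den = avail_etd * rate_etd - 10 * avail_eds * rate_eds
    return num / den / omega

# At omega = $50K/year, the crossover is about half a year, so an
# EDS-led system wins over any realistic machine lifetime.
z50 = z_cutoff(50)
```

Halving $\omega$ doubles the crossover time, confirming the inverse proportionality noted above.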
An EDS-led system is more desirable.

Even though ETD machines become quite expensive after a short amount of time because of their high operational cost, their low fixed cost might come in handy during the peak hours of the busiest times of the year. It would not be cost-efficient to buy extra EDSs just to handle these periods, but airports could buy extra ETD machines and store them until needed.

# Determining $Q_{\mathrm{EDS}}$ and $Q_{\mathrm{ETD}}$

We have determined that $100\%$ of the bags should go through an EDS. We can calculate the total number of machines to buy by plugging the numbers into our initial equations:

$$
Q_{\mathrm{EDS}} = \left\lceil \frac{\alpha B_{\mathrm{peak}}}{\Omega_{\mathrm{EDS}} \ell_{\mathrm{EDS}} (1 + \tau - \mu)} \right\rceil, \qquad Q_{\mathrm{ETD}} = \left\lceil \frac{(1.2 - \alpha) B_{\mathrm{peak}}}{\Omega_{\mathrm{ETD}} \ell_{\mathrm{ETD}} (1 + \tau - \mu)} \right\rceil.
$$

We estimate $\ell_{\mathrm{ETD}} = 47$ bags/h/machine, the average throughput rate of the ETD machines at the 2002 Winter Olympics [Butler and Poole 2002]. The other constants have the same values as in our earlier model:

$$
B_{\text{peak at A}} = 5286, \quad B_{\text{peak at B}} = 5683;
$$

$$
\ell_{\mathrm{EDS}} = 190, \quad \ell_{\mathrm{ETD}} = 47;
$$

$$
\Omega_{\mathrm{EDS}} = .92, \quad \Omega_{\mathrm{ETD}} = .98;
$$

$$
\tau = .75, \quad \mu = .1.
$$

Using these values, we find

$$
Q_{\mathrm{EDS}_A} = \lceil 18.33 \rceil = 19, \quad Q_{\mathrm{EDS}_B} = \lceil 19.70 \rceil = 20;
$$

$$
Q_{\mathrm{ETD}_A} = \lceil 13.91 \rceil = 14, \qquad Q_{\mathrm{ETD}_B} = \lceil 14.96 \rceil = 15.
$$

As expected, the EDS counts for both airports are unchanged from our previous results, when we had not yet considered the ETD machines.
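The four machine counts can be reproduced directly from the ceilings; a short Python sketch using the constants listed above:

```python
from math import ceil

T = 1 + 0.75 - 0.1   # hours available to process the peak hour's bags

def machines(bags, avail, rate, share=1.0):
    """Machines needed to screen `share` of `bags` within time T,
    given availability `avail` and throughput `rate` (bags/h/machine)."""
    return ceil(share * bags / (avail * rate * T))

q_eds_a = machines(5286, 0.92, 190)        # EDS, airport A
q_eds_b = machines(5683, 0.92, 190)        # EDS, airport B
q_etd_a = machines(5286, 0.98, 47, 0.2)    # ETD share = 1.2 - alpha = 0.2
q_etd_b = machines(5683, 0.98, 47, 0.2)    # with alpha = 1 (EDS-led)
```
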
# Recommendations for the Future

Although an EDS-led system, with merely enough ETD machines to cover $20\%$ of the bags, is optimal based on our calculations, it might not be the absolute best solution. An important consideration is whether new technology might replace the machines before the critical cutoff time. For example, if current technology trends show that a better baggage screening system will be ready in less than a year, it might be worth taking the risk of buying an ETD-led system and then, within the year, buying the better machines, with lower operational costs, to replace the ETD machines. However, not only would this save very little, it would also be quite a risk: the high operational cost of the ETD machines would hurt the airport terribly if better technology did not come out in time. Therefore, our model shows that unless current trends point to an immediate market introduction of new and advanced technology,

the best solution is to have all bags screened by EDSs and only $20\%$ screened by ETDs.

Down the road, however, we may need to re-evaluate the system.

Other variables that we should weigh heavily are the false positive rate, the false negative rate, and the human reliability factor. Both rates should be kept as low as possible, but it is more important that the false negative rate be extremely close to 0: it affects the accuracy of the machine, while the false positive rate merely affects its efficiency. Increased precision would not only increase the safety of our air traffic system but also reduce the number of secondary, fail-safe screening devices, thus saving money. Currently, EDSs are widely reported to have false positive rates between $22\%$ and $30\%$, which is unacceptably high.

New technology seems to be decreasing this inefficiency significantly, which will result in fewer required re-screenings and less human intervention.
A machine with a high false negative rate used as a first-tier scanner (as the EDS is in the EDS-led system) is very dangerous; to counter the threat of explosives slipping through, costly random screening of negatives with a second device is needed, though even that does not eliminate the threat.

# Conclusion: Strengths and Weaknesses

The main strength of our model is that the number of EDS machines it projects will work well even if some assumed constants and probabilities shift. More accurate statistical data, which should be available to airport administrators, would yield a more accurate optimal number of machines. The delays caused by fluctuations in our assumptions are, in almost every case, within acceptable ranges, i.e., comparable to delays happening at the same time for other reasons. If this model is implemented, it should be stressed that the system is designed so that no extra delays are expected. If this argument is made convincingly enough, instances of delay should not make passengers more likely to blame the EDS system over other causes of delay, such as waiting for connecting passengers, bad weather, or mechanical difficulties. Extreme circumstances, such as holiday travel days, normally force delays; any delay in the EDS system on such a day, if not compensated for with temporary ETD machines, would run parallel to, not in addition to, delays already occurring. Besides, air travelers will be willing to wait a few extra minutes occasionally if it gives them a sense of security that many lacked following September 11.

One weakness of our model is that we did not go into different methods for implementing the prioritization and queuing regime for bags entering the explosives scanners. We considered several options. The tags placed on the bags at the check-in desk could list departure time, allowing easy sorting; this, however, does not allow for changes in departure time due to delays.
A departure-listing screen, like those posted throughout airports for passengers, could be displayed at the EDS machines. At a large airport, though, this list would be very long and would require EDS operators to recheck the display frequently.

Another weakness is that we ignored the placement of the EDS machines. Most EDSs are placed in the airport lobby near the check-in area. In a large airport, this could mean that the machines are spread out over a large area, so the EDS machines could not work together as one unit, as our model implicitly assumes. This would mean a loss of efficiency: machines at one end of the airport could run out of bags while those at the other end have too many. This problem could be remedied in the flight scheduling process by factoring airline check-in desk placement into the even distribution of bags over the hour. The scope of that undertaking is far outside what we can accomplish here, though it ultimately deserves consideration.

# References

Bureau of Transportation Statistics. n.d. Intermodal Transportation Database. http://transtats.bts.gov/DL_SelectionFields.asp?Table_ID=236 and Table_ID=259. Accessed 10 February 2003.
Butler, Viggo, and Robert W. Poole, Jr. 2002. Rethinking checked baggage screening. July 2002. Los Angeles, CA: Reason Public Policy Institute. http://www.rppi.org/baggagescreening.html. Accessed 10 February 2003.
Gozani, Tsahi. 2002. Hearing on role of military research and development programs in homeland security. 12 Mar 2002. United States House of Representatives, Committee on Armed Services, Subcommittee on Military Research and Development. http://www-hoover.stanford.edu/research/conferences/nsf02/gozani2.pdf. Accessed 10 February 2003.
International Security Systems (subsidiary of Analogic Corporation). n.d. The EXACT: EXplosive Assessment Computed Tomography, features and specifications. http://www.analogic.com/Images/EXACT.pdf. Accessed 10 February 2003.
Johnson, Alex. 2002.
Full bag scanning may be years away. 26 March 2002. MSNBC. http://www.msnbc.com/news/726695.asp?cp1=1. Accessed 10 February 2003.
L.E.K. Consulting. 2000. Report on aviation congestion issues. Traffic Capacity Forum. Auckland (New Zealand): 16 Mar 2000. http://www.comcom.govt.nz/price/Airfield/isubs_27_04_01/Wellington/Appendix_6(c).pdf. Accessed 10 February 2003.
Mead, Kenneth M. 2002. Challenges facing TSA in implementing the Aviation and Transportation Security Act. United States House of Representatives, Committee on Transportation and Infrastructure, Subcommittee on Aviation. 23 Jan 2002. http://www.tsa.dot.gov/interweb/assetlibrary/Challenges_Facing_TSA_in_Implementing_the_Aviation_and_Transportation_Security_Act.pdf. Accessed 10 February 2003.
Melendez, Nico. 2002. Under Secretary Magaw announces explosive detection pilot programs to enhance aviation security. 20 May 2002. Transportation Security Administration. http://www.tsa.gov/public/display?content=84. Accessed 10 February 2003.
Sharkey, Joe. 2002. The lull before the storm for the nation's airports. New York Times (19 November 2002). http://www.nytimes.com/ref/open/biztravel/19ROAD-OPEN.html. Accessed 10 February 2003.
Stoller, Gary. 2002. Flight check-in times vary among airlines, airports. USA Today (27 May 2002). http://www.usatoday.com/money/biztravel/2002-05-28-checkin.htm. Accessed 10 February 2003.
Successful baggage screening relies on human factors. 2002. California Aviation (27 March 2002). http://archives.californiaaviation.org/airport/msg20658.html. Accessed 10 February 2003.
Zoellner, Tom. 2002. Airport bomb scanning tab high. Arizona Republic (24 April 2002). http://www.arizonarepublic.com/special42/articles/0424bombscan24.html. Accessed 10 February 2003.

# Judge's Commentary: The Outstanding Airport Screening Papers

C. Richard Cassady

Dept.
of Industrial Engineering

University of Arkansas

Fayetteville, AR 72701

cassady@engr.uark.edu

# Introduction

The final judging for the 2003 Interdisciplinary Contest in Modeling took place at the United States Military Academy on March 8, 2003. The judges spent a long, but enjoyable, day reading an excellent set of papers submitted by the student teams. Because of the complexity of the problem and the wide variety of available and reasonable solution approaches, the judges' evaluation process focused on the following general areas.

# Modeling

The judges evaluated the student teams' creative application of existing and novel mathematical modeling techniques to the defined problems. Particular attention was placed upon the identification and appropriateness of the underlying modeling assumptions and on model validation efforts.

# Analysis

The judges evaluated the breadth and depth of the numerical analysis each team performed using their models. Particular attention was placed upon the reasonableness of conclusions and on sensitivity analysis.

# Communication

To convey the quality of their modeling and analysis activities, student teams had to communicate effectively in their reports. Key factors in this communication included organization, clarity, and brevity of information. Particular emphasis was placed upon each team's one-page summary.

The Outstanding papers were the ones that provided excellent communication of valid modeling activities, meaningful numerical analysis, and thoughtful conclusions. Papers that fell short typically fell into one of two categories:

- well-written papers with questionable models or limited analysis, or
- papers that hid excellent modeling and analysis work behind marginal communication.

# The Problem

This year's problem dealt with baggage screening and flight scheduling at a commercial airport.
Clearly, these areas have received increased attention (especially baggage screening) since the tragic events of September 11, 2001. The student teams were required to analyze the required capacity for two types of baggage screening machines and to develop a recommended flight schedule for the airport's peak hour. Limited data were provided on the characteristics of peak-hour flights and the passengers who utilize them. In addition to the modeling and analysis, the teams were asked to investigate emerging technologies in the area of baggage screening. In addition to documenting their modeling and analysis efforts, student teams drafted a position paper and two memos that communicated their findings, conclusions, and recommendations.

This problem was an excellent choice for the ICM. The multidisciplinary nature of airport security is clear, and the specific problems defined by the authors captured the essence of many fundamental areas of mathematical modeling. Most importantly, the problems included sufficient complexity to require the students to go beyond "textbook operations research" and utilize their creative problem-solving skills.

The problem was written by Dr. Sheldon Jacobson and Dr. John Kobza. Dr. Jacobson is Associate Professor of Mechanical and Industrial Engineering and Director of the Simulation and Optimization Laboratory at the University of Illinois at Urbana-Champaign. Dr. Kobza is Associate Professor of Industrial Engineering at Texas Tech University. They have significant research experience in the area of transportation security, and they recently received the Aviation Security Research Award for their work in access control and checked baggage screening. In addition to authoring the problems, Drs. Jacobson and Kobza made valuable contributions as insightful members of the final judging panel.

# Modeling Approaches

The majority of the mathematical modeling and analysis utilized by the student teams was applied to Tasks 1, 3, and 6.
The approaches to Tasks 1 and 6 were very similar. Many teams attempted to apply the results of queueing analysis for M/M/s systems. While the baggage screening system is a queueing system, there were several common shortcomings in analyses of this type. First, many teams failed to identify and discuss the underlying assumptions of an M/M/s queue. Certainly, the assumption of a constant average arrival rate is questionable at best. Second, many teams applied the steady-state (long-run) results from queueing theory to a period of time ranging from one to three hours, too short for steady-state behavior to be a safe approximation.

Other teams applied "back-of-the-envelope" or simulation-based capacity analysis. This approach avoided the restrictive assumptions of queueing theory but led to two other common shortcomings:

- Many teams assumed that the baggage screening system is "empty and idle" at the beginning of the peak hour. In reality, it is more likely that the hours preceding and following the peak hour are "near peak."
- Many teams identified the number of machines that could handle the required load without any queueing. A more cost-effective approach would be to design a system that experiences some queueing during peak demand but has the ability to "recover" in time.

A common shortcoming in both the queueing analysis and capacity analysis approaches was that teams did not recognize that a large proportion of passengers would be connecting through the airport; the baggage for these passengers would not have to be screened.

The majority of student teams recognized Task 3 as a scheduling problem, but they also realized that this problem is much more complex than traditional scheduling problems found in the literature. As a result, most teams developed heuristic procedures that "smooth" the flow of passengers through the airport during and around the peak hour. Many of these procedures suffered from several of the assumptions mentioned above regarding Tasks 1 and 6.
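For reference, the steady-state M/M/s results that many teams applied to Tasks 1 and 6 can be sketched as follows (the Erlang C formula for the mean queueing delay). The arrival and service rates below are made-up illustrative numbers, not contest data; the point is that these formulas presuppose a constant average arrival rate and a system that has actually reached steady state:

```python
# Illustrative sketch of the steady-state M/M/s results (Erlang C) that many
# teams applied. The rates used in the example call are hypothetical, not
# values from the contest problem.
from math import factorial

def mms_wait(lam, mu, s):
    """Mean steady-state queueing delay Wq for an M/M/s queue (requires lam < s*mu)."""
    a = lam / mu                      # offered load in Erlangs
    rho = a / s                       # server utilization
    if rho >= 1:
        raise ValueError("unstable: need lam < s*mu")
    # Erlang C: probability that an arriving bag must wait for a machine
    tail = a**s / (factorial(s) * (1 - rho))
    p_wait = tail / (sum(a**k / factorial(k) for k in range(s)) + tail)
    return p_wait / (s * mu - lam)    # Wq, in the same time units as 1/lam

# e.g. 1,200 bags/hour arriving, 160 bags/hour per EDS, 9 machines
print(f"mean wait = {mms_wait(1200, 160, 9) * 60:.1f} minutes")
```

A team quoting such a number for a one-to-three-hour peak is implicitly trusting both assumptions named above, which is exactly where the judges pressed.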
Some teams did manage to formulate reasonable mathematical optimization models of the scheduling problem. Solution approaches for these optimization models ranged from traditional discrete optimization algorithms (embedded in software) to search-based heuristics such as genetic algorithms.

It was somewhat surprising that few teams modeled the problem in a way that captured the interactions between the baggage screening and flight scheduling sub-problems. A few of the teams did combine the two problems into a large-scale simulation effort, with simulation-based optimization heuristics applied to derive the solution. However, these comprehensive approaches were the exception, not the rule.

The papers that moved forward in the competition tended to have a foundation in well-known modeling approaches, with problem-specific customization based on the creativity and skill of the team. Many of these papers included self-evaluation of the modeling assumptions and some degree of validation based on preliminary analysis. Some teams used data from real airports and airlines to contribute to or validate their results. As always, sensitivity analysis was appreciated and rewarded by the judges.

# Conclusions

My recommendations for future student teams are:

**Assumptions** Identify and critique (in writing) the assumptions of your models. Sometimes restrictive assumptions have to be made. Be sure that the judges realize you are aware of and concerned about these assumptions.

**Analysis** Spend as much time as you can on numerical analysis of the model. Use this analysis to perform "sanity checks" on the model. Do not just report the output; comment on the reasonableness of the results. Perform extensive numerical experiments to eliminate bias resulting from assumptions, estimations, or initial conditions. Summarize your analysis clearly in tabular or graphical form.

**Communication** Do not neglect the writing.
Clear communication makes it easier to identify outstanding work. To be perfectly honest, good communication improves the judges' frame of mind.

**References** Clearly cite information that you utilize from published sources (books, papers, Web sites, etc.). This will make it clear to the judges that you have properly and completely researched the problem. However, do not rely exclusively on existing work, and do not copy text from existing sources without properly documenting the sources. Note that we did experience a few isolated instances of plagiarism and excessive copying in the competition.

# About the Author

![](images/35646d54c030abc8ab7e4c284c6a2e358e7044e9d5fe910582bf57fcec54ec77.jpg)

Richard Cassady is an assistant professor in the Dept. of Industrial Engineering at the University of Arkansas. His primary research interests include the application of operations research to repairable systems modeling. He teaches undergraduate and graduate courses in the areas of probability and statistics, stochastic processes, and reliability.

# Authors' Commentary: Aviation Security Baggage Screening Strategies: To Screen or Not to Screen, That Is the Question!

Sheldon H. Jacobson

Department of Mechanical and Industrial Engineering

University of Illinois

Urbana, IL 61801-2906

shj@uiuc.edu

http://www.staff.uiuc.edu/~shj/shj.html

John E. Kobza

Department of Industrial Engineering

Texas Tech University

Lubbock, TX 79409-3061

john.kobza@coe.ttu.edu

# Introduction and Background

The events of September 11, 2001 made it the worst day in the history of commercial aviation. These events have led to massive changes in the manner in which aviation security is organized and implemented at the 429 commercial airports throughout the United States.
These changes include, for example, + +- the creation of the Transportation Security Administration (TSA), +- the federalization of aviation security personnel, +- a more extensive use of air marshals on domestic flights, +- extensive positive passenger baggage matching protocols, and +- enhanced passenger and baggage security screening at airport terminal checkpoints and gates. + +The problems posed in this year's modeling competition, based on research supported by the National Science Foundation through the Division of Design, Manufacturing and Industrial Innovation, Program in Service Enterprise Engineering, are motivated by several of the challenges faced by the TSA in addressing the December 31, 2002 Congressionally-mandated deadline for $100\%$ screening of all checked baggage on all domestic commercial flights in the United States. This mandate has resulted in the rapid manufacturing and deployment of several thousand explosive detection systems (EDSs) and explosive trace devices (ETDs). Key questions faced by the TSA have included + +- where to deploy such baggage screening devices, +- what combination of EDSs and ETDs should be used at individual airports, and +- how such devices should be used once deployed. + +The problems in this year's competition embody several of these questions. + +# Aviation and Transportation Security Act + +On November 19, 2001, the United States Congress passed the Aviation and Transportation Security Act (ATSA), resulting in widespread and sweeping changes in how security is addressed for all forms of transportation (with a particular emphasis on air travel). An aggressive schedule was included for the federalization of airport security personnel (with a deadline of November 19, 2002) and the screening of all checked baggage on commercial flights using federally approved security screening devices and systems (with a deadline of December 31, 2002). 
Skeptics in both industry and government questioned whether these deadlines could be met, given the magnitude and scope of this undertaking. Moreover, the costs associated with such an endeavor were estimated to be in the billions of dollars (US). As the deadlines approached, the director of the TSA, Admiral James M. Loy, remained committed to meeting all specified deadlines. By December 31, 2002, over 90% of all airports had met this deadline, with the remaining airports aggressively moving towards compliance.

# Formulation of the Contest Question

The authors of this year's contest question have been working in the area of operations research modeling of aviation security systems since the mid-1990s. Their research has been disseminated in a wide variety of journals. The problems that they have addressed include

- a probabilistic analysis of access control security systems [Kobza and Jacobson 1996; 1997],
- a sampling procedure to estimate risk probabilities in access control security systems [Jacobson, Kobza, and Nakayama 2000],
- the analysis of baggage value performance measures for checked baggage security systems [Jacobson, Bowman, and Kobza 2001],
- a knapsack-problem model formulation for addressing aviation security system design problems [Jacobson, Kobza, and Easterling 2001],
- models for analyzing the impact of connecting passengers on selectee rates [Virta et al. 2002],
- a case study for checked baggage screening security system design [Jacobson et al. 2003a], and
- a cost/risk analysis of various checked baggage screening strategies [Jacobson, Virta, and Kobza 2003b].

All these research contributions focus on identifying strategies or procedures for enhancing the operation and design of aviation security systems.

This year's contest question relates to purchasing and deploying EDSs and ETDs for checked baggage screening at the 429 commercial airports around the United States.
Many of the specific issues raised in the tasks described in the problem description grew out of ongoing discussions with TSA personnel, as well as from information extracted from Congressional testimonies by, for example, Kenneth Mead, the Inspector General of the United States Department of Transportation [Mead 2002a; 2002b], and a variety of newspaper and newswire sources. Factors that affect security equipment purchase and deployment decisions include

- the size of airports,
- the number of concourses within airports, and
- the schedule of flights departing from each airport (as well as their distribution throughout the day).

On a broader scale, the potential for growth at an airport must also be considered. All these factors play an important role in the decision-making process.

The initial goal of the TSA was to meet the requirement as defined in the ATSA. Therefore, feasibility was of paramount concern. However, once feasibility was attained, cost-effectiveness and operational efficiency became important. This year's modeling problem weaves together the feasibility issues that arose during the frenetic ramp-up period following the passage of the ATSA and the current cost-effectiveness issues to provide timely problems for the competition participants and an interesting and challenging evaluation process for the judges.

# Conclusions

The transition process for aviation security operations since September 11, 2001 has not been smooth, but much progress has been made. However, aviation security is still a "work in process." New technologies being developed will significantly affect many of the operations in place today and require substantial changes. With the changing nature of system threats, aviation security will continue to evolve.
This year's modeling competition clearly points out that there are many young people around the world who have the talent and the outstanding ideas needed to guide this evolution and affect the security of our world.

# Acknowledgments

The authors wish to thank Dr. Chris Arney and Colonel Gary Krahn for organizing this year's competition and for providing invaluable feedback and suggestions on the design and revisions of the problem statement. Without their assistance, this year's competition would not have been the enormous success that it has turned out to be. The authors wish to thank the triage graders at the United States Military Academy who devoted countless hours to the initial grading of the submissions. Special thanks go to the final round of graders, who not only provided expert advice and evaluation but also made the weekend of grading an enjoyable experience. The authors also wish to acknowledge the many individuals within the Transportation Security Administration without whom this problem could never have been developed. Lastly, the motivation for the problems in this year's competition is based on research supported in part by the National Science Foundation (DMI-0114046, DMI-0114499).

# References

Jacobson, S.H., J.M. Bowman, and J.E. Kobza. 2001. Modeling and analyzing the performance of aviation security systems using baggage value performance measures. IMA Journal of Management Mathematics 12 (1): 3-22.
Jacobson, S.H., J.E. Kobza, and A.E. Easterling. 2001. A detection theoretic approach to modeling aviation security problems using the knapsack problem. IIE Transactions 33 (9): 747-759.
Jacobson, S.H., J.E. Kobza, and M.K. Nakayama. 2000. A sampling procedure to estimate risk probabilities in access control security systems. European Journal of Operational Research 122 (1): 123-132.
Jacobson, S.H., J.E. Virta, J.M. Bowman, J.E. Kobza, and J.J. Nestor. 2003a. Modeling aviation baggage screening security systems: A case study.
IIE Transactions 35 (3): 259-269.

Jacobson, S.H., J.E. Virta, and J.E. Kobza. 2003b. Analyzing the cost of screening selectee and non-selectee baggage. Risk Analysis (to appear).
Kobza, J.E., and S.H. Jacobson. 1996. Addressing the dependency problem in access security system architecture design. Risk Analysis 16 (6): 801-812.
———. 1997. Probability models for access security system architectures. Journal of the Operational Research Society 48 (3): 255-263.
Mead, K.M. 2002a. Challenges facing the TSA in implementing the Aviation and Transportation Security Act. Report CC-2002-088. Washington, DC: Office of Inspector General, Department of Transportation.
———. 2002b. Key issues concerning implementation of the Aviation and Transportation Security Act. Report CC-2002-098. Washington, DC: Office of Inspector General, Department of Transportation.
Virta, J.E., S.H. Jacobson, and J.E. Kobza. 2002. Outgoing selectee rates at hub airports. Reliability Engineering and System Safety 76 (2): 155-165.

# About the Authors

![](images/85ea35c5b37a4defc16d61e9554bb10cbe8279a08468239614816d07570efec4.jpg)

Sheldon H. Jacobson is a Professor, Willett Faculty Scholar, and Director of the Simulation and Optimization Laboratory in the Department of Mechanical and Industrial Engineering at the University of Illinois at Urbana-Champaign. He has also served on the faculty at Case Western Reserve University and Virginia Tech. He has a B.Sc. and M.Sc. (both in Mathematics) from McGill University, and an M.S. and Ph.D. (both in Operations Research and Industrial Engineering) from Cornell University. His theoretical research interests include the analysis and design of heuristics and algorithms for intractable discrete optimization problems. His applied research interests address problems in the manufacturing, aviation security, and healthcare industries.
He (with John Kobza) received the 1998 Application Award from the Institute of Industrial Engineers Operations Research Division for their research contributions in the application of operations research to address aviation security problems. Most recently, Aviation Security International awarded the 2002 Aviation Security Research Award to him and John Kobza for their contributions to aviation security theory and practice. In addition, he and John Kobza received the Best Paper Award in the IIE Transactions Focused Issue on Operations Engineering for their paper on enhancing aviation security systems using discrete optimization models. He was also the recipient of a Guggenheim Fellowship in 2003 for his research in the area of aviation security. His research has been published in a wide spectrum of journals, including Operations Research, INFORMS Journal on Computing, Operations Research Letters, IIE Transactions, and the Journal of the Operational Research Society. He has received research funding from several government agencies and industrial partners, including the National Science Foundation, the Air Force Office of Scientific Research, and the Federal Aviation Administration.

![](images/1ab5b8ba780c12876c7bb820468ad08a282d72a1ee3fe9761978324e45797dfa.jpg)

John E. Kobza received the B.S. degree in Electrical Engineering from Washington State University in 1982 and the M.S. degree in Electrical Engineering from Clemson University in 1984. From 1984 to 1987, he was a Member of the Technical Staff at GTE Laboratories, working on traffic scheduling for integrated communications networks. He was an IBM Fellow at the William E. Simon Graduate School of Business, University of Rochester, during the 1987-88 academic year. He received the Ph.D. degree in Industrial and Systems Engineering at Virginia Tech in 1993. He was an Assistant Professor of Industrial and Systems Engineering at Virginia Tech from 1993 to 1999.
He is currently an Associate Professor of Industrial Engineering at Texas Tech University. His research interests include modeling, analyzing and designing systems involving uncertainty and risk, such as security systems, manufacturing systems and communication networks. He is a member of Sigma Xi, Alpha Pi Mu, INFORMS, IIE, and IEEE.

# Guide for Authors

# Focus

The UMAP Journal focuses on mathematical modeling and applications of mathematics at the undergraduate level. The editor also welcomes expository articles for the On Jargon column, reviews of books and other materials, and guest editorials on new ideas in mathematics education or on interaction between mathematics and application fields. Prospective authors are invited to consult the editor or an associate editor.

# Understanding

A manuscript is submitted with the understanding—unless the authors advise otherwise—that the work is original with the authors, is contributed for sole publication in the Journal, and is not concurrently under consideration or scheduled for publication elsewhere with substantially the same form and content. Pursuant to U.S. copyright law, authors must sign a copyright release before editorial processing begins. Authors who include data, figures, photographs, examples, exercises, or long quotations from other sources must, before publication, secure appropriate permissions from the copyright holders and provide the editor with copies. The Journal's copyright policy and copyright release form appear in Vol. 18 (1997) No. 1, pp. 1-14 and at ftp://cs.beloit.edu/math-cs/Faculty/Paul Campbell/Public/UMAP.

# Language

The language of publication is English (but the editor will help find translators for particularly meritorious manuscripts in other languages). The majority of readers are native speakers of English, but authors are asked to keep in mind that readers vary in their familiarity with vocabulary, idiomatic expressions, and slang.
Authors should consistently use either British or American spelling.

# Format

Even short articles should be sectioned with carefully chosen (unnumbered) titles. An article should begin by saying clearly what it is about and what it will presume of the reader's background. Relevant bibliography should appear in a section entitled *References* and may include annotations, as well as sources not cited. Authors are asked to include short biographical sketches and photos in a section entitled *About the Author(s)*.

# Style Manual

On questions of style, please consult current Journal issues and The Chicago Manual of Style, 13th or 14th ed. (Chicago, IL: University of Chicago Press, 1982, 1993).

# Citations

The Journal uses the author-date system. References cited in the text should include between square brackets the last names of the authors and the year of publication, with no intervening punctuation (e.g., [Kolmes and Mitchell 1990]). For three or more authors, use [Kolmes et al. 1990]. Papers by the same authors in the same year may be distinguished by a lowercase letter after the year (e.g., [Fjelstad 1990a]). A specific page, section, equation, or other division of the cited work may follow the date, preceded by a comma (e.g., [Kolmes and Mitchell 1990, 56]). Omit "p." and "pp." with page numbers. Multiple citations may appear in the same brackets, alphabetically, separated by semicolons (e.g., [Ng 1990; Standler 1990]). If the citation is part of the text, then the author's name does not appear in brackets (e.g., "... Campbell [1989] argued ...").

# References

Book entries should follow the format (note placement of year and use of periods):

Moore, David S., and George P. McCabe. 1989. Introduction to the Practice of Statistics. New York, NY: W.H. Freeman.

For articles, use the form (again, most delimiters are periods):

Nievergelt, Yves. 1988. Graphic differentiation clarifies health care pricing.
UMAP Modules in Undergraduate Mathematics and Its Applications: Module 678. The UMAP Journal 9 (1): 51-86. Reprinted in UMAP Modules: Tools for Teaching 1988, edited by Paul J. Campbell, 1-36. Arlington, MA: COMAP, 1989.

# What to Submit

Number all pages, put figures on separate sheets (in two forms, with and without lettering), and number figures and tables in separate series. Send three paper copies of the entire manuscript, plus the copyright release form, and—by email attachment or on diskette—formatted and unformatted ("text" or ASCII) files of the text and a separate file of each figure. Please indicate the computer platform and the names and versions of programs used. The Journal is typeset in LaTeX using EPS or PICT files of figures.

# Refereeing

All suitable manuscripts are refereed double-blind, usually by at least two referees.

# Courtesy Copies

Reprints are not available. Authors of an article each receive two copies of the issue; the author of a review receives one copy; authors of a UMAP Module or an ILAP Module each receive two copies of the issue plus a copy of the Tools for Teaching volume. Authors may reproduce their work for their own purposes, including classroom teaching and internal distribution within their institutions, provided copies are not sold.

# UMAP Modules and ILAP Modules

A UMAP Module is a teaching/learning module, with precise statements of the target audience, the mathematical prerequisites, and the time frame for completion, and with exercises and (often) a sample exam (with solutions). An ILAP (Interdisciplinary Lively Application Project) is a student group project, jointly authored by faculty from mathematics and a partner department. Some UMAP and ILAP Modules appear in the Journal, others in the annual Tools for Teaching volume. Authors considering whether to develop a topic as an article or as a UMAP or an ILAP Module should consult the editor.
# Where to Submit

Reviews, On Jargon columns, and ILAPs should go to the respective associate editors, whose addresses appear on the Journal masthead. Send all other manuscripts to

Paul J. Campbell, Editor

The UMAP Journal

Campus Box 194

Beloit College

700 College St.

Beloit, WI 53511-5595

USA

voice: (608) 363-2007 fax: (608) 363-2718 email: campbell@beloit.edu \ No newline at end of file diff --git a/MCM/1995-2008/2003MCM/2003MCM.md b/MCM/1995-2008/2003MCM/2003MCM.md new file mode 100644 index 0000000000000000000000000000000000000000..ffce279d50090bdd520d715232e60592716a7171 --- /dev/null +++ b/MCM/1995-2008/2003MCM/2003MCM.md @@ -0,0 +1,5244 @@ +# The UMAP Journal

Vol. 24, No. 3

Publisher

COMAP, Inc.

Executive Publisher

Solomon A. Garfunkel

ILAP Editor

Chris Arney

Interim Vice-President for Academic Affairs

The College of Saint Rose

432 Western Avenue

Albany, NY 12203

arneyc@mail.strose.edu

On Jargon Editor

Yves Nievergelt

Department of Mathematics

Eastern Washington University

Cheney, WA 99004

ynievergelt@ewu.edu

Reviews Editor

James M. Cargal

Mathematics Dept.

Troy State Univ. Montgomery

231 Montgomery St.

Montgomery, AL 36104

jmcargal@sprintmail.com

Chief Operating Officer

Laurie W. Aragon

Production Manager

George W. Ward

Director of Educ. Technology

Roland Cheyne

Production Editor

Pauline Wright

Copy Editor

Timothy McLean

Distribution

Kevin Darcy

John Tomicek

Graphic Designer

Daiva Kiliulis

# Editor

Paul J. Campbell

Campus Box 194

Beloit College

700 College St.

Beloit, WI 53511-5595

campbell@beloit.edu

# Associate Editors

| Associate Editor | Affiliation |
|---|---|
| Don Adolphson | Brigham Young University |
| David C. "Chris" Arney | The College of St. Rose |
| Ron Barnes | University of Houston-Downtown |
| Arthur Benjamin | Harvey Mudd College |
| James M. Cargal | Troy State University Montgomery |
| Murray K. Clayton | University of Wisconsin—Madison |
| Courtney S. Coleman | Harvey Mudd College |
| Linda L. Deneen | University of Minnesota, Duluth |
| James P. Fink | Gettysburg College |
| Solomon A. Garfunkel | COMAP, Inc. |
| William B. Gearhart | California State University, Fullerton |
| William C. Giauque | Brigham Young University |
| Richard Haberman | Southern Methodist University |
| Charles E. Lienert | Metropolitan State College |
| Walter Meyer | Adelphi University |
| Yves Nievergelt | Eastern Washington University |
| John S. Robertson | Georgia College and State University |
| Garry H. Rodrigue | Lawrence Livermore Laboratory |
| Ned W. Schillow | Lehigh Carbon Community College |
| Philip D. Straffin | Beloit College |
| J.T. Sutcliffe | St. Mark's School, Dallas |
| Donna M. Szott | Comm. College of Allegheny County |
| Gerald D. Taylor | Colorado State University |
| Maynard Thompson | Indiana University |
| Ken Travers | University of Illinois |
| Robert E.D. "Gene" Woolsey | Colorado School of Mines |

# Membership Plus

Individuals subscribe to The UMAP Journal through COMAP's Membership Plus. This subscription also includes print copies of our annual collection UMAP Modules: Tools for Teaching, our organizational newsletter Consortium, on-line membership that allows members to download and reproduce COMAP materials, and a 10% discount on all COMAP purchases.

(Domestic) #2320 $90

(Outside U.S.) #2321 $105

# Institutional Plus Membership

Institutions can subscribe to the Journal through either Institutional Plus Membership, Regular Institutional Membership, or a Library Subscription. Institutional Plus Members receive two print copies of each of the quarterly issues of The UMAP Journal, our annual collection UMAP Modules: Tools for Teaching, our organizational newsletter Consortium, on-line membership that allows members to download and reproduce COMAP materials, and a 10% discount on all COMAP purchases.
(Domestic) #2370 $415

(Outside U.S.) #2371 $435

# Institutional Membership

Regular Institutional members receive print copies of The UMAP Journal, our annual collection UMAP Modules: Tools for Teaching, our organizational newsletter Consortium, and a $10\%$ discount on all COMAP purchases.

(Domestic) #2340 $180

(Outside U.S.) #2341 $200

# Web Membership

Web membership does not provide print materials. Web members can download and reproduce COMAP materials, and receive a $10\%$ discount on all COMAP purchases.

(Domestic) #2310 $39

(Outside U.S.) #2310 $39

To order, send a check or money order to COMAP, or call toll-free 1-800-77-COMAP (1-800-772-6627).

The UMAP Journal is published quarterly by the Consortium for Mathematics and Its Applications (COMAP), Inc., Suite 210, 57 Bedford Street, Lexington, MA 02420, in cooperation with the American Mathematical Association of Two-Year Colleges (AMATYC), the Mathematical Association of America (MAA), the National Council of Teachers of Mathematics (NCTM), the American Statistical Association (ASA), the Society for Industrial and Applied Mathematics (SIAM), and The Institute for Operations Research and the Management Sciences (INFORMS). The Journal acquaints readers with a wide variety of professional applications of the mathematical sciences and provides a forum for the discussion of new directions in mathematical education (ISSN 0197-3622).

Periodical rate postage paid at Boston, MA and at additional mailing offices.

Send address changes to: info@comap.com

COMAP, Inc., 57 Bedford Street, Suite 210, Lexington, MA 02420

Copyright 2003 by COMAP, Inc. All rights reserved.

# Vol. 24, No. 3 2003

# Table of Contents

# Editorial

Secondary-Undergraduate Articulation: Moving Forward Solomon A. Garfunkel 193

# Modeling Forum

Results of the 2003 Mathematical Contest in Modeling Frank Giordano 199

Safe Landings
Chad T. Kishimoto, Justin C. Kao, and Jeffrey A.
Edlund . . . . . . 219 + +A Time-Independent Model of Box Safety for Stunt +Motorcyclists +Ivan Corwin, Sheel Ganatra, and Nikita Rozenblyum 233 + +Thinking Outside the Box and Over the Elephant Melissa J. Banister, Matthew Macauley, and Micah J. Smukler . . . 251 + +You Too Can Be James Bond Deng Xiaowei, Xu Wei, and Zhang Zhenyu 263 + +Cardboard Comfortable When It Comes to Crashing Jeffrey Giansiracusa, Ernie Esser, and Simon Pai 281 + +Fly With Confidence Hu Yuxiao, Hua Zheng, and Zhou Enlu 299 + +Judge's Commentary: The Outstanding Stunt Person Papers William P. Fox 317 + +The Genetic Algorithm-Based Optimization Approach for Gamma Unit Treatment Sun Fei, Yang Lin, and Wang Hong 323 + +A Sphere-Packing Model for the Optimal Treatment Plan Long Yun, Ye Yungqing, and Wei Zhen 339 + +The Gamma Knife Problem +Darin W. Gillis, David R. Lindstone, and Aaron T. Windfield . . .351 + +Shelling Tumors with Caution and Wiggles Luke Winstrom, Sam Coskey, and Mark Blunk 365 + +Shoot to Kill! Sarah Grove, Chris Jones, and Joel Lepak 379 + +![](images/6de243d8ac552a7e20a7e2b722628aee8a565c0e28857164f4f9709e17ad5dc8.jpg) + +# Publisher's Editorial Secondary-Undergraduate Articulation: Moving Forward + +Solomon A. Garfunkel + +Executive Director + +COMAP, Inc. + +57 Bedford St., Suite 210 + +Lexington, MA 02420 + +s.garfunkel@mail.comap.com + +# The Calls to Change + +Several publications in the late 1980s and early 1990s stimulated significant activity aimed at improving mathematics education: + +- The National Council of Teachers of Mathematics (NCTM) published the Standards documents for school mathematics (Curriculum and Evaluation Standards for School Mathematics, the Professional Standards for Teaching Mathematics, the Assessment Standards for School Mathematics). +- The Mathematical Association of America (MAA) publication Calculus for a New Century: A Pump Not a Filter helped stimulate reform at the undergraduate level in entry-level calculus. 
- The Secretary's Commission on Achieving Necessary Skills (SCANS) report What Work Requires of Schools suggested that mathematical preparation for success in the workforce extends far beyond basic computation skills.
- Publications such as *Everybody Counts: A Report to the Nation on the Future of Mathematics Education* from the Mathematical Sciences Education Board suggested that large-scale changes would be needed in K-16 mathematics education to prepare our students to meet adequately the challenges of the twenty-first century. Indeed, that publication stresses the need for long-term cooperative effort and planning throughout the school continuum if we are to meet those needs. "Efforts to change must proceed steadily for many years, on many levels simultaneously, with broad involvement of all constituencies at each stage" (p. 96).

# Outlining Specific Changes Needed

Since the late 1980s, there have been multiple efforts to meet the challenges outlined in these and other documents, and we have certainly moved forward. The updated Standards document, the Principles and Standards for School Mathematics published in 2000, maintains and clarifies a direction for school mathematics improvement. Among other goals, the objectives of school mathematics include providing the kind of solid foundation students will need

- to be mathematically literate citizens,
- to enter the workplace (broadly determined), or
- to go on to programs of study and professions that use advanced mathematics.

At the post-secondary level, the emerging report from the MAA's Committee on the Undergraduate Program in Mathematics (CUPM) [2003] for two- and four-year colleges and universities is consistent with—and a natural extension of—the Standards vision. It will urge

- giving attention to the educational needs of all students with an updated curriculum and
- providing computational skills, conceptual understanding, and mathematical critical thinking skills.
+ +While earlier CUPM reports focused primarily on the mathematics major, the new report will make broad recommendations for the entire college-level mathematics curriculum, including ones for students taking general education and introductory courses, those majoring in partner disciplines, and those preparing for K-8 teaching. + +The original Crossroads document from the American Mathematical Association of Two-Year Colleges (AMATYC), which set standards for college mathematics courses before calculus, and the emerging updated version are in harmony with these documents. + +The needs of the business community as reflected in the SCANS report and from other sources are consistent with the directions suggested by NCTM, CUPM, and AMATYC. + +# The Message Is Strong… + +The clear message from policymakers, as reflected in increased accountability measures, mandated testing and other measures of the No Child Left Behind (NCLB) legislation, is: + +We need to raise the mathematical competencies and conceptual understanding of all students. + +# ... But the Medium Is Weak + +With the NCTM Standards, new state standards and tests, and the CUPM and Crossroads reports, curricular and pedagogical change continues at both the high school and post-secondary levels. But there is a serious disconnect: the decentralized nature of education in this country, characterized in *Everybody Counts* and in Hiebert and Stigler's *The Teaching Gap* [1999] paradoxically as "no one is in charge—everyone is in charge." + +# The Dramatic Articulation Disconnect + +In contrast to efforts to raise the bar in school mathematics and address the multiple goals in the NCTM vision, the point of view of many of the nation's institutions of higher learning remains fixed on the belief that the ultimate purpose of pre-college education is to prepare students for calculus, often reflected in their mathematics placement examinations. 
So, while the language of the high school documents is mirrored in the emerging CUPM report and similar recommendations, there is still a dramatic articulation problem.

We can no longer tolerate the poor communication and lack of basic understanding between the school and post-secondary mathematics communities. Adding to the problem is the fact that colleges and universities are even more insular and individualistic in developing their mathematics course offerings. Reports such as the one emerging from CUPM seem to carry far less weight in higher education than the NCTM Standards documents do at the pre-college level.

These problems must be addressed head-on; the emergence of the CUPM report is an ideal time to do so. We feel that it is preferable to deal with articulation issues as the CUPM report is released rather than trying to retrofit solutions later.

# What We Don't Know

We need reliable data on a variety of issues:

- How easily do students coming out of reform-based programs move into the undergraduate curricula at both two- and four-year colleges?
- How well do they do?
- Do such students take more mathematics courses, fewer mathematics courses, different (non-major) mathematics courses?
- Can we identify and detail successful transitions? Can we identify real barriers?

There is a different set of articulation questions when we look at the transition from high school to the workplace or to direct workplace-related education programs:

- How well do students perform in the workplace?
- To what extent, for example, do they meet the goals of the SCANS report?
- What do the answers to these questions tell us about changes needed in the college as well as the high school offerings?

# A New Role for COMAP

In summary, there is an important research agenda here.
We want to do more than document—we need to facilitate and stimulate articulation relating to this common vision that bridges secondary and post-secondary education as well as secondary education and the modern workplace. With that in mind, we at COMAP are actively planning to research transitions between high school and college. Similarly, we will look at transitions to workplace environments that epitomize the recommendations of the SCANS report.

Thus, the research focuses on the direction of mathematics education as suggested by the visions of prominent professional organizations involved in the teaching of mathematics, as well as a highly regarded vision of what is needed for success in the workplace. Our study is focused on change and facilitating the implementation of a consistent vision. We cannot afford to have the CUPM report just sit on the shelf. There is too much important work to be done.

# References

Assessment Standards Working Group of the National Council of Teachers of Mathematics. 1995. Assessment Standards for School Mathematics. Alexandria, VA: National Council of Teachers of Mathematics.

Commission on Standards for School Mathematics of the National Council of Teachers of Mathematics. 1989. Curriculum and Evaluation Standards for School Mathematics. Alexandria, VA: National Council of Teachers of Mathematics.

Commission on Teaching Standards for School Mathematics of the National Council of Teachers of Mathematics. 1991. Professional Standards for Teaching Mathematics. Alexandria, VA: National Council of Teachers of Mathematics.

Committee on the Undergraduate Program in Mathematics of the Mathematical Association of America. 2003. Undergraduate Programs and Courses in the Mathematical Sciences: A CUPM Curriculum Guide. Draft 5.2, August 4, 2003. http://www.maa.org/cupm/.

Hiebert, James, and James Stigler. 1999. The Teaching Gap: Best Ideas from the World's Teachers for Improving Education in the Classroom. Free Press.
+Mathematical Sciences Education Board of the National Research Council. 1989. *Everybody Counts: A Report to the Nation on the Future of Mathematics Education*. National Research Council. +National Council of Teachers of Mathematics. 2000. Principles and Standards for School Mathematics (with CD-ROM). Alexandria, VA: National Council of Teachers of Mathematics. +Secretary's Commission on Achieving Necessary Skills. 1991. What Work Requires of Schools: A SCANS Report for America 2000. DIANE Publishing Co. +Steen, Lynn (ed.). 1987. *Calculus for a New Century: A Pump Not a Filter*. Washington, DC: Mathematical Association of America. +Thompson, Helen M., Susan A. Henley, and Daniel D. Barron. 2000. *Fostering Information Literacy: Connecting National Standards, Goals 2000, and the SCANS Report*. Libraries Unlimited. +Writing Team and Task Force of the Standards for Introductory College Mathematics Project (Don Cohen (ed.)). 1995. Crossroads in Mathematics: Standards for Introductory College Mathematics Before Calculus. http://www.imacc.org/standards/. Executive summary at http://www.amatyc.org/Crossroads/CrsrdsXS.html. Memphis, TN: American Mathematical Association of Two-Year Colleges. + +# About the Author + +Sol Garfunkel received his Ph.D. in mathematical logic from the University of Wisconsin in 1967. He was at Cornell University and at the University of Connecticut at Storrs for 11 years and has dedicated the last 25 years to research and development efforts in mathematics education. He has been the Executive Director of COMAP since its inception in 1980. + +He has directed a wide variety of projects, including UMAP (Undergraduate Mathematics and Its Applications Project), which led to the founding of this Journal, and HiMAP (High School Mathematics and Its Applications Project), both funded by the NSF. 
For Annenberg/CPB, he directed three telecourse projects: For All Practical Purposes (in which he also appeared as the on-camera host), Against All Odds: Inside Statistics (still showing on late-night TV in New York!), and In Simplest Terms: College Algebra. He is currently co-director of the Applications Reform in Secondary Education (ARISE) project, a comprehensive curriculum development project for secondary school mathematics.

# Modeling Forum

# Results of the 2003 Mathematical Contest in Modeling

Frank Giordano, MCM Director

Naval Postgraduate School

1 University Circle

Monterey, CA 93943-5000

frgiorda@nps.navy.mil

# Introduction

A total of 492 teams of undergraduates, from 230 institutions in 9 countries—and from varying departments, including an Art Education Center—spent the second weekend in February working on applied mathematics problems in the 19th Mathematical Contest in Modeling (MCM).

The 2003 MCM began at 8:00 P.M. EST on Thursday, Feb. 6 and ended at 8:00 P.M. EST on Monday, Feb. 10. During that time, teams of up to three undergraduates were to research and submit an optimal solution for one of two open-ended modeling problems. Students registered, obtained contest materials, downloaded the problems at the appropriate time, and entered completion data through COMAP's MCM Website.

Each team had to choose one of the two contest problems. After a weekend of hard work, solution papers were sent to COMAP on Monday. Eleven of the top papers appear in this issue of The UMAP Journal.

Results and winning papers from the first eighteen contests were published in special issues of Mathematical Modeling (1985-1987) and The UMAP Journal (1985-2002). The 1994 volume of Tools for Teaching, commemorating the tenth anniversary of the contest, contains all of the 20 problems used in the first ten years of the contest and a winning paper for each.
Limited quantities of that volume and of the special MCM issues of the Journal for the last few years are available from COMAP.

This year's Problem A teams were asked to determine the size, location, and number of cardboard boxes needed to cushion a stunt person's fall using different combined weights (stunt person and motorcycle) and different jump heights.

Problem B dealt with the use of a gamma knife in the treatment of tumor cells in brain tissue. Teams were asked to design a model to provide the fewest and most direct doses in order to treat the tumor without going outside the target tumor itself.

In addition to the MCM, COMAP also sponsors the Interdisciplinary Contest in Modeling (ICM) and the High School Mathematical Contest in Modeling (HiMCM). The ICM, which runs concurrently with MCM, offers a modeling problem involving concepts in operations research, information science, and interdisciplinary issues in security and safety. Results of this year's ICM are on the COMAP Website at http://www.comap.com/undergraduate/contest; results and Outstanding papers appeared in Vol. 24 (2003), No. 2. The HiMCM offers high school students a modeling opportunity similar to the MCM. Further details about the HiMCM are at http://www.comap.com/highschool/contest.

# Problem A: The Stunt Person

An exciting action scene in a movie is going to be filmed, and you are the stunt coordinator! A stunt person on a motorcycle will jump over an elephant and land in a pile of cardboard boxes to cushion their fall. You need to protect the stunt person, and also use relatively few cardboard boxes (lower cost, not seen by camera, etc.).

Your job is to:

- determine what size boxes to use,
- determine how many boxes to use,
- determine how the boxes will be stacked,
- determine if any modifications to the boxes would help, and
- generalize to different combined weights (stunt person and motorcycle) and different jump heights.
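The final task above, generalizing over combined weight and jump height, comes down at a first cut to an energy balance: the kinetic energy at impact must be absorbed by the crushing boxes. A minimal sketch of that balance follows; the 2000 J per-box absorption figure and the sample mass and height are hypothetical illustrations, not data from the problem:

```python
import math

def impact_speed(height_m: float, g: float = 9.81) -> float:
    """Speed (m/s) at the end of a free fall from height_m, ignoring drag."""
    return math.sqrt(2.0 * g * height_m)

def boxes_needed(total_mass_kg: float, height_m: float,
                 energy_per_box_j: float) -> int:
    """Crude box count: kinetic energy at impact divided by the
    (assumed) energy a single box absorbs while crushing."""
    kinetic_j = 0.5 * total_mass_kg * impact_speed(height_m) ** 2
    return math.ceil(kinetic_j / energy_per_box_j)

# Hypothetical figures: 300 kg rider plus motorcycle, 10 m drop,
# 2000 J assumed absorbed per crushed box.
print(boxes_needed(300.0, 10.0, 2000.0))  # prints 15
```

A real model would also have to bound the peak deceleration the stunt person experiences and account for stack geometry; this sketch only ties weight and height to a box count.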
Note that in the 1997 film "Tomorrow Never Dies," the James Bond character, on a motorcycle, jumps over a helicopter.

# Problem B: Gamma Knife Treatment Planning

Stereotactic radiosurgery delivers a single high dose of ionizing radiation to a radiographically well-defined, small intracranial 3D brain tumor without delivering any significant fraction of the prescribed dose to the surrounding brain tissue. Three modalities are commonly used in this area: the gamma knife unit, heavy charged particle beams, and external high-energy photon beams from linear accelerators.

The gamma knife unit delivers a single high dose of ionizing radiation emanating from 201 cobalt-60 unit sources through a heavy helmet. All 201 beams simultaneously intersect at the isocenter, resulting in a spherical (approximately) dose distribution at the effective dose levels. Irradiating the isocenter to deliver dose is termed a "shot." Shots can be represented as different spheres. Four interchangeable outer collimator helmets with beam-channel diameters of 4, 8, 14, and 18 mm are available for irradiating different size volumes. For a target volume larger than one shot, multiple shots can be used to cover the entire target. In practice, most target volumes are treated with 1 to 15 shots. The target volume is a bounded, three-dimensional digital image that usually consists of millions of points.

The goal of radiosurgery is to deplete tumor cells while preserving normal structures. Since there are physical limitations and biological uncertainties involved in this therapy process, a treatment plan needs to account for all those limitations and uncertainties. In general, an optimal treatment plan is designed to meet the following requirements:

1. Minimize the dose gradient across the target volume.
2. Match specified isodose contours to the target volumes.
3. Match specified dose-volume constraints of the target and critical organ.
4. Minimize the integral dose to the entire volume of normal tissues or organs.
5. Constrain dose to specified normal tissue points below tolerance doses.
6. Minimize the maximum dose to critical volumes.

In gamma unit treatment planning, we have the following constraints:

1. Prohibit shots from protruding outside the target.
2. Prohibit shots from overlapping (to avoid hot spots).
3. Cover the target volume with effective dosage as much as possible; at least $90\%$ of the target volume must be covered by shots.
4. Use as few shots as possible.

Your tasks are to formulate the optimal treatment planning for a gamma knife unit as a sphere-packing problem, and to propose an algorithm to find a solution. In designing your algorithm, keep in mind that it must be reasonably efficient.

# The Results

The solution papers were coded at COMAP headquarters so that names and affiliations of the authors would be unknown to the judges. Each paper was then read preliminarily by two "triage" judges either at Appalachian State University (Problem A) or at the National Security Agency (Problem B). At the triage stage, the summary and overall organization are the basis for judging a paper. If the judges' scores diverged for a paper, the judges conferred; if they still did not agree on a score, a third judge evaluated the paper.

This year, an additional Regional Judging site was created at the U.S. Military Academy to support the growing number of contest submissions.

Final judging took place at Harvey Mudd College, Claremont, California. The judges classified the papers as follows:
| | Outstanding | Meritorious | Honorable Mention | Successful Participation | Total |
|---|---|---|---|---|---|
| Stunt Person | 6 | 36 | 97 | 128 | 267 |
| Gamma Knife Treatment | 5 | 34 | 56 | 130 | 225 |
| Total | 11 | 70 | 153 | 258 | 492 |
+ +The 11 papers that the judges designated as Outstanding appear in this special issue of The UMAP Journal, together with commentaries. We list those teams and the Meritorious teams (and advisors) below; the list of all participating schools, advisors, and results is in the Appendix. + +# Outstanding Teams + +Institution and Advisor + +Team Members + +# Stunt Person Papers + +"Safe Landings" + +California Institute of Technology + +Pasadena, CA + +Darryl H. Yong + +Chad T. Kishimoto + +Justin C. Kao + +Jeffrey A. Edlund + +"A Time-Independent Model of Box Safety for Stunt Motorcyclists" + +Harvard University + +Cambridge, MA + +Clifford H. Taubes + +Ivan Corwin + +Sheel Ganatra + +Nikita Rozenblyum + +"Thinking Outside the Box and Over the Elephant" + +Harvey Mudd College + +Claremont, CA + +Jon Jacobsen + +Melissa J. Banister + +Matthew Macauley + +Micah J. Smukler + +"You Too Can Be James Bond" + +Southeast University + +Nanjing, China + +Chen Enshui + +Deng Xiaowei + +Xu Wei + +Zhang Zhenyu + +"Cardboard Comfortable When It Comes to Crashing" + +University of Washington + +Seattle, WA + +James Allen Morrow + +Jeffrey Giansiracusa + +Ernie Esser + +Simon Pai + +"Fly With Confidence" + +Zhejiang University + +Hangzhou, China + +Tan Zhiyi + +Hu Yuxiao + +Hua Zheng + +Zhou Enlu + +# Gamma Knife Treatment Papers + +"The Genetic Algorithm-Based Optimization Approach for Gamma Unit Treatment" + +Donghua University + +Sun Fei + +Shanghai, China + +Yang Lin + +Ding Yongsheng + +Wang Hong + +"A Sphere-Packing Model for the Optimal Treatment Plan" + +Peking University + +Long Yun + +Beijing, China + +Ye Yungqing + +Liu Xufeng + +Wei Zhen + +"The Gamma Knife Problem" + +University of Colorado + +Darin W. Gillis + +Boulder, CO + +David R. Lindstone + +Anne M. Dougherty + +Aaron T. 
Windfield

"Shelling Tumors with Caution and Wiggles"

University of Washington

Seattle, WA

James Allen Morrow

Luke Winstrom

Sam Coskey

Mark Blunk

"Shoot to Kill!"

Youngstown State University

Youngstown, OH

Angela Spalsbury

Sarah Grove

Chris Jones

Joel Lepak

# Meritorious Teams

Stunt Person Papers (36 teams)

Asbury College, Wilmore, KY (Ken Rietz)

California Polytechnic State University, San Luis Obispo, CA (Jonathan E. Shapiro)

Central Washington University, Ellensburg, WA (Stuart F. Boersma)

City Univ. of Hong Kong, China (Jonathan J. Wylie)

Duke University, Durham, NC (David Kraines)

Earlham College, Richmond, IN (Charlie Peck) (two teams)

Harvey Mudd College, Claremont, CA (Ran Libeskind-Hadas) (two teams)

James Madison University, Harrisonburg, VA (Joseph W. Rudmin)

Kansas State University, Manhattan, KS (Dave Auckly, Mikil Foss, and Marianne Korten)

Luther College, Decorah, IA (Reginald D. Laursen)

Maggie Walker Governor's School, Richmond, VA (John A. Barnes)

Massachusetts Institute of Technology, Cambridge, MA (Martin Z. Bazant)

Mesa State College, Grand Junction, CO (Edward K. Bonan-Hamada)

Messiah College, Grantham, PA (Lamarr C. Widmer)

N.C. School of Science and Mathematics, Durham, NC (Dot Doyle)

National University of Defence Technology, China (Ziyang Mao)

Rose-Hulman Institute of Technology, Terre Haute, IN (David J. Rader)

Southeastern Oklahoma State University, Durant, OK (Brett M. Elliott)

Southern Oregon University, Ashland, OR (Kemble R. Yates)

State University of West Georgia, Carrollton, GA (Scott Gordon)

Tri-State University, Angola, IN (Steven A. Schonefeld)

Truman State University, Kirksville, MO (Steve J. Smith)

United States Air Force Academy, USAF Academy, CO (James S. Rolf)

United States Military Academy, West Point, NY (Frank A.
Wattenberg)

University College Cork, Ireland (James Gleeson)

University College Cork, Ireland (Donal J. Hurley)

University of Alaska Fairbanks, Fairbanks, AK (Chris M. Hartman)

University of New South Wales, Sydney, Australia (James W. Franklin)

University of Puget Sound, Tacoma, WA (Michael S. Casey)

University of San Diego, San Diego, CA (Jeffrey H. Wright)

University of Science and Technology of China, China (Li Yu)

Worcester Polytechnic Institute, Worcester, MA (Suzanne L. Weekes)

Xidian University, China (Zhou Shui-Sheng)

York University, Toronto, Ontario, Canada (Huaxiong Huang)

Gamma Knife Treatment Papers (34 teams)

Bethel College, St. Paul, MN (William M. Kinney) (two teams)

Boston University, Boston, MA (Glen R. Hall)

California Polytechnic State University, San Luis Obispo, CA (Jonathan E. Shapiro)

Dalhousie University, Canada (Dorothea A. Pronk)

Hangzhou University of Commerce, China (Ding Zhengzhong)

Harvey Mudd College, Claremont, CA (Jon Jacobsen)

Hong Kong Baptist University, China (Chong-Sze Tong)

Kenyon College, Gambier, OH (Keith E. Howard)

Lawrence Technological University, Southfield, MI (Ruth G. Favro)

Northwestern Polytechnical University, China (Peng Guohua)

Rensselaer Polytechnic Institute, Troy, NY (Peter R. Kramer)

Rowan University, Glassboro, NJ (Hieu D. Nguyen)

Saint Louis University, St. Louis, MO (James E. Dowdy)

Science Institution of Northeastern University, China (Sun Ping)

Shanghai Jiaotong University, China (Baorui Song)

Shanghai Jiaotong University, China (Jianguo Huang)

Simpson College, Indianola, IA (Werner Kolln)

South China University of Technology, China (Jianliang Lin)

Southeast Missouri State University, Cape Girardeau, MO (Robert W. Sheets)

Tianjin University, China (Zeyi Liu)

Trinity University, San Antonio, TX (Robert W. Laird)

Tsinghua University, China (Jun Ye)

University College Dublin, Ireland (Maria G. Meehan)

University of Arizona, Tucson, AZ (Bruce J.
Bayly)

University of Colorado at Boulder, Boulder, CO (Anne M. Dougherty)

University of Delaware, Newark, DE (Louis F. Rossi)

University of Richmond, Richmond, VA (Kathy W. Hoke)

University of Saskatchewan, Canada (Rainer Dick)

University of Saskatchewan, Canada (James A. Brooke)

Washington University, St. Louis, MO (Hiro Mukai)

Westminster College, New Wilmington, PA (Barbara T. Faires)

Wuhan University of Technology, China (Peng Sijun)

Xidian University, China (Zhang Zhuo-kui)

# Awards and Contributions

Each participating MCM advisor and team member received a certificate signed by the Contest Director and the appropriate Head Judge.

INFORMS, the Institute for Operations Research and the Management Sciences, recognized the teams from Zhejiang University (Stunt Person Problem) and University of Washington (Gamma Knife Treatment Problem) as INFORMS Outstanding teams and provided the following recognition:

- a letter of congratulations from the current president of INFORMS to each team member and to the faculty advisor;
- a check in the amount of $300 to each team member;
- a bronze plaque for display at the team's institution, commemorating their achievement;
- individual certificates for team members and faculty advisor as a personal commemoration of this achievement;
- a one-year student membership in INFORMS for each team member, which includes their choice of a professional journal plus the OR/MS Today periodical and the INFORMS society newsletter; and
- a one-year subscription access to the COMAP modeling materials Website for the faculty advisor.

The Society for Industrial and Applied Mathematics (SIAM) designated one Outstanding team from each problem as a SIAM Winner. The teams were from California Institute of Technology (Stunt Person Problem) and University of Colorado (Gamma Knife Treatment Problem).
Each of the team members was awarded a $300 cash prize, and the teams received partial expenses to present their results in a special Minisymposium at the SIAM Annual Meeting in Montreal, Canada, in June. Their schools were given a framed, hand-lettered certificate in gold leaf.

The Mathematical Association of America (MAA) designated one Outstanding team from each problem as an MAA Winner. The teams were from University of Washington (Stunt Person Problem) and Youngstown State University (Gamma Knife Treatment Problem). With partial travel support from the MAA, both teams presented their solutions at a special session of the MAA Mathfest in Boulder, CO in August. Each team member was presented a certificate by Richard S. Neal, Co-Chair of the MAA Committee on Undergraduate Student Activities and Chapters.

# Judging

Director

Frank R. Giordano, Naval Postgraduate School, Monterey, CA

Associate Directors

Robert L. Borrelli, Mathematics Dept., Harvey Mudd College, Claremont, CA

Patrick Driscoll, Dept. of Mathematical Sciences, U.S. Military Academy, West Point, NY

Contest Coordinator

Kevin Darcy, COMAP Inc., Lexington, MA

# Stunt Person Problem

Head Judge

Marvin S. Keener, Executive Vice-President, Oklahoma State University, Stillwater, OK (MAA)

Associate Judges

William C. Bauldry, Chair, Dept. of Mathematical Sciences, Appalachian State University, Boone, NC (Triage)

Kelly Black, Mathematics Dept., University of New Hampshire, Durham, NH (SIAM)

Courtney Coleman, Mathematics Dept., Harvey Mudd College, Claremont, CA

Lisette De Pillis, Mathematics Dept., Harvey Mudd College, Claremont, CA

Ben Fusaro, Mathematics Dept., Florida State University, Tallahassee, FL (SIAM)

Mario Juncosa, RAND Corporation, Santa Monica, CA

Michael Moody, Mathematics Dept., Harvey Mudd College, Claremont, CA

John L.
Scharf, Mathematics Dept., Carroll College, Helena, MT (COMAP HiMCM)

Dan Solow, Mathematics Dept., Case Western Reserve University, Cleveland, OH (INFORMS)

Michael Tortorella, Lucent Technologies, Holmdel, NJ

Daniel Zwillinger, Newton, MA (author)

# Gamma Knife Treatment Problem

Head Judge

Maynard Thompson, Mathematics Dept., Indiana University, Bloomington, IN

Associate Judges

Peter Anspach, National Security Agency, Ft. Meade, MD (Triage)

Karen D. Bolinger, Mathematics Dept., Ohio State University, Columbus, OH

James Case, Baltimore, MD (SIAM)

J. Douglas Faires, Youngstown State University, Youngstown, OH (MAA)

William P. Fox, Mathematics Dept., Francis Marion University, Florence, SC (MAA)

Jerry Griggs, Mathematics Dept., University of South Carolina, Columbia, SC (author rep)

John Kobza, Mathematics Dept., Texas Tech University, Lubbock, TX (INFORMS)

Veena Mendiratta, Lucent Technologies, Naperville, IL

Don Miller, Mathematics Dept., St. Mary's College, Notre Dame, IN (SIAM)

Kathleen Shannon, Mathematics Dept., Salisbury State University, Salisbury, MD

Marie Vanisko, Dept. of Mathematics, Engineering, Physics, and Computer Science, Carroll College, Helena, MT (MAA)

# Regional Judging Session

Head Judge

Patrick Driscoll, Dept. of Mathematical Sciences

Associate Judges

Edward Pohl, Dept. of Systems Engineering

Michael Jaye, Dept. of Mathematical Sciences

Darrall Henderson, Dept. of Mathematical Sciences

Steven Horton, Dept. of Mathematical Sciences

—all of the U.S. Military Academy, West Point, NY

# Triage Sessions:

# Stunt Person Problem

Head Triage Judge

William C. Bauldry, Chair, Dept. of Mathematical Sciences, Appalachian State University, Boone, NC

Associate Judges

Terry Anderson, Dept. of Mathematical Sciences

Anthony Calamai, Physics Dept.

Mark Ginn, Dept. of Mathematical Sciences

Andrew Graham, Physics Dept.

Rick Klima, Dept. 
of Mathematical Sciences + +—all from Appalachian State University, Boone, NC + +Dan Warner, Dept. of Mathematical Sciences, Clemson University, + +Clemson, SC + +Richard West, Mathematics Dept., Francis Marion University, Florence, SC + +# Gamma Knife Treatment Problem + +Head Triage Judge + +Peter Anspach, National Security Agency (NSA), Ft. Meade, MD + +Associate Judges + +James Case, Baltimore, Maryland + +Antonia Bluher, Stuart Gott, Blair Kelly, Dean McCullough (retired), Craig Orr, + +Brian Pilz, and other members of NSA. + +# Sources of the Problems + +The Stunt Person Problem was contributed by Dan Zwillinger, Newton, MA. The Gamma Knife Treatment Problem was contributed by Jie Wang. + +# Acknowledgments + +Major funding for the MCM is provided by the National Security Agency and by COMAP. We thank Dr. Gene Berg of NSA for his coordinating efforts. Additional support is provided by the Institute for Operations Research and the Management Sciences (INFORMS), the Society for Industrial and Applied Mathematics (SIAM), and the Mathematical Association of America (MAA). We are indebted to these organizations for providing judges and prizes. + +We thank the MCM judges and MCM Board members for their valuable and unflagging efforts. Harvey Mudd College, its Mathematics Dept. staff, and Prof. Borrelli were gracious hosts to the judges. + +# Cautions + +To the reader of research journals: + +Usually a published paper has been presented to an audience, shown to colleagues, rewritten, checked by referees, revised, and edited by a journal editor. Each of the student papers here is the result of undergraduates working on a problem over a weekend; allowing substantial revision by the authors could give a false impression of accomplishment. So these papers are essentially au naturel. 
Editing (and sometimes substantial cutting) has taken place: Minor errors have been corrected, wording has been altered for clarity or economy, and style has been adjusted to that of The UMAP Journal. Please peruse these student efforts in that context. + +To the potential MCM Advisor: + +It might be overpowering to encounter such output from a weekend of work by a small team of undergraduates, but these solution papers are highly atypical. A team that prepares and participates will have an enriching learning experience, independent of what any other team does. + +COMAP's Mathematical Contest in Modeling and Interdisciplinary Contest in Modeling are the only international modeling contests in which students work in teams. Centering its educational philosophy on mathematical modeling, COMAP uses mathematical tools to explore real-world problems. It serves the educational community as well as the world of work by preparing students to become better-informed and better-prepared citizens. + +# Appendix: Successful Participants + +KEY: + +$\mathrm{P} =$ Successful Participation + +A = Stunt Person Problem + +H = Honorable Mention + +B = Gamma Knife Treatment Problem + +M = Meritorious + +$\mathrm{O} =$ Outstanding (published in this special issue) + +
INSTITUTIONCITYADVISORAB
ALASKA
University of AlaskaFairbanksChris M. HartmanM
ARKANSAS
Arkansas Schl for Math. & Sci.Hot SpringsBruce E. TurkalH,P
Hendrix CollegeConwayDuff Gordon CampbellP
CALIFORNIA
California Institute of Tech.PasadenaDarryl H. YongO
California Polytech. State Univ.San Luis ObispoJonathan E. ShapiroMM
Calif. State Univ., Monterey BaySeasideHongde HuH
Calif. State Univ., BakersfieldBakersfieldDavid GoveH
Calif. State Univ., NorthridgeNorthridgeGholam Ali ZakeriH,P
Calif. State Univ., StanislausTurlockBrian JueP
Harvey Mudd CollegeClaremontJon JacobsenOM
Ran Libeskind-HadasM,M
Monta Vista High SchoolCupertinoI-Heng McCombH,P
Pomona CollegeClaremontAmi E. RadunskayaP
Sonoma State UniversityRohnert ParkElaine T. McDonaldP
University of San DiegoSan DiegoJeffrey H. WrightM,P
COLORADO
Colorado CollegeColorado SpringsPeter L. StaabHP
Colorado State UniversityFort CollinsMichael J. KirbyP
PuebloBruce N. LundbergP
U.S. Air Force AcademyUSAF AcademyJames S. RolfM
University of ColoradoBoulderAnne M. DoughertyO,M
Mesa State CollegeGrand JunctionEdward K. Bonan-HamadaM
CONNECTICUT
Sacred Heart UniversityFairfieldPeter LothH
Western Conn. State Univ.DanburyCharles F. Rocca Jr.P
DELAWARE
University of DelawareNewarkLouis F. RossiHM
DISTRICT OF COLUMBIA
Georgetown UniversityWashingtonAndrew VogtP
FLORIDA
Embry-Riddle Aeronaut. Univ.Daytona BeachGreg Scott Spradlin
Florida Gulf Coast UniversityFort MyersCharles Lindsey
Jacksonville UniversityJacksonvillePaul R. Simony
ILLINOIS
Greenville CollegeGreenvilleGalen R. PetersHP
Hugh E. SiefkenP
Monmouth CollegeMonmouthChristopher G. FasanoP
Wheaton CollegeWheatonPaul IsiharaP
INDIANA
Earlham CollegeRichmondTimothy J. McLarnanP
Charlie PeckM,M
Jennifer Joy ZiebarthH
Goshen CollegeGoshenDavid HousmanH
Rose-Hulman Inst. of Tech.Terre HauteDavid J. RaderMP
Cary LaxterP
Saint Mary's CollegeNotre DameJoanne R. SnowHP
Tri-State UniversityAngolaSteven SchonefeldM
IOWA
Grand View CollegeDes MoinesSergio LochP,P
Grinnell CollegeGrinnellAlan R. WolfP,P
Mark MontgomeryH,P
Luther CollegeDecorahReginald D. LaursenM,H
Simpson CollegeIndianolaBruce F. SloanHP
Werner KollnM
Wartburg CollegeWaverlyMariah BirgenH
KANSAS
Kansas State UniversityManhattanDave Auckly, Mikil Foss, and Marianne KortenMP
KENTUCKY
Asbury CollegeWilmoreKen RietzMP
Bellarmine UniversityLouisville,William J. HardinH
Northern Kentucky Univ.Highland HeightsGail S. MackinH
MAINE
Colby CollegeWatervilleJan HollyH,P
MARYLAND
Goucher CollegeBaltimoreRobert E. LewandH,P
Hood CollegeFrederickBetty MayfieldP
Loyola CollegeBaltimoreChristos XenophontosH,H
Mount St. Mary's CollegeEmmitsburgFred J. PortierH,P
Salisbury UniversitySalisburySteven M. HetzlerH
Towson UniversityTowsonMike P. O'LearyH,P
Washington CollegeChestertownEugene P. HamiltonP,P
# MASSACHUSETTS

Boston University, Boston (Glen R. Hall)

Harvard University, Cambridge (Clifford H. Taubes)

Massachusetts Inst. of Tech., Cambridge (Martin Z. Bazant)

Olin College of Engineering, Needham (Burt S. Tilley)

INSTITUTIONCITYADVISORAB
MICHIGAN
Calvin CollegeGrand RapidsGary W. TalsmaP
Eastern Michigan UniversityYpsilantiChristopher E. HeePP
Hillsdale CollegeHillsdaleJohn P. BoardmanH
Hope CollegeHollandAaron C. CinzoriP
Lawrence Tech. Univ.SouthfieldRuth G. FavroHM
Valentina TobosP
Siena Heights UniversityAdrianToni CarrollH,P
Timothy H. HusbandP
University of MichiganDearbornDavid JamesP
MINNESOTA
Augsburg CollegeMinneapolisNicholas CoultP
Bemidji State UniversityBemidjiColleen G. LivingstonPH
Bethel CollegeSt. PaulWilliam M. KinneyM,M
College of St. Benedict and St. John's UniversityCollegevilleRobert J. HesseH,P
Gustavus Adolphus CollegeSt. PeterThomas P. LoFaroH
Macalester CollegeSt. PaulDaniel T. KaplanHP
Elizabeth G. ShoopP
MISSOURI
NW Missouri State Univ.MaryvilleRussell N. EulerP,P
Saint Louis UniversitySt. LouisDavid JacksonP
Stephen BlytheH
James E. DowdyM
SE Missouri State Univ.Cape GirardeauRobert W. SheetsM
Truman State UniversityKirksvilleSteve J. SmithM,H
Washington UniversitySt. LouisHiro MukaiM,H
MONTANA
Carroll CollegeHelenaHolly S. ZulloH
NEBRASKA
Hastings CollegeHastingsDavid CookeH,H
NEVADA
Sierra Nevada CollegeIncline VillageCharles LevitanP
NEW JERSEY
Rowan UniversityGlassboroHieu D. NguyenM
William Paterson Univ.WayneDonna J. Cedio-FengyaP
NEW MEXICO
New Mexico State Univ.Las CrucesCaroline P. SweezyP
NEW YORK
Cornell University, Ithaca (Alexander Vladimirsky)

Hobart & William Smith Coll., Geneva (Scotty L. Orr)

Manhattan College, Riverdale (Kathryn W. Weld)

Marist College, Poughkeepsie (Tracey McGrail)

Nazareth College, Rochester (Nelson G. Rich)

Rensselaer Polytechnic Inst., Troy (Peter R. Kramer)

H

HP

INSTITUTIONCITYADVISORAB
NORTH CAROLINA
Appalachian State UniversityBooneEric S. MarlandP
Brevard CollegeBrevardC. Clarke WellbornP
Davidson CollegeDavidsonLaurie J. HeyerHH
Duke UniversityDurhamDavid KrainesM
Elon UniversityElonTodd LeeH
Meredith CollegeRaleighCammey E. ColeP
N.C. School of Sci. and Math.DurhamDot DoyleM
North Carolina State Univ.RaleighJeffrey S. ScroggsP
OHIO
Kenyon CollegeGambierKeith E. HowardM
Miami UniversityOxfordStephen E. WrightP
Mount Vernon Nazarene Univ.Mount VernonJohn T. NoonanH
Wright State UniversityDaytonThomas P. SvobodnyP
Youngstown State UniversityYoungstownAngela SpalsburyHO
Michael CrescimannoH
Scott MartinP
Xavier UniversityCincinnatiMichael GoldweberH
OKLAHOMA
Southeastern Okla. State Univ.DurantBrett M. ElliottM
OREGON
Eastern Oregon UniversityLa GrandeAnthony TovarH
Lewis and Clark CollegePortlandRobert W. OwensHH
Thomas OlsenP
Southern Oregon UniversityAshlandKemble R. YatesM
PENNSYLVANIA
Bloomsburg UniversityBloomsburgKevin K. FerlandP
Bucknell UniversityLewisburgSally KoutsoliotasP
Clarion Univ. of PennsylvaniaClarionPaul G. AshcraftP,P
Steve I. GendlerP
Gettysburg CollegeGettysburgJames P. Fink and
Sharon StephensonH
Juniata CollegeHuntingdonJohn F. BukowskiP
Lafayette CollegeEastonThomas HillH
Messiah CollegeGranthamLamarr C. WidmerM
University of PittsburghPittsburghJonathan E. RubinH
Westminster CollegeNew WilmingtonBarbara T. FairesHM
SOUTH CAROLINA
Benedict CollegeColumbiaBalaji S. IyangarP
Francis Marion UniversityFlorenceThomas FitzkeeP
SOUTH DAKOTA
Mount Marty CollegeYanktonJim MinerP
Shane K. Miner
SD School of Mines and Tech.Rapid CityKyle RileyP
VERMONT
Johnson State CollegeJohnsonGlenn D. SproulP
VIRGINIA
Chesterfield County
Math. and Sci. HSMidlothianZorica SkoroH
Eastern Mennonite UniversityHarrisonburgJohn L. HorstPP
Charles D. CooleyP
James Madison UniversityHarrisonburgDorn W. PetersonH
Joseph W. RudminM
Maggie Walker Governor's SchlRichmondJohn A. BarnesMP
Mills E. Godwin High SchoolRichmondAnn W. SebrellP
Randolph-Macon CollegeAshlandBruce F. TorrenceH,P
Roanoke CollegeSalemJeffrey L. SpielmanH
Roland MintonP
University of RichmondRichmondKathy W. HokeM
U. of Virginia's College at WiseWiseGeorge W. MossP
Virginia WesternRoanokeRuth ShermanH
Dawn ReinhardH
WASHINGTON
Central Washington Univ.EllensburgStuart F. BoersmaM
Gonzaga UniversitySpokaneThomas M. McKenzieP
Heritage CollegeToppenishRichard W. SwearingenPP
Pacific Lutheran UniversityTacomaMei ZhuH,H
University of Puget SoundTacomaMichael S. CaseyM,P
John E. RiegseckerH
University of WashingtonSeattleJames Allen MorrowOO
Western Washington Univ.BellinghamTjalling J. YpmaH,H
Saim UralH
WISCONSIN
Beloit CollegeBeloitPaul J. CampbellP
Edgewood CollegeMadisonSteven PostH
St. Norbert CollegeDe PereJohn FrohligerH
University of WisconsinRiver FallsKathy TomlinsonH,H
AUSTRALIA
Univ. of New South WalesSydneyJames W. FranklinMH
CANADA
Dalhousie UniversityHalifax, NSDorothea PronkM
University of Western OntarioLondon, ONMartin H. MuserH
University of SaskatchewanSaskatoon, SKRainer DickM
James BrookeM
York UniversityToronto, ONHuaxiong HuangMH
CHINA
Anhui UniversityHefeiZhang QuanbingP
BeiHang UniversityBeijingLiu HongyingH
Beijing Institute of TechnologyBeijingZhang Bao XueP
Cui Xiao DiP
Beijing Univ. of Chemical Tech.BeijingYuan WenyanH
Shi XiaodingP
Jiang GuangfengP
Cheng YanP
Beijing Univ. of Posts and Tel.BeijingHe ZuguoH
Sun HongxiangH
Wang XiaoxiaP
Luo ShoushanH
Beijing Univ. of TechnologyBeijingYi XueP
Gao LuduanH,P
Central South UniversityChangshaChen XiaosongH
Zheng ZhoushunP,P
Qin XuanyunH
China Univ. of Mining Tech.XuzhouZhang XingyongP
Zhou ShengwuP
Wu ZongxiangP
Chongqing UniversityChongqingShu YongluP
Yang DadiP
Gong QuH
Duan ZhengminH
Zhang XingyouH
Dalian UniversityDalianTan XinxinPH
Dalian Univ. of Tech.DalianHe MingfengP,P
Wang YiP
Zhao LizhongP
Dong Hua UniversityShanghaiDing YongshengO
Feng YiliP
He GuoxingH
You SurongH
East China U. of Sci. and Tech.ShanghaiSu ChunjieP,P
Lu YuanhongH,P
Fudan UniversityShanghaiCai ZhijieH
Cao YuanP
Guangxi UniversityNanningLu YuejinPP
Huang XinmingP
Hangzhou Univ. of CommerceHangzhouDing ZhengzhongPM
Zhao HengP,P
Harbin Engineering Univ.HarbinZhang XiaoweiPP
Gao ZhenbinP
Harbin Institute of Tech.HarbinShang ShoutingH,P
Zhang ChipingP
Wang XuefengP
Harbin Univ. of Sci. and Tech.HarbinLi DongmeiHP
Tian GuangyueP
Hebei UniversityBaodingMa GuodongP
Fan TieGangP
Hua QiangH
Wang XiZhaoP
INSTITUTIONCITYADVISORAB
Institute of Theoretical PhysicsBeijingJiang DaquanH
Jiamusi UniversityJiamusiBai FengshanP
Wei FanP
Gu LizhiP
Jilin UniversityChangchunFang PeichenP,P
Huang QingdaoP,P
Jinan UniversityGuangzhouFan SuohaiH
Ye ShiqiP
Luo ShizhuangP
Hu DaiqiangP
Nanchang UniversityNanchangMa XinshengH,P
Chen TaoP
Nanjing Normal UniversityNanjingFu ShitaiP
Hu YongP
Qu WeiguangP
Chen XinP
Nanjing UniversityNanjingWu ZhaoyangP
Hao PanP
Nanjing Univ. of Sci. & Tech.NanjingHuang ZhenyouP
Wang PinglingP
Wu XinMinP
Zhang ZhengjunP
Nankai UniversityTianjinHuang WuqunP,P
”, School of Math.TianjinRuan JishouP,P
National U. of Defence Tech.ChangshaMao ZiyangM,P
Wu MengdaH,P
Northeast Agricultural Univ.HarbinLi FanggePP
Hu XiaobingP
Northeastern UniversityShenyangHao PeifengP
Zhang ShujunH,P
”, Info. Sci. and Eng. Inst.ShenyangHao PeifengP
”, Mechanical Inst.ShenyangHe XueHongHH
Jing YuanWeiP,P
”, Science Inst.ShenyangHan TieminPP
Sun PingM,H
Northern Jiaotong Univ.BeijingWang BingtuanP,P
Wang XiaoxiaP
Gui WenhaoH
Northwestern Polytech. Univ.Xi'anXu WeiP
Peng GuohuaM
Zhang LiningP
Nie YufengH
Peking UniversityBeijingWang MingP,P
Liu XufengO,P
Shandong UniversityJinanCui YuquanP
Liu Bao dongP
Shanghai Foreign Lang. SchlShanghaiSun YuP,P
Li Qun PanH
Shanghai Jiading No. 1 HSShanghaiXie XilinP
Yu LiP
INSTITUTIONCITYADVISORAB
Shanghai Normal Univ.ShanghaiCong Yuhao and Guo ShenghuanP
Zhang JizhouH
Tang YincaiP
Liu RongguanP
Shanghai Univ. of Finance and Econ.ShanghaiYang XiaobinP
South China Normal UniversityGuangzhouWang HenggengH,P
", Art Ed. CtrGuangzhouYi LouP
South China Univ. of Tech.GuangzhouYi HongP
Lin JianliangM
Zhu FengfengP
Liang ManfaP
Southeast UniversityNanjingChen EnshuiOP
Zhang ZhiqiangP,P
Tianjin Polytechnic UniversityTianjinZhou JunmingP
Huang DongweiH
Tianjin UniversityTianjinLiu ZeyiM
Dan LinP
Rong XiminP
Liang FengzhenP
Tsinghua UniversityBeijingJiang QiyuanH
Jun YeM,P
Hu ZhimingHP
U. of Electronic Sci. and Tech.ChengduQin SiyiP,P
Du HongfeiP,P
Univ. of Sci. and Tech. of ChinaHefeiYang ZhouwangH
Li YuM
Le XuliP
Wuhan UniversityWuhanHu YuanmingH,H
Zhong LiuyiP,H
Chen ShihuaH
Wuhan Univ. of Tech.WuhanPeng SijunM
Huang ZhangcanP
Wang ZhanqingP
Jin ShengpingP
Xi'an Jiaotong UniversityXi'anZhou YicangH
Dai YonghongH
Xi'an Univ. of Tech.Xi'anQin XinQiangP
Cao MaoshengP
Xiamen UniversityXiamenSun HongfeiP
Xidian UniversityXi'anZhang Zhuo-kuiM,H
Zhou Shui-ShengM
Liu Hong-WeiH
Xuzhou Inst. of Tech.XuzhouLi SubeiPH
Zhejiang UniversityHangzhouYang QifanP
Yong HeH
Tan ZhiyiO
FINLAND
Päivölä CollegeTarttilaMerikki LappiHP
Mathematical HS of HelsinkiHelsinkiEsa Ilmari LappiH,P
HONG KONG
City Univ. of Hong KongHong KongKei ShingP,P
Jonathan J. WylieM
Yiu-Chung HonP
Hong Kong Baptist Univ.Hong KongWai Chee ShiuP
Chong-Sze TongM
INDONESIA
Institut Teknologi BandungBandungRieske HadiantiH
Kuntjoro Adji SidartoH
IRELAND
University College CorkCorkJames GleesonM
Donal J. HurleyM,H
University College DublinBelfieldMaria G. MeehanPM
Ted CoxHH
SINGAPORE
National Univ. of SingaporeSingaporeVictor TanP
SOUTH AFRICA
University of StellenboschStellenboschJan H. van VuurenHP
+ +# Editor's Note + +For team advisors from China, we have endeavored to list family name first. + +# Safe Landings + +Chad T. Kishimoto + +Justin C. Kao + +Jeffrey A. Edlund + +California Institute of Technology + +Pasadena, California 91125 + +Advisor: Darryl H. Yong + +# Abstract + +We examine the physical principles of stunt crash pads made of corrugated cardboard boxes and build a mathematical model to describe them. The model leads to a computer simulation of a stunt person impacting a box catcher. Together, the simulation and model allow us to predict the behavior of box catchers from physical parameters and hence design them for maximum safety and minimum cost. + +We present two case studies of box-catcher design, a motorcyclist landing after jumping over an elephant and David Blaine's televised Vertigo stunt. These demonstrate the ability of our model to handle both high-speed impacts and large weights. For each case, we calculate two possible box-catcher designs, showing the effects of varying design parameters. Air resistance is the dominant force with high impact speeds, while box buckling provides greater resistance at low speeds. We also discuss other box-catcher variations. + +# Basic Concept + +# Requirements + +A falling stunt person has a large amount of kinetic energy, + +$$ +U _ {\mathrm {s}} = \frac {1}{2} m _ {\mathrm {s}} u _ {\mathrm {s}} ^ {2}. +$$ + +To land safely, most of it must be absorbed by a catcher before the performer hits the ground. Therefore, following Newton's Second Law, the catcher must exert a force to decelerate the performer, + +$$ +F = m \frac {d u _ {\mathrm {s}}}{d t}. +$$ + +The total energy absorbed should equal the performer's initial energy (kinetic and potential), + +$$ +\int F d z = U _ {\mathrm {s} _ {0}} + V _ {\mathrm {s} _ {0}}. +$$ + +However, the catcher itself cannot exert too large a force or it would be no better than having the performer hit the ground in the first place. 
We therefore set a maximum force, $F_{\mathrm{max}}$. The smaller we make $F$, the larger (and more expensive) the box catcher has to be. Therefore, to save both money and life, we would like to have

$$
F \approx (1 - \delta) F _ {\max},
$$

where $\delta$ is a safety margin $(0 < \delta < 1)$.

# The Box Catcher

A box catcher consists of many corrugated cardboard boxes, stacked in layers, possibly with modifications such as ropes to keep the boxes together or inserted sheets of cardboard to add stability and distribute forces. When the stunt person falls into the box catcher, the impact crushes boxes beneath. As a box collapses, not only does the cardboard get torn and crumpled, but the air inside is forced out, providing a force resisting the fall that is significant but not too large. As the performer passes through the layers, each layer takes away some kinetic energy.

# Modeling the Cardboard Box

We examine in detail the processes involved when a stunt person vertically impacts a single cardboard box. This analysis allows us to predict the effect of varying box parameters (shape, size, etc.) on the amount of energy absorbed by the box.

# Assumptions: Sequence of Events

Although the impact involves many complex interactions—between the performer's posture, the structure of the box, the air inside the box, the support of the box, the angle and location of impact, and other details—modeling thin-shell buckling and turbulent compressible flow is neither cost-effective for a movie production nor practical for a paper of this nature. We therefore assume and describe separately the following sequence of events in the impact.

1. A force is applied to the top of the cardboard box.
2. The force causes the sides of the box to buckle and lose structural integrity.
3. Air is pushed out of the box as the box is crushed.
4. The box is fully flattened.

We now consider the physical processes at play in each of these stages.

Table 1. Nomenclature.
| Property | Symbol | Units |
|---|---|---|
| Potential energy density | $v$ | J/m³ |
| Young's modulus | $Y$ | Pa |
| Strain in box walls | $\epsilon$ | none |
| Stress in box walls | $S$ | Pa |
| Tensile strength | $T_{\mathrm{S}}$ | Pa |
| Total kinetic energy | $U$ | J |
| Total potential energy | $V$ | J |
| Volume of cardboard in box walls | $\mathcal{V}$ | m³ |
| Distance scale over which buckling is significant | $\Delta H$ | m |
| Width of box | $w$ | m |
| Thickness of box walls | $\tau$ | m |
| Height of box | $\ell$ | m |
| Surface area of the top face of the box | $A$ | m² |
| Proportion of top face through which air escapes | $\alpha$ | none |
| Velocity of stunt person | $u_{\mathrm{s}}$ | m/s |
| Mean velocity of expelled air | $u_{\mathrm{a}}$ | m/s |
| Mass of stunt person (and vehicle, if any) | $m_{\mathrm{s}}$ | kg |
| Density of air | $\rho$ | kg/m³ |
| Acceleration due to gravity ($g = 9.8$ m/s²) | $g$ | m/s² |
+ +# Stage 1: Force Applied + +A force $F(t)$ is applied uniformly over the top surface. The walls of the box expand slightly, allowing the box to compress longitudinally (along the direction of the force). While this applied force is small enough (less than the force necessary to cause buckling), the box absorbs little of the force and transmits to the ground (or next layer of the box catcher) a large fraction of the applied force. The applied force increases until it is on the order of the buckling force $(F(t) \sim F_B)$ . At this point, the box begins to buckle. + +# Stage 2: Box Buckles + +The walls crumple, the box tears, and the top of the box is pressed down. Once the box has lost structural integrity, although the action of deforming the box may present some resistance, the force that the box itself can withstand is greatly diminished. + +Consider the pristine box being deformed by a force applied to its upper face. To counteract this force, the walls of the box expand in the transverse + +directions (perpendicular to the force), creating a strain in the walls. For a given strain $\epsilon$ , the potential energy density $v$ stored in the walls of the box is + +$$ +v = \frac {1}{2} Y \epsilon^ {2} + \mathcal {O} (\epsilon^ {4}), +$$ + +where $Y$ is the Young's modulus for boxboard. For this calculation, the effects of longitudinal (along the direction of the force) contractions in the box walls are negligible compared to the transverse expansion. + +A stress $S$ is created in the box walls as a result of the battle between the wall's pressure to expand and its resistance to expansion, + +$$ +S = Y \epsilon + \mathcal {O} (\epsilon^ {2}). +$$ + +Only a small strain $(\epsilon \ll 1)$ is necessary to cause the box to buckle, so we neglect higher-order terms. There is a point where the box stops expanding and begins to give way to the increasing stress placed on it. 
When the stress in the box walls reaches the tensile strength $T_{\mathrm{S}}$ of the cardboard, the walls lose their structural integrity and the box bursts as it continues to buckle. Thus, we can set a limit on the maximum strain that the box walls can endure,

$$
\epsilon_ {\max } = \frac {T _ {S}}{Y}.
$$

The typical tensile strength of cardboard is on the order of a few MPa, while the Young's modulus is on the order of a few GPa. Thus, we may assume that $\epsilon \ll 1$. The maximum energy density allowed in the walls of the box is now

$$
v _ {\max} = \frac {1}{2} Y \epsilon_ {\max} ^ {2} = \frac {1}{2} \frac {T _ {\mathrm {S}} ^ {2}}{Y}.
$$

The total energy stored in the walls just before the box bursts is

$$
V _ {\max} = \frac {1}{2} \frac {T _ {\mathrm {S}} ^ {2}}{Y} \mathcal {V},
$$

where $\mathcal{V} =$ (lateral surface area) $\times$ (thickness) is the volume of cardboard in the box walls. If we assume that the force to deform the box is constant over the deformation distance, then by conservation of energy, we have

$$
\frac {d U _ {\mathrm {s}}}{d t} + \frac {d V _ {\mathrm {s}}}{d t} = - \frac {d U _ {\mathrm {box}}}{d t} = - \frac {d U _ {\mathrm {box}}}{d z} \frac {d z}{d t} = - \frac {\Delta U _ {\mathrm {box}}}{\Delta H} \frac {d z}{d t} = \frac {u _ {\mathrm {s}}}{2 \Delta H} \frac {T _ {\mathrm {S}} ^ {2}}{Y} \mathcal {V},
$$

where $\Delta H$ is the change in height of the box over the course of the buckling process. Since

$$
m _ {\mathrm {s}} u _ {\mathrm {s}} \frac {d u _ {\mathrm {s}}}{d t} = \frac {d U _ {\mathrm {s}}}{d t},
$$

the change in velocity of the falling stunt person due to buckling the box is

$$
\frac {d u _ {s}}{d t} = \frac {1}{2 m _ {\mathrm {s}} \Delta H} \frac {T _ {\mathrm {S}} ^ {2}}{Y} \mathcal {V}. \tag {1}
$$

To determine $\Delta H$ , we assume an average force $F \approx F_{B}$ , where $F_{B}$ is the buckling force of the cardboard walls. 
The Appendix shows that the buckling force is related to the Young's modulus and other physical parameters by

$$
F _ {B} = Y \frac {\pi^ {2}}{12} \frac {w \tau^ {3}}{\ell^ {2}}. \tag {2}
$$

In this case, $\mathcal{V} = 4\ell w\tau$ for a square box. Then $\Delta H$ can be estimated by

$$
F _ {B} \Delta H = \frac {1}{2} \frac {T _ {S} ^ {2}}{Y} \mathcal {V},
$$

$$
Y \frac {\pi^ {2}}{12} \frac {w \tau^ {3}}{\ell^ {2}} \Delta H = 2 \ell w \tau \frac {T _ {S} ^ {2}}{Y},
$$

$$
\Delta H = \frac {24}{\pi^ {2}} \left(\frac {T _ {\mathrm {S}}}{Y}\right) ^ {2} \left(\frac {\ell}{\tau}\right) ^ {2} \ell .
$$

For a typical box, $\Delta H$ is on the order of a few centimeters.

# Stage 3: Box Is Crushed

Without structural integrity, the box is crushed by the applied force. However, in crushing the box, the air inside must be pushed out.

Let $A$ be the surface area of the top of the box. We make the ad hoc assumption that when the box buckles, it is torn such that air can escape through an area $\alpha A$ . Provided $\alpha \sim \mathcal{O}(1)$ , we can assume incompressible flow, since the area of the opening in the box is of the same order of magnitude as the area being pushed in. By conservation of mass, we obtain the velocity of the air moving out of the box in terms of $\alpha$ and the velocity of the stunt person:

$$
- u _ {\mathrm {s}} A = u _ {\mathrm {a}} (\alpha A) \quad \Longrightarrow \quad u _ {\mathrm {a}} = - u _ {\mathrm {s}} / \alpha .
$$

Using conservation of energy, we equate the change in energy of the stunt person and the air leaving the box,

$$
\frac {d U _ {\mathrm {s}}}{d t} + \frac {d V _ {\mathrm {s}}}{d t} = - \left(\frac {d U _ {\mathrm {a}}}{d t} + \frac {d V _ {\mathrm {a}}}{d t}\right). 
+$$ + +The potential energy of the air does not change significantly as it is ejected, so the energy equation simplifies to + +$$ +\frac {d U _ {\mathrm {s}}}{d t} + \frac {d V _ {\mathrm {s}}}{d t} = - \frac {d U _ {\mathrm {a}}}{d t}. \tag {3} +$$ + +The energy gain of air outside the box is due to air carrying kinetic energy out of the box: + +$$ +\frac {d U _ {\mathrm {a}}}{d t} = \frac {1}{2} \frac {d m _ {\mathrm {a}}}{d t} u _ {\mathrm {a}} ^ {2} = \frac {1}{2} (\rho \alpha A u _ {\mathrm {a}}) u _ {\mathrm {a}} ^ {2}, \tag {4} +$$ + +while the energy loss of the stunt person is from deceleration and falling: + +$$ +\frac {d U _ {\mathrm {s}}}{d t} + \frac {d V _ {\mathrm {s}}}{d t} = m _ {\mathrm {s}} u _ {\mathrm {s}} \frac {d u _ {\mathrm {s}}}{d t} + m _ {\mathrm {s}} g u _ {\mathrm {s}}. \tag {5} +$$ + +Combining (3-5), substituting for $u_{\mathrm{a}}$ , and rearranging, we obtain + +$$ +\frac {\mathrm {d} u _ {\mathrm {s}}}{\mathrm {d} t} = \frac {1}{2} \frac {\rho A}{m _ {\mathrm {s}}} \frac {u _ {\mathrm {s}} ^ {2}}{\alpha^ {2}} - g. \tag {6} +$$ + +# Stage 4: Box Is Flattened + +Once the cardboard box is fully compressed and all the air is pushed out, we assume that the box no longer has any effect on the stunt person. + +# Summary + +In stages 1 and 4, the box is essentially inert—little energy goes into it. Therefore in our mathematical description, we ignore these and concentrate on stages 2 and 3. Stage 2 occurs in the top $\Delta H$ of box deformation, while stage 3 occurs in the remainder of the deformation. Combining (1) and (6), we get + +$$ +\frac {d u _ {\mathrm {s}}}{d t} = \left\{ \begin{array}{l l} \frac {1}{2 m _ {\mathrm {s}} \Delta H} \frac {T _ {\mathrm {S}} ^ {2}}{Y} \mathcal {V}, & | z | < | \Delta H |; \\ \frac {1}{2} \frac {\rho A}{m _ {\mathrm {s}}} \frac {u _ {\mathrm {s}} ^ {2}}{\alpha^ {2}} - g, & | z | \geq | \Delta H |, \end{array} \right. \tag {7} +$$ + +where $z$ is measured from the top of the box. 
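The piecewise law (7) is straightforward to integrate numerically. The sketch below uses simple forward-Euler steps with the Table 2 cardboard values; the box dimensions, the stunt person's mass, and the opening fraction $\alpha$ are assumed for illustration (they are not taken from the paper), and we clip $\Delta H$ at the box height $\ell$, since for these assumed dimensions the buckling scale is comparable to $\ell$:

```python
import math

# Assumed, illustrative parameters (not from the paper).
m_s   = 70.0     # stunt person mass, kg
rho   = 1.2      # air density, kg/m^3
g     = 9.8      # m/s^2
Y     = 1.3e9    # Young's modulus of boxboard, Pa (Table 2)
T_S   = 12.2e6   # tensile strength, Pa (Table 2)
tau   = 5e-3     # wall thickness, m (Table 2)
ell   = 0.5      # box height, m (assumed)
w     = 0.5      # box width, m (assumed)
alpha = 0.5      # fraction of the top face open to escaping air (assumed)

A = w * w                   # top-face area
V_card = 4 * ell * w * tau  # volume of cardboard in the four walls
dH = (24 / math.pi**2) * (T_S / Y)**2 * (ell / tau)**2 * ell
dH = min(dH, ell)           # buckling depth cannot exceed the box height

def fall_through_box(v0, dt=1e-4):
    """Integrate eq. (7): downward speed v (m/s) versus crush depth z."""
    v, z = v0, 0.0
    while z < ell and v > 0:
        if z < dH:   # stage 2: walls buckle, roughly constant resistance
            a = -(0.5 / (m_s * dH)) * (T_S**2 / Y) * V_card
        else:        # stage 3: air is expelled through the torn top
            a = g - 0.5 * (rho * A / m_s) * v**2 / alpha**2
        v += a * dt
        z += max(v, 0.0) * dt
    return max(v, 0.0)
```

Here `fall_through_box(10.0)` returns the residual downward speed after one box is flattened, or 0 if the person stops inside it.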
+ +# Modeling the Box Catcher + +# Cardboard Properties + +We assume that each cardboard box is made of corrugated cardboard with uniform physical properties, using the data shown in Table 2. + +Table 2. Properties of corrugated cardboard [Bever 1986]. + +
| Property | Symbol | Value |
|---|---|---|
| Tensile strength | $T_{\mathrm{S}}$ | 12.2 MPa |
| Thickness | $\tau$ | 5 mm |
| Young's modulus | $Y$ | 1.3 GPa |
+ +# Assumptions + +- Layers of boxes are comprised of identical boxes laid side-by-side over the entire area of the box catcher. +- The box catcher is large enough compared to the stunt person that edge effects are negligible. +- The boxes are held together so that there is no relative horizontal velocity between them. +- Loose cardboard is placed between layers of cardboard boxes, so that any force transmitted through the top layer of boxes is well distributed to lower boxes, ensuring that only one layer is crushed at a time. In other words, we treat each layer of boxes independently and the box catcher as a sequence of layers. + +# Equations of Motion + +Since each layer is independent, the equations of motion for the box catcher look similar to the equation of motion for a single box (7). The equations depend on the dimensions of the boxes in each level, so we solve numerically for the motion of the stunt person. At each level of the box catcher, the performer impacts several boxes (approximately) at once. The number of boxes that act to decelerate the stunt person is the ratio $A_{s} / A$ of the performer's cross-sectional area to the surface area of the top face of a cardboard box. Thus, we alter the equations of motion by this ratio, getting + +$$ +\frac {d u _ {\mathrm {s}}}{d t} = \left\{ \begin{array}{l l} \frac {1}{2 m _ {\mathrm {s}} \Delta H} \frac {T _ {\mathrm {S}} ^ {2}}{Y} \mathcal {V} \frac {A _ {\mathrm {s}}}{A} & | z - z _ {\mathrm {t o p}} | < | \Delta H |; \\ \frac {1}{2} \frac {\rho A _ {\mathrm {s}}}{m _ {\mathrm {s}}} \frac {u _ {\mathrm {s}} ^ {2}}{\alpha^ {2}} - g, & | z - z _ {\mathrm {t o p}} | \geq | \Delta H |, \end{array} \right. \tag {8} +$$ + +where + +$z$ is the vertical distance measured from the top of the stack, + +$z_{\mathrm{top}}$ is the value of $z$ at the top of the current box, and + +$A_{s}$ is the cross sectional area of the stunt person. 
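Because each layer is treated independently, (8) can be integrated layer by layer. A minimal sketch follows; all numerical values (mass, cross-section, boxboard data from Table 2, and $\alpha$) are assumed for illustration, not taken from the paper, and $\Delta H$ is clipped at the box height:

```python
import math

# Assumed, illustrative parameters (not taken from the paper).
m_s, A_s = 70.0, 0.5               # stunt person: mass (kg), cross-section (m^2)
rho, g = 1.2, 9.8                  # air density, gravity
Y, T_S, tau = 1.3e9, 12.2e6, 5e-3  # boxboard properties (Table 2)
alpha = 0.5                        # assumed fraction of top face open to air

def delta_H(ell):
    """Buckling length scale, clipped at the box height."""
    return min((24 / math.pi**2) * (T_S / Y)**2 * (ell / tau)**2 * ell, ell)

def fall_through_stack(v0, layers, dt=1e-4):
    """Integrate eq. (8) layer by layer; layers = [(ell, w), ...], top first.
    Returns (stopped?, index of the layer in which the person stops)."""
    v = v0
    for i, (ell, w) in enumerate(layers):
        A = w * w
        Vc = 4 * ell * w * tau
        dH = delta_H(ell)
        z = 0.0
        while z < ell:
            if v <= 0:
                return True, i
            if z < dH:   # buckling phase, scaled by A_s / A as in eq. (8)
                a = -(0.5 / (m_s * dH)) * (T_S**2 / Y) * Vc * (A_s / A)
            else:        # air-expulsion phase
                a = g - 0.5 * (rho * A_s / m_s) * v**2 / alpha**2
            v += a * dt
            z += max(v, 0.0) * dt
    return v <= 0, len(layers)
```

For example, `fall_through_stack(15.0, [(0.5, 0.5)] * 20)` reports whether a 20-layer stack of half-meter boxes stops a 15 m/s impact, and in which layer.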
+ +Now, given a suggested stack of boxes, we can integrate the equations of motion to see whether or not the stack successfully stops the falling stunt person, and if so, where in the stack. + +# Analysis + +Our model allows us to predict a stunt person's fall given parameters for the box catcher. Now we would like to find analytic results to guide box-catcher design. + +Using the equations of motion (8), we determine the force that the stunt person feels falling through the box catcher: + +$$ +F = m _ {s} \frac {d u _ {\mathrm {s}}}{d t} = \left\{ \begin{array}{l l} \frac {1}{2 \Delta H} \frac {T _ {\mathrm {S}} ^ {2}}{Y} \mathcal {V} \frac {A _ {\mathrm {s}}}{A}, & | z - z _ {\mathrm {t o p}} | < | \Delta H |; \\ \frac {1}{2} \rho A _ {\mathrm {s}} \frac {u _ {\mathrm {s}} ^ {2}}{\alpha^ {2}} - m _ {\mathrm {s}} g, & | z - z _ {\mathrm {t o p}} | \geq | \Delta H |. \end{array} \right. +$$ + +We want to make this force large enough to stop but not harm the performer. Therefore, we demand that + +$$ +F \leq (1 - \delta) F _ {\max } \tag {9} +$$ + +However, we wish to find solutions that both minimize cost (fewest boxes) and conform to spatial constraints (we don't want the box catcher to be taller than the obstacle that the stunt person is jumping over). Thus, it is in our best interest to maximize the force applied to the performer subject to (9). + +We are faced with two independent equations with three unknowns; we solve for two of them in terms of the third. 
With the simplifying assumption that the top face of each box is a square $(A = w^2)$, we can solve for the optimal dimensions of the box given the stunt person's impact velocity:

$$
\frac{\pi^2}{12} Y \tau^3 \frac{w}{\ell^2} \frac{A_{\mathrm{s}}}{A} = F_{\mathrm{max}} (1 - \delta) = F_{\mathrm{thresh}},
$$

$$
\frac{1}{2} \frac{\rho}{\alpha^2} A_{\mathrm{s}} \tilde{u}^2 - m_{\mathrm{s}} g = F_{\mathrm{thresh}},
$$

where $\tilde{u}$ is the stunt person's speed after causing the box to buckle (but before expelling all the air). So we have

$$
\tilde{u}^2 = u_0^2 - 4 \tau \frac{T_{\mathrm{S}}^2}{m_{\mathrm{s}} Y} w \ell \frac{A_{\mathrm{s}}}{A}.
$$

Define $\gamma$, the maximum number of gees of acceleration felt by the stunt person, by $F_{\mathrm{thresh}} = (1 - \delta) F_{\mathrm{max}} = \gamma m_{\mathrm{s}} g$. We get

$$
\ell^3 = \frac{\pi^2}{48 \gamma} \left(\frac{Y}{T_{\mathrm{S}}}\right)^2 \frac{\tau^2}{g} \left(u_0^2 - 2 (\gamma + 1) \alpha^2 \frac{m_{\mathrm{s}} g}{\rho A_{\mathrm{s}}}\right) \tag{10}
$$

and

$$
\frac{A(w)}{w} = w = \frac{\pi^2}{12 \gamma} \frac{A_{\mathrm{s}} Y}{m_{\mathrm{s}} g} \frac{\tau^3}{\ell^2}. \tag{11}
$$

Thus, we could write a routine that integrates the equations of motion of the stunt person falling through each level of boxes and, for each layer, uses the incoming velocity to calculate the optimal dimensions for the next layer of boxes. This would yield a box-catcher structure that safely stops the stunt person in the fewest levels of boxes, minimizing the cost of the box catcher.

# General Solutions

The most difficult aspect of finding a general solution is the need to know the speed of the stunt person at impact, which depends on the height of the box catcher.
Given sufficient computing time, we could solve the equations with the condition that the height above the ground at impact equals the height of the box catcher. However, this is unnecessary for a paper of this scope.

Instead, we use the model to shed light on the qualitative aspects of building a box catcher. Equation (10) tells us that for higher speeds of the stunt person, we need taller boxes (larger $\ell$) to keep the force on the stunt person at its maximum allowable level. This means that it is necessary to place the tallest boxes at the top of the stack, followed by shorter ones, for both cost effectiveness and safety. Inspecting (11) shows that the optimal width of the box is inversely proportional to the square of the height ($w \propto \ell^{-2}$). Thus, the box catcher should have the thinnest boxes on top and the widest ones on the bottom.

It follows that

the optimal box catcher has short, wide boxes at the bottom and tall, narrow boxes at the top.

Furthermore, the equations of motion (8) contain $u^2$ in the air-expulsion stage but no velocity-dependent term in the buckling stage. Hence, for high impact speeds, air expulsion provides the dominant deceleration, while for lower impact speeds, box buckling is the more important stage.

# Results

# Motorcycle Stunt

A stunt person on a motorcycle is preparing to jump over an elephant for the filming of a blockbuster movie. What should be going through a stunt coordinator's mind?

The average height of an African elephant is $4\mathrm{~m}$; the mass of a top-of-the-line motorcycle is $230\mathrm{~kg}$ [Encyclopaedia Britannica Online 2003].

We assume that the stunt person clears the elephant by about $2\mathrm{~m}$, so the impact velocity is $8\mathrm{~m/s}$. Choosing the parameters $\ell = \frac{1}{3}\mathrm{~m}$ and $w = \frac{1}{2}\mathrm{~m}$, a $3\mathrm{~m}$-tall homogeneous box catcher stops the stunt person and motorcycle after they fall about $2.33\mathrm{~m}$ (Figure 1).
+ +![](images/224aa7a16adfb1c2127c3b53dee8bdf27c008ae6b52841073eb527c5933310ce.jpg) + +![](images/691f9c627d9e4bfa0fa9f7f01a1689542ab9fe45d645e718608a50a42c97a97b.jpg) +Figure 1. Simulation of box catcher stopping stunt person and motorcycle. + +However, we can do better. We take our previous solution, remove one layer of boxes, and change the top layer to boxes with height $\frac{1}{6}$ m and width $\frac{2}{3}$ m (for a $\gamma \sim 1.3$ ). We've removed a row without compromising the safety of the stunt person. In this way, we can make changes until we arrive at an optimal arrangement, one with the fewest boxes necessary. + +# David Blaine's Vertigo + +On May 22, 2002, David Blaine leaped from a ten-story pole onto a cardboard box catcher below. We consider his fall of more than $25\mathrm{m}$ onto approximately $4\mathrm{m}$ of boxes. His impact velocity was approximately $36.5\mathrm{m/s}$ (using the approximation $v_{f} = \sqrt{2gh}$ , which is valid since the terminal velocity for a person falling in air is about $60\mathrm{m/s}$ [Resnick et al. 1992]). + +Viewing his fall in the TV special [David Blaine's Vertigo 2002], we estimate the boxes to have height $\ell = \frac{1}{3}\mathrm{m}$ and width $w = \frac{2}{3}\mathrm{m}$ , with 12 layers of boxes in + +the catcher. According to our simulation, Blaine's momentum is finally stopped by the last box. A larger margin of safety could be accomplished by decreasing the value of either $\ell$ or $w$ . By changing the width of the boxes to $w = \frac{1}{2} \mathrm{~m}$ , our simulation shows that Blaine would be stopped $1 \mathrm{~m}$ from the ground. + +# Other Factors + +# Landing Zone Considerations + +The stunt person must land on the box catcher for it to be of any use. We suggest a probabilistic calculation of the landing zone; but since the principles involved are so dependent on the fall setup and conditions, we do not attempt this. 
+ +# Box and Box Catcher Construction + +We made a few key assumptions about design: + +- The boxes in the box catcher are held together so that there is no relative horizontal velocity between the boxes. This assumption is essential. If the boxes could shift horizontally, they would likely shift out of the way of the falling stunt person. The catcher must also be large enough so that the structure that holds the boxes together doesn't interfere with energy dissipation. +- Loose cardboard forms intervening layers between the different levels of cardboard boxes. This assumption is necessary so that any force retransmitted by the top-level box does not cause another box to buckle; requiring one layer of boxes to buckle at a time allows us to optimize box parameters at each layer independently. A layer of cardboard on top of the catcher is essential to guarantee that the impact force of the falling stunt person is well-distributed. + +Following these principles, and using a variety of sizes of boxes (as in equations (10) and (11)), we can tailor the box catcher to each stunt situation. + +# Conclusion + +We present a model of energy dissipated by the collapse of a single box, noting two key stages: + +- The walls of the box buckle and give way to the overwhelming force applied to it. Shearing forces cause tears in the walls, eventually leading to collapse. +- The air within the box is expelled, absorbing kinetic energy of the falling stunt person in the process. + +We consider each mechanism acting independently of the other. By solving the equations of motion in each stage, we predict the trajectory of a stunt person moving through a single box, and by extension, through the box catcher as a whole. + +Whether or not the performer is stopped before reaching the bottom (and being injured) depends on the structure of the box catcher and the size of the boxes. 
Taller is safer but shorter is cheaper (and may be necessary to remain off-camera); we must balance safety with cost and size. + +We present a solution where 9 layers of boxes are used to break the fall of the elephant-leaping motorcyclist, each with a height of $\frac{1}{3}$ m and a width of $\frac{1}{2}$ m. This is a poor solution, for the force on the falling stunt person is not optimized. We also present a slightly better solution with only 8 layers of boxes and outline the principles for continued optimization. + +Finally, we observe that air resistance is the dominant force with high speeds, while box deformation provides greater resistance at low speeds. + +# Strengths of the Model + +- All parameters in the program are flexible. Whether humid weather tends to weaken the cardboard (and lower both the tensile strength and Young's modulus), or the height of the jump changes, the stunt coordinator can test the safety of the box catcher with our program. +- The algorithm is robust, in that small changes in initial conditions do not cause drastic changes in the end result. We have seen results for extreme cases, such as extreme impact velocities (falling from great heights) or extreme weight considerations (a person on a motorcycle). +- Our algorithm takes just a few seconds to compute the trajectory of the stunt person through the box catcher, so it is practical for a stunt coordinator to test a variety of box-catcher configurations. + +# Weaknesses of the Model + +- We were unable to derive an optimal box configuration. +- We could not get a good estimate of the magnitude of the force experienced by the stunt person. We present a plausible mechanism for dissipation of energy by a box catcher, but the force function that we use is discontinuous between box layers, though the actual physics of the situation has a continuous force. 
+ +# Future Research + +Future research on this project should develop an algorithm that not only tests the safety of a configuration of boxes but suggests an optimal configuration + +of boxes. Another challenge is that cardboard boxes probably have a cost proportional to the surface area of cardboard used, not to the number of boxes. + +# Appendix: On Buckling + +We derive (2), + +$$ +F _ {B} = Y \frac {\pi^ {2}}{1 2} \frac {w \tau^ {3}}{\ell^ {2}}. \tag {2} +$$ + +Consider one wall of a cardboard box with external forces acting on both ends. Considering the effects of shearing and inertial dynamical acceleration, the displacement of the box wall from its equilibrium (unstressed) position obeys the equation of motion + +$$ +- D \frac {\partial^ {4} \eta}{\partial z ^ {4}} - F \frac {\partial^ {2} \eta}{\partial z ^ {2}} = \Lambda \frac {\partial^ {2} \eta}{\partial t ^ {2}}, +$$ + +where $\Lambda \equiv W / g_{e}$ is the mass per unit length of the box wall [Thorne and Blandford 2002]. + +Table A1. +Nomenclature. + +
| Property | Symbol | Units |
|---|---|---|
| Horiz. displacement from equilibrium (unstressed) location | $\eta(z,t)$ | m |
| Flexural rigidity of the box wall | $D$ | J·m |
| Young's modulus of the box wall | $Y$ | Pa |
| Length of the box wall in the $z$-direction | $\ell$ | m |
| Width of the box wall | $w$ | m |
| Thickness of the box wall | $\tau$ | m |
| Force applied to each end of the box wall | $F$ | N |
| Weight per unit length of the box wall | $W$ | N/m |
| Acceleration due to gravity | $g_e$ | m/s² |
| Buckling force | $F_B$ | N |
We seek solutions for which the ends of the box wall remain fixed. This is a good assumption for our model, since we require the box catcher to be held together so that the boxes remain horizontally stationary with respect to one another. Thus, we set the boundary conditions to be $\eta(0,t) = \eta(\ell,t) = 0$. Using the separation ansatz $\eta(z,t) = \zeta(z)T(t)$, we get the linear ordinary differential equations

$$
-D \frac{d^4 \zeta}{dz^4} - F \frac{d^2 \zeta}{dz^2} = \kappa_n \zeta \qquad \mathrm{and} \qquad \Lambda \frac{d^2 T}{dt^2} = \kappa_n T,
$$

where $\kappa_n$ is the separation constant. The normal-mode solutions are thus

$$
\eta_n(z,t) = A \sin\left(\frac{n\pi}{\ell} z\right) e^{-i\omega_n t},
$$

where $\omega_n \in \mathbb{C}$ satisfies the dispersion relation

$$
\omega_n^2 = \frac{1}{\Lambda} \left(\frac{n\pi}{\ell}\right)^2 \left[\left(\frac{n\pi}{\ell}\right)^2 D - F\right].
$$

Consider the lowest-order normal mode $(n = 1)$. For $F < F_{\mathrm{crit}} \equiv \pi^2 D/\ell^2$, we have $\omega_1^2 > 0$, so $\omega_1 \in \mathbb{R}$, and the normal mode just oscillates in time, giving stable solutions. However, if $F > F_{\mathrm{crit}}$, then $\omega_1^2 < 0$ and $\omega_1 = \pm i\varpi$ with $\varpi \in \mathbb{R}$; that is, there are exponentially growing solutions $(\propto e^{\varpi t})$, and the normal-mode solution becomes unstable. Thus, the box wall buckles for applied forces $F > \pi^2 D/\ell^2$. Since the flexural rigidity of a wall of rectangular cross section is $D = Y w \tau^3 / 12$, the buckling force is

$$
F_B = \frac{\pi^2 D}{\ell^2} = Y \frac{\pi^2}{12} \frac{w \tau^3}{\ell^2},
$$

which is (2).

# References

Bever, Michael B. (ed.) 1986. *Encyclopedia of Materials Science and Engineering*. Vol. 2. Cambridge, MA: MIT Press.

Blaine, David. 2002. *David Blaine's Vertigo*. Television special, May 2002.

Encyclopaedia Britannica Online. 2003. http://search.eb.com/.

Polking, John C. 2002.
ODE Software for MATLAB. 4th order Runge-Kutta demonstration rk4.m. http://math.rice.edu/~dfield/. +Resnick, Robert, David Halliday, and Kenneth S. Krane. 1992. Physics. New York: John Wiley and Sons. +Thorne, Kip S., and Roger D. Blandford. 2002. Ph136: Applications of Classical Physics. Unpublished lecture notes. Pasadena, CA: California Institute of Technology. http://www.pma.caltech.edu/Courses/ph136/yr2002/index.html. + +# A Time-Independent Model of Box Safety for Stunt Motorcyclists + +Ivan Corwin + +Sheel Ganatra + +Nikita Rozenblyum + +Harvard University + +Cambridge, MA + +Advisor: Clifford H. Taubes + +# Abstract + +We develop a knowledge of the workings of corrugated fiberboard and create an extensive time-independent model of motorcycle collision with one box, our Single-Box Model. We identify important factors in box-to-box and frictional interactions, as well as several extensions of the Single-Box Model. + +Taking into account such effects as cracking, buckling, and buckling under other boxes, we use the energy-dependent Dual-Impact Model to show that the "pyramid" configuration of large 90-cm cubic boxes—a configuration of boxes in which every box is resting equally upon four others—is optimal for absorption of the most energy while maintaining a reasonable deceleration. We show how variations in height and weight affect the model and calculate a bound on the number of boxes needed. + +# General Assumptions + +- The temperature and weather are assumed to be "ideal conditions"—they do not affect the strength of the box. +- The wind is negligible, because the combined weight of the motorcycle and the person is sufficiently large. +- The ground on which the boxes are arranged is a rigid flat surface that can take any level of force. +- All boxes are cubic, which makes for the greatest strength [Urbanik 1997]. + +Variables. + +Table 1. + +
| Variable | Definition | Units |
|---|---|---|
| $l$ | Box edge length | cm |
| $V(t)$ | Velocity as a function of time ($v_0$ is velocity of impact) | cm/s |
| $A_T$ | Energy change due to the top of the box | kg·cm²/s² |
| $A_B$ | Energy change due to the buckling of the box | kg·cm²/s² |
| $A_{\mathrm{NET}}$ | Total energy including gravity for the top | kg·cm²/s² |
| $A_{\mathrm{CD\text{-}EDGE}}$ | Energy absorbed by CD springs in modelled buckling | kg·cm²/s² |
| $A_{\mathrm{MD\text{-}EDGE}}$ | Energy absorbed by MD springs in modelled buckling | kg·cm²/s² |
| $x_{\mathrm{NET}}$ | Total depression of the top | cm |
| $\Delta x$ | Change in a distance | cm |
| $x_{\mathrm{DOWN}}$ | Component of edge's depression in the $z$-direction | cm |
| $x(t)$ | Downward displacement of the top of the box ($x_0 = 0$) | cm |
| $x_F$ | Final depression before top failure | cm |
| $\delta L$ | Offset from top center | cm |
| $M$ | Motorcycle and stuntman combined mass | kg |
| $P_{\mathrm{CD}}$ | Tensile strength in the cross-machine direction | kg/(s²·cm) |
| $P_{\mathrm{MD}}$ | Tensile strength in the machine direction | kg/(s²·cm) |
| $P_{\mathrm{ECT}}$ | Maximum strength as measured with the Edge Crush Test | kg/s² |
| $P_{\mathrm{ML}}$ | Maximum strength as measured with the Mullen Test | kg/(s²·cm) |
| $F_{\mathrm{CD}}$ | Force in the cross-machine direction | kg·cm/s² |
| $F_{\mathrm{CDMAX}}$ | Maximum force in the cross-machine direction | kg·cm/s² |
| $F_{\mathrm{MD}}$ | Force in the machine direction | kg·cm/s² |
| $F_{\mathrm{MDMAX}}$ | Maximum force in the machine direction | kg·cm/s² |
| $F_{\mathrm{UP}}$ | Net dampening force the box exerts on the motorcycle | kg·cm/s² |
| $F_{\mathrm{UPMAX}}$ | Maximum dampening force the box exerts on the motorcycle | kg·cm/s² |
| $F_{\mathrm{ECT}}$ | Force the box exerts on the frame | kg·cm/s² |
| $F_{\mathrm{ECTMAX}}$ | Maximum force the box exerts on the frame before yielding | kg·cm/s² |
| $F_{\mathrm{NET}}$ | Total force | kg·cm/s² |
| $x_{\mathrm{MD}}$ | Depression at which MD tensile strength is exceeded | cm |
| $x_{\mathrm{CD}}$ | Depression at which CD tensile strength is exceeded | cm |
| $x_{\mathrm{ECT}}$ | Depression at which a buckle occurs | cm |
| $x_{\mathrm{ML}}$ | Depression at which a puncture occurs | cm |
| $V_x$ | Velocity in the $x$-direction | cm/s |
| $V_y$ | Velocity in the $y$-direction | cm/s |
| $V_{ix}$ | Initial velocity in the $x$-direction | cm/s |
| $V_{iy}$ | Initial velocity in the $y$-direction | cm/s |
| $V_f(x)$ | Final velocity in the $x$-direction | cm/s |
| $V_f(y)$ | Final velocity in the $y$-direction | cm/s |
| $M_B$ | Mass of boxes displaced | kg |
| $t$ | Time | s |
| $A_x$ | Energy in the $x$-direction | kg·cm²/s² |
| $A_y$ | Energy in the $y$-direction | kg·cm²/s² |
| $A_z$ | Energy in the $z$-direction | kg·cm²/s² |
| $d$ | Distance | cm |
+ +Table 2. +Constants. + +
| Variable | Definition | Value |
|---|---|---|
| $E_{\mathrm{MD}}$ | Young's modulus in the machine direction | 3,000,000 kg/(s²·cm) |
| $E_{\mathrm{CD}}$ | Young's modulus in the cross-machine direction | 800,000 kg/(s²·cm) |
| $E$ | Sum of $E_{\mathrm{MD}}$ and $E_{\mathrm{CD}}$ | 3,800,000 kg/(s²·cm) |
| $L_{\mathrm{MD}}$ | Tire rect. length in the machine direction | 7 cm front, 10 cm back |
| $L_{\mathrm{CD}}$ | Tire rect. length in the cross-machine direction | 10 cm |
| $P_{\mathrm{CD}}$ | Tensile strength in the cross-machine direction | 2000 kg/(s²·cm) |
| $P_{\mathrm{MD}}$ | Tensile strength in the machine direction | 2500 kg/(s²·cm) |
| $P_{\mathrm{ECT}}$ | Max. strength as measured with the edge crush test | 10,000 kg/s² |
| $P_{\mathrm{ML}}$ | Max. strength as measured with the Mullen test | 25,000 kg/(s²·cm) |
| $g$ | Gravitational constant | 980 cm/s² |
| $\mu$ | Coefficient of kinetic friction | 0.4 |
| $w$ | Cardboard thickness | 0.5 cm |
| $\delta v$ | Speed variation | 200 cm/s |
| $\delta\varphi$ | Angle variation away from $y$-axis leaving the ramp | $\pi/36$ |
| $M$ | Mass of rider and motorcycle | 300 kg |
| $v_i$ | Initial velocity leaving ramp | 1500 cm/s |
| $\theta$ | Ramp angle of elevation | $\pi/6$ |
+ +# Definitions and Key Terms + +- Buckling is the process by which a stiff plane develops a crack due to a stress exceeding the yield stress. +- Compressive strength is the maximum force per unit area that a material can withstand, under compression, prior to yielding. +- Corrugation is the style found in cardboard of sinusoidal waves of liner paper sandwiched between inner and outer papers. We use boxes with the most common corrugation, C-flute corrugation (see below for definition of a flute). +- Cracking is a resulting state when the tensile force exceeds the tensile strength. +- Cross-machine direction is the direction perpendicular to the sinusoidal wave of the corrugation. +- Depression is when, due to a force, a section of a side or edge moves downwards. +- ECT is the acronym for a common test of strength, the edge crush test. +- Fiberboard is the formal name for cardboard. +- Flute is a single wavelength of a sinusoidal wave between the inner and outer portion papers that extends throughout the length of the cardboard. A C-flute has a height of $0.35 \mathrm{~cm}$ and there are 138 flutes per meter [Packaging glossary n.d.]. + +- Machine direction is parallel to the direction of the sinusoidal waves. +- **Motorcycle** We use the 1999 BMW R1200C model motorcycle for structural information; it has a 7.0-cm front wheel and a 10-cm back wheel, with radii 46 and 38 cm. It is 3 m long and has 17 cm ground clearance. The weight is 220 kg (dry) and 257 kg (fueled) [Motorcycle specs n.d.]. +- Mullen test is a common measurement of the maximum force that a piece of cardboard can stand before bursting or puncturing. +- Puncturing is when a force causes an area to burst through the cardboard surface. +- Pyramid configuration is a configuration of boxes in which each box is resting equally on four others. +- Strain is the dimensionless ratio of elongation to entire length. +- Stress is the force per unit area to which a material is subject. 
+- Tensile strength is the maximum force per unit area which a material can withstand while under tension prior to yielding. +- Young's modulus of elasticity is the value of stress divided by strain and relates to the ability of a material to stretch. + +# Developing the Single-Box Model + +# Expectations + +- Given sufficiently small impact area, the surface plane either punctures or cracks before the frame buckles. +- Given sufficiently large impact area, the frame should buckle and no puncturing occurs. +- During buckling, corners are more resistive to crushing than edges. + +# Preliminary Assumptions + +- The force is exerted at or around the center of the top. +- The top of the box faces upward. This assumption allows us to use ECT (Edge Crush Test) results. This orientation also ensures that the flutes along the side are oriented perpendicular to the ground, so they serve as columns. + +# The Conceptual Single-Box Model + +During the puncturing or cracking of the top of the box: + +- The frame stays rigid. +- The cardboard can be modeled as two springs with spring constants equal to the length of board times its modulus of elasticity [Urbanik 1999] (Figure 1). + +![](images/d34b1ad8be9c88ce3140b97a97a0a18e86eb1c874efae0621ec5c55b75de4252.jpg) +Figure 1. Our model for a tire hitting the top of one box. We treat the portions of the box directly vertical or horizontal from the edges of the motorcycle tire (the rectangle in the middle) as ideal springs, neglecting the effect of the rest of the box. + +- The surface of the motorcycle tire that strikes the box is approximated as a rectangle with dimensions $L_{\mathrm{MD}}$ (the wheel's width) and $L_{\mathrm{CD}}$ (the length of the wheel in contact with the cardboard surface). We neglect the spin and tread of the tire. +- The part of the spring beneath the tire does not undergo any tension. In reality, this is not the case; but with this assumption, cracking and puncturing occur along the edges of the tire. 
The force still comes from the rigid frame, and the springs have the same constant; therefore, we believe this assumption affects only the position of the cracking and puncturing, not when it occurs or how much energy is dissipated. +- There is no torque on the box during this first process. This assumption can be made since the force is at the center of the top. + +The top of the cardboard box can fail in several ways: + +- If the resistive upwards force from the top, $F_{\mathrm{UP}}$ , exceeds $P_{\mathrm{ML}} \cdot L_{\mathrm{MD}} \cdot L_{\mathrm{CD}}$ (the Mullen maximum allowable pressure over this area), then puncturing occurs. + +- If the force $F_{\mathrm{MD}}$ on the machine direction spring exceeds $P_{\mathrm{MD}} \cdot w \cdot (3L_{\mathrm{CD}} - l) / 2$ (the tensile strength in the machine direction times cross-sectional area perpendicular to the force), then a crack occurs in the cross-machine direction. We assume that this force that causes cracking is evenly distributed over a section larger than the actual edge of the tire rectangle, because of the solid nature of cardboard. +- If the force $F_{\mathrm{CD}}$ on the cross machine direction spring exceeds $P_{\mathrm{CD}} \cdot w \cdot (3L_{\mathrm{MD}} - l) / 2$ , then a crack occurs in the machine direction, for the same reasons as above. +- If the force on the edges exceeds $P_{\mathrm{ECT}} \cdot 4l$ (the compression strength of the edges times the total edge length), then buckling occurs first. Here we assume that even though we model the top as two springs, the force is evenly distributed over the edges, taking advantage of the high spring constant and solid behavior of cardboard. + +There are several ways in which boxes can fail: + +- Puncture: The wheel enlarges the hole, and only when the hull of the motorcycle hits the edge does buckling occur. +- Crack: The tire does not break through the material, and buckling eventually occurs. +- Buckling: Can occur without a puncture or crack first. 
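The four threshold tests above can be applied in one small routine. The sketch below is our illustration, not the authors' code: the defaults come from Table 2 (kg, cm, s units), the net edge force is taken as $2F_{\mathrm{CD}} + 2F_{\mathrm{MD}}$, and the sample edge length $l = 20$ cm is chosen so the crack thresholds are positive.

```python
# Sketch: report which failure(s) the box top incurs, per the four
# threshold tests in the text.  Defaults follow Table 2; illustrative only.
def failure_modes(F_UP, F_MD, F_CD, l,
                  L_MD=7.0, L_CD=10.0, w=0.5,
                  P_ML=25000.0, P_MD=2500.0, P_CD=2000.0, P_ECT=10000.0):
    checks = {
        # puncture: upward reaction exceeds Mullen pressure over tire area
        "puncture": F_UP > P_ML * L_MD * L_CD,
        # crack in the cross-machine direction (MD spring overloaded)
        "crack CD": F_MD > P_MD * w * (3.0 * L_CD - l) / 2.0,
        # crack in the machine direction (CD spring overloaded)
        "crack MD": F_CD > P_CD * w * (3.0 * L_MD - l) / 2.0,
        # buckle: net edge force exceeds ECT strength times edge length
        "buckle": 2.0 * F_CD + 2.0 * F_MD > P_ECT * 4.0 * l,
    }
    return [mode for mode, exceeded in checks.items() if exceeded]

modes = failure_modes(2.0e6, 0.0, 0.0, l=20.0)   # -> ["puncture"]
```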
We assume that at most one puncture or crack occurs per box, followed inevitably by buckling.

# Calculations for the Box Top

[EDITOR'S NOTE: We do not give all the details of the following calculations.] Refer to Figure 2. We solve for $F_{\mathrm{CD}}$, getting

$$
F_{\mathrm{CD}} = E_{\mathrm{CD}} L_{\mathrm{MD}} \left(\sqrt{\left(\frac{l - L_{\mathrm{CD}}}{2}\right)^2 + [x(t)]^2} - \frac{l - L_{\mathrm{CD}}}{2}\right). \tag{1}
$$

We apply the results to solve for $F_{\mathrm{MD}}$ and combine them to get $F_{\mathrm{UP}}$. After obtaining the vertical component of $F_{\mathrm{CD}}$,

$$
x(t)\, E_{\mathrm{CD}} L_{\mathrm{MD}} \left(\frac{\sqrt{\left(\frac{l - L_{\mathrm{CD}}}{2}\right)^2 + [x(t)]^2} - \frac{l - L_{\mathrm{CD}}}{2}}{\sqrt{\left(\frac{l - L_{\mathrm{CD}}}{2}\right)^2 + [x(t)]^2}}\right),
$$

![](images/c595a5189abbedc0b840fdf8289ad470756f822c15f682952719c13de6576a2d.jpg)
Figure 2. Side view of the depression of the motorcycle tire into the top of the box.

and the analogue by symmetry for $F_{\mathrm{MD}}$, we have $F_{\mathrm{UP}}$, namely:

$$
\begin{array}{l} F_{\mathrm{UP}} = x(t) \left[ 2 E_{\mathrm{CD}} L_{\mathrm{MD}} \left(1 - \frac{\frac{l - L_{\mathrm{CD}}}{2}}{\sqrt{\left(\frac{l - L_{\mathrm{CD}}}{2}\right)^2 + [x(t)]^2}}\right) + \right. \\ \left. 2 E_{\mathrm{MD}} L_{\mathrm{CD}} \left(1 - \frac{\frac{l - L_{\mathrm{MD}}}{2}}{\sqrt{\left(\frac{l - L_{\mathrm{MD}}}{2}\right)^2 + [x(t)]^2}}\right) \right]. \tag{2} \\ \end{array}
$$

The force $F_{\mathrm{UP}}$ is the resistive force that the top exerts on the motorcycle's wheel. Balancing the forces and taking into account gravity (in the form of the normal force), we get the force equation of the motion of the motorcycle's wheel on the box prior to puncture, crack, or buckle:

$$
F_{\mathrm{NET}} = F_{\mathrm{UP}} + mg.
+$$ + +We use this expression to calculate the energy as a function of depression into the box. We use our initial force calculation to determine the level of depression and the type of failure that the top incurs. This depression is the minimum depression for which any failure occurs. + +If the force $F_{\mathrm{CD}}$ on the cross-machine direction spring (contributed by both sides of the spring) exceeds $2P_{CD}L_{MD}w$ , then a crack occurs in the machine direction. Solving for the depression, we get + +$$ +x _ {\mathrm {C D}} = \sqrt {\left(\frac {P _ {\mathrm {C D}} w}{E _ {\mathrm {C D}}}\right) ^ {2} + \frac {P _ {\mathrm {C D}} w}{E _ {\mathrm {C D}}} (l - L _ {\mathrm {C D}})} +$$ + +Likewise, if the force $F_{\mathrm{MD}}$ on the machine direction spring (contributed by both sides of the spring) exceeds $2P_{\mathrm{MD}}L_{\mathrm{CD}}w$ , then a crack occurs in the cross-machine direction, with the analogous formula for the depression. + +If the resistive upwards force $F_{\mathrm{UP}}$ from the top exceeds $P_{ML}L_{\mathrm{MD}}L_{\mathrm{CD}}$ , puncturing occurs. Similarly, if the net force on the edges, $2F_{\mathrm{CD}} + 2F_{\mathrm{MD}}$ , exceeds $4P_{ECT}l$ , buckling occurs first. We find the respective depressions in the next section. + +We can use the $x$ -position in energy calculations to determine the new speed of the motorcycle. + +Energy is a distance integral of net force, so using (2) we can find energy $A_{T}$ absorbed by the top: + +$$ +\begin{array}{l} A _ {T} (x) = \int_ {0} ^ {x} F _ {N E T} d s = E _ {\mathrm {C D}} L _ {\mathrm {M D}} \left(x ^ {2} - (l - L _ {\mathrm {C D}}) \sqrt {\left(\frac {l - L _ {\mathrm {C D}}}{2}\right) ^ {2} + x ^ {2}} + \frac {(l - L _ {\mathrm {C D}}) ^ {2}}{2}\right) \\ + E _ {\mathrm {M D}} L _ {\mathrm {C D}} \left(x ^ {2} - (l - L _ {\mathrm {M D}}) \sqrt {\left(\frac {l - L _ {\mathrm {M D}}}{2}\right) ^ {2} + x ^ {2}} + \frac {(l - L _ {\mathrm {M D}}) ^ {2}}{2}\right) + m g x. 
\\ \end{array}
$$

# Extensions of Top Model

Testing the model shows that not all the force and dissipative energy comes from the top-of-the-box springs. We make the following further assumptions:

- Since the deflection is small compared to the edge lengths, $x \ll (l - L_{\mathrm{CD}})/2$ and $x \ll l/2$.
- The boxes have two layers coming together at the top that are corrugated in different directions; hence the cardboard on the top really is of width $2w$.
- Since the flutes in one top piece are perpendicular to the flutes of the other, the resulting combined modulus of elasticity is the sum of the two original values in both directions. This means that we can define $E = E_{\mathrm{MD}} + E_{\mathrm{CD}}$ and modify all equations accordingly.

The equations for the forces $F_{\mathrm{CD}}$ (1) and $F_{\mathrm{MD}}$ are of the form

$$
f(x) = k \left(\sqrt{a^2 + x^2} - a\right),
$$

with $a = (l - L_{\mathrm{CD}})/2$. For small deflections $x$, such a function can be approximated well by its second-degree Taylor expansion around $x = 0$:

$$
f(x) \approx k \frac{x^2}{2a}.
$$

The resulting equations are

$$
F_{\mathrm{CD}} = 2 E L_{\mathrm{MD}} \left(\frac{x^2}{l - L_{\mathrm{CD}}}\right), \qquad F_{\mathrm{MD}} = 2 E L_{\mathrm{CD}} \left(\frac{x^2}{l - L_{\mathrm{MD}}}\right),
$$

$$
F_{\mathrm{UP}} = x(t) \left(2 E L_{\mathrm{MD}} \frac{x(t)}{l - L_{\mathrm{CD}}} + 2 E L_{\mathrm{CD}} \frac{x(t)}{l - L_{\mathrm{MD}}}\right).
$$

We deal briefly with the position of the rectangle on the box. The ECT deflection should not depend on the position of the rectangle, since ECT depends only on the net force. The other three deflections become smaller as the force moves away from the center, so we can model them with a standard linear decrease factor of $1 - \delta L/(l/2)$, where $\delta L$ is the distance radially from the center.
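The accuracy of this second-degree expansion is easy to check numerically for deflections small relative to $a$; in the sketch below, the $k$, $a$, and $x$ values are illustrative, not the paper's.

```python
import math

# Compare f(x) = k(sqrt(a^2 + x^2) - a) with its Taylor approximation
# k x^2 / (2a) for a small deflection x << a.  Values are illustrative.
k, a = 3.8e6, 40.0          # spring constant and half-span (l - L_CD)/2
x = 2.0                     # deflection, much smaller than a

exact = k * (math.sqrt(a * a + x * x) - a)
taylor = k * x * x / (2.0 * a)
rel_err = abs(exact - taylor) / exact
# For x/a = 0.05 the relative error is below 0.1%.
```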
+ +Now we solve equations for $x$ , taking the $\delta$ factor into account, as well as $F_{\mathrm{ECT}} = 2(F_{\mathrm{CD}} + F_{\mathrm{MD}})$ , getting + +$$ +x _ {\mathrm {C D}} = \left(1 - \frac {2 \delta L}{l}\right) \sqrt {\frac {F _ {\mathrm {C D M A X}} (l - L _ {\mathrm {C D}})}{2 E L _ {\mathrm {M D}}}}, x _ {\mathrm {M D}} = \left(1 - \frac {2 \delta L}{l}\right) \sqrt {\frac {F _ {\mathrm {M D M A X}} (l - L _ {\mathrm {M D}})}{2 E L _ {\mathrm {C D}}}}, +$$ + +$$ +x _ {\mathrm {M L}} = \left(1 - \frac {2 \delta L}{l}\right) \sqrt {\frac {F _ {\mathrm {U P M A X}}}{\left(\frac {2 E L _ {\mathrm {M D}}}{l - L _ {\mathrm {C D}}} + \frac {2 E L _ {\mathrm {C D}}}{l - L _ {\mathrm {M D}}}\right)}}, \qquad x _ {\mathrm {E C T}} = \sqrt {\frac {F _ {\mathrm {E C T M A X}}}{\left(\frac {2 E L _ {\mathrm {M D}}}{l - L _ {\mathrm {C D}}} + \frac {2 E L _ {\mathrm {C D}}}{l - L _ {\mathrm {M D}}}\right)}}. +$$ + +To obtain energy, we integrate the Taylor-expanded version of $F_{\mathrm{UP}} + mg$ : + +$$ +A _ {T} (x) = 2 E \frac {x _ {F} ^ {3}}{3} \left(\frac {L _ {\mathrm {M D}}}{l - L _ {\mathrm {C D}}} + \frac {L _ {\mathrm {C D}}}{l - L _ {\mathrm {M D}}}\right) + m g x _ {F}, +$$ + +where $x_{F} = \min \{x_{\mathrm{CD}},x_{\mathrm{MD}},x_{\mathrm{ML}},x_{\mathrm{ECT}}\}$ + +Now we can substitute the maximum values of the respective forces and solve for the $x$ -values. We use the same values of $F_{\mathrm{UP}} = P_{\mathrm{ML}}L_{\mathrm{MD}}L_{\mathrm{CD}}$ and $F_{\mathrm{ECT}} = 4P_{\mathrm{ECT}}l$ . For the cracking forces, we double their values to take into account both springs on the side of the tire. So this gives + +$$ +F _ {\mathrm {C D}} = 2 \left(P _ {\mathrm {C D}} + P _ {\mathrm {M D}}\right) w L _ {\mathrm {C D}}, \quad F _ {\mathrm {M D}} = 2 \left(P _ {\mathrm {C D}} + P _ {\mathrm {M D}}\right) w L _ {\mathrm {M D}}. +$$ + +We make one last change. 
Energy is absorbed not only from the elasticity of the top but also from that of the edges. We determine the average force on this edge spring and the change in the depression $x$ that occurs for this spring. It is reasonable to use the average force, because it acts over a small distance, and a parabolic function is reasonably linear on such an interval. Using modulus of elasticity $E$ and first-order Taylor approximations, we estimate the displacement and energy absorbed.

We also assume that the edge does not fail before the top. This is reasonable because an edge has greater structural integrity.

So, assume that we are dealing with an edge on which $F_{\mathrm{CD}}$ acts. To find the average force, we integrate $F_{\mathrm{CD}}$ with respect to distance and divide by the distance. Denote by $x_F$ the depression at which failure occurs in our original top model. The average force is

$$
\overline{F_{\mathrm{CD}}} = E L_{\mathrm{MD}} \left( \frac{x_F^2}{3 (l - L_{\mathrm{CD}})} \right).
$$

To solve for $\Delta x$, observe that if the force is centered at the edge, then

$$
\Delta x = \sqrt{\left( \frac{l}{2} \right)^2 + x_{\mathrm{CD-EDGE}}^2} - \frac{l}{2}.
$$

Taking a first-order Taylor approximation gives $\Delta x = x_{\mathrm{CD-EDGE}}^2 / l$. Using Hooke's law and $\overline{F_{\mathrm{CD}}} = E \Delta x$, we solve for the depression of this spring, $x_{\mathrm{CD-EDGE}}$:

$$
x_{\mathrm{CD-EDGE}} = \sqrt{L_{\mathrm{MD}} l \left( \frac{x_F^2}{3 (l - L_{\mathrm{CD}})} \right)}.
$$

The corresponding energy is

$$
A_{\mathrm{CD-EDGE}} = \overline{F_{\mathrm{CD}}} \Delta x + m g x_{\mathrm{CD-EDGE}} = \frac{E L_{\mathrm{MD}} x_{\mathrm{CD-EDGE}}^2}{l} \left( \frac{x_F^2}{3 (l - L_{\mathrm{CD}})} \right) + m g x_{\mathrm{CD-EDGE}}. 
$$

At this point, even though we have treated the two spring groups (the top and the edge) as separate systems that in reality move and affect each other dynamically, we assume that we can simply add the depression values $x$ and the energy absorption results $A$. This simplifying assumption is valid because the depressions are small enough that the approximations agree with the dynamic case. To obtain $x_{\mathrm{MD-EDGE}}$ and $A_{\mathrm{MD-EDGE}}$, interchange CD and MD.

So, reviewing, we have $x_F = \min(x_{\mathrm{CD}}, x_{\mathrm{MD}}, x_{\mathrm{ML}}, x_{\mathrm{ECT}})$, total depression $x_{\mathrm{NET}} = x_F + 2 x_{\mathrm{CD-EDGE}} + 2 x_{\mathrm{MD-EDGE}}$, and $A_{\mathrm{NET}} = A_F + 2 A_{\mathrm{CD-EDGE}} + 2 A_{\mathrm{MD-EDGE}}$. We calculate

$$
\overline{F_{\mathrm{NET}}} = \frac{A_F + 2 A_{\mathrm{CD-EDGE}} + 2 A_{\mathrm{MD-EDGE}}}{x_{\mathrm{NET}}}.
$$

Finally, we take into account that as the mass falls, the energy it gains from gravity offsets part of the energy absorbed. This leads to the important equation for $A_T$ (the energy change during the top failures):

$$
A_T = A_{\mathrm{NET}} - m g x_{\mathrm{NET}}.
$$

# Single-Crack Buckling

We deal with energy dissipation for single-crack buckling, which occurs when a crack develops from the top down a single side. We model this as two sets of springs (Figure 3).

Once the top (side III) and a single side (side I) become weak, all corners but the two adjacent to the crack ($C_1$, $C_2$) remain strong. The only nonrigid corners, free to move, are $C_1$ and $C_2$. We assume that we are in the elastic range of the cardboard, so we can model this situation with two springs connected to each of the adjacent corners: springs connecting $C_4$ and $C_1$, $C_5$ and $C_1$, $C_3$ and $C_2$, and $C_6$ and $C_2$.
We apply the same methods as used in modeling the top to determine the energy as a function of how much the edges move in. [EDITOR'S NOTE: We do not give all the details.] The total force exerted by the box is + +$$ +F _ {\mathrm {N E T}} = 2 E _ {\mathrm {M D}} \frac {l}{1 0} \left(\sqrt {l ^ {2} + x ^ {2}} - l\right) + 2 E _ {\mathrm {C D}} \frac {l}{1 0} \left(\sqrt {l ^ {2} + x ^ {2}} - l\right) +$$ + +![](images/2c9448cceb6e5f3d91f3419f585dd016f68900d1e16037ea3f51f908a0d97083.jpg) +Figure 3. Our model of a box that is buckling. + +and the buckling energy is + +$$ +\begin{array}{l} A _ {B} = (E _ {\mathrm {M D}} + E _ {\mathrm {C D}}) \frac {l}{1 0} \left(l ^ {2} \ln (\sqrt {l ^ {2} + x ^ {2}} + x) + x \sqrt {l ^ {2} + x ^ {2}} - 2 l x - l ^ {2} \ln l\right) \\ + m g x _ {\text {D O W N}}. \\ \end{array} +$$ + +where $x_{\mathrm{DOWN}}$ is the component of $x$ that points downwards from the edge. + +Now we use methods similar to those used in modeling the top to determine the maximum $x$ -displacement that can occur before the tensile strength is exceeded. + +If the force $F_{\mathrm{MD}}$ on the machine direction spring (across the flutes) exceeds $P_{\mathrm{MD}} w l / 10$ (the tensile strength), then a crack occurs in the cross-machine direction, with + +$$ +x _ {\mathrm {M D}} = \sqrt {\left(\frac {P _ {\mathrm {M D}} l w}{E _ {\mathrm {M D}}}\right) ^ {2} + \frac {2 P _ {\mathrm {M D}} w l ^ {2}}{E _ {\mathrm {M D}}}}. +$$ + +Likewise, if the force $F_{\mathrm{CD}}$ on the cross-machine direction spring (along the flutes) exceeds $P_{\mathrm{CD}} w l / 10$ , then a crack occurs in the machine direction, with + +$$ +x _ {\mathrm {C D}} = \sqrt {\left(\frac {P _ {\mathrm {C D}} l w}{E _ {\mathrm {C D}}}\right) ^ {2} + \frac {2 P _ {\mathrm {C D}} w l ^ {2}}{E _ {\mathrm {C D}}}}. +$$ + +The minimum of these $x$ -displacements indicates when failure occurs. After failure, the box has less potential to continue to absorb energy. 
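The failure criterion above (take the smaller of the two critical displacements) can be sketched directly; the moduli and tensile strengths below are assumed placeholder values, not measured data from the paper.

```python
import math

# Illustrative parameter values (assumptions, not the paper's data)
l = 0.90                     # edge length (m)
w = 0.004                    # cardboard thickness (m)
E_MD, E_CD = 5.0e6, 3.0e6    # hypothetical moduli of elasticity
P_MD, P_CD = 2.0e6, 1.2e6    # hypothetical tensile strengths

def x_crit(P, E):
    """Critical displacement at which the spring force reaches the tensile limit:
    x = sqrt((P l w / E)^2 + 2 P w l^2 / E), as in the two formulas above."""
    return math.sqrt((P * l * w / E) ** 2 + 2 * P * w * l * l / E)

x_MD = x_crit(P_MD, E_MD)    # displacement producing a cross-machine crack
x_CD = x_crit(P_CD, E_CD)    # displacement producing a machine-direction crack
failure_x = min(x_MD, x_CD)  # the smaller displacement governs failure
print(x_MD, x_CD, failure_x)
```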
We also assume that the motorcycle has gone a significant distance, so there is little distance left to compress. + +We make one last change, taking into account gravity's contribution. First, we solve for the depression of the tire during buckling, before failure. We have assumed that the buckled edge moves in towards the box's center. The total depression, $x_{\mathrm{DOWN}}$ , can be calculated, with a geometric argument, to be $\sqrt{lx - x^2} / \sqrt{2}$ . Using this, we get a final value for $A_B$ : + +$$ +\begin{array}{l} A _ {B} = \left(E _ {\mathrm {M D}} + E _ {\mathrm {C D}}\right) \frac {l}{1 0} \left(l ^ {2} \ln \left(\sqrt {l ^ {2} + x ^ {2}} + x\right) + x \sqrt {l ^ {2} + x ^ {2}} - 2 l x - l ^ {2} \ln l\right) \\ + \frac {m g \sqrt {l x - x ^ {2}}}{\sqrt {2}} \\ \end{array} +$$ + +and + +$$ +\overline {{F _ {\mathrm {B N E T}}}} = \frac {(E _ {\mathrm {M D}} + E _ {\mathrm {C D}}) \frac {l}{1 0} (l ^ {2} \ln [ \sqrt {l ^ {2} + x ^ {2}} + x ] + x \sqrt {l ^ {2} + x ^ {2}} - 2 l x - l ^ {2} \ln l)}{\frac {\sqrt {l x - x ^ {2}}}{\sqrt {2}}} + m g. +$$ + +With the same argument as for the top, we obtain + +$$ +A _ {B} = (E _ {\mathrm {M D}} + E _ {\mathrm {C D}}) \frac {l}{1 0} \left(l ^ {2} \ln (\sqrt {l ^ {2} + x ^ {2}} + x) + x \sqrt {l ^ {2} + x ^ {2}} - 2 l x - l ^ {2} \ln l\right). +$$ + +# Drawbacks of the One-Box Model + +- The model does not take torque into account. +- The time-independence of all of our quantities makes it difficult to solve for quantities such as friction. +- The energy absorbed by the top of the box seems lower than desired. +- This model is difficult to extend to interactions between multiple boxes. + +# Stacking Two Identical Boxes + +Consider two identical boxes, one stacked perfectly on the other, and suppose that the tire strikes the top of the higher box. 
We claim:

- The top box cracks, buckles, and/or punctures first; since it absorbs some of the force that acts on it, the force it exerts on the second box is diminished.
- The lower box's top does not crack or puncture; it only buckles. Once the upper box has buckled, we may assume that it is reasonably flattened. Then the effective top of the second box is at least two times—or maybe even three times (if no punctures have occurred in the top of the first box)—the thickness of a cardboard top. This effectively doubles or triples the tensile strength of the top, since tensile strength depends on width. Furthermore, the force from the motorcycle tire, now felt through extra cardboard, is spread over a larger area, decreasing its ability to depress the top of the box.

# The Effects of Friction and Adjacent Boxes

# Preliminaries

For large box configurations, we predict that the following occur:

- As the motorcycle plows through boxes, it loses momentum through collisions.
- Each box experiences friction with other boxes and with the ground. Combined, these forces aid in slowing the motorcycle.

# Average Friction Experienced

Suppose that the combined mass of the motorcycle and the boxes that have "stuck" to it is $m$. Furthermore, let $m_b$ be the mass of the box(es) that this system is about to strike horizontally. The frictional force is $\mu N$, where $\mu$ is the coefficient of friction and $N$ is the normal force. For box-to-box interactions, $\mu \approx 0.4$; for box-to-ground interactions, $\mu \approx 0.6$.

Let the motorcycle-boxes system have initial mass $m$ and strike a box with initial vertical speed $v_{iz}$ and horizontal $y$-speed $v_{iy}$. By conservation of momentum, the new horizontal speed is $v_y = m v_{iy} / (m + m_b)$. In our one-box model, we developed expressions for the average upward force acting on a box, namely $\overline{F_{\mathrm{NET}}}$ and $\overline{F_{\mathrm{BNET}}}$.
These forces are functions of $x$; for simplicity, we assume that the normal force felt equals the average force exerted upon the motorcycle by the box. We use the precalculated average forces due to buckling, cracking, and/or puncturing.

Thus, the average frictional force experienced is $f_s = \mu \overline{F}$.

# Horizontal Distance Travelled

To calculate the energy (and thus speed) that the motorcycle loses to friction, we determine the horizontal distance that the motorcycle travels while experiencing this frictional force. We make several approximations to estimate the horizontal energy lost during compression. Without loss of generality, suppose that we wish to calculate the horizontal distance travelled in the $y$-direction.

Since the vertical energy lost is $\Delta A_z$, where $A_z = \frac{1}{2} m v_{iz}^2$ is the initial vertical kinetic energy, the final $z$-velocity is

$$
v_{fz} = \sqrt{2 (A_z - \Delta A_z) / m};
$$

and an approximate expression for the average $z$-velocity during this time is

$$
\overline{v}_z \approx \frac{v_{fz} + v_{iz}}{2}.
$$

This gives us the approximate time span of the event, $\overline{t} \approx x / \overline{v}_z$. Although the horizontal speed in the $y$-direction changes due to friction, for the purposes of calculating the horizontal distance $d$ travelled we treat it as constant, obtaining

$$
d = v_y \overline{t} = \frac{m v_{iy}}{m + m_b} \frac{2x}{\sqrt{2 (A_z - \Delta A_z) / m} + v_{iz}}.
$$

Thus, the approximate energy lost to friction is

$$
\Delta A_y = \mu \overline{F} d = \mu \overline{F} \cdot \frac{m v_{iy}}{m + m_b} \frac{2x}{\sqrt{2 (A_z - \Delta A_z) / m} + v_{iz}},
$$

and the new horizontal speed after this occurs is

$$
v_{fy} = \sqrt{2 \left( \frac{1}{2} m v_{iy}^2 - \Delta A_y \right) / (m + m_b)}.
$$

Analogous equations hold for frictional forces acting in the $x$-direction.
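The chain of formulas above can be evaluated step by step for one compression event. Every numerical input below (masses, speeds, compression distance, average force, absorbed energy) is an assumed placeholder chosen only to make the sketch run.

```python
import math

# Illustrative inputs for one compression event (all assumed placeholders)
mu = 0.4               # box-to-box friction coefficient (value from the text)
m, m_b = 300.0, 2.0    # motorcycle-system mass and struck-box mass (kg)
v_iy, v_iz = 10.0, 6.0 # initial horizontal and vertical speeds (m/s)
x = 0.05               # compression distance (m)
F_avg = 4000.0         # precomputed average upward force (N)
dA_z = 1500.0          # vertical energy absorbed during the event (J)

A_z = 0.5 * m * v_iz ** 2                # initial vertical kinetic energy
v_y = m * v_iy / (m + m_b)               # horizontal speed after momentum transfer
v_fz = math.sqrt(2 * (A_z - dA_z) / m)   # final vertical speed
t_bar = x / ((v_fz + v_iz) / 2)          # approximate duration of the event
d = v_y * t_bar                          # horizontal distance travelled
dA_y = mu * F_avg * d                    # energy lost to friction
v_fy = math.sqrt(2 * (0.5 * m * v_iy ** 2 - dA_y) / (m + m_b))
print(d, dA_y, v_fy)
```

Iterating this computation box by box is essentially what the simulation described next does.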
# The Dual-Impact Model (DIM)

Since it simplifies computer modeling immensely while maintaining approximate accuracy, our multi-box model consists of a large three-dimensional array of points, where each point represents a $10\mathrm{cm} \times 10\mathrm{cm} \times 10\mathrm{cm}$ cube. The boxes have side length $90\mathrm{cm}$, $70\mathrm{cm}$, or $50\mathrm{cm}$ and occupy "cubes" of these points in space. This has immediate simplifying consequences:

- Since boxes are modeled within a discrete set, their representations do not have full freedom of movement in the set. Therefore, while calculations involving buckling, cracking, and the like are done for each box, the box is modeled in the set as either existing or being flattened. Furthermore, movement is rounded to the nearest $10\mathrm{cm}$.
- This particular model does not allow boxes to be off-angle, spinning, or flipping. It is the buckling that slows the motorcycle.

# Which Configurations Are Desirable?

We prefer configurations that

- minimize the number of boxes used;
- minimize the magnitude of force acting upon the motorcycle—at most an acceleration of $5g$, if possible; and
- ensure that the motorcycle has no downward component of velocity by the time it reaches the ground (if it ever does) and that the motorcycle goes through as many boxes as possible without hitting the ground.

# Preliminary Assumptions

- The front and rear motorcycle wheels have the same velocity.
- The motorcycle has the same angle of inclination when it lands as it did when it took off; that is, the back wheel lands on the cardboard boxes first.
- The motorcycle plus cyclist has sufficient velocity not to fall over.
- The motorcycle moves in the cross-machine direction of the top box.
- The boxes are in layers; boxes in a layer are all the same size.
- The mass $m_b$ of the boxes that the motorcycle with mass $m$ strikes is sufficiently small that it can be neglected, i.e., $m + m_b \approx m$.

# Step-by-Step Description of the Algorithm

Given a box configuration, the two inputs required by our simulation are the position at which the rear tire strikes the first box and the velocity vector of the motorcycle.

The first interaction is the stress and failure of the top of the first box struck. Here our simulation uses the One-Box Model extensively. The distance of the tire from the center of the box determines which type of top failure occurs. This failure determines the change in vertical energy; our simulation calculates the energy loss for both horizontal directions.

Once the top fails, we model the buckling of the box, obtaining the change in vertical velocity and the distances/velocities with which the motorcycle has moved horizontally (taking friction into account).

This entire process is repeated for the front wheel of the motorcycle, as we assume that the front wheel pivots around the rear wheel to a box of its own with similar conditions. We refer to all of these combined interactions, which occur with the back and front wheels on their first box, including top failure and buckling, as the primary impact.

The secondary impact involves the motorcycle interacting with all boxes after the first. Although many more boxes are involved, the forces tend to be much better distributed. The inputs to the simulation are the velocity and position vector of the motorcycle. The only failure that can occur on a box below the first is buckling, when the force exerted exceeds the ECT yield strength. Frictional forces continue to affect the velocity, and some vertical velocity is absorbed in buckling the boxes. The DIM also allows for multiple boxes to buckle at once.
In fact, this model automatically allows the box that is covered the most by the current box to buckle, as well as any other boxes whose tops are covered more than $15\%$ by the current box. After this process is evaluated, the motorcycle's horizontal and vertical positions are recorded, and the process continues with the next layer.

# Testing the Dual-Impact Model

We use a computer to simulate various patterns and sizes of boxes, including simple stacks of boxes on top of each other in columns, more-random configurations, and pyramidal configurations. For each configuration, we vary the speed at which the motorcycle leaves the ramp. We consider

- the maximum height, to determine whether the jump clears the elephant;
- variations in the speed and angle from which the motorcycle leaves the ramp;
- box-on-box interactions in transferring energy and dissipating heat through friction; and
- whether a configuration would stop the motorcycle's motion in all directions.

We find compelling evidence for using larger boxes in a pyramidal configuration.

The optimal configuration uses three layers of boxes $90\mathrm{cm}$ on a side, stacked so that every box (except those at the bottom) rests equally on four others. Also, placing smaller boxes at the bottom is a good strategy for absorbing energy. In fact, when height is an issue, having a large box with smaller boxes beneath it does better than all other smaller alternatives. The second-best configuration is a pyramid with $50\mathrm{cm}$ boxes in the base, $70\mathrm{cm}$ boxes in the middle layer, and $90\mathrm{cm}$ boxes on top.

Our tests indicate that buckling is the main source of energy absorption: Buckling takes place over a much greater distance than other failures and tends to last much longer.

Decelerations during energy absorption remain reasonable and within our bound of $5g$.

Why are pyramids such effective shapes?
When a motorcycle hits a box and eventually interacts with the boxes bordering it below, the more boxes with large areas in contact with that box, the more energy dissipation. Every box buckles, not just the top one as in a column configuration. Pyramids also have a stability hardly found in standard columns of boxes, which individually collapse quite easily.

# Conclusions and Extensions of the Problem

# Solving the Problem

A motorcycle clearing a midsized elephant can be stopped effectively by $90~\mathrm{cm}$ boxes but not as effectively by smaller ones, even in greater numbers. In addition, larger boxes have a higher chance of cracking, since the top of a large box has more give than that of a smaller box; this is useful, since additional energy is absorbed. Finally, one needs fewer large boxes.

Estimating how many boxes to use requires solving for the area in which the motorcycle could land and multiplying it by the number of layers of boxes. The approximate range in the $y$-direction is

$$
\begin{array}{l} \displaystyle \frac{(v_e + \delta v) \cos\theta}{g} \left[ (v_e + \delta v) \sin\theta + \sqrt{(v_e + \delta v)^2 \sin^2\theta + 2 g (h_r - h_b)} \right] \\ \displaystyle {} - \frac{(v_e - \delta v) \cos\theta \cos\delta\phi}{g} \left[ (v_e - \delta v) \sin\theta + \sqrt{(v_e - \delta v)^2 \sin^2\theta + 2 g (h_r - h_b)} \right], \tag{3} \\ \end{array}
$$

where $v_e$ is the expected velocity, $\delta v$ is the deviation in velocity, $\phi = 0$ is the targeted direction, and $\delta\phi$ is the angular offset from it. The range in the $x$-direction is

$$
2 \frac{(v_e + \delta v) \cos\theta \sin\delta\phi}{g} \left[ (v_e + \delta v) \sin\theta + \sqrt{(v_e + \delta v)^2 \sin^2\theta + 2 g (h_r - h_b)} \right]. \tag{4}
$$

The area to be covered is the product of (3) and (4).
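The landing-zone formulas tagged (3) and (4) can be evaluated directly. The launch parameters below (speed, deviation, angle, aiming error, ramp and stack heights) are assumed placeholder values, not the paper's data.

```python
import math

# Illustrative launch parameters (assumptions, not the paper's data)
g = 9.8
v_e, dv = 15.0, 1.0       # expected takeoff speed and its deviation (m/s)
theta = math.radians(30)  # takeoff angle
dphi = math.radians(2)    # angular aiming error
h_r, h_b = 2.0, 1.5       # ramp height and box-stack height (m)

def reach(v, lateral=1.0):
    """Projectile range for launch speed v; `lateral` carries the cos/sin of dphi."""
    return (v * math.cos(theta) * lateral / g) * (
        v * math.sin(theta)
        + math.sqrt(v ** 2 * math.sin(theta) ** 2 + 2 * g * (h_r - h_b))
    )

y_range = reach(v_e + dv) - reach(v_e - dv, math.cos(dphi))  # formula (3)
x_range = 2 * reach(v_e + dv, math.sin(dphi))                # formula (4)
print(y_range, x_range, y_range * x_range)
```

Multiplying the two ranges gives the area the box configuration must cover.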
Each 90-cm box that buckles absorbs 4 million $\mathrm{kg \cdot cm^2 / s^2}$. With the pyramid setup, looking at both wheels, 2 boxes buckle on first impact, 8 on the next level, and 15 on the third (assuming that the first two boxes were separated by one box). Instead of determining the number of boxes that intersect from both the front and back wheels, we assume an upper bound of $N(N + 1)(2N + 1)/6$, where $N$ is the number of levels.

Knowing how much energy the boxes can absorb helps generalize the solution to motorcycles of any mass and jumps of any height.

# Strengths of the Model

- All of our assumptions are well-grounded in the literature.
- Our time-independent model allows for simple modeling without delving into the realm of differential equations.
- The values we obtain are reasonable.

# Weaknesses of the Model and Further Study

- Our ability to model certain interactions, such as friction and lateral velocity, is severely limited by the approximations that we are forced to make.
- Our model is discrete and thus cannot incorporate torques, slipping, and other such effects.

# References

Abbreviations and definitions. n.d. http://www.corrugatedprices.com/reference/definitions.html.
Advantages of corrugated packaging. n.d. http://www.fcbm.org/technology/advantages.htm.
Benenson, Walter, et al. (eds.). 2002. Handbook of Physics. New York: Springer-Verlag.
ECT vs. # test. n.d. http://www.papermart.com/packaging/info-ect.htm.
Formed cardboard, formed hardboard parts. n.d. http://www.diedrichs.de/uk/pappe_hartf.htm.
Habitat utilisation in an elephant enclosure. n.d. http://asio.jde.aca.mmua.ac.uk/new_gis/analysis/elephant.htm.
Heritage Paper. n.d. http://www.heritage-paper.net/packaging.html.
Matcom: The properties of paper and cardboard. n.d. http://www.enged.com/students/matcom/matcom89.html.
Motorcycle specs. n.d. http://home.pacbell.net/will207/motorcycle.htm.
Motorcycle tire size conversion, speed ratings, load ratings chart. n.d. http://www.webbikeworld.com/Motorcycle-tires/tire-data.htm.
MSN learning and research—Elephant. Encarta. n.d. http://encarta.msn.com/encnet/refpages/RefArticle.aspx?refid=761575386.
Packaging glossary. n.d. http://www.euro-corrugated-box.com/box_terms.htm.
Party poopers?: NJ lawmakers limit roller coasters. 2002. WPVI.com Action News (15 September 2002). http://abclocal.go.com/wpvi/news/09152002_nw_coasters.html.
Urbanik, Thomas J. 1997. Linear and nonlinear material effects on postbuckling strength of corrugated containers. Mechanics of Cellulosic Materials 221: 93-99.
_______. 1999. Nonlinear finite element modelling of corrugated board. Mechanics of Cellulosic Materials 231: 101-106.

# Thinking Outside the Box and Over the Elephant

Melissa J. Banister
Matthew Macauley
Micah J. Smukler
Harvey Mudd College
Claremont, CA

Advisor: Jon Jacobsen

# Abstract

We present a mathematical model of the collapsing of a box structure, which is to be used to protect a stunt motorcyclist who jumps over an elephant. Reasonable values of the model's parameters cause it to predict that we should construct the structure out of fifty $6\mathrm{in} \times 28\mathrm{in} \times 28\mathrm{in}$ boxes, stacked five high, two wide, and five long. In general, the model predicts that we should use boxes whose height is one-quarter of the harmonic mean of their length and width. We discuss the assumptions, derivation, and limitations of this model.

# Introduction

A stunt motorcyclist jumps over an elephant; we use cardboard boxes to cushion the landing. Our goal is to determine how to arrange the boxes to protect the motorcyclist. We determine

- how many boxes to use,
- the size of the boxes,
- the arrangement of the boxes, and
- any modifications to the boxes.

In addition, our model must accommodate motorcyclists jumping from different heights and on motorcycles of different weights.
Our goal is to reduce the impulse at landing, thus essentially simulating a much lower jump (of which we assume the rider is capable).

![](images/660bf18a36c7376a34f6b0be9ece4c66adf9d1503b1aeb371cfab0cd13eaca10.jpg)
Figure 1. The landing platform (graphic from Just'In Designs [2001]).

As the rider breaks through the top layer of boxes, crashing through cardboard at a high horizontal speed, it will be difficult to maintain balance. It is too dangerous to rely on the cardboard to stop the horizontal motion of the rider, unless we use such a large pile that keeping it from being visible to the camera would be nearly impossible.

We are faced with how to cushion the rider's landing without creating merely a pit of boxes. Imagine jumping from a 10-ft roof. If the jumper lands on a large wooden platform resting on a deep foam pit, the risk of injury is much less; the foam spreads out the jumper's deceleration over a much longer time, simulating a much lower jumping height.

Our goal is to create a landing platform for the motorcyclist that behaves much like the wooden platform on the foam pit. We simulate the foam pit by stacks of boxes. Our "platform" is constructed from boxes unfolded into cardboard flats and placed in a layer on top of the "pit" (Figure 1). The idea is that the motorcyclist should never break through this layer of flats but should merely break the boxes underneath it.

# Safety Considerations

- Once the motorcycle has landed on the stack of cardboard boxes, its deceleration should be as uniform as possible as the structure collapses—the more uniform the deceleration, the easier it is to maintain balance.
- We want the platform to remain as level and rigid as possible. If it is not level, the rider may lose balance; if it is insufficiently rigid, it may bend and collapse into the pile of boxes.

# Terminology

- The flute type of a gauge of cardboard refers to its corrugated core structure.
Three sheets of linerboard compose one sheet of corrugated cardboard; the middle one is shaped into flutes, or waves, by a machine, and then the outer two sheets are glued on either side of it. C-flute corrugated cardboard, for example, is the most common form [Mall City Containers n.d.].

- The edge crush test (ECT) value of a gauge of cardboard is the force per unit length that must be applied along the edge before the edge breaks. We make extensive use of the concept of such a value; however, the actual numbers given for gauges of cardboard apply to ideal situations that would not be replicated in the cases that we are considering [Boxland Online 1999].
- The flatwise compression test (FCT) value of a gauge of cardboard is the pressure that must be applied to collapse it. It does not directly correlate to the stress placed on boxes in practice and therefore is not used as an industry standard [Pflug et al. 2000].
- The bursting strength of a gauge of cardboard is the amount of air pressure required to break a sample. Because our model is concerned mainly with the strength of a box edge, while the bursting strength more accurately measures the strength of a face, we do not make use of it [Boxland Online 1999].
- The stacking weight of a box is the weight that can be applied uniformly to the top of a box without crushing it. In general, the stacking weight of a box is smaller than the ECT or bursting strength, because it takes into account the structural weaknesses of the particular box. We derive most of our numerical values for box strength from manufacturers' specified stacking weights [Bankers Box 2003].

# Assumptions

- The force exerted on every layer beneath the top platform is horizontally uniform. The force that the motorcycle exerts on the top platform is concentrated where the wheels touch. Ideally, however, this top platform is perfectly flat and rigid, so it distributes the force evenly to all lower layers.
We approach this ideal by adding additional flats to the top platform.
- The stacking weight of a box is proportional to its perimeter and inversely proportional to its height. This assumption is physically reasonable, because weight on a box is supported by the edges and because the material in a taller box on average is farther from the box's points of stability than in a shorter box. This claim can be verified from data [Clean Sweep Supply 2002].
- Nearly all of the work done to crush a box is used to initially damage its structural integrity. After the structure of a box is damaged, the remaining compression follows much more easily; indeed, we suppose it to be negligible. We denote by $d$ the distance through which this initial work is done and assume for simplicity that the work is done uniformly throughout $d$. Through rough experiments performed in our workroom, we find that $d \approx 0.03 \, \text{m}$. We assume that this value is constant but also discuss the effect of making it a function of the size of the box and of the speed of the crushing object.

![](images/ec6f2641b65cfc1d901bde62ab98223f9528dbf4a01bde0ec868c4d40fab504b.jpg)
Figure 2. Tire before and after landing, acting as a shock absorber.

![](images/1ef2187ac163dbe3c0d7340cc67d3cc6058fa9f3145422281750ab7e4ce87830.jpg)

- When the motorcycle lands, we ignore the effects of any shock absorbers and assume that the motorcyclist does not shift position to cushion the fall. This is a worst-case scenario. To calculate how much force the tires experience per unit area, we consider a standard 19-inch tire of height $90\mathrm{mm}$ and width $120\mathrm{mm}$ [Kawasaki 2002]. It compresses no less than $50\mathrm{mm}$ (Figure 2). A simple geometry calculation then tells us that the surface area of the tire touching the platform is approximately $3000\mathrm{mm}^2$. We assume that the force exerted on the motorcycle on landing is uniformly distributed over this area.
- The pressure required to compress a stack of cardboard flats completely is the sum of the pressures required to compress each individual flat.
- In a uniformly layered stack of boxes, each layer collapses completely before the layer beneath it begins to collapse. This is probably an oversimplification; however, it is reasonable to suppose that the motorcycle is falling nearly as fast as the force that it is transmitting.
- The motorcyclist can easily land a jump $0.25 \mathrm{~m}$ high.

# The Model

# Crushing an Individual Box

For a cardboard box of height $h$, width $w$, and length $l$, by the assumptions made above, the stacking weight $S$ is

$$
S(h, l, w) = \frac{k (l + w)}{h},
$$

where $k$ is a constant (with units of force).

Once a box is compressed by a small amount, its spine breaks and very little additional force is required to flatten it. Thus, most of the work that the box does on the bike is done over the distance $d \ll h$, and we assume that this work is done uniformly over that distance; this work is

$$
W = (\mathrm{force})(\mathrm{distance}) = \frac{k (l + w) d}{h}.
$$

# Crushing a Layer of Boxes

To ensure that a layer of boxes collapses uniformly, we build it out of $n$ identical boxes. The total amount of work required to crush such a layer is

$$
W_T = n \frac{k (l + w) d}{h}.
$$

Once the structure starts to collapse, we want the rider to maintain a roughly constant average deceleration $g'$ over each layer. It follows that the layer should do total work $m(g + g')h$, so

$$
W_T = \frac{n k d (l + w)}{h} = m (g + g') h. \tag{1}
$$

Define $A = nlw$ to be the cross-sectional area of a layer of boxes; rearranging (1) produces

$$
B \equiv \frac{A d k}{m (g + g')} = \frac{h^2 l w}{l + w}. 
\tag{2}
$$

The constant $B$ gives a necessary relationship among the dimensions of the box if we wish to maintain constant deceleration throughout the collision.

Finally, we would like to minimize the total amount of material, subject to the above constraint. To do so, consider the efficiency of a layer with a given composition of boxes to be the ratio of the amount of work done to the amount of material used. If the motorcyclist peaks at a height $h_0$, we must do work $mgh_0$ to stop the motorcycle. We minimize the total material needed by maximizing the efficiency of each layer.

The amount of material in a box is roughly proportional to its surface area, $s = 2(hl + lw + wh)$. Thus the amount of material used by the layer is proportional to $ns = 2n(hl + lw + wh)$. It follows that the efficiency $E$ of a layer composed of boxes of dimensions $h \times l \times w$ is

$$
E \propto \frac{W_T}{n s} = \frac{n k d (l + w)}{2 n h (h l + l w + w h)} = \frac{k d (l + w)}{2 h (h l + l w + h w)}.
$$

We maximize $E$ for each layer, subject to the constraint (2). The calculations are easier if we minimize $1/E$. Neglecting constant factors, we minimize

$$
f(h, l, w) = \frac{h}{l + w} (h l + h w + l w)
$$

subject to the constraint

$$
\frac{h^2 l w}{l + w} = B,
$$

where $B$ is the constant defined in (2). However, as long as we are obeying this constraint (that each layer does the same total work), we can write

$$
f(h, l, w) = h^2 + \frac{h l w}{l + w} = h^2 + \frac{B}{h},
$$

and thus $f$ depends only on $h$. The function $f$ is minimized at

$$
h = \sqrt[3]{\frac{B}{2}} = \sqrt[3]{\frac{A d k}{2 m (g + g')}}. \tag{3}
$$

At this value of $h$, the constraint reduces to

$$
\frac{l w}{l + w} = \frac{B}{h^2} = \sqrt[3]{4 B}.
$$

This implies that the harmonic mean of $l$ and $w$ should be

$$
H \equiv \frac{2 l w}{l + w} = 2 \sqrt[3]{4 B} = 4 h. 
$$

So, in the optimal situation, the box should be roughly four times as long and wide as it is tall. However, there are other considerations.

- For commercially available boxes, we must choose among a discrete set of realizable box dimensions.
- The requirement that the number $n$ of boxes in a layer be an integer affects the possible values of many of the parameters on which $h$ is based (most notably $A$, the cross-sectional area of the layer). Among the candidate boxes, we select the one that most nearly compensates for this change in parameters.

# The Entire Structure

We need to determine the gross parameters of the entire structure. We determine the cross-sectional area of the structure by considering how much space the motorcyclist needs to land safely. The motorcycle has length about $1~\mathrm{m}$; we should leave at least this much space perpendicular to the direction of travel. We need substantially more in the direction of travel, to ensure that the motorcyclist can land safely with a reasonable margin of error. So we let the structure be $1~\mathrm{m}$ wide and $3~\mathrm{m}$ long, for a cross-sectional area of $3~\mathrm{m}^2$.

How many corrugated cardboard flats should we use? We do not want them all to crease under the weight of the motorcycle and rider, for then the motorcyclist could be thrown off balance. So we must determine how much we can expect the flats to bend as a result of the force exerted by the motorcycle.

To calculate the number of flats required, we use the flatwise compression test (FCT) data in Pflug et al. [2000] for C-flute cardboard. Though our goal is to prevent substantial creasing of the entire layer of flats, we note that if we have a reasonable number of flats, creasing the bottom flat requires completely crushing a substantial area along most of the remaining layers. Less than $20\%$ of this pressure is required to dimple a sheet of cardboard to the point where it may be creased. 
Since we assume that the pressure required to crush the flats scales linearly with the number of flats, we find the maximum pressure that the motorcycle will ever exert on the flats and divide it by the FCT value for a single C-flute cardboard flat in order to obtain the total number of sheets needed.

A brief examination of a piece of cardboard demonstrates that bending it in the direction parallel to the flutes is much easier than bending it in the direction perpendicular to the flutes. Hence, it would be risky to orient all flats in the same direction; if force is applied along a strip of the surface, all of the flats may easily give way and bend in succession. So, it would be wise to alternate the orientation of the flats in the stack. To make sure that we have the full strength in any direction, we use twice the number of flats calculated, alternating the flute direction from one flat to the next as we build the stack. Then, no matter how the motorcycle is oriented when it lands on the stack, at least the required strength exists in every direction.

Finally, we determine the overall height $h_1$ of the structure as follows. The motorcycle accelerates downward at a rate of $g$ from the peak of its flight to the top of the structure; this distance is $h_0 - h_1$. It then decelerates constantly at a rate of $g'$ until it reaches the ground, over a distance $h_1$. Since the motorcycle is at rest both at the apex of its flight and when it reaches the ground, the relation

$$
(h_0 - h_1) g = h_1 g'
$$

must hold. It follows, then, that

$$
h_1 = \frac{h_0 g}{g + g'}. \tag{4}
$$

# Finding the Desired Deceleration

To determine $g'$, we need the height $H$ of the jump. We discuss how to build the platform so that the rider does not experience a deceleration greater than for a 0.25-m jump. 
We assume that most of the cushion of the landing is in the compression of the tires, which compress $\Delta x = 5~\mathrm{cm}$. The vertical velocity of the rider on impact is $\sqrt{2gH}$. We also make the approximation that the motorcycle experiences constant deceleration after its tires hit the ground. So the rider travels at an initial speed of $\sqrt{2gH}$ and stops after $0.05~\mathrm{m}$. We determine the deceleration $a$ from

$$
v_0^2 - v_f^2 = 2 a \, \Delta x,
$$

$$
\left(\sqrt{2 g H}\right)^2 - 0 = 2 a (0.05).
$$

Solving yields $a = 20gH$. That is, if the motorcyclist jumps to a height of $4~\mathrm{m}$, on landing the ground exerts a force on the motorcycle that feels like 80 times the force of gravity; if the jump were from $0.25~\mathrm{m}$, this force would be only $5mg$. To simulate a 0.25-m fall, we should have the motorcyclist decelerate at a rate of $5g$.

# Numerical Results

We now return to the question of determining the number of flats needed for the top layer. By Pflug et al. [2000], the flatwise compression test (FCT) result for C-flute board is $1.5 \times 10^{5}~\mathrm{Pa}$. We expect the motorcyclist to experience an acceleration of approximately $5g$ upon landing, distributed over a surface area of $3000~\mathrm{mm}^{2} = 0.003~\mathrm{m}^{2}$. We assume that the motorcycle has mass $100~\mathrm{kg}$ [Kawasaki 2002] and the rider has mass $60~\mathrm{kg}$. The pressure exerted on the cardboard is

$$
P = \frac{(160~\mathrm{kg})(5)(9.8~\mathrm{m/s^2})}{0.003~\mathrm{m}^2} = 2.61 \times 10^{6}~\mathrm{Pa}.
$$

For the cardboard at the bottom of the stack of flats to be bent significantly, enough pressure must be applied to crush most of the cardboard above it. Thus a lower bound on the number of flats is $\left\lceil (2.61 \times 10^{6}~\mathrm{Pa}) / (1.50 \times 10^{5}~\mathrm{Pa}) \right\rceil = \left\lceil 17.4 \right\rceil = 18$ flats. 
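These figures are easy to verify; the short Python sketch below, using only the values quoted above, reproduces the pressure and the lower bound of 18 flats.

```python
import math

g = 9.8            # m/s^2
mass = 100 + 60    # motorcycle + rider, kg
decel = 5          # target deceleration, in multiples of g
area = 0.003       # tire contact area, m^2
fct = 1.5e5        # flatwise compression strength of one C-flute flat, Pa

# Peak pressure on the top flat: mass decelerating at 5g over the contact patch.
pressure = mass * decel * g / area        # about 2.61e6 Pa

# Since the pressures required to crush the flats add, the lower bound on the
# number of flats is the ceiling of the pressure ratio.
n_flats = math.ceil(pressure / fct)       # 18 flats (before the safety doubling)
```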
To be perfectly safe, we double this figure and cross-hatch the flats; that is, we want 36 flats in the top platform, with flute directions alternating from flat to flat.

Next, we calculate the total mass of cardboard for these flats. We assume that the flats are $1~\mathrm{m} \times 3~\mathrm{m}$. From Gilchrist et al. [1999], we know that the areal density of C-flute corrugated cardboard is $537~\mathrm{g/m^2}$; we obtain a mass of $1.611~\mathrm{kg}$ per flat, or about $60~\mathrm{kg}$ for 36 flats, which is comparable to the weight of a second person. The thickness of a C-flute flat is $4.4~\mathrm{mm}$ [Mall City Containers n.d.]; with 36 flats, the height of the stack is $158.4~\mathrm{mm}$.

We now plug some reasonable values into (3) and get a good approximation of the desired height of the boxes. Let the stacking weight constant $k = 800~\mathrm{N}$; this is roughly the mean value found in Clean Sweep Supply [2002]. These values, along with $g' = 5g$ and a total falling mass of $m = 220~\mathrm{kg}$ (motorcycle, rider, and flats), give an optimal $h$ of roughly

$$
h = \sqrt[3]{\frac{(3~\mathrm{m}^2)(0.05~\mathrm{m})(800~\mathrm{N})}{2\,(220~\mathrm{kg})\,[9.8 + 5(9.8)]~\mathrm{m/s^2}}} \approx 0.17~\mathrm{m}.
$$

So the harmonic mean of $l$ and $w$ must be on the order of $4h = 0.67~\mathrm{m}$.

Converting these values into inches gives $h = 6.5$ in and a value of roughly 26.5 in for the harmonic mean of $l$ and $w$. The two commercially available box sizes that most closely approximate these values are $6~\text{in} \times 26~\text{in} \times 26~\text{in}$ and $6~\text{in} \times 28~\text{in} \times 28~\text{in}$ [Uline Shipping Supplies 2002]. Note that we must increase the cross-sectional area beyond what was calculated in order to keep the number of boxes per layer an integer. Doing so increases the total value of $B = Adk/[m(g + g')]$; thus, we ideally want a somewhat larger box than calculated. 
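As a check on the arithmetic in (3), a small Python sketch with the parameter values above (the $220~\mathrm{kg}$ being the combined mass of motorcycle, rider, and flats) gives the same optimal height:

```python
A = 3.0        # cross-sectional area of a layer, m^2
d = 0.05       # crushing distance of one box, m
k = 800.0      # stacking-weight constant, N
m = 220.0      # falling mass: motorcycle + rider + flats, kg
g = 9.8        # m/s^2
gp = 5 * g     # desired deceleration g'

# Optimal box height from (3): h = (A d k / (2 m (g + g')))^(1/3)
h = (A * d * k / (2 * m * (g + gp))) ** (1 / 3)   # roughly 0.17 m
harmonic_mean = 4 * h                             # target 2lw/(l+w), roughly 0.67 m
h_inches = h / 0.0254                             # roughly 6.5 in
```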
Since we cannot increase $h$ (any commercially available box of this rough shape and size has a height of 6 in), we increase $l$ and $w$. This choice increases the amount by which the cross-sectional area exceeds the value previously calculated. Since the optimum values of $l$ and $w$ change only as $A^{1/3}$, however, the larger box is the closer to the optimum.

To have a landing pad of size at least $1~\mathrm{m} \times 3~\mathrm{m}$, we need two of these boxes lengthwise and five widthwise, for a total of 10 boxes per layer. We determine the total height $h_1$ of the pile as follows. The average height of an adult male African elephant is $3.2~\mathrm{m}$ [Theroux 2002]; the motorcyclist could easily clear such an elephant with a jump of $h_0 \approx 4~\mathrm{m}$. From this value, and (4), we obtain

$$
h_1 = \frac{h_0 g}{g + g'} = \frac{h_0}{6};
$$

thus, we want $h_1 \approx 0.67~\mathrm{m} = 26.2~\mathrm{in}$. To exceed this number with 6-inch layers, we need 5 layers.

To summarize: We use $6~\text{in} \times 28~\text{in} \times 28~\text{in}$ boxes. There are 10 boxes in each layer of the stack, in a $2 \times 5$ grid, and the stack consists of 5 layers of boxes with 36 cross-hatched cardboard flats piled on top.

# Changing the Parameters

Because the dimensions of the boxes used vary as the cube root of $B$, which is either linear or inversely linear in most of our parameters, our results are fairly resistant to change in any parameter. For example, changing one of them by a factor of 2 changes the optimal box dimensions by only $25\%$; even increasing a parameter by an order of magnitude only doubles the dimensions.

The one exception is the jumping height. Since $B$ is independent of that, so are the optimum box dimensions, except insofar as the height affects the amount of area. 
However, the jumping height does affect the height of the box pile; we want that to increase linearly with the jumping height, in a ratio given by the desired deceleration $g'$. Additionally, a very high or very low jumping height will cause our model to break down completely.

For C-flute cardboard in the flats, the total amount of cardboard needed essentially depends only on the weight of the motorcycle and rider and on the net deceleration. This may at first glance appear counterintuitive: Should we not expect the jumping height to affect the stress put on the flats? In fact, the height is irrelevant as long as the assumptions of the model are justified. Since the boxes below the flats are calculated to break upon experiencing a force $mg'$, the force transmitted through the flats by the motorcyclist is never larger than this. The only exception is the initial force that the flats experience on first being hit by the motorcycle. If the jumping height is sufficiently large, the assumption that this initial force is dominated by the normal force exerted by the boxes underneath will break down, and we may need to increase the thickness beyond the value calculated. However, for jumping heights this large, it is probable that other parts of our model will break down.

Certain predictions of our model are independent of its parameters, so long as our assumptions are justified. Most notable is the observation that to best conserve material for a given result, the height of a box should be one-quarter the harmonic mean of its other two dimensions, the ratio of which can be specified arbitrarily.

Finally, for very large jumps, we may need to revise our calculation of $g'$. While a rider may be comfortable with a deceleration of $5g$ for a small fraction of a second, it is less reasonable to assume the same comfort level if the deceleration is to last several seconds. 
However, we shouldn't be too concerned about this, since for jumps that are high enough for it to be an issue, our model is likely to break down in other ways. + +# Weaknesses of the Model + +The primary weakness of our model is its dependence on the adjustable parameter $d$ , the distance through which the work of crushing the box is done. We assume that over the box sizes that we are concerned with, $d$ is roughly independent of the dimensions of the box; verifying the truth of this assertion would require experimentation. However, so long as $d$ varies at most linearly with $h$ (which is reasonable, since for any box we must have $d < h$ ), our method still works; we can still find a uniform optimal box size for the pile. If $d$ depends substantially on the amount of force applied (or the velocity of the motorcycle), this will no longer be the case: the optimal box size will vary with position in the stack of boxes. However, we think this hypothesis is unlikely. + +Our assumption that each layer collapses in a reasonably uniform manner is also a weakness in the model, at least for some parameter values. If the motorcycle hits the structure with too much velocity, or the desired cross-sectional area of the structure is too large, it may not be possible to layer enough flats on top of the structure to ensure uniform collapse, especially if restricted to commercially available cardboard sizes. + +Finally, it is unlikely that we could find easily a supply of boxes whose faces are as large as $1\mathrm{m} \times 3\mathrm{m}$ , to create the flats we want. While we could custom-order such flats, this would likely drive the price of construction up substantially. The other alternatives are to use several smaller flats in the place of each large one, or to unfold large cardboard boxes to make the flats. 
Doing this could weaken the structure, but the problem could likely be circumvented by varying the positions of the weak spots in each layer of flats (and possibly by slightly increasing the safety factor in the number of flats used).

# Conclusion

We have designed a landing platform out of cardboard boxes for a stunt motorcyclist who will jump over an elephant. Our model of this platform predicts that we can minimize the material used by using boxes with dimensions $6~\mathrm{in} \times 28~\mathrm{in} \times 28~\mathrm{in}$.

The size of the boxes used depends only weakly on the mass of the motorcycle and rider, and not at all on the height of the jump. To accommodate a higher jump, just use more layers of boxes. The calculations are based on simulating the landing of a 0.25-m jump. To simulate a lower jump, we would use slightly taller boxes.

An advantage of our model is the small size of the cardboard stack (less than $3~\mathrm{m}^3$), and specifically its short height. It should be easy to hide such a structure from the camera.

# References

Bankers Box. 2003. Bankers Box frequently asked questions. www.bankersbox.com/service/index.cfm.
Baum, G.A., D.C. Brennan, and C.C. Habeger. 1981. Orthotropic elastic constants of paper. Tappi Journal 64 (2): 97-101.
Boxland Online. 1999. Flutes. www.boxland.com/flutes.html.
Clarke, J.W., and J.A. Marcondes. 1998. What pallet manufacturers should know about corrugated boxes. Tech. rept. 2. Blacksburg, VA: Center for Unit Load Design, Virginia Polytechnic Institute. www.unitload.vt.edu/technote/980918/980918.htm.
Clean Sweep Supply, Inc. 2002. Recycled Liberty Plus storage boxes. www.cleansweepsupply.com/pages/skugroup13316.html.
Gilchrist, A.C., J.C. Suhling, and T.J. Urbanik. 1999. Nonlinear finite-element modeling of corrugated board. Mechanics of Cellulose Materials 85: 101-106.
Just'In Designs. 2001. Free goldwig clipart. www.just-in.net/clipart/dirt/index.htm.
Kawasaki, Inc. 2002. 
Kawasaki motocross off-road series. www.fremontmotorsports.com/HTML/Kawasaki/kawasaki-MX-1.htm.
Korchak, R. 2002. Motorcycle tire sizes. www.webbikeworld.com/Motorcycle-tires/sizes.htm.
Mall City Containers. n.d. Packaging basics. www.mallcitycontainers.com/Packaging.htm.
Pflug, J., I. Verpoest, and D. Vandepitte. 2000. Folded honeycomb cardboard and core material for structural applications. In Proceedings of the 5th International Conference on Sandwich Construction, 361-372.
Theroux, S. 2002. Animal records: Size. www.swishweb.com/Animal_Kingdom/animal06.htm.
Uline Shipping Supplies. 2002. Uline: Shipping supply specialists. www.uline.com/Boxes_Search.asp.

# You Too Can Be James Bond

Deng Xiaowei
Xu Wei
Zhang Zhenyu
Southeast University
Nanjing, China

Advisor: Chen Enshui

# Abstract

We divide the jump into three phases: flying through the air, punching through the stack, and landing on the ground. We construct four models to minimize the number and the cost of boxes.

In the Ideal Mechanical model, we consider the boxes' force on the motorcycle and stunt person to be constant. In the Realistic Mechanical model, we focus on how the boxes support the motorcycle and stunt person, in three phases: elastic deformation, plastic deformation, and crush-down deformation. In the Ideal Air Box model, by contrast, the internal air pressure of each box can't be ignored. As a matter of fact, the boxes are unsealed, so we amend the Ideal Air Box model to develop a Realistic Air Box model. We discuss the strengths and weaknesses of each model.

We define a metric $U$, which is a function of the cost and the number of boxes. By mathematical programming, we calculate the size and the number of the boxes. In normal conditions, we assume that the safe speed is $5.42~\mathrm{m/s}$. For a total weight of stunt person and motorcycle of $187~\mathrm{kg}$, we need 196 boxes of size $0.7~\mathrm{m} \times 0.7~\mathrm{m} \times 0.5~\mathrm{m}$. 
We analyze the accuracy and sensitivity of the result to such factors as the total weight, the contact area, and the velocity. We also offer some important suggestions on how to pile up the boxes and how to change the shape of the boxes. + +# Assumptions and Analysis + +# About Boxes and the Pile of Boxes + +- All the boxes are the same size. The ratio of length to width has little effect on the compression strength of a cardboard box; so to simplify the problem, we assume a square cross section. + +Table 1. +Variables, parameters, and physical constants. + +
| Group | Notation | Description | Units |
|---|---|---|---|
| Box | $h$ | height of box | m |
| | $r$ | length of side of box | m |
| | $Z$ | perimeter around top of box | m |
| | $A_0$ | total surface area of box | m² |
| | $P$ | pressure in box (cylinder) | Pa |
| | $P_t$ | pressure in box at time $t$ when it collapses | Pa |
| | $k$ | pressure in box at time $t$, in atmospheres | atm |
| | $\sigma$ | rate of air leaking from box | m³/s |
| | $\sigma_i$ | rate of air leaking from box in time interval $i$ | m³/s |
| | $V$ | volume of box (cylinder) | m³ |
| | $V_i$ | volume of box in time interval $i$ | m³ |
| Pile | $L_{\text{pile}}$ | length of the pile | m |
| | $W_{\text{pile}}$ | width of the pile | m |
| | $H_{\text{pile}}$ | height of the pile | m |
| | $L$ | number of layers of boxes in the pile | |
| | Num | total number of boxes | |
| | Cost | cost of boxes | |
| | $S$ | upper surface area of pile | m² |
| Jump | $H_{\text{elephant}}$ | average height of the elephant | m |
| | $H_{\max}$ | maximum height of the jump | 4 m |
| | $\nu$ | fraction of $H_{\max}$ that a person can reach | |
| | $H_0$ | height of the ramp | m |
| | $\theta$ | angle of the ramp | |
| | $v_0$ | launch speed | m/s |
| | $v_{\text{safe}}$ | safe speed at which to hit the ground | m/s |
| | $M$ | mass of motor plus stunt person | kg |
| Kellicut formula | $P$ | compressive strength of the box | |
| | $P_x$ | comprehensive annular compressive strength of the paper | |
| | $dx_2$ | corrugation constant | |
| | $Z$ | circumference of the top surface of the box | m |
| | $J$ | box shape coefficient | |
| | $F_0$ | maximum supporting force from the box | N |
| | $F$ | buffering force of the box | N |
| | $b$ | constant concerning the properties of paper | |
| Other | $x$, $z$ | distance that the cylinder is compressed | m |
| | $x_t$ | cylinder displacement when box collapses | m |
| | $z_m$ | compression distance at which box collapses | m |
| | $dt$ | interval of time | s |
| | $W$ | work done | J |
| | $D_s$, $k_1$, $k_2$ | quantities related to cost | |
| | $\lambda_h$, $\lambda_c$ | weight factors | |
| | $T$, $U$ | functions to be optimized | |
| Constants | $g$ | acceleration due to gravity, at Earth's surface | m/s² |
| | $P_0$ | standard atmospheric pressure at Earth's surface | Pa |
- After the box has been crushed down to some extent, we ignore the supporting force that it can still supply.
- Considering practical production and transport limitations, the size of the box should not be too large.
- The boxes are piled together in the shape of a rectangular solid.
- When the motorcycle impacts one layer, it has little effect on the next layer. The layer below is considered to be a rigid, flat surface (its displacement is ignored).
- We ignore the weight of the boxes; they are much lighter than the person plus motorcycle.

# About the Stunt Person and the Motorcycle

- We ignore the resistance of the air to the horizontal velocity of the person and motorcycle; this friction is small enough to be negligible.
- The stunt person has received professional training, is skilled, and is equipped with all allowable protective gear.
- The average weight of a stunt person is $70~\mathrm{kg}$.
- We choose a certain type of motorcycle (e.g., Yamaha 2003 TT-R225), which weighs 259 lb [Yamaha Motor Corp. 2003].

# About the Elephant

- The elephant keeps calm during the jump.
- We adopt the classic value of $3.5~\mathrm{m}$ for the height of the elephant [PBS Online n.d.].

# About the Weather

The weather is fine for filming and jumping, with appropriate temperature and humidity. On a gusty day, the wind might make the person lose balance in the air.

# About the Camera

The most attractive moment is when the person is over the elephant at the maximum height $H_{\mathrm{max}}$. We have to make sure that the boxes do not appear on camera; namely, we need $H_{\mathrm{pile}} \leq \nu H_{\mathrm{max}}$, where the coefficient $\nu$ is best determined empirically. In our model, we set $\nu = 0.625$.

# About the Ramp for Jumping

The ramp for jumping is a slope at angle $\theta$ of length $L_{\mathrm{slope}}$, as determined by the jump height or the horizontal distance for landing. 
# The Development of Models

When the stunt person begins to contact the cardboard boxes or the ground, he or she may suffer a great shock. To absorb the momentum, the contact time must be extended.

We divide the whole process into three independent phases:

1. flying through the air,
2. punching through the stack, and
3. landing on the ground.

We find the maximum height in phase 1 and the greatest speed of hitting the ground in phase 3; how the person plus motorcycle interact with the boxes in phase 2 is based on the results of phases 1 and 3. Phases 1 and 3 are simple, so we solve them first.

# Flying through the Air

The stunt person leaves the ramp with initial speed $v_0$ at angle $\theta$ to the horizontal at height $H_0$ (Figure 1).

![](images/0bc1ac21b941b32c61600559382d0d6548315c549e0c5c8765688b924fce6821.jpg)
Figure 1. The jump.

In the air, the stunt person on the motorcycle is affected only by the constant acceleration of gravity. Based on Newton's Second Law, we have

$$
\begin{array}{l} x(t) = (v_0 \cos\theta)\, t, \\ y(t) = (v_0 \sin\theta)\, t - \frac{1}{2} g t^2, \end{array}
$$

where $x(t)$ and $y(t)$ are the horizontal and vertical displacements from the launch point after $t$ seconds. The launch speed $v_0$ and the maximum height $H_{\mathrm{max}}$ are related by

$$
v_0 \sin\theta = \sqrt{2 g (H_{\mathrm{max}} - H_0)}.
$$

For an elephant of height $3.5~\mathrm{m}$, we take $H_{\mathrm{max}} = 4~\mathrm{m}$. For $H_0 = 0.5~\mathrm{m}$ and $\theta = 30^{\circ}$, we get $v_0 = \sqrt{2 \cdot 9.8 \cdot 3.5}/0.5 = 16.6~\mathrm{m/s}$. With a $2~\mathrm{m}$-high box-pile, the stunt person hits the pile with vertical speed $6.3~\mathrm{m/s}$ and horizontal speed $14.3~\mathrm{m/s}$; the distance between the landing point and the elephant is $D = 9.2~\mathrm{m}$.

# Would the Landing Be Safe? 
To simplify the problem, we ignore the complex process that occurs when the person begins to touch the ground. We assume instead that there is a critical safe speed $v_{\mathrm{safe}}$: if the speed of hitting the ground is less than or equal to this speed, the person will not be injured. The safe speed is related to the ground surface (hard, grass, mud, etc.) and the materials used (paper, rubber, etc.). Our simulation uses a typical value, $v_{\mathrm{safe}} = 5.42~\mathrm{m/s}$.

# Is the Pile Area Large Enough?

The height $H_{\mathrm{pile}}$ of the pile of boxes is related to the maximum height $H_{\mathrm{max}}$ that the stunt person reaches and also to the vertical speed of hitting the boxes. The greater $H_{\mathrm{max}}$, the greater the required $H_{\mathrm{pile}}$, with $H_{\mathrm{pile}} = Lh$, where $L$ is the number of layers of boxes and $h$ is the height of a single box.

Would an $L_{\mathrm{pile}}$ equal to the length of the person be enough? The answer is no. When accelerating on the ramp, the stunt person can't make the initial jump speed exactly what we calculate. We think that 3-5 times the length of the person is needed.

The stunt person does not leave the ramp aligned exactly along the central axis and does not keep the motorcycle exactly along that axis after hitting the boxes. Because there may be some horizontal movement, $W_{\mathrm{pile}}$ should be 2-4 times the length of the person.

In our simulation, we let $L_{\mathrm{pile}} = 6~\mathrm{m}$ and $W_{\mathrm{pile}} = 4~\mathrm{m}$.

# Boxes: How to Cushion the Person

# Ideal Mechanical Model

Based on our general assumptions, we suppose that while the stunt person is destroying the boxes of the current layer, boxes in lower layers are seldom affected and keep still.

To treat the collision with just one layer separately, we suppose that the interaction of the stunt person with the box-pile produces motion with a constant acceleration. 
During this interaction, the stunt person plus motorcycle is supported by a constant vertical force $F$; that is, we treat $F$ as the average force during the whole process.

It can be proved that although the stunt person strikes the boxes of different layers at different initial velocities, the work consumed in falling through each box is the same (Appendix A). The number of layers $L$ of the box-pile is determined by the formula

$$
\text{Work} \times L - m g L h \geq \frac{1}{2} m v_0^2 - \frac{1}{2} m v_{\text{safe}}^2,
$$

where $L$ is the smallest integer that satisfies this inequality.

# Strengths and Weaknesses

This model is simple and efficient. However, we have ignored the detailed process and substituted constant work, though in fact the force changes with time.

# Realistic Mechanical Model

First we study the empirical load-deformation curve of the cushion system, showing the boxes' deformation under a static load (Figure 2) [Yan and Yuan 2000].

![](images/38d1764ad393e6f584212602c17cedd9d3e99702485bb937e51f9a15f47aa8c.jpg)
Figure 2. Deformation curve.

The compression process is divided into three phases, as shown in the figure:

- OA phase: Elastic deformation, according to Hooke's Law.
- AB phase: Plastic deformation. The compressive force grows more slowly and reaches its maximum.
- BC phase: Crush-down deformation. After the compressive force reaches its maximum, it starts to fall; the unrecoverable deformation goes on increasing.

According to the Kellicut formula [Yan and Yuan 2000; Zhao et al.], the compressive strength of a box is

$$
P = P_x \left(\frac{d x_2}{Z/4}\right)^{1/3} Z J,
$$

where

$P$ is the compressive strength of the box,

$P_x$ is the comprehensive annular compressive strength of the paper,

$d x_2$ is the corrugation constant,

$Z$ is the circumference of the top surface, and

$J$ is the box shape coefficient. 
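To illustrate how the Kellicut formula scales with the perimeter, here is a small sketch; the numerical parameter values are hypothetical (chosen only to exhibit the $Z^{2/3}$ scaling of the strength), not real board data.

```python
def kellicut(Px, dx2, Z, J):
    """Compressive strength of a box per the Kellicut formula:
    P = Px * (dx2 / (Z/4))**(1/3) * Z * J."""
    return Px * (dx2 / (Z / 4)) ** (1 / 3) * Z * J

# Hypothetical values, for scaling only (not measured board parameters).
P1 = kellicut(Px=5000.0, dx2=0.008, Z=2.0, J=1.0)
P2 = kellicut(Px=5000.0, dx2=0.008, Z=16.0, J=1.0)  # perimeter scaled by 8

# Because P ~ Z^(2/3), an 8-fold larger perimeter gives 4 times the strength.
ratio = P2 / P1
```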
For the stunt person plus motorcycle, the maximum supporting force from the box is nearly

$$
F_0 = P \frac{s}{(Z/4)^2} = P_x \left(\frac{d x_2}{Z/4}\right)^{1/3} Z J \frac{s}{(Z/4)^2} = b Z^{-4/3} s,
$$

where $b$ is a constant concerning the properties of the paper.

We approximate the dynamic impact process by the static loading process and obtain the following buffering force and its deformation graph (Figure 3):

$$
F = \left\{ \begin{array}{ll} \dfrac{F_0}{a} x, & 0 \leq x \leq a; \\ F_0, & a \leq x \leq b; \\ F_0 \exp\left(\dfrac{-a(x - b)}{h}\right), & x \geq b. \end{array} \right.
$$

![](images/1ca8142557a6cba313d9740a33dc553f7b3599f6d86d2c4557e8b1fcdb27ff0c.jpg)
Figure 3. Force-deformation figure.

The model describes the mechanical capability of the box and offers an appropriate curve of the relationship between buffering force and deformation. With it, we can measure the energy consumed by the crushing of boxes. One limitation is the Kellicut formula, which applies only to certain kinds of cardboard boxes. Error may also occur in replacing the dynamical process with a static one.

# Ideal Air Box Model

We consider the energy consumed by the resistance of the air in the process of compression. We divide the process into two phases:

Phase 1: Assume that the cardboard box is closed (gas can't escape). The pressure in the box rises from standard atmospheric pressure $P_0$ to $P_t = k P_0$ ($k$ atmospheres) at time $t$, when the box ruptures. We consider that the impact is so quick that the gas in the box doesn't exchange energy with the environment, so we can treat the compression as adiabatic. Using the First Law of Thermodynamics, we get

$$
P V^{1.4} = \text{constant}. \tag{1}
$$

The proof is in Appendix B.

Phase 2: Under the effect of the impact and the internal air pressure, cracks appear in the wall of the cardboard box. 
The internal air mixes with the air outside, quickly falling to standard atmospheric pressure $P_0$. We assume that the cardboard box is a rigid cylinder and the compressive face (top surface) of the box is a piston (Figure 4).

We calculate the internal pressure when the piston shifts downward a distance $x$ from height $h$. Let $s$ be the area of the top of the cylinder. According to (1), we have

$$
P_0 \cdot [h s]^{1.4} = P \cdot [(h - x) s]^{1.4},
$$

$$
P = P_0 \left(\frac{h}{h - x}\right)^{1.4}. \tag{2}
$$

The graph of $P$ is shown in Figure 5.

Consider Phase 1. According to (2), we have

$$
P_t = k P_0 = P_0 \left(\frac{h}{h - x}\right)^{1.4}.
$$

We solve for the displacement at time $t$:

$$
x_t = h \left(1 - k^{-5/7}\right).
$$

![](images/9713de1bb552ea2b57e7f013f1ce536f4b3bc1d7397583a83b081b889b60ff6a.jpg)
Figure 4. The cylinder model.

![](images/f8c74da1ca3be021415580744af1bd223860767ac90cff5a7405de28a3e11af0.jpg)
Figure 5. Internal pressure as a function of the volume of compressed air.

The compression work done against the excess pressure in Phase 1 is

$$
W_1 = \int_0^{x_t} (P - P_0)\, s\, dx = s \int_0^{h(1 - k^{-5/7})} P_0 \left[\left(\frac{h}{h - x}\right)^{1.4} - 1\right] dx = P_0 s h \left(\frac{5}{2} k^{2/7} - \frac{7}{2} + k^{-5/7}\right). \tag{3}
$$

Consider Phase 2. The excess pressure is zero, so the compression work of Phase 2 is

$$
W_2 = \int (P_0 - P_0)\, dV = 0,
$$

and the total work done is given by (3).

In this model, we first obtain the curve of internal pressure versus deformation; from it, we calculate the energy consumed by the resistance of the gas. However, considering the box as a rigid cylinder may be inaccurate. Another weakness is the assumed airtightness of the box; actually, the internal gas leaks during the whole process of compression. 
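The closed form (3) can be checked against a direct numerical integration of the excess pressure; the sketch below uses sample values for $h$, $s$, and the rupture pressure $k$ (these are illustrative, not values fixed by the model).

```python
P0 = 101325.0   # standard atmospheric pressure, Pa
h = 0.5         # box height, m
s = 0.5         # top-surface area, m^2
k = 2.0         # rupture pressure, in atmospheres

# Closed form (3): W1 = P0 s h (5/2 k^(2/7) - 7/2 + k^(-5/7))
w_closed = P0 * s * h * (2.5 * k ** (2 / 7) - 3.5 + k ** (-5 / 7))

# Midpoint-rule integration of the excess pressure P - P0 = P0[(h/(h-x))^1.4 - 1]
# over the compression distance x_t = h (1 - k^(-5/7)).
x_t = h * (1 - k ** (-5 / 7))
n = 100_000
dx = x_t / n
w_num = sum(
    P0 * ((h / (h - (i + 0.5) * dx)) ** 1.4 - 1) * s * dx for i in range(n)
)
```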
# Realistic Air Box Model

The height of the box is less than $1~\mathrm{m}$, so the speed of the motorcycle and stunt person changes little while passing through one box height's distance; hence, we treat it as constant during the compression of a single box. The effect of the air in the crushing process is just like that of the air in a cylinder with a hole through which air leaks. In Figure 6,

$s$ is the area of the top of the box (top of the cylinder),
$h$ is the height of the box (cylinder),
$\sigma$ is the air-leaking rate,
$z$ is the crushing distance from the top of the box (cylinder), and
$v$ is the speed of the motorcycle and stunt person.

![](images/5deb66d8d5f4880ef60f8b0906ce0ea810b94ebe0e94034c5eb819679b17b4f4.jpg)
Figure 6. The unsealed cylinder model.

In the crushing process, the cardboard box's deformation becomes larger and larger. As a result, the air leaks more and more quickly. Generally, when a cardboard box is pressed to about half of its original height, the effect of air-leaking is so prominent that we cannot ignore it.

We assume that the air-leaking rate changes with the crushing distance $z$ via

$$
\sigma(z) = 2 e^{4z/h} s,
$$

for $z \in (0, h)$. Let $x = z/h$, so that $x \in (0, 1)$.

The goal is to calculate the total work that the air does on the motorcycle and stunt person. To do so, we must calculate the pressure of the air in a box while the crushing is occurring. We divide the crushing process into $N$ time periods; in each time period, the pressure of the air can be calculated according to the universal gas law $PV = nRT$. We assume that temperature is constant, so $PV$ is constant.

In the first time period, we have

$$
\begin{array}{l} P_0 V_0 = P_0 \sigma_0 \, dt + (V_0 - s v \, dt) P_1, \\ P_1 = \dfrac{P_0 (V_0 - \sigma_0 \, dt)}{V_0 - s v \, dt}. 
\\ \end{array} +$$ + +In general, we get + +$$ +P _ {i} = \frac {P _ {i - 1} (V _ {i - 1} - \sigma_ {i - 1} d t)}{V _ {i - 1} - s v d t}, +$$ + +where + +$$ +d t = \frac {h}{v N} , +$$ + +$\sigma_{i}$ is the air-leaking rate in time period $i$ , and + +$V_{i}$ is the volume of the box in time period $i$ . + +So in every time period, the pressure of air in the cylinder can be calculated recursively: + +$$ +P _ {i} = \frac {(V _ {i - 1} - \sigma_ {i - 1} d t)}{V _ {i - 1} - s v d t} \cdot \frac {(V _ {i - 2} - \sigma_ {i - 2} d t)}{V _ {i - 2} - s v d t} \dots \frac {(V _ {0} - \sigma_ {0} d t)}{V _ {0} - s v d t}. +$$ + +Let $h = 0.5 \, \text{m}$ and $N = 1000$ . The air pressure in the box increases from $P_0$ (standard atmospheric pressure) to a maximum of $1.66P_0$ at about one-third of the box's height and drops steeply to $P_0$ at about half the box's height (Figure 7). + +![](images/10da76d1f6ef1c20062f948b0ea0149ae7b3e02e2b7588428f5315513606e4a9.jpg) +Figure 7. Air pressure in the box as a function of distance crushed. + +The energy that the air in box does to the motorcycle and stunt person is + +$$ +W = - \int f d z = - \int_ {0} ^ {z _ {m}} P (z) s d z, +$$ + +where $z_{m}$ is the distance at which the air pressure equals $P_{0}$ and $s$ is the area of the top of the box. Let $s = 0.5\mathrm{m}^2$ ; then the energy that the air in box does to motorcycle and stunt person is $W = -1322\mathrm{J}$ . + +The consideration of the box's air-leaking effect makes this model more realistic, particularly since an exponential function is used to describe the air-leaking effect. With numerical methods, this model is to solve. However, + +there are some parameters to identify, and the assumption of constant speed for motorcycle plus stunt person is not correct. + +# Use the Model + +We need to keep the height of the box-pile lower than the elephant so that the action can be filmed without the box-pile being seen. 
The height of the pile is the number of layers $L$ times the height $h$ of a single box:

$$
H _ {\mathrm {pile}} = L h.
$$

The total number of boxes used is

$$
\mathrm {Num} = \frac {L S}{r ^ {2}},
$$

where $r$ is the width of a box (Figure 8) and $S$ is the upper surface area of the box-pile.

![](images/80991b98facc5499d1273c8744af48ed3cfa5f59de55f7b9cdaebe748d7f0678.jpg)
Figure 8. Dimensions of a box.

The cost is

$$
\operatorname {Cost} = \operatorname {Num} D _ {s},
$$

where $D_{s}$ is the unit price, which depends on the materials, transportation cost, and other factors. We set $D_{s} = k_{1}A_0 + k_{2}$ , where

$k_{1}$ is the fabrication cost per square meter,

$k_{2}$ includes the average cost for transportation and some other factors, and

$A_0$ is the total surface area of a single box: $A_0 = 4rh + 2r^2$ .

Then the cost is

$$
\mathrm {Cost} = \mathrm {Num}\, D _ {s} = k _ {2} \mathrm {Num} + k _ {1} A _ {0} \mathrm {Num} = k _ {2} \mathrm {Num} + k _ {1} A,
$$

where $A$ is the total surface area of all of the Num boxes.

A metric $T$ that takes into account both minimizing cost and the total box-pile height is

$$
T = \lambda_ {h} H _ {\mathrm {pile}} + \lambda_ {c} \mathrm {Cost},
$$

where the $\lambda$ s are weight factors. So we must solve the following optimization problem:

$$
\text {minimize} \quad T (r, h, L) = \lambda_ {h} H _ {\mathrm {pile}} + \lambda_ {c} \text {Cost}
$$

$$
\text {subject to} \quad \sum_ {i = 1} ^ {L} W _ {i} - L m g h \geq \frac {1}{2} m v _ {0} ^ {2} - \frac {1}{2} m v _ {\mathrm {safe}} ^ {2}, \tag {4}
$$

$$
v _ {0} ^ {2} = 2 g \left(H _ {\max } - L h\right), \tag {5}
$$

where $W_{i}$ is the energy that the motorcycle and stunt person lose passing through layer $i$ .
The inequality (4) requires the pile to be strong and tall enough to slow the falling stunt person to the safe speed, while equation (5) gives the initial impact speed in terms of the maximum height of the jump and the height of the pile.

# Results and Analysis

Although the Ideal Mechanical model is practical and easy to calculate, it requires that many parameters be determined by experiment. The Realistic Mechanical model gives mechanical support to the Air Box models. The Realistic Air Box model develops from the Ideal Air Box model by taking air-leaking into account; its results should be more credible, so we give results for the Realistic Air Box model rather than for the others.

# Results of the Unsealed Air Box Model

# Case A

To give some typical results and analyze stability, we assume that the maximum height of the box-pile is $H_{\mathrm{pile}} = 2.5 \mathrm{~m}$ and the safe speed is $v_{\mathrm{safe}} = 5.42 \mathrm{~m/s}$ . Since the height is constrained to be lower than the given maximum, we ignore its effect in $T$ . Let $k = k_2 / k_1$ ; then the utility function can be written as

$$
U = \frac {\mathrm {Cost}}{k _ {1}} = k\, \mathrm {Num} + A.
$$

We use enumeration to find the solution that minimizes $U$ (that is, minimizes total cost). The step size for $r$ and $h$ in our enumeration is $0.1\mathrm{m}$ . Different weights of stunt person plus motorcycle give different solutions; details are in Table 2. The optimal $r, h,$ and $L$ change little with $k$ .

# Case B

To find out how contact area influences the solution, we fix $M = 187$ kg and $v_{\text{safe}} = 5.42$ m/s and vary the area $s$ of the top of a box. The solution does not change with $k$ , so we let $k$ equal 0 (Table 3).

Table 2.
Results for Case A.
| M (kg) | k | r (m) | h (m) | L | A (m²) | U |
|--------|-----|-------|-------|---|--------|------|
| 150 | 0 | 0.4 | 0.8 | 1 | 240 | 240 |
| | 5 | " | " | " | " | 990 |
| | | " | " | " | " | 3437 |
| 187 | 0 | 0.7 | 0.5 | 4 | 466 | 466 |
| | 5 | " | " | " | " | 1447 |
| | | " | " | " | " | |
| 200 | 0 | 0.5 | 0.6 | 3 | 490 | 490 |
| | 5 | 0.6 | 0.5 | 4 | 515 | 1855 |
| | | " | " | " | " | |
| 230 | 0 | 0.4 | 0.8 | 3 | 720 | 720 |
| | 5 | " | " | " | " | 29700 |
| | | " | " | " | " | 720 |
| 250 | any | No solution | | | | |
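The enumeration over $(r, h, L)$ described for Case A can be sketched as follows. Since the air-box energy calculation is not reproduced here, the per-layer energy $W_i$ is replaced by a stand-in proportional to the surface area of the crushed boxes; all numerical parameters ($H_{\max}$, $S$, the crush constant `C`, and the contact area) are illustrative assumptions, not the paper's calibrated values:

```python
import math

g = 9.8
# Illustrative assumptions (not the paper's calibrated values):
M = 187.0          # stunt person + motorcycle, kg
v_safe = 5.42      # safe landing speed, m/s
H_max = 4.0        # maximum height of the jump, m
H_pile_max = 2.5   # maximum allowed pile height, m
S = 8.0            # upper surface area of the pile, m^2
s_contact = 0.5    # contact area with the boxes, m^2
k = 0.0            # cost ratio k2 / k1
C = 900.0          # assumed crush energy per m^2 of box surface, J/m^2

def layer_energy(r, h):
    """Stand-in for W_i: crush energy of the boxes under the contact area."""
    n_contact = max(1, math.ceil(s_contact / r**2))
    return C * (4 * r * h + 2 * r**2) * n_contact

best = None
for r in [i / 10 for i in range(1, 11)]:      # 0.1-m steps, as in the paper
    for h in [i / 10 for i in range(1, 11)]:
        for L in range(1, 26):
            if L * h > H_pile_max:
                continue
            v0_sq = 2 * g * (H_max - L * h)   # constraint (5)
            absorbed = L * layer_energy(r, h) - L * M * g * h
            if absorbed < 0.5 * M * v0_sq - 0.5 * M * v_safe**2:
                continue                       # constraint (4) violated
            num = L * S / r**2                 # number of boxes
            A = num * (4 * r * h + 2 * r**2)   # total surface area
            U = k * num + A
            if best is None or U < best[0]:
                best = (U, r, h, L)

print(best)  # (U, r, h, L) of the cheapest feasible pile
```

With a calibrated energy model in place of `layer_energy`, the same loop reproduces the searches behind Tables 2-4.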
+ +Table 3. +Results for Case B, with $M = 187\mathrm{kg}$ and $v_{\mathrm{safe}} = 5.42\mathrm{m / s}$ . + +
| s (m²) | r (m) | h (m) | L | A (m²) | U |
|--------|-------|-------|---|--------|-----|
| 1.0 | 0.8 | 0.6 | 1 | 122 | 122 |
| 0.8 | " | 0.8 | " | 146 | 146 |
| 0.6 | 0.7 | 0.7 | 2 | 288 | 288 |
| 0.5 | " | 0.5 | 4 | 466 | 466 |
| 0.4 | 0.4 | 0.7 | " | 864 | 864 |
| 0.2 | No solution | | | | |
# Case C

To find out how the safe speed influences the solution, we fix $s = 0.5 \, \mathrm{m}^2$ and $M = 187 \, \mathrm{kg}$ . Details are in Table 4.

# Analysis of the Results

The parameter $k$ influences the solution only slightly; that is, our model is stable: We can find a near-optimal solution without an accurate value of $k$ , which in practice is hard to obtain.

Reducing the weight markedly decreases the value of $U$ (Table 2). So the stunt coordinator should try to reduce the total weight.

Increasing the contact area to some extent reduces cost (Table 3). So the stunt

Table 4. Results for Case C, with $M = 187\mathrm{kg}$ and $s = 0.5\mathrm{m}^2$ .
| $v_{\mathrm{safe}}$ (m/s) | r (m) | h (m) | L | A (m²) | U |
|---------------------------|-------|-------|---|--------|-----|
| 4.64 | No solution | | | | |
| 4.85 | 0.4 | 0.6 | 3 | 576 | 576 |
| 5.05 | " | " | " | " | " |
| 5.24 | " | 0.7 | 2 | 432 | 432 |
| 5.42 | 0.7 | 0.5 | 4 | 466 | 466 |
person should try to land on the boxes with both wheels, to enlarge the contact area. The size of the wheels also influences the contact area.

The safe speed has a significant effect on the cost (Table 4). To decrease cost, increase the safe speed by selecting a soft landing surface (e.g., grass or sand) instead of cement.

# Further Discussion

# How to Pile the Cardboard Boxes

The longer the flight through the air, the longer the box-pile needs to be. So the best shape for the footprint of the stack may be a sector, with the far side a little wider than the near side (Figure 9). Another idea is to pile the boxes like a hill, with the central part higher than the rest, since the contact area between the stunt person plus motorcycle and the boxes is much smaller than the area of the pile. The cushioning comes from the contact region, so what matters most is the height of the boxes where contact occurs. If we can determine accurately where the landing will be, we can save lots of boxes (Figure 10).

![](images/c49bad3768e88b5d0e76fcf7e1feabe343951d9b727cbfe68a4d39fc77cf9055.jpg)
Figure 9. Vertical view of the box-pile.

![](images/d98aa00baf23168f02055d0654f7d5b88b678c90bd01c143a1cdfa0838fe2775.jpg)
Figure 10. Lateral view of the box-pile.

Tying together the boxes in the same layer may be helpful, because doing so stiffens each layer horizontally and makes it act as a unit. Another advantage is that it prevents the stunt person from falling into a gap between boxes.

# Change the Shape of the Boxes

To simplify the problem, we assumed that the cross section is square. Yan and Yuan [2000] give the relationship between the compression strength of the box and the ratio of length to width (Figure 11). We could change that ratio for best effect.

![](images/177e1bb3353319f72500416e0b6b2310adbbebd0b81581ef2c5d98a755df6f88.jpg)
Figure 11. Compression strength as a function of ratio of length to width of the box.
# Appendix A: Same Work for Each Box

We analyze the passage through boxes in two adjacent layers (Figure A1) and show that the two boxes absorb the same amount of work: $W_{1} = W_{2}$ , where

$$
W _ {1} = \frac {1}{2} m V _ {0} ^ {2} - \frac {1}{2} m V _ {1} ^ {2} + m g h,
$$

$$
W _ {2} = \frac {1}{2} m V _ {1} ^ {2} - \frac {1}{2} m V _ {2} ^ {2} + m g h,
$$

and $V_0$ , $V_1$ , and $V_2$ are the speeds entering the first box, entering the second box, and leaving the second box.

![](images/a5f43e468979ed8657f58de881405bcc5940f2465197cd859a95873b6df44c62.jpg)
Figure A1. Diagram of two boxes.

Using the average velocity, and since both boxes have the same height $h$ , we have

$$
h = \frac {1}{2} (V _ {0} + V _ {1}) t _ {1} = \frac {1}{2} (V _ {1} + V _ {2}) t _ {2},
$$

so

$$
\frac {t _ {1}}{t _ {2}} = \frac {V _ {1} + V _ {2}}{V _ {0} + V _ {1}}.
$$

Using Newton's Second Law, with the same resistive force in each box, we have $F / m = a_{1} = a_{2} = a$ and

$$
V _ {1} = V _ {0} + a t _ {1}, \qquad V _ {2} = V _ {1} + a t _ {2},
$$

so that

$$
\frac {t _ {1}}{t _ {2}} = \frac {V _ {1} - V _ {0}}{V _ {2} - V _ {1}}.
$$

Setting the two expressions for $t_1 / t_2$ equal and cross-multiplying gives

$$
V _ {1} ^ {2} - V _ {0} ^ {2} = V _ {2} ^ {2} - V _ {1} ^ {2},
$$

which is equivalent to $W_1 = W_2$ .

# Appendix B: $PV^{1.4}$ Is Constant

We assume that the impact is so quick that the gas in the cardboard box does not exchange heat with the environment, so the compression is adiabatic. According to the First Law of Thermodynamics, we get

$$
\Delta E + W _ {Q} = 0,
$$

where $\Delta E$ is the increment of the gas's internal energy and $W_{Q}$ is the work done by the adiabatic gas. We also then have

$$
d E + P\, d V = 0,
$$

where $P$ is the pressure of the gas and $V$ is the volume. We combine these with the internal-energy formula for an ideal gas:

$$
E = \frac {M}{\mu} \cdot \frac {i}{2} R T \qquad \mathrm {and} \qquad C _ {\nu} = \frac {i}{2} R,
$$

where

$M$ is the mass of the gas,

$i$ is the number of degrees of freedom of a gas molecule,

$\mu$ is the molar mass of the gas,
$R$ is the molar gas constant,

$T$ is the temperature of the gas, and

$C_{\nu}$ is the constant-volume molar heat capacity.

Then we have

$$
\frac {M}{\mu} C _ {\nu}\, d T + P\, d V = 0.
$$

We differentiate the ideal-gas state equation

$$
P V = \frac {M R}{\mu} T,
$$

getting

$$
P\, d V + V\, d P = \frac {M R\, d T}{\mu}.
$$

Eliminating $dT$ from the last two equations gives

$$
\frac {d P}{P} + \frac {C _ {\nu} + R}{C _ {\nu}} \cdot \frac {d V}{V} = 0.
$$

Integrating, we get $PV^{r} = \text{constant}$ , where

$$
r = \frac {C _ {\nu} + R}{C _ {\nu}} = \frac {2 + i}{i}.
$$

Air consists mainly of nitrogen and oxygen, whose molecules are diatomic, so $i = 5$ , $r = 1.4$ , and $PV^{1.4} =$ constant.

# References

Johansen, P.M., and G. Vik. 1982. Prediction of air leakages from air cushion chamber. *Pressure Tunnels: Tunneling Machines/Underground Storage* 2 (1): 935-938.
Martin, David. 1982. Rockbursts imperil construction of Norway's largest underground power station. *Tunnels and Tunnelling* 14 (10): 23-25.
Mindlin, R.D. 1945. Dynamics of package cushioning. *Bell System Technical Journal* 24: 353-461.
Mokhtari, Mohand, and Michel Marie. 2000. *Engineering Applications of MATLAB 5.3 and SIMULINK 3*. New York: Springer-Verlag.
PBS Online. n.d. Wild Indonesia: Classroom resources. Why do elephants have big feet? http://www.pbs.org/wildindonesia/resources/bigfeet.html.
Yamaha Motor Corp. 2003. http://www.yamaha-motor.com/products/unitinfo.asp?lid=2&lc=mcy&cid=28&mid=35.
Yan, Lamei, and Youwei Yuan. 2000. Calculation of the compression strength of corrugated paper boxes and computer-aided design. *Journal of Zhuzhou Institute of Technology* 14 (1).
Zhao, Maiqun, Digong Wang, and Xingxiang Xu. 2000. Modification to Kellicut formula. *Journal of Xi'an University of Technology* 16 (1).
+ +# Cardboard Comfortable When It Comes to Crashing + +Jeffrey Giansiracusa + +Ernie Esser + +Simon Pai + +University of Washington + +Seattle, WA + +Advisor: James Allen Morrow + +# Abstract + +A scene in an upcoming action movie requires a stunt person on a motorcycle to jump over an elephant; cardboard boxes will be used to cushion the landing. + +We formulate a model for the energy required to crush a box based on size, shape, and material. We also summarize the most readily available boxes on the market. We choose a maximum safe deceleration rate of $5g$ , based on comparison with airbag rigs used professionally for high-fall stunts. + +To ensure that the stunt person lands on the box rig, we analyze the uncertainty in trajectory and extract the landing point uncertainty. + +We construct a numerical simulation of the impact and motion through the boxes based on our earlier energy calculations. After analyzing the sensitivity and stability of this simulation, we use it to examine the effectiveness of various configurations for the box stack (including different box sizes, types of boxes, and stacking patterns). We find that $200\mathrm{kg}$ is the most desirable combined mass of the motorcycle and stunt person, and a launch ramp angle of $20^{\circ}$ is optimal when considering safety, camera angle, and clearance over the elephant. + +A stack of (30 in) $^3$ boxes with vertical mattress walls spaced periodically is optimal in terms of construction time, cost, and cushioning capacity. We recommend that this stack be 4 m high, 4 m wide, and 24 m long. It will consist of approximately 1,100 boxes and cost $4,300 in materials. The stunt person's wages are uncertain but fortunately the elephant works for peanuts. + +# Introduction + +Airbag rigs are commonly used for high-fall stunts [M&M Stunts 2003], but they are designed only to catch humans. The alternative is a cardboard-box rig—a stack of boxes that crush and absorb impact. 
Our objectives are:

- to catch the stunt person and motorcycle safely, and
- to minimize the cost and size of the box rig.

Our approach is:

- We investigate the relationship between the size/shape/material of a box and the work (crush energy) required to crush it.
- We review the available cardboard boxes.
- By comparison with an airbag rig, we estimate the maximum acceptable deceleration that the stunt person can experience during landing.
- We analyze the trajectory of the motorcycle and the uncertainty in its landing location. This determines the proper placement of the box rig and how large an area it must cover.
- Using the crush energy formula, we estimate the number of boxes needed.
- We formulate a numerical simulation of the motorcycle as it enters the box rig. Using this model, we analyze the effectiveness of various types of boxes and stacking arrangements for low, medium, and high jumps.
- As an alternative to catching the stunt person while sitting on the motorcycle, we analyze the possibility of having the stunt person bail out in mid-air and land separately from the motorcycle.
- We make recommendations regarding placement, size, construction, and stacking type of the box rig.

# Energy Absorbed by Crushing Cardboard

We estimate the energy required to crush a box, based on physical considerations and experimentation. We assume that the primary source of energy absorption is the breakdown of the box walls due to edge compressive forces.

Commercial cardboard is rated by the edge crush test (ECT), which measures the edge compressive force parallel to the flute (the wavy layer between the two wall layers) that the cardboard can withstand before breaking. This can be interpreted as the force against the edge per unit length of crease created [Pflug et al. 1999; McCoy Corporation n.d.]. Once a crease has formed, very little work is required to bend the cardboard further.
To understand how the formation of wall creases relates to the process of crushing a box, we conducted several experiments (Figure 1). We found:

- The first wall creases typically form in the first $15\%$ of the stroke distance.
- These creases extend across two faces of the box; a schematic of one such crease is illustrated in Figure 2.

![](images/fdadc495e845ca8655570b83e976f8c235a238974e74435f94b8260b8a3c2c28.jpg)
Figure 1. Experimental apparatus for crushing boxes: We dropped a crush-test dummy (i.e., team member) onto several boxes and observed how the structure (the box, not the dummy) broke down.

![](images/e907d1cea73929b3ebc60bd81c62c1c9b1eeb12b6bdb36b2964e738f78bda122.jpg)
Figure 1a. Crush-test dummy in action. (Left: Jeff Giansiracusa; right: Simon Pai.)

![](images/8644bd3dc1af59764c10cadb9a2daa0928f761df088bd6ce95fd77b1215740bf.jpg)
Figure 1b. Crushed box with creases. (Photos courtesy of Richard Neal.)

![](images/77ea311a7cc254b1427c2723dd0f18a4973f3d4342021b2134042fee12b069df.jpg)
Figure 2. The first crease forms in a curve across the side faces as the box is compressed.

- Once these have formed, the box deforms further with comparatively little resistance, because additional creases are created by torque forces rather than edge compressive forces.
- The primary creases each have length approximately equal to the diagonal length of the face.

The work done in crushing the box is given by the average force applied times the distance through which it is applied. This and our qualitative experimental results lead us to write the following equation for the energy absorbed by a box of dimension $l_{x} \times l_{y} \times l_{z}$ crushed in the $z$ -direction:

$$
E = \mathrm {ECT} \times 2 \sqrt {l _ {x} ^ {2} + l _ {y} ^ {2}} \times l _ {z} \times 0.15. \tag {1}
$$

As a reality check, we compute the crush energy for a standard 8.5 in $\times$ 17 in $\times$ 11 in box with $\mathrm{ECT} = 20\mathrm{~lb / in}$ and a C-flute (the type commonly used to store paper). With these numerical values, (1) gives an energy of $187\mathrm{~J}$ . This corresponds roughly to a 140-lb person sitting on the box and nearly flattening it. Crush-test dummy results confirm this estimate.

Energy can also be absorbed in the process of flattening the flute within the cardboard walls. However, the pressure required to do this is approximately $150\mathrm{kPa}$ [Pflug et al. 1999] and the surface area involved is more than $1\mathrm{m}^2$ , so a quick calculation shows that the stunt person would decelerate too quickly if the kinetic energy were transferred into flattening boxes. We therefore ignore this additional flattening effect.

So, any successful box rig configuration must dissipate all of the kinetic energy of the stunt person and motorcycle through box-crushing alone.

# Common Box Types

Minimizing cost is important. The cardboard box rig will consist of perhaps hundreds of boxes, and wholesale box prices can range up to \$10 or \$20 per unit; so we restrict our attention to commonly available box types (Table 1).

Table 1. Commonly available box types [Paper Mart n.d.; VeriPack.com n.d.]
| Type | Size (in) | ECT rating (lb/in) | Price |
|------|--------------|--------------------|---------|
| A | 10 × 10 × 10 | 32 | \$0.40 |
| B | 20 × 20 × 20 | 32 | \$1.50 |
| C | 20 × 20 × 20 | 48 | \$3.50 |
| D | 30 × 30 × 30 | 32 | \$5.00 |
| E | 44 × 12 × 12 | 32 | \$1.75 |
| F | 80 × 60 × 7 | 32 | \$10.00 |
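Formula (1) can be transcribed directly; the only subtlety is converting the ECT rating from lb/in to N/m and the dimensions from inches to meters. The sketch below shows the structure of the computation; depending on the unit conventions assumed, the absolute values need not match the figures quoted in the text:

```python
from math import sqrt

LB_PER_IN_TO_N_PER_M = 4.448 / 0.0254   # ECT conversion, ~175 N/m per lb/in
IN_TO_M = 0.0254

def crush_energy(lx_in, ly_in, lz_in, ect_lb_per_in):
    """Energy (J) to crush a box in the z-direction, per formula (1):
    E = ECT * 2*sqrt(lx^2 + ly^2) * lz * 0.15."""
    ect = ect_lb_per_in * LB_PER_IN_TO_N_PER_M
    lx, ly, lz = (d * IN_TO_M for d in (lx_in, ly_in, lz_in))
    return ect * 2 * sqrt(lx**2 + ly**2) * lz * 0.15

# Box types B and D from Table 1, crushed along their height:
print(round(crush_energy(20, 20, 20, 32)), round(crush_energy(30, 30, 30, 32)))
```

Note how the formula scales: doubling the crush direction $l_z$ doubles the energy, while the cross-section enters only through the face diagonal.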
+ +# Some Quick Estimates + +# Maximum Safe Acceleration + +To determine acceptable forces and accelerations for the stunt person, we compare the box rig with other cushioning devices. In the stunt rigging business, it is common practice to use an air bag for high falls of up to $30\mathrm{m}$ ; such airbags are approximately $4\mathrm{m}$ deep. + +Assume that a stunt person falls from $30\mathrm{m}$ above the airbag. Gravity accelerates the performer from rest to speed $v$ when the performer strikes the airbag and is decelerated completely, so we have + +$$ +\sqrt {2 g d _ {\mathrm {f a l l}}} = \sqrt {2 a _ {\mathrm {b a g}} h _ {\mathrm {b a g}}}, +$$ + +where $d_{\mathrm{fall}}$ is the fall distance, $a_{\mathrm{bag}}$ is the deceleration rate the stunt person experiences in the airbag, $h_{\mathrm{bag}}$ is the height of the airbag, and $g$ is the acceleration due to gravity. Thus, + +$$ +a _ {\mathrm {b a g}} = \frac {d _ {\mathrm {f a l l}}}{h _ {\mathrm {b a g}}} g = \frac {3 0 \mathrm {m}}{4 \mathrm {m}} g = 7. 5 g. +$$ + +We therefore conclude: + +- When using an airbag, the stunt person experiences an average acceleration of at most $7.5g$ . This provides an upper bound on the maximum acceleration that a person can safely withstand. +- With the airbag, the stunt person is able to land in a position that distributes forces evenly across the body. In our stunt, however, the stunt person lands in the box rig while still on the motorcycle, with greater chance for injury under high deceleration. +- We choose $5g$ as our maximum safe deceleration. + +# Displacement and Energy Estimates + +If the deceleration is constant through the boxes, then we can estimate the distance required to bring them to rest. Since any deviation from constant acceleration increases either the stopping distance or the peak deceleration, this will give us a lower bound on the stopping distance and hence on the required dimensions of the box rig. 
Suppose that the stunt person enters the rig at time $t = 0$ with speed $v_{0}$ and experiences a constant deceleration $a$ until brought to rest at time $t = t_{f}$ . The person's speed is $v(t) = v_{0} - at$ . Since the stunt person is at rest at time $t_{f}$ , we have

$$
t _ {f} = v _ {0} / a.
$$

Let $x(t)$ be the displacement from the point of entry as a function of time. Since $x(0) = 0$ , we have

$$
x (t) = v _ {0} t - \frac {1}{2} a t ^ {2},
$$

and so the total distance traveled through the boxes is

$$
\Delta x = x (t _ {f}) = \frac {v _ {0} ^ {2}}{a} - \frac {1}{2} a \left(\frac {v _ {0}}{a}\right) ^ {2} = \frac {v _ {0} ^ {2}}{2 a}.
$$

Therefore, we arrive at:

- Given an impact velocity $v_{0} \approx 20 \, \mathrm{m/s}$ and deceleration bounded by $5g$ , the stunt person requires at least $4 \, \mathrm{m}$ to come to rest.

The energy that must be dissipated in the boxes is roughly equal to the kinetic energy with which the motorcycle and stunt person enter. (Since the box rig should be only 3 to 4 m high, the potential energy is a much smaller fraction of the total energy.) Thus, for $v_{0} = 20~\mathrm{m / s}$ and a mass of $200\mathrm{kg}$ , the change in energy is $40,000\mathrm{J}$ . From (1), we calculate that the crush energy of a standard $(30\mathrm{-in})^{3}$ box is $633\mathrm{J}$ , so we need $40,000 / 633\approx 63$ boxes.

# Trajectory Analysis and Cushion Location

Cardboard boxes won't dissipate any energy unless the stunt person lands on them. It is therefore important to consider the trajectory, so we know where to place the box rig and what the uncertainty is in the landing location.
We calculate trajectories by solving the following differential equation, where $v$ is the speed, $k$ is the drag coefficient, and $\vec{x}$ is the position:

$$
\ddot {\vec {x}} = - g \hat {z} - \frac {k}{m} | v | ^ {2} \hat {v}.
$$

We use MATLAB's ode45 function to solve an equivalent system of first-order equations, with an air drag coefficient of $k = 1.0$ [Filippone 2003]. We see from Figure 3 that it would be unwise to ignore air resistance, since it alters the landing position by up to several meters.

It is unreasonable to expect the stunt person to leave the ramp with exactly the same initial velocity and angle on every jump. We therefore need to allow for some uncertainty in the resulting trajectory and ensure that the cardboard cushion is large enough to cover a wide range of possible landing locations. The ramp angle $\phi$ is constant, but the motorcycle might move slightly to one side as it leaves the ramp. Let $\theta$ be the azimuthal angle between the ramp axis and the motorcycle's velocity vector. Ideally $\theta$ should be zero, but small variations may occur. The other uncertain initial condition is the initial velocity $v_{0}$ .

In modeling possible trajectories, we assume the following uncertainties:

- Initial velocity: $v_{0} = v_{\text{intended}} \pm 1 \, \text{m/s}$
- Azimuthal angle: $\theta = 0^{\circ} \pm 2^{\circ}$

![](images/05a59298c2c59df0f945a312932838e4cfb383267db0cc197aed46bcc9878554.jpg)
Figure 3. Air resistance significantly changes the trajectory.

We use this to identify the range of possible landing locations by plotting the trajectories that result from the worst possible launches (Figure 4).

If the intended initial velocity is $22\mathrm{m / s}$ , the ramp angle is $20^{\circ}$ , and the mass of the rider plus motorcycle is $200\mathrm{kg}$ , then the distance variation is $\pm 2.5\mathrm{m}$ and the lateral variation is $\pm 1.5\mathrm{m}$ .
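The trajectory integration can be reproduced with any standard solver (the paper uses MATLAB's ode45); a minimal fixed-step sketch of the same equation, using the stated values $k = 1.0$ and $m = 200$ kg, is:

```python
import math

g, k, m = 9.8, 1.0, 200.0   # gravity, drag coefficient, total mass (paper's values)

def landing_range(v0, phi_deg, dt=1e-3):
    """Horizontal distance to return to launch height, with quadratic drag."""
    phi = math.radians(phi_deg)
    x, z = 0.0, 0.0
    vx, vz = v0 * math.cos(phi), v0 * math.sin(phi)
    while True:
        speed = math.hypot(vx, vz)
        ax = -(k / m) * speed * vx          # -(k/m)|v|^2 v_hat, componentwise
        az = -g - (k / m) * speed * vz
        x, z = x + vx * dt, z + vz * dt
        vx, vz = vx + ax * dt, vz + az * dt
        if z <= 0.0 and vz < 0.0:
            return x

r_drag = landing_range(22, 20)                      # medium jump with drag
r_vacuum = 22**2 * math.sin(math.radians(40)) / g   # closed form, no drag
print(r_drag, r_vacuum)  # drag shortens the jump by a few meters
```

Perturbing `v0` and the launch angle in this function is the quickest way to reproduce the sensitivity study behind the ±2.5 m / ±1.5 m landing uncertainty.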
# Impact simulation

To evaluate the effectiveness of various box rig configurations, we construct a numerical simulation of the motion of the stunt person and motorcycle through the box rig.

# Assumptions

The full physics of the box rig is far too complex to model accurately. We make the following assumptions to approximate and simplify the problem.

- The problem is two-dimensional. We restrict our attention to the plane of motion of the stunt person.

![](images/5e29ad5926ab64de24ce9b721485679cefebdc70e3729ad9179a29be41ca4b87.jpg)
Figure 4. Trajectory uncertainty due to launch uncertainties (box rig is not to scale).

- As the motorcycle plows through the boxes, a thick layer of crushed boxes accumulates against its front and lower surfaces. These layers increase the effective size of the motorcycle and cause it to strike a larger number of boxes as it moves. This assumption captures the effects of internal friction and viscosity within the boxes.
- In striking a large number of boxes, the velocity magnitude is reduced but the direction is unchanged.
- Boxes are crushed rather than pushed out of the way. In practice, this can be ensured by placing a strong netting around the three sides of the box rig that face away from the incoming stunt person.
- Boxes are crushed to a uniform level. Some boxes may be crushed only slightly while others are completely flattened, but these effects disappear when we average over a large number of collisions.

# Formulation

We formulate the simulation as follows:

- The motorcycle with stunt person is represented by a bounding rectangle that is initially $1.2\mathrm{m}$ long, $1.2\mathrm{m}$ high, and $0.7\mathrm{m}$ wide.
- The box rig is represented by a two-dimensional stack of boxes.
- We numerically integrate the motion in discrete time steps of $0.05 \mathrm{~s}$ . The only object in motion throughout the simulation is the stunt person plus motorcycle; all boxes are stationary.
- When the bounding rectangle intersects a box, the box is considered crushed. We modify the stunt person's velocity according to the kinematics described later and ignore further interactions with the crushed box.
- For each box crushed, we add a layer of additional thickness to either the front or the bottom of the motorcycle bounding rectangle. We assume that boxes are crushed to $20\%$ of their length or height. We allow the front layer to extend above and below the original bounding rectangle (and likewise for the bottom layer), so that the force of the motorcycle striking a tall box is effectively distributed along the length of the box. These debris layers increase the effective size of the motorcycle and therefore cause it to strike a larger number of boxes as it moves. We use this process to account for the effects of friction.
- The vertical component of the velocity is set to zero when the bounding rectangle strikes the ground.

# Kinetics

As the stunt person with motorcycle falls into the rig, each box struck collapses and absorbs a small amount $\Delta E$ of kinetic energy, thereby slowing the descent. The crushed box is then pinned against the forward-moving face of the stunt person and motorcycle and must move with them, contributing an additional mass of $m_{\mathrm{box}}$ .

We calculate the change in velocity using conservation of energy, assuming that the velocity direction remains unchanged (a good approximation when averaged over a large number of collisions):

$$
\frac {1}{2} \left(m _ {0} + m _ {\text {box}}\right) v _ {\text {new}} ^ {2} = \max \left(\frac {1}{2} m _ {0} v _ {0} ^ {2} - \Delta E,\ 0\right).
$$

We take the maximum to avoid imparting more energy to the box than the motorcycle has.
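Applied box after box, this energy-balance update already explains the broad behavior of the rig. A sketch with assumed values for $\Delta E$ and $m_{\mathrm{box}}$ (the 633 J figure is the 30-in-cube crush energy estimated earlier; the 2-kg box mass is a guess):

```python
# Illustrative assumptions (not the full simulation state):
m0 = 200.0     # initial mass of motorcycle + stunt person, kg
m_box = 2.0    # mass of one crushed box pinned against the front, kg
dE = 633.0     # crush energy absorbed per box, J
v = 20.0       # entry speed, m/s

m, boxes = m0, 0
while v > 0.0:
    # (1/2)(m + m_box) v_new^2 = max((1/2) m v^2 - dE, 0)
    v = (max(m * v**2 - 2.0 * dE, 0.0) / (m + m_box)) ** 0.5
    m += m_box
    boxes += 1

print(boxes)  # boxes crushed before coming to rest
```

Because the crushed box carries away its share of the kinetic energy, each collision removes exactly $\Delta E$ from the kinetic energy, so the count agrees with the rough $40{,}000\ \mathrm{J} / 633\ \mathrm{J}$ estimate made earlier.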
Solving for $v_{\mathrm{new}}$ yields

$$
v _ {\mathrm {new}} = \sqrt {\max \left(\frac {m _ {0} v _ {0} ^ {2} - 2 \Delta E}{m _ {0} + m _ {\mathrm {box}}},\ 0\right)}. \tag {2}
$$

We use this equation to calculate the new velocity after each collision.

# Stability and Sensitivity Analysis

Given the crude nature of our collision detection, there is a danger of finding results that depend sensitively on the initial location of the motorcycle relative to the phase of the box-rig periodicity (typically less than $1.5\mathrm{m}$ ). To show that these phase-alignment effects are negligible, we vary the initial location of the motorcycle by $0.4\mathrm{m}$ ( $37\%$ of the rig periodicity) in either direction. Deceleration rates and stopping distance vary by less than $5\%$ . The simulation is therefore insensitive to where the motorcycle lands relative to the period of the box rig.

As a second check, we vary the time step size from 0.025 s to 0.1 s (our standard value is 0.05 s). There are no distinguishable changes in results; the simulation is highly insensitive to the size of the time step.

# Configurations Considered

We consider the following configurations for the stunt:

- Seven different stacking arrangements. Details are shown in Table 2 and Figure 5.

Table 2.
The seven box rig configurations. Refer to Table 1 for data on the lettered box types.
| Stack type | Cost/m² | Description |
|------------|---------|-------------|
| 1 | \$40 | Standard rig, box type B (20-in cube). |
| 2 | \$94 | Standard rig, heavy-duty box type C (20-in cube, ECT 48). |
| 3 | \$43 | Standard rig, box type D (30-in cube). |
| 4 | \$47 | Like type 3, but type-A boxes (10-in cube) inside the D boxes. |
| 5 | \$46 | Modification of type 3: additional vertical walls of type F mattress boxes. |
| 6 | \$41 | Like type 5, but horizontal mattress box walls. |
| 7 | \$46 | Mattress boxes (type F) stacked horizontally, with periodic vertical walls. |
- Three values for the total mass of the motorcycle and stunt person: $200\mathrm{kg}, 300\mathrm{kg}$ , and $400\mathrm{kg}$ .
- Three flight trajectories for the motorcycle and stunt person: low, medium, and high. These provide three different entry angles and velocities for the simulation. Each trajectory is designed to clear an elephant that is roughly $3\mathrm{m}$ tall [Woodland Park Zoo n.d.]. Details of these trajectories are given in Table 3 and are shown to scale in Figure 6.

Table 3.
The three test trajectories.
| Jump type | Initial speed (m/s) | Ramp angle | Jump distance (m) |
|-----------|---------------------|------------|-------------------|
| Low | 29 | 10° | 30.0 |
| Medium | 22 | 20° | 28.5 |
| High | 20 | 30° | 30.4 |
![](images/1cdccff8e223bd4bad8c7311774ea63399b2333fd371768d37fba5ccb05992fe.jpg)
Figure 5. Box stacking configurations. Crush patterns are the result of simulated impacts of a $200\mathrm{-kg}$ mass in the low trajectory.

![](images/6a854e8d2c6e48f65129ac54f57b417a8e0048d779c9144ec09e23a5d7185af0.jpg)

![](images/b008ed1c7515f0b378730d1d0033a9d074379e4ac59eb9d57fb046a0557319a9.jpg)

# Data Analysis

The simulation provides the velocity as a function of time. The plots appear jagged and step-like because of the discrete way in which our simulation handles collisions. We obtain the acceleration by fitting a straight line to the velocity vs. time plot and measuring the slope (Figure 8).

In examining the plots for the runs, we look at:

1. the deceleration experienced by the stunt person, averaged over the entire time from impact to rest;
2. the maximum of the deceleration averaged over 0.1-s intervals; and
3. whether or not the boxes completely arrest vertical motion before the stunt person hits the ground.

If either (1) or (2) ever exceeds the maximum safe deceleration threshold of $5g$ , or (3) fails, we consider the configuration unsafe.

![](images/29fe60bef90ebb3614688b531992d7ab87f14e803412d817b0c2aa3b1eb5a5d6.jpg)
Figure 6. The three trajectories tested.

![](images/d312e07cc4dcd7b195f235d5d1234dd1b448fd261252223f5ea55acc5e98d24b.jpg)
Figure 8. Velocity vs. time for a $200\mathrm{-kg}$ low-trajectory impact on a box stack of 20-in cubes stacked in standard style.

# Results

# Mass

For a mass of $400\mathrm{kg}$ , the boxes give way beneath the incoming motorcycle too easily; such a mass would require a stack of boxes nearly $6\mathrm{m}$ high. A mass of $300\mathrm{kg}$ is marginal, and $200\mathrm{kg}$ is optimal.

# Stacking Types

We measure stopping distance from the point of impact to the furthest box damaged and report the stopping distance for the medium trajectory.
(The motorcycle actually comes to rest in a significantly shorter distance, but it pushes a wall of debris several meters ahead of it.) The results for the stack types in Figure 6 are:

1. Made from the cheapest and most common boxes, this stack resulted in $4.8g$ deceleration; it stopped the motorcycle in $11\mathrm{m}$.
2. Rejected: deceleration of over $6g$, though it brings the motorcycle to rest in only $7\mathrm{m}$.
3. Very soft deceleration of $3.6g$ to $4.1g$. But this stack did not completely stop the vertical motion and took $13\mathrm{m}$ to bring the motorcycle to rest.
4. Marginally safe deceleration from $4.8g$ to $5.1g$, but this stack is the best at arresting the vertical motion; stopping distance of $9\mathrm{m}$.
5. Behavior is similar to type (3), but stopping distance is reduced to $11\mathrm{m}$.
6. The extra horizontal mattress boxes make very little difference. Deceleration is $4.1g$, and vertical motion is not slowed enough to prevent hitting the ground hard.
7. Rejected because deceleration ($5.2g$ to $5.7g$) is unsafe.

The difficulty of slowing the vertical motion enough might be overcome by stacking the box rig on top of a landing ramp.

Conclusion: Type (1) stacking is optimal without a ramp. However, with a landing ramp under the boxes, type (3) or type (5) stacking gives a much softer deceleration.

We tried additional variations on the type (5) stack. We conclude that 30-in boxes (type $D$) with mattress-box walls spaced every 4 boxes are optimal.

# An Alternative: Bailing Out

It may be desirable for the stunt person and motorcycle to separate before impacting the box rig, since doing so would reduce the chance of injury resulting from the stunt person being pinned against the motorcycle.

We assume that they separate after clearing the elephant and allowing for a clear camera shot, corresponding to a distance of about $25\mathrm{m}$.
We run the same simulation as before but alter the vertical velocity at the point of separation and then follow the two trajectories separately. An estimate of the change of momentum is necessary to determine the corresponding changes in velocity. If the stunt person jumps vertically away from the motorcycle, it makes sense to consider the analogy of a person jumping on the ground. A decent jump raises the center of mass by about $0.5\mathrm{m}$. Since the initial velocity is $v_{0} = \sqrt{2gd}$, where $d$ is the height, we find that $v_{0}$ is roughly $3\mathrm{m/s}$. Accordingly, we increase the stunt person's vertical velocity by $3\mathrm{m/s}$. The corresponding change in velocity for the motorcycle is then given by conservation of momentum. The resulting trajectories are plotted in Figure 10.

![](images/b71ccda4ddf197b20459162811ea963c2ffe840e15a1d10cb5bce49e20f8bead.jpg)
Figure 9. Stunt person separating from motorcycle in three possible trajectories.

When the trajectory is medium or high, stunt person and motorcycle are separated by only about $6\mathrm{m}$ at the landing point. When the trajectory is low, however, this separation increases to around $15\mathrm{m}$. This presents a problem if we want to protect both the motorcycle and the stunt person. Naturally, the safety of the person is the most important consideration, and it is simple to extend the box rig to the projected landing location of the stunt person.

Unfortunately, simulations show that a box configuration designed to decelerate the combined mass of motorcycle and stunt person smoothly doesn't work as well when there is just the mass of the person to contend with. In fact, it's possible that the stunt person will decelerate so quickly that our g-force tolerance is exceeded. Our simulations show that this is in fact the case for all of the box stacks that we considered. For the heights and speeds considered, such a box rig is unsafe for the separated stunt person.
However, if the boxes are stacked loosely enough with some spacing between the boxes as in Figure 11, then it is possible to decelerate the stunt person at a reasonable rate. Therefore, the best solution is to redesign the box rig, using a softer material and/or looser stacking. + +![](images/34fc3fea22f86a67e796d238a4044289e764caca8014038744b16f59fa3ed535.jpg) +Figure 10. Box stacking arrangement suitable for catching a stunt person who has separated from the motorcycle. + +# Recommendations + +- Which mass is best? A $400\mathrm{kg}$ mass is simply too much to be slowed adequately by a box stack less than $4\mathrm{m}$ high. The motorcycle invariably falls through the rig and hits the ground beneath at greater than $5\mathrm{m/s}$ ; the motorcycle could easily tumble over in the boxes and crush the stunt person. A $300\mathrm{kg}$ mass is marginal, but the safest is $200\mathrm{kg}$ . +- Which trajectory is best? The low trajectory $(10^{\circ})$ provides the least risk but allows only minimal clearance over the elephant (only $1\mathrm{m}$ for a tall elephant) and requires the highest speed (which increases the risk). + +- Which type of boxes and stacking is best? The type (1) stack, made of $(20\mathrm{in})^3$ boxes, is best for landing without a ramp. With a ramp under the rig, type (3), made from $(30\mathrm{in})^3$ boxes, and type (5), which is type (3) with added vertical mattress box walls, are optimal. The added walls of type (5) decrease the landing distance by $2\mathrm{m}$ , so fewer boxes are required and construction cost is reduced. +- What size must the rig be? With the $200\mathrm{kg}$ mass, our simulation shows that $3\mathrm{m}$ height is usually enough for the low trajectory, but $4\mathrm{m}$ is necessary for the high trajectory. This can be reduced to as little as $2\mathrm{m}$ if the rig is stacked on top of a landing ramp. 
Stopping distance is between 10 and $13\mathrm{m}$ (measured from point of entry to the front of the debris wall), depending on stack type, and we estimate that the landing location uncertainty is $1\mathrm{m}$ laterally and $3\mathrm{m}$ forwards or backwards. We consider an additional $50\%$ beyond these uncertainties to be necessary. Therefore our recommendations are:

- Height: $4\mathrm{m}$ without landing ramp, $2\mathrm{m}$ with ramp.
- Width: $4\mathrm{m}$.
- Length: $24\mathrm{m}$ for type (1) or (5) stacking, and $29\mathrm{m}$ for type (3) stacking.

- How much does it cost? The cost is between \$4,300 for type (1) and \$5,300, depending on the precise configuration; this is approximately the cost of renting an airbag rig for a day [M&M Film Stunts 2003].
- How many boxes? The type (1) stack requires 2,000 $(20\mathrm{in})^{3}$ boxes, and type (3) requires 1,100 $(30\mathrm{in})^{3}$ boxes. Type (5) uses the same number as (3) plus a few additional mattress boxes.

# Final Recommendation

The overall best type of box rig uses $(30\mathrm{in})^{3}$ boxes stacked as usual, with vertical mattress-box walls every couple of meters to distribute the forces over a larger number of boxes. This configuration gives the softest deceleration and requires the fewest boxes.

# References

Filippone, A. 2003. Aerodynamics database. http://aerodyn.org/Drag/tables.html/#man . Accessed 8 February 2003.
M&M Film Stunts Ltd. 2003. Stunt equipment rentals. http://www.mmstunts.com . Accessed 7 February 2003.
McCoy Corporation. n.d. http://www.4corrugated.com . Accessed 8 February 2003.
Paper Mart. n.d. http://www.papermart.com . Accessed 7 February 2003.
Pflug, Jochen, Ignaas Verpoest, and Dirk Vandepitte. 1999. Folded honeycomb cardboard and core material for structural applications. http://www.mtm.kuleuven.ac.be/Research/C2/poly/downloads/SC5_TorHex_Paper.pdf . Accessed 7 February 2003.
VeriPack.com. n.d. http://www.veripack.com . Accessed 7 February 2003.
+Woodland Park Zoo. n.d. African elephant. http://www.zoo.org/chai/site/learn/african.htm. Accessed 8 February 2003. + +# Editor's Note + +The authors' model emphasizes the important role that creases play in the breakdown of a box, based on their own experimental results with "crash dummies." + +The authors' conclusions about this are confirmed in Peterson [2003], which summarizes some of the research on crumpling, and from which we quote and summarize: + +[T]he energy that goes into bending and stretching a material as it is crumpled is concentrated in the network of narrow ridges... [C]rumpled sheets can be described in terms of the pattern of ridges and peaks that cover the surface. By adding together the deformation energies associated with individual ridges, scientists can estimate the total energy stored in a given crumpled sheet. ... + +The researchers also discovered that increasing a sheet's size has an unexpectedly small effect on the total amount of energy required to crumple it. For instance, it takes only twice as much energy to crumple a sheet whose sides are eight times [as long] ... + +A team of students from Fairview High School in Boulder, Colorado—Andrew Leifer, David Pothier, and Raymond To—won an award at this year's Intel International Science and Engineering Fair for their study of crumpling. They found that ridges in crumpled sheets show a fractal pattern, with a power law describing the lengths of ridges produced from buckling a single ridge and with a Weibull probability distribution describing the frequency distribution of ridge lengths. + +Their result supported the notion that paper crumpling can be viewed as a repetitive process of buckling multiple ridges and their daughter products. + +Peterson gives extensive references to literature on crushing, crumpling, and buckling of thin sheets. + +# References + +Peterson, Ivars. 2003. Ivars Peterson's Math Trek: Deciphering the wrinkles of crumpled sheets. 
http://www.maa.org/mathland/mathtrek_05_26_03.html.

# Fly With Confidence

Hu Yuxiao

Hua Zheng

Zhou Enlu

Zhejiang University

Hangzhou, China

Advisor: Tan Zhiyi

# Abstract

We develop a model to design a pile of cardboard boxes to cushion the fall of a stunt motorcycle; the kinetic energy of the motorcycle is absorbed in breaking down the boxes.

We ignore the scattering effect of the boxes and begin by analyzing a single box. The energy to break it down has three components: the upper surface, the side surfaces, and the vertical edges. When big boxes are used, the upper surface provides the main supporting force; when small ones are used, the vertical edges play a major role.

We extend our analysis to the pile of boxes. Taking into account the maximum impact force that a person can bear, along with the camera effect and cost concerns, we determine the size of a box.

We conceive several stacking strategies and analyze their feasibility. We incorporate their strengths into our final strategy. We also examine several modifications to the box to improve its cushioning effect.

To validate our model, we apply it to different cases and get some encouraging results.

# Assumptions

- The stunt person and the motorcycle are taken as a system, which we refer to as the motorcycle system or, for brevity, as the motorcycle. We ignore relative movement and interaction between them.
- The motorcycle system is a uniform-density block. We consider only the movement of its mass center, so we treat the motorcycle system as a mass particle.
- The cardboard boxes are all made of the same material, single-wall S-1 cardboard $4.5\mathrm{~mm}$ thick [Corrugated fiberboard ... n.d.].
- The cardboard box is cubic and isotropic.
- Cost is proportional to the total surface area of cardboard.

# Symbols and Terms

Table 1. Symbols.
| Symbol | Meaning |
|---|---|
| **Motorcycle parameters** | |
| $a$ | mean acceleration during the landing |
| $b$ | drag coefficient for the motorcycle plus rider |
| $E$ | kinetic energy of the motorcycle when it hits the pile of boxes |
| $H$ | height of the stage from which the motorcycle leaves |
| $m$ | total mass of the motorcycle system |
| $S$ | cross-sectional area of the motorcycle system, which we assume is $1.5\mathrm{m^2}$ |
| $\sigma_m$ | standard deviation of the magnitude of $v_0$ |
| $\sigma_d$ | standard deviation of the direction of $v_0$ |
| $\theta$ | angle between motorcycle direction and the horizontal |
| $v_0$ | initial projection speed of the motorcycle |
| $v$ | speed of the motorcycle when it lands on the pile |
| **Box parameters** | |
| $E_{\mathrm{box}}$ | energy that a single box can absorb during its breakdown |
| $\ell$ | edge length of the box |
| $p_{\mathrm{pole}}$ | pressure needed to break down the pole of a box |
| $p_{\mathrm{side}}$ | pressure needed to break down the side of a box |
| $\tau$ | coefficient for transfer of pressure from side to pole |
| $\rho$ | density of energy absorption (DEA) of a box |
| $V_{\mathrm{box}}$ | original volume of a single box |
| **Pile parameters** | |
| $h$ | height of the pile |
| $S_{\mathrm{total}}$ | total combined surface area of all boxes in the pile |
| $V_{\mathrm{pile}}$ | combined volume of all boxes in the pile |
| **Cardboard parameters** | |
| $b_s$ | bursting strength of the cardboard |
| $e_s$ | edgewise crush resistance of the cardboard |
| **Rider parameter** | |
| $a_{\max}$ | maximum acceleration that a person can bear |
Pole: the vertical edge of the box

Edge: the horizontal edge of the box

Corner: the intersection of poles and edges

Bursting strength: the maximum pressure on the cardboard before it bursts

![](images/95778e2f0ef3cf520bd9b04d68cdd70ca29e0279c3ded70c9ef6c41cd399b685.jpg)
Figure 1. Illustration of terminology.

Edgewise crush resistance: the maximum force per unit length of edge before the side surface crushes

Piercing strength: the energy required for an awl to pierce the cardboard

# Problem Analysis

Our primary goal is to protect the stunt person. Having ensured this, we should minimize the height of the pile (to get a good film effect) and minimize the total surface area of the boxes (to lower the cost).

From the analysis of the jump, we get the pile area and the kinetic energy of the motorcycle system. Then we consider the cushioning process. Since the maximum impact force a person can bear is $12\mathrm{kN}$ [UIAA Safety Regulations n.d.], the problem is to ensure that the force exerted on the person stays within this bound during the cushioning.

Consider the motorcycle landing in a pile of cardboard boxes. The boxes provide a supporting force to decelerate it. We examine the cushioning effect of both big boxes and small boxes. In our modeling, we focus on energy and reduce the problem to analyzing the energy to break down the boxes. We search for the relation between this energy and the size and number of the boxes. We then improve the cushioning by changing stacking approaches and modifying the box.

# Model Design

# The Jump

# Local Assumptions

- The speed $v_0$ and direction of the motorcycle at takeoff are random variables that are normally distributed.
- The standard deviation $(\sigma_{m})$ of the magnitude of $v_{0}$ is $0.05v_{0}$.
- The standard deviation $(\sigma_d)$ of the direction of $v_0$ is $5^\circ$.
- The stunt is considered safe if the probability of landing on top of the box pile is more than $99.7\%$.
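The $99.7\%$ criterion above can be checked by Monte Carlo sampling under these assumptions; a minimal sketch with illustrative numbers (a horizontal jump at $v_0 = 10\mathrm{m/s}$ from a $5\mathrm{m}$ stage, drag neglected):

```python
import math
import random

def landing_x(v0, theta, H, g=9.8):
    # Horizontal distance travelled while falling a height H, drag neglected.
    vx, vy = v0 * math.cos(theta), v0 * math.sin(theta)
    t = (vy + math.sqrt(vy ** 2 + 2 * g * H)) / g  # time to reach the ground
    return vx * t

def landing_interval(v0=10.0, H=5.0, n=100_000, seed=1):
    # Sample speed ~ N(v0, 0.05 v0) and direction ~ N(0, 5 degrees), then
    # report the interval that captures all but the extreme 0.3% of landings.
    random.seed(seed)
    xs = sorted(
        landing_x(random.gauss(v0, 0.05 * v0),
                  math.radians(random.gauss(0.0, 5.0)), H)
        for _ in range(n)
    )
    return xs[int(0.0015 * n)], xs[int(0.9985 * n)]

lo, hi = landing_interval()
print(f"99.7% of landings fall between {lo:.2f} m and {hi:.2f} m from the stage")
```

A pile whose top face spans this interval (plus the size of the motorcycle) satisfies the safety criterion.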
# Local Variables

- $v_{0}$: initial speed,
- $\theta$: angle between velocity and horizontal direction,
- $H$: height of stage,
- $m$: mass of motorcycle system.

We first examine the path that the motorcycle follows. Taking air resistance into account, we get two differential equations

$$
\frac{dv_{x}}{dt} = -\frac{b v_{x}}{m}, \quad \frac{dv_{y}}{dt} = -\frac{b v_{y}}{m} - g, \tag{1}
$$

with initial conditions and kinematics

$$
v_{x} = v_{0}\cos\theta, \quad v_{y} = v_{0}\sin\theta, \quad \text{and} \quad \frac{dx}{dt} = v_{x}, \quad \frac{dh}{dt} = v_{y}. \tag{2}
$$

The drag coefficient $b$ can be estimated from the mass and terminal speed of a skydiver [Halliday et al. 2001]: at terminal speed, $b = mg/v$. For $v = 60\mathrm{m/s}$ and $m = 200\mathrm{kg}$, we have $b \approx 30\mathrm{N/(m/s)}$.

A typical elephant is $3\mathrm{m}$ high and $5\mathrm{m}$ long, with a trunk $2\mathrm{m}$ long [Estes 1999]. To jump over it safely, we assume that the stage is higher than the elephant's trunk reach, which is $5\mathrm{m}$.

Solving equations (1)-(2) with Matlab, we get a quasiparabola. Air resistance makes a difference of no more than $5\%$, so we neglect it. With height difference $H$ and initial speed $v_{0}$, neglecting air resistance, the landing point is $v_{0}\sqrt{2H/g}$ from the projection point.

# Determine the Area of the Pile

The area of the box pile must be large enough for the actor to land on; that is, the upper face of the pile must capture more than $99.7\%$ of falls. (The very few that miss may crash into the side face, which is still quite safe.) To meet this criterion, the surface must extend to cover six standard deviations (three on each side of the mean) of both the projection speed $v_{0}$ and its direction [Sheng et al. 2001].

We calculate the landing point for the combinations of $v_{0} \pm 3\sigma_{m}$ for the speed and $\pm 3\sigma_{d}$ for deviation from straight.
The resulting length is $3.03\mathrm{m}$ and the + +width is $5.23\mathrm{m}$ . Taking the size of the motorcycle into account, the length needs to be approximately $4.5\mathrm{m}$ and the width $6\mathrm{m}$ . Since the motorcycle has a high horizontal speed, the box pile must in fact be longer to cushion the horizontal motion. + +# The Cushioning Process + +# Definitions + +Big box: The cross-sectional area is much larger than that of the motorcycle, so the motorcycle interacts with only one box when hitting the cushion. We ignore the deformation and resistance of edges and poles. Thus, we need to consider only the interaction between motorcycle and the upper surface of that box. + +Small box: The cross-sectional area is smaller than or comparable to that of motorcycle, so the motorcycle interacts with a number of them simultaneously. In this situation, the edges and poles play a great role in cushioning. + +Since big and small boxes have different interactions, we analyze the two situations separately and compare their cushioning effect to determine the box size. + +# Analysis of Big Box + +The motorcycle has considerable velocity when it hits the upper surface of the box, exerting a force on the upper surface. Because the corrugated cardboard is to some extent elastic, it first stretches a little. After some elongation, it goes into the plastic region and finally ruptures. This process is too complicated for us to calculate the total energy that results in the final rupture, so we reduce it to the following extreme situation. + +The area of the upper surface of the cardboard is infinitely large and the motorcycle can be taken as a point compared with the cardboard. The piercing strength of the corrugated paper is $4.9\mathrm{J}$ [Corrugated fiberboard ... n.d.]. The total energy of the motorcycle is $E = mv^2 /2$ ; since the order of magnitude of $v$ is $10\mathrm{m / s}$ , the energy is approximately $10^{4}\mathrm{J}$ . 
Thus, the kinetic energy of the motorcycle is about $10^{3}$ times the energy needed to pierce the cardboard. So the cardboard is easily pierced and provides little cushioning.

Although we examine an extreme situation, we can safely reach the following conclusion: The bigger the box, the easier it is for the motorcycle to penetrate the upper surface. In fact, as the cardboard becomes smaller, the motorcycle can no longer be treated as a point, so the force is distributed over the surface, making it more difficult to penetrate the upper surface.

When the box becomes even smaller, the edges and poles of the box provide substantial support. But the cost increases at the same time.

# Small Box Model

![](images/6fbaaa1a852c12082b521f03d31d750c80ac14ad2e29593e5cf4f3f47fb0a674.jpg)
Figure 2. Energy to break down a box.

Energy to break down a box. The breakdown of a box consists of three processes:

- breakdown of the upper surface,
- breakdown of the side surfaces, and
- breakdown of the poles.

After all three components break down, the box is completely damaged and cannot provide any cushioning. The total energy required to break down a box is

$$
E_{\text{box}} = E_{\text{upper}} + E_{\text{side}} + E_{\text{pole}}.
$$

After some analysis (see Appendix), we find that $E_{\mathrm{upper}}$ and $E_{\mathrm{side}}$ are rather small compared with $E_{\mathrm{pole}}$, so

$$
E_{\text{box}} \approx E_{\text{pole}}. \tag{3}
$$

We cannot find any data for calculating $E_{\mathrm{pole}}$, so we make a rough estimate by analogy with steel, for which we have data. We obtain the relationship between the maximum pressures needed to break the pole and the side:

$$
p_{\text{pole}} = \tau p_{\text{side}},
$$

where $p_{\mathrm{pole}}$ is the breakdown pressure for the pole, $p_{\mathrm{side}}$ is the breakdown pressure for a side surface, and $\tau$ is the transfer coefficient.
The breakdown pressure for a side surface is inversely proportional to the edge length $\ell$, so with edgewise crush resistance $e_s$ of the cardboard we have

$$
p_{\text{side}} = \frac{e_{s}}{\ell}. \tag{4}
$$

Height of the pile. The motorcycle lands in the pile with an initial velocity and ultimately decelerates to zero, trapped in the pile. During that process, the force exerted on the motorcycle must be smaller than the maximum force that a person can bear; otherwise, the stunt person would be injured. Since $12\mathrm{kN}$ is the threshold, we take $6\mathrm{kN}$ as the safety bound. Thus, a $60\mathrm{kg}$ person can bear a maximum acceleration of $a_{\max} = 6000/60 = 100\mathrm{m/s^2}$. We want the mean acceleration to be smaller than this: $\bar{a}\leq a_{\max}$; we use the mean acceleration because the cushioning process has approximately constant deceleration. Thus, using kinematics, we obtain

$$
\bar{a} = \frac{v^{2}}{2h} \leq a_{\mathrm{max}}, \qquad \mathrm{or} \qquad h \geq \frac{v^{2}}{2a_{\mathrm{max}}}.
$$

Thus, we let the pile height $h$ be $v^2/2a_{\mathrm{max}}$, so that the motorcycle just touches the ground when it stops. In terms of the kinetic energy $E = mv^2/2$ of the motorcycle, we have

$$
h = \frac{E}{ma_{\mathrm{max}}}. \tag{5}
$$

# Size of Boxes

To see how a box cushions the motion of the motorcycle, we define the density of energy absorption (DEA) of a box as

$$
\rho = \frac{E_{\mathrm{box}}}{V_{\mathrm{box}}},
$$

where $E_{\mathrm{box}}$ is the energy that the box can absorb during its breakdown and $V_{\mathrm{box}}$ is the original volume of the box. This density reflects the average cushioning ability of the box: $\rho$ is the energy absorbed per unit volume of boxes crushed.

In a homogeneous pile, all the boxes have the same DEA. The total energy that the pile absorbs is $\rho V_{\mathrm{pile}}$ for the collapsed boxes.
The height of the stack of boxes is $h$ and the cross-sectional area of the boxes collapsed by the motorcycle is $S$, so

$$
E = \rho V_{\text{pile}} = \rho S h = \rho S \cdot \frac{E}{ma_{\max}} \tag{6}
$$

from (5). Cancelling $E$, we get

$$
\rho = \frac{ma_{\mathrm{max}}}{S}.
$$

We assume that the work done in breaking down a single box is proportional to $p_{\mathrm{pole}}$, with proportionality coefficient $k$. In breaking down the pile of boxes, this pressure is exerted across an area $S$ and through a distance $h$, for total work $kp_{\mathrm{pole}}Sh$. Equating this work to the absorbed energy $E = \rho Sh$, we have

$$
\rho S h = E = k p_{\mathrm{pole}} S h,
$$

so

$$
\rho = \frac{ma_{\mathrm{max}}}{S} = k p_{\mathrm{pole}};
$$

and substituting $p_{\mathrm{pole}} = \tau e_s/\ell$, we get

$$
\ell = \frac{k \tau e_{s}}{ma_{\max}} S, \quad \text{together with the previous} \quad h = \frac{v^{2}}{2a_{\max}}. \tag{7}
$$

Substituting

$$
m = 200\mathrm{kg}, \quad a_{\max} = 100\mathrm{m/s^2}, \quad v = 13\mathrm{m/s}, \quad k = 0.5, \quad e_{s} = 4\times10^{3}\mathrm{N/m}, \quad S = 1.5\mathrm{m}^{2}, \quad \mathrm{and} \quad \tau = 1.5,
$$

we get

$$
\ell = 0.225\mathrm{m}, \qquad h = 0.845\mathrm{m}.
$$

# Number of Boxes

The landing area, $4.5\mathrm{m}$ by $6\mathrm{m}$, needs to be extended to take the horizontal motion into account. The maximum length $h$ that the motorcycle system can penetrate is sufficient for cushioning the horizontal motion, so the pile should be $(4.5 + h)\times 6\times h$ (meters) in dimension. From this fact, we can calculate the number of boxes needed in the pile.

From (7), we have

$$
h = \frac{mv^{2}}{2k\tau e_{s}S}\ell.
$$

Thus, we know the dimensions of the pile:

$$
N_{h} = \left\lceil \frac{h}{\ell} \right\rceil = \left\lceil \frac{mv^{2}}{2k\tau e_{s}S} \right\rceil = 4, \quad N_{w} = \left\lceil \frac{6}{\ell} \right\rceil, \quad N_{l} = \left\lceil \frac{4.5 + h}{\ell} \right\rceil. \tag{8}
$$

Numerically, we get

$$
N_{h} = 4, \qquad N_{w} = 27, \qquad N_{l} = 24;
$$

and the total number of boxes needed is $N = 4 \times 27 \times 24 = 2592$, with total surface area

$$
S_{\mathrm{total}} = 6\ell^{2}N = 6 \times 0.225^{2} \times 2592 \approx 780\mathrm{m}^{2}.
$$

Next, we analyze the change in cost if we alter the edge length of the boxes. As an approximation, we use

$$
N = \frac{V_{\mathrm{pile}}}{V_{\mathrm{box}}} = N_{h} \cdot \frac{6 \times (4.5 + h)}{\ell^{2}} = \frac{108 + 24h}{\ell^{2}} = \frac{108 + \frac{12m\ell v^{2}}{k\tau e_{s}S}}{\ell^{2}}.
$$

We have calculated the minimum $h$ and $\ell$. If we increase $\ell$, that is, use bigger boxes, we need fewer boxes but the total cost increases, since

$$
S_{\mathrm{total}} = 6\ell^{2}N = 6 \cdot \left(108 + \frac{12m\ell v^{2}}{k\tau e_{s}S}\right) \approx 648 + 540\ell.
$$

The number of layers is 4 regardless of edge length; but for increased $\ell$, the pile must be lengthened to ensure that the motorcycle does not burst out of the pile, because of the reduction in DEA.

In conclusion, smaller boxes lower the pile and are cost-efficient; from this computation, we choose the minimum size, $22.5\mathrm{cm}$ on a side, and need 2592 boxes.

# Stacking Strategy

In the above discussion, we assumed that the boxes are stacked regularly (no overlapping). We now examine several other stacking strategies.

# Pyramid Stack

Pyramid stacking places fewer boxes on top and more boxes at the bottom.
When a stress is exerted on the pile, it is divided into normal stress and shearing force along the slopes, so that the downward stress diverges [Johnson 1985]. Furthermore, a pyramidal stack is more stable than a regular stack.

# Mixed Stack

Mixed stacking stacks boxes of different sizes in the pile; a common practice is to lay big boxes on the top and small boxes at the bottom.

For a regular stack, we treated the cushioning as motion with constant acceleration because we assumed that the supporting force provided by the boxes is constant. However, this is not the case; generally speaking, the force is larger in the first few seconds, so decreasing the supporting force at the beginning is good for cushioning.

We have shown that big boxes provide less support than small boxes. So the mixed stack can be characterized as softer on the top and stiffer at the bottom. In fact, this kind of pile is similar to a sponge cushion, which is often used in stunt filming, the high jump, the pole vault, etc. In addition, cardboard is superior to a sponge cushion in this situation; a sponge cushion is too soft, so the motorcycle may lose balance.

# Sparse Stack

Sparse stacking reserves a space $c$ between adjacent boxes. Because each box absorbs a constant amount of energy, the spaces decrease the density of energy absorption $\rho$. Thus, given the initial kinetic energy, the height of the pile must increase to compensate for the decrease in $\rho$. Since $\rho Sh = E$ from (6), we have

$$
\frac{h_{\mathrm{new}}}{h_{\mathrm{old}}} = \frac{\rho_{\mathrm{old}}}{\rho_{\mathrm{new}}} = \frac{1/\ell^{2}}{1/(\ell + c)^{2}} = \left(1 + \frac{c}{\ell}\right)^{2}.
$$

So the cushioning distance $h$ is proportional to the square of $(1 + c/\ell)$. The sparse stack saves some material in each cross section but increases the cushioning distance.
With no change in base area, for rectangular stacking the surface area is constant, while in a pyramid stacking it decreases.

# Crossed Stack

![](images/f18584727f43546f87ddd6fabe5771219aa6baefe8d519b33bad086558f43dbf.jpg)
Figure 3. Crossed stacking: side and top views.

Crossed stacking lays the upper boxes on the intersections of the lower boxes, as shown in Figure 3. There are two kinds of interactions on the surface:

- vertex-to-face: the interaction between a pole of the upper box and the surface of the lower box. Because the pole is much stronger than the edge and surface, the pole will not deform but the upper surface may break down.
- edge-to-edge: the interaction between an edge of the upper box and the perpendicular edge of the lower box. The two edges both bend over. Figure 4 shows the deformation of the boxes.

To determine whether crossed stacking is better than regular stacking, we compare the pile heights of the two approaches.

![](images/027a01bca8835486305872d7a0edffe7f3ad185e2bf028a07954af7285dbf9fa.jpg)
Figure 4. Deformation of boxes in edge-to-edge interaction.

The force provided by the vertex-to-face structure is $F_{1} = b_{s}\ell^{2}$; the force provided by the edge-to-edge structure is $F_{2} = e_{s}\ell$. Thus, by the same analysis as above, the pile height of the crossed stack is

$$
h = \frac{mv^{2}\ell^{2}}{8kS(F_{1} + F_{2})} = \frac{mv^{2}\ell^{2}}{8kS(e_{s}\ell + b_{s}\ell^{2})} = \frac{1}{8(e_{s} + b_{s}\ell)} \cdot \frac{\ell mv^{2}}{kS},
$$

compared to the pile height of the regular stack,

$$
h' = \frac{m\ell}{2k\tau e_{s}S} \cdot v^{2} = \frac{1}{2\tau e_{s}} \cdot \frac{\ell mv^{2}}{kS}.
$$

For all edge lengths, we have $h < h'$: Crossed stacking needs a smaller pile height than regular stacking.
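The comparison $h < h'$ can be confirmed numerically from the two formulas; a minimal sketch using the parameter values from the size computation earlier ($m = 200$ kg, $v = 13$ m/s, $k = 0.5$, $S = 1.5$ m², $e_s = 4\times10^3$ N/m, $\tau = 1.5$, $\ell = 0.225$ m), with the bursting strength $b_s$ swept over illustrative values since no figure for it is given in the text:

```python
def h_regular(l, m, v, k, S, e_s, tau):
    # h' = (1 / (2 tau e_s)) * (l m v^2) / (k S)
    return l * m * v ** 2 / (k * S) / (2 * tau * e_s)

def h_crossed(l, m, v, k, S, e_s, b_s):
    # h = (1 / (8 (e_s + b_s l))) * (l m v^2) / (k S)
    return l * m * v ** 2 / (k * S) / (8 * (e_s + b_s * l))

p = dict(l=0.225, m=200.0, v=13.0, k=0.5, S=1.5, e_s=4e3)
print(f"regular stack: h' = {h_regular(tau=1.5, **p):.3f} m")  # 0.845 m, as above
for b_s in (0.0, 1e4, 1e5, 1e6):  # illustrative bursting strengths, N/m^2
    assert h_crossed(b_s=b_s, **p) < h_regular(tau=1.5, **p)
```

Since $2\tau e_s = 3e_s < 8(e_s + b_s\ell)$ for any $b_s \ge 0$ when $\tau = 1.5$, the inequality holds regardless of the bursting strength.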
# The Final Stacking Strategy

Synthesizing the analyses of the several stacking approaches, we present our final stacking strategy (Figure 5). The far end of the pile is a quasislope, and big boxes are on the top while small boxes are cross-stacked at the bottom. The slope is in the same direction as the initial velocity of the motorcycle. Therefore, the stress that the motorcycle exerts on the pile dissipates along this direction.

# Modifications to the Box

In search of a better solution, we may make some modifications to the boxes to make the pile more comfortable, as well as lower and cheaper.

# Changing Edge Ratio

We have assumed the box to be cubic; we now investigate the energy-absorbing properties of a box that is not cubic.

![](images/d15dfac66bc379a64cb2eebaf2859839c7ca1a8d96bc231a753edaf44259dd97.jpg)
Figure 5. The final stacking strategy (cross-sectional view).

According to Wolf's empirical formula [Sun 1995, 4], we have

$$
P = \frac{1.1772 P_{m}\sqrt{tz}\left(0.3228R_{L} - 0.127R_{L}^{2} + 1\right)}{100H_{0}^{0.041}}, \tag{9}
$$

where

- $P$ is the stiffness of the box,
- $P_{m}$ is the edgewise crush resistance of the cardboard,
- $t$ is the thickness of the cardboard,
- $z$ is the perimeter of the box,
- $R_{L}$ is the ratio of the length and width of the box, and
- $H_0$ is the height of the box.

Because the exponent of $H_0$ is only 0.041, the height makes little difference to the stiffness of the box. Thus, we should choose a small value for $H_0$, since a lower height means a lower center of gravity and thus better stability for the pile.

# Stuffing the Box

After the collapse process, the box no longer provides any supporting force. To lengthen the effective cushioning time, we may add elastic material inside the box, such as foam or corrugated paper.
They must be soft and loose enough; otherwise, they may prevent the box from collapsing, or occupy too much space when the box breaks down, defeating one of the reasons why a box pile is preferable to a foam or spring cushion.

# Adding Supporting Structures

Apart from stuffing the box, we may also add supporting structures, such as vertical struts. They strengthen the box by preventing tangential displacement and supporting the upper surface of the box. There are different designs, such as triangular or square buttresses in the corners. These supporting structures bring considerable improvement to the mechanics while increasing the total cost insignificantly. Because these structures significantly strengthen the box, the size of the box should be increased.

# Other Considerations

To make the landing comfortable, the upper layers should be more elastic and the lower layers stiffer. To accomplish this, we can put taller boxes with stuffing in the upper layers and shorter ones with supporting structures in the bottom layers.

# Generalizing the Result

# The General Process

We assumed that a $60\,\mathrm{kg}$ stunt person riding a $120\,\mathrm{kg}$ motorcycle jumps horizontally from a $5\,\mathrm{m}$ stage at $10\,\mathrm{m/s}$. Now we offer a more general statement.

Let the masses of the rider and the motorcycle be $m_r$ and $m_m$; the new $a$ and $m$ are $6\,\mathrm{kN}/m_r$ and $(m_r + m_m)$. We also calculate the values of the final speed $v$ and kinetic energy $E$, using the initial speed $v_0$ and stage height $H$:

$$
E = m g H + \frac{1}{2} m v_{0}^{2}, \qquad v = \sqrt{\frac{2 E}{m}}.
$$

We can then apply (7) to calculate the edge length of the boxes and the height of the pile, and we get the dimensions of the pile from (8).

We can also make small adjustments by changing the structure of the boxes and the pile, according to the approaches introduced in the section on stacking strategy.
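The general process above is easy to mechanize. A minimal sketch, assuming $g = 9.8\ \mathrm{m/s^2}$ and using the paper's baseline masses and speeds:

```python
import math

G = 9.8  # gravitational acceleration, m/s^2 (assumed value)

def landing_energy_and_speed(m_r, m_m, v0, H):
    """Total energy E at landing and landing speed v for a rider of mass
    m_r (kg) on a motorcycle of mass m_m (kg), leaving an H-m stage at
    v0 m/s, per E = m g H + (1/2) m v0^2 and v = sqrt(2 E / m)."""
    m = m_r + m_m
    E = m * G * H + 0.5 * m * v0**2
    v = math.sqrt(2 * E / m)
    return E, v

# The paper's baseline case: 60-kg rider, 120-kg motorcycle, 5-m stage, 10 m/s.
E, v = landing_energy_and_speed(60.0, 120.0, 10.0, 5.0)
# E ≈ 1.78e4 J, v ≈ 14.1 m/s
```

These values of $E$ and $v$ would then feed into (7) and (8) to size the boxes and the pile.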
# Quick Reference Card for the Stunt Coordinator

To see how our model works under different circumstances, we suppose that a stunt team has an actor (65 kg) and an actress (50 kg) and four motorcycles, namely, Toyota CBX250S, Jialing JH125, Yamaha DX100, and Yamaha RX125 [Ai 1999].

We summarize the results in Figure 6 for the stunt coordinator's quick reference in practical filming work. Note that the team needs only two types of boxes. Sensitivity analysis—varying $v_{0}$, $H_{0}$, $m_{r}$, and $m_{m}$ by $10\%$—shows that $h$ and $\ell$ are not sensitive to small changes in these variables.

The box size
| Mass of Actor (kg) | Mass of Motorcycle (kg) | Basal Area (m²) | Box Size (cm) | Alternative Box Size (cm) | Adjustment |
|---|---|---|---|---|---|
| 50 | 129 | 1.342 | 18.7 | 22.5 | Sparse Placement |
| 50 | 105 | 1.500 | 24.2 | 22.5 | Buttress |
| 50 | 97 | 1.372 | 23.3 | 22.5 | Stuffing |
| 50 | 83.5 | 1.408 | 26.4 | 28.0 | Sparse Placement |
| 65 | 129 | 1.342 | 22.5 | 22.5 |  |
| 65 | 105 | 1.500 | 28.7 | 28.0 | Stuffing |
| 65 | 97 | 1.372 | 27.5 | 28.0 | Crossed Placement |
| 65 | 83.5 | 1.408 | 30.8 | 28.0 | Buttress |
The pile height
| Actor's Mass (kg) | Stage Height (m) | Initial Velocity (m/s) | Pile Height (m) | Actor's Mass (kg) | Stage Height (m) | Initial Velocity (m/s) | Pile Height (m) |
|---|---|---|---|---|---|---|---|
| 65 | 15 | 15 | 2.81 | 50 | 15 | 15 | 2.16 |
| 65 | 15 | 5 | 1.73 | 50 | 15 | 5 | 1.33 |
| 65 | 5 | 15 | 1.75 | 50 | 5 | 15 | 1.35 |
| 65 | 5 | 5 | 0.67 | 50 | 5 | 5 | 0.51 |
Figure 6. Quick reference card for the stunt coordinator.

# Model Validation

# Validation of Homogeneity Assumption

Our main model is based on the assumption that, when the boxes are small, the box pile can be treated as a homogeneous substance. Now we use Wolf's empirical formula (9) to validate this assumption.

Because we consider all the boxes as cubes, and because the exponent of $H_0$, 0.041, is so small, the denominator can be treated as the constant 100. We simplify the expression to

$$
P = \frac{1.1772 \cdot 5880 \cdot \sqrt{0.0045 \cdot 2\ell \cdot 100}\,(0.3228 - 0.127 + 1)}{100},
$$

so that

$$
p = \frac{P}{(100\ell)^{2}} = 7.85\,(100\ell)^{-\frac{3}{2}};
$$

the derivative of $p$ is $dp/d\ell = -11.78 \times 10^{-3}\,\ell^{-\frac{5}{2}}$. We graph the derivative in Figure 7. For $\ell > 5$ cm, we have $dp/d\ell \approx 0$, so $p$ is essentially independent of $\ell$ when $\ell > 5$ cm. This supports our assumption of homogeneity.

![](images/81bbac460b61e98461125759ead2142514e7cdfd54f678b5287f5e9f960518f6.jpg)
Figure 7. Derivative of $p$ as a function of edge length $\ell$.

# Validation of Small Box Model

Consider a single box with generalized dimensions $\ell \times w \times h$. We assume that the stiffness is proportional to the average force that the box is subjected to when it is collapsed, with proportionality coefficient $k$. Thus the work it does, or the energy that it absorbs from the colliding object, is $W = kPh$. Equating $W$ with $E$, we have $kPh = \rho S h = \rho \ell w h$, or $kP = \rho \ell w$. We use Wolf's formula (9) for $P$ and minimize the surface area of the box subject to the resulting constraint on $\ell$, $w$, and $h$:
minimize $S = 2(\ell w + wh + \ell h)$

subject to $k \cdot \left(\dfrac{1.1772 P_m \sqrt{t(\ell + w)}\,(0.3228 R_L - 0.127 R_L^2 + 1)}{100 h^{0.041}}\right) = \rho \ell w.$

This optimization model is nonlinear, so we cannot easily get an analytical solution. However, adding our assumption that the box is a cube significantly simplifies the constraint equation. We wrote a program to search for the optimum value and found that the minimum surface area is $2.1 \times 10^{3}\ \mathrm{cm}^{2}$, attained at an edge length of $19.1\ \mathrm{cm}$.

This result is consistent with our earlier one, $S = 3.0 \times 10^{3}\ \mathrm{cm}^{2}$ at $\ell = 22.5\ \mathrm{cm}$, supporting the correctness of our model.

# Strengths and Weaknesses

# Strengths

- We carefully built our model on the limited information that we could find; some of the data crucial to our solution are from cardboard company advertisements. Although there are not enough data available to justify our model fully, we compared our results with the available data. We also visited such Websites as http://www.stuntrev.com and examined stunt videos. We found that most of their cushioning setups agree well with our model, so we believe that our result has practical value.
- We abstract the pile of boxes into a simple homogeneous model, which proves reasonable.
- We apply careful mechanical analysis in our model design. Given reasonably accurate data, the model can provide a good result.
- The model examines various stacking approaches and modifications to the boxes. It helps to find the best way to design a pile of boxes for cushioning.
- We generalize the model to different situations and get good results.

# Weaknesses

- We ignore the scattering of boxes when they are crushed, which may contribute to cushioning.
- The model is only as accurate as the data used, but some data are dubious.
We were forced to obtain one crucial data value by analogy with a material (steel) of similar structure for which data are available.
- The number of boxes is very large (more than 2,000); this may be caused by our choice of the thinnest corrugated cardboard.
- We ignore air resistance; the error introduced is about $5\%$.

# Appendix

# Corner Structure

According to structural mechanics, the corner structure is a stable structure. That is why T-shaped, L-shaped, and I-shaped structures are widely used in buildings and bridges [Punmia and Jain 1994].

We study the extension of the side surface when the crush happens. We consider a small bending of the pole after the outer force acts on the upper surface (see Figures 8-9).

The force $F_{p}$ is composed of two equal forces $F_{p1}$ and $F_{p2}$ produced by the extension of the side surface, so $F_{p} = \sqrt{2} F_{p1} = \sqrt{2} F_{p2}$. The flexibility of the cardboard is described by Young's modulus. We have

$$
E \cdot \frac{\Delta L}{a} = \frac{F_{p1}}{A},
$$

where $\Delta L$ is the small extension of the side surface and $A$ is the cross-sectional area (thickness times side length) of the cardboard.

![](images/fec178e15bc44404a29ae2914cefeae5d2d9e2dd3cf67711ad04e7dbec00c902.jpg)
Figure 8. Breakdown of a corner structure.

![](images/6de4fbc83eb250d96a9ddf716edc67e23b0c99b19f64ea4b659333c005ff588f.jpg)
Figure 9. Bulging of a side under pressure on the top surface.

We could not find Young's modulus for corrugated cardboard in any handbook; from the values for similar materials, we suppose that it is about $0.5\,\mathrm{MPa}$. Building the force-balance equation, we assume $F_{p} = F_{f}$; then

$$
\Delta L = \frac{F_{f} a}{A E}; \qquad \max F_{f} = e_{s} a^{2}; \qquad \Delta L = \frac{e_{s} a^{3}}{A E}.
$$

For $a = 10\,\mathrm{cm}$, $A = 2.5\,\mathrm{cm}^2$, and $e_s = 4\times 10^3$ Pa, we get $\Delta L = 3.2\,\mathrm{mm}$.
This result means that the maximum force $F_{f}$ that can make the side surface deform brings only a tiny (3.2-mm) extension to the pole. This strongly supports our supposition that $F_{p} \gg F_{f}$, which in turn supports (3), $E_{\mathrm{box}} \approx E_{\mathrm{pole}}$.

# The Determination of $\tau$

Computing the value of $p_{\mathrm{pole}}$ is very difficult because of the lack of data. Following practice in the iron industry, we set $p_{\mathrm{pole}} = \tau p_{\mathrm{side}}$, where $\tau$ is a constant depending on the material.

We determine $\tau$ by analogy to the method used in the iron industry, for which there is a theory of the axial compression of columns with rectangular or L-shaped cross sections. The former has strength parameter 0.77 to 0.93, while the latter has strength parameter 0.56 to 0.61; the ratio $\tau$ of the two is thus between 1.26 and 1.66. The average value is 1.46; for simplicity, we let $\tau = 1.5$ [Huang et al. 2002].
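The arithmetic behind the $\tau$ estimate can be checked directly. A small sketch; the strength-parameter ranges are those quoted above, and the extreme-ratio averaging is our reading of the text:

```python
# Strength parameters quoted above: rectangular cross section 0.77-0.93,
# L-shaped cross section 0.56-0.61.
rect_lo, rect_hi = 0.77, 0.93
L_lo, L_hi = 0.56, 0.61

tau_min = rect_lo / L_hi            # smallest ratio, ~1.26
tau_max = rect_hi / L_lo            # largest ratio, ~1.66
tau_avg = (tau_min + tau_max) / 2   # ~1.46, rounded to tau = 1.5 in the model
```

This reproduces the quoted range 1.26 to 1.66 and the average 1.46.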
Punmia, B.C., and A.K. Jain. 1994. Strength of Materials and Mechanics of Structure. New Delhi: Lakshmi Publications.
Sheng, Zhou, Shiqian Xie, and Chenyi Pan. 2001. Probability and Statistics. Higher Education Press.
Sun, Cheng. 1995. Package Structure Design. China Light Industry Press.
UIAA Safety Regulations. n.d. International Mountaineering and Climbing Federation. http://www.cc.nctu.edu.tw/~mclub/meichu/teach/equip/equipment2_1.html.

# Judge's Commentary: The Outstanding Stunt Person Papers

William P. Fox

Dept. of Mathematics

Francis Marion University

Florence, SC 29501

bfox@fmarion.edu

# Introduction

Once again, Problem A proved to be a nice challenge for both the students and the judges. The students did not have a wealth of information for the overall model from the Web or from other resources, though they could find basic information from many sources to help model the jump over the elephant. This problem turned out to be a "true" modeling problem; the students' assumptions led to the development of their model. The judges had to read and evaluate many diverse (yet sometimes similar) approaches in order to find the "best" papers. Judges found mistakes even in these "best" papers, so it is important to note that "best" does not mean perfect. Many of these papers contain errors in modeling, assumptions, mathematics, and/or analysis. The judges must read and apply their own subjective analyses to evaluate critically both the technical and expository solutions presented by the teams.

No paper analyzed every element or applied critical validation and sensitivity analysis to all aspects of its model.

# Advice to Students and Advisors

At the conclusion of the judging, the judges offered the following comments:

- Follow the instructions

- Clearly answer all parts.
- List all assumptions that affect the model and justify your use of them.

- Make sure that your conclusions and results are clearly stated.
Any mission directives or additional questions need to be addressed. +- Restate the problem in your words. +- At the end, ask yourselves the question, "Does our answer make intuitive and/or common sense?" + +- "Executive" Summary (Abstract) of Results + +The summary is the first piece of information read by a judge. It should be well written and contain the bottom-line answer or result. It should motivate the judge to read your paper to see how you obtained your results. The judges place considerable weight on the summary, and winning papers are sometimes distinguished from other papers based on the quality of the summary. To write a good summary, imagine that a reader may choose whether or not to read the body of the paper based on your summary. Thus, a summary should clearly describe your approach to the problem and, most prominently, your most important conclusions. Summaries that are mere restatements of the contest problem or are cut-and-pasted boilerplate from the Introduction are generally considered weak. + +- Put the "bottom line results and managerial recommendations" in the summary. +- Be succinct; do not include a discussion of methods used, and do not just list a description or historical narrative of what you did. + +- Clarity and Style + +- Use a clear style and do not ramble. +- Do not list every possible model or method that could be used in a problem. +- A table of contents is very helpful to the judges. +- Pictures, tables, and graphs are helpful; but you must explain them clearly. +- Do not include a picture, table, or graph that is extraneous to your model or analysis. +- Do not be overly verbose, since judges have only $10 - 30\mathrm{min}$ to read and evaluate your paper. +- Include a graphic flowchart or an algorithmic flow chart for computer programs used/developed. + +The Model + +- Develop your model—do not just provide a laundry list of possible models even if explained. 
- Start with a simple model, follow it to completion, and then refine it. Many teams built a model without air resistance and—before determining the number of boxes needed—refined only that part of the model to include air resistance.

- Computer Programs

- If computer programs are included, clearly define all parameters.
- Always include an algorithm in the body of the paper for any code used.
- If running a Monte Carlo simulation, be sure to run it enough times to have statistically significant output.

- Validation

- Check your model against some known baseline, if possible, or explain how you would do this.
- Check the sensitivity of your results to the parameters.
- Check to see if your recommendations/conclusions make common sense.
- Use real data.
- The model should represent human behavior and be plausible.

Resources

- All work needs to be original, or else references must be cited—with specific page numbers—in the text of the paper; a reference list at the end is not sufficient. (This particular problem lent itself to a good literature search.)
- Teams may use only inanimate resources; consulting people is not allowed.
- Surf the Web, but document the sites where you obtained information that you used.

# Judging

The judging is accomplished in two phases.

- Phase I is "triage judging." These are generally only 10-min reads, with a subjective scoring from 1 (worst) to 7 (best). Approximately the top $40\%$ of papers are sent on to the final judging.
- Phase II, at a different site, is done by different judges and consists of a calibration round followed by another subjective round based on the 1-7 scoring system. Then the judges collaborate to develop a unique 100-point scale that enables them to "bubble up" the better papers. Four or more longer rounds are accomplished using this scale, followed by a lengthy discussion of the final group of papers.
# Reflections of the Triage Judges

- Lots of good papers made it to the final judging.
- The summary made a significant difference. A majority of the summaries were poor and did not tell the reader the results obtained. A large number of teams simply copied and pasted their introductory paragraphs, which have the different purpose of establishing the background of the problem.
- The biggest thing that caught the judges' eyes was whether or not the team paid attention to the questions asked in the problem. A number of teams addressed what they knew but didn't consider the real question—they were cut quickly.
- The next deciding factor was whether a team actually did some modeling or simply looked up a few equations and tried to shoehorn them into the problem. We looked for evidence that modeling had taken place; experiments were good to see.
- A final concern was the quality of writing—some was so poor that the judge couldn't follow or make any sense of the report.

Triage and final judges' pet peeves include:

- Acronyms that are not immediately understood, and tables with columns headed by Greek letters.
- Definition and variable lists that are embedded in a paragraph.
- Equations used without explaining the terms and what the equation accomplishes.
- Copying derivations from other sources—a better approach is to cite the reference and explain briefly.

# Approaches by the Outstanding Papers

Six papers were selected as Outstanding submissions because they:

- developed a workable, realistic model from their assumptions and used it to answer all elements of the stunt person scenario;
- made clear recommendations as to the number of boxes used, their size, and how they should be stacked, and offered a generalization to other stunt persons;
- wrote a clear and understandable paper describing the problem, their model, and results; and
- handled all the elements.

The required elements, as viewed by the judges, were in two distinct phases.
- Models needed to consider the mission of the stunt person. A model had to be developed that ensured that the stunt person could jump over the elephant. The better teams then worked to minimize the speed needed to accomplish this jump. Teams that used a high-speed jump were usually discarded by the judges quickly.
- In Phase II, the model had to consider the landing; this included the speed, energy, force, and momentum of the jumper, so that the boxes could be used safely to cushion the fall.

Most of the better papers did an extensive literature and Web search for information about cardboard. However, many teams spent far too much energy researching cardboard; their time would have been better spent in modeling.

The poorest section in all papers, including many of the Outstanding papers, was the summary.

Another flaw found by the judges was the misuse of ECT (Edge Compression Testing) in a proportionality model. It is true that a proportionality exists, as shown in Figure 1; but that proportionality is not the one used by the teams.

![](images/e4e0a7ed87eed88f4056543fd1d6ff8ca16e8f7cdfea4049c752f820d4bbf011.jpg)
Figure 1. Force as a function of delta in ECT.

Many started with

$$
BCS = 5.87 \times \mathrm{ECT} \times \sqrt{Pt},
$$

where $P$ is the perimeter of the box and $t$ is its thickness. Many went on to develop an energy model, in which energy is the area under the curve, namely $\frac{1}{2} \times P \times \mathrm{ECT} \times h$ (magic number). However, this proportionality is flawed. The potential energy is indeed an area, but it is $\frac{1}{2} \times P \times \mathrm{ECT} \times \delta$, which is not an equivalent statement (see Figure 1).

The better papers used a variety of methods to model the safe landing, including kinematics, work, and energy absorption. The discussion of the boxes and how they were to be secured was also an important feature.
One team laid many large mattress boxes flat on top of the stacked boxes to give a smooth landing area. The judges enjoyed the creativity of the teams in this area. + +# About the Author + +![](images/c530d9fd0c9e90b16a37bcb8105090394bc380eb7304b832b738b6d24cf0ebaf.jpg) + +Dr. William P. Fox is Professor and the Chair of the Department of Mathematics at Francis Marion University. He received his M.S. in operations research from the Naval Postgraduate School and his Ph.D. in operations research and industrial engineering from Clemson University. Prior to coming to Francis Marion, he was a professor in the Department of Mathematical Sciences at the United States Military Academy. He has co-authored several mathematical modeling textbooks and makes numerous conference presentations + +on mathematical modeling, as well as being a SIAM lecturer. He is currently the director of the High School Mathematical Contest in Modeling (HiMCM). He was a co-author of last year's Airline Overbooking Problem. + +# The Genetic Algorithm-Based Optimization Approach for Gamma Unit Treatment + +Sun Fei + +Yang Lin + +Wang Hong + +Donghua University + +Shanghai, China + +Advisor: Ding Yongsheng + +# Abstract + +The gamma knife is used to treat brain tumors with radiation. The treatment planning process determines where to center the shots, how long to expose them, and what size focusing helmets should be used, to cover the target with sufficient dosage without overdosing normal tissue or surrounding sensitive structures. + +We formulate the optimal treatment planning for a gamma-knife unit as a sphere-packing problem and propose a new genetic algorithm (GA)-based optimization approach for it. Considering the physical limitations and biological uncertainties involved, we outline a reasonable, efficient and robust solution. + +First, we design a geometry-based heuristic to produce quickly a reasonable configuration of shot sizes, locations, and number. 
We first generate the skeleton using a 3D-skeleton algorithm. Then, along the skeleton, we use the GA-based shot-placement algorithm to find the best location to place a shot. By iterating the algorithm, we obtain the number, sizes, and locations of all shots. After that, we develop a dose-based optimization method.

Then we implement simulations of our models in Matlab. We ran numerous computer simulations, using targets of different shapes and sizes, to examine the effectiveness of our model. From the simulation results, we conclude that the geometry-based heuristic with the GA optimization approach is a useful tool in the selection of the appropriate number of shots and helmet sizes. Generally, all of the optimized plans for the various targets provide full-target coverage with $90\%$ of the prescription isodose.

Moreover, we do sensitivity analysis on our model in the following aspects:

- the sensitive structures;
- the $30\%$, $40\%$, $60\%$, or $70\%$ isodose level;
- the issue of global versus local optimality;
- conformality; and
- robustness.

We also discuss the strengths and limitations of our model. The results indicate that our approach is sufficiently robust and effective to be used in practice. In future work, we would fit ellipsoids instead of spheres, since some researchers note that the dose is skewed in certain directions.

# Introduction

The gamma knife unit delivers ionizing radiation from 201 cobalt-60 sources through a heavy helmet. All beams simultaneously intersect at the isocenter, resulting in an approximately spherical dose distribution at the effective dose levels. Each delivery of dose is termed a shot, and a shot can be represented as a sphere. Four interchangeable outer collimator helmets with beam channel diameters of 4, 8, 14, and $18\,\mathrm{mm}$ are available. For a target volume larger than one shot, multiple shots can be used.
Gamma knife treatment plans are conventionally produced using a manual iterative approach. In each iteration, the planner attempts to determine

- the number of shots,
- the shot sizes,
- the shot locations, and
- the shot exposure times (weights)

that would adequately cover the target and spare critical structures. For large or irregularly shaped treatment volumes, this process is tedious and time-consuming, and the quality of the plan produced often depends on both the patience and the experience of the user. Consequently, a number of researchers have studied techniques for automating the gamma knife treatment planning process [Wu and Bourland 2000a; Shu et al. 1998]. The algorithms that have been tested include simulated annealing [Leichtman et al. 2000; Zhang et al. 2001], mixed integer programming, and nonlinear programming [Ferris et al. 2002; 2003; Shepard et al. 2000; Ferris and Shepard 2000].

The objective is to deliver a homogeneous (uniform) dose of radiation to the tumor (the target) while avoiding unnecessary damage to the surrounding tissue and organs. Approximating each shot as a sphere [Cho et al. 1998] reduces the problem to one of geometric coverage. Like Liu and Tang [1997], we formulate optimal treatment planning as a sphere-packing problem, and we propose an algorithm to determine shot locations and sizes.

# Assumptions

To account for the physical limitations and biological uncertainties involved in the gamma knife therapy process, we make the following assumptions:

A1: The shape of the target is not too irregular, and the target volume is bounded. As a rule of thumb, the target to be treated should be less than $35\,\mathrm{mm}$ in all dimensions. Its three-dimensional (3D) digital image, usually consisting of millions of points, can be obtained from a CT or MRI scan.
A2: We consider the target volume as a 3D grid of points and divide this grid into two subsets, the points inside and outside the target, denoted by $T$ and $N$, respectively.
A3: Four interchangeable outer collimator helmets with beam channel diameters $w = \{4, 8, 14, 18\}$ mm are available for irradiating volumes of different sizes. We use $(x_s, y_s, z_s)$ to denote the coordinates of the center of a shot and $t_{s,w}$ to denote the time (weight) for which each shot is exposed. The total dose delivered is a linear function of $t_{s,w}$. For a target volume larger than one shot, multiple shots can be used to cover the entire target. There is a bound $n$ on the number of shots, with typically $n \leq 15$.
A4: Neurosurgeons commonly use isodose curves as a means of judging the quality of a treatment plan; they may require that the entire target be surrounded by an isodose line of $x\%$, e.g., $30-70\%$. We use an isodose line of $50\%$, which means that the $50\%$ line must surround the target.
A5: The dose cloud is approximated as a spherically symmetric distribution by averaging the profiles along the $x$, $y$, and $z$ axes. Other effects are ignored.
A6: The total dose deposited in the target and critical organ should be more than a fraction $P$ of the total dose delivered; typically, $25\% \leq P \leq 40\%$.

# Optimization Models

# Analysis of the Problem

The goal of radiosurgery is to deplete tumor cells while preserving normal structures. An optimal treatment plan is designed to:

R1: match specified isodose contours to the target volumes;
R2: match specified dose-volume constraints of the target and critical organ;
R3: constrain dose to specified normal tissue points below tolerance doses;
R4: minimize the integral dose to the entire volume of normal tissues or organs;
R5: minimize the dose gradient across the target volume; and
R6: minimize the maximum dose to critical volumes.
It also is constrained to

C1: prohibit shots from protruding outside the target,
C2: prohibit shots from overlapping (to avoid hot spots),
C3: cover the target volume with effective dosage as much as possible (at least $90\%$ of the target volume must be covered by shots), and
C4: use as few shots as possible.

We design the optimal treatment plan in two steps.

- We use a geometry-based heuristic to produce a reasonable configuration of shot number, sizes, and locations.
- We use a dose-based optimization to produce the final treatment plan.

# Geometry-Based Heuristic for Sphere-Packing

We model each shot as a sphere, and we use the medial axis transform (known as the skeleton) of the target volume to guide placement of the shots. The skeleton is frequently used in shape analysis and other related areas [Wu et al. 1996; Wu and Bourland 2000b; Zhou et al. 1998]. We use the skeleton just to find good locations for the shots quickly. The heuristic is in three stages:

- We generate the skeleton using a 3D skeleton algorithm.
- We place shots and choose their sizes along the skeleton to maximize a measure of our objective; this process is done by a genetic algorithm (GA)-based shot placement approach.
- After the number of focusing helmets to be included in the treatment plan is decided, the planner produces a list of the possible helmet combinations and a suggested number of shots to use.

# Skeleton Generation

We adopt a 3D skeleton algorithm that follows procedures similar to those of Ferris et al. [2002]. We use a morphologic thinning approach [Wu 2000] to create the skeleton, as opposed to the Euclidean-distance technique. The first step in the skeleton generation is to compute the contour map containing the distance from each point to the nearest target boundary. Then, based on the contour map, several known skeleton extraction methods [Ferris et al. 2002; Wu et al. 1996; Wu and Bourland 2000b; Zhou et al. 1998; Wu 2000] can be used.
Since the method in Ferris et al. [2002] is simple and fast, we use it.

# Genetic Algorithm-Based Shot Placement

We restrict our attention to points on the skeleton. We start from a special type of skeleton point, an endpoint (Figure 1): A point in the skeleton is an endpoint if it has only one neighbor in the skeleton.

![](images/def9ee141e3627d760d8a1055696245b684b44a49e00cc9b394ae02084e096c8.jpg)
(a) an endpoint

![](images/1e961d61c645318f7a2d1afcb5cd760cadd691a4efd5e4333c8d663e14e6405e.jpg)
(b) an endpoint
Figure 1. Examples of endpoints.

Starting from an endpoint, we look for the best point to place a shot and determine the shot size by using a GA [Goldberg 1989; Mann et al. 1997]. In the GA-based shot placement algorithm, we must solve the following problems:

The encoding method. In general, bit-string (0s and 1s) encoding is the most common method adopted by GA researchers because of its simplicity and tractability. However, in this case, if we directly encode the point coordinates $(x_{s},y_{s},z_{s})$ into a bit string, crossover and mutation generate some points that are not in the skeleton. To solve this problem, we build a table of correspondence between the point coordinates and the point number (1 to $M$); instead of encoding the point, we encode the point number. We select $m$ points from all points of the skeleton to form a population; a single point is a chromosome.

Performance evaluation. The key to the GA-based approach is the fitness function. Ideally, we would like to place shots that cover the entire region without overdosing within (or outside) the target. Overdosing occurs outside the target if we choose a shot size that is too large for the current location, and hence the shot protrudes from the target. Overdosing occurs within the target if we place two shots too close together for their chosen sizes.
Before defining a fitness function, we give some definitions:

- Fraction: A target part that is not large enough to be destroyed by the smallest shot without any harm to the surrounding normal tissue.
- Span: The minimum distance between the current location and the endpoint at which we started.
- Radius: The approximate Euclidean distance to the target boundary.

We would like to ensure that the span, the radius, and the shot size $w$ are as close as possible. Therefore, we choose a fitness function that is the sum of the squared differences between these three quantities. This fitness function helps ensure that the fraction generated after each shot is placed on the target is as small as possible [Ferris et al. 2002]:

$$
\operatorname{Fit} = \phi_{s,r}(x,y,z) + \phi_{s,w}(x,y,z) + \phi_{r,w}(x,y,z),
$$

where

$$
\phi_{s,r}(x,y,z) = \left[\operatorname{span}(x,y,z) - \operatorname{radius}(x,y,z)\right]^2, \tag{1}
$$

$$
\phi_{s,w}(x,y,z) = \left[\operatorname{span}(x,y,z) - w\right]^2, \tag{2}
$$

$$
\phi_{r,w}(x,y,z) = \left[\operatorname{radius}(x,y,z) - w\right]^2. \tag{3}
$$

- Equation (1) ensures that we pack the target volume as well as possible; that is, the current span between shots should be close to the distance to the closest target boundary.
- Equation (2) is used to choose a helmet size that best fits the skeleton at the current location.
- Equation (3) favors a location that is the appropriate distance from the target boundary for the current shot size.

Genetic operators. Based on the encoding method, we develop the genetic operators in the GA: crossover and mutation.

- Crossover/recombination is a process of exchanging genetic information. We adopt one-point crossover; the crossover point is set randomly.
- Mutation. Any change in a gene is called a mutation; we use point mutation. 
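As a concrete illustration, the fitness computation in (1)-(3) and the induced choice of helmet size can be sketched as follows. This is a minimal sketch, not the paper's implementation; the function names and the mapping from helmet diameters to radii are our own.

```python
def fit(span, radius, w):
    """Fitness (1)-(3): sum of squared pairwise differences among the
    span, the radius, and the shot (sphere) radius w. Lower is better."""
    return (span - radius) ** 2 + (span - w) ** 2 + (radius - w) ** 2

# Radii (mm) corresponding to the 4-, 8-, 14-, and 18-mm helmets.
HELMET_RADII = [2, 4, 7, 9]

def best_size(span, radius):
    """Choose the helmet radius minimizing Fit at the current location."""
    return min(HELMET_RADII, key=lambda w: fit(span, radius, w))
```

For example, at a skeleton point whose span and boundary distance are both about 7 mm, the 14-mm helmet (radius 7) drives all three terms to zero and is selected.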
We propose the following GA-based shot placement algorithm:

1. Find a skeleton and all its endpoints. Take one of the endpoints as a starting point.
2. Randomly search all the points in the skeleton using the GA to find the location and size of the best shot, as follows:

(a) Randomly generate $m$ ($= 100$) points from all the points (e.g., $M = 1000$) in the skeleton; these $m$ points are the chromosomes. Set the crossover rate $p_c = 0.95$, the mutation rate $p_m = 0.05$, the desired Fit value, and the number $n_g$ of generations.
(b) Calculate the Fit of all $m$ points.
(c) If the algorithm has run $n_g$ generations, or if one of the $m$ points satisfies the desired Fit, the GA stops (at this time, the best shot is chosen); otherwise, encode the $m$ points into bit strings.
(d) Apply crossover and mutation to the $m$ bit strings, then go to Step 2b.

3. Treating the remainder of the target as a new target, repeat Steps 1-2 until only fractions of the target remain (at this time, all best shots have been found).

# Dose-Based Optimization

After we obtain the number, sizes, and locations of the shots, we develop a dose-based optimization method.

We determine a functional form for the dose delivered at a point $(i,j,k)$ from the shot centered at $(x_{s},y_{s},z_{s})$. The complete dose distribution can be calculated as a sum of contributions from all of the individual shots of radiation:

$$
D(i,j,k) = \sum_{(s,w) \in S \times W} t_{s,w}\, D_{s,w}(x_s, y_s, z_s, i, j, k),
$$

where $D_{s,w}(x_s,y_s,z_s,i,j,k)$ is the dose delivered to $(i,j,k)$ by the shot of size $w$ centered at $(x_s,y_s,z_s)$ with a delivery duration of $t_{s,w}$. 
Since $D_{s,w}$ is a complicated (nonconvex) function, we approximate it by

$$
D_{s,w}(x_s, y_s, z_s, i, j, k) = \sum_{p=1}^{2} \lambda_p \left(1 - \int_{-\infty}^{x_p} \frac{1}{\sqrt{\pi}}\, e^{-u^2}\, du\right), \tag{4}
$$

where $x_p = (t - r_p)/\sigma_p$, $t$ is the distance from the shot center to the point $(i,j,k)$, and $\lambda_p$, $r_p$, and $\sigma_p$ are coefficients [Ferris and Shepard 2000].

To meet the requirement of matching specified isodose contours to the target volume at the $50\%$ isodose line, the optimization formulation should impose the constraint that the $50\%$ isodose line surround the target. We impose strict lower and upper bounds on the dose allowed in the target; namely, for all $(i,j,k) \in T$, the dose $D(i,j,k)$ satisfies

$$
0.5 \leq D(i,j,k) \leq 2. \tag{5}
$$

To meet the requirement (R2) to match specified dose-volume constraints of the target and critical organ, based on assumption (A3) (which sets out the sizes of the beam channel diameters), no more than $n$ shots are to be used; that is, the cardinality of the set of shot-width pairs actually delivered satisfies

$$
\operatorname{card}\left[\{(s,w) \in S \times W \mid t_{s,w} > 0\}\right] \leq n. \tag{6}
$$

Let $q$ be the tolerance dose of the normal tissue points, so that $D(i,j,k) < q$ for all normal tissue points. The number of shots $n$ is no more than 15, so the tolerance dose of a specified normal tissue point should be $q = 15/201 = 7.46\%$, or

$$
0 \leq D(i,j,k) < q = 7.46\%, \tag{7}
$$

for all $(i,j,k) \in N$.

To meet requirement (R3) (keep the dose at normal tissue below a certain level), based on assumption (A6) (which sets the dose levels), the tolerance-dose ratio of the total dose deposited in the target to the total dose delivered by a plan must satisfy

$$
\frac{\sum_{(i,j,k) \in T} D(i,j,k)}{\sum_{(i,j,k) \in T \cup N} D(i,j,k)} \geq P, \quad P \in [0.25, 0.40]. 
\tag{8}
$$

We wish to satisfy constraint (C3) (at least $90\%$ of the target volume must be covered). We set

$V =$ the total volume of the target;

$V_{s} =$ the total effective dosage volume of the target, i.e., the volume of target points whose dose value is more than 0.5; and

$f =$ the effective dosage rate, which satisfies the inequality

$$
90\% \leq f = \frac{V_s}{V} \leq 100\%. \tag{9}
$$

The exposure time $t_{s,w}$ of each shot should be nonnegative:

$$
t_{s,w} \geq 0. \tag{10}
$$

We introduce a binary variable $\delta_{s,w}$ that indicates whether shot $s$ uses width $w$ or not, i.e.,

$$
\delta_{s,w} = \begin{cases} 1, & \text{if shot } s \text{ uses width } w; \\ 0, & \text{otherwise.} \end{cases}
$$

Moreover, we have the constraints (C1) (no shots protrude outside the target), (C2) (shots do not overlap), and (C4) (use as few shots as possible).

Given all these constraints (5)-(10), and based on the requirement (R4) (minimize dose to normal tissue), the goal is to minimize the dose outside of the target.

Also, to meet the requirement (R5) (minimize the dose gradient across the target volume), the treatment plan needs to be both conformal and homogeneous. It is easy to specify homogeneity in the models simply by imposing lower and upper bounds on the dose delivered to points in the target $T$. Typically, however, the imposition of rigid bounds leads to plans that are overly homogeneous and not conformal enough; that is, they provide too much dose outside the target. To overcome this problem, the notion of underdose (UD) is suggested in Ferris and Shepard [2000]. UD measures how much the delivered dose falls below the prescribed dose at the target points. In our models, we either constrain UD to be less than a prespecified value or attempt to minimize the total UD. 
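The coverage constraint (9) amounts to simple voxel counting. A minimal sketch on a toy cubic target, with an illustrative dose field rather than patient data (all names and values here are our own):

```python
# Toy 20x20x20 grid: target voxels form a centered 10x10x10 cube.
GRID = range(20)
target = {(i, j, k) for i in GRID for j in GRID for k in GRID
          if all(5 <= c < 15 for c in (i, j, k))}

def dose(i, j, k):
    """Illustrative dose field: one slab of the target is underdosed."""
    if (i, j, k) not in target:
        return 0.1
    return 0.3 if i < 7 else 0.8

V = len(target)                                  # total target volume (voxels)
V_s = sum(1 for p in target if dose(*p) >= 0.5)  # voxels above the 50% isodose
f = V_s / V                                      # effective dosage rate

print(f"coverage f = {f:.0%}")  # -> coverage f = 80%, violating (9)
```

Here the underdosed slab (200 of 1,000 target voxels) pulls $f$ down to $80\%$, so this toy plan would be rejected by (9).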
In practical application, rather than calculating the dose at every point, it is easy to estimate accurately the total dose delivered by a plan based solely on the $t_{s,w}$ variables and other precalculated constants. An upper bound is also placed on the dose to the target. Given these constraints, the optimizer seeks to minimize the total underdosage in the target. A point is considered to be underdosed if it receives less than the prescribed isodose $\theta$, which for the example formulation is assumed to be 1. In the optimization process, we model UD, which is constrained to be

$$
\mathrm{UD}(i,j,k) = \max\bigl[0,\, 1 - D(i,j,k)\bigr]
$$

at every point in the target. We can implement this construct using the linear constraints

$$
\theta \leq \mathrm{UD}(i,j,k) + D(i,j,k), \tag{11}
$$

$$
0 \leq \mathrm{UD}(i,j,k) \tag{12}
$$

for all $(i,j,k) \in T$.

Our second minimization problem is

$$
\text{Objective:} \quad \min \sum_{(i,j,k) \in T} \mathrm{UD}(i,j,k)
$$

subject to the same constraints (5)-(10) as earlier plus (11)-(12).

To meet the requirement (R6) (minimize maximum dose to critical volumes), we have the additional optimization problem

$$
\text{Objective:} \quad \min \sum_{(i,j,k) \in N} D(i,j,k)
$$

subject to the same constraints (5)-(10) as earlier.

All of the formulations are based on the assumption that the neurosurgeon can determine a priori a realistic upper bound $n$ on the number of shots needed. Several issues need to be resolved to create models that are practical, implementable, and solvable (in a reasonable time frame). Two main approaches are proposed in the literature [Ferris et al. 2002; 2003; Shepard et al. 2000; Ferris and Shepard 2000], namely mixed integer programming and nonlinear programming, to optimize all of the variables simultaneously. 
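The error-function approximation (4) is cheap to evaluate numerically, since the integral is $(1+\operatorname{erf}(x_p))/2$. The sketch below uses hypothetical coefficients chosen so that a nominal 4-mm shot delivers a dose near 1 at its center and exactly 0.5 at the 2-mm (50% isodose) radius; the real coefficients are fitted to measured dose profiles [Ferris and Shepard 2000].

```python
import math

def shot_dose(t, lams, rs, sigmas):
    """Evaluate the two-term approximation (4) at distance t from the
    shot center. Each term
        lam * (1 - integral_{-inf}^{x} (1/sqrt(pi)) exp(-u^2) du),
    with x = (t - r)/sigma, simplifies to lam * (1 - erf(x)) / 2."""
    return sum(lam * (1.0 - math.erf((t - r) / s)) / 2.0
               for lam, r, s in zip(lams, rs, sigmas))

# Hypothetical coefficients for a nominal 4-mm shot (radius 2 mm).
lams, rs, sigmas = [0.75, 0.25], [2.0, 2.0], [0.3, 1.0]

print(shot_dose(0.0, lams, rs, sigmas))  # close to 1 at the center
print(shot_dose(2.0, lams, rs, sigmas))  # exactly 0.5 at the 50% radius
```

Because the weights $\lambda_p$ sum to 1 and both terms are centered at $r_p = 2$, the profile crosses 0.5 exactly at the nominal shot radius, mimicking the 50% isodose matching used throughout the formulation.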
# Simulation Results and Model Testing

We developed an optimization package in Matlab to implement the algorithms of our models and performed numerous computer simulations using targets of different shapes and sizes.

To examine the model's correctness, we plot a dose-volume histogram for the four different helmets using (4), as shown in Figure 2. The histogram depicts the fraction of the volume that receives a particular dose for the target volume. The fit is best for the small shots and decreases slightly in accuracy for the larger ones. The lines show the fraction of the target and critical organ that receives a particular dosage.

Generally speaking, the shape of the target is not too irregular, so we choose five typical shapes of targets in different sizes. In Figure 3a, we illustrate the maximum section of a typical bean-shaped target, whose maximum dimension is $35 \mathrm{~mm}$. Using the skeleton generation algorithm, we get the corresponding skeleton shown in Figure 3b. Then, we apply the GA-based shot placement algorithm, resulting in three shots for the target: one $14 \mathrm{~mm}$ helmet and two $8 \mathrm{~mm}$ helmets. The locations and sizes of the helmets in 2D are indicated in Figure 3c, while the 3D shot placements are shown in Figure 4.

![](images/ed03759ec366f9df33b916a0b9f25075be755bec9a5a36f94609820fa165f5e7.jpg)
Figure 2. Dose-volume histograms for four different helmets.

![](images/e97a9392eff6a2c2242b4f11916e4b3ae4588870f961fa3f56b0518498ab3913.jpg)
Figure 3a. The maximum section of the target.

![](images/1bd9731a3102d1395f2ff141c5eb4dc3c72d84fd452ac4f2c3d3c6a0d56faa72.jpg)
Figure 3b. The skeleton.

![](images/70e3cba835677734aa47c2e2b81072009ae14f7678d2a29d4a12156e4ccd99d7.jpg)
Figure 3c. The locations and sizes of the helmets, in 2D.

![](images/3395a9713fc24677a4f02c8e3376665523b2fa0b5cbe740f45bfd13f2ba2b016.jpg)
Figure 4. The 3D shot placements in the target. 
For this target, we also plot six different isodose lines: $30\%$, $40\%$, $50\%$, $60\%$, $70\%$, and $100\%$ (Figure 5). The thick (red) line is the target outline, and the thin (black) line is the isodose line. In Figure 5c, the $50\%$ isodose line covers all the points of the target, while in Figure 5f, for the $100\%$ isodose line, no point of the shots exceeds the boundary of the target. We also present the 3D shot placements for four other target shapes in Figure 6.

The optimized plans for all five shapes of targets are shown in Table 1, together with the minimum target doses and the percentage coverages.

From all of these results, we conclude that the geometry-based heuristic with the GA optimization approach is a useful tool for assisting in the selection of the appropriate number of shots and helmet sizes. The results also indicate that our model exceeds the predefined quality requirements for the treatment planning.

# Sensitivity Analysis

- Can the model be applied to sensitive structures? Yes, by applying more dose constraints, such as an upper bound on either the mean dose or the maximum dose to the sensitive structures.
- Can we treat the tumor at an isodose level other than $50\%$? In Figure 5, with a lower isodose line, the dose outside of the target volume decreases rapidly, resulting in a reduction in the integral dose to normal tissue. With a higher

![](images/8e3f2ee89343fd54a3f3660cf62872107aa096316df79439b1e311f3b9ca2848.jpg)

![](images/f45b3a64c27fe14b4e32d8a6f2c7f2078f659e87186d8a1d8854ebb8e5570e46.jpg)

![](images/d21a887ffbddc509c11829abdcf12c606cf8d0e3f01b9a22f6e52d20ef94755c.jpg)

![](images/dda710af44c716e8461551db277196049b33bcaad200b76b583100407c9a3bb1.jpg)

![](images/6d24b588ba78ec455b5001159e750a54285960d1b202809afad677cf2b3ffde6.jpg)
Figure 5. The specified isodose lines of different values: $30\%$, $40\%$, $50\%$, $60\%$, $70\%$, $100\%$. 
![](images/0dc9c031fde1f346d9ad5385a427fa30950cbdbff135911602f16abfefc5b0c8.jpg)

![](images/c4dc32072ec8d10516fbe5a93fe6cb7aef275e198ba7476b5ad4f7fd23f96fc0.jpg)

![](images/bba4907d96a3e946dc26c381841398a9e018d30e4791299b42fcee2824c70728.jpg)

Table 1. Optimized plans for five targets.

| Target (figure) | Maximum section width (mm) | Helmet sizes (mm) | Number of shots | Minimum target dose | Coverage (50% isodose) | Coverage (80%) | Coverage (100%) |
|---|---|---|---|---|---|---|---|
| 6  | 35 | 18, 4    | 1, 5    | 0.51 | 100% | 97% | 90% |
| 8a | 26 | 8, 4     | 4, 3    | 0.52 | 100% | 97% | 88% |
| 8b | 20 | 14, 8, 4 | 1, 1, 1 | 0.52 | 100% | 96% | 92% |
| 8c | 10 | 4        | 6       | 0.45 | 99%  | 82% | 69% |
| 8d | 8  | 4        | 2       | 0.67 | 100% | 97% | 57% |
![](images/f43815593e3bbbdfb9198b2effbfad4b710b021ec88062fd174b08aa3fc50cec.jpg)

![](images/af824ea791b347aa9b11ba98ea5ecf8d0d309064d9fbf19cf70efbe85f5a1a02.jpg)

![](images/0fbeaefccc0ad0ef746a67f5306d867a54ba2814f25095ee644ff875222cd361.jpg)
Figure 6. Shot placement in four targets.

![](images/80be9099b2451449aed6119af08f9a43b4d8011ecdfa105474589e8ede69d8bf.jpg)

isodose line, the isodose line cannot cover all the points of the target. For nodular regions or sensitive organs, a higher isodose coverage level should be specified.

- The outstanding question from an optimization viewpoint is global vs. local optimality. How about our model? First, the GA-based shot placement algorithm can ensure that the fraction generated after each shot is placed on the target is the smallest. For the whole target, it also minimizes the sum of all fractions.
- When we have several comparable optimization schemes for shot placement, how do we choose the best one? For example, for a $10\mathrm{-mm}$-diameter sphere target, we have two comparable optimization schemes, as shown in Table 1. The first places one $8\mathrm{-mm}$ shot, and the second places six $4\mathrm{-mm}$ shots. If we consider the treatment merely as a sphere-packing problem, the better choice is the first one. However, in practical treatment, we should consider the diffuse regions where no shot is irradiated. Under the first plan, the dose value at points in these regions sums to more than 1, increasing the total effective dosage; so we adopt the second plan.

- Will there be any points outside the tumor whose dose value is greater than 1? In Figure 7, though the shots have not protruded outside the target (constraint (C1)), some points outside the tumor are overdosed. This occurs due to the very irregular shape of the target and is unavoidable. In this case, we must choose an optimization plan subject to additional constraints. 
![](images/1e9d522f8587ff9709aa31035b994d58903922365ee7f67e6fdc90bab29a8181.jpg)
Figure 7. The $100\%$ isodose of the target in Figure 6a.

- How about the robustness of our model? High-quality dose distributions have been obtained in all of the many cases optimized thus far.

# Strengths and Limitations

# Strengths

- Our optimization-based automated approach generates more-uniform and better treatment plans in less time than current practice.
- The geometry-based approach is based on skeletonization ideas from computer graphics, which can speed the process of shot placement.
- The GA-based shot placement algorithm can guide the planner in selecting the number of shots of radiation and the appropriate collimator helmet sizes; it can quickly place a shot; and it can ensure global and local optimality simultaneously.
- The model parameters can be tuned to improve solution speed and robustness.
- The graphical interface is an intuitive way to demonstrate the isodose curves and the treatment effects of the planning.

# Limitations

- The skeleton is a key factor in the effectiveness of the algorithm; we should seek better methods to determine it.
- Whether our model can handle very irregular targets needs to be examined.
- We use the function in Ferris and Shepard [2000] to approximate the dose calculation. Other methods of dose calculation should be examined.
- There is no guarantee that there is not a better treatment plan. More-intelligent algorithms, such as a neural network-based dynamic programming algorithm, could be considered.
- We have not tested our model on actual patient data.

# Conclusions

From the simulation results, we know that the geometry-based heuristic with the GA optimization approach is a useful tool in the selection of the appropriate number of shots and helmet sizes. Our approach is sufficiently robust and effective to be used in practice. 
In future work, we may fit ellipsoids instead of spheres, since some researchers have commented that the dose is skewed in certain directions.

# References

Cho, P.S., H.G. Kuterdem, and R.J. Marks. 1998. A spherical dose model for radiosurgery plan optimization. Physics in Medicine and Biology 43 (10): 3145-3148.
Ferris, M.C., J.-H. Lim, and D.M. Shepard. 2002. An optimization approach for radiosurgery treatment planning. SIAM Journal on Optimization 13 (3): 921-937. ftp://ftp.cs.wisc.edu/pub/dmi/tech-reports/01-12.pdf.
________. 2003. Radiosurgery treatment planning via nonlinear programming. Annals of Operations Research 119 (1): 247-260. ftp://ftp.cs.wisc.edu/pub/dmi/tech-reports/01-01.pdf.
Ferris, M.C., and D.M. Shepard. 2000. Optimization of gamma knife radiosurgery. In Discrete Mathematical Problems with Medical Applications, vol. 55 of DIMACS Series in Discrete Mathematics and Theoretical Computer Science, edited by D.-Z. Du, P. Pardalos, and J. Wang, 27-44. Providence, RI: American Mathematical Society. ftp://ftp.cs.wisc.edu/pub/dmi/tech-reports/00-01.pdf.
Goldberg, D.E. 1989. Genetic Algorithms in Search, Optimization, and Machine Learning. Reading, MA: Addison-Wesley.
Leichtman, Gregg S., Anthony L. Aita, and H. Warren Goldman. 2000. Automated gamma knife dose planning using polygon clipping and adaptive simulated annealing. Medical Physics 27 (1): 154-162.
Liu, J.-F., and R.-X. Tang. 1997. Ball-packing method: A new approach for quality automatic triangulation of arbitrary domains. In Proceedings of the 6th International Meshing Roundtable, 85-96. Sandia, NM: Sandia National Laboratories.
Man, K.F., K.S. Tang, S. Kwong, and W.A. Halang. 1997. Genetic Algorithms for Control and Signal Processing. New York: Springer-Verlag.
Shepard, D.M., M.C. Ferris, R. Ove, and L. Ma. 2000. Inverse treatment planning for gamma knife radiosurgery. Medical Physics 27 (12): 2748-2756.
Shu, Huazhong, Yulong Yan, Limin Luo, and Xudong Bao. 1998. 
Three-dimensional optimization of treatment planning for gamma unit treatment system. Medical Physics 25 (12): 2352-2357.
Wu, J.Q. 2000. Sphere packing using morphological analysis. In Discrete Mathematical Problems with Medical Applications, vol. 55 of DIMACS Series in Discrete Mathematics and Theoretical Computer Science, edited by D.-Z. Du, P. Pardalos, and J. Wang, 45-54. Providence, RI: American Mathematical Society.
________, and J.D. Bourland. 2000a. A study and automatic solution for multi-shot treatment planning for the gamma knife. Journal of Radiosurgery 3 (2): 77-84.
________. 2000b. 3D skeletonization and its application in shape based optimization. Computers in Medical Imaging 24 (4): 243-251.
________, and R.A. Robb. 1996. Fast 3D medial axis transformation to reduce computation and complexity in radiosurgery treatment planning. In Medical Imaging 1996: Image Processing, Proceedings of the SPIE, vol. 2710, edited by Murray H. Loew and Kenneth M. Hanson, 562-571.
Zhang, Pengpeng, David Dean, Andrew Metzger, and Claudio Sibata. 2001. Optimization of gamma knife treatment planning via guided evolutionary simulated annealing. Medical Physics 28 (8): 1746-1752.
Zhou, Y., A. Kaufman, and A.W. Toga. 1998. Three dimensional skeleton and centerline generation based on an approximate minimum distance field. Visual Computer 14: 303-314.

# A Sphere-Packing Model for the Optimal Treatment Plan

Long Yun

Ye Yungqing

Wei Zhen

Peking University

Beijing, China

Advisor: Liu Xufeng

# Abstract

We develop a sphere-packing model for gamma knife treatment planning to determine the number of shots of each diameter and their positions in an optimal plan.

We use a heuristic approach to solve the packing problem, which is refined by simulated annealing. The criteria for an optimal plan are efficiency, conformity, fitness, and avoidance. We construct a penalty function to judge whether one packing strategy is better than another. 
The number of spheres of each size is fixed, the total number of spheres has an upper bound, and critical tissue near the target is avoided.

Computer simulation shows that our algorithm meets the four requirements well and runs faster than the traditional nonlinear approach. After detailed evaluation, we not only demonstrate the flexibility and robustness of our algorithm but also show its wide applicability.

# Introduction

We develop an effective sphere-packing algorithm for gamma-knife treatment planning using a heuristic approach, optimized by simulated annealing. In our model, we take into consideration the following basic requirements:

1. At least $90\%$ shot coverage of the target volume is guaranteed. This requirement, the main standard for evaluating our algorithm, is an efficiency requirement.
2. Minimize the non-target volume that is covered by a shot or by a series of delivered shots. This is a conformity requirement.

3. Minimize the overlapped region of the delivered shots, in order to avoid hot spots as well as to economize shot usage. This is a fitness requirement.
4. Limit the dosage delivered to certain critical structures close to the target. Such requirements are avoidance requirements.

The traditional model for radiosurgery treatment planning via nonlinear programming assumes that the weights of the shots conform to a certain distribution, from which the construction of the objective function is possible. To avoid the complicated computation of nonlinear programming, we devise a more feasible and rapid heuristic algorithm without sacrificing any precision in the outcome.

- We consider an optimal sphere-packing plan for a given number of spheres of each size, satisfying requirements 1-3. That is, in this step, we assume that the lesion is far from any critical organ and try to find an optimal position for a fixed set of the spheres using the heuristic sphere-packing algorithm. 
- We try all possible combinations of up to 15 spheres; for each, we use the above algorithm to get an optimal plan. We develop a criterion to select from the different combinations the best packing solution for our model, which is optimized by simulated annealing.
- We consider the real situation in field practice, in which the effect of a critical organ is added. Accordingly, we modify the judgment criterion so that requirement 4 is satisfied.
- Finally, to apply the above method to more general situations, we add the weights of the shots.

Though we admit that the inherent limitations of this model, due to the simplification of the problem and the restriction of hardware capacity, are unavoidable, we believe that our model successfully solves the given problem. Our algorithm is not only fast in generating solutions but also flexible in allowing parameter settings to solve more difficult tasks.

# Assumptions

- Shots can be represented as spheres with four different diameters: 4, 8, 14, and 18 mm.
- The target volume is of moderate size, with a mean spherical diameter of 35 mm (and usually less) [The Gamma Knife . . . n.d.].
- The maximum number of shots allowed is 15.
- The target object is represented as a three-dimensional digital map with $100 \times 100 \times 100 = 1$ million pixels.
- The volume of an object is measured by the total number of pixels in it.
- The dose delivered is above the lower bound of the effective level needed to kill the tumor.

Table 1. Description of the variables. 
| Variable | Description |
|---|---|
| N | total number of shots |
| n_i | number of shots of type i |
| s | the sth shot used, s = 1, . . . , N |
| (x_s, y_s, z_s) | position of the sth shot center |
| Position | matrix storing all the positions of the shot centers |
| M | average shot width |
| Radius | vector storing the four types of radius: [9 7 4 2] |
| Bitmap | M × M × M boolean matrix storing information from the CT/MRI image |
| Dose | dose delivered, a linear function of exposure time satisfying θ ≤ Dose(i, j, k) ≤ 1, where θ is the lower bound of the isodose contour |
| Covered | total number of covered pixels in the target volume; directly reflects the efficiency requirement |
| Miscovered | total number of covered pixels in the normal tissue; directly reflects the conformity requirement |
| Overlap | total number of overlapped pixels among different shots; directly reflects the fitness requirement |
| Ratio | percentage of the target volume covered |
| SphereInfo | vector representing the number of each type of shot |
| SphereRadius | vector representing the radius of N shots |
# Background Knowledge

Gamma knife radiosurgery allows for the destruction of a brain lesion without cutting the skin. By focusing many small beams of radiation on abnormal brain tissue, the abnormality can be destroyed while preserving the normal surrounding structures. Before the surgery, the neurosurgeon uses the digitally transformed images from the CT/MRI to outline the tumor or lesion as well as the critical structures of the surrounding brain. Then a treatment plan is devised to target the tumor.

The determination of the treatment plan varies substantially in difficulty. When the tumor is large, has an irregular shape, or is close to a sensitive structure, many shots of different sizes could be needed to achieve appropriate coverage of the tumor. The treatment planning process can be very tedious and time-consuming due to the variety of conflicting objectives, and the quality of the plan produced depends heavily on the experience of the user. Therefore, a unified and automated treatment process is desired.

In our model, we reduce the treatment planning problem to an optimal sphere-packing problem by focusing on finding the appropriate number of spheres of different sizes, and their positions, to achieve the efficiency, conformity, fitness, and avoidance requirements.

# Construction and Development of the Model

# Fixed Set of Spheres

The main idea is to let a randomly chosen sphere move gradually to an optimal position.

In the beginning, $N$ spheres are randomly positioned inside the target volume. Then one sphere is moved to a new position, and we estimate whether the new location is better; if so, the sphere is moved; otherwise, it remains in place. We repeat this process until a relatively good packing solution is achieved.

To implement our algorithm, we need a criterion to judge a packing solution. 
According to our four requirements, it is reasonable to take as our criterion a weighted linear combination of the volumes of the covered, miscovered, and overlapped parts; that is, a good packing solution means less miscovered volume, less overlapped volume, and more covered volume.

Let sphere A move to location B (Figure 1). We restrict our consideration to just the pixels in the shaded area, which is very thin and thus has few pixels in it. The program judges which region each pixel belongs to (covered, miscovered, or overlapped), and we count the pixels of each kind. We implement this idea using a function PenaltyJudge that returns a signed integer indicating whether the change of the packing strategy results in a better solution.

![](images/1aa08aa294474962529c3e3bb5ee0bb4bfc5ce81f773e0b1d0954cc3a0b02c96.jpg)
Figure 1. Sphere A moves to location B. Figure 2. The centers of the $18\mathrm{-mm}$ spheres are set at $O$ and the centers of the three smaller spheres at random points in regions 1, 2, and 3, respectively.

How do we set the initial positions of the spheres? Our results will be affected significantly if the starting positions are not properly set. Cramming the spheres together will not do any harm, because according to our algorithm, all of the spheres move in different directions and finally scatter through the target volume. But there is one constraint that the initial positions must obey: Larger spheres cannot cover smaller ones. Otherwise, the smaller spheres will never move out of the larger ones, which means they are useless and wasted. Since spheres of the same size will not be covered by each other as long as their centers differ, we need to avoid only coverings between spheres of different sizes. Our technique is to set the spheres of different sizes in different regions of the target volume, which ensures that the spheres never cover each other.

In Figure 2, point O represents the center of the CT image of the target volume. 
We set the centers of all the 18-mm spheres at point O and center the 14-mm, 8-mm, and 4-mm spheres randomly at the tumor pixels lying in the shaded regions 1, 2, and 3, respectively. Thus, a relatively good starting status is generated.

We perturb the location of one sphere by a step (i.e., one pixel) in the North, South, East, West, Up, and Down directions. If a perturbation in one of these directions generates a better packing, we move the sphere one step in that direction. Then we choose another sphere and repeat the process. Applying this process to all of the spheres successively is one iteration. Our program generally generates a relatively good packing in about 10-15 iterations.

# Results and Data Analysis

# Heuristic Method

To test the effectiveness of our algorithm, we construct a 3D target object with $100 \times 100 \times 100$ pixels through the combination of two spheres and a segment of a circle. For a realistic simulation, we blur and distort the edge of the object with photoprocessing software, so that it is very similar to the shape of a real tumor. The simulated results and the solution given by our program are strikingly good, as shown in Table 2.

Table 2. Final distribution of shots from the heuristic algorithm on a simulated target. 
| Iterations | % covered | % miscovered | % overlapped | Time consumed (s) |
|---|---|---|---|---|
| 0  | 37 | 39 | 1 | 4  |
| 5  | 75 | 25 | 8 | 20 |
| 10 | 96 | 11 | 5 | 40 |
| 15 | 96 | 11 | 5 | 62 |
| 20 | 96 | 11 | 5 | 83 |
# Visualization of the Results

Plotting the resulting bitmap, we can see clearly from Figures 3-4 the evolution of the locations of the spheres as well as the stability and robustness of our program.

![](images/aade67bfeebaee0f344b02d284063ea1b1b6b2245596f767a935859ed150c6b7.jpg)

![](images/e1640cf97e5009840e42e7a1ed3eba4b61cbebf1929d0c9776b16b5ce9b8ed21.jpg)

![](images/19b7d3349dd8cf477db674b4ec4cd55444ef2054b90aafc3f813d4fe7e9f4124.jpg)
Figure 3. Distribution of spheres within the target after 0, 5, 10, and 15 iterations.

![](images/43b704d4e6decd78cce6ac7fe72d90b496a7217b21cf4853e947acb91e45c7d9.jpg)

![](images/e6e2aced4da0b250fd3baa67a2a9d54719e36835f4039d4da39fc21fd651a3d4.jpg)
Figure 4. Three-dimensional views of final placement of spheres.

![](images/d40cd313f07d8b0cac065279ba4e6a1397db3fd9477ec3c9c7a984afb288ed07.jpg)

After 10 iterations, all the spheres settle into a relatively stable position. Such fast and stable convergence occurs in all of our simulations. Hence, we can reasonably assume that after 15 iterations any initial packing will turn into an optimal one.

# Further Development: The Best Set of Spheres

# Difficulties and Ideas

So far, we have used our personal judgment about which sets of spheres to pack. A natural idea is to enumerate all combinations of the four types of spheres and find an optimal one. There are $\binom{19}{4} = 3876$ nonnegative integer solutions to the inequality $n_1 + n_2 + n_3 + n_4 \leq 15$. The runtime for our program to check them all, at 83 s each, would be $89\mathrm{~h}$, excluding the time to read the bitmap and plot the graphs.

So, to get a near-optimal but efficient solution, we turn to simulated annealing, which we use not only to find the optimal combination of the spheres but also to determine the direction in which to move each sphere at each step. 
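The count of candidate combinations quoted above can be verified directly; this is a brute-force check of the stars-and-bars count, not part of the authors' program:

```python
from itertools import product
from math import comb

# Nonnegative integer solutions of n1 + n2 + n3 + n4 <= 15, i.e., the
# possible counts of 4-, 8-, 14-, and 18-mm shots in a 15-shot budget.
count = sum(1 for n in product(range(16), repeat=4) if sum(n) <= 15)

# Stars and bars: adding a slack variable n5 = 15 - (n1+..+n4) turns the
# inequality into an equation with C(15 + 4, 4) solutions.
assert count == comb(19, 4) == 3876
print(count)  # -> 3876
```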
# Simulated Annealing to Find the Optimal Combination

Simulated annealing (SA) is a Monte Carlo approach used in a wide range of optimization problems, especially NP-complete problems; it can approximate the global extremum within the time tolerance.

SA is a numerical optimization technique based on the principles of thermodynamics. The algorithm starts from a valid solution, randomly generates new states for the problem, and calculates the associated cost function. Simulation of the annealing process starts at a high fictitious temperature (usually manipulated according to practical need). A new state is randomly chosen from the "neighborhood" of the current state (where "neighborhood" is a map $N: \text{State} \to 2^{\text{State}}$ , $i \mapsto S_i$ , satisfying $j \in S_i \iff i \in S_j$ ) and the difference in cost function is calculated. If (CurrentCost - NewCost) ≥ 0, i.e., the new cost is lower, then the new state is accepted. This criterion forces the system toward a state corresponding to a local—or possibly a global—minimum. However, most large optimization problems have many local minima, and the optimization algorithm is therefore often trapped in a local minimum. To get out of a local minimum, an increase of the cost function is accepted with a certain probability; i.e., the new state may be accepted even though its cost is higher. The criterion is

$$
\exp \left(\frac {\mathrm{CurrentCost} - \mathrm{NewCost}}{\mathrm{Temperature}}\right) > \mathrm{Random}(0, 1).
$$

The simulation starts with a high temperature, which makes the left-hand side of the equation close to 1. Hence, a new state with a larger cost has a high probability of being accepted.

The change in temperature is also important in this algorithm. Let $\beta_{n} = 1 / \text{Temperature}_n$ . Hwang and Sheu
[1990] prove that if $\beta_{n} / \log n \to 0$ as $n \to \infty$ , then $P(\text{NewState}_n \in \text{global extremum}) \to 1$ . But in practice, for convenience, we usually reduce the temperature according to $\text{Temperature}_{n+1} = 0.87 \, \text{Temperature}_n$ .

We apply SA to determine both the next direction to move a specified shot and whether a shot should be deleted, according to our judgment function. In the case of direction determination, we have

$$
\text{CurrentCost} - \text{NewCost} = 2 \times \text{SphereRadius} - \text{PenaltyJudge};
$$

and in the case of determining whether to delete a shot, we have

$$
\text{CurrentCost} - \text{NewCost} = \text{RatioCovered} - 0.7.
$$

After this adjustment, the results and the speed improve dramatically.

# Visualization of the Results

This time, we use a tumor image from the Whole Brain Atlas [Johnson and Becker 1997]. Using 20 two-dimensional slices of the tumor, we construct a 3D presentation of it. We visualize the optimization process in Figures 5-7.

Using Matlab, we seek out the contour of the tumor by reading the bitmap of all the pixels. Finally, we get a bitmap of $50 \times 50 \times 50 = 125000$ pixels, which is within the capacity of our computer.

![](images/c1f44085163dd34f7168d2f519fc3044b6b35a5b661c6fe8148aa82d90fe3aa4.jpg)
Figure 5. Sample slice of the tumor.

![](images/691f2a756fb6955ee847046141aba90df5c592685b8f218dff143ceea4af8a19.jpg)
Figure 6. Contour of the tumor.

Figure 7 shows the power of our algorithm.

# Critical Organ

Finally, we take into consideration the existence of a critical organ. In real medical practice, the maximum dose to critical volumes must be limited; that is, the dose delivered to any part of a critical organ cannot exceed a particular value.

Thus, we modify our judging criterion to meet the avoidance requirement.
In our previous algorithm, the criterion is implemented in the function PenaltyJudge as a weighted linear combination of the Covered, Miscovered, and Overlapped variables. We change PenaltyJudge so that if, after a step of the movement, a sphere covers any part of the critical organ, the movement is not made, even if the PenaltyJudge function justifies it. (This criterion is equivalent to setting a positive value as the maximum dose that can be delivered to a critical part.) We can simply give the covered critical part an infinite weight in the linear combination to achieve this goal, which demonstrates the flexibility of our program.

The results generated after this change do not differ significantly from the previous ones (because our heuristic algorithm already tries to keep the shots from protruding as much as possible), but the presence of a critical organ does detract from the final strategy of the treatment planning.

![](images/3f8c19fb631c3ccb55b8928aa11fe51841469050485a486ad2cbd177a7ea322f.jpg)
Figure 7a. Initial setting.

![](images/501418b47bde5eb398b53be120193e7bfcbdbbde9d05c96fa8cab72f7c5565e2.jpg)
Figure 7b. The scatter process.

![](images/f093ca4b9f171d6a981faf684f84d96031efe8ded313d6906d200aff3c118d41.jpg)
Figure 7c. Deletion and further scattering of the spheres.

![](images/3cf405c25e6bb1f4d0de6040db9fadcfae2143904e31ace203b97b5c26ad9d75.jpg)
Figure 7d. Final effect; 12 shots are used to cover this large tumor.

# Reoptimization

There are two aspects to the reoptimization of our model: improving the quality of the final solution, and improving the efficiency of the algorithm.

The random starting position of the packing spheres significantly affects the performance of the solution. Although we use a technique to improve the soundness of the starting position, the starting position can still be unsatisfactory, since the tumor can be of any irregular shape and size.
For example, in Figure 3, our program sometimes generates an inferior solution; the result we present is the best from three executions.

To get out of this dilemma, for each starting position we repeat the search for the optimal distribution three times and select the best solution.

The model could also be easily modified to handle more complex situations:

- consideration of the distribution of the dose of the shots, and
- varying the radii of the available shots over a continuous interval.

However, with new factors or more pixels, the program slows down. To speed it up, we can use a stepwise optimizing method—that is, first solve the problem with a coarse approximation, then refine and optimize it.

- In our initial model, we evolve a poor packing solution into a better one by one-pixel movements. In the modification, we make each step 3 pixels; after the packing has evolved to a stable status, we reset the step size to 1 pixel.
- We minimize the drawback of a large number of pixels by managing an image of a smaller size, i.e., an image in which one pixel represents several pixels of the original volume. We use our model to find an optimal solution for the smaller image, then return to the original data to generate a final solution.

# Evaluation: Strengths and Weaknesses

Four characteristics can be used in evaluating the algorithm for planning the treatment: effectiveness, speed, flexibility, and robustness.

Since our algorithm focuses mainly on optimizing the final results to meet requirements 1-4, and the data in Table 1 show satisfying results, our algorithm achieves the effectiveness goal. Our model also reduces the design of a good treatment plan to an optimal sphere-packing problem. By using the heuristic approach and the simulated annealing algorithm, we can find the optimal number of spheres of each kind and their positions in a relatively short time.
In addition, we take into full consideration the various factors that affect the efficiency and safety of gamma knife radiosurgery. By summarizing them into four requirements, we construct a penalty function that decides whether a change in the packing plan is desirable. Such a penalty function gives our algorithm great flexibility: If more factors are taken into consideration, we can simply add the contribution of each factor to the function. This flexibility is of great value in practice, since the requirements of different patients may vary a lot.

Furthermore, the heuristic method used in our program is general. In real medical practice, when many of the assumptions in this particular problem no longer stand, we can still use our algorithm to get an optimal plan. For example, some medical literature [Ferris et al. 2003] mentions that the actual dose delivered is ellipsoidal rather than spherical; we can simply modify our model to handle this situation by changing the four sphere types to ellipsoidal ones—the main outline of our algorithm needs little change.

Finally, our method is strengthened by simulated annealing, which ensures that our solution can reach the global optimum with high probability.

Though we believe that our model solves the problem, there are some limitations:

- The sphere-packing model is too simple; it fails to consider the real dose distribution and the time required for the shots to deliver enough energy.

- Due to hardware restrictions, our final solution for a target consisting of 1 million pixels needs approximately $30\mathrm{min}$ on a Pentium IV PC, which means any magnification of the scale of this problem is intolerable.

# Extension of the Problem

There are five factors that may affect the effectiveness of the treatment:

- How many shots to use $(N)$ ?
- Which size shots (radius)?
- Where to deliver the shots (position)?
- What is the distribution of the dose of a particular shot?
- How long to deliver shots $(t_s)$ ?

To improve our model so that it can accommodate more practical situations, shot weights must be added.

Our previous model mainly focuses on the first three factors, while our improvement also addresses the last two. We can obtain the actual shot weight distribution, since in practice it is easy to measure the relative weight of a dose at a certain distance from the shot center, as well as to represent its distribution in a 3D coordinate system.

We fit a nonlinear curve to these measurements, using nonlinear least squares. Suppose that the function of the curve is $D_{s}(x_{s},y_{s},z_{s},i,j,k)$ , which represents the relative dose at pixel $(i,j,k)$ from a shot centered at $(x_{s},y_{s},z_{s})$ . The dose at a certain pixel of the CT/MRI image can be calculated by the function

$$
\mathrm{Dose}(i, j, k) = \sum_ {(s, r) \in S \times \mathrm{Radius}} t_{s, r} D_{s} \left(x_{s}, y_{s}, z_{s}, i, j, k\right),
$$

where $t_{s,r}$ is the duration of a shot.

We make our four requirements more precise and practical by setting numerical limits on the dose that the tumor, normal tissue, and critical parts receive. These limits, set by the neurosurgeon, vary from patient to patient.

A simple refinement is to modify the diameters in the sphere-packing problem. The diameters are no longer 4, 8, 14, and $18\mathrm{mm}$ but must be calculated from the function $D_{s}$ when the specified weight of a shot is known. For example, if more than $50\%$ of the shot weight is required for the lesion part, the required diameters can be worked out from $D_{s} = 0.5$ . (We assume that the position of the shot does not affect the distribution of shot weight—only the distance from the shot center determines the weight.) If normal tissue can receive less than $20\%$ of the shot weight, we calculate the diameter $D$ corresponding to the $80\%$ shot weight.
Our conformity requirement is reduced to: The distance between the pixels of normal tissue and the shot center must be greater than $D$ .

Higher precision may be achieved using the concept of an isodose curve: A $p\%$ isodose curve encompasses all of the pixels that receive at least $p\%$ of the maximum dose delivered to any pixel in the tumor. The conformity requirement can then be represented as the conformity of such an isodose curve to the target volume. We can also approach the shot weight problem by adjusting the amount of shot time, especially when the target is very close to the critical part. Under such circumstances, hitting the critical part is unavoidable. But we can divide the total time required for the treatment into short spans, so that the dose received by the critical part in one time span will do little harm to it while the cumulative dose can kill the tumor.

In any case, improvements cannot be attained without incorporating real-world practice, and they must be balanced against the speed and efficiency requirements.

# References

Johnson, Keith A., and J. Alex Becker. 1997. The Whole Brain Atlas. http://www.med.harvard.edu/AANLIB/home.html.
Buffalo Neurosurgery Group. n.d. Gamma knife radiosurgery. http://buffaloneuro.com/radio/radio2.html.
Center for Image-guided Neurosurgery, Department of Neurological Surgery, University of Pittsburgh. 2003. The gamma knife: A technical overview. http://www.neurosurgery.pitt.edu/imageguided/gammaknife/technical.html.
Donovan, Jerry. n.d. Packing circles in squares and circles page. http://homeAtt.net/~donovanhse/Packing/index.html.
Ferris, M.C., J.-H. Lim, and D.M. Shepard. 2003. Radiosurgery treatment planning via nonlinear programming. Annals of Operations Research 119 (1): 247-260. ftp://ftp.cs.wisc.edu/pub/dmi/tech-reports/01-01.pdf.
Ferris, M.C., and D.M. Shepard. 2000. Optimization of gamma knife radiosurgery. In Discrete Mathematical Problems with Medical Applications, vol.
55 of DIMACS Series in Discrete Mathematics and Theoretical Computer Science, edited by D.-Z. Du, P. Pardalos, and J. Wang, 27-44. Providence, RI: American Mathematical Society. ftp://ftp.cs.wisc.edu/pub/dmi/tech-reports/00-01.pdf.
Hwang, Chii-Ruey, and Shuenn-Jyi Sheu. 1990. Large-time behavior of perturbed diffusion Markov processes with applications to the second eigenvalue problem for Fokker-Planck operators and simulated annealing. Acta Applicandae Mathematicae 19: 253-295.
Shepard, D.M., M.C. Ferris, R. Ove, and L. Ma. 2000. Inverse treatment planning for gamma knife radiosurgery. Medical Physics 27 (12): 2748-2756.

# The Gamma Knife Problem

Darin W. Gillis

David R. Lindstone

Aaron T. Windfield

University of Colorado

Boulder, CO

Advisor: Anne M. Dougherty

# Abstract

Noninvasive gamma-knife radiosurgery treatment attacks brain tumors using spherical radiation dosages (shots). We develop methods to design optimized treatment plans using four fixed-diameter dosages. Our algorithms strictly adhere to the following rule: Shots cannot violate tumor boundaries or overlap each other.

From a mathematical perspective, the problem becomes a matter of filling an irregularly shaped target volume with a conglomeration of spheres. We make no assumptions about the size and shape of the tumor; by maintaining complete generality, our algorithms are flexible and robust. The basic strategies of the algorithms are deepest-sphere placement, steepest descent, and adaptation.

We design representative 3D models to test our algorithms. We find that the most efficient packing strategy is an adaptive algorithm that uses steepest descent, with an average coverage percentage of $40\%$ over 100 test cases while not threatening healthy tissue. One variation covered $56\%$ of one test case but had a large standard deviation across 100 test cases. It also produced results four times as fast as the adaptive method.
# Background

# Brain Tumors

The average volume of a tumor operable by radiosurgery is about $15\mathrm{cm}^3$ [Lee et al. 2002]. We generate 3D tumor models of approximately this volume with varying physical dimensions.

# The Gamma Knife

The gamma knife unit consists of 201 individual cobalt-60 radiation sources situated in a helmet. The 201 beams converge at an isocenter, creating a spherical dose distribution ("shot"). Four sizes of spheres are possible: 4, 8, 14, and $18\mathrm{mm}$ in diameter. A radiosurgery plan is used to map out shots to destroy the tumor without harming the patient. Following successful treatment, surviving cancer cells lose their ability to grow. In fact, many partially destroyed tumors shrink or even disappear in time [Kaye and Laws 1995].

# The Problem

The plans should arrange radiation doses so that tumor destruction is maximized, healthy tissue is protected, and hot spots are avoided. Thus, the algorithms are subject to the following constraints:

- Prohibit shots from penetrating outside the target area.
- Prohibit overlap of shots, preventing hot spots.
- Maximize the percentage covered of the tumor, or target volume.
- Use a maximum of 15 shots.

# Assumptions

- The tumor is homogeneous; it is equally productive to treat any part of it.
- The tumor is modeled discretely using a three-dimensional image.
- No assumptions are made about the shape of the tumor.
- Tumor cells are either radiated or not; there are no partial dosages.

# Problem Approach

We divide the problem into three pieces:

- create a variety of 3-dimensional brain tumor models,
- develop and refine sphere-packing algorithms, and
- test and compare algorithms using the tumor models.

# Data Models

Our data consist of a $100 \times 100 \times 100$ array that represents a $1000~\mathrm{cm}^3$ space around the brain tumor. We refer to each element of the array as a voxel (three-dimensional pixel).
Each voxel represents $1\mathrm{mm}^3$ of brain tissue [Wagner 2000]. We use 1s to indicate tumor and 0s to represent healthy tissue, and we populate the arrays with tumor models as described below.

# Sphere Tumor Model

Our first model is based on the simple equation for a sphere, $(x - x_0)^2 + (y - y_0)^2 + (z - z_0)^2 = r^2$ , where the center of the sphere is $(x_0, y_0, z_0)$ and the radius is $r$ . We fill in the voxels representing the tumor by applying the inequality $(x - x_0)^2 + (y - y_0)^2 + (z - z_0)^2 \leq r^2$ throughout the test volume.

# Ellipsoid Tumor Model

The ellipsoid model uses the same principle as the spherical model. The inequality

$$
\frac {(x - x _ {0}) ^ {2}}{a ^ {2}} + \frac {(y - y _ {0}) ^ {2}}{b ^ {2}} + \frac {(z - z _ {0}) ^ {2}}{c ^ {2}} \leq 1
$$

represents the interior of an ellipsoid. The spherical and ellipsoid models are a basis for the mutated sphere tumor model.

# Mutated Spherical Tumor Model

Tumor shapes can be modeled by unions of ellipsoids [Asachenkov 1994]. Thus, our most accurate model is created by uniting several ellipsoids at random locations. We start with a small spherical tumor. Then we create three discrete uniformly distributed random variables $U_x$ , $U_y$ , and $U_z$ , where $(U_x, U_y, U_z)$ represents a randomly chosen voxel within the sphere. This point becomes the center of an ellipsoid that is added to the tumor. The $a$ , $b$ , and $c$ parameters that define the dimensions of the ellipsoid are given by three other random variables $U_a$ , $U_b$ , and $U_c$ , continuous uniform random variables on [5, 15].

# Sphere-Packing Algorithms

In practice, tumors are usually represented as a 3D image obtained from MRI (magnetic resonance imaging). We discretize the tumor and the removal spheres for processing. We explore four different methods:

- first-deepest,
- steepest descent,
- improved steepest descent, and
- adaptive.
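The voxel tumor models above can be sketched in a few lines of NumPy. This is our illustrative reconstruction, not the team's code: the grid size is reduced, the seed and sphere parameters are arbitrary, and all names are ours.

```python
import numpy as np

# Sketch of the voxelized tumor models (reduced grid; all names are ours).
N = 50
x, y, z = np.indices((N, N, N))          # voxel coordinates, 1 voxel = 1 mm^3

def sphere_mask(center, r):
    x0, y0, z0 = center
    return (x - x0)**2 + (y - y0)**2 + (z - z0)**2 <= r**2

def ellipsoid_mask(center, a, b, c):
    x0, y0, z0 = center
    return ((x - x0) / a)**2 + ((y - y0) / b)**2 + ((z - z0) / c)**2 <= 1.0

# Mutated spherical tumor: start from a small sphere, then union ellipsoids
# centered at randomly chosen tumor voxels, semi-axes uniform on [5, 15].
rng = np.random.default_rng(0)
tumor = sphere_mask((25, 25, 25), 8)
for _ in range(3):
    inside = np.argwhere(tumor)
    cx, cy, cz = inside[rng.integers(len(inside))]
    a, b, c = rng.uniform(5, 15, size=3)
    tumor |= ellipsoid_mask((cx, cy, cz), a, b, c)

volume_mm3 = int(tumor.sum())            # tumor volume in mm^3
```

The boolean array plays the role of the 1s/0s voxel matrix; summing it gives the tumor volume directly.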
# Grassfire Algorithm

All of our sphere-packing methods employ the grassfire algorithm [Wagner 2000]. The grassfire method progressively marks the layers of the tumor from the outside in, analogous to a fire burning away an object one layer at a time.

For each 1-valued voxel, all surrounding voxels are surveyed. If any are 0-valued (outside the tumor), the current voxel is set to a depth of 2, which represents the boundary of the tumor. This process is repeated for every voxel in the 3D test volume, with layer numbers progressively increasing until all of the 1s in the array have been consumed. In other words, the grassfire method calculates an approximate measure of depth for each voxel in the tumor. Doing so gives an easy measure of the largest sphere that can be placed at any given point without violating the tumor boundary. If the voxel is at depth 8 or 9, then a 7-mm sphere should be used, and so on.

The basic operation of grassfire is illustrated (in two dimensions) in Figure 1, which shows the effect of grassfire on a circle. For readability, the boundary layer has been left at a value of 1, and the data arrays represent a much smaller area than the plots. Depth is indicated by shade; darker is deeper. The arrows show progression through initial grassfiring to the removal of a small circle from the center (just as spheres are removed from the tumor). Notice that when grassfire is applied after removal, the maximum depth is smaller than that of the original circle.

Although it is simple, grassfire provides the foundation for all of our sphere-packing algorithms.

# Sphere Placement Methodology

After grassfiring the tumor model, the deepest point in the tumor is easily found. Reasonably, the deeper the point in the tumor, the more likely it is that a large-radius sphere can be placed without harming normal tissue. Placing large (particularly $9\mathrm{-mm}$ ) spheres in the tumor increases the coverage of the solution.
Conversely, the smallest sphere ( $2\mathrm{-mm}$ radius) is the least efficient in eradicating cancerous tissue. For the average tumor size, the $2\mathrm{-mm}$ sphere removes less than $1\%$ of the volume. Therefore, we place as many large spheres as possible before placing smaller spheres. + +![](images/795a03c18c5cdef913e379d9536e92ce3d2a0e564572dae6c3d579728d19ec6d.jpg) +Figure 1. Grassfire algorithm flowchart. + +# First-Deepest Method + +The first-deepest method begins by applying the grassfire algorithm to the tumor data. We generate a list of the deepest voxels (nearly all volumes will have multiple "deepest" points after the layering process). This method simply takes the first voxel off that list and places the removal sphere at that location; the radius of the sphere used is determined from the depth value at that voxel. For example, if a voxel is 8 layers deep (and thus $8\mathrm{mm}$ deep), then a 7-mm radius sphere can be removed from that location without harming healthy tissue. + +# Step-by-Step + +1. Grassfire the tumor data. +2. Grab one of the points at the deepest level. +3. Calculate the equation for the sphere centered at that point with the largest acceptable radius. +4. Set all voxels within the radius of the sphere to zero (effectively removing a spherical portion of tumor). +5. Reset all nonzero voxels to 1s (resetting the tumor for another grassfire run). +6. Return to step 1. + +This method is very robust; when a sphere is removed, it is simply seen as a new tumor boundary, so any of the variables such as shot size, number of shots, or tumor shape can change and the algorithm still works. + +# Variations + +We try to improve the method by looking down the list of the deepest voxels to find a more appropriate sphere center. We accomplish this by giving each voxel a score based on the depths of its neighboring points. Essentially, the algorithm tries to place the sphere at the greatest possible average depth. 
But doing this does not improve the total coverage; in fact, this algorithm is inferior to the first-deepest method. Placing the sphere as deep as possible reduces the depth of the next iteration, preventing more large spheres from being placed. A better strategy would be placing the sphere as shallow as possible (see Figure 2 for a 2D example), in an effort to leave room for more large spheres. + +# Steepest Descent Method + +The method of steepest descent tries to place the largest possible sphere (as determined by grassfire) close to the tumor boundary. The steepest descent uses a scoring function to find the best location for the biggest sphere. + +![](images/7195ce1b0537209212b34b457d0d8707f0369ecb2621aa7f2518cd85374750f4.jpg) +Figure 2a. A single large circle in the center prevents placement of any more large circles. + +![](images/87c746b7afed23585b41a467efea8e8edf4024660f7ce6846ba9c824afea9d5b.jpg) +Figure 2b. If the first circle is placed far from the center, a second large circle can fit. + +Starting from the deepest voxel, we calculate the gradient of the score function and proceed along the steepest path until a local max is reached, and this point is used as a sphere isocenter. This is implemented as follows: + +1. Calculate the score of the deepest voxel. +2. Calculate the score of all surrounding voxels. +3. If the original voxel has the highest score, it becomes an isocenter, otherwise move to the highest scoring voxel and go back to step 1. + +# Scoring Function + +This method is only as good as the scoring function. We have two factors, $W_{1}$ and $W_{2}$ , that figure into the score of a given voxel. + +The $W_{1}$ factor measures the depth of any nearby voxels; more specifically, it is an estimation of the depth-density of a sphere centered at that voxel. 
More rigorously, we estimate $W_{1}$ at voxel $(x_0, y_0, z_0)$ by

$$
W _ {1} \approx \frac {\iiint_ {S (x_0, y_0, z_0)} D (x, y, z) \, dx \, dy \, dz}{\text{total volume of sphere}},
$$

where $S(x_0, y_0, z_0)$ is a sphere centered at $(x_0, y_0, z_0)$ and $D(x,y,z)$ is the depth at $(x,y,z)$ ; so effectively $W_{1}$ represents the average depth throughout the sphere's volume. To speed up the scoring function, we estimate this volume integral by averaging the depth values over a cube surrounding the point. The sphere is inscribed within our cube of estimation; given that the scoring function is only a basis of relative comparison, the level of error is tolerable.

The $W_{2}$ factor is used to make sure that normal tissue is not contained in the shot. Given the sphere size that will be used for the potential shot, we have

$$
W _ {2} = \left\{ \begin{array}{l l} 1, & \text{if depth}(x _ {0}, y _ {0}, z _ {0}) > \text{shot radius}; \\ 0, & \text{if depth}(x _ {0}, y _ {0}, z _ {0}) \leq \text{shot radius}. \end{array} \right.
$$

This is another place where our decision to prohibit destroying healthy tissue becomes a central part of our solution. Total coverage of the tumor could be improved at the expense of healthy tissue by implementing a continuous scoring function for the $W_{2}$ weight.

Finally, the total score is given by $W_{2} / W_{1}$ . This scoring function rewards the shot for being close to the tumor edge (or a removed sphere, since this looks like an edge to our algorithms) while still being entirely contained within the tumor.

# Improved Steepest Descent Method

The improvement on steepest descent is to allow spheres to be placed closer to the tumor boundary. The only changes lie in how the $W_{1}$ and $W_{2}$ weights are calculated in the score function.
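The $W_1$/$W_2$ scoring just described can be sketched as follows. This is our hedged reconstruction, not the team's implementation: the `depth` array (grassfire depths, 0 outside the tumor) and all names are assumptions.

```python
import numpy as np

# Sketch of the W1/W2 voxel score. `depth` is a 3D array of grassfire
# depths (0 outside the tumor); names and conventions are ours.
def score(depth, x0, y0, z0, shot_radius):
    r = shot_radius
    # W1: average depth over a cube circumscribing the candidate sphere,
    # the paper's fast estimate of the volume integral.
    cube = depth[max(x0 - r, 0):x0 + r + 1,
                 max(y0 - r, 0):y0 + r + 1,
                 max(z0 - r, 0):z0 + r + 1]
    w1 = cube.mean()
    # W2: hard feasibility test -- the shot must fit entirely inside.
    w2 = 1.0 if depth[x0, y0, z0] > shot_radius else 0.0
    # Total score W2 / W1 rewards feasible shots near the tumor edge.
    return w2 / w1 if w1 > 0 else 0.0
```

A steepest-ascent walk would evaluate this score at a voxel and its neighbors, moving to the highest-scoring neighbor until the current voxel wins.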
# Altered Score Function

The improved scoring function calculates the $W_{2}$ score factor in a more rigorous manner. Before, we used the depth (determined by grassfire) to determine whether the shot would fit. The grassfire depth is actually a conservative depth estimate—there can be more distance between the voxel and the boundary than it indicates. To fit a sphere more tightly, we construct a list of the points on the boundary and consult it each time $W_{2}$ is calculated. Now,

$$
W _ {2} = \left\{ \begin{array}{l l} 1, & \text{if no shot voxel lies in } \{(x, y, z) \mid (x, y, z) \text{ is outside the tumor boundary}\}; \\ 0, & \text{otherwise}. \end{array} \right.
$$

# Adaptive Method

The adaptive method generates an initial sequence of shots using steepest descent (coverage could be improved by using improved steepest descent, but the simulations would run an order of magnitude slower). The initial sequence is then changed one sphere at a time and repacked until each shot in the sequence has been changed once. Theoretically, taking this action allows for the exchange of a large sphere for many smaller spheres, which may be more effective. It follows the idea that perhaps some spheres need to be placed poorly initially in order to allow smarter shots to be placed down the line.

For instance, consider an initial sequence of length $N$ that starts with the following shots: $\{9\mathrm{mm}, 4\mathrm{mm}, 4\mathrm{mm}, \ldots \}$ . Using the same initial tumor, the adaptive method runs the steepest descent method, but the new sequence must change the first sphere, so it starts with a 7-mm sphere. On the second iteration, the new sequence keeps the leading 9-mm sphere but must change the second element to a 2-mm sphere. The third iteration starts with a 9-mm sphere, followed by a 4-mm sphere, and then changes the third sphere to a 2-mm sphere. This continues until all $N - 1$ sequences have been generated.
We seek and use the sequence with maximum coverage. + +# Quantitative Results + +Table 1. Comparison of methods. + +
| Method | Avg. time (s) | Relative speed | Mean % coverage | SD | Min. % | Max. % |
|--------|---------------|----------------|-----------------|----|--------|--------|
| First-deepest             | 83   | 1    | 34 | 5.2 | 20 | 45 |
| Steepest descent          | 104  | 1.25 | 38 | 2.9 | 32 | 45 |
| Improved steepest descent | 229  | 2.8  | 37 | 6.2 | 28 | 56 |
| Adaptive                  | 1025 | 12.3 | 40 | 2.5 | 35 | 44 |
We ran each method on the same suite of 100 test cases, except for the adaptive method, for which, because of its longer runtime, we used a subset of 20 cases. Table 1 contains a summary of the results. The maximum coverage that we could achieve was $56\%$ (Figure 3).

Our algorithms work on tumors of arbitrary shape, even disjoint tumors. The result of one such pack is shown in Figure 4.

# Qualitative Results

# First-Deepest Method

As expected, this algorithm's performance is quantitatively the weakest. Its maximum coverage of $45\%$ is comparable to that of the other methods, but the minimum value of $20\%$ and the average of $34\%$ are quite low compared with the other methods. It also exhibits large random variations in coverage, meaning that the algorithm is equally likely to fill in a low percentage as a relatively high percentage of a tumor.

The method is inconsistent and yields the lowest average coverage of all the methods, but it is the fastest.

![](images/20aba2966a4c3ebc40fcf3b92afa994a6ba31d65de13f0e8c3f9b2d7f7cfc06c.jpg)
Figure 3. Our best result: $56\%$ coverage.

![](images/68db46388e3a14615655ccd494bb01762e77647bf0f076789ae8454f2825faff.jpg)
Figure 4. Sphere packing into two disjoint tumors.

# Steepest Descent

The minimum and average coverages are significantly better than for first-deepest, and the method is much more consistent. This makes sense, because this method adapts to variations in tumor size and shape rather than using the first available isocenter for each sphere. It is fast compared to the improved steepest descent and adaptive methods.

# Improved Steepest Descent

This algorithm is similar to steepest descent, except for the ability to pack spheres closer to edges and other spheres. This method does poorly on average, worse than steepest descent and the adaptive method. But it yields the best coverages—over $50\%$ on four different test cases.
Also, this method has the highest standard deviation of all our methods.

# Adaptive Method

The average coverage of approximately $40\%$ is the best of all four algorithms, and this method also has the lowest standard deviation. But it is the slowest and most complicated algorithm.

# Conclusions

All of our algorithms have strengths and weaknesses. The first-deepest method is fast, while steepest descent is consistent in coverage. Improved steepest descent yields some of the best results in terms of coverage, while the adaptive algorithm maintains the highest average coverage and smallest standard deviation.

# Weaknesses

Prohibition of hot spots and of radiating healthy brain tissue prevents our algorithms from covering much of the tumor.

# Strengths

Our algorithms can process any possible tumor. They are "patient-friendly"—they don't destroy anything outside of the tumor, nor do they produce spheres that intersect each other. They are also simple and robust.

# Future Work

It would be nice to merge the adaptive algorithm with improved steepest descent to try to build up the coverage.

# References

Asachenkov, Alexander, et al. 1994. Disease Dynamics. Boston, MA: Birkhäuser.
Aste, Tomaso, and Denis Weaire. 2000. The Pursuit of Perfect Packing. Philadelphia, PA: Institute of Physics Publishing.
Conway, J.H., and N.J.A. Sloane. 1988. Sphere Packings, Lattices and Groups. New York: Springer-Verlag.
Ferris, Michael C., R.R. Meyer, and W. D'Souza. 2002. Radiation treatment planning: Mixed integer programming formulations and approaches. Optimization Technical Report 02-08, Computer Sciences Department. Madison, WI: University of Wisconsin. http://www.cs.wisc.edu/~ferris/papers/rad-mip.pdf.
Kansal, A.R., et al. 2000. Simulated brain tumor growth dynamics using a three-dimensional cellular automaton. Journal of Theoretical Biology 203: 367-382.
Kaye, Andrew H., and Edward R. Laws. 1995. Brain Tumors. New York: Churchill Livingstone.
+Lee, J.Y., A. Niranjan, J. McInerney, D. Kondziolka, J.C. Flickinger, and L.D. Lunsford. 2002. Stereotactic radiosurgery providing long-term tumor control of cavernous sinus meningiomas. Journal of Neurosurgery 97 (1): 65-72. +Wagner, Thomas H., et al. 2000. A geometrically based method for automated radiosurgery planning. International Journal of Radiation Oncology Biology Physics 48 (5): 1599-1611. +Yi, Taeil. 2000. A Tree in a Brain Tumor. Gainesville, FL: University of Florida. + +# Editor's Note + +The May 2003 issue of SIAM News (Vol. 36, No. 4) contains on p. 12 a photo of the University of Colorado team, which we reproduce here, together with perspectives on the MCM experience. The team members recommend dividing up roles and reveal that they used a total of eight computers during the contest. Their advisor, Anne Dougherty, also offers her perspective; she emphasizes that the contest "actually begins in the late fall," when students are recruited and training sessions are held—"to give the students time to get to know one another and to begin to form their teams." + +On the same page, James Case, a SIAM judge for the MCM, gives his perspective on both MCM problems. Regarding the Gamma Knife Treatment Problem, he remarks, "Only the bravest contestants were prepared to concede that their algorithms might fail to achieve even $50\%$ coverage." + +![](images/05477e10032a5bc8d0d0d6afdd38d8467c70f1ce8608cfa1ff2ca49510430643.jpg) + +Team members Aaron Windfield, Darin Gillis, and David Lindstone, who are members of the SIAM Student Chapter at the University of Colorado, Boulder. Their paper earned the SIAM Award for the Gamma Knife Treatment Problem. (Photo courtesy of team advisor Anne M. Dougherty.) + +# Shelling Tumors with Caution and Wiggles + +Luke Winstrom + +Sam Coskey + +Mark Blunk + +University of Washington + +Seattle, WA + +Advisor: James Allen Morrow + +# Abstract + +We discuss a simple model for tumor growth, which results in realistic tumor shapes. 
We then address the question of efficient irradiation of tumors in terms of packing their volume with disjoint spheres of given radii. We develop three algorithms, based on a shelling method, which give recommendations for optimal sphere-packings. We interpret these packings as treatment plans, and we examine the corresponding radiation dosages for various qualities.

We analyze the effectiveness of our algorithms in recommending optimal dosage plans. Several problems with the algorithms are addressed. Finally, we conclude that all of our algorithms have a common difficulty covering very large tumors, and that while one of our algorithms is effective as a means of packing the volume with spheres, its packings do not translate into efficient treatment plans.

# Introduction

The gamma knife is a highly effective means of destroying tumor tissue. Its 201 beams of ionizing radiation, produced by cobalt-60 sources, converge on a specific region and deliver to it a powerful dose of radiation without harming the surrounding tissue.

The point where the beams should converge must be determined, so the patient's brain is scanned with an MRI or CT imaging device to produce a three-dimensional image of the tumor. An optimal treatment plan is determined by analyzing this image and deriving a collection of points inside the tumor; doses concentrated at those points are administered by the gamma knife. Selecting an optimal treatment plan can be lengthy and difficult.

We seek to automate the process of selecting an optimal treatment. We describe an algorithm that takes a three-dimensional image and determines an optimal choice for the position and size of spheres that cover the tumor as much as possible while at the same time minimizing the number of spheres utilized.
+ +The basic ingredients of our solution include: + +- three tumor growth models representing three different degrees of nonuniformity; +- three sphere-packing algorithms, each representing a different spin on the concept of "shelling"; and +- testing of the algorithms against three different tumor sizes. + +# Exploring the Layout of the Problem + +# Collimators + +A collimator helmet is used to direct the radiation toward a specific point in the patient's head. Cobalt-60 as a radiation source emits photons in an isotropic fashion; so, by introducing a long cylinder between the source and the target, the beams are constrained to a certain angular distribution. Over the short distance from the end of the cylinder to the target location, the radiation is confined approximately to a cylinder or column. This collimation of the radiation allows specific targeting and reduced exposure. + +The collimator helmet is essentially a spherical array of cylinders, each of which directs radiation through a common point. This configuration allows 201 low-intensity beams to enter the brain and intersect in one well defined location. Since the beams are positioned around the skull evenly and are relatively weak, no normal tissue receives a high dose. However, where the beams intersect in the tumor, a huge dose is delivered (Figure 1). Because the collimator has only four different settings, we are physically restricted to four different dose sizes. + +![](images/20d5a4322589a1342f3ac89f7bae89e55e8f27a82af8671d8e8cd3f7c6c8e53d.jpg) +Figure 1. Radiation profile where spheres meet. The bottom two curves represent radiation profiles for individual shots (radii of 7 and 4 centered at 0 and 11) and the bold line represents the total radiation dose received. + +# Radiation Distribution + +Because the cobalt-60 sources are evenly spaced, their intersection can be represented by a spherically symmetric radiation profile. 
This profile can be approximated well by error functions [Ferris and Shepard 2000]. Thus, we define the radial distribution of dose intensity to be

$$
f_{\rho}(d) = \sum_{i = 1}^{2} \lambda_{i} \left[ 1 - \operatorname{erf}\left( \frac{d - r_{i}}{\sigma_{i}} \right) \right], \tag{1}
$$

where $\lambda_{i}$, $\sigma_{i}$, and $r_{i}$ all depend on $\rho$, the radius of the sphere, and $d$ is the distance from the center of the sphere. By fitting the values of $\lambda$, $\sigma$, and $r$ to experimental data, an accurate model for radiation intensity is generated. Ferris and Shepard [2000] give fitted values for radii of various sizes.

It is impractical to design an algorithm to optimize the placement of such intersections (called shots) over multiple treatments. Thus, a placement method must be developed that is both accurate and quick. By approximating the shots from collimator helmets as spheres, a much more approachable problem may be formulated: How can we pack the tumor with as many spheres as possible? By using spheres of the same radii as the shots, we agree very well with the drop-off point of the radiation distribution defined above. Thus, by packing the tumor, we can optimize the placement and number of shots.

# Isosurfaces and Isodose

The constructs that define habitability within our brain space are the isodose surfaces, or isosurfaces. These surfaces are the level sets of the scalar field that makes up the radiation distribution in the brain space. Every point on one side of such a surface has a higher dosage than any point on the other. In an ideal case, the edge of the tumor would be an isodose surface, with every point inside the tumor receiving $100\%$ of the prescribed radiation and every point not in the tumor receiving none. However, this is infeasible, so some compromise must be reached.
By comparing the regions within isosurfaces to the tumor, it is easy to see how much tumor has been cleared away, how much tumor is left, and how much brain has been sacrificed.

# Digital Images of Tumors

Typically, an MRI delivers an image of the brain case with a resolution of approximately $1\mathrm{mm}^3$. Since the brain is about $100\mathrm{mm}$ on a side, the tumor image sits inside a $100 \times 100 \times 100$ voxel space, where each voxel represents $1\mathrm{mm}^3$. We assume that the shape given by the imaging software is an accurate and conservative approximation of the tumor, so that if we subject voxels denoted as tumorous to doses of radiation, we do not damage normal brain tissue.

# Problem Restatement

Given these considerations, we may restate the problem before us as follows. Given a digital image of tumor tissue, produce a list of at most 15 spheres (centers and radii) such that:

- The spheres are entirely contained within the tumor tissue.
- Each radius is either 2, 4, 7, or $9\mathrm{mm}$.
- The spheres are disjoint (or nearly so, to avoid overdosing).
- The total volume of the spheres is as close to that of the tumor as possible.

# Tumor Generation Model

We could not find images of actual tumors, so we generated our own. We created an algorithm to simulate tumor growth in time steps. This algorithm uses a cellular automaton to model cellular division [Kansal 2000]. Ideally, the lattice would have a structure similar to a biological fabric, but we take instead a simple three-dimensional grid, for several reasons:

- It is computationally convenient: not only is simulation elementary, but the output resembles a digital image without further processing.
- We are interested in studying the final shape of tumors, not their growth. Our algorithm is sufficient to produce volumes representative of tumor shapes.

Each node of the lattice represents an idealized mathematical cell.
We begin by placing a single cell in the center of the space, allowing the cells to "divide" (generate a new cell) at each time step until the tumor reaches the desired size. Whether or not a cell divides depends on several factors, which vary among the three different generation methods discussed below. + +In all cases, we smooth out unrealistic or undetectable holes in the tumor after it has grown to the desired size. It is theoretically possible for a tumor to reconnect with itself in a toroidal shape, but this does not happen in practice. + +# The Uniform Pressure Tumor + +This simulation supposes that the tumor is growing in a uniformly pressurized environment. At each time step, the probability for a given cell to divide is proportional to $(1 - r / R)$ , where $r$ is the distance from the cell to the origin and $R$ is a bound on the radius of the tumor. Since we are interested only in the final tumor shape and not its development, the constant of proportionality is essentially irrelevant. The resulting tumor shape is generally a ragged sphere. + +# The Varied Pressure Tumor + +Suspecting that the preceding model may be too symmetric, we created a second algorithm that varies the pressure against the tumor, corresponding to the effect that the surrounding area of the brain would have on the growth of a tumor. To do this, we specify certain random points $x_{i}$ as pressure points and curb tumor growth near these points. The probability for a given cell $x$ to divide is now proportional to the product $(1 - r / R)\prod |x - x_i| / M$ , where $M$ is the maximum width of the tumor space and $r$ and $R$ are as before. + +Tumors generated with this model are slightly more elongated and pointed than before. Some curve into a lima-bean shape around the pressure points. 
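To make the growth rule concrete, the following minimal sketch (our illustration, not the authors' code) implements a varied-pressure cellular automaton on a cubic lattice: a cell at distance $r$ from the origin divides into a random neighboring site with probability proportional to $(1 - r/R)\prod_i |x - x_i|/M$. The radius bound `R`, the pressure points, and the step count are arbitrary assumptions.

```python
import random

def grow_tumor(R=10, M=100, n_steps=50, pressure_points=(), seed=0):
    """Varied-pressure growth: a cell at radius r divides with
    probability (1 - r/R) * prod_i min(|x - x_i| / M, 1)."""
    random.seed(seed)
    cells = {(0, 0, 0)}                       # single seed cell at the origin
    neighbors = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
                 (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    for _ in range(n_steps):
        for cell in list(cells):              # snapshot: new cells wait a step
            r = sum(c * c for c in cell) ** 0.5
            if r >= R:                        # growth stops at the radius bound
                continue
            p = 1.0 - r / R
            for q in pressure_points:         # curb growth near pressure points
                d = sum((a - b) ** 2 for a, b in zip(cell, q)) ** 0.5
                p *= min(d / M, 1.0)
            if random.random() < p:
                dx, dy, dz = random.choice(neighbors)
                cells.add((cell[0] + dx, cell[1] + dy, cell[2] + dz))
    return cells
```

With no pressure points this reduces to the uniform-pressure model; placing points near the lattice flattens growth on that side, producing the elongated, lima-bean shapes described above.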
# The Running Amok Tumor

Newman and Lazareff [2002] postulate that once a tumor begins to jut in one direction, cells in the outcropping are more likely to receive nutrients from the surrounding brain tissue and hence are more likely to grow, whereas the less extended regions of the tumor are less likely to grow. The tumor is more likely to grow in a direction in which it has grown before; the result is a tumor that is spiny in appearance. Thus, in this third model, we remove the outside pressures and specify that more recently generated cells be more likely to divide than older ones. The resulting shape is a bulky tumor with tendrils showing recent and active growth.

# Packing Spheres into Volumes

The sphere-packing problem is ages old and yet unsolved; the very problem that we face is NP-complete [Wang 1999].

The military, when faced with packing rounds into a barrel, is rumored to pack them as tightly as possible and then drive the barrel in a large truck over a bumpy road; doing so clears up enough space for another handful of rounds. We implement an algorithmic approximation to this idea.

# The General Approach

Our plan is based on two fundamental principles:

Fit the largest spheres first. That is, we pack large spheres into the volume before considering smaller ones. The reason for this is the steep fall-off of coverage as shot radius decreases (Table 1). It takes twice as many 7-mm spheres to do the job of a 9-mm sphere, consuming more diametric space in the tumor and complicating the dosage plan. Since one of our stated goals is to minimize the number of doses, the principle of utilizing larger spheres first is clearly a good one.

Table 1. Shot radius vs. coverage.
| Radius (mm) | 9 | 7 | 4 | 2 |
|---|---|---|---|---|
| Coverage (mm³) | 3000 | 1300 | 250 | 12 |
![](images/e3497a4d5d815ec4c9071cbb104ee6718f84652506eff9192b75555316990043.jpg)
Figure 2. The sphere on the left is a better choice than the one on the right.

Spheres should hug the tumor edge. There is very likely to be a many-way tie for where to place the largest sphere that fits inside a given volume. But not all choices are equal: a centrally placed sphere may preclude fitting other large spheres into the volume (Figure 2). For example, it would be unwise to place a 9-mm shot at the center of a spherical tumor that is 16 mm in radius, as this would not allow us to place any 7-mm spheres into the tumor.

# Depth-Search Algorithm

Because the tumor data is in the form of a digital image, we pack spheres into the image directly. That is, we select spherical subsets of voxels from the given tumor and consider the optimal placing of spheres (centered at voxels inside the tumor) that intersect neither one another nor the edge of the tumor.

The algorithm acts by a shelling method [Wagner et al. 2000], which we describe in detail. The first step is to identify the boundary of the tumor, marking the voxels on a bounding surface with a value of 0. We then assign every tumorous voxel a positive value, denoting its distance from the boundary nodes, via Algorithm 1.

The sphere that we place inside the tumor should be as large as possible but also as close to the boundary as possible. We therefore take for the center of this sphere a point at depth $r$ from the boundary, where $r$ is the greatest of 2, 4, 7, and 9 that is available. This way, when we shell a sphere at depth $r$, the shelled area lies along the edge of the tumor as much as possible.

In any case, our algorithm must choose the center $x$ of the sphere from the set $E_r$. How this is done varies among the three algorithms.

Having chosen our candidate shell center $x$, we remove from the tumor the sphere of radius $r$ centered at that voxel.
This translates to removing all voxels in the tumor whose coordinates are within $r$ of the center of the sphere.

Algorithm 1. The depth-evaluation algorithm.

    X is the set of tumorous voxels
    E_0 is the set of boundary voxels
    d := 0
    while X ≠ ∅ do
        for x in X do
            if x neighbors a point of E_d then
                move x from X to E_{d+1}
            end if
        end for
        increment d
    end while

We iteratively apply this shelling method to the volume of tumor that remains. The algorithm terminates after it has shelled 15 times, since that is the maximum number of shots allowed. However, we keep track at each stage of what percentage of the tumor has been eliminated, so that fewer shots may be recommended if they are deemed sufficient.

# A Hasty Algorithm

This is the simplest algorithm. Each time we must choose a center for our shell of radius $r$, we simply grab one at random. In some sense, this algorithm acts as a control for the next two.

# A Careful Algorithm

This next algorithm improves upon the previous one in an obvious way. At each stage, we select from the set $E_r$ a candidate center $x$ that will be most beneficial toward placing the remaining spheres. Since the placement of the largest spheres has the greatest effect on the percentage of volume covered, we determine which choice of center leaves the remaining region with the largest depth, by applying the shelling process to what remains of the tumor after the sphere centered at that point has been removed. We want the remaining region to have as high a depth as possible, to make it more likely that we can fit large spheres in the surrounding area.

While it would be preferable to compare all possible choices, to do so is in most cases too computationally intensive.
We therefore select a number of centers in $E_r$ (in this case, 5), compute in each case the depth of the volume of the tumor minus the sphere centered at that point, and from these 5 pick the one whose remaining tumor has voxels of the highest depth.

# A Wiggling Algorithm

Prototyping these two algorithms revealed that our Depth-Search algorithm does not always choose sphere centers as close to the tumor boundary as possible.

We developed the Wiggling algorithm as a partial solution to this problem. It rests on the idea that a sphere placed by the depth algorithm can still be moved afterward so that it hugs the boundary closely.

The Wiggling algorithm chooses a voxel $x$ from $E_r$ at random and places inside the tumor a sphere of radius $r$ centered at that voxel. It then transports the sphere, one unit at a time, starting off in a random direction and continuing until the sphere meets the boundary as closely as possible. That is, the algorithm simply transports the sphere until it discovers that the next step would land it outside the tumor. By no accident, this method very much resembles that truck on a bumpy road.

Unfortunately, the methods of this algorithm cannot be combined efficiently with the previous Careful algorithm; any care exerted would simply be undone by wiggling.

# Output

One cannot distinguish from the three-dimensional images the differences among our three algorithms. Figure 3 shows a cut-away plot of the tumor wall, together with 15 spheres selected by the Careful algorithm.

# Analyzing Sphere-Placement Plans

# Computing Dose Distributions

After completing calculations using discrete spheres and voxel models of tumors, we calculate the actual dose distribution from the proposed treatment plan.
Using the centers of the spheres and the radially symmetric radiation functions presented earlier, we construct a scalar field in brain space defined by the formula

$$
D(x, y, z) = \sum_{i = 1}^{n} f_{r_{i}}\left( \sqrt{(x - x_{i})^{2} + (y - y_{i})^{2} + (z - z_{i})^{2}} \right),
$$

where $f$ is the function given in (1) on p. 361 and $(x_i, y_i, z_i)$ is the center of the $i$th sphere with radius $r_i$.

For all $(x,y,z)$ in a voxel, we use the center of the voxel. Thus, all the points in the voxel get the value of $D$ at the center, and the values of $D$ are thereby discretized. Of course, it would be more accurate to integrate over the entire cube and use the computed value as the representative number for the voxel, but this is too computationally intensive, and the accuracy gained on test cases was minimal. Using this scalar field, we can calculate isosurfaces of any value and the volume of the region contained in each isosurface. By taking various different isosurfaces, the efficiency of the spherical covering plan can be calculated. The amount of damage to the surrounding tissue is also available.

![](images/2d21b237ea29bde789ff16e817e06a334e34aee1bcc815f1fe7bddb4ed4f38a6.jpg)
Figure 3. Spheres packed in a tumor by the Careful algorithm.

# Determining the Number of Shots

Each step in our algorithm is executed in the same way whether or not there will be a subsequent shot. Hence, computing every shot and then using some set of parameters to determine the value of each successive shot enables us to compute the ideal number of shots for each tumor. Since this method is not terribly time-consuming (10-15 min for a large tumor), it is certainly feasible to run this process in the hospital and deliver a dose profile to a patient in moments.
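The dose computation can be rendered directly from (1). In the sketch below the profile constants $(\lambda_i, \sigma_i, r_i)$ are placeholders chosen only so that the dose plateaus near 1 inside a 7-mm shot and falls off at its edge; the real values are fitted to measured data [Ferris and Shepard 2000] and are not reproduced in this paper.

```python
import math

# ASSUMED illustrative parameters (lambda_i, sigma_i, r_i) for a 7-mm
# shot; not the fitted constants of Ferris and Shepard [2000].
PROFILE = {7: [(0.3, 1.0, 7.0), (0.2, 2.0, 7.5)]}

def shot_dose(radius, d):
    """Radial profile f_rho(d) = sum_i lambda_i * (1 - erf((d - r_i) / sigma_i))."""
    return sum(lam * (1.0 - math.erf((d - r) / sig))
               for lam, sig, r in PROFILE[radius])

def dose_field(shots, point):
    """Total dose D at a point: sum of radial profiles over all shots,
    where each shot is given as ((x_i, y_i, z_i), r_i)."""
    return sum(shot_dose(r, math.dist(point, c)) for c, r in shots)
```

Evaluating `dose_field` at every voxel center yields the discretized scalar field from which the isosurfaces and coverage statistics are computed.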
# Visualizing Dose Distribution

To facilitate visualization of dose distribution, we construct a program to slice the field of dose distribution and superimpose an image of the tumor on it (Figure 4). Multiple slices make it possible to see a three-dimensional isosurface profile of the tumor. The white areas in the images of Figure 4 are the regions of the tumor that receive a radiation dosage above the $30\%$, $50\%$, and $80\%$ cutoffs, together with the region of potential overdosing, i.e., a region with an intensity higher than 2.5 times that of a single shot, referred to as a hot spot.

![](images/9c7d6b8c85b67ee92c980f212d707f3fd5bb384015cf408bc6d6e727089dc8c6.jpg)

![](images/cee07c8e1e6b5f580e014b5c14a7913ae7b0b6d3ce3304e9412b3e84e63dd0ec.jpg)

![](images/73212bfa956f96793467af40b5c5f004b6466eed062ef9f6dd572ec279b3c5bc.jpg)

![](images/cef1f8080e8a36fbdad37463d7216c3210010f378a10108cbbcc6527c3550de6.jpg)
Figure 4. A single cross-sectional slice of a tumor (represented by dots). Top-left: $30\%$ cut-off. Top-right: $50\%$ cut-off. Bottom-left: $80\%$ cut-off. Bottom-right: hot spots only.

# Applying the Algorithm

We are allowed a maximum of 15 spheres. In all of our simulations, it was possible to place 15 spheres inside the tumors, since the tumors were of relatively high volume compared to a sphere, especially to a sphere of radius $2\mathrm{mm}$.

From each of our three tumor-growth models, we produced 30 tumor images, 10 each of sizes $10{,}000\mathrm{mm}^3$, $20{,}000\mathrm{mm}^3$, and $36{,}000\mathrm{mm}^3$. We then tested each of the three algorithms against all 90 tumors.

We performed two operations:

- We used the spherical dose pattern to create a scalar field of radiation exposure values throughout the $100 \times 100 \times 100$ brain space. We then used this field to calculate the exposures of every cell within the modeled brain and tumor and to decide which cells survived the treatment.
Using a $30\%$ contour as our cutoff for death and survival, and further calculating the percentage exposures at other isodose cutoff values, we obtained exposure percentages for various tumor shapes and sphere-placement algorithms (Table 2).

Table 2. Percentage of tumor receiving $30/40/50\%$ of shot intensity.

| Shape | 10,000 mm³ | 20,000 mm³ | 36,000 mm³ |
|---|---|---|---|
| *Hasty algorithm* | | | |
| Uniform | 96/86/71 | 96/86/68 | 96/86/68 |
| Varied | 95/82/70 | 96/89/70 | 76/69/54 |
| Amok | 94/87/70 | 91/82/63 | 78/66/49 |
| *Careful algorithm* | | | |
| Uniform | 96/85/69 | 98/91/70 | 99/92/73 |
| Varied | 99/96/84 | 92/82/69 | 92/86/72 |
| Amok | 94/85/64 | 88/79/62 | 79/66/50 |
| *Wiggling algorithm* | | | |
| Uniform | 99/95/82 | 91/83/69 | 88/81/67 |
| Varied | 99/96/84 | 92/82/69 | 92/86/72 |
| Amok | 91/85/72 | 79/71/56 | 76/65/51 |
- We took the spherical profiles generated by the placement algorithms and estimated the radiation exposure values over all of the brain space for each generation of spherical dose pattern created by the algorithm. When the total dosage of the tumor reached a specified value, in this case $30\%$, the spherical dose profile of that generation was recorded. Thus, the total number of shots delivered to the patient is minimized while maintaining effective coverage of the tumor. Reducing these data gives Table 3; a dash means that the algorithm could not fill $90\%$ of the volume with 15 spheres.

Table 3. Approximate number of shots needed to kill $90\%$ of the tumor.

| Shape | 10,000 mm³ | 20,000 mm³ | 36,000 mm³ |
|---|---|---|---|
| *Hasty algorithm* | | | |
| Uniform | 8 | 15 | 7 |
| Varied | 15 | 10 | 9 |
| Amok | 10 | - | - |
| *Careful algorithm* | | | |
| Uniform | 7 | 12 | 7 |
| Varied | - | 12 | 9 |
| Amok | 10 | - | - |
| *Wiggling algorithm* | | | |
| Uniform | 10 | 10 | 15 |
| Varied | 9 | 11 | 15 |
| Amok | 15 | - | - |

# Evaluation of Methods and Conclusions

# Difficulties Reaching $90\%$ Coverage

Even for simple volumes, it can be impossible to cover $90\%$ of a tumor with shots. To see this, consider a sphere of $10\ \mathrm{mm}$ radius. We may shell this with a
9-mm sphere, which would leave no room even for a 2-mm sphere and would still cover less than $3/4$ of the volume.

Simply put, the only way our algorithms can destroy $90\%$ of the tumors is by allowing hot spots in the tumor.

# Square Lattices vs. Euclidean Distance

Even worse, no instance of our algorithm can see that it is possible to put a 9-mm shot in the 10-mm sphere. This is because the depth-evaluation algorithm assigns depths to the tumorous voxels that do not entirely correspond to the physical distance between the voxels and the tumor edge. As Figure 5 (at left) shows, when voxels differ along a coordinate axis, the depth differential coincides with the distance between the voxels; in every other case, the depth does not quite accurately measure distance. For example, two voxels that are diagonally adjacent differ in depth by 1, whereas the Euclidean distance between their centers is $\sqrt{3}$.

In many situations, the shelling method gives centers a depth that is not an accurate representation of their position inside the tumor. For example, in Figure 5 (at right), the highlighted voxel has depth 3; so if we choose that voxel as the center of our circle, then we would place a circle of radius 3 at that point to cover the tumor. But it is clearly possible to squeeze a circle of radius 4 into the tumor. The problem is exaggerated further in three dimensions.

![](images/6a97c74cd2ace140e58f1dc9cf1b269cf6f82528e16db582013a4903d0c03c11.jpg)
Figure 5. Left: Metric inconsistencies. Right: A schematic highlighting a weakness of our depth model.

Our Wiggling algorithm was developed as an attempt to find a partial solution to this problem. By allowing the spheres to move toward the edge of the tumor, we hoped to pack the spheres as tightly as possible.
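The gap between shelling depth and true Euclidean distance is easy to demonstrate numerically. The sketch below (our illustration, not the authors' code) runs a 26-neighbor shelling, in the spirit of Algorithm 1, on a synthetic spherical tumor and compares the center voxel's depth with its actual distance to the nearest non-tumorous voxel; the grid size and radius are arbitrary assumptions.

```python
import math
from itertools import product

def shelling_depth(tumor):
    """Depth by 26-neighbor shelling: voxels touching non-tumor space
    get depth 1, their inward neighbors depth 2, and so on."""
    offsets = [o for o in product((-1, 0, 1), repeat=3) if any(o)]
    def nbrs(v):
        return [tuple(a + b for a, b in zip(v, o)) for o in offsets]
    depth, remaining = {}, set(tumor)
    frontier = [v for v in tumor if any(n not in tumor for n in nbrs(v))]
    d = 1
    while frontier:
        for v in frontier:
            depth[v] = d
            remaining.discard(v)
        frontier = [v for v in remaining if any(n in depth for n in nbrs(v))]
        d += 1
    return depth

# A spherical "tumor" of radius 5 voxels in a 13^3 space.
center = (6, 6, 6)
space = list(product(range(13), repeat=3))
tumor = {v for v in space if math.dist(v, center) <= 5}

depth = shelling_depth(tumor)
# True Euclidean distance from the center voxel to the nearest
# non-tumorous voxel, for comparison.
euclid = min(math.dist(center, w) for w in space if w not in tumor)
```

Here the center voxel gets shelling depth 3, because diagonally placed non-tumorous voxels (three diagonal steps away) are reached in three 26-neighbor moves, even though the nearest non-tumorous voxel is $\sqrt{26} \approx 5.1$ units away: the depth map would forbid a sphere that in fact fits.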
# Conclusion/Recommendation

For small or medium-sized tumors, the Careful algorithm is comparable to the Hasty algorithm; the Hasty algorithm is preferable on time considerations.

All of the algorithms, but especially the Wiggling algorithm, have difficulty handling very large tumors: ones so large compared to the sphere sizes that it is unlikely that any combination of spheres could fill the volume of the tumor to the degree required by the problem statement.

However, for reasonably sized tumors, the Wiggling algorithm is quite effective and works especially well toward filling in the varied-pressure tumors. This is because they bulge around the pressure points, forming pockets that are ideal for nesting spheres. When the spheres wiggle, they are more likely to settle into these regions and thus pack tightly. Furthermore, the Wiggling method runs much faster than the others, especially the Careful algorithm.

In fact, the Wiggling algorithm does such a good job of packing spheres that if we do not constrain the total number of spheres used, it tends to pack very many, resulting in a large total dosage. Furthermore, since the spheres are packed to the edge, the radiation spills out of the tumor into the surrounding normal tissue. So, while the Wiggling algorithm is a very good method of sphere-packing, it is not good for treatment planning.

To summarize:

- None of these algorithms should be used for very large tumors, which should be treated in multiple sessions.
- For moderate-sized tumors, the Hasty algorithm should be used in preference to the Careful algorithm.
- The Wiggling method is speedy and effective but does not translate well into an effective and safe treatment plan. However, we recommend it for tumors similar in shape to varied-pressure tumors, because it produces optimal plans for those.

# References

Ferris, Michael C., and David M. Shepard. 2000. Optimization of gamma knife radiosurgery.
In *Discrete Mathematical Problems with Medical Applications*, vol. 55 of DIMACS Series in Discrete Mathematics and Theoretical Computer Science, edited by D.-Z. Du, P. Pardalos, and J. Wang, 27-44. Providence, RI: American Mathematical Society. ftp://ftp.cs.wisc.edu/pub/dmi/tech-reports/00-01.pdf.
Kansal, A.R., S. Torquato, G.R. Harsh, E.A. Chiocca, and T.S. Deisboeck. 2000. Simulated brain tumor growth dynamics using a three-dimensional cellular automaton. Journal of Theoretical Biology 203 (4): 367-382.
McKernan, R.O., and G.D. Wilde. 1980. Mathematical models of glioma growth. In *Brain Tumors*, edited by D.G.T. Thomas and D.I. Graham. London, UK: Butterworth & Co.
Newman, William I., and Jorge A. Lazareff. 2002. A mathematical model for self-limiting brain tumors. http://www2.ess.ucla.edu/~newman/LGA.pdf.
Wagner, Thomas H., et al. 2000. A geometrically based method for automated radiosurgery planning. International Journal of Radiation Oncology Biology Physics 48 (5): 1599-1611.
Wang, Jie. 1999. Packing of unequal spheres and automated radiosurgical treatment planning. Journal of Combinatorial Optimization 3: 453-463.
Wu, Q. Jackie. 2000. Sphere packing using morphological analysis. In *Discrete Mathematical Problems with Medical Applications*, vol. 55 of DIMACS Series in Discrete Mathematics and Theoretical Computer Science, edited by D.-Z. Du, P. Pardalos, and J. Wang, 27-45. Providence, RI: American Mathematical Society.

# Shoot to Kill!

Sarah Grove

Chris Jones

Joel Lepak

Youngstown State University

Youngstown, OH

Advisor: Angela Spalsbury

# Abstract

The goal of the model is to approximate a target volume by spherical doses (or shots) of radiation. The model that we provide satisfies several important constraints.

- The algorithm guarantees that no shot overlaps another; thus "hot spots" are eliminated.
- Each shot must also lie entirely within the target volume (to prevent harm to healthy tissue).
+- The final shot arrangement is optimal in the sense that it is impossible to place an additional shot of any size without causing an overlap or breach of the target volume. + +The algorithm's strength lies in deciding the best possible location for the placement of the initial shot. The remaining volume of the target area is filled by placing shots tangent to previous shots. The shots are placed in a way that guarantees as many large shots as possible are used before resorting to smaller shots, thereby minimizing the total number of doses. Our model's simplicity easily allows for adaptation when its features need to be modified to enhance its accuracy. The algorithm also is very efficient—even for an exceptionally large tumor, our program is bounded by just several million iterations, making it easily computable with any modern hardware. + +We include a working computer program based on our algorithm. The software takes into account the need to protect healthy tissue while treating abnormal tissue. Our program uses digital images similar to those from MRI scans to define accurately the boundary around the target volume. This boundary then serves to prevent the normal tissues from receiving harmful levels of radiation. + +We constructed data sets (sample tumors) to test our program. The performance of our algorithm on these data sets provides great confidence in its feasibility and practical effectiveness. In each case, every shot lies within the target volume without overlapping with another shot. The volume coverage had a high degree of success in each case, ranging between $86\%$ and $91\%$ . + +In summary, our model safely allows for strategic gamma knife planning. The algorithm approaches the final shot arrangement from a geometrical perspective. It provides an efficient and effective way of planning the treatment and guarantees that several important criteria are satisfied. 
+ +# Analysis of the Problem + +Our task is to approximate the shape of a target volume (the brain tumor) by spherical doses (shots) of radiation. Several conditions are desirable for this approximation: + +- Prohibit shots from protruding outside the target in order to avoid harm to healthy areas of the brain. +- Prohibit shots from overlapping (to avoid hot spots). +- Cover the target volume with effective dosage as much as possible, with at least $90\%$ of the target volume covered. +- Use as few shots as possible. This is to help reduce the total amount of radiation passing through the healthy portion of the brain. + +Additionally, the algorithm should be efficient enough to avoid unreasonable waiting times. + +We can view the problem of formulating the treatment plan in mathematical terms. The object then is to place the least number of shots in the target volume while filling at least $90\%$ of the volume. From a theoretical perspective, this is a sphere-packing problem; that is, to fill as much of a volume as possible with spheres. + +Unfortunately, the problem of packing unequal spheres into a given volume is NP-complete [Wang 1999], making an absolute optimal solution intractable for all but the smallest target volumes. Hence, we find not an optimal solution but a solution that satisfies all the requirements while remaining of reasonable complexity. + +# Assumptions + +- The diagnostic images are acquired from MRI scans; we assume that the resolution is $1\mathrm{mm} \times 1\mathrm{mm} \times 1\mathrm{mm}$ . (The actual resolution is approximately $1\mathrm{mm} \times 1\mathrm{mm} \times 1.5\mathrm{mm}$ [Leventon 1998].) We assume that the image can be represented as a three-dimensional array of points such that the set of points in the tumor and the set of healthy points can easily be determined. +- The mean diameters of brain tumors range from $1 \mathrm{~mm}$ to $40 \mathrm{~mm}$ [New Jersey ... n.d.]. 
We assume that no tumors larger than $100 \mathrm{~mm} \times 100 \mathrm{~mm} \times 100 \mathrm{~mm}$ need to be considered. At the assumed resolution of $1 \mathrm{~mm} \times 1 \mathrm{~mm} \times 1 \mathrm{~mm}$, this gives $10^{6}$ data points, a reasonably small amount of data.

- Each shot volume contains $100\%$ of its potency with no leakage to outside points.
- All shots have uniform radiation density.

# The Algorithm

# Overall Description

The basic idea is to construct each successive sphere based on the location of previous spheres. For this reason, the choice of the initial shot or shots is very important.

The first shot placed is the largest possible shot that fits inside the target volume. Heuristics and curvature measurements of the target volume can be used to determine the exact location of this shot. For example, the first sphere could be placed tangent to the surface of the target volume in a location in which the curvature of the volume and the sphere are in close correspondence. In general, the first sphere should be placed tangent to the surface of the target volume.

The algorithm then calculates the position of the next shot by determining the location of the largest possible shot tangent to the first one. This is repeated, placing new shots tangent to the first until no more shots of any radius can be placed.

To determine whether a shot of given radius and position is possible, it suffices to check each point on the surface of the proposed shot to make sure that the shot is fully contained in the tumor and intersects no previously placed shot.

After the volume tangent to the first sphere is full, the program begins checking points tangent to the sphere placed second. The largest possible sphere tangent to the second sphere is then placed; this is again repeated until no more spheres can be placed.

This process is continued with each sphere until no more can be placed inside the target volume.
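The per-shot feasibility test just described (every surface point must lie in the tumor and fall inside no earlier shot) can be sketched as follows. This is a minimal illustration, not our actual program; the helper `in_tumor` and the integer-grid conventions are simplifying assumptions:

```python
def sphere_surface(center, r):
    """Integer grid points whose floored distance from `center` is r,
    i.e. the digital sphere S_{p,r} of the Definitions section."""
    cx, cy, cz = center
    pts = []
    for x in range(cx - r, cx + r + 1):
        for y in range(cy - r, cy + r + 1):
            for z in range(cz - r, cz + r + 1):
                d2 = (x - cx) ** 2 + (y - cy) ** 2 + (z - cz) ** 2
                if int(d2 ** 0.5) == r:  # floor of the distance equals r
                    pts.append((x, y, z))
    return pts

def shot_fits(center, r, in_tumor, placed):
    """A proposed shot is feasible when every point of its surface lies
    in the tumor and falls strictly inside no previously placed shot."""
    for p in sphere_surface(center, r):
        if not in_tumor(p):
            return False  # shot would protrude into healthy tissue
        for (c, rr) in placed:
            if sum((a - b) ** 2 for a, b in zip(p, c)) < rr * rr:
                return False  # surface point lies inside an earlier shot
    return True
```

Tangent shots (center distance exactly the sum of the radii) pass this test, while interpenetrating shots fail; a production version would also guard against the degenerate case of one shot swallowing another whole.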
The output is a list of the centers and radii of the shots.

# Definitions

We are given digital images from the MRI scans, which are converted to three-dimensional arrays of data.

Let the data set $D$ be defined as

$$
D = \{p = (x, y, z) \mid x \in \{1, 2, \dots, N_x\}, \ y \in \{1, 2, \dots, N_y\}, \ z \in \{1, 2, \dots, N_z\}\},
$$

where $N_x$, $N_y$, and $N_z$ are the resolution (in pixels) in $x$, $y$, and $z$, respectively.

We define a function $\delta : D \to \{0,1\}$ by

$$
\delta(p) = \begin{cases} 1, & \text{if point } p \text{ is part of the tumor;} \\ 0, & \text{otherwise.} \end{cases}
$$

Hence, the set $T$ of points in the tumor is

$$
T = \{p \in D \mid \delta(p) = 1\}.
$$

We say that a point $p = (x, y, z)$ borders $p' = (x', y', z')$ if

$$
(x, y, z) = (x' + \Delta_1, \ y' + \Delta_2, \ z' + \Delta_3),
$$

where $\Delta_i \in \{0, 1, -1\}$ for $i = 1, 2, 3$, not all zero. It is also useful to determine the set of points on the surface of the tumor. We let this set of boundary points $B$ be defined as

$B = \{p \in D \mid \delta(p) = 1$ and $\exists p' \in D$ such that $\delta(p') = 0$ and $p$ borders $p'\}.$

We represent each shot by a set of points on a sphere. Let $S_{p,r}$ denote the surface of a sphere of radius $r$ centered at $p$, i.e.,

$$
S_{p, r} = \left\{p' \mid \left\lfloor \Delta(p', p) \right\rfloor = r \right\},
$$

where $\Delta(p', p)$ is simply the distance between the two points and $\lfloor \cdot \rfloor$ is the floor function.

There is only a finite set $R$ of possible shot radii, corresponding to the set of collimator sizes. In this case,

$$
R = \{9, 7, 4, 2\}.
$$

# Formal Specification of Algorithm

Input: The set $D$ of data points and the function $\delta$ used to determine the target volume.
From this, the set of target volume points $T$ is determined, as well as the set of surface points $B$.

Output: The center and radius of each shot placed by the algorithm.

# Notation:

$C_i$, for $i = 0, 1, 2, \ldots$, is the set representing the $i$th shot: $C_i = S_{p_i, r_i}$ for some point $p_i$ and radius $r_i$;

$n$ is the number of shots placed, i.e., $n = \max\{i \mid C_i \text{ is defined}\}$; and

$c$ is the index of the sphere to which new spheres will be placed tangent.

# Algorithm:

1. Choose $C_0 = S_{p,r}$ such that $r \in R$ is the maximum possible and $C_0 \subseteq T$. (Details in the next subsection.)
2. Let $c = 0$.
3. While $c < n$, do the following:

(a) For every $r \in R$ (from largest to smallest), do the following:

i. Let $r'$ and $p'$ be the radius and center of sphere $C_c$, i.e., $C_c = S_{p', r'}$.

ii. Construct as many spheres $C_k = S_{p, r}$ as possible, with $p \in S_{p', (r + r')}$, such that $C_k \subseteq T$ and $C_k \cap C_i = \emptyset$ for all $i \neq k$. (See details later.)

(b) Let $c = c + 1$.

# Placement of Initial Shot

The region near the surface of the target volume is the most difficult to cover with doses of the gamma knife—hence, the first shot should be arranged to conform as closely as possible to the contour of the tumor. Figure 1 gives an example of matching the contour.

![](images/9d579a0a67e219cd2987ce73250921b3a94b915de2a434aa5d00522fd58cd5a2.jpg)
Figure 1. Placement of a good initial shot.

Measuring the curvature of the target volume is straightforward and could be used to fit the initial shot to an ideal position. Alternatively, the doctors could choose the initial position interactively, based on their judgment.

A further alternative is the placement of multiple initial shots if the need arises. This could occur if the shape of the target is such that there is an obvious optimal configuration.
One additional method is to run the algorithm many times with random initial placements. The algorithm requires very little computation time, even for a large tumor. Many runs could be made, after which the best configuration is selected. This method is also easily parallelized, allowing for a very large number of trials.

# Placement of Tangent Shots

The placement of each shot following the initial sphere continues in a similar manner. Each new shot should be placed in contact with as many tangent points as possible in order to minimize lost volume. More specifically, choose the new $C_k = S_{p,r}$ to maximize the number of tangent points

$$
t = \left| \left\{ q \in \bigcup_{i=1}^{n} C_i \cup B \;\middle|\; q \text{ borders some element } q' \in C_k \right\} \right|.
$$

Other methods may be employed, including random placement and multiple runs (similar to those mentioned above). The number of shots placed is proportional to the size of the tumor.

# Analysis of Model

# Worst Case Analysis

We establish a bound on the total number of points that must be checked by our algorithm. We assume a resolution of $1\mathrm{mm} \times 1\mathrm{mm} \times 1\mathrm{mm}$.

For each shot $S_{p_2, r_2}$ placed tangent to $S_{p_1, r_1}$, at most all of the points on the sphere $S_{p_1, (r_1 + r_2)}$ (the candidate centers) must be examined. To confirm placement of the sphere $S_{p_2, r_2}$, each point on its surface must be checked for penetration of the boundary or intersection with other shots. We can estimate the number of points on the surface of a digitally represented sphere by its area (in mm$^2$).
If all possible radii must be checked when placing the new sphere, then the placement requires

$$
P_2 \leq \sum_{i=1}^{|R|} A\left(S_{p_2, r_i}\right) A\left(S_{p_1, (r_1 + r_i)}\right)
$$

examinations of points, where $A(S)$ is the area of sphere $S$ in $\mathrm{mm}^2$ (rounded up to the nearest integer) and $|R|$ is the number of possible radii.

We are given that the target volume is usually filled by fewer than 15 shots. Hence, we assume that 30 shots is a reasonable bound. Since $9\mathrm{mm}$ is the maximum radius and $|R| = 4$, the total number of checks necessary after placement of the initial sphere is

$$
P \leq 30 P_2 \leq 30 \left(4 A(S_{p_2, 9}) A(S_{p_1, (9+9)})\right) = 30(4)(4\pi)^2 (9^2 \cdot 18^2) < 5 \times 10^8.
$$

The number of checks required to place the initial sphere is assumed to be negligible, as it is bounded by the number of points in the digital image. Even $5 \times 10^{8}$ point examinations take little time on modern computers. Hence, our algorithm, even in the worst case, requires minimal computing time.

# Other Strengths

- By design, this model conforms to the constraints of gamma knife treatments. That is, the model prohibits shots from protruding outside the target volume, while avoiding overlapping shots within the target volume. By placing as many shots of large radii as possible before placing smaller shots, the algorithm keeps the number of shots low. In a sample implementation, our algorithm shows a coverage of nearly $90\%$.
- The main strength of our model is its simplicity and realistic application.
- In the model, the tumor image is transformed to a set of points, where each point represents a pixel from the MRI image. We use the boundary set to position our initial shot. The algorithm places each successive shot along contours of the target volume.
- In the final shot arrangement determined by the algorithm, overlapping shots are prohibited.
- While the shot configuration is not guaranteed to be globally optimal, it is optimal in the sense that no shot can be added to the final configuration without overlap.
- While we assumed a $0\%$ shot gradient, our model's design addresses this issue by allowing different shot gradients to be added easily. In short, the model can be adjusted for a different shot gradient at each iteration.

# Limitations

- The final shot arrangement is not guaranteed to be absolutely optimal.
- A nonuniform dosage within each shot is not accounted for; however, if such information is known, our program can be adapted accordingly.
- The algorithm considers only the local configuration of shots rather than the entire volume of the tumor. However, by examining only the volume tangent to a given sphere, the complexity is reduced dramatically. In practice, this limitation should affect only exceptional target volume shapes.

# A Sample 2D Implementation

We created a working implementation of our algorithm in two dimensions to test its effectiveness.

The input is shown on the left in Figure 2 as a black-and-white image describing the shape of the target volume. This image can be thought of as a two-dimensional analog of an MRI image and is assumed to have approximately the same resolution (1 mm × 1 mm). The output of the algorithm is shown on the right in Figure 2, with each circle representing a shot. (Any visible overlapping of shots is only a result of roundoff error in scaling the image.)

![](images/ec231a22b8a8f6c468138120222ed117f539c67151f6dcd98a49f5c98f79a482.jpg)
Figure 2. Sample target area and placement of 15 shots.

![](images/bcdcfaacf451aaaf2e81cb0307d125d3ffad8f6f4375ac8df8f8a161ef855234.jpg)

For a different sample area, Figure 3 illustrates the order in which our algorithm places the shots.
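The greedy tangent-placement loop can be sketched in two dimensions as follows. This is a simplified illustration rather than our actual program: it works in continuous coordinates instead of the pixel grid, and the helper names (`fits`, `place_shots`, `in_target`) are assumptions made for the sketch.

```python
import math

RADII = [9, 7, 4, 2]  # collimator radii in mm, tried largest first
EPS = 1e-9            # tolerance so exactly tangent discs are accepted

def fits(center, r, in_target, shots):
    """True if a disc lies inside the target and overlaps no placed shot."""
    cx, cy = center
    steps = max(8, int(2 * math.pi * r))  # ~1-mm spacing on the boundary
    for k in range(steps):
        a = 2 * math.pi * k / steps
        if not in_target((cx + r * math.cos(a), cy + r * math.sin(a))):
            return False  # disc would protrude outside the target
    return all(math.dist(center, c) >= r + rr - EPS for (c, rr) in shots)

def place_shots(first, in_target):
    """Greedy loop of the formal specification: for each placed shot,
    keep adding the largest disc that can sit tangent to it."""
    shots = [first]
    c = 0
    while c < len(shots):
        (pc, rc) = shots[c]
        for r in RADII:  # largest radius first
            placed = True
            while placed:
                placed = False
                steps = int(2 * math.pi * (rc + r)) + 8
                for k in range(steps):  # candidate centers tangent to shot c
                    a = 2 * math.pi * k / steps
                    cand = (pc[0] + (rc + r) * math.cos(a),
                            pc[1] + (rc + r) * math.sin(a))
                    if fits(cand, r, in_target, shots):
                        shots.append((cand, r))
                        placed = True
                        break
        c += 1
    return shots
```

For example, with a circular target `in_target = lambda p: p[0]**2 + p[1]**2 <= 20**2` and first shot `((0, 0), 9)`, the loop fills the disc with mutually non-overlapping shots, each tangent to an earlier one.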
The shot-placement algorithm is implemented as stated, but no optimizations are made in the selection process. That is, the shots are chosen by searching systematically for a satisfactory point. Nevertheless, the outcomes are successful. Over 21 samples, the minimum volume coverage achieved was $87\%$, the mean was $89\%$, and the maximum was $91\%$.

Although the example is only for two dimensions, the procedure is essentially the same for three—in fact, in an actual implementation, only the array used to represent the tumor and the definition of a sphere need to be changed.

# Conclusion

The gamma knife procedure is a highly tested and extremely effective treatment for a variety of brain abnormalities. Our model works to improve the placement of the shots; we believe that with the inclusion of our model, the treatment's efficiency will increase greatly.

![](images/02a139a8921b9b524c400e1ca1550cb4fa458350c842f398183253c3e9acfe8f.jpg)
Figure 3. Sample shot sequence (13 shots).

# References

Leventon, Michael E. 1998. Operating room results. http://www.ai.mit.edu/people/leventon/Research/9810-MICCAI-Surgery/node9.html. In Clinical experience with a high precision image-guided neurosurgery system, by E. Grimson, M. Leventon, et al. http://www.ai.mit.edu/people/leventon/Research/9810-MICCAI-Surgery/final.html.

New Jersey Chapter Acoustic Neuroma Association. n.d. Recent outcomes for gamma knife and FSR. http://hometown.aol.com/ananj2/outcomes.html.

Sutter Gamma Knife Radiosurgery Center. n.d. Frequently asked questions. http://www.suttergammaknife.com.

Wang, Jie. 1999. Packing of unequal spheres and automated radiosurgical treatment planning. Journal of Combinatorial Optimization 3: 453-463.
![](images/ea282c0dccd02b059ecd0424c3b3d3b5e6816a85edff64eb5ece3c0dfe028303.jpg)

Receiving the MAA Award at MathFest 2003 in Boulder, Colorado: Team advisor Angela Spalsbury with the team members, who are all now graduate students: Sarah Grove (applied mathematics, North Carolina State), Joel Lepak (mathematics, Michigan), and Chris Jones (mathematics, Pittsburgh). (Photo courtesy of J. Douglas Faires.)

# The UMAP Journal

Vol. 25, No. 2

Publisher

COMAP, Inc.

Executive Publisher

Solomon A. Garfunkel

ILAP Editor

Chris Arney

Interim Vice-President for Academic Affairs

The College of Saint Rose

432 Western Avenue

Albany, NY 12203

arneyc@mail.strose.edu

On Jargon Editor

Yves Nievergelt

Department of Mathematics

Eastern Washington University

Cheney, WA 99004

ynievergelt@ewu.edu

Reviews Editor

James M. Cargal

Mathematics Dept.

Troy University

Montgomery Campus

231 Montgomery St.

Montgomery, AL 36104

jmcargal@sprintmail.com

Chief Operating Officer

Laurie W. Aragon

Production Manager

George W. Ward

Director of Educ. Technology

Roland Cheyney

Production Editor

Pauline Wright

Copy Editor

Timothy McLean

Distribution

Kevin Darcy

John Tomicek

Graphic Designer

Daiva Kiliulis

# Editor

Paul J. Campbell

Campus Box 194

Beloit College

700 College St.

Beloit, WI 53511-5595

campbell@beloit.edu

# Associate Editors

Don Adolphson, Brigham Young University

Chris Arney, The College of St. Rose

Ron Barnes, University of Houston-Downtown

Arthur Benjamin, Harvey Mudd College

James M. Cargal, Troy State University Montgomery

Murray K. Clayton, University of Wisconsin—Madison

Courtney S. Coleman, Harvey Mudd College

Linda L. Deneen, University of Minnesota, Duluth

James P. Fink, Gettysburg College

Solomon A. Garfunkel, COMAP, Inc.

William B. Gearhart, California State University, Fullerton

William C. Giauque, Brigham Young University

Richard Haberman, Southern Methodist University

Charles E. Lienert, Metropolitan State College

Walter Meyer, Adelphi University

Yves Nievergelt, Eastern Washington University

John S. Robertson, Georgia College and State University

Garry H. Rodrigue, Lawrence Livermore Laboratory

Ned W. Schillow, Lehigh Carbon Community College

Philip D. Straffin, Beloit College

J.T. Sutcliffe, St. Mark's School, Dallas

Donna M. Szott, Comm. College of Allegheny County

Gerald D. Taylor, Colorado State University

Maynard Thompson, Indiana University

Ken Travers, University of Illinois

Robert E.D. "Gene" Woolsey, Colorado School of Mines

# Membership Plus

Individuals subscribe to The UMAP Journal through COMAP's Membership Plus. This subscription also includes print copies of our annual collection UMAP Modules: Tools for Teaching, our organizational newsletter Consortium, on-line membership that allows members to download and reproduce COMAP materials, and a $10\%$ discount on all COMAP purchases.

(Domestic) #2420 $90

(Outside U.S.) #2421 $105

# Institutional Plus Membership

Institutions can subscribe to the Journal through either Institutional Plus Membership, Regular Institutional Membership, or a Library Subscription. Institutional Plus Members receive two print copies of each of the quarterly issues of The UMAP Journal, our annual collection UMAP Modules: Tools for Teaching, our organizational newsletter Consortium, on-line membership that allows members to download and reproduce COMAP materials, and a $10\%$ discount on all COMAP purchases.
(Domestic) #2470 $415

(Outside U.S.) #2471 $435

# Institutional Membership

Regular Institutional members receive print copies of The UMAP Journal, our annual collection UMAP Modules: Tools for Teaching, our organizational newsletter Consortium, and a $10\%$ discount on all COMAP purchases.

(Domestic) #2440 $180

(Outside U.S.) #2441 $200

# Web Membership

Web membership does not provide print materials. Web members can download and reproduce COMAP materials, and receive a $10\%$ discount on all COMAP purchases.

(Domestic) #2410 $39

(Outside U.S.) #2410 $39

To order, send a check or money order to COMAP, or call toll-free 1-800-77-COMAP (1-800-772-6627).

The UMAP Journal is published quarterly by the Consortium for Mathematics and Its Applications (COMAP), Inc., Suite 210, 57 Bedford Street, Lexington, MA 02420, in cooperation with the American Mathematical Association of Two-Year Colleges (AMATYC), the Mathematical Association of America (MAA), the National Council of Teachers of Mathematics (NCTM), the American Statistical Association (ASA), the Society for Industrial and Applied Mathematics (SIAM), and The Institute for Operations Research and the Management Sciences (INFORMS). The Journal acquaints readers with a wide variety of professional applications of the mathematical sciences and provides a forum for the discussion of new directions in mathematical education (ISSN 0197-3622).

Periodical rate postage paid at Boston, MA and at additional mailing offices.

# Send address changes to: info@comap.com

COMAP, Inc. 57 Bedford Street, Suite 210, Lexington, MA 02420

Copyright 2004 by COMAP, Inc. All rights reserved.

# Vol. 25, No. 2 2004

# Table of Contents

# Editorial

Building a Modeling Community: MATHmodels.org Patrick J.
Driscoll 93

# Special Section on the ICM

Results of the 2004 Interdisciplinary Contest in Modeling Chris Arney 97

It's All About the Bottom Line Eli Bogart, Cal Pierog, and Lori Thomas 115

Making the CIA Work for You Warren Katzenstein, Tara Martin, and Michael Vrable 129

Firewalls and Beyond: Engineering IT Security Dennis Clancey, Daniel Kang, and Jeffrey Glick 143

Catch Thieves Online: IT Security Zhao Qian, Su Xueyuan, and Song Yunji 157

Authors' Commentary: The Outstanding Information Technology Security Papers Ronald C. Dodge, Jr., and Daniel J. Ragsdale 171

Judge's Commentary: The Outstanding Information Technology Security Papers Frank Wattenberg 175

Editor's Note Regarding Submissions 180

Reviews 181

![](images/6de243d8ac552a7e20a7e2b722628aee8a565c0e28857164f4f9709e17ad5dc8.jpg)

# Guest Editorial

# Building a Modeling Community: MATHmodels.org

Patrick J. Driscoll

Dept. of Systems Engineering

U.S. Military Academy

West Point, NY 10996

pat-driscoll@usma.edu

Our society has become more complex over the past 30 years, fueled by a global economy necessarily dependent upon efficient transportation, logistics, financial, and communication systems that must operate in conditions constrained by limited or diminishing resources. It is not surprising, therefore, that math modeling has gained popularity and importance as a means of quantitatively understanding problems that decision-makers must solve to succeed in this setting.

In its various forms, mathematical modeling has time and again demonstrated its ability to illuminate valuable and frequently hidden insights into the structure of these problems. With this in mind, one would conclude that conditioning our future leaders and managers to be comfortable with the techniques and best practices associated with a math modeling process, and to be confident in the information provided by this process, seems to be a no-brainer.
Yet, despite the evidence and despite the continuous growth in participation that contests such as COMAP's HiMCM, MCM, and ICM have realized over the years, institutions have by and large been slow to capitalize on opportunities to build mathematical modeling into their curricula.

To be fair, a large proportion of mathematics faculty members are educated in programs that emphasize theoretical mathematics, thereby leaving any real-world application experience up to their own initiative and opportunity. This point, coupled with the geographical isolation experienced by some schools, means that faculty and their students frequently lack a ready connection to the community of practitioners and professionals outside of academia whose livelihoods depend on analyzing and solving problems using the very mathematics taught in the classroom.

Exposing a wider range of students to mathematical modeling at earlier education levels has been an explicit recommendation of professional organizations ranging from the American Mathematical Association of Two Year Colleges (AMATYC) [Writing Team... 1995, 31-35] to the undergraduate Accreditation Board for Engineering and Technology (ABET) [2000]. Moreover, presenting mathematics through applications and modeling has repeatedly proven effective both for teaching reasoning and quantitative skills and for retaining students in mathematically and scientifically based programs [Blum and Niss 1991; Boaler 1993]. This point was again made salient in an international context by an independent report on Mathematics in the University Education of Engineers [Kent and Noss 2003, 10-11] to the Ove Arup Foundation, which calls for "a shift in approach from teaching mathematical techniques towards teaching through modeling and problem-solving."
What appears to be needed is

- a forum for the exchange of ideas, curriculum to support modeling, and practical tools that can foster the inclusion of mathematical modeling in students' educational experiences; and
- dynamic linking in such a forum to embed mentoring relationships within which many new professional interactions can occur.

In response to this need at the undergraduate level, the National Science Foundation (NSF) has sponsored a three-year project for the development of a Web-based mathematical modeling community called MATHmodels.org. Hosted by COMAP and directed by Pat Driscoll and Henry Pollak, this site will provide a rich environment in which students and faculty can explore in depth a wide variety of modeling problems coded by level of difficulty and application area, learning mathematics in the context of its contemporary use. Furthermore, the site will have experienced faculty and practitioner monitors from across the country, so that students working on these problems can post partial and full solutions, ask questions, and participate in extended discussions with fellow students, faculty, and practitioners in government and industry.

The triumvirate student-faculty-practitioner relationship being developed within MATHmodels.org defines a mathematical modeling community that is laden with opportunity. Students and faculty can interact with other students, faculty, and practitioners; practitioners can gain an active awareness of the talents of an emerging generation; and all contribute to a growing body of knowledge and resources that subsequently fertilize and jump-start the efforts of new community members as time progresses.

This direct interaction meets a need that efforts like the HiMCM, the MCM, and the ICM could not reasonably accommodate: providing direct individual feedback to students at key points throughout the modeling process.
This focus on improving the modeling process represents a significant departure of a Website away from simply providing resources to members of the modeling community. It is a step towards improving the community's capacity to teach, learn, and do mathematical modeling. + +![](images/cea4a5196b463786f10b6751d71e0a36868962a485bb707c27064411d504253f.jpg) +Figure 1. Home page of MATHmodels.org. + +Although the project is currently in the beginning stages, the prototype design site is online now at http://www.mathmodels.org (see Figure 1 above) and will continue to add functionality through 2006. + +The Institute for Operations Research and the Management Sciences (INFORMS) is supporting and endorsing the creation of this site to complement both their Education Committee objectives and their long-term goals of recruiting and retaining young members into the professional OR/MS career field worldwide. + +# References + +Accrediting Board for Engineering and Technology (ABET). 2000. ABET Criteria 2000. Baltimore, MD: Accrediting Board for Engineering and Technology. http://www/abet.ca.md.gov. +Blum, W., and M. Niss. 1991. Applied mathematical problem solving, modeling, applications, and links to other subjects—state, trends, and issues in mathematics instruction. Educational Studies in Mathematics 22: 37-68. +Boaler, J. 1993. Encouraging the transfer of "school" mathematics to the "real world" through the integration of process and content, context and culture. Educational Studies in Mathematics 25: 341-373. + +Kent, P., and R. Noss. 2003. Mathematics in the university education of engineers. Report to the Ove Arup Foundation, London, England. http://www.engc.org.uk. + +Writing Team and Task Force of the Standards for Introductory College Mathematics Project (Don Cohen, ed.). 1995. Crossroads in Mathematics: Standards for Introductory College Mathematics Before Calculus. + +http://www.imacc.org/standards/. 
Executive summary at http://www.amatyc.org/Crossroads/CrsrdsXS.html. Memphis, TN: American Mathematical Association of Two-Year Colleges. + +# Acknowledgment + +This editorial is reprinted from COMAP's Consortium: The Newsletter of the Consortium for Mathematics and Its Applications No. 85 (Fall/Winter 2003): 16-17. + +# About the Author + +![](images/3ff924a48e62188b45ab7ad7b804d1742566de1cd37c7d1a05c09e3a3a31cbe5.jpg) + +Pat Driscoll is Professor of Operations Research in the Dept. of Systems Engineering at the U.S. Military Academy. Formerly an Academy Professor in the Dept. of Mathematical Sciences, he has also served as the Director of the Mathematical Sciences Center of Excellence and the Associate Dean of Information and Educational Technology. He received both an M.S. in Operations Research and an M.S. in Engineering Economic Systems from Stanford University, and a Ph.D. in Industrial and Systems Engineering from Virginia Tech. He is a member of the Operational Research Society (ORS) of the United Kingdom, the Institute for + +Operations Research and the Management Sciences (INFORMS), the Military Operations Research Society (MORS), and the honor societies Phi Kappa Phi and Pi Mu Epsilon. He is an Associate Director for the Mathematical Contest in Modeling (MCM) and one of the designers of the High School Mathematical Contest in Modeling (HiMCM). He serves on the Board of Directors for the Driscoll Foundation, Inc., and Media Knowledge, Inc., is a partner in Wine-mates & Company, LLC, has three cats, and is continuing to have more fun than should be legally allowed. 
+ +# Modeling Forum + +# Results of the 2004 Interdisciplinary Contest in Modeling + +Chris Arney, Director + +Dean of the School of Mathematics and Sciences + +The College of Saint Rose + +432 Western Avenue + +Albany, NY 12203 + +arneyc@mail.strose.edu + +# Introduction + +A total of 143 teams of undergraduates, from 82 institutions in 5 countries, spent an extended second weekend in February working on an applied mathematics problem in the 6th Interdisciplinary Contest in Modeling (ICM). + +This year's contest began at 8:00 P.M. (EST) on Thursday, Feb. 5, and ended at 8:00 P.M. (EST) on Monday, Feb. 9. During that time, the teams of up to three undergraduates or high-school students researched and submitted their solutions to an open-ended interdisciplinary modeling problem involving the security and costs of maintaining accurate and reliable information systems by organizations (universities and businesses). Teams registered, obtained contest materials, and downloaded the problem and data at the prescribed time through COMAP's ICM Website. After a weekend of hard work, solution papers were sent to COMAP. + +The four papers judged to be Outstanding appear in this issue of The UMAP Journal. Results and winning papers from the first five contests were published in special issues of The UMAP Journal in 1999 through 2003. + +The ICM is an extension of the Mathematical Contest in Modeling (MCM), which is held on the same weekend. The ICM is designed to develop and advance interdisciplinary problem-solving skills, as well as competence in written communication. Information about the two contests can be found at + +www.comap.com/undergraduate/contests/icm www.comap.com/undergraduate/contests/mcm + +The problems in the first four ICM contests involved concepts from mathematics, environmental science, environmental engineering, biology, chemistry, and/or resource management. 
Last year's ICM problem began a shift to operations research, information science, and interdisciplinary issues in security and safety, which will continue for another year (in the 2005 contest). Each team is expected to have advisors and team members who represent a range of disciplinary interests in applied problem-solving and modeling. + +This year's Information Technology Security Problem involved understanding, designing, and analyzing the security systems for networked information systems of information-rich organizations. The problem proved to be challenging, in that it contained various modeling and writing tasks to be performed, specific requirements needing scientific and mathematical connections, and the ever-present requirements to use data analysis, creativity, precision, and effective communication. The authors of the problem, computer scientists Daniel Ragsdale and Ronald Dodge, have studied and researched this problem for several years. Information security expert Daniel Ragsdale was a member of the final judging team and his and Prof. Dodge's Authors' Commentary appears in this issue. + +All 143 of the competing teams are to be congratulated for their excellent work and enthusiasm for scientific and mathematical modeling and interdisciplinary problem solving. This year's judges remarked that the quality of the modeling and presentation in the papers was extremely high, making it difficult to select just four Outstanding papers. + +Start-up funding for the ICM was provided by a grant from the National Science Foundation (through Project INTERMATH) and COMAP. Additional support is provided by the Institute for Operations Research and the Management Sciences (INFORMS). The research that motivated this year's problem was supported by the Office of Artificial Intelligence Analysis and Evaluation at the U.S. Military Academy. 
+ +COMAP's Interdisciplinary Contest in Modeling and its Mathematical Contest in Modeling are unique among modeling competitions in being the only international contests in which students work in teams to find a solution. Centering its educational philosophy on mathematical modeling, COMAP uses mathematical tools to explore real-world problems. It serves the educational community as well as the world of work by preparing students to become better informed—and prepared—citizens, consumers, and workers. + +# Problem: The Information Technology Security Problem + +# To Be Secure or Not to Be? + +You probably know about computer hackers and computer viruses. Unless your computer has been targeted by one, you may not know how they could affect an individual or an organization. If a computer is attacked by a hacker or virus, it could lose important personal information and software. + +The creation of a new university campus is being considered. Your requirement is to model the risk assessment of information technology (IT) security for this proposed university. The narrative below provides some background to help develop a framework to examine IT security. Specific tasks are provided at the end of this narrative. + +Computer systems are protected from malicious activity through multiple layers of defenses. These defenses, including both policies and technologies (Figure 1), have varying effects on the organization's risk categories (Figure 2). + +![](images/541922434e2b34ba53e8affec934c6890f7250d1862184e04d6df81a5a6ef8ed.jpg) +Figure 1. Preventive defensive measures. + +Management and usage policies address how users interact with the organization's computers and networks and how people (system administrators) maintain the network. Policies may include password requirements, formal security audits, usage tracking, wireless device usage, removable media concerns, personal use limitations, and user training. 
An example of password policy would include requirements for the length and characters used in the password, how frequently they must be changed, and the number of failed log-in attempts allowed. Each policy solution has direct costs associated with its + +![](images/937d512c34c958bab9e8bc3bff7570223ddb20dc7289afb19e7b5c35e13e8327.jpg) +Figure 2. Economic risk schematic for IT systems. + +implementation and factors that impact productivity and security. In Figure 1, only the topmost branch is fully detailed. The structure is replicated for each branch. + +The second aspect of a security posture is the set of technological solutions employed to detect, mitigate, and defeat unauthorized activity from both internal and external users. Technology solutions cover both software and hardware and include intrusion detection systems (IDS), firewalls, anti-virus systems, vulnerability scanners, and redundancy. As an example, IDS monitors and records significant events on a specific computer or from the network examining data and providing an "after the fact" forensic ability to identify suspect activity. SNORT (www.snort.org) is a popular IDS solution. Figure 1 provides a sample of key defensive measures (management/usage policies and technology solutions). As with a policy, a technology solution also has direct costs, as well as factors that impact productivity and security. + +Sources of risk to information security include, but are not limited to, people or hardware within or outside the organization (Figure 2). Different preventive defensive measures (Figure 1) may be more effective against an insider threat than a threat from a computer hacker. Additionally, an external threat may vary in motivation, which could also indicate different security measures. For example, an intruder who is trying to retrieve proprietary data or customer databases probably should be combated much differently from an intruder who is trying to shut down a network. 
+ +Potential costs due to information security that an organization may face (Figure 2) include opportunity cost, people, and the cost of preventive defensive measures. Significant opportunity costs include: litigation damages, loss of proprietary data, consumer confidence, loss of direct revenue, reconstruction of data, and reconstruction of services. Each cost varies based on the profile of the organization. For example, a health-care component of the university might have a greater potential for loss due to litigation or availability of patient medical records than with reconstruction of services. + +An organization can evaluate potential opportunity costs through a risk analysis. Risks can be broken down into three risk categories: confidentiality, integrity, and availability. Combined, these categories define the organization's security posture. Each of the categories has different impacts on cost depending on the mission and requirements of the organization. + +- Confidentiality refers to the protection of data from release to sources that are not authorized with access. A health care organization could face significant litigation if health care records were inadvertently released or stolen. +- The integrity of the data refers to the unaltered state of the data. If an intruder modifies pricing information for certain products or deletes entire data sets, an organization would face costs associated with correcting transactions affected by the erroneous data, the costs associated with reconstructing the correct values, and possible loss of consumer confidence and revenue. +- Finally, availability refers to resources being available to an authorized user, including both data and services. This risk can manifest itself financially in a similar manner as confidentiality and integrity. + +Each measure implemented to increase the security posture of an organization will impact each of the three risk categories (either positively or negatively). 
As each new defensive security measure is implemented, it will change the current security posture and subsequently the potential opportunity costs. A complicated problem faced by organizations is how to balance their potential opportunity costs against the expense of securing their IT infrastructure (preventive defensive measures).

# Task 1

You have been tasked by the Rite-On Consulting Firm to develop a model that can be used to determine an appropriate policy and the technology enhancements for the proper level of IT security within a new university campus. The immediate need is to determine an optimal mix of preventive defensive measures that minimizes the potential opportunity costs along with the procurement, maintenance, and system administrator training costs as they apply to the opening of a new private university. Rite-On contracted technicians to collect technical specifications on current technologies used to support IT security programs. Detailed technical data sheets that catalog some possible defensive measures are contained in Enclosures A and B. The technician who prepared the data sheets noted that as you combine defensive measures, the cumulative effects within and between the categories confidentiality, integrity, and availability cannot just be added.

The proposed university system has 10 academic departments, a department of intercollegiate athletics, an admissions office, a bookstore, a registrar's office (grade and academic status management), and a dormitory complex capable of housing 15,000 students. The university expects to have 600 staff and faculty (non-IT support) supporting the daily mission. The academic departments will maintain 21 computer labs with 30 computers per lab, and 600 staff and faculty computers (one per employee). Each dorm room is equipped with two (2) high-speed connections to the university network. It is anticipated that each student will have a computer.
The total computer requirements for the remaining departments/agencies cannot be anticipated at this time. It is known that the bookstore will have a Website and the ability to sell books online. The Registrar's office will maintain a Website where students can check the status of payments and grades. The admissions office, student health center, and the athletic department will maintain Websites.

The average administrative employee earns $38,000 per year, and the average faculty employee earns $77,000 per year. Current industry practice employs three to four system administrators (sysadmins) per subnetwork, and there is typically one (1) sysadmin (help-desk support) employee per 300 computers. Additionally, each separate system of computers (for Web hosting or data management) is typically managed by one (1) sysadmin person.

The current opportunity cost projection (due to IT) with no defensive measures is shown in Table 1. The contributions of various risk categories—Confidentiality (C), Integrity (I), and Availability (A)—to a given cost are also shown in Table 1.

Table 1. Current opportunity costs and risk category contributions.
| Opportunity Cost (due to IT) | Amount ($ millions) | C | I | A |
|---|---|---|---|---|
| Litigation | 3.8 | 55% | 45% | |
| Proprietary data loss | 1.5 | 70% | 30% | |
| Consumer confidence | 2.9 | 40% | 30% | 30% |
| Data reconstruction | 0.4 | | 100% | |
| Service reconstruction | 0.08 | | | 100% |
| Direct revenue loss | 0.25 | | 30% | 70% |
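The campus sizing figures in Task 1 can be tallied in a short script. This is our own arithmetic under stated assumptions (one computer per student, and help-desk staffing only; the dorm network connections and the per-subnetwork and per-system sysadmins are left out because the number of dorm rooms and subnetworks is not given):

```python
# Rough tally of the Task 1 campus sizing figures -- our own arithmetic,
# not numbers given in the problem statement.

lab_computers = 21 * 30        # 21 labs, 30 computers per lab
staff_computers = 600          # one computer per staff/faculty employee
student_computers = 15_000     # one anticipated computer per student

total_computers = lab_computers + staff_computers + student_computers

# Industry rule of thumb from the problem statement:
# one help-desk sysadmin per 300 computers (rounding up).
helpdesk_sysadmins = -(-total_computers // 300)

print(total_computers)      # 16230
print(helpdesk_sysadmins)   # 55
```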
# Task 2

We know that technical specifications will change rapidly over time. However, the relations and interplay among costs, risk categories, and sources of risk will tend to change more slowly. Create a model for the problem in Task 1 that is flexible enough to adapt to changing technological capabilities and can be applied to different organizations. Carefully describe the assumptions that you make in designing the model. In addition, provide an example of how the university will be able to use your model to initially determine and then periodically update their IT security system.

# Task 3

Prepare a three-page position paper to the university President that describes the strengths, weaknesses, and flexibility of your model in Task 2. In addition, explain what can be inferred and what should not be inferred from your model.

# Task 4

Explain the differences that may exist in the initial Risk Category Contributions (Table 1) if you model IT security for a commercial company that provides a search engine for the World Wide Web (e.g., Google, Yahoo, AltaVista, ...). Will your model work for this type of organization?

# Task 5

Honeynets are designed to gather extensive information on IT security threats. Write a two-page memo to your supervisor advising whether a university or a search engine company should consider using a honeynet.

# Task 6

To become a leader in IT security consulting, Rite-On Consulting must also take an active role in anticipating the future direction of information technology and advising companies on how to respond to future security risks. After performing your analysis, write a two-page memo to the President of Rite-On to inform him of the future of IT security. In addition, describe how your model can be used to anticipate and respond to the uncertain future.
# Enclosure A

# Technology Preventive Defensive Measures

[EDITOR'S NOTE: We omit the 11 pp. of tables of Enclosure A, which are available in their entirety at http://www.comap.com/undergraduate/contests/mcm/contests/2004/problems/icm2004.pdf. We give a sample in Table 2, together with (below) the instructions for reading the table.]

How to read this table: The Qualitative Values are a judgment based on assessments from industry experts on the tools' effectiveness. Each defensive measure has several instances that vary in costs and effectiveness. The Low, Mean, and High values represent a characterization of reviews found in different consumer review periodicals as they relate to user productivity, confidentiality, integrity, and availability. The Variability indicates the concentration of the data about the mean. The Low and High are the minimum and maximum possible values, respectively. Costs are in U.S. dollars. A factor value of $5.00\%$ indicates an improvement of $5\%$; a value of $-5.00\%$ indicates that the factor is degraded by $5\%$. These values are modifiers to the existing levels. For example, from a base Confidentiality level of 0.8, a factor value of $-25\%$ would result in a new Confidentiality factor of $0.8 - (0.8 \times 0.25) = 0.6$. A positive value results in a positive change in the factor.

Table 2. Sample from Enclosure A.

| Host-based Firewall: Intelli-Scan | Low | Mean | High | Variability |
|---|---|---|---|---|
| **Direct Costs** | | | | |
| Procurement/computer | n/a | $45.00 | n/a | |
| Maintenance/year/computer | n/a | — | n/a | |
| Training/year/sys admin | n/a | $1,000.00 | n/a | |
| **Factors** | | | | |
| User Productivity | -2.00% | -1.00% | 0.00% | Low |
| Confidentiality | 9.00% | 28.00% | 38.00% | High |
| Integrity | 9.00% | 28.00% | 38.00% | High |
| Availability | 9.00% | 18.00% | 28.00% | Med |

# Enclosure B

# Policy Preventive Defensive Measures

[EDITOR'S NOTE: We omit the 2 pp. of tables of Enclosure B, which are available in their entirety at the Web address noted earlier. We give a sample in Table 3; the instructions for reading the table are the same as for Enclosure A.]

Table 3. Sample from Enclosure B.
| Strong Passwords | Low | Mean | High | Variability |
|---|---|---|---|---|
| **Costs** | | | | |
| Policy implementation | n/a | $45,000 | n/a | Low |
| Training/year per Sys Admin | $8,000 | $12,000 | $15,000 | Med |
| Training/year per user | $3 | $5 | $12 | Med |
| Maintenance costs | $10,000 | $12,000 | $20,000 | Med |
| **Factors** | | | | |
| User Productivity | 9.00% | 28.00% | 38.00% | Med |
| Confidentiality | 9.00% | 28.00% | 38.00% | Low |
| Integrity | 9.00% | 28.00% | 38.00% | Low |
| Availability | 9.00% | 18.00% | 28.00% | Low |
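The factor-modifier rule described in the reading instructions for these tables can be sketched as a one-line function (a minimal illustration; the function name is ours):

```python
def apply_factor(level: float, factor: float) -> float:
    """Apply a percentage factor modifier to an existing level.

    A positive factor improves the level, a negative factor degrades it:
    a base Confidentiality level of 0.8 with a -25% factor becomes
    0.8 - (0.8 * 0.25) = 0.6, as in the enclosure instructions.
    """
    return level * (1 + factor)

print(round(apply_factor(0.8, -0.25), 6))  # 0.6
```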
+ +# The Results + +Solution papers were coded at COMAP headquarters so that names and affiliations of authors would be unknown to the judges. Each paper was read preliminarily by at least two "triage" judges at the U.S. Military Academy at West Point, NY. At the triage stage, the summary, the model description, and the overall organization are the primary elements in judging a paper. If the judges' scores diverged for a paper, the judges conferred; if they still did not agree on a score, additional triage judges evaluated the paper. + +Final judging by a team of modelers, analysts, and subject-matter experts took place March 5 and 6, again at the U.S. Military Academy at West Point, NY. The judges classified the papers as follows: + +
| | Outstanding | Meritorious | Honorable | Successful | Total |
|---|---|---|---|---|---|
| IT Security | 4 | 26 | 51 | 62 | 143 |
The four papers that the judges designated as Outstanding appear in this special issue of The UMAP Journal, together with commentaries. We list those teams and the Meritorious teams (and advisors) below; the list of all participating schools, advisors, and results is in the Appendix.

# Outstanding Teams

"It's All About the Bottom Line"
Harvey Mudd College, Claremont, CA
Advisor: Hank Krieger
Team members: Eli Bogart, Cal Pierog, Lori Thomas

"Making the CIA Work for You"
Harvey Mudd College, Claremont, CA
Advisor: Jon Jacobsen
Team members: Warren Katzenstein, Tara Martin, Michael Vrable

"Firewalls and Beyond: Engineering Information Technology Security"
United States Military Academy, West Point, NY
Advisor: Elizabeth W. Schott
Team members: Dennis Clancey, Daniel Kang, Jeffrey Glick

"Catch Thieves Online: IT Security"
University of Electronic Science and Technology, Chengdu, Sichuan, China
Advisor: Du Hongfei
Team members: Zhao Qian, Su Xueyuan, Song Yunji

# Meritorious Teams (26 teams)

Asbury College, Wilmore, KY (David L. Couliette)

Beijing University of Chemical Technology, China (Jiang Xinhua)

Beijing University of Posts and Telecommunications, Beijing, China (He Zuguo)

Beijing University of Posts and Telecommunications, Beijing, China (Sun Hongxiang)

Carroll College, Helena, MT (Kelly Slater Cline)

Harbin Institute of Technology, Harbin, Heilongjiang, China (Jiao Guanghong)

Harbin Institute of Technology, Harbin, Heilongjiang, China (Shang Shouting)

Jilin University, Changchun City, Jilin, China (Liu JinYing)

Maggie Walker Governor's School, Richmond, VA (John A. Barnes)

Montana Tech, Butte, MT (Richard J. Rossi)

Nanjing University of Posts and Telecommunications, Nanjing, Jiangsu, China (Yang Zhenhua)

Olin College of Engineering, Needham, MA (Burt S.
Tilley)

Peking University, Beijing, China (Liu Yulong)

Peking University Health Science Center, Beijing, China (Zhang Xia)

Rowan University, Glassboro, NJ (Hieu D. Nguyen)

Shanghai Jiaotong University, Shanghai, China (Zhou Guobiao)

South China University of Technology, Guangzhou, Guangdong, China (Qin Yongan)

Sun Yat-Sen University, Guangzhou, Guangdong, China (Wang Qi-Ru)

Tianjin University, Tianjin, China (Liu Zeyi)

Tianjin University, Tianjin, China (Rong Ximin)

Tsinghua University, Beijing, China (Deng Xi)

University of Science and Technology of China, Hefei, Anhui, China (Zhang Ziyu)

University of Science and Technology of China, Hefei, Anhui, China (Yang Zhang)

Xidian University, Xi'an, Shaanxi, China (Wang Xinhui)

Xidian University, Xi'an, Shaanxi, China (Ye Ji-Min)

Zhejiang University College of Science, Hangzhou, Zhejiang, China (Ji Min)

# Awards and Contributions

Each participating ICM advisor and team member received a certificate signed by the Contest Directors and by the Head Judge. Additional awards from the Institute for Operations Research and the Management Sciences (INFORMS) were presented to the Harvey Mudd team advised by Hank Krieger.

# Judging

Contest Directors

Chris Arney, Dean of the School of Mathematics and Sciences, The College of Saint Rose, Albany, NY

Gary W. Krahn, Dept. of Mathematical Sciences, U.S. Military Academy, West Point, NY

Associate Directors

Richard Cassady, Dept. of Industrial Engineering, University of Arkansas, Fayetteville, AR

Kathleen Snook, U.S. Army (retired), MA

Judges

Daniel Ragsdale, Dept. of Electrical Engineering and Computer Science, U.S. Military Academy, West Point, NY

Caroline Smith, Dept. of Mathematics and Statistics, James Madison University, Harrisonburg, VA

Frank Wattenberg, Dept. of Mathematical Sciences, U.S. Military Academy, West Point, NY

Triage Judges

U.S.
Military Academy, West Point, NY:

Dept. of Electrical Engineering and Computer Science

David Barlow, Ronald Dodge, Aaron Ferguson, Kenneth Fritzsche, James Jackson, Michael Lanham, Brian Layton, Thomas Morel, Timothy Nix, Daniel Ragsdale, and William Suchan

Dept. of Mathematical Sciences

Mike Arcerio, John Billie, Gabe Costa, Mason Crow, Jeff Fleming, Andy Glen, Paul Goethals, Alex Heidenberg, Mike Huber, Michael Johnson, Gary Krahn, Tom Lainis, Howard McInvale, Barbara Melendez, Chris Moseley, Joe Myers, Mike Phillips, Jack Picciuto, Tim Povich, Tyge Rugenstein, Raymond Smith, Bart Stewart, Frank Wattenberg, and Brian Winkel

# Source of the Problem

The Information Technology Security Problem was contributed by Daniel Ragsdale and Ronald Dodge (Dept. of Electrical Engineering and Computer Science, U.S. Military Academy, West Point, NY).

# Acknowledgments

Major funding for the ICM is provided by a grant from the National Science Foundation through COMAP. Additional support is provided by the Institute for Operations Research and the Management Sciences (INFORMS).

We thank:

- all the ICM judges and ICM Board members for their valuable and unflagging efforts, and
- the staff of the Dept. of Mathematical Sciences, U.S. Military Academy, West Point, NY, for hosting the triage judging and the final judging.

# Cautions

To the reader of research journals:

Usually a published paper has been presented to an audience, shown to colleagues, rewritten, checked by referees, revised, and edited by a journal editor. Each of the student papers here is the result of undergraduates working on a problem over a weekend; allowing substantial revision by the authors could give a false impression of accomplishment. So these papers are essentially au naturel.
Light editing has taken place: minor errors have been corrected, wording has been altered for clarity or economy, style has been adjusted to that of The UMAP Journal, and the papers have been edited for length. Please peruse these student efforts in that context.

To the potential ICM Advisor:

It might be overpowering to encounter such output from a weekend of work by a small team of undergraduates, but these solution papers are highly atypical. A team that prepares and participates will have an enriching learning experience, independent of what any other team does.

# Editor's Note

As usual, some Outstanding papers were much longer than we can accommodate in the Journal (one was 129 pp. long!); so space considerations forced me to edit the Outstanding papers for length. The code and raw output of computer programs are omitted, the abstract is often combined with the summary, and usually it is not possible to include all of the many tables and figures.

In addition, I have omitted the position papers and most of the other supplementary materials requested in Tasks 3-6, since they do not include new mathematical modeling not already present in the bodies of the reports.

In all editing, I endeavor to preserve the substance and style of the paper, especially the approach to the modeling.

Paul J. Campbell, Editor

# Appendix: Successful Participants

KEY:

P = Successful Participation

H = Honorable Mention

M = Meritorious

O = Outstanding (published in this special issue)
| INSTITUTION | CITY | ADVISOR | I |
|---|---|---|---|
| **CALIFORNIA** | | | |
| Harvey Mudd College | Claremont | Art Benjamin | H |
| | | Jon Jacobsen | O |
| | | Hank Krieger | O |
| **COLORADO** | | | |
| University of Colorado at Denver | Denver | William L. Briggs | P |
| **ILLINOIS** | | | |
| Greenville College | Greenville | George R. Peters | P |
| Monmouth College, Physics | Monmouth | Michael Kroupa | H |
| **IOWA** | | | |
| Luther College | Decorah | Eric R. Westlund | P |
| Simpson College, Bio. & Geology | Indianola | Jeffrey Parmelee | P,P |
| **KENTUCKY** | | | |
| Asbury College | Wilmore | David L. Couliette | M,H |
| | | Ken P. Rietz | P |
| **MASSACHUSETTS** | | | |
| Olin College of Engineering | Needham | Burt S. Tilley | M |
| **MARYLAND** | | | |
| Villa Julie College | Stevenson | Eileen C. McGraw | H |
| **MONTANA** | | | |
| Carroll College | Helena | Mark R. Parker | H |
| | | Kelly Slater Cline | M |
| | | Holly S. Zullo | H |
| Montana Tech | Butte | Richard J. Rossi | M |
| **NEVADA** | | | |
| Sierra Nevada College, Env'l Eng. | Incline Village | Christopher John Damm | P |
| **NEW JERSEY** | | | |
| Rowan University | Glassboro | Hieu D. Nguyen | M |
| **NEW YORK** | | | |
| Concordia College-New York | Bronxville | | |
| Computer Info. Services | | Daniel Burroughs | P |
| United States Military Academy | West Point | | |
| Mathematical Sciences | | Michael J. Smith | P |
| | | Sakura Sen Therrien | P |
| Systems Engineering | | Elizabeth W. Schott | O |
| **NORTH CAROLINA** | | | |
| North Carolina State University | Raleigh | Jeffrey S. Scroggs | P |
| Western Carolina University | Cullowhee | Erin K. McNelis | P |
| **OHIO** | | | |
| Ohio Wesleyan University | Delaware | Richard S. Linder | P |
| Youngstown State University | Youngstown | | |
| Mathematics | | George Yates | P,P |
| Physics | | Michael Crescimanno | H |
| **VIRGINIA** | | | |
| Maggie Walker Governor's School | Richmond | John A. Barnes | M,H |
| **CHINA** | | | |
| *Anhui* | | | |
| Anhui University, Electronics | Hefei | Chen Mingsheng | H |
| Hefei University of Technology | Hefei | | |
| Applied Mathematics | | Yang Liu | P |
| Computing Mathematics | | Bao Chaowei | P |
| | | Ding Xiaojing | P |
| Univ. of Science and Technology of China | Hefei | | |
| Electronic Science and Technology | | Zhang Yang | M |
| Special Class for the Gifted Young | | Zhang Ziyu | M |
| *Beijing* | | | |
| Beijing Institute of Technology | Beijing | Li Bingzhao | P |
| | | Cui Xiaodi | H |
| Beijing Jiaotong University | Beijing | | |
| Chemistry | | Bing Tuan | P |
| Information Science and Computation | | Liu Minghui | P |
| Physics | | Guochen Feng | H,P |
| Transportation | | Wang Xiaoxia | H,P |
| Beijing University of Chemical Technology | Beijing | | |
| Applied Chemistry | | Cheng Yan | H |
| Chemical Engineering | | Jiang Xinhua | M |
| Electric Science | | Shi Xiaoding | P |
| Beijing Univ. of Posts and Telecom. | Beijing | | |
| Institute of Information Engineering | | Ding Jinkou | P |
| School of Science | | He Zuguo | M |
| | | Sun Hongxiang | M |
| Beijing University of Technology | Beijing | Xue Yi | P |
| Peking University | Beijing | | |
| Health Science Center | | Zhang Xia | M,H |
| Mathematical Science | | Guo Maozheng | H |
| | | Liu Yulong | M,P |
| | | Yang Jiazhong | H |
| Tsinghua University | Beijing | | |
| Applied Mathematics | | Xi Deng | M |
| Mathematics | | Huang Hongxuan | H |
| | | Jiang Qi-yuan | H |
| | | Lu Mei | P |
| *Chongqing* | | | |
| Chongqing University | Chongqing | | |
| Computer Science | | Yang Xiaofan | H |
| | | Wu Kaigui | H |
| Mathematics and Physics, Statistics | | Liu Qiongsun | P |
| Mathematics and Science | | Yang Dadi | P |
| *Guangdong* | | | |
| Jinan University | Guangzhou | | |
| Electronics | | Ye Shiqi | P |
| Mathematics | | Fan Suohai | H |
| | | Ju Daiqiang | H |
| South China University of Technology | Guangzhou | | |
| Applied Mathematics | | Qin Yongan | M |
| Applied Physics | | Liang Manfa | H |
| College of Science | | Tao Zhisui | P |
| Sun Yat-Sen University | Guangzhou | | |
| Mathematics | | Wang Qi-Ru | M |
| Physics | | Bao Yun | P |
| Scientific Computing & Computer Appl. | | Chen ZePeng | P |
| *Heilongjiang* | | | |
| Harbin Institute of Technology | Harbin | Jiao Guanghong | M,H |
| | | Shang Shouting | M |
| Harbin Univ. of Science and Technology | Harbin | Li Dongmei | H |
| | | Tian Guangyue | H,P |
| Harbin Engineering University | Harbin | Gao Zhenbin | H |
| Harbin Normal University, Information Science | Harbin | Liu Huanping | P |
| | | Zeng Weiliang | P |
| Northeast Agricultural University | Harbin | | |
| Biological Engineering | | Tang Yan | |
| Industrial Engineering | | Li FangGe | P |
| *Hubei* | | | |
| Wuhan University of Technology | Wuhan | | |
| Mathematics | | Huang Xiaowei | P |
| Statistics | | Li Yuguang | P |
| *Jiangsu* | | | |
| Nanjing University of Posts and Telecomm. | Nanjing | | |
| Applied Mathematics and Physics | | Qiu ZhongHua | P |
| Computer Science and Technology | | Li Xinxiu | P |
| Optical Information Technology | | Yang Zhenhua | M |
| Nanjing University of Science & Technology | Nanjing | | |
| Applied Mathematics | | Wang Pinling | H |
| Mathematics | | Chen Peixin | H |
| | | Huang Zhengyou | P |
| Southeast University | Nanjing | Cao Hai-yan | P |
| | | Sun Zhi-zhong | P |
| | | Wang Li-yan | P |
| *Jilin* | | | |
| Jilin University | Changchun City | | |
| Applied Mathematics | | Yang Guang | H |
| Machinery and Engineering | | Pei Yongchen | H |
| Mathematics | | Liu JinYing | M |
| Telecommunication | | Cao Chunling | P |
| *Liaoning* | | | |
| Dalian University | Dalian | | |
| Applied Mathematics | | He Mingfeng | P |
| | | Wang Yi | P |
| | | Zhao Lizhong | H |
| Info. Engineering | | Zhang Changjun | P |
| *Shaanxi* | | | |
| Northwestern Polytechnical University | Xi'an | | |
| Applied Mathematics | | Xu Wei | H |
| Applied Physics | | Lu Quanyi | P |
| | | Xiao Huayong | H |
| Institute of Natural & Applied Science | | Zhao Xuanmin | P |
| Xidian University | Xi'an | | |
| Applied Mathematics | | Wang Xinhui | M |
| Computer Science | | Ye Ji-Min | M |
| *Shandong* | | | |
| Shandong University, Math & Sys. Sci. | Jinan | Huang Shuxiang | H |
| *Shanghai* | | | |
| East China Univ. of Science and Tech. | Shanghai | Su Chunjie | H |
| | | Sun Jun | H |
| Fudan University | Shanghai | Cai Zhijie | H |
| | | Cao Yuan | H |
| Shanghai Jiaotong University | Shanghai | Huang Jianguo | H |
| | | Zhou Guobiao | M |
| *Sichuan* | | | |
| Univ. of Electronic Science & Technology | Chengdu | Qin Siyi | P |
| | | Zhang Yong | H |
| | | Du Hongfei | O |
| *Tianjin* | | | |
| Nankai University | Tianjin | | |
| Mgmnt Information Systems | | Huo Wenhua | P |
| Computer Science | | Liu Zeyi | M |
| Information Management | | Rong Ximin | M |
| *Zhejiang* | | | |
| Hangzhou University of Commerce | | | |
| Information & Computing Science | | Zhao Heng | P |
| Mathematics | | Zhu Ling | H,H |
| Statistics | | Zhao Heng | P |
| Zhejiang University | Hangzhou | | |
| Applied Math. | | Tan Zhiyi | H |
| City College, Computer Science | | Huang Waibin | H |
| | | Kang Xusheng | H,H |
| Mathematics | | Ji Min | M,H |
| **FINLAND** | | | |
| Mathematical High School of Helsinki | Helsinki | Johannes Kärkkäinen | H,P |
| Päivölä College | Tarttila | Merikki Lappi | P,P |
| **INDONESIA** | | | |
| Institut Teknologi Bandung | Bandung | Sapto Wahyu Indratno | H |
| | | Edy Soewono | H |
| | | Kuntjoro Adji Sidarto | P |
| **IRELAND** | | | |
| University College Dublin | Dublin | Rachel Quinlan | P |
+ +# Editor's Note + +Unless otherwise specified, the sponsoring department is the Dept. of Mathematics, Mathematical Sciences, or Mathematics and Computer Science. + +For team advisors from China, we have endeavored to list family name first. + +# It's All About the Bottom Line + +Eli Bogart + +Cal Pierog + +Lori Thomas + +Harvey Mudd College + +Claremont, CA + +Advisor: Hank Krieger + +# Summary + +A brand-new university needs to balance the cost of information technology security measures with the potential cost of attacks on its systems. We model the associated risks and costs, arriving at an equation that measures the total cost of a security configuration and then developing two algorithms that minimize the cost. Both algorithms give a total cost just over half the cost of no security and just over 1.5 times the theoretical minimum cost. + +Our model's lack of assumptions about the structure of the university allows the model to be used with any kind of organization, requiring only a set of opportunity costs and statistics about the size of the organization. Our model can even suggest upgrades to existing security systems by changing the costs associated with current security measures. + +We consider two extreme cases that bound our solution area and also test the sensitivity of our results by varying the parameters to see the impact on the security configurations chosen by the algorithms. In addition, we analyze equal-cost configurations that lead to different levels of risk. + +# Introduction + +We develop a model to evaluate and optimize choices of security systems for a new university, which could easily be extended to another organization. + +- We make assumptions to simplify the problem. +- We present our method for calculating the cost of a combination of security measures. + +- We develop a reduced-search-space brute-force algorithm and an iterative algorithm that use the cost formula to find an optimal security configuration. 
+- We report and analyze the results of these algorithms. +- We discuss the extensibility and flexibility of our approach, with particular attention to how it could be applied to an organization of considerably different priorities, such as a major commercial Internet search engine. +- We discuss improvements and further developments. + +# Assumptions + +A university's computer systems must support activities ranging from word-processing to scientific simulation, from Web-hosting to accounting, for tens of thousands of users on a day-to-day basis. Our client may have as many as 35,000 networked computers [Levine et al. 2003], differing in their operating systems, configurations and primary purposes, and extensively organized into subnetworks and departments. This scope and complexity, combined with the ever-increasing number and diversity of threats to information security, and the wide variety of countermeasures available to combat those threats, make precise optimization of the school's information technology security a challenge. To simplify this process, we make several assumptions: + +- All security measures are applied universally. We assume that a single, uniform package of defensive measures and policies is implemented for every computer on campus (although our model supports the ability to individually analyze subsystems). This assumption allows us to disregard any security-related interactions between differently-protected subsystems. +- At most one security measure of each type. Two security measures designed to protect against the same category of threat are highly likely to have overlapping capabilities. If we have two spam filtering programs, we would expect spam email detected by one program to be flagged by the other, so that it is unlikely that operating both is profitable. On the other hand, this sort of redundancy could be desirable as a protection against system failure. +- No redundancy or synergy among security measures of different types. 
The presence or absence of a security measure or policy of one type cannot impact the effectiveness of a security measure or policy of any other type. In practice, system administrators could use the information provided by a network intrusion detection system to adjust the configuration of a firewall, improving its performance; but in the absence of any relevant data, ignoring these effects seems to be a relatively benign simplification of the problem.

- Five-year time frame. The procurement and installation of a security measure is a one-time cost, while the associated maintenance costs and security benefits accumulate over time. Given the rapidly changing nature of security threats and computer technologies, it is reasonable to compare the net costs of security measures over five years.

- Costs of security measures are independent. The prices of different security products are independent, and we neglect any financial effects of installing multiple products at one time; simply put, there is no bulk discount. In this respect, our model is overly pessimistic: apart from any discounts, bulk installation would also be faster and cheaper. However, this assumption allows our model to encompass systems with pre-existing components.

# Cost Equation

Any analysis of risk requires a way to compare two security configurations and choose the better one. Our model accomplishes this by measuring the dollar amount that a security system will save over the next five years. This dollar amount has two components: attack costs and sunk costs. Attack costs accrue from information attacks and the resulting litigation, data loss, loss of consumer confidence, and so on. Sunk cost is the price of implementing the security system plus the cost of maintaining it over five years. Additionally, the sunk cost includes the dollar estimate of the gain or loss in productivity caused by the security system. The sum of these two costs is the total cost.
# Attack Cost

To measure the cost of an attack, we could lump all costs together and assume that all security measures reduce that total cost. To do so would be overly simplistic, since three security measures that all reduce the same aspect of cost are not necessarily as effective as three that reduce different components.

We break the total risk into three components: information confidentiality, data integrity, and system availability [Levine et al. 2003]. The relative importance of the indices for confidentiality, integrity, and availability ($C$, $I$, and $A$) for a given company will drive its choices in security measures.

Table 1 of the problem statement allows us to break down the baseline cost (of no security whatever) into the three risk categories: BaseCostC = $4.3 million, BaseCostI = $3.585 million, and BaseCostA = $1.045 million, corresponding to confidentiality, integrity, and availability.

Each security device or policy affects $C$, $I$, and $A$ in terms of percentage changes $dC$, $dI$, and $dA$ from the initial values of 1. Positive changes reflect higher levels of confidentiality, integrity, and availability, so costs from attacks should decrease as the indices increase. We offer the following equation for the cost of an attack given $n$ categories of security features and policies:

$$
\text{AttackCost} = \text{years} \times \left( \frac{\text{BaseCostC}}{\prod_{i=1}^{n} (1 + dC_i)} + \frac{\text{BaseCostI}}{\prod_{i=1}^{n} (1 + dI_i)} + \frac{\text{BaseCostA}}{\prod_{i=1}^{n} (1 + dA_i)} \right)
$$

With no security features, the indicated products are all 1 and we get the baseline attack costs.

# Sunk Cost

There are two aspects to sunk costs: money spent on security measures and change in productivity.
The money spent includes the one-time cost of purchasing or implementing the measure or policy, maintaining it, and training individuals in its use. This amount can depend on the number of users, computers, and IT staff trained to work with the measure. We divide IT staff into two categories, help-desk workers and system administrators. The cost of training IT staff for each product is assigned to the appropriate category of staff. This provides a bit more realism for the model, as help-desk workers in general do not require as much training as system administrators.

The second aspect of the sunk cost is the change in productivity, $P$, which works much like the $C$, $I$, and $A$ indices used to determine attack costs. Increases in productivity should lead to decreases in the sunk costs, since increased productivity lessens the cost of the security feature; and the increase in productivity depends only on the existence of the security features, not on attacks. To calculate the change in productivity from the baseline, we subtract the base productivity value, getting

$$
\text{SunkCost} = \sum_{i=1}^{n} (\text{procureCost} + \text{maintCost} + \text{trainCost}) + \text{years} \times \left( \frac{\text{BaseValueP}}{\prod_{i=1}^{n} (1 + dP_i)} - \text{BaseValueP} \right)
$$

BaseValueP is the product of the number of users and the productivity per user, obtained by estimating the number of hours per year that the average user spends using the university's IT services and the replacement cost of those services (as estimated by our team). We arrived at BaseValueP = $12 million. (Later, we analyze the sensitivity of the model to this value.)

The costs to purchase or implement and maintain security depend on the numbers of computers, users, and IT staff. Table 1 lists the values that we chose to simulate the university.

Table 1.
Fixed input parameters for the model.

| Variable | Value | Variable | Value |
|---|---|---|---|
| Computers | 17,900 | BaseCostC | $4.3 million |
| System administrators | 16 | BaseCostI | $3.585 million |
| Help-desk staff | 55 | BaseCostA | $1.045 million |
| Users | 17,000 | Productivity per user | 365 |
| Years | 5 | | |
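The attack-cost and sunk-cost equations can be sketched directly in code with the Table 1 parameters. This is a minimal sketch; the feature values used in the examples below are hypothetical, not drawn from the problem's enclosures.

```python
# Sketch of the cost equations, using the Table 1 parameters.
# Any feature dictionaries passed in are hypothetical illustrations.
from math import prod

YEARS = 5
BASE_COST_C = 4.3e6    # baseline confidentiality cost, $/year
BASE_COST_I = 3.585e6  # baseline integrity cost, $/year
BASE_COST_A = 1.045e6  # baseline availability cost, $/year
BASE_VALUE_P = 12e6    # baseline productivity value, $/year

def attack_cost(features):
    """Five-year attack cost: base costs deflated by the security indices."""
    return YEARS * (
        BASE_COST_C / prod(1 + f["dC"] for f in features)
        + BASE_COST_I / prod(1 + f["dI"] for f in features)
        + BASE_COST_A / prod(1 + f["dA"] for f in features)
    )

def sunk_cost(features):
    """One-time and recurring costs plus the productivity change."""
    fixed = sum(f["procure"] + f["maint"] + f["train"] for f in features)
    productivity = YEARS * (
        BASE_VALUE_P / prod(1 + f["dP"] for f in features) - BASE_VALUE_P
    )
    return fixed + productivity

def total_cost(features):
    return attack_cost(features) + sunk_cost(features)

# With no features, the products are all 1, recovering the baseline:
assert attack_cost([]) == YEARS * (BASE_COST_C + BASE_COST_I + BASE_COST_A)
```

Here a configuration is simply a list of feature dictionaries holding the $dC$, $dI$, $dA$, $dP$ multipliers and the three cost components; the null option in a category corresponds to omitting that category from the list.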
# Total Cost

Combining our two equations, we get

$$
\begin{aligned}
\text{TotalCost} = {} & \sum_{i=1}^{n} (\text{procureCost} + \text{maintCost} + \text{trainCost}) \\
& + \text{years} \times \left( \frac{\text{BaseCostC}}{\prod_{i=1}^{n} (1 + dC_i)} + \frac{\text{BaseCostI}}{\prod_{i=1}^{n} (1 + dI_i)} + \frac{\text{BaseCostA}}{\prod_{i=1}^{n} (1 + dA_i)} + \frac{\text{BaseValueP}}{\prod_{i=1}^{n} (1 + dP_i)} - \text{BaseValueP} \right)
\end{aligned}
$$

# Input

The $C$, $I$, and $A$ multipliers and prices for each security measure are obtained from Enclosures A and B of the problem statement. We reduce the values for $P$ by a factor of 10 to reflect reasonable changes in productivity due to any single product. We also ensure that each security category has a null option, with a cost of zero and values of 1 for $C$, $I$, $A$, and $P$. We create two categories of IT staff and split training costs for system administrators between the two categories.

# Models and Approaches

We explore several approaches to optimizing the university's security configuration using the cost formula above.

# Brute-Force Computation

Calculating the net cost of every combination of security features allowed by our assumptions and picking the cheapest would be guaranteed to find the best security configuration within the parameters of our model. However, evaluation of the cost formula would be computationally intensive. If $j_i$ security features (including the null feature) are available in the $i$th category, one feature can be chosen from each category in $j_1 \times j_2 \times \dots \times j_n$ distinct ways.
Once a set of features has been chosen, calculating the resulting effects on confidentiality, integrity, availability, and productivity requires $4(n - 1)$ multiplications. Comparison of all possible security configurations thus requires

$$
4(n - 1) \prod_{i=1}^{n} j_i
$$

multiplications. For the security features available to the university, this would be $4.77 \times 10^{12}$ multiplications.

While many of these multiplications are repeated many times, allowing a good algorithm to reduce the total number of calculations drastically, the brute-force approach is nonetheless too numerically intensive to produce results within a reasonable time frame.

# Refined Brute Force

Reducing the number of security features under consideration could substantially reduce the number of calculations required for a brute-force approach. To do this, we calculate the net cost of each security feature in isolation. If a feature, installed alone, results in more costs due to procurement, maintenance, and lost productivity than savings due to increased security, we assume that the feature will not become profitable as part of the optimal security configuration.

This assumption is plausible but not guaranteed. A security policy such as allowing wireless networking, which permits a great increase in productivity at the expense of confidentiality, integrity, and availability, might become profitable in combination with a (hypothetical) inexpensive combination of security measures that increase the $C$, $I$, and $A$ indices more than enough to compensate. But most of the security measures that we are considering have small effects on productivity; their net cost is determined primarily by their installation and maintenance costs and their security benefits, which are proportional to $C$, $I$, and $A$.
Such measures, if unprofitable on their own, are almost certain to become even less profitable in combination with other measures that have already reduced the attack costs. We decided that such security features are safe to neglect. This eliminates 34 of 83 technological measures and settles 4 of the policy choices studied (many of which would have cost well over $1 million over five years), reducing the number of necessary multiplications to 600,000.

# Cherry-Picking

As an alternative to reducing the size of the search space, we created an iterative algorithm to construct a security configuration. Starting from an undefended network, we repeatedly add the most profitable security feature available until one technological measure (possibly null) from each category has been acquired and all policy choices have been made. In addition to producing an effective overall security system, this process also provides an outline for acquiring security features piecewise, as could be required on a limited budget.

# Results

# Model Results

We ran both the Refined Brute Force and Cherry-Picking algorithms on the data set provided in the problem statement. The resulting total costs can be compared to each other but are not useful without a frame of reference. To provide one, we ran both algorithms again, this time minimizing only one component of the cost, either attack cost or sunk cost. In both cases, both algorithms produced exactly the same security configuration and total cost, thereby giving lower bounds on the attack cost and the sunk cost for the data set. We illustrate tradeoffs between attack costs and sunk costs in Figure 1. Lines of slope $-1$ consist of points with the same total cost. Table 2 shows the security configurations for points $A$, $B$, $C$, and $D$.

![](images/0651448176517b9afc573daf24dce63fb53cbde2ac4bc0c1a741037499e552c8.jpg)
Figure 1. Sunk costs vs. attack costs. Diagonal lines are made up of points with equal cost.
Point $A$ minimizes the cost of an attack, regardless of the sunk cost. Point $B$ is chosen by the Cherry-Picking algorithm. Point $C$ is chosen by the Refined Brute Force algorithm. Point $D$ minimizes the sunk cost, regardless of the cost of an attack.

No configuration can have an attack cost lower than that of point $A$ nor sunk costs lower than those of point $D$. The vertical line through $A$ and the horizontal line through $D$ intersect at the theoretical lowest total cost: a combination of security features that costs almost nothing and reduces the cost of attack to almost nothing.

Table 2. Products and policies for points $A$, $B$, $C$, and $D$ in Figure 1.

| Category | A | B | C | D |
|---|---|---|---|---|
| Host Firewall | Intelli-Scan | Lava | Barrier | none |
| Net Firewall | Enterprise Lava | Enterprise Lava | none | none |
| Host Anti-Virus | BugKiller | Anti-V | Anti-V | none |
| Net Anti-Virus | Enterprise Stopper | System Splatter | System Splatter | none |
| Net IDS | Network Eye | Watcher | Watcher | none |
| Net Spam Filter | Spam Stopper | Spam Stopper | Spam Stopper | none |
| Net Vulnerability Scan | none | none | none | none |
| Data Redundancy | none | none | none | none |
| Service Redundancy | none | none | none | none |
| Password Policy | Strong | Strong | Strong | Strong |
| Security Audit? | Formal | Formal | Formal | none |
| Wireless? | none | none | none | none |
| Removable Media? | none | none | none | none |
| Personal Use? | Restricted | Restricted | Restricted | Restricted |
| User Training? | Required | Required | Required | none |
| IT Staff Training? | Required | none | none | none |
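The Cherry-Picking procedure described above is a greedy loop: at each step it settles the single remaining category choice that yields the largest drop in total cost. A minimal sketch, using a toy catalog and toy cost model rather than the problem's data set:

```python
# Greedy "Cherry-Picking" sketch: repeatedly settle the (category, feature)
# choice with the largest saving in total cost until every category is decided.
# The catalog and cost model below are toy stand-ins, not the paper's data.

BASELINE_ATTACK = 100.0  # hypothetical attack cost with no defenses

def toy_cost(features):
    """Toy total cost: sunk costs plus a multiplicatively reduced attack cost."""
    sunk = sum(f[1] for f in features)
    attack = BASELINE_ATTACK
    for f in features:
        attack *= 1 - f[2]  # each feature removes a fraction of remaining risk
    return sunk + attack

def cherry_pick(catalog, total_cost):
    """Greedily choose one feature (or None, the null option) per category."""
    chosen = {}

    def active(extra=None):
        feats = [f for f in chosen.values() if f is not None]
        return feats + [extra] if extra is not None else feats

    while len(chosen) < len(catalog):
        base = total_cost(active())
        # Largest saving over all unsettled (category, feature) pairs.
        saving, cat, feat = max(
            ((base - total_cost(active(f)), c, f)
             for c, options in catalog.items() if c not in chosen
             for f in options + [None]),
            key=lambda t: t[0],
        )
        chosen[cat] = feat
    return chosen

# Toy catalog: each feature is (name, sunk cost, fraction of risk removed).
catalog = {
    "firewall":  [("Lava", 10.0, 0.30), ("Barrier", 25.0, 0.50)],
    "antivirus": [("Anti-V", 5.0, 0.20), ("Pricey-AV", 60.0, 0.25)],
}
picks = cherry_pick(catalog, toy_cost)
```

With this toy data, the loop picks Barrier first and then Anti-V. Because each saving is evaluated against the features already chosen, the order of acquisition doubles as a purchase schedule for a limited budget, as the paper notes.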
# Deviation from Expected Attack Rates

In Figure 1, points $B$ and $C$ fall almost on the same line, so their total costs are almost equal. We would like to be able to distinguish between two points with similar total costs but different divisions of this total between sunk and attack costs. One way to do so is to consider what happens if the rate of successful attacks differs from what was expected.

- Suppose that the government cracks down on computer crime and only half as many attackers manage to break into the university networks; costs due to attacks will be half as much as before, benefiting those who spent more on attack costs than sunk costs.
- On the other hand, suppose that the number of attackers is more than anticipated. In this case, the amount spent on attacks is much larger than expected, so those who spent more on attack costs than sunk costs end up spending more than they had planned.

Figure 2 illustrates this point. A subscript 1 corresponds to cutting the rate of attack in half, a 2 to expected attack rates, and a 3 to doubling the attack rate.

![](images/e8d102750e215b14ac14136045f8c3b37969bd9c0d2616e3cfc16b65ac051bc6.jpg)
Figure 2. Attack cost vs. sunk cost for varied attack rates. Points $A$, $B$, $C$, and $D$ correspond to minimizing attack costs, the total cost using the Cherry-Picking algorithm, the total cost using the Refined Brute Force algorithm, and the sunk costs, respectively. Subscript 1 corresponds to half as much cost from attacks as expected, 2 to the expected cost from attacks, and 3 to twice the cost expected.

Among the points with halved attack rate, $C_1$ has the lowest total cost. However, when the attack rate is increased to twice the expected value, $B_3$ overtakes $C_3$, indicating that although the Refined Brute Force algorithm ($C$) is best for low attack rates, the Cherry-Picking algorithm ($B$) surpasses it when the attack rate increases.
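The crossover in Figure 2 can be made precise: if total cost is sunk cost plus a rate multiplier times attack cost, two configurations trade places at a break-even multiplier. The dollar figures below are hypothetical, chosen only to mirror the qualitative situation of points $B$ and $C$:

```python
# Break-even attack-rate multiplier for two configurations with different
# sunk/attack splits. The dollar figures are hypothetical illustrations.

def break_even(sunk_b, attack_b, sunk_c, attack_c):
    """Rate multiplier r at which sunk_b + r*attack_b == sunk_c + r*attack_c."""
    return (sunk_c - sunk_b) / (attack_b - attack_c)

# Suppose B spends more up front but bears less attack cost than C ($ millions):
sunk_b, attack_b = 5.0, 4.0
sunk_c, attack_c = 3.0, 5.0
r = break_even(sunk_b, attack_b, sunk_c, attack_c)  # r == 2.0 here
# For rate multipliers below r, C has the lower total cost; above r, B does.
```

This is the geometry behind the figure: scaling the attack rate slides each point horizontally, and the configuration with the smaller attack component eventually wins.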
# Variation of Parameters

To test the robustness of our results, we individually varied BaseCostC, BaseCostI, BaseCostA, and BaseValueP by a factor of 2 and by a factor of $1/2$. These variations had a small effect on the security configurations chosen by the Refined Brute Force algorithm. The choice of host-based firewall varied the most in response to changes in values, with five of the nine configurations choosing Barrier, one choosing Watertight, and three choosing Lava. The next most varied was the choice of network-based firewall, with six choosing none and three choosing Enterprise Lava. Two of the configurations had single choices that differed from the other eight. These results suggest that our model is only somewhat sensitive to variations in these parameters with the restricted data set used by the Refined Brute Force algorithm.

We varied the same parameters by the same factors using the Cherry-Picking algorithm. Though the actual security configuration changed less than with the Refined Brute Force algorithm, the order in which each security feature was chosen varied from the norm in all but one case. Even when the order differed, it usually did not do so until the seventh or eighth purchase out of 16.

The two algorithms responded similarly to variation in the parameters except in the firewall categories, where there were consistent differences. The Refined Brute Force algorithm chose Barrier and no network-based firewall, whereas the Cherry-Picking algorithm chose Lava and Enterprise Lava in 10 instances out of 18. Thus, the two algorithms give generally consistent results, and the inconsistencies are systematic to some degree.

# Extensibility

The model makes no assumptions about the kind of organization under analysis; it is thus highly adaptable and can be used to analyze any company or organization's computing resources.
Our model simply requires three pieces of data to do its computations:

- A list of the currently available technologies and their expected impacts on confidentiality, integrity, availability, and productivity.
- The number of computers, users, and system administrators that the system is expected to support.
- How much the organization currently spends on confidentiality, integrity, and availability, as well as the estimated value of each user's productivity.

Since these parameters are not specific to any type of company or organization, any organization can be modeled, from universities to on-line banking to Internet search engines (see below).

# Subsystems

Our model can also analyze the security tradeoff of applying security features to only a subset of a larger system, such as buying a firewall for only one subnet on a university campus.

By breaking a larger system down into smaller subsystems, we can secure each subsystem more effectively, because we can choose a completely custom set of security measures for each subsystem individually. Each part of the organization's computing facilities can then implement only the security that is most effective for it, rather than following global security policies that only slightly benefit the particular subsystem. This not only increases the overall security of the system but also substantially decreases the cost of the security system.

Our model can also be used if the organization later decides to merge two subsystems or divide an existing system into several parts, even after the initial security system is in place.

# New Technology

The constantly changing face of security makes periodic updating essential. Fortunately, our model allows analysis of security systems already in place, so it can re-evaluate an existing security system to see if it can be updated. Systems already in place receive an implementation cost of zero.
Due to their presumed age, their effective confidentiality, integrity, and availability must be recalibrated in light of emerging technologies.

Once we enter the old systems into the database, we use our analysis to determine if they are still financially viable. If maintenance costs exceed estimated utility, the systems' use should be discontinued. The analysis will additionally suggest upgrades or additional security systems that would be profitable to install.

# Implementation Costs

However, projected income is not the only concern. Companies must also consider their limited cash on hand. For example, a security system that would save money over the next five years might be infeasible because the company does not currently possess enough financial reserves to cover the initial costs.

The Cherry-Picking algorithm deftly addresses this concern, picking the most profitable systems first. In this manner, the company can most effectively allocate its limited financial reserves to the systems that will effect the largest profit increases. It need only start at the top of the list and work its way down until the security budget is expended.

# Web Search Engines

The priorities of Web search engines are very different from those of universities. Therefore, a search engine's opportunity costs (as presented in the problem statement's Table 1) are quite differently distributed.

# Consumer Confidence

Consumer confidence is of paramount importance to Web search engines. Nearly all search engines rely on advertising as their primary source of income. However, advertisers are interested in advertising only with popular search engines, where the most users will see their ads. The financial situation of a search engine is intimately tied to its consumer confidence (usage), so it follows that a search engine's opportunity costs are primarily proportional to the consumer confidence category.
# Service Reconstruction

Service reconstruction is closely related to consumer confidence. After all, a search engine can be viewed as a company that provides a solitary service: searching the Web. If a search engine is unable to provide this service to its users because of a security breach, it will lose consumer confidence: The longer the outage, the worse its effects. Thus, service reconstruction is another category, closely tied to consumer confidence, in which search engines are interested.

# Direct Revenue Loss

Although especially vulnerable to attacks that damage consumer confidence, such as a denial-of-service attack, search engines are not susceptible to financial attacks such as a salami attack. The search engines' relative immunity to such attacks stems from the lack of financial transactions that involve the Website. Simply put, search engines do not have any sensitive data such as credit-card and bank-account numbers. This means that there is really no way for an attacker to steal money from the search engine, rendering the direct revenue loss category rather inconsequential.

# Proprietary Data Loss

Not only do search engines not store sensitive financial data, they do not store any private data at all. The purpose of a search engine is to allow people to quickly find data that is available to everyone. Therefore, all the information cached by a search engine is freely available to anyone with an Internet connection, meaning that search engines have very little proprietary data to lose. Attackers are therefore unlikely to cause proprietary data loss.

# Data Reconstruction

This complete lack of proprietary data also helps the search engines with regard to data reconstruction. Since all the information housed by a search engine is freely available, attackers are unlikely to attempt to corrupt or sabotage the data. Moreover, if any data is corrupted, it can be restored by downloading a fresh copy from the Internet.
From a search engine's point of view, the entire Internet is a backup copy.

# Litigation

The majority of a search engine's users pay no fee, and so have little grounds for litigation, since the search engine has no legal obligations to them.

More problematic from a legal perspective are the advertisers. The search engine is contracted by the advertiser to display an ad in a certain manner. If the search engine is unable to do so because of the nefarious work of an attacker, then the advertisers have grounds for a lawsuit. However, the situations that would earn the ire of advertisers are exactly the same ones that would lower consumer confidence, that is, the site going offline. Therefore, while some attention must be paid to legal ramifications, the defensive measures involved would be the same as those for the consumer confidence category.

# Directions of Further Work

The first priority of future work is to remove our assumption that one set of security features will be applied to every computer system within the organization. At present, entire categories of features (data and system redundancy) are excluded from configuration because they are far too expensive to implement campus-wide. In fact, those features are not intended to be applied on such a large scale but only to critical systems and services. Our model is entirely capable of handling the implementation of different security features on subsets of an organization's computer systems; doing so would require only a breakdown of the university's computing needs into subnetworks, each with its own productivity and costs associated with confidentiality, integrity, and availability.

Another valuable refinement of this model would be in the method for assessing BaseCostC, BaseCostI, BaseCostA, and BaseValueP for different components of the university or other organization.
Hsiao [1979] assigns to each component of the IT network not only a value but also a probability of attack; the product of the two gives the expected cost of attack for that component. This cost could be broken down into costs due to $C$, $I$, and $A$, allowing us to consider each component individually and find its ideal security configuration. We could then go a step further and figure out how to group different components to share security features or policies in a cost-effective manner.

The value of our results would also be enhanced by relaxing our assumptions regarding redundancy and synergy among security features. Two security features could interact, with effects difficult to judge from information on the individual effect of each. Quantitative estimates of such interactions could be obtained in the same way as the data on individual security features, by polling industry experts or experienced system administrators.

To highlight a particularly important synergy effect, a more-detailed model would acknowledge that as an organization's systems become more resistant to attack, not only will fewer attacks succeed but the organization will present a less-appealing target, so fewer attacks will be launched. The provided estimates of security effects may take this into account for individual security features, but a successful combination of many security measures will have an even greater effect.

We have ignored the variation in the expert estimates of the effects of different security measures and policies. A more-detailed analysis could easily produce estimates of the uncertainty in our predictions of the savings resulting from each security configuration we propose. An improved version of our model would give priority to a security feature whose effects are known with reasonable certainty over a feature that is expected on average to be more beneficial but in which we can have little confidence.
Finally, to make this model more effective, it is essential to expand the number of security measures and policies considered. The nine types of technological defensive measures and seven policy defenses considered here hardly represent the entire spectrum of approaches. For example, no consideration has been given to physically protecting the university's hardware, a legitimate information technology concern with definite potential effects on the $C$, $I$, and $A$ factors considered in the model. Perhaps worse, we currently consider only one nonspecific "user training" policy. Some form of user training is the best defense against "social engineering" attacks, which are already a major unrealized vulnerability and likely to become only more common in the future. Research into available measures to address physical and other security factors, and a closer examination of user-training possibilities, would make our model potentially much more powerful.

# References

Greenberg, Eric. 2003. *Mission-Critical Security Planner: When Hackers Won't Take No for an Answer*. New York: Wiley.

Honeynet Project. 2003. Know your enemy: Honeynets. What a honeynet is, its value, how it works, and risks/issues involved. http://project.honeynet.org/papers/honeynet/index.html. Last modified 12 November 2003.

Hsiao, David K. 1979. *Computer Security*. New York: Academic Press.

Levine, John, Richard LaBella, Henry Owen, Didier Contis, and Brian Culver. 2003. The use of honeynets to detect exploited systems across large enterprise networks. *Proceedings of the 2003 IEEE Workshop on Information Assurance*, United States Military Academy, West Point, NY, June 2003. http://www.tracking-hackers.com/papers/gatech-honeynet.pdf.

Spitzner, Lance. 2003. Honeypots: Simple, cost-effective detection. http://www.securityfocus.com/infocus/1690. Last updated 30 April 2003.
# Making the CIA Work for You

Warren Katzenstein

Tara Martin

Michael Vrable

Harvey Mudd College

Claremont, CA

Advisor: Jon Jacobsen

# Summary

We develop a general model for formulating network security systems with minimal costs. Applying the model to a hypothetical university results in a security system that costs $35\%$ less than no security system. For a search-engine company, we create a system that costs $55\%$ less than no security system.

Our model uses the standard categories of confidentiality, integrity, and availability (CIA) to create a security profile for a company. We determine an optimal combination of security measures from a given database. The result is a model that is flexible enough to assess, initially and periodically, a variety of company models and to incorporate changes in security technology.

Our database groups defense measures into categories, and the model selects at most one from each category. To combine measures, we assume that the essential functions of each category do not overlap. We rely on estimated CIA values and costs over a fixed length of time to compare defense measures. We use sensitivity analysis to indicate in which categories a particular product is most effective and in which categories the choice does not matter.

To improve our analysis of a university campus network, we next divided the university into subnetworks by department. Analyzing each separately provides a $2 million reduction in cost over the whole-campus analysis.

The model can also be used to find appropriate updates to an existing security system; however, it does not take into account the decrease in the effectiveness of the proposed security system over time.

# Introduction

We propose a risk assessment model to evaluate the costs associated with network defense and to suggest a cost-optimal set of defense measures, according to the needs of an organization.
We apply our model to a university network and to a Web search-engine company.

# Network Security Risk Assessment Model

# Terminology

Defense or Defensive Measure: A technical measure or a policy used by the organization to limit the potential cost of security problems.

Defense Class: A group of related defensive security measures, such as host-based firewalls or anti-virus programs.

Confidentiality (C): The need to protect sensitive data from falling into the wrong hands.

Integrity (I): The need to prevent data modification by those who are not authorized to do so.

Availability (A): The need for computer systems to function properly and be available for use by authorized users.

# Assumptions

- We use reasonable estimates, based on the provided potential costs of security attacks.
- The effect of each defensive security measure on system security can be quantified.
- Four-year network lifespan over which costs should be minimized.
- Estimates remain valid for the duration of the time period. The security risks and effectiveness of defensive measures do not appreciably change over time.

# Basic Model

We are concerned with three types of costs faced by the university:

Risk Cost: Also referred to as "opportunity cost," this is the potential cost of dealing with security problems, including litigation, data loss, reconstruction, and direct revenue loss.

System Cost: The cost to implement the defensive security measures.

Productivity Cost: Costs associated with a loss in productivity from various security policies.

The goal is to minimize the total of these three costs.

# Estimating System Costs

We use

$$
\sum_{\text{defense measures}} (\text{procurement} + \text{annual cost} \times \text{time}).
$$

# Estimating Risk Costs

We break risk costs down according to whether the risk is due to a loss of confidentiality, integrity, or availability (CIA) in the system.
Our procedure is:

- List possible costs that may be incurred.
- For each possible cost, estimate the monetary loss that would result if the event occurred with no security measures in place.
- Estimate the likelihood of the event occurring, expressed as the expected number of times the cost would be incurred per year. Multiply this by the monetary cost to get the scaled risk cost.
- Apportion the total risk among the three risk factors (confidentiality, integrity, and availability). Divide the scaled risk cost up according to these proportions to give the scaled risk contribution to each risk factor.

Summing the scaled risk contributions for each risk factor gives the total initial risk cost per factor. That is, if $F$ is a risk factor (one of CIA) and $R_F$ is the risk cost due to $F$, then

$$
R_F = \sum_{\text{all incidents}} [(\text{cost of a security incident}) \times (\text{expected number of incidents}) \times (\text{importance of } F \text{ in the attack})].
$$

We introduce three risk factors, denoted $C$, $I$, and $A$, for confidentiality, integrity, and availability. By convention, a risk factor of 1 denotes no change from the initial risk cost; values larger than 1 denote improved security (and hence lower cost). The final adjusted risk cost is calculated by dividing the initial risk cost for each category by the corresponding risk factor.

The addition of network and computer security measures lowers the risk costs. Each defensive security measure is evaluated according to how well it protects the confidentiality and integrity of data and the availability of systems.

# Estimating Total Costs

The total cost may depend on the number of computers, number of system administrators (sys admins), and other variables. Some costs are one-time procurement costs, while others are ongoing (yearly) costs.
In our model, we consider the costs for a fixed number of years but report the average yearly costs. We spread one-time procurement costs over the number of years modeled.

Each defensive measure has a potential impact on the productivity of users, which is measured as a percentage, interpreted relative to the salaries of the affected users. To compute productivity costs, a productivity factor $P$ is computed in the same manner as $C, I,$ and $A$, and then the total salary of all users is divided by $P$. We report the difference between this value and the original total.

# Interaction Between Defenses

A defensive strategy combines many different measures, so it is important to understand how combinations of measures affect the total cost.

In some cases, defensive measures are complementary: Anti-virus software and a firewall protect against different threats, and so the total effect can be treated as cumulative. But installing 10 anti-virus products on a single computer does not provide 10 times the protection of a single product, since most anti-virus products protect against the same types of attacks.

We use at most one defensive measure of each type (host-based anti-virus, spam filter, etc.). We allow host-based and network-based products of the same type to co-exist, since their strengths are somewhat distinct.

To evaluate the total change in $C, I, A,$ and $P$ due to a set of defenses, we use the following rule. Let $S$ be a set of defensive measures and let $\Delta C_s$ denote the change in confidentiality provided by defense $s \in S$. For this single defense, we say that

$$
C = C_{\mathrm{old}} + C_{\mathrm{old}} \Delta C_s = C_{\mathrm{old}} (1 + \Delta C_s).
$$

We generalize to say that for the collection of defenses,

$$
C = \prod_{s \in S} (1 + \Delta C_s)
$$

and similarly for $I, A,$ and $P$.
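The multiplicative combination rule and the resulting adjusted risk cost can be sketched in a few lines of Python. This is a minimal illustration; the function names and example numbers are ours, not values from the defense database.

```python
# Sketch of the multiplicative combination rule and the adjusted risk
# cost. All names and numbers here are illustrative, not database values.

def combined_factor(deltas):
    """Risk factor for a set of defenses: product over s of (1 + delta_s)."""
    factor = 1.0
    for delta in deltas:
        factor *= 1.0 + delta
    return factor

def adjusted_risk(initial_risk, deltas):
    """Final risk cost = initial risk cost divided by the combined factor."""
    return initial_risk / combined_factor(deltas)

# Two hypothetical defenses improving confidentiality by 50% and 20%:
# the combined factor is 1.5 * 1.2 = 1.8, so a $1M initial confidentiality
# risk cost is reduced to roughly $0.56M.
deltas_C = [0.50, 0.20]
risk_C = adjusted_risk(1_000_000, deltas_C)
```

The same computation applies unchanged to $I$, $A$, and the productivity factor $P$.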
# Refined Model Using Subnetworks

Different parts of the university have different security requirements (e.g., the registrar vs. a computer lab), so it does not make sense to choose a single uniform security plan for the entire university.

We treat the university as a collection of different "departments" and specify the initial risks of each department separately (Table 1).

Table 1. Each department's fraction of the total risk of each type. Key:
1. Litigation 2. Proprietary data loss 3. Consumer confidence 4. Data reconstruction 5. Service reconstruction 6. Direct revenue loss

| Department | Risk fractions |
|---|---|
| Academic | 0.20, 0.90, 0.15, 0.20, 0.35 |
| Labs | 0.10, 0.30 |
| Athletics | 0.05, 0.05, 0.05, 0.02 |
| Admissions | 0.15, 0.10, 0.30, 0.20, 0.05, 0.40 |
| Registrar | 0.30, 0.10, 0.40, 0.05 |
| Book Store | 0.05, 0.15, 0.10, 0.05, 0.40 |
| Student Health | 0.30, 0.05, 0.05 |
| Dorm Network | 0.15, 0.10 |
For the most part, we compute costs separately for each department and add to get the total cost; but departments are not analyzed completely independently. Any cost that does not depend on the number of computers is paid only once, even if multiple departments use it; such a one-time cost could represent a site-license cost.

# Search Method

We seek a minimum-cost solution over all possible defensive strategies. An exhaustive comparison of all strategies is not practical; even treating the entire university as a single unit, there are 50 billion possibilities to compare. Fortunately, it is not necessary to test all of these to develop a good defensive strategy.

In most cases, which defense (within a single defense class) is best is not sensitive to what other defenses are employed. That is, usually one or two network-firewall products will be best for an organization regardless of which anti-virus products, spam filters, and other products are also used. This allows us to optimize separately for each defense class; the resulting combination of defenses should then be very near to the global optimum.

We determine for each defense class the measure that seems to function best (averaged over multiple runs, with random selections of other defensive measures). The combination of all "best" defensive measures becomes the candidate best overall method. We then perform one or more "reoptimization" passes, where for each defense class we systematically evaluate all possibilities in the context of the current best guess at an optimum, to see if changes occur.

# Other Extensions

- Network lifespan: We minimize the costs over a fixed number of years, typically four, in our simulations.
- Updating systems: While our model plans the security for a new network before it is built, it can also analyze the security of an existing network and suggest changes to lower the future costs.
- Continual re-evaluation: By running the model whenever changes in available technology or in the security profile of the organization take place, the security system can always be kept up to date.

# Model Strengths

The model

- is flexible, designed to work in different situations from universities to companies,
- can be adjusted easily to incorporate new defensive security measures,
- can recognize differing security needs within an organization,
- can be used to evaluate both new planned networks and existing networks, and
- functions much more efficiently than a brute-force search.

# Model Weaknesses

- We relate possible attack types, defenses against them, and risk costs only through the $C, I$, and $A$ parameters.
- The model is sensitive to the quality of the data.
- We do not account for changes in the parameters with time.
- We do not account for all the ways that defensive measures may interact.

# Results

# University Results

We analyze the best defensive strategy in two cases: when the university is considered as a single unit (Table 2), and when different groups within the university have different security requirements (Table 3).

The overall costs for the two strategies (in millions of dollars) are:

|  | Single-Unit Model | Subnetwork Model |
|---|---|---|
| Risk cost | 2.34 | 2.03 |
| System cost | 6.14 | 3.82 |

Table 2.
Recommended system configuration for the university, treating all computers in the university equally.
| Category | Product |
|---|---|
| Host-based firewall | Intelli-Scan |
| Host-based anti-virus | Anti-V |
| Network-based anti-virus | System Doctor |
| Network-based spam filter | Email Valve |
| Policies | Strong passwords, allow wireless, restricted personal use, user training, sys admin training |
Table 3. Recommended system configuration for the university when the differing security requirements of different groups are considered.
| Department | Recommended configuration |
|---|---|
| Academics | HB Firewall: Intelli-Scan; HB AV: Anti-V; NB AV: System Doctor; Spam: Spam Stoper; Policies: Strong Passwords, Allow Wireless, Restrict Personal Use, User Training, Sys Admin Training |
| Admissions | HB Firewall: Intelli-Scan; NB Firewall: Network Defense; HB AV: Anti-V; NB AV: System Splatter; IDS: Watcher; Spam: Spam Stoper; Policies: Strong Passwords, Disallow Wireless, Unmonitored Personal Use, User Training, Sys Admin Training |
| Athletics | HB Firewall: Intelli-Scan; HB AV: Anti-V; NB AV: Enterprise Stomper; NB Spam: Spam Stoper; Policies: Strong Passwords, Allow Wireless, Unmonitored Personal Use, User Training, Sys Admin Training |
| Bookstore | HB Firewall: Intelli-Scan; NB Firewall: Network Defense; HB AV: Anti-V; NB AV: System Splatter; IDS: Watcher; Spam: Spam Stoper; Policies: Strong Passwords, Disallow Wireless, Unmonitored Personal Use, User Training, Sys Admin Training |
| Dorms | HB AV: Fogger; NB AV: System Splatter; IDS: Watcher; Spam: Spam Stoper; Policies: Strong Passwords, Disallow Wireless, Unmonitored Personal Use |
| Health | HB Firewall: Intelli-Scan; NB Firewall: Network Defense; HB AV: Anti-V; NB AV: System Splatter; Spam: Email Valve; Policies: Strong Passwords, Disallow Wireless, Restrict Personal Use, User Training, Sys Admin Training |
| Labs | HB Firewall: Watertight; HB AV: Anti-V; NB AV: Bug Zapper; IDS: Watcher; Policies: Strong Passwords, Disallow Wireless, Unmonitored Personal Use, User Training |
| Registrar | HB Firewall: Intelli-Scan; NB Firewall: Network Defense; HB AV: Anti-V; NB AV: System Splatter; IDS: Watcher; Spam: Spam Stoper; Policies: Strong Passwords, Disallow Wireless, Unmonitored Personal Use, User Training |
There is a cost savings of $0.31 million in risk costs and $2.32 million in system costs by considering different parts of the university separately. Considering requirements separately, security can be increased at the same time that costs are decreased, because necessary security measures are used where appropriate and cheaper defensive measures are used where more complex ones are not needed.

# Web Search Engine

We also analyze the defensive measures that should be employed by a Web-search company. The initial risk costs are given in Table 4. These data were estimated based on our research into various search-engine companies; we also estimated appropriate risk costs and $C, I, A$ values. Finally, to obtain an optimum security defense, we created two subnetworks.

# Table 4.

Initial risk costs for a search engine. For each category of risk, the fraction of the risk due to confidentiality, integrity, and availability problems is given. The last column gives the contribution of that risk category to the total company risk.
| Category | C | I | A | Fraction of total ($10 M) |
|---|---|---|---|---|
| Litigation | 20% | 20% | 60% | 5% |
| Proprietary Data Loss | 70% | 30% |  | 5% |
| Consumer Confidence |  | 30% | 70% | 30% |
| Data Reconstruction |  | 100% |  | 20% |
| Service Reconstruction |  |  | 100% | 10% |
| Direct Revenue Loss |  | 10% | 90% | 30% |
The rationale for this cost breakdown is:

- Confidentiality: Since search-engine companies have the majority of their data available to consumers, confidentiality is not as important as for a university. Confidentiality is important for financial records and in research and development.
- Integrity: A search-engine company depends on large data sets, so integrity of the data is important. However, accuracy (and hence integrity) of the data plays only a minor role in consumer confidence, direct revenue loss, and litigation.
- Availability: Search engines are utterly reliant on being available to consumers, so the CIA values reflect this, and any opportunity costs directly associated with consumers or advertisers are heavily weighted towards availability.

The recommended configuration suggested by our model is given in Table 5. The risk cost with this setup is $2.8 million (reduced from $10 million), at a system cost of $1.8 million.

Table 5. Defensive security measures chosen for a Web search engine.
| Subnetwork | Recommended configuration |
|---|---|
| Servers | HB Firewall: Lava; HB AV: Bug Killer; NB AV: System Splatter; IDS: Watcher; Spam: Spam Stoper; Policies: Strong Passwords, Disallow Wireless, Unmonitored Personal Use, User Training |
| Administrative | HB Firewall: Intelli-Scan; HB AV: Anti-V; NB AV: Blue Sky; Spam: Spam Stoper; Policies: Strong Passwords, Allow Wireless, Restrict Personal Use, User Training, Sys Admin Training |
No data or service redundancy measures are selected by our algorithm. The commercial data and service redundancy measures in our database are generally quite expensive; for a search-engine company with thousands of computers, the cost is prohibitive. More likely, a search-engine company would develop its own redundancy schemes tailored to its needs.

# Sensitivity

Factor values such as $C, I, A,$ and $P$, as well as cost estimates for policy decisions, are estimates only. To incorporate the uncertainty in these values, we perform a sensitivity analysis using the estimated minimum and maximum factor values for defense measures. (Policy decisions were omitted from this analysis.)

- Each parameter value was randomly chosen from a uniform distribution between the specified minimum and maximum estimate values.
- Using these values, the network security system was optimized with the previously described method.
- The solution defense measures were logged.
- This process was iterated approximately 330 times.

Results are in Figure 1, which plots the frequency with which a defense measure is optimal vs. the number of trials. After sufficiently many trials, the frequency generally stabilizes, indicating a stable choice.

Although the sensitivity analysis was done at the departmental level, several trends were consistent:

- Host-based firewall selection is generally stable, with Intelli-Scan preferred $60\%$ of the time when a firewall is implemented.
- Decisions not to use a network-based firewall are stable, but choices of particular defense measures are not (40–50%).
- Host-based anti-virus is usually split between Anti-V and Bug Killer, with each being chosen in $45 - 50\%$ of trials.
![](images/5d3fbcf543317146621d24cbdf1573f0aa0c0a7224f99770551adba5aa45103f.jpg)
(a) Stable optimum

![](images/d7edffe8730b99057ec779799d580e6da63877b9781c34f668ddb0f3d6b89136.jpg)
(b) Unstable optimum

![](images/f9483c8df877566acd27726b0b97ae17864125d743c21195a9d9aad5dd35617d.jpg)
(c) Split optimum
Figure 1. Sensitivity analysis using randomly chosen parameters, giving the frequency that a defense measure is selected as the optimal choice vs. number of trials. (a) Intelli-Scan is chosen as the best host-based firewall for the academic departments in about $60\%$ of trials. (b) Different network-based anti-virus software programs function about equally well in the academic departments. (c) In the dorms, the optimum host-based anti-virus is split between Anti-V and Bug Killer.

- Network-based anti-virus choices are highly unstable (20–30%).
- Intrusion-detection system choices are stable in areas with a large number of computers (dorms, labs, academics), but much less so in smaller departments (admissions, bookstore, registrar).
- Spam-filter, network vulnerability scanning, data redundancy, and service redundancy choices are all very stable (90–100%).

# Conclusion

To help an organization determine the appropriate set of security measures given its own security needs, we have developed a model for determining the total cost of any security policy. This model:

- takes into account all costs: risk costs, system costs, and productivity costs.
- can distinguish between several types of security problems, arising from failures in confidentiality, integrity, or availability.
- can treat different parts of an organization separately. Not all computers within an organization have the same security requirements; our model can assign them different security policies.
- is flexible enough to satisfy the needs of a range of organizations, whether academic or commercial.
- can be used to choose the security measures for a completely new system or to analyze and suggest improvements to an existing system.
- can efficiently determine a near-optimum solution.

Using our system, we suggest security measures appropriate for a new university and a Web-search company:

- For the university, we suggest a system that reduces expected costs by a third relative to no security system.
- By tailoring the security policy to the different needs of each university subnetwork, we provide a further $2 million savings over a uniform security policy.
- For the Web-search company, our proposed security policy reduces costs by $55\%$.

# Memorandum on Honeynets

To: Mia Boss, Rite-On Consulting Executive

From: Awes Ome, Lowly Assistant

An organization should consider a honeynet to assess possible attack techniques and as a tool for detecting already-compromised systems. Honeynets have proven useful in a university setting but can be applied to any organization, provided methods for data control and data capture are in place.

# Description

A honeynet is in one sense a decoy and in another a tool. It is a network of computers used solely to monitor attempts to gain access to or control of the system. Since the honeynet network is passive, any activity detected is considered a threat. By monitoring and analyzing threats, system administrators can identify how their network can be compromised [Project Honeynet 2003]. Honeynets are thus a tool to identify the weaknesses of a system, new techniques that intruders have developed, and the compromised parts of a network.

# Implementation

To implement a honeynet, one implements the architecture in Figure 2.

![](images/29d7173f34c139fae760031ff01c6bdb57c072badbb62e5c864d24d7638dcacb.jpg)
Figure 2. Honeynet architecture.
The two requirements that must be met are:

- Data Control: Limiting the amount of data that enters or leaves the system, so as to mitigate risk.
- Data Capture: Monitoring and recording all activity within the honeynet system, since the recorded data is what makes honeynets useful.

# Risks

The two main risks an organization would be subject to in implementing a honeynet system are liability and exposure.

# Liability

Organizations can be held liable for any damages a compromised honeynet inflicts on other establishments. If the honeynet is compromised and the intruder is able to bypass the data controls, then the honeynet can be used to initiate malicious attacks on other companies or universities.

# Exposure

A poorly implemented honeynet can also expose the organization and its network to an increased risk of attack. Once an intruder has compromised the honeynet, he is in the system's network and thus can use the honeynet to explore other areas [Brenton 2003; Project Honeynet 2003]. This is why great care needs to be taken in implementing the data-control aspect of the honeynet.

# Benefits

The main benefits the honeynet would provide to the organization are:

- A method to monitor the types of attacks its network is vulnerable to and to detect computers and sub-networks that have already been compromised.
- By analyzing the data collected by the honeynet, system administrators can identify weaknesses in their system and develop methods to eliminate those weaknesses.
- The data a honeynet collects can help system administrators identify data patterns that are indicative of compromised systems and identify systems on the network that are compromised [Levine et al. 2003; Project Honeynet 2003].

In six months of operation, a honeynet system recently implemented at Georgia Institute of Technology detected 16 compromised systems [Levine et al. 2003].
This experiment has shown that honeynets can be effective in a university setting, if deployed properly. Since a university's network is similar to a search engine's, at least in terms of bandwidth and data throughput, companies with large infrastructures also stand to benefit from a honeynet.

# References

Brenton, Chris. 2003. Honeynets. http://www.ists.dartmouth.edu/IRIA/knowledge_base/honeynets.htm.
Briesemeister, Linda, Patrick Lincoln, and Phillip Porras. 2003. Epidemic profiles and defense of scale-free networks. In Proceedings of the 2003 ACM Workshop on Rapid Malcode, 67-75. New York: ACM Press.
Carnegie Mellon Software Engineering Institute. CERT Coordination Center. http://www.cert.org/.
Cooley, Al. 2004. Network security whitepaper: Using integrated solutions to improve network security and reduce cost. Astaro Internet Security. http://techlibrary.networkcomputing.com/detail/RES/1084902218_671.html.
Levine, John, Richard LaBella, Henry Owen, Didier Contis, and Brian Culver. 2003. The use of honeynets to detect exploited systems across large enterprise networks. In Proceedings of the 2003 IEEE Workshop on Information Assurance. http://users.ece.gatech.edu/~owen/Research/Conference%20Publications/honeynet_IAW2003.pdf.
Lipson, Howard F., and David A. Fisher. 2000. Survivability: A new technical and business perspective on security. In Proceedings of the 1999 Workshop on New Security Paradigms, 33-39. New York: ACM Press.
Moore, David, Colleen Shannon, Geoffrey Voelker, and Stefan Savage. 2003. Internet quarantine: Requirements for containing self-propagating code. In Proceedings of the 2003 IEEE Infocom Conference. http://www.cse.ucsd.edu/~savage/papers/Infocom03.pdf.
Honeynet Project. 2003. Know your enemy: Honeynets. http://www.linuxsecurity.com/features/story-95-page2.html.
Schneier, Bruce. 2003. Beyond Fear. New York: Copernicus Books.
Shoniregun, Charles Adetokunbo. 2002. The future of Internet security. *Ubiquity* 3 (37): 1-13.
Teo, Lawrence, Gail-Joon Ahn, and Yuliang Zheng. 2003. Dynamic and risk-aware network access management. In Proceedings of the Eighth ACM Symposium on Access Control Models and Technologies, 217-230. New York: ACM Press.
Zou, Cliff Changchun, Lixin Gao, Weibo Gong, and Don Towsley. 2003. Monitoring and early warning for Internet worms. In Proceedings of the 10th ACM Conference on Computer and Communications Security, 190-199. New York: ACM Press.

# Firewalls and Beyond: Engineering IT Security

Dennis Clancey

Jeffrey Glick

Daniel Kang

United States Military Academy

West Point, NY

Advisor: Elizabeth W. Schott

# Abstract

# Problem

A new university requires defensive measures to protect its network from unauthorized access, alteration of data, and unavailability. Without implementing defensive measures, the university is exposed to an expected loss of $8.9 million per year. Rite-On Consulting Firm has been tasked to conduct a risk analysis of information technology security for the university and to propose a model that minimizes costs while maintaining the highest possible level of security. This analysis addresses emerging technologies as an implied task.

# Considerations

Our model stresses flexibility and simplicity. The model runs in Microsoft Excel, common software. Any company can cheaply tailor this powerful model to its individual needs. It can easily be updated to accommodate new tools and policies that reduce an organization's risk.

# Results

Our model optimizes the mix of security tools and procedures. For the network-based measures, the new university should use the Network Defense Firewall, Enterprise Inoculation anti-virus program, Network Eye IDS, and a strong password policy. Additionally, the university should disallow wireless connections, allow unmonitored personal use, and require user training.

The host-based decisions are divided into three subnetworks.
- The first subnetwork (admissions office, registrar, and health center) should use the Lava firewall, Bug Killer anti-virus, and Robust Solutions service redundancy.
- Both the second subnetwork (academic departments and dormitories) and the third subnetwork (athletic department and bookstore) should use the Intelli-Scan firewall, Bug Killer AV, Sonic Data data redundancy, and Web King SR.

# Conclusions

The model provides the optimal balance of security and risk, based on associated costs. By simply altering the relative importance of security to each network resource, our model can recalculate an optimal solution with three clicks of a button. We are confident that the model for determining the optimal set of security tools and policies will greatly enhance the profitability of the new university for which it is designed. Our procedure and methodology could be used by other universities, businesses, and organizations trying to establish an optimal level of security in an information network.

# Introduction

The creation of a new university requires the development of an information technology network with defensive measures protecting the university's assets from unauthorized access, alteration of data, and unavailability. The new university is expected to lose $8.9 million per year if no effective defensive measures are implemented. However, each defensive measure is extremely costly, and designing an affordable and effective defense requires careful analysis of the costs and benefits of various combinations of defensive measures.

We develop a model to minimize the costs and maximize the benefits in creating a secure network. The model assumes a law of diminishing returns with each additional defensive measure.

The model is developed using a Monte Carlo simulation and requires several critical assumptions. We ran 500 iterations of the simulation to find the optimal combination of defensive mechanisms.
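The iteration loop just described can be sketched as follows. This is a Python illustration rather than the team's Excel implementation, and the candidate suites, costs, and risk-reduction ranges below are hypothetical.

```python
import random

# Monte Carlo search sketch: each iteration draws random tool performance,
# evaluates every candidate suite's total cost (tool cost plus residual
# risk), and records the winner; the suite that wins most often over the
# 500 iterations is recommended. All suite data below are hypothetical.
random.seed(1)

BASE_RISK = 8.9  # expected yearly loss with no defenses, $ millions
SUITES = {
    "firewall + AV":       {"cost": 1.1, "reduction": (0.60, 0.85)},
    "firewall + AV + IDS": {"cost": 1.4, "reduction": (0.70, 0.90)},
    "AV only":             {"cost": 0.4, "reduction": (0.30, 0.60)},
}

def total_cost(suite):
    """Tool cost plus the risk remaining after a random draw of the
    suite's effectiveness."""
    reduction = random.uniform(*suite["reduction"])
    return suite["cost"] + BASE_RISK * (1.0 - reduction)

wins = {name: 0 for name in SUITES}
for _ in range(500):  # 500 iterations, as in the simulation above
    best = min(SUITES, key=lambda name: total_cost(SUITES[name]))
    wins[best] += 1

recommended = max(wins, key=wins.get)
```

Counting wins rather than averaging costs makes the recommendation robust to a few extreme draws of the random performance values.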
The model reveals that the optimal suite of defensive measures costs $1.4 million and is expected to lower expected losses to $1.7 million, for a net savings of $5.9 million.

# Problem Assumptions

- All policies are network-wide. For example, if we decide on a strong password policy, all resources on the network will be in accordance with that policy. Different policies for different departments are not allowed.
- Likewise, network-based security measures (tools) are employed across the entire network. If a particular type of network-based firewall is chosen, it is used to protect the entire network.
- Moreover, each type of network-based tool can be chosen only once: only one firewall option can be used, and it can be used only once. Vertically stacking identical security measures at the network level produces no added benefit.
- Normally distributed observations: The performance data of each tool would follow a normal distribution if additional observations were taken. This is the basis for our creation of iterations; these iterations of independently performing tools are the basis of our Monte Carlo approach.
- Sub-networks: The network is additionally divided into three subnetworks, and we assume that each asset on a particular subnetwork has similar vulnerabilities. This assumption simplifies the use of host-based tools while making it easier for administrators to control uniform defensive measures.
- Combinations: A combination of tools that cover the same defensive measure is not allowed. For instance, two different firewalls cannot be employed at the same time. This is a model simplification that recognizes that similar tools do little to improve the system when used together.

# Problem Approach

We develop a model that uses marginal-benefit/marginal-cost analysis and considers both the cost of defensive measures and the opportunity cost associated with assumed risks.
We create and implement a four-step method to develop the model: Network Infrastructure, Data Analysis, Risk Analysis, and Cost Analysis.

# Network Infrastructure

The network infrastructure depends on the number and function of the computers within each department. The breakdown of computers by department (Table 1) is based on both given information and estimates.

Departments are grouped into subnetworks based on similar functions and security needs. The network topology (Figure 1) creates constraints for the implementation of defensive measures. All hosts in each subnetwork must assume identical defensive measures. The model allows each subnetwork to select an optimal array of defensive measures best suited to its hosts.

Table 1. Breakdown of computers by department.
| Department | Computers |
|---|---|
| 10 Academic Departments | 1,230 |
| Dormitory Complex | 15,000 |
| Department of Intercollegiate Athletics | 30 |
| Bookstore | 15 |
| Admissions Office | 40 |
| Registrar's Office | 35 |
| Health Center | 35 |
| TOTAL | 16,385 |
![](images/72248d584eaa53535d6a1d033674bc87638705a72583c3c9136676820ec48106.jpg)
Figure 1. Proposed university network topology.

# Data Analysis

Every tool and policy has associated costs and benefits. The direct costs come in the form of procurement costs, maintenance costs, and training costs. The benefits are measured by the degree to which a tool can improve (or detract from) user productivity, confidentiality, integrity, and availability. An improvement results in a reduction in opportunity cost. For instance, if a particular tool improves confidentiality by $9\%$, then the opportunity costs associated with confidentiality will be reduced by $9\%$.

Quantitative information was provided in the problem statement enclosures for each piece of data: upper bound value, lower bound value, mean value, and variability level (concentration of the data about the mean).

Not knowing the standard deviation, the number of data observations, or the exact distribution, we simulate values using Crystal Ball (a spreadsheet add-in with random-number generator capabilities [Decisioneering 2004]), taking into account the possible range, the mean, and the variability. The Central Limit Theorem implies that if the number of observations is sufficiently large, then both their sum and their mean have approximately normal distributions, even when the individual variables themselves are not normally distributed [Devore 2000].

We also consider issues relating to the spread of the data (distance between the minimum and maximum measured values). Extreme levels of variability do not necessarily follow the normal distribution; in cases of high variability, the distributions are likely to be flatter ("fatter in the tails") than the normal distribution. In cases of low variability, the curves will be more sharply peaked than the normal distribution.

The function CB.Normal$(\mu, \sigma, \min, \max)$ in Crystal Ball returns a value from a truncated normal distribution with mean $\mu$ and standard deviation $\sigma$ and minimum and maximum values as specified.

To estimate the standard deviations, we divide the range (max - min) by a specified factor depending on the level of variability. We wanted nearly all of the spread to be covered by three standard deviations. We settled on the values in Table 2.

Table 2. Estimation of standard deviation.
| Variability | Typical | Estimate of std. dev. |
|---|---|---|
| high | 0.32 | range/6 |
| medium | 0.20 | range/5 |
| low | 0.10 | range/4 |
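This sampling scheme can be imitated outside of Crystal Ball. The sketch below is our re-implementation, not the CB.Normal function itself: $\sigma$ is estimated as the range divided by a variability-dependent factor, and the truncation is done by simple rejection sampling.

```python
import random

# Re-implementation sketch of the truncated-normal sampling described
# above (not Crystal Ball's CB.Normal itself): sigma is the range divided
# by a variability-dependent factor, and out-of-range draws are rejected.
DIVISOR = {"high": 6, "medium": 5, "low": 4}  # factors from Table 2

def truncated_normal(mean, lo, hi, variability):
    sigma = (hi - lo) / DIVISOR[variability]
    while True:                        # redraw until within [lo, hi]
        x = random.gauss(mean, sigma)
        if lo <= x <= hi:
            return x

# Asymmetric example from the text: min = 0.05, mean = 0.17, max = 0.20.
# Truncation cuts off more of the upper tail than the lower one, so the
# sample mean comes out below the intended 0.17.
random.seed(0)
samples = [truncated_normal(0.17, 0.05, 0.20, "medium") for _ in range(10_000)]
sample_mean = sum(samples) / len(samples)
```

The downward shift of `sample_mean` illustrates the truncation bias that the team confirmed in their trial simulations.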
We were concerned about the accuracy of the simulated data in instances of an asymmetrical distribution (e.g., min = 0.05, mean = 0.17, max = 0.20). Crystal Ball creates a normal distribution with the inputted mean and standard deviation and then truncates it at the upper and lower boundaries. The mean of the resulting distribution can differ from the intended mean, as we confirmed from trial simulations. After all considerations, we designed a spreadsheet that generates actual values for all relevant costs and factors, taking into account levels of variability and ranges of values.

# Risk Analysis

Table 1 of the problem statement quantifies the opportunity costs in dollars for various risks and apportions the risk to the categories of confidentiality, integrity, and availability. The university projects a total opportunity cost of $8.93 million if a network is built without defensive measures.

The next step in the risk analysis process involves the calculation of a subjective vulnerability score for each department. Vulnerability "is a weakness in the security system, for example, in procedures, design, or implementation, that might be exploited to cause loss or harm ... a particular system may be vulnerable to unauthorized data manipulation because the system does not verify a user's identity before allowing data access" [Pfleeger and Pfleeger 2003, 6]. A vulnerability differs from a threat, which is a "set of circumstances that has the potential to cause loss or harm" [Pfleeger and Pfleeger 2003, 6]. We use a threat/vulnerability work table to quantify each risk on a 1-9 scale, thereby allowing each asset and risk category to be prioritized based on a summed value of the vulnerability scores. The priority system allows the model to focus control measures on risks that have the greatest impact (highest opportunity cost) and highest probability of affecting the asset. Table 3 shows the assigned vulnerability scores.

Table 3.
Vulnerability work table.

|                   | Impact: Low | Impact: Med | Impact: High |
|-------------------|-------------|-------------|--------------|
| Probability: High | 3           | 6           | 9            |
| Probability: Med  | 2           | 5           | 8            |
| Probability: Low  | 1           | 4           | 7            |
The table breaks vulnerability into two factors, probability and impact. Probability refers to the likelihood of the threat occurring, while impact is the cost associated with a manifestation of that threat. A category with low probability and high impact is something that doesn't occur very often but could be fairly costly if it does happen. Something with high probability and low impact could happen all the time, but the costs would be minimal.

Another worksheet, entitled "Risk Analysis Vulnerability Weighting System," allows the person conducting the risk assessment to give each department a vulnerability score.

# Cost Analysis

Cost analysis creates a relationship between the opportunity cost associated with assuming risks and the cost of implementing defensive measures. Our model calls for a cost-benefit optimization. The sum of all the costs (in dollars) to which the university is still exposed in the form of risk (given a particular security combination) is represented by $C_R$.

The second main category of costs is the total cost $C_T$ of security tools, which includes all aspects of security (training costs, tools, policy implementation, etc.).

The sum of the two main categories of cost is the total expected expenditure on security-related matters, $E(TC_S)$:

$$
E(TC_S) = C_T + C_R.
$$

The total cost $C_T$ is the sum of each tool's cost multiplied by the quantity used:

$$
C_T = \sum (\operatorname{amt}_T \times \operatorname{cost}_T).
$$

For network-based security measures, the amount of the tool is always assumed to be 1. In contrast, many host-based measures incur multiple costs (per computer or per network).

The risk cost $C_R$ has three components: confidentiality, integrity, and availability. The implementation of each tool leads to a corresponding change in the opportunity cost associated with each component.
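In code, this cost accounting can be sketched as follows; the tool list and dollar figures below are illustrative assumptions, not the enclosure data:

```python
# Sketch of the cost accounting E(TC_S) = C_T + C_R described above.
# Tools and dollar figures are illustrative assumptions only.

def total_tool_cost(tools):
    """C_T = sum(amt_T x cost_T) over the chosen tools."""
    return sum(t["amt"] * t["cost"] for t in tools)

def expected_total_cost(tools, risk):
    """E(TC_S) = C_T + C_R, where C_R is split into the three risk components."""
    c_r = risk["confidentiality"] + risk["integrity"] + risk["availability"]
    return total_tool_cost(tools) + c_r

tools = [
    {"name": "network firewall", "amt": 1,   "cost": 40_000},  # network-based: amt = 1
    {"name": "host anti-virus",  "amt": 200, "cost": 30},      # host-based: per computer
]
risk = {"confidentiality": 1.2e6, "integrity": 0.9e6, "availability": 0.4e6}
print(expected_total_cost(tools, risk))  # 2546000.0
```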
The specific opportunity costs that make up $C_R$ (e.g., litigation, service reconstruction, consumer confidence, etc.) are not in themselves important; the model is concerned with the degree to which a particular security measure changes opportunity costs in terms of confidentiality, integrity, and availability. Thus, $C_R$ can be broken down as

$$
C_R = C_{R_c} + C_{R_i} + C_{R_a}.
$$

The subcosts that make up $C_R$ depend on two pieces of information:

- the total original cost of each component in the absence of security measures $(T_{o}C_{c}, T_{o}C_{i}, T_{o}C_{a})$; and
- the degree to which that original value is decreased, the $\xi$-factor.

So we have

$$
C_{R_c} = T_{o}C_{c} \, (1 - \xi_c).
$$

The complexity of this model increases when we consider all possible combinations of multiple tools. Most notably, the percentages of improvement cannot simply be added when multiple tools are used. If you use two tools, each with a confidentiality improvement of $20\%$, it would be inaccurate to assume that the combined improvement is $20\% + 20\% = 40\%$; in particular, the improvement to risk can never reach $100\%$.

We assume that the incremental benefit of an additional tool diminishes slowly at low levels of total improvement and rapidly at high levels. The best formula that we could find to replicate this phenomenon is the tanh function. The function $y = \tanh(1.05x)$ tracks $x$ very closely until about a $40\%$ degree of improvement ($x = 0.40$), at which point it starts to level off toward an asymptote of 1. Since tanh is an odd function (symmetric about 0), the formula behaves the same way for effects that detract from a given factor level as for those that improve it.

The final step in this model is creating a formula for optimization. As the opportunity cost of risk decreases, the cost of tools increases.
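A minimal sketch of this combination rule (the 1.05 coefficient is the paper's; the improvement values are made up):

```python
import math

def combined_improvement(improvements):
    """Net improvement from stacked tools: y = tanh(1.05 * sum of effects).
    The curve tracks x closely for small x and levels off toward 1, so the
    combined improvement can never reach 100%."""
    return math.tanh(1.05 * sum(improvements))

# Two tools that each improve confidentiality by 20% do NOT give 40%:
print(round(combined_improvement([0.20, 0.20]), 3))  # 0.397
# Five such tools give roughly 78%, not the naive 100%:
print(round(combined_improvement([0.20] * 5), 3))    # 0.782
```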
We need to minimize the overall cost incurred by the system,

$$
\min \left(C_T + C_R\right).
$$

We use the Solver function in Microsoft Excel to perform the optimization.

![](images/093fd55f1f887b869b953c53d30e46b66ade29a077efaf5fd85150a1b49f71fa.jpg)
Figure 2. Net improvement vs. sum of improvement effects ($y = \tanh(1.05x)$).

We use the truncated normal distributions to generate 500 random numbers for each data item, with each number representing a different iteration.

The decision variables are the amounts of each tool that the network would use. Solver searches through the possible combinations of decision variables and chooses the set that minimizes the cost equation over the 500 iterations.

We constrained the Solver function to

- force all decision variables to be integers (to eliminate the possibility of Solver recommending a fraction of a resource, such as $54.34\%$ of a firewall);
- force all decision variables to be nonnegative (so that we would not recommend $-2$ firewalls); and
- choose each tool at most once for the network or each subnetwork (to keep Solver from recommending, say, 16 network firewalls), by constraining the sum of all decision variables for a given tool to be at most 1.

The network policies have additional constraints. For instance, we assumed that we must select either a strong password policy or no password policy, so the sum of their decision variables must equal 1. Similar constraints apply to the wireless-use and personal-use policies. Network-based decision variables are split among the subnetworks, for which similar constraints are imposed. Solver could choose a different combination of security measures for each subnetwork's host computers.

The degree to which a host-based system used on a particular subnetwork improves the overall network is based on the relative weight of importance of that subnetwork.
For instance, if Subnetwork 1 accounts for $50\%$ of the risk to overall confidentiality, and a tool improves it $20\%$, then the use of that tool improves the overall network by $20\% \times 50\% = 10\%$. It was through this use of weights that the host-based options were chosen alongside the network-based tools and policies. The sum of the factor improvements yields the value of $\xi$.

Solver examined every allowable combination and found the one that minimized total cost over the 500 iterations.

# Results

The optimal suite of defensive measures costs \$1.37 million and is expected to lower the university's expected losses to \$1.70 million, for a net savings of \$5.86 million. The tools recommended are:

# Network-Based Tools

- Network Defense Firewall
- Enterprise Inoculation Anti-Virus
- Network Eye IDS

# Network Policies

- Strong password policy
- Disallow wireless
- Unmonitored personal use
- User training required

# Host-Based Tools

- Subnetwork 1 (Adm., Reg., Hlth): Lava Firewall; Bug Killer Anti-Virus; Robust Solutions SR
- Subnetwork 2 (Acad. and Dorm): Intelliscan Firewall; Bug Killer Anti-Virus; Sonic Data DR; Web King SR
- Subnetwork 3 (Athl. and Bkstr): Intelliscan Firewall; Bug Killer Anti-Virus; Sonic Data DR; Web King SR

# Strengths and Weaknesses

# Strengths

The optimization model takes into account the delicate balance between the opportunity costs of the security risks (the value of the assets) and the costs of implementing each additional defensive measure.

The model takes into account the proper use of the defensive measures by optimizing each subnetwork according to its function and requirements. The risk category of integrity would not affect Subnetwork 2 (Academic Departments and Dormitory Complex) as significantly as Subnetwork 1 (Registrar's Office and Admissions Office). Thus, different defensive measures are utilized for each subnetwork and the respective host computers.
By generating reliable observations (based on the normal distributions fitted to the supplied data), we simulated the performance of each allowable combination of tools over the full range of possible defensive performances. Every possible outcome of these two factors was considered (over 500 iterations) to produce an optimal solution.

# Weaknesses

University Infrastructure: The proposed infrastructure is a simplified topology of the university's network, and perhaps not the best one.

# Economics

The optimization model takes into account opportunity costs and the cost of implementing each defensive measure. However, information technology security cannot always be quantified. Certain human factors, behaviors, and other x-factors cannot necessarily be incorporated into a quantitative model.

# Human Factors

When building the model, we did not differentiate between inside and outside attacks. For instance, users in the dormitory complexes are probably more likely to "hack" the system than users in the admissions office. The optimal security design would probably have changed if our model had accounted for these specific considerations.

# User Productivity

The technical data sheets provided give scores that indicate the degree to which each defensive product reduces opportunity costs in terms of integrity, confidentiality, and availability. Our model picks an optimal array of these products by considering costs broken down into these three categories. However, our model fails to consider another metric, user productivity. For every product, the data sheets give a score indicating the degree to which user productivity would be hindered by that defensive measure. Certain designs could lead to excessive slowing of the network, user frustration, prohibition of routine transactions, or reduction of potential profits.
We certainly considered this factor, and the model even calculated the net reduction in user productivity (7%); but we did not assign a cost to user productivity or incorporate it into the objective function. Fortunately, 7% is not excessively large, so the reduction in user productivity appears to be acceptable.

# Improvement by Combinations

The model did not fully explore the degree to which combinations of different tools would affect the overall performance of the system. As a partial solution, we disallowed the use of a single defensive measure twice on the same network. We did not explore the overlap that might be present between separate measures, opting instead to model this phenomenon in terms of diminishing degrees of improvement (via the tanh function).

# Conclusion

Our model for the security of the new university's network provides the optimal balance of security and risk, based on the associated costs. As new technologies arise, they can be added to our current decision matrices.

# Appendix: Honeynet Analysis

# Purpose

To determine whether a university or a search-engine company should consider using a honeynet. This memorandum provides a basic introduction to honeynet strategies. In addition, we highlight innovative techniques for deploying these strategies in a myriad of applicable fields.

# Introduction

Bears like honey. Honey is made by bees; bees hate bears.

The bears of IT are blackhats (hackers). Their objective is to wallow neck-deep in a vat of warm, sweet honey. In this analogy, honey is a forbidden commodity: restricted information. True hackers claim a benevolent mission; others, called "crackers," have malicious aims to compromise network resources.

Regardless of an intruder's aims, all can pose threats to a target system. Network administrators (white hats) need to monitor for instances of suspicious activity. On busy networks, the task of pinpointing unauthorized use is incredibly difficult.
A hacker can appear and vanish across busy resources like a thief disappearing into the bustling crowd of a Chinese street market. To level this playing field, administrators lure hackers into the open with the sight of "easy" honey and pick them off there. Here is how:

Honeypot: an information system resource whose value lies in the unauthorized or illicit use of that resource [Spitzner 2003]. Honeypot resources have no production activity and no authorized activity. Since the honeypot is not a productive system, any interaction with that resource implies malicious or unauthorized use [Honeynet Project 2003]. This assumption of wrongdoing allows administrators to set up complex systems for observing intruder behavior. In doing so, administrators can learn from observations of new hacker techniques. This information fuels the development of updated anti-intrusion systems.

Honeynet: a network of honeypots created for an intruder to interact with.

Honeytoken: While honeypots are traditionally thought of as computers (and other physical resources), a honeytoken broadens that paradigm. Honeytokens can be credit-card numbers, Excel spreadsheets, or even a bogus login [Spitzner 2003]. An example might be a medical file database containing an entry for "John F. Kennedy." Since there is no actual patient with that name, any interaction with that file is assumed to be unauthorized. These tokens can be spread over the network like honey barbecue sauce.

Honey farm: a configuration in which traditional honeypot locations serve as portals, secretly redirecting intruders to one centralized honeynet system. This organization makes monitoring much easier, since there is only a single environment to watch.

# Benefits and Risks

# Benefits [Honeynet Project 2003]

The advantage of a honeynet is that it allows an administrator to gain extensive data on the abilities and tactics of system intruders. The architecture of a honeynet is much like a fishbowl.
It allows administrators to focus completely on a set of unauthorized actions. The traditional method of searching for hackers involved looking through gigabytes of data on a busy network, most of it generated by legitimate use. Searching busy resources is like searching for a needle in a haystack; the honeypot concept serves as a magnet for those needles, so no searching is necessary. The compilation of information on intruders allows a system administrator to tailor the defense of the network.

# Risks [Honeynet Project 2003]

Harm: An attacker may break into a honeynet and then launch an attack that the system cannot forestall. In this case, an attacker will successfully harm the intended victim. Data control is the primary method of reducing this susceptibility to system failure. Each organization must decide how much control it wants. More control allows the intruder to do less, leaving less to be observed; less control allows the intruder more flexibility but increases the possibility of the administrator losing control.

Detection: If an intruder is able to identify a honeynet, the value of that resource (to an observing administrator) is dramatically reduced. An intruder can introduce false or bogus data into the honeynet, causing confusion for the administrator. In addition, an intruder might be able to identify the data-control and data-capture tools employed by the honeynet. If this occurs, an intruder can exploit the system architecture to gain access to non-honeynet resources.

Disable: There is a risk that an intruder will disable the honeynet functionality, possibly without the honeynet administrator realizing it. This risk can be mitigated by having multiple layers of data control and data capture.

Violation: If a honeynet is compromised, an intruder may attempt to use that resource for illegal activity.
For example, the intruder might choose to upload and distribute illegal material, such as stolen credit-card numbers or child pornography. This might cost the company painful litigation and additional penalties if it is found to have been negligent in securing the resources involved.

# Discussion

Although there are many risks associated with creating a honeynet, these risks can be mitigated by using a customized and randomized configuration, layering, some type of dynamism, or other creative means to make detection of the honeynet, and countermeasures against it, difficult to accomplish. Any organization can tailor a honeynet to its acceptable risk exposure.

# Recommendation

A university, a search-engine company, or any other operator of an information system should employ some form of honeypot tactics. Combinations of these strategies allow white hats to seize the initiative in the battle against hackers, crackers, and dishonest employees. Additional cost/benefit analysis should be conducted to create an optimal honeynet configuration.

# References

Decisioneering, Inc. 2004. Crystal Ball. Add-in software to Microsoft Excel under Microsoft Windows. http://www.crystaball.com/crystal_ball/index.html.

Devore, Jay L. 2000. Probability and Statistics for Engineering and the Sciences. Pacific Grove, CA: Brooks/Cole.

Honeynet Project. 2003. Know your enemy: honeynets—What a honeynet is, its value, how it works, and risks/issues involved. http://project.honeynet.org/papers/honeynet/index.html. Last modified 12 November 2003.

Peltier, Thomas R. 2001. Information Security Risk Analysis. New York: CRC Press.

Pfleeger, Charles P., and Shari Lawrence Pfleeger. 2003. Security in Computing. 3rd ed. Upper Saddle River, NJ: Prentice Hall.

Ragsdale, Cliff T. 2004. Spreadsheet Modeling and Decision Analysis. 4th ed. Mason, OH: South-Western.

Spitzner, Lance. 2003. Honeytokens: The other honeypot. http://www.securityfocus.com/infocus/1713. Last updated 21 July 2003.
# Catch Thieves Online: IT Security

Zhao Qian

Su Xueyuan

Song Yunji

University of Electronic Science and Technology

Chengdu, Sichuan, China

Advisor: Du Hongfei

# Summary

We construct an optimal defensive system for IT security for a university network. After estimating whether the security measures' effects are worth the expense, we develop a model that seeks the minimum sum of opportunity costs and defensive-system expense.

The model is composed of three modules.

- Module 1 mainly deals with risk evaluation. We apply the Analytic Hierarchy Process (AHP) to clarify the miscellaneous risks and separate the complex university network into nine simple subsystems.
- Module 2 employs a fast search algorithm to determine a technological defensive system for each subsystem.
- Module 3 determines the policies for the whole university network system and calculates the total cost.

By using our model, we cut the expense from an initial \$8.9 million down to \$3.4 million. At the same time, the model is flexible enough to adapt to changing technological capabilities and can be applied to different organizations. Although the model has strengths such as modularization, high efficiency, and flexibility, it is a pity that we can only play defense; we do not have the initiative. If we want to change that fact, we urgently need new technologies, such as honeynets.

# Introduction

Risks to IT security can be broken down into the three categories of confidentiality, integrity, and availability; hence, we face a problem in multiple-objective programming. Risk evaluation is very complex; there are not only quantitative standards of evaluation but also qualitative standards that are difficult to measure. At the same time, the evaluation is affected by people's economic ideas, so a benchmark cannot be easily determined. In addition, the task of evaluation is dynamic, since it changes with the development of society.
Hence, what we should do is analyze the cost and estimate whether the security system's effect is worth the expense. After the risk evaluation, we can set up a defensive system that balances the opportunity costs against the defensive-system expense, minimizing the total cost.

# Assumptions

- Any complex computer network system can be separated into several unrelated subsystems by function. For example, the bookstore and the registrar's office are two different subsystems of a university.
- Different defensive measures play different roles in an IT security system. For example, a network-based firewall and a host-based firewall perform different functions.
- Each new defensive measure has been evaluated before being made available, so we can use a new defensive measure in our model directly, because its effect is known.
- New defensive measures can only decrease the loss due to the aging of old defensive measures.

Table 1.
Symbol table.
| Symbol | Meaning |
|--------|---------|
| $T$ | Total cost of the whole network defensive system |
| $c$ | Opportunity cost contributed by the Confidentiality risk |
| $i$ | Opportunity cost contributed by the Integrity risk |
| $a$ | Opportunity cost contributed by the Availability risk |
| $d$ | Defensive expense, including procurement, maintenance, and system administrator training costs |
| $T_j$ | Total cost for subsystem $j$ |
| $c_j$ | Confidentiality risk cost for subsystem $j$ |
| $i_j$ | Integrity risk cost for subsystem $j$ |
| $a_j$ | Availability risk cost for subsystem $j$ |
| $d_j$ | Defensive expense for subsystem $j$ |
# Dealing with the Data

Enclosures A and B describe the technological and policy defensive measures. The information was obtained by interviewing consumers, who gave each measure a rating. The data are summarized in terms of Low (minimum), Mean, and High (maximum) values, together with a Variability rating (indicating the concentration of the data about the Mean), which is recorded as Low, Med, or High.

We need to determine a single value for each measure:

- If the Variability is Low, the opinions of different consumers are almost the same. We use the Mean value.
- If the Variability is Med, we assume that $10\%$ of consumers gave the Low value, $10\%$ the High value, and the rest the Mean. We calculate the value of the measure as

Value $= 0.10 \times$ Low value $+ 0.80 \times$ Mean value $+ 0.10 \times$ High value.

- If the Variability is High, we assume that $20\%$ gave the Low value, $20\%$ the High value, and the rest the Mean. We calculate the value of the measure as

Value $= 0.20 \times$ Low value $+ 0.60 \times$ Mean value $+ 0.20 \times$ High value.

Although the specific numerical values of $10\%$ and $20\%$ may not be suitable for all cases, the particular values chosen do not affect the structure of the models that we develop.

# Optimal Defensive Measures for a University

If there are no defensive measures, the opportunity cost projection is as shown in Table 2 and the total cost is

$$
T = 3.8 + 1.5 + 2.9 + 0.4 + 0.08 + 0.25 = \$8.93 \text{ million}.
$$

The initial Confidentiality risk cost is

$$
c = 3.8 \times 0.55 + 1.5 \times 0.70 + 2.9 \times 0.40 = \$4.3 \text{ million}.
$$

Analogously, the initial Integrity risk cost and the initial Availability risk cost are

$$
i = \$3.585 \text{ million}, \quad a = \$1.045 \text{ million}.
$$

Each defensive measure affects four factors: User Productivity, Confidentiality, Integrity, and Availability.
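The totals above are easy to verify with a few lines of code, using the amounts and Confidentiality contributions from Table 2:

```python
# Verify T and the initial Confidentiality risk cost c from Table 2.
amounts = {"P1": 3.8, "P2": 1.5, "P3": 2.9, "P4": 0.4, "P5": 0.08, "P6": 0.25}
conf_share = {"P1": 0.55, "P2": 0.70, "P3": 0.40}  # P4-P6 carry no Confidentiality risk

T = sum(amounts.values())                               # total opportunity cost, $ millions
c = sum(amounts[p] * s for p, s in conf_share.items())  # Confidentiality portion
print(round(T, 2), round(c, 2))  # 8.93 4.3
```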
However, the cumulative effects within and between the risk categories cannot simply be added. Hence, we shift our focus from the effects on the four factors to the effects on the costs. For example, from an

Table 2. Current opportunity costs and risk-category contributions (data from the problem statement).
| Symbol | Opportunity cost (due to IT) | Amount (\$ millions) | C | I | A |
|--------|------------------------------|----------------------|------|------|------|
| P1 | Litigation | 3.8 | 55% | 45% | |
| P2 | Proprietary data loss | 1.5 | 70% | 30% | |
| P3 | Consumer confidence | 2.9 | 40% | 30% | 30% |
| P4 | Data reconstruction | 0.4 | | 100% | |
| P5 | Service reconstruction | 0.08 | | 100% | |
| P6 | Direct revenue loss | 0.25 | | 30% | 70% |
initial Confidentiality opportunity cost of \$10,000, a factor value of $25\%$ would increase the Confidentiality level by $25\%$ and at the same time result in a new Confidentiality opportunity cost of $\$10{,}000 \times (1 - 0.25) = \$7{,}500$. Thereby, improvements attributable to specific measures are directly associated with decreases in costs. Moreover, costs can be added directly.

Based on these ideas, we consider the effects of the different defensive measures in economic terms. Our task can be described as structuring an optimal network defensive system to minimize the total cost $T = c + i + a + d$, where $c$, $i$, and $a$ are potential opportunity costs and $d$ is the expense on defensive measures.

We organize our model into three modules. Each module completes a specific task:

- Module 1 separates the whole university network system into several subsystems by function. After analysis of these subsystems, the initial opportunity cost is distributed among them. Hence the aim of our task becomes to find

$$
\min T = \sum_{j} T_{j}.
$$

- Module 2 determines the technological measures used for each subsystem to minimize the cost of the subsystem; that is, for subsystem $j$ the task is to find

$$
\min T_{j} = c_{j} + i_{j} + a_{j} + d_{j}.
$$

- Module 3 determines the policies for the whole university network system and calculates the total cost.

# Module 1: Apply AHP to Subsystems

The university's various components have different functions and hence different requirements for Confidentiality, Integrity, and Availability. So, based on the structure and functions of the network, we separate the whole university network system into nine subsystems (Figure 1), designated A1-A9 in Table 3.

![](images/91305a7e8ec5b7c6f4427b55951759d0fda7a472ead21e3a80f0a3bdbdaf1a78.jpg)
Figure 1. Separation of the network system.

Table 3.
Subsystems of the university network system.
| Symbol | Subsystem |
|--------|-----------|
| A1 | Computer Labs |
| A2 | Staff and Faculty Computers |
| A3 | Dormitory Network |
| A4 | Bookstore |
| A5 | Registrar's Office |
| A6 | Admissions Office |
| A7 | Student Health Center |
| A8 | Athletic Department |
| A9 | University Server |
We install a set of defensive systems for each subsystem. Such a defensive system defends against attacks on just that particular subsystem, so the cost of each subsystem can be calculated separately. We distribute the initial opportunity cost among the subsystems, determining the weights for the subsystems by application of the Analytic Hierarchy Process (AHP) [Saaty 1980], a way to evaluate systems that involves both quantitative and qualitative analysis and combines analytic and synthetic thinking in decision-making.

The hierarchy of the system is shown in Figure 2; A1-A9 stand for the nine subsystems in Table 3, and P1-P6 represent the six kinds of opportunity costs in Table 2.

![](images/73df79038a155c8599deaf5f24a4c74602c671f3f0fa47c34f65a1033e452d6d.jpg)
Figure 2. Hierarchy of the network system.

Our aim is to determine the weights with which the risk categories are distributed over the subsystems. As an example, we describe the calculation for the Confidentiality risk cost $c$. Figure 3 shows the detailed $c$ branch.

![](images/8695051c4ce219a10c31e8cf05947cc8977b9d49d8ae3d0b15484f08b40f2b18.jpg)
Figure 3. Detailed $c$ branch.

We set up the equation

$$
W R = T,
$$

or

$$
\begin{pmatrix} w_{11} & \cdots & w_{91} \\ \vdots & & \vdots \\ w_{16} & \cdots & w_{96} \end{pmatrix}
\begin{pmatrix} R_{1} \\ \vdots \\ R_{9} \end{pmatrix} =
\begin{pmatrix} t_{1} \\ \vdots \\ t_{6} \end{pmatrix}.
$$

The elements $w_{mn}$ are the weights of the six kinds of opportunity costs in each subsystem, where $m$ is the subsystem and $n$ is the kind of opportunity cost. For example, $w_{34} = P_4 / c_3$ in subsystem A3; that is, $w_{34} =$ (Data reconstruction loss)/(Confidentiality risk cost) in the dormitory network.

The elements $R_{m}$ are the weights of the nine subsystems in the risk categories. For example, $R_{3} = c_{3} / c$.
The elements $t_n$ are the weights of the six kinds of opportunity costs in the whole system. For example, $t_4 = P_4 / c$; that is, $t_4 =$ (Data reconstruction loss)/(Confidentiality risk cost) in the whole system.

Based on the analysis of the functions of each subsystem, we develop nine judging matrices to analyze the weight of each subsystem. Take A1 (Computer Labs), for example; the element $P_{mn}$ represents the importance of $P_m$ relative to $P_n$:
| A1 | P1 | P2 | P3 | P4 | P5 | P6 |
|----|----|----|----|----|----|----|
| P1 | 1 | $P_{12}$ | $P_{13}$ | $P_{14}$ | $P_{15}$ | $P_{16}$ |
| P2 | $1/P_{12}$ | 1 | $P_{23}$ | $P_{24}$ | $P_{25}$ | $P_{26}$ |
| P3 | $1/P_{13}$ | $1/P_{23}$ | 1 | $P_{34}$ | $P_{35}$ | $P_{36}$ |
| P4 | $1/P_{14}$ | $1/P_{24}$ | $1/P_{34}$ | 1 | $P_{45}$ | $P_{46}$ |
| P5 | $1/P_{15}$ | $1/P_{25}$ | $1/P_{35}$ | $1/P_{45}$ | 1 | $P_{56}$ |
| P6 | $1/P_{16}$ | $1/P_{26}$ | $1/P_{36}$ | $1/P_{46}$ | $1/P_{56}$ | 1 |
Commonly, we use $1, 2, 3, \ldots, 9$ and their reciprocals to represent different degrees of importance: the larger the number, the more important the factor. While $P_{mn}$ represents the importance of $P_m$ relative to $P_n$, the importance of $P_n$ relative to $P_m$ is $1/P_{mn}$.

We normalize the column vectors in the judging matrix,

$$
\overline{P_{mn}} = \frac{P_{mn}}{\sum_{k=1}^{6} P_{kn}},
$$

and then sum the normalized matrix across rows:

$$
\overline{W_{m}} = \sum_{n=1}^{6} \overline{P_{mn}}.
$$

We normalize again to get

$$
w_{m} = \frac{\overline{W_{m}}}{\sum_{k=1}^{6} \overline{W_{k}}}.
$$

The eigenvector $w$ represents the opportunity costs' weights in the subsystem. Using the judging matrix $P$ and the eigenvector $w$, we calculate the maximum eigenvalue

$$
\lambda_{\max} = \sum_{m=1}^{6} \frac{(Pw)_{m}}{6 w_{m}},
$$

where $(Pw)_m$ is the $m$th element of the vector $Pw$, the product of the matrix $P$ and the vector $w$.

Last, we check the coherence of the judging matrix. For a six-row matrix, the standard of coherence, CI, is calculated as

$$
\mathrm{CI} = \frac{\lambda_{\max} - 6}{5};
$$

if $\mathrm{CI} < 0.124$, then the coherence of the judging matrix is acceptable; otherwise, the judging matrix needs to be adjusted.

Following this approach, we calculate the eigenvector of each subsystem's judging matrix and combine them into the matrix

$$
W = \begin{pmatrix}
0.2756 & 0.0795 & 0.0795 & 0.4817 & 0.0502 & 0.0335 \\
0.4290 & 0.2093 & 0.2093 & 0.0817 & 0.0415 & 0.0291 \\
0.4606 & 0.0429 & 0.3384 & 0.0724 & 0.0429 & 0.0429 \\
0.1057 & 0.0638 & 0.5650 & 0.0638 & 0.0396 & 0.1621 \\
0.4334 & 0.2147 & 0.2147 & 0.0640 & 0.0433 & 0.0299 \\
0.2463 & 0.1252 & 0.4579 & 0.0569 & 0.0294 & 0.0843 \\
0.4547 & 0.2440 & 0.1771 & 0.0349 & 0.0349 & 0.0544 \\
0.0949 & 0.0581 & 0.5641 & 0.1528 & 0.0949 & 0.0378 \\
0.4445 & 0.1604 & 0.2366 & 0.0800 & 0.0288 & 0.0547
\end{pmatrix}.
$$

For the matrix $T$, we get

$$
T = \begin{pmatrix} 0.4406 & 0.2238 & 0.2238 & 0.0373 & 0.0373 & 0.0373 \end{pmatrix}^{T}.
$$

We calculate $R$ as

$$
R = W^{-1} T.
$$

Two conditions must be fulfilled:

- The elements of $R$ must be nonnegative.
- The elements of $R$ must sum to 1.

Some adjustments may be needed to fulfill these conditions. At last, we get

$$
R = \begin{pmatrix} 0.0000 & 0.1674 & 0.0435 & 0.0000 & 0.1915 & 0.0612 & 0.5364 & 0.0000 & 0.0000 \end{pmatrix}^{T}.
$$

The process described above is for the Confidentiality risk cost $c$. The results for all opportunity costs are shown in Table 4.

Table 4. Distribution details of opportunity costs.
|   | A1 | A2 | A3 | A4 | A5 | A6 | A7 | A8 | A9 |
|---|----|----|----|----|----|----|----|----|----|
| c | 0 | 16.74% | 4.35% | 0 | 19.15% | 6.12% | 53.64% | 0 | 0 |
| i | 11.04% | 9.84% | 43.24% | 0 | 0 | 0 | 18.91% | 0 | 16.97% |
| a | 0 | 0 | 0 | 73.18% | 0 | 0 | 0 | 26.82% | 0 |
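The normalize-sum-normalize eigenvector approximation and the CI check described above can be sketched as follows; the 3×3 judging matrix here is an illustrative stand-in, not one of the paper's nine 6×6 matrices:

```python
# Sketch of the AHP weight and consistency computation described above,
# applied to an illustrative 3x3 judging matrix (not the paper's data).

def ahp_weights(P):
    n = len(P)
    # 1. Normalize each column of the judging matrix.
    col_sums = [sum(P[m][k] for m in range(n)) for k in range(n)]
    Pbar = [[P[m][k] / col_sums[k] for k in range(n)] for m in range(n)]
    # 2. Sum the normalized matrix across rows, then normalize to get w.
    row_sums = [sum(row) for row in Pbar]
    w = [r / sum(row_sums) for r in row_sums]
    # 3. lambda_max = sum_m (Pw)_m / (n w_m); CI = (lambda_max - n)/(n - 1).
    Pw = [sum(P[m][k] * w[k] for k in range(n)) for m in range(n)]
    lam = sum(Pw[m] / (n * w[m]) for m in range(n))
    return w, (lam - n) / (n - 1)

P = [[1, 3, 5],
     [1 / 3, 1, 2],
     [1 / 5, 1 / 2, 1]]
w, CI = ahp_weights(P)
print([round(x, 3) for x in w], round(CI, 4))  # weights sum to 1; CI is far below 0.124
```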
From Table 4, we know the distribution of the initial opportunity costs among the subsystems. For example, for A1 (Computer Labs), the Integrity risk cost is

$$
i_{1} = \$3.585 \text{ million} \times 11.04\% = \$0.395 \text{ million}.
$$

With the distribution of initial opportunity costs among subsystems now available, we can determine the defensive system for each subsystem.

# Module 2: Perform a Fast Search Algorithm

Defensive measures include technologies and policies. Technologies are hardware and software installed to protect the network; policies are guidelines publicized to guide users' activities. Technologies should differ by subsystem, according to the function each realizes; but policies should be the same throughout the whole network system.

# Technologies

Technologies consist of host-based firewall (HF), network-based firewall (NF), host-based anti-virus (HA), network-based anti-virus (NA), network-based intrusion detection system (IDS), spam filter (SPAM), network-based vulnerability scanning (NVS), data redundancy (DR), and service redundancy (SR). We need to structure these technologies into several defensive layers.

Firewalls defend against attack by hackers, while anti-virus software protects hosts from viruses. Their effects must be considered together, since they form one defensive layer.

SPAM filtering, vulnerability scanning, data redundancy, and service redundancy are not real-time technologies. They form another defensive layer.

The defensive layers are shown in Figure 4.

The configurations of the subsystems are the same; the difference lies in which measure is chosen in each defensive layer. Hence the search process is the same for each subsystem. We describe our fast search algorithm:

1. For the first layer, we search for the measure that minimizes the total cost, finding a locally optimal solution.
2. We go on to the next layer.
Based on the result of the previous layer, we combine the effects of different measures in this layer to find another locally optimal solution. + +![](images/b3abb28f6dfffc5fbdcdd497440dba7a3345865312a61f35ecf7b1460741231f.jpg) +Figure 4. Technologies defensive layers. + +3. Iterate Step 2 until all four defensive layers have been examined. +4. If all the measures of a technology cannot cut down the cost, it means that this technology is not needed. After the iterative search, the locally optimal solution will approach the globally optimal solution at last. + +Following the search algorithm, we determine the technological measures suitable for each subsystem. The result is shown in Table 5. + +Table 5. Technological measures for each subsystem. + +
| | NF | NA | IDS | HF | HA | SPAM | NVS | DR | SR |
|---|---|---|---|---|---|---|---|---|---|
| A1 | 2 | 2 | 8 | 1 | 1 | 0 | 0 | 0 | 4 |
| A2 | 2 | 2 | 8 | 1 | 2 | 0 | 0 | 0 | 4 |
| A3 | 2 | 2 | 8 | 1 | 1 | 0 | 0 | 0 | 4 |
| A4 | 3 | 2 | 9 | 7 | 3 | 1 | 0 | 0 | 4 |
| A5 | 2 | 3 | 9 | 1 | 2 | 0 | 0 | 0 | 0 |
| A6 | 2 | 3 | 9 | 1 | 2 | 0 | 0 | 0 | 0 |
| A7 | 2 | 2 | 2 | 1 | 2 | 0 | 0 | 0 | 4 |
| A8 | 3 | 2 | 9 | 7 | 3 | 0 | 0 | 0 | 4 |
| A9 | 2 | 2 | 8 | 1 | 1 | 0 | 0 | 0 | 4 |
This table shows the optimal defensive system for each subsystem. Each number is the sequence number of the chosen measure within that technology. For example, for A1 (Computer Labs), we choose the 2nd measure (Network defense) for the network-based firewall and the 8th measure (Network eye) for the network-based intrusion detection system. A 0 means that none of that technology's measures is suitable, so the technology is not needed for the subsystem; for example, SPAM, NVS, and DR are not needed for A1.

Using the technological measures in Table 5, we calculate their effect for each subsystem. Adding these effects gives the effect for the whole system (Table 6).

Table 6. Effects of technologies (in millions of dollars).
| | c | i | a | Total |
|---|---|---|---|---|
| Initial opportunity cost | 4.3 | 3.6 | 1.0 | 8.9 |
| Opportunity cost after technology defenses | 1.5 | 1.0 | 0.4 | |
| plus cost in technologies | | | | 0.5 |
| Total | | | | 3.4 |
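The Module 2 procedure is essentially a greedy, layer-by-layer search: fix the best choice in one layer, then choose the next layer's measure conditional on it. A runnable sketch under assumed data (the option lists below are invented for illustration; option 0 always means "install nothing", and layers are assumed to act independently, so residual risk multiplies):

```python
# Greedy layer-by-layer search, as in Module 2: pick the cost-minimizing
# measure in layer 1, then pick layer 2's measure given that choice, etc.
# The layers, candidate costs, and risk reductions below are hypothetical.

def total_cost(base_risk, chosen, layers):
    """Residual opportunity cost plus direct cost for chosen measures."""
    risk, direct = base_risk, 0.0
    for layer, idx in zip(layers, chosen):
        eff, cost = layer[idx]
        risk *= (1.0 - eff)   # assume layers act independently
        direct += cost
    return risk + direct

def greedy_search(base_risk, layers):
    chosen = [0] * len(layers)          # option 0 = no measure (eff 0, cost 0)
    for k in range(len(layers)):
        chosen[k] = min(range(len(layers[k])),
                        key=lambda j: total_cost(base_risk,
                                                 chosen[:k] + [j] + chosen[k + 1:],
                                                 layers))
    return chosen

# Each layer: list of (risk-reduction fraction, direct cost) options.
layers = [
    [(0.0, 0.0), (0.5, 0.2), (0.6, 0.5)],   # e.g., firewall/anti-virus layer
    [(0.0, 0.0), (0.3, 0.1)],               # e.g., IDS layer
    [(0.0, 0.0), (0.2, 0.3)],               # e.g., non-real-time layer
]
print(greedy_search(1.0, layers))  # locally optimal choice per layer: [1, 1, 0]
```

Here the third layer's option costs more than the residual risk it removes, which is exactly the "technology is not needed" case of Step 4.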
# Module 3: Determine the Policies

Policies to instruct users' activities should be the same throughout the whole network system. There are seven kinds: Passwords, Formal Security Audits, Wireless, Restrict Removable Media, Personal Use, User Training, and Sysadmin Training.

We check the effect of each policy by following the search algorithm that we used in Module 2. The result is shown in Table 7.

Table 7. Policies for the network system.

| Area | Policy |
|---|---|
| Password | Strong |
| Formal Security Audits | No need |
| Wireless | Disallow |
| Restrict Removable Media | No restriction |
| Personal Use | Unmonitored |
| User Training | Needed |
| Sysadmin Training | No need |
The economic effect of this set of policies, after adoption of the technologies prescribed, is shown in Table 8.

Table 8. Effects of policies (in millions of dollars), after adoption of recommended technologies.

| | c | i | a | Total |
|---|---|---|---|---|
| Opportunity cost before policies | 1.5 | 1.0 | 0.4 | 2.9 |
| Opportunity cost after policies | 0.8 | 0.5 | 0.2 | |
| plus cost of policies | | | | 1.3 |
| Total | | | | 2.9 |
In all, the effect of the recommended defensive system is shown in Table 9.

Table 9. Effect of the whole defensive system (in millions of dollars).

| | c | i | a | d | Total |
|---|---|---|---|---|---|
| Cost with no defensive system | 4.3 | 3.6 | 1.0 | 0 | 8.9 |
| Cost under recommended defensive system | 0.8 | 0.5 | 0.2 | 1.8 | 3.4 |
The minimized total cost is

$$
T = c + i + a + d = 0.8 + 0.5 + 0.2 + 1.8 = \$3.4 \text{ million}.
$$

# Updating the IT Security System

Every organization has a potential opportunity cost that can be broken down into the three categories of Confidentiality, Integrity, and Availability; we take these costs as the model's parameters. Additionally, the model separates the whole network system into subsystems according to network structure and function. Neither feature varies from organization to organization, so the model is general and can be used to design defensive systems for all kinds of organizations.

At the same time, technical specifications change over time. As technology progresses, hackers adopt new attack measures, and our security system loses its power. Hence, we should update the security system regularly. But two questions lie before us:

- Which kind of new technology do we need?
- How often should we update the security system?

To answer these questions, we assume that the effect of all technologies decreases periodically and that new technologies appear at the same time. Based on these assumptions, our approach is as follows:

- The first technology to replace is the one with the poorest effect.
- The time to update the system is not fixed; it depends on the current security system's state and the capability of the new technology.
- We evaluate the cost whenever a new technology appears. If applying the new technology decreases the total cost further, then the old technology should be replaced.

We take the bookstore (A4) as an example to describe our approach. From the earlier result, we know that the opportunity cost of the bookstore is .7318 × $1,045,000 = $765,000, all of it contributed by Availability (Table 1). Hence, when new technology appears, only the effect on availability need be taken into consideration.
Suppose that every month a new kind of host-based firewall appears and the effect on availability of the firewall in use decreases by 3%. With this rapid decay, the host-based firewall becomes the weak point of the security system.

- Suppose that the security system of the bookstore is established in April. The host-based firewall in use is "watertight" and its effect on availability is 19.4%.
- In May, the effect falls to 16.4%. If there were no firewall, the opportunity cost of the bookstore would be $16,839 this month. Firewalls defend against attack from hackers, while anti-virus protects the server from viruses, so their effects are additive. We assume that the firewall and the anti-virus each contribute 50% of the protective effect, so the current firewall reduces the opportunity cost by $16,839 × .164 × 50% = $1,381. At the same time, a new host-based firewall appears whose effect on availability is 20.3%, at an installation cost of $1,045. Installing it would, net of the installation cost, reduce the opportunity cost by $16,839 × .203 × 50% - $1,045 = $1,709 - $1,045 = $664. Clearly, keeping the old firewall is the better choice.
- Things change again in June. The effect of the original firewall falls to 13.4%, so it cuts the cost by only $1,128. This month, another new host-based firewall appears; assume that its effect on availability is 19.2% and that it costs $1,015 to install. Applying the new firewall would reduce the cost by $1,617 - $1,015 = $602. It is still not worth the expense.
- In July, we evaluate the opportunity cost again. The effect of the original firewall is 10.4%, so it saves just $876. The effect of the new firewall is 23%, and it costs $1,045 to install. Applying it saves $1,937 - $1,045 = $892. With the new firewall we save $16 more, so we should update the firewall in July.
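The month-by-month reasoning above reduces to one comparison: replace the firewall only when the new device's net monthly saving (its effect times the 50% share, minus installation cost) exceeds what the aging device still saves. A sketch using the numbers from the example (the 50% split between firewall and anti-virus is the assumption stated in the text):

```python
# Decide each month whether to replace the aging host-based firewall.
# Monthly exposure, effects, and installation costs are from the example;
# the firewall is assumed to carry 50% of the protective effect.

EXPOSURE = 16839        # monthly opportunity cost with no firewall ($)
SHARE = 0.5             # firewall's assumed share of the protective effect

def saving_old(effect):
    """Monthly saving from keeping the installed firewall."""
    return EXPOSURE * effect * SHARE

def saving_new(effect, install_cost):
    """Net monthly saving from switching to the new firewall."""
    return EXPOSURE * effect * SHARE - install_cost

months = [  # (month, old effect, new effect, new install cost)
    ("May",  0.164, 0.203, 1045),
    ("June", 0.134, 0.192, 1015),
    ("July", 0.104, 0.230, 1045),
]
for name, old_e, new_e, cost in months:
    verdict = "replace" if saving_new(new_e, cost) > saving_old(old_e) else "keep"
    print(name, verdict)   # May keep, June keep, July replace
```

The decision flips in July, matching the example's conclusion.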
# References

Brin, Sergey, and Lawrence Page. 2000. The anatomy of a large-scale hypertextual Web search engine. http://www-db.stanford.edu/~backrub/google.html.

Curtin, Matt. 1998. Introduction to network security. http://www.interhack.net/pubs/network-security/network-security.html. Last revised 16 July 1998.

Honeynet Project. 2003a. Know your enemy: Honeynets—What a honeynet is, its value, how it works, and risk/issues involved. http://project.honeynet.org/papers/honeynet/index.html. Last modified 12 November 2003.

Honeynet Project. 2003b. Know your enemy: Defining virtual honeynets: Different types of virtual honeynets. http://www.honeynet.org/papers/virtual/. Last modified 27 January 2003.

Mitra, Sanjit Kumar. 2001. Digital Signal Processing: A Computer-Based Approach. 2nd ed. New York: McGraw-Hill.

Oppenheim, Alan V., and Alan S. Willsky. 1996. Signals and Systems. 2nd ed. Englewood Cliffs, NJ: Prentice-Hall.

Saaty, T.L. 1980. The Analytic Hierarchy Process. New York: McGraw-Hill.

# Authors' Commentary: The Outstanding Information Technology Security Papers

Ronald C. Dodge, Jr.
Information Technology and Operations Center
United States Military Academy
West Point, NY 10996
ronald.dodge@usma.edu

Daniel J. Ragsdale
Dept. of Electrical Engineering and Computer Science
United States Military Academy
West Point, NY 10996
daniel.ragsdale@usma.edu

# Introduction

Information Assurance (IA) education and training is increasingly important in today's world. Several incidents of the past few years, such as data theft, malicious worms and viruses, denial-of-service attacks, and defacement of corporate and government Web pages, highlight the need to educate the users and administrators of information systems.
IA is more than just the simple application of technical measures to secure an information system; it is the combination of defensive technologies, well-conceived policies and procedures, and properly trained users [Maconachy et al. 2001].

Computer networks are ubiquitous, but aside from a relatively small number of network engineering professionals, few people understand the fundamentals of information assurance. Many institutions of higher learning that offer degrees in computer science offer courses that address the topic of computer networks. Often these courses focus on network protocols and theory, with little emphasis on the policy and hands-on application that individuals in organizations face every day. The integration of security practices into the business model of an organization is laden with tradeoffs. The implementation of security measures often has both direct costs and productivity costs, as affected information systems become more difficult to use or are degraded by the introduction of enhanced security measures.

# Formulation and Intent of the Contest Question

The main goal of this year's interdisciplinary modeling problem was for competitors to reduce the potential costs associated with malicious behaviors in a simulated organization. This reduction results from implementing the set of preventative measures identified by the contest participants. The problem of how an organization maximizes its IT security posture while considering the overall economic impact on its mission requires an analysis of the known, expected, and potential costs. Organizations must analyze this problem in three primary areas. First, the organization must define the areas of risk in the IT infrastructure; typically these are data confidentiality, data integrity, and service availability. Next, the sources of risk need to be identified: for example, a malicious outside hacker, a "clumsy" insider, or hardware/software failure.
Finally, the costs must be enumerated. These include both the costs associated with implementing security measures (direct costs, training, and productivity) and the potential costs if any or all of the areas of risk are compromised.

This is a complex problem that requires a thorough analysis of many variables that have positive impacts in one area and negative impacts in another [Bishop 2002, 17-18; Garfinkel and Spafford 1996, 27-40]. Additionally, an organization might have missions that vary within its structure and require different security measures. The problem of how to design and implement the security architecture of an organization is further complicated by its dynamic nature: the evaluation conducted in the early stages of an assessment will be modified over time by changes in the organization's mission and by advances in technology. In building the framework for this year's modeling question, we attempted to generalize many factors to enable the students to build tractable models.

The problem posed to the teams described a generic organization (a university) that consisted of several competing components that in some ways required completely different and competing security measures. The organization required both an open environment for information distribution and student access and a more secure system for grades, tuition, and bookstore management. Additionally, a hybrid solution was required for a third group made up of staff and faculty. The specific identification of these needs and several others was left to the teams as part of the analysis process.

The teams were then required to examine the efficacy of various technical solutions and security policies in light of the various organization requirements. The solutions were then balanced against the overall potential for loss due to security failings and the direct costs of the security architecture.
This underlines two fundamental premises: the total cost of a security solution is the sum of its direct financial costs and the indirect costs in usability and productivity; and an organization should not spend more on a solution than the loss it is at risk of suffering. For example, one would not be wise to install a $10,000 alarm system on an item valued at $1,000.

Lastly, the ICM teams were required to analyze their proposed model's ability to withstand technology changes as time passes.

# References

Bishop, Matt. 2002. Computer Security: Art and Science. Boston, MA: Addison-Wesley.

Garfinkel, Simson, and Gene Spafford. 1996. Practical Unix and Internet Security. 2nd ed. Sebastopol, CA: O'Reilly and Associates.

Maconachy, V., C. Schou, D. Welch, and D.J. Ragsdale. 2001. A model for information assurance: An integrated approach. In Proceedings of the 2nd Annual IEEE Systems, Man, and Cybernetics Information Assurance Workshop (West Point, NY, June 5-6, 2001), 306-310.

# About the Authors

The authors of this year's contest question have been working in the area of Information Assurance for a combined 18 years. The foci of their research include:

- Information assurance simulation development. The problem posed in this year's modeling contest closely mirrors the scenario used to frame a simulation being developed under an NSF grant. Various components of the simulation have been under development since 2001 and have been the topic of five conference papers.
- Intrusion Detection System (IDS) and Intrusion Prevention System (IPS) analysis and implementation, including the development and deployment of innovative IDS and IPS solutions, such as honeynets, layer-2 bridges, and attribution technologies.
- Virtual machine technology.
The authors have pioneered the use of virtual machines (VMs) to overcome resource constraints encountered by computer science programs, enabling each student to manage and administer a robust collection of servers and workstations.

- Information assurance curriculum development. The authors have integrated the use of VMs into a hands-on curriculum consisting of a variety of introductory and technical computer science courses as well as policy-based analysis courses. The development and use of VMs is the topic of six conference and journal publications.
- Competitive cyber defense exercises. The authors developed and implemented the U.S. Military Academy Cyber Defense Exercise. This model is being used as the benchmark for an NSF-funded effort to introduce competitive cyber exercises to civilian universities.

![](images/d65a1947cfc9817f00cb51ecddc0ac01a4eb15a1155c3989bdd25fde166ab58c.jpg)

Major Ronald C. Dodge, Jr., has served for more than 16 years as an Aviation officer and is a member of the Army Acquisition Corps in the United States Army. His military assignments range from duties in an attack helicopter battalion during Operation Just Cause in the Republic of Panama to the United States Military Academy. Currently, he is an Assistant Professor and Director of the Information Technology and Operations Center (ITOC) at the United States Military Academy. Ron received his Ph.D. in Computer Science from George Mason University, Fairfax, Virginia. His current research focuses on information warfare, network deception, security protocols, Internet technologies, and performance planning and capacity management. He is a frequent speaker at national and international IA conferences and has published many papers and articles on IA topics.

![](images/09ebe54b2102157d179b1c09777c4abf51e8afd1eb079dbff18b07473d7d7ed5.jpg)

Colonel Daniel J. Ragsdale has served for 23 years as an officer in the U.S. Army.
He has served in a variety of important operational and research-and-development assignments, including participation in Operation Urgent Fury in Grenada and Operation Enduring Freedom in Afghanistan. Currently, he is an Associate Professor, Director of the Information Technology Program, and Professor in the Department of Electrical Engineering and Computer Science at the U.S. Military Academy. His current research focuses on information security, Information Assurance (IA), and Information Warfare. He is a frequent speaker and panelist at national and international IA conferences and has published dozens of papers and articles on IA topics.

# Judge's Commentary: The Outstanding Information Technology Security Papers

Frank Wattenberg
Dept. of Mathematical Sciences
United States Military Academy
West Point, NY 10996
Frank.Wattenberg@usma.edu

# Introduction

The final judging for the 2004 Interdisciplinary Contest in Modeling took place at the United States Military Academy on Saturday, March 6, 2004. The judges spent an extensive and enjoyable day reading a very good and varied set of papers.

# Bottom Line Up Front: There is Room at the Top

Although a number of submissions were very good and readers will recognize some well-known institutions among the Outstanding and Meritorious papers, there is room at the top. If this were a sporting event rated on a 10-point scale, it is quite likely that no paper would have scored above 9.0. This IT Security Problem involved many complex issues, messy data, and several challenging tasks. Three points are crucial in addressing the requirements of this problem:

This is first and foremost a modeling competition. Modeling is often about ill-posed problems, in complex settings with uncertain data. Conclusions necessarily involve simplifications and uncertainties, and confronting them is absolutely imperative. The papers were judged primarily on modeling.

The UMAP Journal 25 (2) (2004) 175-179.
©Copyright 2004 by COMAP, Inc. All rights reserved. Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice. Abstracting with credit is permitted, but copyrights for components of this work owned by others than COMAP must be honored. To copy otherwise, to republish, to post on servers, or to redistribute to lists requires prior permission from COMAP.

The constraints of the contest are exactly the constraints in real life. In the real world, modelers always work with limited time and resources. Thus, real-world modeling requires making simplifications, justifying those simplifications, examining the impact of those simplifications, and, above all, being intellectually honest about the shortcomings as well as the successes of the resulting models. Some submissions were marred by puffery.

Organization, clarity, and brevity are essential. The judges were surprised by the number of submissions that lacked a table of contents. Although a table of contents was not required, its omission usually reflected a general lack of organization. The summary, too, is particularly important for any report. Although many summaries were well-written, even the summaries in the Outstanding papers merited at best a grade of B; none talked about the potential shortcomings of their models due to modeling assumptions or uncertainties in the data.

# The Problem

This year's problem dealt with information technology security for a new university campus. An undefended IT system is exposed to potential losses but, as usual, the costs of defense are considerable.

There are many possible approaches to this problem. The problem description focused on two categories of defenses—policies and technology.
+ +- Policies include, for example, whether the network is wireless, as well as password policies—how complicated must passwords be and how often must they be changed. +- Technologies include things like firewalls and virus scanning. + +The description also focused on risk in three areas—confidentiality, integrity, and availability. A breach of confidentiality can result in litigation or the costs associated with the release of proprietary or classified information. The integrity and availability of data and information are, of course, essential to their value. + +In addition to a description of the structure of the situation, the problem included a 12-page enclosure with data about several alternatives in various categories—for example, it included data on eight different host-based firewalls. These data had two glaring features, and the judges looked specifically at how the submissions addressed the issues raised by these features: + +- Alternative defensive measures were discussed individually with no information about how they might work in combination. + +The better submissions all addressed this issue at least briefly. In general, however, none of the submissions did an outstanding job. The fact is that + +there is a range of ways in which two alternatives might interact. For example, at one extreme, two different virus scanners might be completely redundant if they both picked up the same viruses; or, at the other extreme, they might protect from completely different viruses. The results are potentially very sensitive to whatever assumptions are made in this area. + +This problem is compounded by the fact that assuming redundancy among measures in the same category reduces the computational complexity of the problem. Many submissions (including highly-rated ones) justified this assumption to make the problem computationally feasible. 
This is a reasonable assumption only if it is accompanied by a discussion of the sensitivity of the conclusions to this assumption.

- The data were based on multiple reviews of the measures and there was considerable variation in the conclusions of the various reviews.

Here again, while most of the papers addressed this issue in some fashion, many of the papers made simplifications without discussing the sensitivity of their conclusions to those simplifications.

# Analysis

The different teams applied a variety of optimization techniques to their models. Some teams worked with models that were computationally infeasible and applied techniques—for example, simulated annealing—that led with relatively high probability to near-optimal solutions; others made assumptions that led to computationally feasible optimization problems. Some teams used standard software, and others wrote their own programs in C++ or other programming languages. Although some of the teams used sophisticated mathematics and algorithms (for example, simulated annealing) and others used sophisticated software effectively, neither was necessary for this problem. Many teams did first-rate work using straightforward implementations of their models with general-purpose tools.

The analytic part of this problem can be broken into two parts:

- evaluating the costs and the effectiveness of a mix of defensive measures, and
- searching the space of possible mixes of defensive measures to find an optimal or near-optimal mix.

The first part rightfully drew the most attention in most of the submissions—this is the modeling part. It required considerable attention to detail and to the extensive data provided. Most importantly, however, it required thoughtful analysis of the two difficulties mentioned earlier—the impact of combinations of defensive measures and how to handle the uncertainties in the data. This is also the first focus of the absolutely necessary sensitivity analysis.
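The sensitivity the commentary calls for can be made concrete: how two measures' effectiveness values combine depends entirely on the interaction assumption. A small illustration with hypothetical values (the effectiveness figures below are invented):

```python
# Three ways to combine the effects of two defensive measures, showing why
# conclusions are sensitive to the interaction assumption the judges discuss.
# The effectiveness values e1, e2 are hypothetical.

e1, e2 = 0.6, 0.5   # fraction of attacks each measure stops on its own

# Fully redundant: the second measure stops only attacks the first already stops.
combined_redundant = max(e1, e2)

# Independent coverage: each stops its fraction of what slips past the other.
combined_independent = 1 - (1 - e1) * (1 - e2)

# Disjoint coverage: they stop different attacks, capped at stopping everything.
combined_disjoint = min(e1 + e2, 1.0)

print(combined_redundant, combined_independent, combined_disjoint)
```

Under full redundancy the second measure adds nothing and should be dropped; under disjoint coverage the pair stops everything, so both are worth buying. Any cost-benefit conclusion sits somewhere between these extremes.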
In its starkest form, an assumption that two measures are redundant leads to a recommendation that at most one measure should be employed, while an assumption that two measures cover disjoint sets of attacks may lead to a recommendation that both defensive measures should be employed.

The computational difficulty of the second part depended in part on the assumptions about how individual preventive measures interacted when used together. Other modeling assumptions also impacted this part of the problem. For example, some teams assumed that the same mix of defensive measures was used across the university, whereas others broke the problem up into different subnetworks. The new university's computing needs are diverse—ranging from student computers in dormitory rooms, to the commercial needs of a bookstore whose business skyrockets at the beginning of each semester, to the registrar's office and student health services that routinely deal with confidential data. In addition, the sophistication and professionalism of users is also very diverse—the registrar's office, bookstore, and student health services, for example, are more likely to accept stringent security measures than individual students, who might want to be able to install software of questionable origin.

We saw a wide variety of approaches to searching for an optimal or near-optimal solution, and most had considerable merit. Here again, we focused on the implications of the underlying modeling assumptions and on an analysis of the sensitivity of the conclusion to the search procedure used in addition to the modeling assumptions.

# Conclusions and Advice to Future Teams

This section is essentially an amplification of the same points made by Richard Cassady last year [2003, 188].

**Assumptions** Making simplifying assumptions is a critical part of modeling.
In fact, good models are always the result of an iterative procedure beginning with fairly drastic simplifying assumptions to obtain some initial traction and then building progressively more sophisticated models based on sensitivity analysis and reality checks. Articulate your assumptions and their consequences. Your summary must identify clearly the assumptions made and their impact on your conclusions.

**Analysis** Analysis is not the last step. It is an integral part of the iterative modeling procedure. Do regular reality checks and, above all, use sensitivity analysis to guide your model development and to determine both the strengths and weaknesses of your conclusions.

**Communication** You must express and communicate your work well. Clarity of expression is a consequence of clarity of thought. If your summary and your paper are not clear, then the modeling is almost certainly weak.

**References** As always, use proper citation and be careful about the provenance and worth of the work you use.

Congratulations are extended to all the participants on their accomplishments. Reading and judging the results of their weekend of interdisciplinary problem solving and modeling were enjoyable challenges for the judges.

# Reference

Cassady, C. Richard. 2003. Judge's Commentary: The Outstanding Airport Screening Papers. The UMAP Journal 24 (2): 185-188.

# About the Author

Frank Wattenberg is a professor in the Dept. of Mathematical Sciences at the United States Military Academy (USMA), West Point. He is particularly interested in modeling and simulation and in the use of technology for simulation and for education across the undergraduate curriculum. He is currently leading a team at USMA that is developing an online book, Modeling in a Real and Complex World, to be published as part of the MAA Online Book Project. He is also working with colleagues at USMA and elsewhere to develop rich immersive environments for modeling and simulation.
This project will produce environments with both virtual and hands-on components that students will revisit from middle school through college and from many different subject areas and levels. The architecture will support collaborative modeling and simulation based in part on the ideas of multiplayer games. + +# Editor's Note Regarding Submissions + +From August 2004 through August 2005, I will be editing The UMAP Journal from the University of Augsburg in Germany. Postal mail can be sent directly to the address in Germany below; mail to the Beloit College address on the masthead will be sent on. + +However, to avoid expense and delays, please endeavor to send all correspondence by electronic mail—and manuscripts by email attachment—to the Beloit College email address + +campbell@beloit.edu + +I will be retrieving email directly from this address, and email sent to it will be archived permanently against inadvertent loss. + +MID-AUGUST 2004 THROUGH MID-AUGUST 2005 + +Paul J. Campbell + +c/o Lst. Prof. Pukelsheim + +Institut für Mathematik der Universität Augsburg + +Universitatsstr. 14 + +D-86135 Augsburg + +Germany + +voice: 011-49-821-598-2206 fax: 011-49-821-598-2280 + +email: campbell@math.uni-augsburg.de + +www: http://cs.beloit.edu/campbell/ + +# About the Editor + +![](images/ea9253bd3e8e1ac2fa71a643da19dee15b9723c6e95acfc1b2e62dc99103db95.jpg) + +Paul Campbell graduated summa cum laude from the University of Dayton and received an M.S. in algebra and a Ph.D. in mathematical logic from Cornell University. He has been at Beloit College since 1977, where he was Director of Academic Computing from 1987 to 1990. He is Reviews Editor for Mathematics Magazine and has been editor of The UMAP Journal since 1984. + +He first visited Augsburg in 1967 on an exchange of young adults between the sister cities of Augsburg and Dayton, Ohio, where he had gone to high school and college. 
On his last sabbatical, and in alternate summers since, he and his family have lived in Augsburg. He remains immensely grateful to the memory of Dr. Alfred Beigel (deceased), with whom he studied German for three years at the University of Dayton.

# Reviews

Albert, Jim. 2003. Teaching Statistics Using Baseball. Washington, DC: Mathematical Association of America; xi + 288, $45. ISBN 0-88385-727-8.

Albert, Jim, and Jay Bennett. 2003. Curve Ball: Baseball, Statistics, and the Role of Chance in the Game. Rev. ed. New York: Copernicus; xxii + 410, $19.95. ISBN 0-387-00193-X.

Lewis, Michael. 2003. Moneyball: The Art of Winning an Unfair Game. New York: W.W. Norton; xv + 288, $24.95. ISBN 0-393-32481-8.

The development of statistical inference in the twentieth century was spurred by agricultural research more than any other application; even today, many of the best statistics departments are at the schools with the best agricultural departments. The second application that comes to mind after agriculture—the application that generates most of the press and public controversy—is pharmaceuticals. But another application has generated a lot of intense scrutiny and passion among parts of the general public, and that is baseball.

Baseball has several characteristics that give it greater statistical import than other sports.

- Major-league baseball has maintained a statistical record of every game and every inning since the late nineteenth century.
- Baseball has changed slowly enough that today's game resembles the game of 100 years ago much more closely than is the case for football, where significant differences can show up in a mere 10-year span.
- Perhaps most importantly, baseball lends itself to statistical questions, and it is this characteristic that makes it appropriate for education. For example:

- How important is defense relative to hitting?
- How much of the game is pitching?
- Who are the greatest hitters that ever lived?
- Who is the best player ever to play third base?
- When is it smart to lay down a bunt?
- And perhaps most important of all, who should the home team draft in 2005?

In the last 25 years, one figure has dominated the statistical analysis of baseball: Bill James. While employed as a security guard, he wrote his first baseball almanac in 1977, which he published himself and sold to 75 people; his second historical analysis of baseball [James 2003] was reviewed in both the New York Times Book Review [McGrath 2002] and the New York Review of Books. There is more than a little irony in his story, and I'll return to that in a while.

Teaching Statistics Using Baseball is a textbook, with plenty of exercises and case studies. It is an effective introduction to exploratory data analysis at an elementary level, easily readable by the motivated high-school student. It would work very well in a course on data analysis specifically, perhaps as a secondary text. There is historical detail, and a great many questions are posed and then analyzed. I can't say how well it would work in the classroom, but I can imagine a student gripped by it. Prof. Albert is not just a statistician but a baseball enthusiast (saying "nut" could have negative connotations).

Curve Ball: Baseball, Statistics, and the Role of Chance in the Game by Jim Albert (again) and Jay Bennett won the 2001 SABR award (Society for American Baseball Research) and has done quite well. This book is not a text and thus is much more interesting to readers who do not need a course in statistics. At the same time, it does serve as a course in data analysis and is not for the reader who is math-phobic. Both books cover some modeling and simulation. The second book, as implied by the title, devotes more time and depth to the study of randomness. In particular, there has been much study in the last few years of streakiness in both baseball and basketball.
The central question is: Do streaks really exist, or are they just a manifestation of ordinary random variation? Curve Ball is essential reading for any serious baseball fan who is also a nerd.

If the slightly pejorative connotations of "nerd" are seriously offensive to you, then you should stay in the ivory tower if you are a professor and should change your major to literature if you are a student (lots of employers seek the kind of critical thinkers churned out by literature departments).

Moneyball is very much about nerds and nerdiness. For a book that merely describes a handful of formulas, there is quite a bit here to interest statisticians. The author, Michael Lewis, writes books on management and business. While this book falls into that category, most bookstores put it in the baseball section. Moneyball has been something of a best-seller and has gained a great deal of notoriety both outside of baseball and inside, where it has been exceedingly controversial. Quite a few baseball fans and commentators have attacked the book, although most have not read it or did not understand it. It has made famous the general manager of the Oakland Athletics, a former major-league ball player named Billy Beane (not to be confused with another former player, Billy Bean, who recently wrote a book in which he came out as gay).

Moneyball is the book to read. If you are interested as well in a superb monograph on baseball analysis and chance, then also read Curve Ball. If you want a text in data analysis using baseball, you might very much like Teaching Statistics Using Baseball. But Moneyball is one of the most exciting and fascinating books I have read in some time.

There are several caveats before I get started extolling Moneyball.

- Be warned that the language is rated R. A hard R.
- The book is quite elliptical.
It will refer offhandedly to random variation, a term that is all too familiar to mathematicians and statisticians but generally is not appreciated by the lay person. It refers to theories of perfect market information, theories that are pivotal to investment theory but are lost again on lay people.

- Sprinkled throughout the book are simple statistical points that are left entirely unstated.

I would love to discuss this book with seniors in mathematics. On the surface, it is a book about major-league baseball; to me, it is about the real world. Academia is full of mathematics professors who have no knowledge of industry or of nonacademic jobs. Others, some top academics in particular, think of industry in terms of national labs and research environments that are themselves rather academic and often have fairly good job security. However, most graduates who go into industry, especially the non-Ph.D.s, do not go into national labs. They go into something that I long ago learned not to talk about with most academics. But whereas others will see Moneyball as merely describing baseball, I say that it is much more general than that, and clearly so does author Michael Lewis.

Whereas the first two books are about data analysis and are very fine examples of it, I am amazed at how well Moneyball conveys the spirit of data analysis and of analytical thinking, all amidst colorful anecdotes and profiles (not to mention the colorful language). It is this book that gives a necessary chapter to Bill James. The irony here is that a man without statistical training, but with a passion for analysis, found formulas that serious statisticians (at least one of whom was an academic superstar) who happened to be baseball nuts missed. In fact, one question that I've seen no one address is this: Why didn't statisticians running regression models find these formulas?
Late in Moneyball, we have another Bill James-like character, Voros McCracken, who while unemployed and living with his parents developed a formula for ranking pitchers that was startling in its simplicity and its implications. (In particular, the existence of such a formula startled Bill James.) I found no mention of Mr. McCracken or his formula in the other two books.

A nice summary of James's attitude is given on p. 95:

Intelligence about baseball had become equated in the public mind with the ability to recite arcane baseball stats. What James's wider audience had failed to understand was that the statistics were beside the point. The point was understanding; the point was to make life on earth just a bit more intelligible; and that point, somehow, had been lost. "I wonder," James wrote, "if we haven't become so numbed by all these numbers that we are no longer capable of truly assimilating any knowledge which might result from them."

This point is precisely what R.W. Hamming meant: "The purpose of computing is insight, not numbers" [1986, frontispiece].

The following are some of the points of Moneyball:

- Much of the common knowledge within baseball is wrong.
- In particular, baseball scouts have been relying on methods of evaluation that are nearly useless.
- On-base percentage (OBP) is a much better measure of a batter's offensive value than batting average. Although some people figured this out before Bill James, baseball clubs have relied almost exclusively on batting averages.
- Baseball has relied on batting averages because of a historical accident. A nineteenth-century cricket writer invented the batting-average statistic partly because he made an incorrect assumption about the art of hitting. A very basic statistical study, quite within the range of any analytical thinker, could have shown this.
- Major-league baseball clubs in their entirety have been extremely inefficient in drafting, promoting, and paying their players.
- Baseball clubs have had nerds on their staffs who told them how to do things better, but the executives almost never listened to them.
- At least one major-league club had a front office that was simply stupid; that is, it was staffed by people with uniformly low cognitive ability.
- The single biggest mistake made in baseball was the belief that one's judgment is more reliable than the statistical record.

Lastly, I found Moneyball laugh-out-loud funny.

# References

Hamming, R.W. 1986. Numerical Methods for Scientists and Engineers. New York: Dover.

James, Bill. 2002. The New Bill James Historical Baseball Abstract. Rev. ed. 2003. New York: Free Press.

McGrath, Ben. 2002. Where's Marv Throneberry? Review of James [2002]. New York Times (31 March 2002) Section 7, 12.

James M. Cargal, Mathematics Dept., Troy University-Montgomery Campus, Montgomery, AL 36121-0667; jmcargal@sprintmail.com.

# The UMAP Journal

Publisher

COMAP, Inc.

Executive Publisher

Solomon A. Garfunkel

ILAP Editor

Chris Arney

Interim Vice-President for Academic Affairs

The College of Saint Rose

432 Western Avenue

Albany, NY 12203

arneyc@mail.strose.edu

On Jargon Editor

Yves Nievergelt

Department of Mathematics

Eastern Washington University

Cheney, WA 99004

ynievergelt@ewu.edu

Reviews Editor

James M. Cargal

Mathematics Dept.

Troy University

Montgomery Campus

231 Montgomery St.

Montgomery, AL 36104

jmcargal@sprintmail.com

Chief Operating Officer

Laurie W. Aragon

Production Manager

George W. Ward

Director of Educ.
Technology

Roland Cheyney

Production Editor

Pauline Wright

Copy Editor

Timothy McLean

Distribution

Kevin Darcy

John Tomicek

Graphic Designer

Daiva Kiliulis

Vol. 25, No. 3

# Editor

Paul J. Campbell

Campus Box 194

Beloit College

700 College St.

Beloit, WI 53511-5595

campbell@beloit.edu

# Associate Editors

Don Adolphson

Chris Arney

Ron Barnes

Arthur Benjamin

James M. Cargal

Murray K. Clayton

Courtney S. Coleman

Linda L. Deneen

James P. Fink

Solomon A. Garfunkel

William B. Gearhart

William C. Giauque

Richard Haberman

Charles E. Lienert

Walter Meyer

Yves Nievergelt

John S. Robertson

Garry H. Rodrigue

Ned W. Schillow

Philip D. Straffin

J.T. Sutcliffe

Donna M. Szott

Gerald D. Taylor

Maynard Thompson

Ken Travers

Robert E.D. "Gene" Woolsey

Brigham Young University

The College of St. Rose

University of Houston-Downtown

Harvey Mudd College

Troy State University Montgomery

University of Wisconsin—Madison

Harvey Mudd College

University of Minnesota, Duluth

Gettysburg College

COMAP, Inc.

California State University, Fullerton

Brigham Young University

Southern Methodist University

Metropolitan State College

Adelphi University

Eastern Washington University

Georgia College and State University

Lawrence Livermore Laboratory

Lehigh Carbon Community College

Beloit College

St. Mark's School, Dallas

Comm. College of Allegheny County

Colorado State University

Indiana University

University of Illinois

Colorado School of Mines

# Membership Plus

Individuals subscribe to The UMAP Journal through COMAP's Membership Plus.
This subscription also includes print copies of our annual collection UMAP Modules: Tools for Teaching, our organizational newsletter Consortium, on-line membership that allows members to download and reproduce COMAP materials, and a $10\%$ discount on all COMAP purchases.

(Domestic) #2420 $90

(Outside U.S.) #2421 $105

# Institutional Plus Membership

Institutions can subscribe to the Journal through Institutional Plus Membership, Regular Institutional Membership, or a Library Subscription. Institutional Plus Members receive two print copies of each of the quarterly issues of The UMAP Journal, our annual collection UMAP Modules: Tools for Teaching, our organizational newsletter Consortium, on-line membership that allows members to download and reproduce COMAP materials, and a $10\%$ discount on all COMAP purchases.

(Domestic) #2470 $415

(Outside U.S.) #2471 $435

# Institutional Membership

Regular Institutional members receive print copies of The UMAP Journal, our annual collection UMAP Modules: Tools for Teaching, our organizational newsletter Consortium, and a $10\%$ discount on all COMAP purchases.

(Domestic) #2440 $180

(Outside U.S.) #2441 $200

# Web Membership

Web membership does not provide print materials. Web members can download and reproduce COMAP materials, and receive a $10\%$ discount on all COMAP purchases.

(Domestic) #2410 $39

(Outside U.S.) #2410 $39

To order, send a check or money order to COMAP, or call toll-free

1-800-77-COMAP (1-800-772-6627).
The UMAP Journal is published quarterly by the Consortium for Mathematics and Its Applications (COMAP), Inc., Suite 210, 57 Bedford Street, Lexington, MA, 02420, in cooperation with the American Mathematical Association of Two-Year Colleges (AMATYC), the Mathematical Association of America (MAA), the National Council of Teachers of Mathematics (NCTM), the American Statistical Association (ASA), the Society for Industrial and Applied Mathematics (SIAM), and The Institute for Operations Research and the Management Sciences (INFORMS). The Journal acquaints readers with a wide variety of professional applications of the mathematical sciences and provides a forum for the discussion of new directions in mathematical education (ISSN 0197-3622).

Periodical rate postage paid at Boston, MA and at additional mailing offices.

# Send address changes to: info@comap.com

COMAP, Inc. 57 Bedford Street, Suite 210, Lexington, MA 02420

Copyright 2004 by COMAP, Inc. All rights reserved.

# Vol. 25, No. 3 2004

# Table of Contents

# Publisher's Editorial

The Good Fight

Solomon A. Garfunkel 185

# Special Section on the MCM

Results of the 2004 Mathematical Contest in Modeling

Frank Giordano 189

The Myth of "The Myth of Fingerprints"

Steven G. Amery, Eric Thomas Harley, and Eric J. Malm . . . . . .215

Can't Quite Put Our Finger On It

Seamus Ó Ceallaigh, Álva Sheeley, and Aidan Crangle 231

Not Such a Small Whorl After All

Brian Camley, Pascal Getreuer, and Bradley Klingenberg 245

Rule of Thumb: Prints Beat DNA

Seth Miller, Dustin Mixon, and Jonathan Pickett 259

Judge's Commentary: The Outstanding Fingerprints Papers

Michael Tortorella 261

Practitioner's Commentary: The Outstanding Fingerprints Papers

Mary Beeton 267

Editor's Commentary: Fingerprint Identification

Paul J.
Campbell 273 + +A Myopic Aggregate-Decision Model + +Ivan Corwin, Sheel Ganatra, and Nikita Rozenblyum 281 + +Theme-Park Queueing Systems + +Alexander V. Frolkin, Frederick D.W. van der Wyck, and + +Stephen Burgess. 301 + +Developing Improved Algorithms for QuickPass Systems + +Moorea L. Brega, Alejandro L. Cantarero, and Corry L. Lee . . . 319 + +KalmanQueue: An Adaptive Approach to Virtual Queueing + +Tracy Clark Lovejoy, Aleksandr Yakovlevitch Aravkin, and + +Casey Schneider-Mizell 337 + +Theme Park Simulation with a Nash-Equilibrium-Based + +Visitor Behavior Model + +Andrew Spann, Daniel Gulotta, and Daniel Kane 353 + +Judges' Commentary: The Quick Pass Fusaro Award Paper + +Peter Ansbach and Kathleen M. Shannon + +![](images/6de243d8ac552a7e20a7e2b722628aee8a565c0e28857164f4f9709e17ad5dc8.jpg) + +# Publisher's Editorial The Good Fight + +Solomon A. Garfunkel +Executive Director +COMAP, Inc. +57 Bedford St., Suite 210 +Lexington, MA 02420 +s.garfunkel@mail.comap.com + +The MCM issue of *The UMAP Journal* has historically been an opportunity for me to reflect on the year at COMAP and discuss with you many of the new projects underway. I am, with your indulgence, going to diverge from that tradition this year. Perhaps it is the coming (as I write this) election, but this has been an extremely political year. And that politics is having its effect on all of us involved in mathematics education. I have found myself as a consequence writing pieces that are more "polemical" in nature—defending our beliefs and our work within our community and without. So, with no apologies, here are two short essays that reflect my thoughts on the "good fight." + +# Mathematical Breadth for All + +The discussion (debate, war) about differentiating the curriculum for students with the perceived ability to go on in mathematics vs. the rest usually misses the crucial point of breadth. 
It is in many ways ironic that the "mathematics for all" movement has succeeded in infusing the secondary school mathematics curriculum with many important ideas and concepts that the "better" students simply do not see. A great deal of effort has gone into the creation of materials intended to show all students the usefulness of mathematics, through the use of contemporary applications and the processes of mathematical modeling. In many cases, this means teaching more discrete mathematics such as graph theory, game theory, social choice theory, and operations research. + +Henry Pollak is fond of saying that there are four reasons for students to learn mathematics—to help them as they enter the world of work, to make them more knowledgeable citizens, to help them in their daily lives, and to + +have them appreciate the beauty and power of the discipline. Clearly, for the mathematically talented, we focus on the last, while for more average or less motivated students, we (hopefully) stress the first three. I believe that this is a terrible mistake and that we are paying a terrible price. + +It is no secret that there has been a worldwide decline in mathematics majors. In the U.S., the half-life of students in mathematics courses, from 10th grade to the Ph.D., remains one year. In other words, if we look at the students enrolled in 11th-grade math courses, there are approximately half as many as were enrolled the previous year in 10th-grade math courses, and so on to the Ph.D. I argue that while we are doing a much better job in showing average students the importance and relevance of mathematics in their lives, we are simultaneously discouraging our brightest students from continuing their mathematical studies. + +I believe that the reasons for this are clear. We assiduously avoid showing the mathematical elite the utility of our subject and its relevance to their daily lives, career choices, and role in society. 
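The half-life claim above is simple geometric decay: if enrollment halves every year, then after $k$ years a fraction $(1/2)^k$ remains. A minimal sketch of the arithmetic, under the illustrative assumption of roughly ten course-years between 10th grade and the Ph.D.:

```python
def survivors(initial, years, half_life=1.0):
    """Enrollment under geometric decay: halves every half_life years."""
    return initial * 0.5 ** (years / half_life)

# With a one-year half-life, ten course-years thin 1,000,000
# tenth-grade students to roughly a thousand Ph.D. candidates.
print(round(survivors(1_000_000, 10)))
```

Whatever the exact span, the compounding makes the point: small per-year losses of talented students multiply into a drastically smaller pipeline.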
There is some mythical linear sequence of courses from birth to the Ph.D., which we feel that they must take. Many of these courses are highly technical, providing practice in skills necessary only for the next course. But we have ample proof that this delayed gratification simply does not work. Our best students are leaving mathematics for what they perceive as more relevant and rewarding fields, such as biology and finance.

Even if one accepts the notion that we should have a differentiated curriculum, based on ability, it is patently absurd to avoid showing our best and brightest students the power and utility of our subject. It isn't a horse race. Students don't have to take advanced calculus or point-set topology by the age of 17, no matter how talented they might be. We need to show our students, at every ability level, the breadth of our discipline and the breadth of its applications. By not doing so, we only invite the disaffection we see.

# Now It's Personal

I should preface my comments by saying that I am first and foremost a curriculum developer. For the past 30 years, I have worked to produce curriculum materials that attempt to teach mathematics through contemporary applications and modeling. COMAP has produced literally thousands of modules from primary school through university-level mathematics, as well as several high school and tertiary texts and television and Web-based courses.

There is, however, in many countries, a feeling that we have created sufficient new curricula over the past several years and that before we create more, we need to look hard at what we have done and whether we have made a difference in student achievement. At first blush this makes perfectly good sense, but the devil is truly in the details. Everyone is aware that student achievement is affected by several factors, not the least of which is teacher preparation and performance.
And with the new curricula, teacher training and staff development consistently lag woefully behind. In part, this is due to the enormity of the task, and in large part, to the enormity of the expense involved in doing staff development "right."

But most of this discussion is beside the point. What we have today is a call for research. We have politicians and colleagues saying that before we develop new materials we must learn what works. We must experiment (almost in the medical sense) in order to be certain that what we teach from now on has a sound body of research behind it. While I realize that this analysis sounds extreme, I assure you that at least in U.S. educational circles it is a reality. Moreover, this reality is being played out by real politicians who decide where educational funds will be spent. Sadly, this dichotomy can also be seen within the discipline of mathematics education. Most of us, of a certain age, came to mathematics education through other pathways—as mathematical researchers, or as university or secondary-school mathematics faculty. There was not yet a discipline of mathematics education, few Ph.D.s, no direct career path, and few journals to publish our work. We were in the truest sense self-taught. We learned what works by working.

I would argue that there are examples / models / problems that are beneficial on their face. These problems illustrate key aspects of the modeling process, can be set in contemporary and inherently interesting contexts, and permit us to teach and/or reinforce important mathematical concepts and skills. I believe that their introduction into the curriculum should not wait for double-blind experiments with control groups, based on a theoretical framework, evaluated through statistical techniques valid at a $95\%$ confidence level.

That mathematics education is now a respected discipline is, of course, a good thing.
More and more talented young people are entering the field, and more and more journals and international meetings give them respected outlets for their work. But I fear that we are losing the best part of our past. Much of that past does not rest in journals, but in ourselves. Anyone who was fortunate enough to view the tape that Henry Pollak made for the ICMI study conference in Dortmund [2004], or to listen to the stirring words of Ubi D'Ambrosio at ICME-10 [2004], or to go to any talk by Claudi Alsina (see [2001a; 2001b]) will understand what I mean.

These giants may or may not describe their work in the vernacular of the day. They may or may not explicate a theoretical framework, or reference a standards document for content or an educational statistics journal for a methodology. But the quality of their ideas is a thing to be treasured. Yes, we must describe our work in ways that can be replicated. Yes, we must conduct real research to establish whether our ideas as implemented make a positive difference in student performance. Yes, we must publish our work in respectable journals, reviewed by our peers. But in the same way that we understand analogous truths about mathematics research, we must not lose sight of the art of mathematics education. Just as with mathematics, there is beauty and elegance here. We must continue to make room for those who would strike out in new ways—try new content, new applications, new technologies.

# References

Alsina, Claudi. 2001a. A guided tour of daily geometry. The UMAP Journal 21 (1): 3-14.

_____. 2001b. Mathematics and comfort. The UMAP Journal 21 (1): 97-115.

D'Ambrosio, Ubiratan. 2004. Diffusion and popularization of science and mathematics. International Study Group on the Relations between History and Pedagogy of Mathematics, affiliated to the International Commission on Mathematical Instruction (ICMI). 10th International Congress on Mathematical Education (ICME-10). Copenhagen, Denmark.

Pollak, Henry.
2004. A history of the teaching of mathematical modeling in the U.S.A. Plenary Lecture, International Commission on Mathematical Instruction (ICMI) Study Conference 14: Applications and Modeling in Mathematics Education. Dortmund, Germany: University of Dortmund. + +# About the Author + +Sol Garfunkel received his Ph.D. in mathematical logic from the University of Wisconsin in 1967. He was at Cornell University and at the University of Connecticut at Storrs for 11 years and has dedicated the last 25 years to research and development efforts in mathematics education. He has been the Executive Director of COMAP since its inception in 1980. + +He has directed a wide variety of projects, including UMAP (Undergraduate Mathematics and Its Applications Project), which led to the founding of this Journal, and HiMAP (High School Mathematics and Its Applications Project), both funded by the NSF. For Annenberg/CPB, he directed three telecourse projects: For All Practical Purposes (in which he also appeared as the on-camera host), Against All Odds: Inside Statistics (still showing on late-night TV in New York!), and In Simplest Terms: College Algebra. He is currently co-director of the Applications Reform in Secondary Education (ARISE) project, a comprehensive curriculum development project for secondary school mathematics. + +# Modeling Forum + +# Results of the 2004 Mathematical Contest in Modeling + +Frank Giordano, MCM Director + +Naval Postgraduate School + +1 University Circle + +Monterey, CA 93943-5000 + +frgiorda@nps.navy.mil + +# Introduction + +A total of 600 teams of undergraduates, from 253 institutions and 346 departments in 11 countries, spent the second weekend in February working on applied mathematics problems in the 20th Mathematical Contest in Modeling (MCM). + +The 2004 MCM began at 8:00 P.M. EST on Thursday, Feb. 5 and ended at 8:00 P.M. EST on Monday, Feb. 9. 
During that time, teams of up to three undergraduates were to research and submit an optimal solution for one of two open-ended modeling problems. Students registered, obtained contest materials, downloaded the problems at the appropriate time, and entered completion data through COMAP's MCM Website. After a weekend of hard work, solution papers were sent to COMAP on Monday. The top papers appear in this issue of The UMAP Journal.

In addition, this year, on the 20th anniversary of the founding of the MCM by Ben Fusaro, COMAP announces a new annual award for MCM papers. Typically, among the final papers from which the Outstanding ones are selected is a paper that is especially creative but contains a flaw that prevents it from attaining the Outstanding designation. In accord with Ben's wishes, the award will recognize such teams.

Results and winning papers from the first 19 contests were published in special issues of Mathematical Modeling (1985-1987) and The UMAP Journal (1985-2003). The 1994 volume of Tools for Teaching, commemorating the tenth anniversary of the contest, contains all 20 problems used in the first 10 years of
Results of this year's ICM are on the COMAP Website at http://www.comap.com/undergraduate/contests; results and Outstanding papers appeared in Vol. 25 (2004), No. 2. The HiMCM offers high school students a modeling opportunity similar to the MCM. Further details about the HiMCM are at http://www.comap.com/highschool/contests.

# Problem A: Are Fingerprints Unique?

It is a commonplace belief that the thumbprint of every human who has ever lived is different.

Develop and analyze a model that will allow you to assess the probability that this is true.

Compare the odds (that you found in this problem) of misidentification by fingerprint evidence against the odds of misidentification by DNA evidence.

# Problem B: A Faster Quick Pass System

"Quick Pass" systems are increasingly appearing to reduce people's time waiting in line, whether it is at tollbooths, amusement parks, or elsewhere.

Consider the design of a Quick Pass system for an amusement park. The amusement park has experimented by offering Quick Passes for several popular rides as a test. The idea is that for certain popular rides you can go to a kiosk near that ride and insert your daily park entrance ticket, and out will come a slip that states that you can return to that ride at a specific time later. For example, you insert your daily park entrance ticket at 1:15 P.M., and the Quick Pass states that you can come back between 3:30 and 4:30 P.M. when you can use your slip to enter a second, and presumably much shorter, line that will get you to the ride faster. To prevent people from obtaining Quick Passes for several rides at once, the Quick Pass machines allow you to have only one active Quick Pass at a time.

You have been hired as one of several competing consultants to improve the operation of Quick Pass. Customers have been complaining about some anomalies in the test system.
For example, customers observed that in one instance Quick Passes were being offered for a return time as long as 4 hours later. A short time later on the same ride, the Quick Passes were given for times only an hour or so later. In some instances, the lines for people with Quick Passes are nearly as long and slow as the regular lines.

The problem then is to propose and test schemes for issuing Quick Passes in order to increase people's enjoyment of the amusement park. Part of the problem is to determine what criteria to use in evaluating alternative schemes. Include in your report a nontechnical summary for amusement park executives who must choose between alternatives from competing consultants.

# The Results

The solution papers were coded at COMAP headquarters so that names and affiliations of the authors would be unknown to the judges. Each paper was then read preliminarily by two "triage" judges at either Appalachian State University (Fingerprints Problem) or the National Security Agency (Quick Pass Problem). At the triage stage, the summary and overall organization are the basis for judging a paper. If the judges' scores diverged for a paper, the judges conferred; if they still did not agree on a score, a third judge evaluated the paper.

This year, an additional Regional Judging site was again created at the U.S. Military Academy to support the growing number of contest submissions.

Final judging took place at Harvey Mudd College, Claremont, California. The judges classified the papers as follows:
| | Outstanding | Meritorious | Honorable Mention | Successful Participation | Total |
|---|---:|---:|---:|---:|---:|
| Fingerprints Problem | 3 | 24 | 50 | 126 | 203 |
| Quick Pass Problem | 4 | 38 | 109 | 246 | 397 |
| Total | 7 | 62 | 159 | 372 | 600 |
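Under the strong simplifying assumption that thumbprints fall independently and uniformly on some number $N$ of distinguishable configurations, Problem A reduces to a birthday problem: the probability that all $n$ prints differ is approximately $\exp(-n(n-1)/(2N))$. A minimal sketch of that calculation follows; the population figure of $10^{11}$ humans and Galton's classical estimate of about $6.4 \times 10^{10}$ distinguishable prints are illustrative assumptions, not part of the contest problem:

```python
import math

def prob_all_distinct(n, num_configs):
    """Birthday-problem approximation: probability that n prints,
    each drawn uniformly and independently from num_configs equally
    likely configurations, are pairwise distinct."""
    return math.exp(-n * (n - 1) / (2 * num_configs))

humans = 1e11     # rough count of all humans ever born (assumption)
galton = 6.4e10   # Galton's classical estimate of distinct prints (assumption)

print(prob_all_distinct(humans, galton))  # vanishingly small: duplicates near-certain
print(prob_all_distinct(humans, 1e48))    # essentially 1: uniqueness plausible
```

The two extremes illustrate how completely the answer hinges on the assumed number of distinguishable configurations, which is why the contest papers devote most of their effort to modeling that quantity.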
The seven papers that the judges designated as Outstanding appear in this special issue of The UMAP Journal, together with commentaries. We list those teams and the Meritorious teams (and advisors) below; the list of all participating schools, advisors, and results is in the Appendix.

# Outstanding Teams

Institution and Advisor

Team Members

# Fingerprints Papers

"The Myth of 'The Myth of Fingerprints'"

Harvey Mudd College

Claremont, CA

Jon Jacobsen

Steven G. Avery

Eric Thomas Harley

Eric J. Malm

"Can't Quite Put Our Finger On It"

University College Cork

Cork, Ireland

James J. Grannell

Seamus Ó Ceallaigh

Alva Sheeley

Aidan Crangle

"Not Such a Small Whorl After All"

University of Colorado at Boulder

Boulder, CO

Anne M. Dougherty

Brian Camley

Pascal Getreuer

Bradley Klingenberg

# Quick Pass Papers

"A Myopic Aggregate-Decision Model for Reservation Systems in Amusement Parks"

Harvard University

Cambridge, MA

Clifford H. Taubes

Ivan Corwin

Sheel Ganatra

Nikita Rozenblyum

"Theme-Park Queueing Systems"

Merton College, University of Oxford

Oxford, U.K.

Ulrike Tillmann

Alexander V. Frolkin

Frederick D.W. van der Wyck

Stephen Burgess

"Developing Improved Algorithms for QuickPass Systems"

University of Colorado

Boulder, CO

Bengt Fornberg

Moorea L. Brega

Alejandro L. Cantarero

Corry L. Lee

"KalmanQueue: An Adaptive Approach to Virtual Queueing"

University of Washington

Seattle, WA

James Allen Morrow

Tracy Clark Lovejoy

Aleksandr Yakovlevitch Aravkin

Casey Schneider-Mizell

# Meritorious Teams

Fingerprints Papers (24 teams)

Bethel College, St. Paul, MN (William M. Kinney)

Central Washington University, Ellensburg, WA (Stuart F.
Boersma)

Chongqing University, Chongqing, China (Li Zhiliang)

Cornell University, Ithaca, NY (Alexander Vladimirsky)

Dalian University of Technology, Dalian, Liaoning, China (We Mingfeng)

Dalian University, Dalian, Liaoning, China (Tan Xinxin)

Donghua University, Shanghai, Shanghai, China (Ding Yongsheng)

Duke University, Durham, NC (William G. Mitchener)

Gettysburg College, Gettysburg, PA (Peter T. Otto)

Kansas State University, Manhattan, KS (Fosskorten N. Auckly)

Luther College, Decorah, IA (Reginald D. Laursen)

MIT, Cambridge, MA (Martin Z. Bazant)

Northwestern Polytechnical University, Xi'an, Shaanxi, China (Peng Guohua)

Olin College of Engineering, Needham, MA (Burt S. Tilley)

Shanghai Jiaotong University (William G. Mitchener)

Rensselaer Polytechnic Institute, Troy, NY (Peter R. Kramer)

Simpson College, Indianola, IA (Murphy Waggoner)

Tsinghua University, Beijing, China (Hu Zhiming)

University College Cork, Cork, Ireland (James J. Grannell)

University of Colorado at Boulder, Boulder, CO (Anne M. Dougherty)

University of Delaware, Newark, DE (Louis F. Rossi)

University of South Carolina Aiken, Aiken, SC (Thomas F. Reid)

University of Washington, Seattle, WA (Rekha R. Thomas)

Zhejiang University, Hangzhou, Zhejiang, China (Yong He)

Quick Pass Papers (38 teams)

Beijing Forestry University, Beijing, China (Gao Mengning)

Bloomsburg University, Bloomsburg, PA (Kevin K. Ferland)

Carroll College, Helena, MT (Marilyn S. Schendel)

Civil Aviation University of China, Tianjin, China (Nie Runtu)

The College of Wooster, Wooster, OH (Charles R. Hampton)

Colorado College, Colorado Springs, CO (Jane M. McDougall)

Concordia College New York, Bronxville, NY (John F. Loase)

Davidson College, Davidson, NC (Dennis R. Appleyard)

Duke University, Durham, NC (William G. Mitchener)

Grand View College, Des Moines, IA (Sergio Loch)

Greenville College, Greenville, IL (George R.
Peters) + +Harbin Institute of Technology, Harbin, Heilongjiang, China (Liu Kean) (two teams) + +Harvey Mudd College, Claremont, CA (Ran Libeskind-Hadas) +Kansas State University, Manhattan, KS (Fosskorten N. Auckly) +Loyola College, Baltimore, MD (Christos A. Xenophontos) +MIT, Cambridge, MA (Martin Z. Bazant) +Nanjing University of Science and Technology, Nanjing, Jiangsu, China (Zhao Peibiao) +Nankai University, Tianjin, China (Yang Qingzhi) +North China Electric Power University, Baoding, Hebei, China (Shi HuiFeng) +Rensselaer Polytechnic Institute, Troy, NY (Peter R. Kramer) +Salisbury University, Salisbury, MD (Joseph W. Howard) +Shanghai Jiaotong University, Shanghai, China (Song Baorui) (two teams) +Simpson College, Indianola, IA (Werner S. Kolln) +Southeast Missouri State University, Cape Girardeau, MO (Robert W. Sheets) +University of California, Berkeley, CA (Lawrence C. Evans) +United States Military Academy, West Point, NY (J. Scott Billie) +University College Cork, Cork, Ireland (Patrick Fitzpatrick) +University of Massachusetts Lowell, Lowell, MA (James Graham-Eagle) +University of Pittsburgh, Pittsburgh, PA (Jonathan E. Rubin) +University of Saskatchewan, Saskatoon, SK, Canada (James A. Brooke) +University of Trier, Trier, Germany (Volker H. Schulz) +University of Washington, Seattle, WA (James Allen Morrow) +Wake Forest University, Winston Salem, NC (Miaohua Jiang) (two teams) +Wake Forest University, Winston Salem, NC (Robert Plemmons) +Wartburg College, Waverly, IA (Brian J. Birgen) + +# Awards and Contributions + +Each participating MCM advisor and team member received a certificate signed by the Contest Director and the appropriate Head Judge. 

INFORMS, the Institute for Operations Research and the Management Sciences, recognized the teams from Harvey Mudd College (Fingerprints Problem) and Merton College, University of Oxford (Quick Pass Problem) as INFORMS Outstanding teams and provided the following recognition:

- a letter of congratulations from the current president of INFORMS to each team member and to the faculty advisor;
- a check in the amount of $300 to each team member;
- a bronze plaque for display at the team's institution, commemorating their achievement;
- individual certificates for team members and the faculty advisor as a personal commemoration of this achievement;
- a one-year student membership in INFORMS for each team member, which includes their choice of a professional journal plus the OR/MS Today periodical and the INFORMS society newsletter; and
- a one-year subscription to the COMAP modeling materials Website for the faculty advisor.

The Society for Industrial and Applied Mathematics (SIAM) designated one Outstanding team from each problem as a SIAM Winner. The teams were from Harvey Mudd College (Fingerprints Problem) and the University of Colorado at Boulder (Quick Pass Problem). Each team member was awarded a $300 cash prize, and the teams received partial expenses to present their results in a special Minisymposium at the SIAM Annual Meeting in Portland, OR, in July. Their schools were given a framed hand-lettered certificate in gold leaf.

The Mathematical Association of America (MAA) designated one Outstanding team from each problem as an MAA Winner. The teams were from the University of Colorado at Boulder (Fingerprints Problem) and Harvard University (Quick Pass Problem). With partial travel support from the MAA, both teams presented their solutions at a special session of the MAA Mathfest in Providence, RI, in August. Each team member was presented a certificate by Richard S. Neal, Co-Chair of the MAA Committee on Undergraduate Student Activities and Chapters.

# New: The Ben Fusaro Award

Two Meritorious papers were selected for the Ben Fusaro Award, named for the Founding Director of the MCM and awarded for the first time this year. It recognizes teams for an especially creative approach to the contest problem. The Ben Fusaro Award teams were from Central Washington University (Fingerprints Problem) and MIT (Quick Pass Problem). Each team received a plaque from COMAP.

# Background

The Ben Fusaro Award was created to recognize technical papers that demonstrate an exemplary modeling effort for the MCM. These papers are well-written; their modeling approach and the reasoning for adopting such an approach are clearly communicated; and their analysis, results, and conclusions are measured and appropriate within the context of the problem.

# Award Committee

The award committee consists of two judges for each problem: the Triage Head Judge for the problem, plus a COMAP-sponsored judge currently serving at a two- or four-year college who has at least one prior year of experience in final judging for the MCM.

# Eligibility

To compete for the Ben Fusaro Award, a paper must be considered Meritorious or Outstanding by the problem judges and normally be among those remaining for final discussion prior to the identification of Outstanding papers. Ideally, a paper selected to receive the Ben Fusaro Award should not be one of those already selected for recognition by one of the professional societies.

# Selection Process

Following the final discussion round, the award committee will select one paper from each problem that best demonstrates the following characteristics:

- The paper presents a high-quality application of the complete modeling process, as represented by the 100-point-scale elements developed for the final rounds of judging.
- The team has demonstrated noteworthy originality and creativity in their modeling effort to solve the problem as given.
- The paper is well-written, with a clear exposition, and a pleasure to read.

# Ben Fusaro

Ben was the founder of the MCM and its director for the first seven years. He has a B.A. from Swarthmore College, an M.A. from Columbia University (analysis), a Ph.D. from the University of Maryland (partial differential equations), and most recently (1990) an M.A. from the University of Maryland (computer science).

He taught at several other colleges and universities before going to Salisbury State in 1974, where he served as chair of the Mathematics and Computer Science Dept. 1974-82 and received the Distinguished Faculty Award in 1992. Ben was NSF Lecturer at New Mexico Highlands University and at the University of Oklahoma, Fulbright Professor at National Taiwan Normal University, and visiting professor at the U.S. Military Academy at West Point. He has taught most undergraduate mathematics courses, plus graduate courses in integral equations, partial differential equations, and mathematical modeling.

In recent years, Ben has been a major exponent of environmental mathematics, a topic on which he has presented several minicourses.

![](images/a809d6cf1c821fce419e10255157f96641e90832fd94ecb768328a20a7b4499a.jpg)

# Judging

Director

Frank R. Giordano, Naval Postgraduate School, Monterey, CA

Associate Directors

Robert L. Borrelli, Mathematics Dept., Harvey Mudd College, Claremont, CA

Patrick J. Driscoll, Dept. of Systems Engineering, U.S. Military Academy, West Point, NY

Contest Coordinator

Kevin Darcy, COMAP Inc., Lexington, MA

# Fingerprints Problem

Head Judge

Marvin S. Keener, Executive Vice-President, Oklahoma State University, Stillwater, OK (MAA)

Associate Judges

William C. Bauldry, Chair, Dept.
of Mathematical Sciences, Appalachian State University, Boone, NC (Triage)

Kelly Black, Mathematics Dept., University of New Hampshire, Durham, NH (SIAM)

Lisette De Pillis, Mathematics Dept., Harvey Mudd College, Claremont, CA

J. Douglas Faires, Youngstown State University, Youngstown, OH (MAA)

Ben Fusaro, Mathematics Dept., Florida State University, Tallahassee, FL (SIAM)

Mario Juncosa, RAND Corporation, Santa Monica, CA (retired)

Deborah P. Levinson, Hewlett-Packard Company, Colorado Springs, CO

Michael Moody, Olin College of Engineering, Needham, MA

John L. Scharf, Mathematics Dept., Carroll College, Helena, MT

Dan Solow, Mathematics Dept., Case Western Reserve University, Cleveland, OH (INFORMS)

Michael Tortorella, Dept. of Industrial and Systems Engineering, Rutgers University, Piscataway, NJ

Richard Douglas West, Francis Marion University, Florence, SC

Daniel Zwillinger, Newton, MA

# Quick Pass Problem

Head Judge

Maynard Thompson, Mathematics Dept., Indiana University, Bloomington, IN

Associate Judges

Peter Anspach, National Security Agency, Ft. Meade, MD (Triage)

Karen D. Bolinger, Mathematics Dept., Clarion University of Pennsylvania, Clarion, PA

James Case, Baltimore, MD (SIAM)

William P. Fox, Mathematics Dept., Francis Marion University, Florence, SC (MAA)

Jerry Griggs, Mathematics Dept., University of South Carolina, Columbia, SC (INFORMS)

John Kobza, Mathematics Dept., Texas Tech University, Lubbock, TX (INFORMS)

Veena Mendiratta, Lucent Technologies, Naperville, IL

Don Miller, Mathematics Dept., St. Mary's College, Notre Dame, IN (SIAM)

Kathleen M. Shannon, Dept. of Mathematics and Computer Science, Salisbury University, Salisbury, MD

Marie Vanisko, Dept. of Mathematics, California State University, Stanislaus, CA (MAA)

# Regional Judging Session

Head Judge

Patrick J. Driscoll, Dept. of Systems Engineering

Associate Judges

Darrall Henderson, Dept.
of Mathematical Sciences

Steven Henderson, Dept. of Systems Engineering

Steven Horton, Dept. of Mathematical Sciences

Michael Jaye, Dept. of Mathematical Sciences

—all of the U.S. Military Academy, West Point, NY

# Triage Sessions

# Fingerprints Problem

Head Triage Judge

William C. Bauldry, Chair

Associate Judges

Terry Anderson, Mark Ginn, Jeff Hirst, Rick Klima, Katie Mawhinney, and Vickie Williams

—all from the Dept. of Mathematical Sciences, Appalachian State University, Boone, NC

# Quick Pass Problem

Head Triage Judge

Peter Anspach, National Security Agency (NSA), Ft. Meade, MD

Associate Judges

Dean McCullough, High Performance Technologies, Inc.

Robert L. Ward (retired)

Blair Kelly, Craig Orr, Brian Pilz, Eric Schram, and other members of NSA.

# Fusaro Award Committee

Fingerprints Problem:

William C. Bauldry, Chair, Dept. of Mathematical Sciences, Appalachian State University, Boone, NC

Michael Moody, Olin College of Engineering, Needham, MA

Quick Pass Problem:

Peter Anspach, National Security Agency, Ft. Meade, MD

Kathleen M. Shannon, Dept. of Mathematics and Computer Science, Salisbury University, Salisbury, MD

# Sources of the Problems

The Fingerprints Problem was contributed by Michael Tortorella (Dept. of Industrial and Systems Engineering, Rutgers University, Piscataway, NJ).

The Quick Pass Problem was contributed by Jerry Griggs (Mathematics Dept., University of South Carolina, Columbia, SC).

# Acknowledgments

Major funding for the MCM is provided by the National Security Agency and by COMAP. We thank Dr. Gene Berg of NSA for his coordinating efforts. Additional support is provided by the Institute for Operations Research and the Management Sciences (INFORMS), the Society for Industrial and Applied Mathematics (SIAM), and the Mathematical Association of America (MAA). We are indebted to these organizations for providing judges and prizes.
+ +We thank the MCM judges and MCM Board members for their valuable and unflagging efforts. Harvey Mudd College, its Mathematics Dept. staff, and Prof. Borrelli were gracious hosts to the judges. + +# Cautions + +To the reader of research journals: + +Usually a published paper has been presented to an audience, shown to colleagues, rewritten, checked by referees, revised, and edited by a journal editor. Each of the student papers here is the result of undergraduates working on a problem over a weekend; allowing substantial revision by the authors could give a false impression of accomplishment. So these papers are essentially au naturel. Editing (and sometimes substantial cutting) has taken place: Minor errors have been corrected, wording has been altered for clarity or economy, and style has been adjusted to that of The UMAP Journal. Please peruse these student efforts in that context. + +To the potential MCM Advisor: + +It might be overpowering to encounter such output from a weekend of work by a small team of undergraduates, but these solution papers are highly atypical. A team that prepares and participates will have an enriching learning experience, independent of what any other team does. + +COMAP's Mathematical Contest in Modeling and Interdisciplinary Contest in Modeling are the only international modeling contests in which students work in teams. Centering its educational philosophy on mathematical modeling, COMAP uses mathematical tools to explore real-world problems. It serves the educational community as well as the world of work by preparing students to become better-informed and better-prepared citizens. + +# Appendix: Successful Participants + +KEY: + +$\mathrm{P} =$ Successful Participation + +H = Honorable Mention + +$\mathbf{M} =$ Meritorious + +$\mathbf{O} =$ Outstanding (published in this special issue) + +
INSTITUTIONCITYADVISORAB
ALASKA
U. of AlaskaFairbanksJill R. FaudreePH
ARIZONA
Northern Arizona U.FlagstaffTerence R. BlowsP
ARKANSAS
Hendrix CollegeConwayDuff Gordon CampbellH
CALIFORNIA
California Baptist U.RiversideCatherine KongP
Calif. Poly. State U.San Luis ObispoJonathan E. ShapiroP,P
Calif. State Poly. U.PomonaHale, Mihaila, and SwithkesP,P
Calif. State U.SeasideHongde HuPP
Calif. State U.BakersfieldMaureen E. RushP
Calif. State U.NorthridgeAli ZakeriP
Calif. State U.TurlockBrian JueP
Christian Heritage C.El CajonTibor F. SzarvasP
Harvey Mudd Coll.ClaremontJon JacobsenO
Hank KriegerH
Ran Libeskind-HadasM,H
UC BerkeleyBerkeleyLawrence C. EvansM
U. of San DiegoSan DiegoJeffrey H. WrightPP
(CS)Diane HoffossP
COLORADO
Colorado CollegeColorado SpringsJane M. McDougallM
Colo. State U.PuebloBruce N. LundbergH
Regis UniversityDenverJim SeibertPP
U.S. Air Force Acad.USAF AcademyJames S. RolfP
U. of ColoradoBoulderAnne M. DoughertyM
Bengt FornbergO
Michael H. RitzwollerH,H
DenverWilliam L. BriggsP
Colorado SpringsRadu C. CascavalP
CONNECTICUT
Sacred Heart UniversityFairfieldPeter LothP
Hema GopalakrishnanP
Southern Connecticut State U.New HavenRoss B. GingrichP
Yale University (Stat)New HavenAndrew R. BarronP
DELAWARE
University of DelawareNewarkLouis F. RossiM
FLORIDA
Embry-Riddle Aeronautical U.Daytona BeachGreg S. SpradlinH,P
Jacksonville UniversityJacksonvilleRobert A. HollisterP,P
Stetson UniversityDeLandLisa O. CoulterP
University of Central Florida (Phys)OrlandoCostas J. EfthimiouP
GEORGIA
Georgia Institute of Tech. (Eng)AtlantaBernard KippelenH
Georgia Southern UniversityStatesboroLaurene V. FausettP,P
State University of West GeorgiaCarrolltonScott GordonH
IDAHO
Boise State UniversityBoiseJodi L. MeadP
ILLINOIS
Greenville CollegeGreenvilleGeorge R. PetersM
Illinois Wesleyan UniversityBloomingtonZahia DriciH
Monmouth College (Phys)MonmouthChristopher G. FasanoP
Northern Illinois UniversityDeKalbYing C. KwongP
Wheaton CollegeWheatonPaul IsiharaP
INDIANA
Earlham CollegeRichmondMichael Bee JacksonP
Timothy J. McLarnanP
(Phys)Mihir SejpalH,P
Franklin CollegeFranklinJohn P. BoardmanP
Goshen CollegeGoshenDavid HousmanH,P
Rose-Hulman Institute of Tech.Terre HauteDavid J. RaderHP
Cary LaxterH
Saint Mary's College (CS)Notre DameJoanne R. SnowPH
IOWA
Grand View CollegeDes MoinesSergio LochHM
Grinnell CollegeGrinnellMarc ChamberlandH,H
(Phys)Jason ZimbaP
Luther CollegeDecorahReginald D. LaursenMP
Mt. Mercy CollegeCedar RapidsK.R. KnoppP
Simpson CollegeIndianolaMurphy WaggonerM,H
(Chm)Werner S. KollnPM
Wartburg CollegeWaverlyBrian J. BirgenM
KANSAS
Emporia State UniversityEmporiaBrian D. HollenbeckP
Kansas State UniversityManhattanFosskorten N. AucklyMM
KENTUCKY
Asbury CollegeWilmoreDuk LeeH
Ken P. RietzH
Northern Kentucky Univ. (Phys)Highland HeightsGail MackinPP
Sharmanthie FernandoP
Thomas More CollegeCrestview HillsRobert M. RiehemannH
MAINE
Colby CollegeWatervilleJan E. HollyH,P
MARYLAND
Hood CollegeFrederickBetty MayfieldP
Frederick Kimber TysdalP
Loyola CollegeBaltimoreChristos A. XenophontosM,P
Mount St. Mary's College (Sci)EmmitsburgFred PortierP,P
Robert RichmanP
Salisbury UniversitySalisburyMike J. BardzellP
Steven HetzlerP
(Phys)Joseph W. HowardM
Towson UniversityTowsonMike O'LearyP
Villa Julie CollegeStevensonEileen C. McGrawP
Washington CollegeChestertownEugene P. HamiltonP,P
MASSACHUSETTS
Boston UniversityBostonGlen R. HallP
Harvard UniversityCambridgeClifford H. TaubesO
MITCambridgeMartin Z. BazantMM
Olin College of EngineeringNeedhamBurt S. TilleyM
John GeddesP
Simon's Rock CollegeGreat BarringtonAllen B. AltmanP
Michael BergmanP,P
U. of MassachusettsLowellJames Graham-EagleM,P
Western New England Coll.SpringfieldLorna B. HanesP,P
Worcester Polytechnic Inst.WorcesterSuzanne L. WeekesP
MICHIGAN
Eastern Michigan Univ.YpsilantiChristopher E. HeeP,P
Lawrence Technological U. (Sci)SouthfieldRuth G. FavroP,P
Valentina TobosP,P
Siena Heights UniversityAdrianToni CarrollP
Pamela K. WartonP,P
Univ. of Michigan (Phys)Ann ArborJames D. WellsH
MINNESOTA
Bemidji State UniversityBemidjiColleen G. LivingstonP
Bethel CollegeSt. PaulWilliam M. KinneyM
College of Saint Benedict / Saint John's UniversityCollegevilleRobert J. HesseH
Macalester CollegeSt. PaulDaniel T. KaplanH,H
MISSOURI
Northwest Missouri State U.MaryvilleRussell N. EulerP
Saint Louis University (CS)St. LouisJames E. DowdyP
Dennis J. BouvierP
Southeast Missouri State U.Cape GirardeauRobert W. SheetsM
Truman State UniversityKirksvilleSteve J. SmithHH
Washington Univ. (Eng)St. LouisHiro MukaiH
MONTANA
Carroll CollegeHelenaHolly S. ZulloH
Kelly Slater ClineH
Marilyn S. SchendelPM
NEBRASKA
Hastings CollegeHastingsDavid B. CookeP
NEW JERSEY
Rowan UniversityGlassboroHieu D. NguyenP
NEW MEXICO
New Mexico TechSocorroWilliam D. StoneH
NEW YORK
Bard College (CS)Annandale-on-HudsonLauren L. RoseP
Robert W. McGrailP,P
Colgate UniversityHamiltonWarren WeckesserP
Concordia Coll. New YorkBronxvilleJohn F. LoaseM,H
Eric FriedmanH,P
Cornell University (Eng)IthacaAlexander VladimirskyM,H
Eric FriedmanH,P
Marist CollegePoughkeepsieTracey B. McGrailP
Nazareth CollegeRochesterDaniel BirmajerH
Rensselaer Polytechnic InstituteTroyPeter R. KramerMM
Roberts Wesleyan CollegeRochesterGary L. RadunsP
United States Military Academy (Eng)West PointJ. Scott BillieM
(Ctr for Teaching Excellence)Gregory S. ParnellP
Westchester Community CollegeValhallaA. David TrubatchP
Marvin L. LittmanP
Janine EppsH
NORTH CAROLINA
Appalachian State UniversityBooneEric S. MarlandP,P
Anthony G. CalamaiP
Davidson CollegeDavidsonLaurie J. HeyerH,P
Dennis R. AppleyardM
Duke UniversityDurhamWilliam G. MitchenerMM
Owen AstrachanH,H
Meredith CollegeRaleighCammey E. ColeP
North Carolina School of Science and MathematicsDurhamDot DoyleH
North Carolina State UniversityRaleighJeffrey S. ScroggsP
Wake Forest UniversityWinston SalemMiaohua JiangM,M
Robert PlemmonsM
Western Carolina UniversityCullowheeErin K. McNelisP
OHIO
College of WoosterWoosterCharles R. HamptonM
Malone CollegeCantonDavid W. HahnH,P
Miami UniversityOxfordStephen E. WrightP,P
University of DaytonDaytonYoussef N. RaffoulH
Wright State UniversityDaytonThomas P. SvobodnyP
Youngstown State University (Eng)YoungstownAngela SpalsburyPP
Scott MartinPH
OKLAHOMA
Southeastern Oklahoma State U.DurantBrett M. ElliottP
OREGON
Eastern Oregon UniversityLa GrandeDavid L. AllenH
Lewis and Clark College (Econ)PortlandRobert W. OwensH,H
Clifford BekarP,P
Linfield CollegeMcMinvilleJennifer A. NordstromP
Pacific UniversityForest GroveChris C. LaneHP
Southern Oregon UniversityAshlandKemble R. YatesP
Willamette UniversitySalemLiz A. StanhopeH
PENNSYLVANIA
Bloomsburg UniversityBloomsburgKevin K. FerlandM
Bucknell University (Phys)LewisburgSally KoutsoliotasH
Clarion UniversityClarionJon BealP
Gettysburg College (Eng)GettysburgPeter T. OttoM
Sharon L. StephensonP
Juniata CollegeHuntingdonJohn F. BukowskiH,P
University of PittsburghPittsburghJonathan E. RubinM,H
Villanova UniversityVillanovaBruce Pollack-JohnsonP
Westminster CollegeNew WilmingtonBarbara T. FairesPP
RHODE ISLAND
Rhode Island CollegeProvidenceDavid L. AbrahamsonPP
SOUTH CAROLINA
Benedict CollegeColumbiaBalaji IyangarP
Midlands Technical CollegeColumbiaJohn R. LongP,P
University of South CarolinaAikenThomas F. ReidMP
York Technical CollegeRock HillFrank W. CaldwellP
SOUTH DAKOTA
SD School of Mines & Tech.Rapid CityKyle RileyPH
TENNESSEE
Austin Peay State UniversityClarksvilleNell K. RayburnP,P
Belmont UniversityNashvilleAndrew J. MillerP
TEXAS
Austin CollegeShermanJohn H. JaromaP,P
Baylor UniversityWacoFrank H. MathisP
Liberty Christian School (Technology)DentonLaura A. DuncanP
Bryan Lee BunselmeyerPP
Trinity UniversitySan AntonioAllen G. HolderP
Diane SaphireH
(Econ)Jorge G. GonzalezH
(Phys)Robert LairdP
VERMONT
Johnson State CollegeJohnsonGlenn D. SproulP
VIRGINIA
Maggie Walker Governor's Schl (Sci)RichmondJohn A. BarnesH,P
Harold HoughtonHH
Roanoke CollegeSalemJeffrey L. SpielmanP
University of RichmondRichmondKathy W. HokeH,P
Virginia Western Comm. Coll.RoanokeRuth A. ShermanP,P
WASHINGTON
Central Washington UniversityEllensburgStuart F. BoersmaM
Heritage CollegeToppenishRichard W. SwearingenH,P
Pacific Lutheran UniversityTacomaDaniel J. HeathP
University of Puget Sound (CS)TacomaMichael S. CaseyHP
John E. RiegseckerP
University of WashingtonSeattleJames Allen MorrowO,M
Rekha R. ThomasMP
Western Washington UniversityBellinghamTjalling J. YpmaP
WISCONSIN
Edgewood CollegeMadisonSteven B. PostP
Northland CollegeAshlandWilliam M. LongP
University of WisconsinRiver FallsKathy A. TomlinsonP
AUSTRALIA
University of New South WalesSydneyJames W. FranklinP,P
CANADA
McGill UniversityMontrealAntony R. HumphriesH
University of Saskatchewan (CS)SaskatoonJames A. BrookePM
Raj SrinivasanH
(Phys)Kaori TanakaH
University of Western OntarioLondonMartin H. MueserHH
York UniversityTorontoHuaxiong HuangP
Huaiping ZhuP
CHINA
Anhui
Anhui UniversityHefeiHe ZehuiP
Wang XuejunP
Wang JihuiP
(CS)Zhang QuanbingP
(Phys)Chen MingshengP
Anhui University of Tech. and ScienceHong KongSun HongyiP
Yang XubingP
Wang ChuanyuP
Hefei University of TechnologyHefeiShi LeiH
Liu FanglinH
Huang YouduP
Chen HuaP
U. of Science and Tech. of China (Phys) (Stat)HefeiLe XuliP
Bin SuH
(Eng)Yao XieH
(Modern Phys)Tao ZhouH
Beijing
Beihang UniversityBeijingPeng LinpingP
Liu HongyingP,P
Wu SanxingP
(Phys)Zhang Yan JiaP
Beijing Forestry University (Bio)BeijingHongjun LiP
Gao MengningM,H
Beijing Institute of TechnologyBeijingLi BingzhaoH
Chen YihongPP
Cui XiaodiH
Beijing Jiaotong University (CS)BeijingZhang ShangliP,P
Liu MinghuiP
(Chm)Bing TuanP
(Phys)Wang BingtuanPP
Beijing Materials InstituteBeijingTian De LiangPP,P
Cheng Xiao HongP
Beijing Normal UniversityBeijingLiu LaifuH,P
Qing HeP,P
Lu ZijuanP
Beijing University of Chemical Tech.BeijingLiu HuiH
Jiang GuangfengH
(Chem Eng)Huang JinyangP
(Chem)Liu DaminP
Beijing U. of Posts and Telecom. (Sci)BeijingHe ZuguoH
(Sci)Sun HongxiangH
(CS)Ding JinkouH
(CS)Wang XiaoxiaP
Beijing University of Technology (Sci)BeijingYi XueP
(Sci)Guo EnliH,P
(Info)Yang Shi LinH
(CS)Deng MikeP
China Agriculture UniversityBeijingLiu JunfengH,P
Peking UniversityBeijingLiu XufengHH
Deng MinghuaH,H
(Phys)Zheng HanqingP
Tsinghua UniversityBeijingYe JunH,H
Hu ZhimingM,H
Jiang QiyuanP
Chongqing
Chongqing UniversityChongqingGong QuP
Li ChuandongP
He RenbinP
(Chm)Li ZhiliangM
Chongqing U. of Posts and Telecom. (CS)ChongqingYang Chun-de Zheng Ji-mingH H
Guangdong
Guangzhou UniversityGuangzhouFeng Yongping Liang Dahong Fu RongLinP P,P
(Econ)GuangZhouHu Daiqiang Luo Shizhuang Zhang Chuanlin Ye ShiqiP P
Jinan University (CS) (CS) (Phys)GuangZhouLiu Shenquan Liao Shizhuang Zhang ChuanlinP P
South China University of Technology (Phys) (Sci)GuangzhouTao Zhisui Wang Henggeng Liu Xiuxiang Feng GuoCan Li CaiWei Yuan ZhuojianH P P H
South-China Normal University (Phys)Guangzhou
Sun Yat-Sen UniversityGuangzhouJiang XiaoLong Feng GuoCanP P
(CS) (Geology)Yuan ZhuojianH
Guangxi
Guangxi UniversityNanningLu YuejinP,P
Hebei
North China Electric Power U.BaodingZhang Po Gu GendaiP H
(CS) (Eng)Shi HuiFeng Ma XinshunM P
Heilongjiang
Harbin Engineering UniversityHarbinZhang Xiaowei Gao Zhenbin Yu Fei Shen JihongnP.P P
(Sci)HarbinHong Ge Zhang Chiping Liu Kean Shang ShoutingP P P
Harbin Institute of Technology (Sci)HarbinChen Dongyan Wang Shuzhong PP P
Harbin University of Science and Tech.HarbinJiamusi Bai FengShan Fan WeiP P
Jiamusi University (Eng)HarbinGe Huiline Wu QiufengP
Harbin Normal UniversityHarbinXu MingyueP,P
Yin HongcaiP,P
(CS)Yao HuanminP,P
(CS)Zhang YanyingP
(CS)Zeng WeiliangP
Hubei
China University of Geosciences (CS)WuhanCai ZhiHuaP
Huazhong University of Sci. & Tech.WuhanYe YingH
He NanzhongP
Wang YizhiHP
(Chm)Guan WenchaoP
(Eng)Wang YongjiP
(Civil Eng)Jiang Yi ShengP
Wuhan UniversityWuhanDeng AijiaoH
Hu YuanmingP
Wuhan University of TechnologyWuhanPeng Si junP
Wang Zhan qingP
He LangP
(Stat)Li YuguangH
Hunan
Central South University (CS)ChangshaHou MuzhouP
(CS)Zhang HongyanP
(Stat)Yi KunnanH,P
National University of Defence Tech.ChangshaMao ZiyangP,P
Wu MengdaPH
(Econ)Cheng LizhiH,P
Jiangsu
China University of Mining and Tech.XuzhouZhou ShengwuP
Zhang XingyongP
(Educational Admin.)Wu ZongxiangH
(Educational Admin.)Zhu KaiyongH
Commanding Inst. of Engineer Corps, People's Liberation ArmyXuzhouWang ChuanweiH,P
Jiang Su University, Nonlinear Inst. CtrZhenjiangYi Min LiH
Nanjing Normal UniversityNanjingFu ShitaiPP
Chen XinP,P
Nanjing U. of Posts and Telecomm.NanjingQiu ZhongHuaH
Xu LiWeiH
(CS)Ming HeP
(Optical Info. Tech.)Yang ZhenhuaP
Nanjing U. of Science & TechnologyNanjingXu ChungenP
Zhao PeibiaoM
(Econ)Liu LiweiP
(Econ)Zhang ZhengjunP
Xuzhou Institute of TechnologyXuzhouLi SubeiP
Jiang YingziH
Southeast UniversityNanjingZhang Zhi-qiangPH
He DanPH
Jiangxi
Nanchang University (Sci)NanchangTao ChenP,P
Liao ChuanrongP
Jilin
Jilin UniversityChangchun CityZou YongkuiH
Huang QingdaoP
Fang PeichenP
(Eng)Pei YongchenP
Northeast China Institute of Electric Power EngineeringJilin CityGuo XinchenP
Chang ZhiwenP
Liaoning
Dalian Coll. of Chemical EngineeringDalianYu HaiyanH
Dalian Nationalities UniversityDalian Dev. ZoneLi XiaoniuP
Zhang HengboH
Dalian University (CS)DalianTan XinxinMP
Gang JiataiH,H
Dalian University of TechnologyDalianHe MingfengM
Zhao LizhongP
Yu HongquanP
Yi WangP
Northeastern University (CS)ShenyangHao PeifengPH
(Phys)ShenyangHe XuehongPP
(Eng)ShenyangJing YuanweiP
(Sci)ShenyangSun PingH,P
Shenyang Pharmaceutical U. (Basic Courses)ShenyangXiang RongwuPP
Shaanxi
North China U. of Science and Tech.TaiyuanXue Ya KuiH
Yang MingP
Lei Ying JieP
Bi YongP
Northwest UniversityXi'anWang LiantangP
Northwestern Polytechnical U.Xi'anPeng GuohuaM
Nie YufengP
(Chm)Liu XiaodongH
(Chm)Zhang ShengguiP
Xi'an Communication InstituteXi'anYan JiangP
Yang DongshengP
Guo LiP,P
Xi'an Jiaotong UniversityXi'anDai YonghongP,P
Zhou YicangH,H
He Xiaoliang HePP
Xidian University (Comm. Eng)Xi'anZhang ZhuokuiH,P
(CS)Ye Ji-MinP
(Sci)Zhou ShuishengP
Shandong
Shandong UniversityJinanHuang ShuxiangP
Ma JianhuaP
Fu GuohuaP
(CS)Wang QianP
(Econ)Zhang ZhaopingP
(Phys)He XiqingP
University of Petroleum of ChinaDongyingWang ZitingP
Shanghai
Donghua University (CS)ShanghaiChen JingchaoH
(Phys)He GuoxingP
(Econ)Ding YongshengM
(Eng)Guo ZhaoxiaP
East China Univ. of Science and TechnologyShanghaiSu ChunjieP
Liu ZhaohuiP
(Phys)Lu YuanhongH
(Phys)Qin YanP
Fudan UniversityShanghaiCai ZhijieP
Cao YuanP
Jiading No. 1 Senior High SchoolShanghaiXu RongH
Xie XilinP
Shanghai Foreign Language SchoolShanghaiPan LiquinH,H
Sun YuH
Shanghai International Studies University
Shanghai Foreign Language SchoolShanghaiChen LiedaH
Shanghai Jiaotong UniversityShanghaiSong BaoruiM,M
Huang JianguoM
Zhou GuobiaoP
Tianjin Normal UniversityTianjinLi JianquanH
Li BaoyiH
Tianjin Polytechnic UniversityTianjinFan LiyaH
Su YongfuP
Tianjin University (CS)TianjinSong ZhanjieH
(Comm. Eng)Liang FengzhenH
(Comm. Eng)Lin DanH
(Software Eng)Zhang FenglingH
Shanghai Normal UniversityShanghaiGuo ShenghuanP
Shi YongbinP
Zhang JizhouP
(CS)Liu RongguanP
Shanghai University of Finance & EconomicsShanghaiFan ZhangH
Xie BingP
Shanghai Yucai High School (Sci)ShanghaiLi ZhengtaiP
Sichuan
Southwest Transportation U. (Eng)E'meiHan YangPP
Southwestern U. of Finance and Econ. (Econ)ChengduSun YunlongH
Univ. of Elec. Sci. & Tech.ChengduGao QingPH
(CS)Li MingqiPP
Tianjin
Civil Aviation University of China
(Air Traffic Mgmt)TianjinNie RuntuM
(Air Traffic Mgmt)Gao YongxinP
(Sci)He SongnianP
Nankai UniversityTianjinChen WanyiH
Yang QingzhiM
(Phys)Huang WuqunP
Zhejiang
Hangzhou University of Commerce (CS)HangzhouDing ZhengzhongH,P
Hua JiukunP,P
Zhejiang UniversityHangzhouHe YongMP
Tan ZhiyiH
Yang QifanP,P
City College (CS)HangzhouZhao YananP,P
(CS)Huang WaibinP
(CS)Dong TingP
Zhejiang University of Finance and Econ.HangzhouLuo JiH
Wang FulaiH
FINLAND
Mathematical High School of HelsinkiHelsinkiVille VirtanenPP
Päivölä CollegeTarttilaEsa I. LappiP
GERMANY
University of TrierTrierVolker H. SchulzM,P
HONG KONG
Hong Kong Baptist UniversityKowloonChong Sze TongH
Wai Chee ShiuH
INDONESIA
Institut Teknologi BandungBandungEdy SoewonoP
Rieske HadiantiP
IRELAND
National University of Ireland, GalwayGalwayNiall MaddenP,P
University College CorkCorkPatrick FitzpatrickM
James J. GrannellO,M
Andrew UsherP
University College Dublin (Phys)DublinMaria MeehanH,P
Peter DuffyH
SOUTH AFRICA
University of StellenboschStellenboschJan H. van VuurenH,H
UNITED KINGDOM
Dulwich CollegeLondonJeremy LordP
Merton College, Oxford UniversityOxfordUlrike TillmannO
+ +# Abbreviations for Organizational Unit Types (in parentheses in the listings) + +
| Abbreviation | Key word | Organizational unit names (key word abbreviated to its initial) |
|---|---|---|
| (none) | Mathematics | M; Applied M; Computing M; M and Computer Science; M and Computational Science; M and Information Science; M and Statistics; M, Computer Science, and Statistics; M, Computer Science, and Physics; Mathematical Sciences; Applied Mathematical and Computational Sciences; Natural Science and M; M and Systems Science; Applied M and Physics |
| Bio | Biology | B; B Science and Biotechnology |
| Chem | Chemistry | C; Applied C; C and Physics; C, Chemical Engineering, and Applied C |
| CS | Computer | C Science; C and Computing Science; C Science and Technology; C Science and Software Engineering; Software Engineering; Artificial Intelligence; Automation; Computing Machinery; Science and Technology of Computers |
| Info | Information | I Science; I and Computation Science; I and Calculation Science; I Science and Computation; I and Computer Science; I and Computing Science; I Engineering |
| Econ | Economics | E; E Mathematics; Financial Mathematics; Financial Mathematics and Statistics; Management; Business Management; Management Science and Engineering |
| Eng | Engineering | Civil E; Electrical E; Electronic E; Electrical and Computer E; Electrical E and Information Science; Electrical E and Systems E; Communications E; Civil, Environmental, and Chemical E; Propulsion E; Machinery and E; Control Science and E; Operations Research and Industrial E; Automatic Control |
| Phys | Physics | P; Applied P; Mathematical P; Modern P; P and Engineering P; P and Geology; Mechanics; Electronics |
| Sci | Science | S; Natural S; Applied S; Integrated S |
| Stat | Statistics | S; S and Finance; Mathematical S |
+ +For team advisors from China, we have endeavored to list family name first. + +# The Myth of "The Myth of Fingerprints" + +Steven G. Amery +Eric Thomas Harley +Eric J. Malm +Harvey Mudd College +Claremont, CA + +Advisor: Jon Jacobsen + +# Summary + +For over a century, fingerprints have been an undisputed personal identifier. Recent court rulings have sparked interest in verifying uniqueness of fingerprints. + +We seek to determine precisely the probability of duplicate fingerprints. Our model of fingerprint structure must achieve the following objectives: + +- Topological structure in the print, determined by the overall flow of ridges and valleys, should be described accurately. +- Fine detail, in the form of ridge bifurcations and terminations, must also be characterized accurately. +- Intrinsic uncertainties, in our ability to reproduce and measure fingerprint data, must be considered. +- Definite probabilities for specified fingerprint configurations must be calculated. + +We place special emphasis on meeting the modeling criteria established by Stoney and Thornton [1986] in their assessment of prior fingerprint models. + +We apply our model to the conditions encountered in forensic science, to determine the legitimacy of current methodology. We also compare the accuracies of DNA and fingerprint evidence. + +Our model predicts uniqueness of prints throughout human history. Furthermore, fingerprint evidence can be as valid as DNA evidence, if not more so, although both depend on the quality of the forensic material recovered. + +# Introduction + +# What is a Fingerprint? + +A fingerprint is a two-dimensional pattern created by the friction ridges on a human finger [Beeton 2002]. Such ridges are believed to form in the embryo and to persist unchanged through life. The physical ridge structure appears to depend chaotically on factors such as genetic makeup and embryonic fluid flow [Prabhakar 2001]. 
When a finger is pressed onto a surface, the friction ridges transfer to it (via skin oil, ink, or blood) a representation of their structure. + +Fingerprints have three levels of detail [Beeton 2002]: + +1. Overall ridge flow and scarring patterns, insufficient for discrimination. +2. Bifurcations, terminations, and other discontinuities of ridges. The pairwise locations and orientations of the up to 60 such features in a full print, called minutiae, provide for detailed comparison [Pankanti et al. 2002]. +3. The width of the ridges, the placement of pores, and other intraridge features. Such detail is frequently missing from all but the best of fingerprints. + +# Fingerprints as Evidence + +The first two levels have been used to match suspects with crime scenes, and fingerprint evidence was long used without major challenge in U.S. courts [OnIn.com 2003]. In 1993, however, in Daubert v. Merrell Dow Pharmaceuticals, the U.S. Supreme Court set standards for "scientific" evidence [Wayman 2000]: + +1. The theory or technique has been or can be tested. +2. The theory or technique has been subjected to peer review or publication. +3. The existence and maintenance of standards controlling use of the technique. +4. General acceptance of the technique in the scientific community. +5. A known potential rate of error. + +Since then, there have been challenges to fingerprint evidence. + +# Individuality of Fingerprints + +Francis Galton [1892] divides a fingerprint into squares with a side length of six ridge periods and estimates that he can recreate the ridge structure of a missing square with probability $\frac{1}{2}$ . Assuming independence of squares and introducing additional factors, he concludes that the probability of a given + +fingerprint occurring is $1.45 \times 10^{-11}$ . Pearson refines Galton's model and finds a probability of $1.09 \times 10^{-41}$ [Stoney and Thornton 1986].
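Galton's figure is easy to reproduce arithmetically. A quick check in Python follows; note that the 24-square count and the 1/16 pattern-type and 1/256 ridge-count factors are the decomposition of Galton's "additional factors" commonly reported in later reviews, assumed here rather than taken from the text above.

```python
# Reproducing Galton's 1892 estimate. The 24-square count and the 1/16
# (pattern type) and 1/256 (ridge count) factors are the commonly reported
# decomposition of Galton's argument; they are assumptions here.
pattern_type_factor = 1 / 16   # probability of matching the general pattern class
ridge_count_factor = 1 / 256   # probability of matching the ridge counts
n_squares = 24                 # six-ridge-period squares, each recreated with prob. 1/2

p_galton = pattern_type_factor * ridge_count_factor * (1 / 2) ** n_squares
print(f"p = {p_galton:.3e}")   # p = 1.455e-11, matching the quoted 1.45e-11
```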
+ +Osterburg [1977] extends Galton's approach by dividing a fingerprint into cells that can each contain one of 12 minutia types. Based on independence among cells and observed frequencies of minutiae, he finds the probability of a configuration to be $1.33 \times 10^{-27}$ . Sclove [1979] extends Osterburg's model to dependencies among cells and multiple minutiae in a single cell. + +Stoney and Thornton [1986] charge that these models fail to consider key issues completely: the topological information in level-one detail; minutiae location, orientation, and type; normal variations in fingerprints; and number of positions considered. We try to correct some of these omissions. + +# Our Model: Assumptions and Constraints + +# Assumptions + +- Fingerprints are persistent: they remain the same throughout a person's lifetime. Galton [1892] established this fact, in recent times verified from the processes of development of dermal tissues [Beeton 2002]. +- Fingerprints are of the highest possible quality, without damage from abrasion and injury. +- The pattern of ridges has some degree of continuity and flow. +- The ridge structure of a fingerprint is in one of five categories: Arch, Left Loop, Right Loop, Tented Arch, or Whorl, employed in the automatic classification system of Cappelli et al. [1999] (derived from those of the FBI and Watson and Wilson [1992]). Each category has a characteristic ridge flow topology, which we break into homogeneous domains of approximately unidirectional flow. While Cappelli et al. [1999] raise the issue of "unclassifiable" prints, and they and Marcialis et al. [2001] confuse classes of ridge structures, we assume that such ambiguities stem from poor print quality. +- Fingerprints may further be distinguished by the location and orientation of minutiae relative to local ridge flow. 
Stoney and Thornton [1986] argue that the ridges define a natural coordinate system, so the location of a minutia can be specified with a ridge number and a linear measure along that ridge. Finally, minutiae have one of two equally likely orientations along a ridge. +- Each minutia can be classified as a bifurcation, a termination, or a dot (Figure 1) [Pankanti et al. 2002; Stoney and Thornton 1986]. Though Galton [1892] identifies 10 minutia structures and others find 13 [Osterburg et al. 1977], we can ignore these further structures (which are compositions of the basic three) because of their low frequency [Osterburg et al. 1977]. + +![](images/d72888dfd792e9a30a95ddcb471bc38c8db9a68664f4224f6c13cd39092e270d.jpg) +Figure 1. The three basic minutiae types (from Galton [1892]). We refer to ending ridges as terminations. + +- A ridge structure produces an unambiguous fingerprint, up to some level of resolution. A ridge structure can vary in print representations primarily in geometric data, such as ridge spacing, curvature, and location of minutiae [Stoney and Thornton 1986]. Topological data—ridge counts, minutiae orientation, and ordering—are robust to such variation and are replicated consistently. + +A more serious consideration is connective ambiguities, such as when a given physical minutia is represented sometimes as a bifurcation and sometimes as a termination. But our highest-quality assumption dictates that such ambiguity arise only where the physical structure itself is ambiguous. + +- Location and orientation of minutiae relative to each other are independent, though Stoney and Thornton [1986] find some dependency and Sclove [1979] models such dependency in a Markov process. +- Ridge widths are uniform throughout the print and among different prints, and ridge detail such as pores and edge shapes is not significant. While ridge detail is potentially useful, we have little data about types and frequencies.
+- Frequencies of ridge structure classes and configurations and minutiae types do not change appreciably with time. We need this invariance for our model's probabilities to apply throughout human history. + +# Constraints Implied by Assumptions + +- Our model must consider ridge structure, relative position, orientation, and type of minutiae. +- Locations of minutiae must be specified only to within some uncertainty dependent on the inherent uncertainty in feature representation. + +# Model Formulation + +We examine a hierarchy of probabilities: + +- that the given class of ridge structure occurs, + +- that the ridge structure occurs in the specified configuration of ridge flow regions, and +- that minutiae are distributed as specified throughout the regions. + +We further break this last probability down into a composition of the following region-specific probabilities: + +- that a region contains the specified number of minutiae, +- that the minutiae in this region follow the specified configuration, and +- that the minutiae occur with the specified types and orientations. + +# Probability of Ridge Structure Class + +To each of the five classes of ridge structures (Arches, Left and Right Loops, Tented Arches, and Whorls), we associate a probability of occurrence $(\nu_{A}, \nu_{L}, \nu_{R}, \nu_{T}, \nu_{W})$ , which we estimate from observed frequency in the population. + +# Probability of Ridge Structure Configuration + +Each print is partitioned into regions in which the overall flow is relatively unidirectional, and the class of the print is determined from five prototypical masks characteristic of ridge-structure classes (Figure 2) [Cappelli et al. 1999]. The variations of flow region structure within each class then depend on parameters for the class. For example, the ridge structure of a Loop print can be determined from the locations of the triangular singularity and the core of the loop (Figure 3). 
To determine the probability of a particular region configuration, we determine the probability that the associated parameters occur. + +Because of uncertainty in the parameters, we discretize the parameter space at the fundamental resolution limit $\delta_{1}$ (subscript indicates feature level). We use independent Gaussian distributions about the mean values of the parameters. + +We now detail the parameter spaces for each ridge-structure class. The use of the prototypes requires an $X \times Y$ region within the print. + +# Arch + +The parameters for the Arch consist of the Cartesian coordinates $(x, y)$ of the lower corner of the left region, the height $h$ of the central corridor, and the four angles $\theta_{1}, \theta_{2}, \theta_{3}, \theta_{4}$ at the inner corners of the left and right regions. We consider as fixed the width $b$ of the central corridor. The ratio of the resolution limit $\delta_{1}$ to the mean length of a typical segment determines the uncertainty in the angular measurement of that segment. + +![](images/cc06e80a2b3c02c3f7f94fc6dd173b49edfb650e494d1f30fa243da8910bdddd.jpg) +Arch + +![](images/c77a6f1be72637cade18a12aa87dc59e111d09efcec4e571e6d26294dfed4859.jpg) +Right Loop + +![](images/e0af9fd5f490e6ac6ed5937227cc6c7e25893e0f9a6f3452cf8fe253c224da4e.jpg) +Tented Arch + +![](images/445a9f49658b3fac1ac8d483389bfd0e0b5406152c7eee28be0b2cff5ee918cb.jpg) +Whorl + +![](images/8062ddf5b394b40f8d674325e3e2390b2a78e0b2e383ab398919a6ad4bbe9a15.jpg) +Figure 2. The prototypical region structures and parameters for each ridge structure class, derived from the masks in Cappelli et al. [1999]. +Arch +Figure 3. The prototypical region structures applied to an Arch, a Right Loop, and a Whorl. 
+ +![](images/7d6a081b743b394874275fb57a988e093149a90ac26fd171faeca2b149979be1.jpg) +Right Loop + +![](images/36d6771d3a5440f82844c41c1dae49ee16ba18658b220154ac7ef2806b5a4c22.jpg) +Whorl + +# Loops, Left and Right + +Since Left and Right Loops are identical except for a horizontal reflection, we use the same parameter space for both classes. The two principal features are the position $(x,y)$ of the triangular singularity outside the loop and the distance $r$ and angle $\theta$ of the core of the loop relative to this singularity. + +# Tented Arch + +The major structure is the arch itself; the parameters are the position $(x, y)$ of the base of the arch and the height $h$ of the arch. + +# Whorl + +The Whorl structure has four major features: the center of the whorl, $(x_{C},y_{C})$ ; the base of the whorl, $(x_{B},y_{B})$ ; and the triangular singularities to the left and right of the base of the whorl, at $(x_{L},y_{L})$ and $(x_{R},y_{R})$ . We assume that the center and the base lie between the two singularities, so that $x_{L}\leq x_{C}$ and $x_{B}\leq x_{R}$ , and that the base lies above the singularities, so that $y_{B}\geq y_{L}$ and $y_{B}\geq y_{R}$ . + +# Probabilities of Intraregion Minutiae Distribution + +Since the geometry of a region is uniquely determined by the configuration parameters, we can divide each unidirectional flow region into parallel ridges. We can represent the ridge structure of the region as a list of ridge lengths. + +We assume a fundamental limit $\delta_{2}$ to resolution of the position of minutiae along a ridge and divide a ridge into cells of length $\delta_{2}$ , in each of which we find at most one minutia. 
The probability $P_{TC}(n,l,k)$ that the nth ridge in the partition, with length $l$ , has a particular configuration of $k$ minutiae is + +$$ +P _ {T C} (n, l, k, \ldots) = P _ {p} (n, k, l) P _ {c} (n, k, l) P _ {t o} (\{k _ {i}, p _ {i}, o _ {i} \}), +$$ + +where $P_{p}$ is the probability that $k$ minutiae occur on this ridge, $P_{c}$ the probability that these $k$ minutiae are configured in the specified pattern on the ridge, and $P_{to}$ the probability that these minutiae are of the specified types and orientations, indexed by $i$ and occurring with type probability $p_{i}$ and orientation probability $o_{i}$ . + +# Probability of Minutiae Number + +Under the assumption that minutiae occur at uniform rates along a ridge, we expect a binomial distribution for the number of minutiae on the ridge. Denote the linear minutiae density on ridge $n$ by $\lambda(n)$ . The probability that a minutia occurs in a given cell of length $\delta_2$ is $\delta_2 \lambda(n)$ . Thus, the probability that $k$ minutiae occur is + +$$ +P _ {p} (n, k, l, \lambda) = \binom {l / \delta_ {2}} {k} (\delta_ {2} \lambda) ^ {k} (1 - \delta_ {2} \lambda) ^ {l / \delta_ {2} - k}. +$$ + +# Probability of Minutiae Configuration + +Assuming that all configurations of $k$ minutiae are equally likely along the ridge, the probability of the specified configuration is + +$$ +P _ {c} (n, k, l) = \frac {1}{\binom {l / \delta_ {2}} {k}}. +$$ + +# Probability of Minutiae Type and Orientation + +The probability that minutiae occur with specified types and orientations is + +$$ +P _ {t o} (\{k _ {i}, p _ {i}, o _ {i} \}) = \prod_ {i} p _ {i} ^ {k _ {i}} o _ {i} ^ {k _ {i}}. 
+$$ + +Applying our assumption that the only level-two features are bifurcations, terminations, and dots, and that orientations are equally likely and independent along the ridge, this expression reduces to + +$$ +P _ {t o} = p _ {b} ^ {k _ {b}} p _ {t} ^ {k _ {t}} p _ {d} ^ {k _ {d}} \frac {1}{2 ^ {k _ {b} + k _ {t}}}, +$$ + +with $k_{b} + k_{t} + k_{d} = k$ . Then the total probability for the ridge configuration is + +$$ +P _ {T C} (n, l, k, \lambda , \{k _ {i}, p _ {i}, o _ {i} \}) = (\delta_ {2} \lambda) ^ {k} (1 - \delta_ {2} \lambda) ^ {l / \delta_ {2} - k} p _ {b} ^ {k _ {b}} p _ {t} ^ {k _ {t}} p _ {d} ^ {k _ {d}} \frac {1}{2 ^ {k _ {b} + k _ {t}}}. +$$ + +The total probability that minutiae are configured as specified through the entire print is then the product of the $P_{TC}$s for all ridges in all domains, since we assume that ridges develop minutiae independently. + +Applying the assumption that $\lambda$ and other factors do not depend on $n$ and are hence uniform throughout the print, we can collapse these multiplicative factors to an expression for the configuration probability of the entire print: + +$$ +P _ {T C} ^ {\mathrm {g l o b a l}} = (\delta_ {2} \lambda) ^ {K} (1 - \delta_ {2} \lambda) ^ {L / \delta_ {2} - K} p _ {b} ^ {K _ {b}} p _ {t} ^ {K _ {t}} p _ {d} ^ {K _ {d}} \frac {1}{2 ^ {K _ {b} + K _ {t}}}. +$$ + +Here $K$ is the total number of minutiae in the print, $K_{i}$ the number of type $i$ , and $L$ is the total linear length of the ridge structure in the print. The length $L$ is determined only by the total area $XY$ of the print and the average ridge width $w$ and is therefore independent of the topological structure of the print. + +# Parameter Estimation + +For parameters in our model, we use published values and estimates based on the NIST-4 database [Watson and Wilson 1992]. + +# Level-One Parameters + +All lengths are in millimeters (mm); angles are in radians or in degrees.
+ +- Level-one spatial resolution limit $\delta_{1}$ : Cappelli et al. [1999] discretize images into a $28 \times 30$ grid to determine level-one detail. From these grid dimensions, the physical dimensions of fingerprints, and the assumption of an uncertainty of three blocks for any level-one feature, we estimate $\delta_{1} = 1.5$ . +- Level-one angular resolution limit $\delta_{\theta}$ : Taking $X / 2 = 5.4$ (determined below) as a typical length scale, we have $\delta_{\theta} = \delta_{1} / 5.4 = 0.279$ radians. +- Ridge structure class frequencies $\nu_{A}, \nu_{L}, \nu_{R}, \nu_{T}$ , and $\nu_{W}$ : We use the estimates in Prabhakar [2001] (Table 1). + +Table 1. Relative frequencies of ridge structure classes (from Prabhakar [2001]). + +
| $\nu_A$ | $\nu_L$ | $\nu_R$ | $\nu_T$ | $\nu_W$ |
|---------|---------|---------|---------|---------|
| 0.0616  | 0.1703  | 0.3648  | 0.0779  | 0.3252  |
+ +- Thumbprint width $X$ and height $Y$ : Examining thumbprints from the NIST-4 database and comparing them with the area given by Pankanti et al. [2002], we conclude that a width that covers the majority of thumbprints is 212 pixels in the 500 dpi images, a physical length of $10.8 \mathrm{~mm}$ . Similarly, $Y = 16.2 \mathrm{~mm}$ . +- Arch parameters $(x, y)$ , $h$ , $b$ , $\theta_{1}$ , $\theta_{2}$ , $\theta_{3}$ , and $\theta_{4}$ : We restrict the parameter space for $(x, y)$ to the lower half of the thumbprint with horizontal margins of length $b$ . We estimate $b = 2.5$ from examination of Arch fingerprints in the NIST database and from Cappelli et al. [1999]. This estimate places $x \in (0, 8.3)$ and $y \in (0, 5.6)$ . The mean for $(x, y)$ , which we need to describe the distribution of $(x, y)$ , is then (4.2, 2.8). We estimate that $x$ and $y$ both have a standard deviation of 0.7. We assume that $\theta_{1}, \ldots, \theta_{4}$ are all between $0^{\circ}$ and $45^{\circ}$ with mean $22.5^{\circ}$ and standard deviation $5.13^{\circ}$ . +- Loop parameters $(x, y)$ , $\theta$ , and $r$ : For a left loop (a right loop is a reflection of this), $(x, y)$ must lie in the bottom left quadrant and the mean coordinate pair is (2.7, 2.8). Additionally, we restrict $\theta$ to lie between $15^{\circ}$ and $75^{\circ}$ , which allows us to estimate the mean $\theta$ as $45^{\circ}$ with a standard deviation of $15^{\circ}$ . We estimate that $r$ must be greater than 0 and less than 9.6. +- Tented arch parameters $(x, y)$ and $h$ : Along the $y$ direction, we restrict the bottom of the arch $(x, y)$ to lie in the bottom half of the thumbprint. We further estimate that $x$ lies in the middle two-thirds of $X$ . These assumptions yield $x \in (1.8, 9)$ and $y \in (0, 8.1)$ . Assuming a symmetric distribution of $(x, y)$ yields $(x, y) = (5.4, 2.8)$ with a standard deviation of 0.7 in both directions. Logically, we place $h$ between 0 and $Y / 2 = 8.1$ .
Again, assuming a symmetric distribution in this parameter space and a standard deviation of one-eighth the maximum value yields $h = 4.05 \pm 1.02$ . + +- Whorl parameters $(x_{L},y_{L})$ , $(x_{C},y_{C})$ , $(x_{R},y_{R})$ , and $(x_{B},y_{B})$ : We expect $(x_{L},y_{L})$ to be in the bottom left quadrant for all but the most extreme examples and similarly $(x_{R},y_{R})$ to lie in the bottom right quadrant. We place $(x_{B},y_{B})$ between $x = X / 4$ and $x = 3X / 4$ and $y = 0$ and $y = 2Y / 3$ . The topmost point, $(x_{C},y_{C})$ , we place in the top half of the thumbprint. We again put the average values in the center of their restricted areas. + +Table 2 summarizes the estimates for these four classes of ridge structures. +Table 2. Parameter range estimates for the ridge structure classes. All lengths in millimeters (mm), angles in degrees. + +
| Class | Parameter | Estimate |
|-------|-----------|----------|
| Arch | $(x, y)$ | $(4.2, 2.8) \pm (0.7, 0.7)$ |
| Arch | $h$ | $4.05 \pm 0.7$ |
| Arch | $b$ | $2.5 \pm 0$ |
| Arch | $\theta_1, \ldots, \theta_4$ | $22.5^{\circ} \pm 5.13^{\circ}$ |
| Loop | $(x, y)$ | $(2.7, 2.8) \pm (0.7, 0.7)$ |
| Loop | $\theta$ | $45^{\circ} \pm 15^{\circ}$ |
| Loop | $r$ | $4.58 \pm 0.7$ |
| Tented Arch | $(x, y)$ | $(5.4, 2.8) \pm (0.7, 0.7)$ |
| Tented Arch | $h$ | $4.05 \pm 1.02$ |
| Whorl | $(x_L, y_L)$ | $(2.7, 4.1) \pm (0.7, 0.7)$ |
| Whorl | $(x_C, y_C)$ | $(5.4, 12.2) \pm (0.7, 0.7)$ |
| Whorl | $(x_R, y_R)$ | $(8.1, 4.1) \pm (0.7, 0.7)$ |
| Whorl | $(x_B, y_B)$ | $(5.4, 4.1) \pm (0.7, 0.7)$ |
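The discretization described earlier (independent Gaussians about the Table 2 means, cut at the resolution limit $\delta_1$) can be made concrete. A minimal sketch in Python that draws Arch parameters and snaps each to the $\delta_1 = 1.5$ mm grid; the sampling routine itself is illustrative, not part of the original analysis.

```python
import random

# Hedged sketch: drawing level-one Arch parameters from the independent
# Gaussians about the Table 2 means, then snapping to the delta_1 = 1.5 mm
# resolution grid described in the text. Illustrative only.
DELTA_1 = 1.5  # mm, level-one spatial resolution limit

def snap(v: float) -> float:
    """Round a length to the nearest multiple of DELTA_1."""
    return round(v / DELTA_1) * DELTA_1

def draw_arch_parameters(rng: random.Random):
    """Draw (x, y, h) for an Arch print and discretize at DELTA_1."""
    x = rng.gauss(4.2, 0.7)    # lower corner of the left region, x (mm)
    y = rng.gauss(2.8, 0.7)    # lower corner of the left region, y (mm)
    h = rng.gauss(4.05, 0.7)   # height of the central corridor (mm)
    return tuple(snap(v) for v in (x, y, h))

rng = random.Random(0)
print(draw_arch_parameters(rng))  # an (x, y, h) triple on the 1.5 mm grid
```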
+ +# Level-Two Parameters + +- Level-two spatial resolution limit $\delta_{2}$ : We estimate $\delta_{2}$ by $r_0$ , the spatial uncertainty of minutiae location in two dimensions [Pankanti et al. 2002], and propose $\delta_{2} = 1$ for best-case calculations. +- Relative minutiae type frequencies $p_d, p_b$ , and $p_t$ : Almost every compound minutia can be broken into a combination of bifurcations and terminations separated spatially. Counting these compound minutiae appropriately, we determine the relative minutiae frequencies in Table 3. +- Ridge period $w$ : We use $0.463\mathrm{mm}$ /ridge for the ridge period, the distance from the middle of one ridge to the middle of an adjacent one [Stoney and Thornton 1986]. + +Table 3. Frequencies of simple minutiae types (from Osterburg et al. [1977]). + +
| $p_b$ | $p_t$ | $p_d$ |
|-------|-------|--------|
| 0.356 | 0.581 | 0.0629 |
+ +- Mean number of minutiae per print $\mu$ : Under ideal circumstances, we discern 40 to 60 minutiae on a print [Pankanti et al. 2002]; we take $\mu = 50 \pm 10$ . +- Linear minutiae density $\lambda$ : We calculate $\lambda$ by dividing the average number of minutiae per thumbprint $\mu$ by the total ridge length of a thumbprint $XY / w$ . Under ideal conditions, this gives $\lambda = 0.13 \pm 0.03$ minutiae/mm. In practice, we may have $\lambda = 0.05 \pm 0.03$ minutiae/mm [Pankanti et al. 2002]. + +Finally, we estimate that there have been 100 billion humans [Haub 1995]. + +# Model Analysis and Testing + +Let the probability that a print has a configuration $x$ be $p_c(x)$ . Assuming that fingerprint patterns are distributed independently, the probability that two prints match is $p_c^2(x)$ . The sum of these probabilities over the configuration space is the total probability that some match occurs. + +The probabilities associated with the two levels of detail are determined independently, so the total occurrence probability factors into $p_{c1}(x_1)p_{c2}(x_2)$ . Denoting the level-one configuration subspace as $C_1$ and the level-two subspace as $C_2$ , the total probability of the prints matching is + +$$ +p = \sum_ {i \in C _ {1}} \sum_ {j \in C _ {2}} [ p _ {c 1} (i) p _ {c 2} (j) ] ^ {2} = \left(\sum_ {i \in C _ {1}} p _ {c 1} ^ {2} (i)\right) \left(\sum_ {j \in C _ {2}} p _ {c 2} ^ {2} (j)\right) = p _ {1} p _ {2}. +$$ + +# Level-One Detail Matching + +We restrict each parameter to a region of parameter space in which we expect to find it and assume that it is uniformly distributed there. This approximation is enough to estimate order of magnitude, which suffices for our analysis. Then + +$$ +p _ {c 1} (i) = \frac {\nu_ {i}}{\left(\prod_ {j \in V (i)} \frac {L _ {j}}{\delta_ {1}}\right)}, \tag {1} +$$ + +where $L_{j}$ is the range of parameter $j$ in $V(i)$ , the set of parameters for a type-$i$ ridge structure.
For (1) to be accurate, we should make any $L_{j}$ corresponding to angular parameters the product of the angle range with our typical length of $5.4 \mathrm{~mm}$ . The product is simply the total number of compartments in the + +parameter space, since we assume a uniform distribution in that range. Calculating $p_{c1}(i)$ for each ridge structure type, and summing squares, we find the probability that two thumbprints have the same overall ridge structure: + +$$ +p _ {1} = \sum_ {i \in C _ {1}} p _ {c 1} ^ {2} (i) = 0.00044. \tag {2} +$$ + +# Level-Two Detail Matching + +If we disregard the infrequent dot minutiae, we obtain the probability + +$$ +p _ {c 2} (j) = (\delta_ {2} \lambda) ^ {k} (1 - \delta_ {2} \lambda) ^ {C - k} p _ {b} ^ {k _ {b}} p _ {t} ^ {k - k _ {b}} \frac {1}{2 ^ {k}} +$$ + +for a configuration $j$ with $k$ minutiae, $k_{b}$ of which are bifurcations (and the rest terminations), placed in $C = XY / w\delta_{2}$ cells. If we simplify minutia-type frequencies to $p_{b} = p_{t} = 1 / 2$ , and note that there are + +$$ +\left( \begin{array}{c} C \\ k \end{array} \right) \left( \begin{array}{c} k \\ k _ {b} \end{array} \right) 2 ^ {k} +$$ + +ways to configure $j$ given $k$ and $k_{b}$ , the total probability of a match becomes + +$$ +\begin{array}{l} p _ {2} = \sum_ {k = 0} ^ {C} \sum_ {k _ {b} = 0} ^ {k} \left((\delta_ {2} \lambda) ^ {k} (1 - \delta_ {2} \lambda) ^ {C - k} \frac {1}{4 ^ {k}}\right) ^ {2} \binom {C} {k} \binom {k} {k _ {b}} 2 ^ {k} \\ = \sum_ {k = 0} ^ {C} (\delta_ {2} \lambda) ^ {2 k} (1 - \delta_ {2} \lambda) ^ {2 (C - k)} \frac {1}{4 ^ {k}} \binom {C} {k} \\ = \left(\frac {5 (\delta_ {2} \lambda) ^ {2} - 8 \delta_ {2} \lambda + 4}{4}\right) ^ {C}.
\\ \end{array} +$$ + +Match probabilities for $\lambda = 0.13 \pm 0.03 / \mathrm{mm}$ , $\delta_{2} = 1 \mathrm{~mm}$ , and $C = 250$ to 400 cells range from $2.9 \times 10^{-23}$ to $9.8 \times 10^{-60}$ ; probabilities for the more realistic values $\lambda = 0.05 \pm 0.03 / \mathrm{mm}$ , $\delta_{2} = 2 - 3 \mathrm{~mm}$ , and $C = 100$ to 250 cells range from $3.7 \times 10^{-5}$ to $1.7 \times 10^{-47}$ . + +# Historical Uniqueness of Fingerprints + +Denote the probability of a match of any two left thumbprints in the history of the human race by $p$ and the world total population by $N$ . The probability of at least one match among $\binom{N}{2}$ thumbprints is + +$$ +P = 1 - (1 - p) ^ {\binom {N} {2}}. +$$ + +Figure 4a illustrates the probability of at least one match for $N = 10^{11}$ , while Figure 4b shows a log-log plot of the probability for very small $p$ -values. Since even conservative parameter values in the ideal case give $p \ll 10^{-30}$ , our model solidly establishes uniqueness of fingerprints through history. + +![](images/095f6dc6be85d11f7de0f0277b04a67e440abc2b8673f3e2d293cc7a451cfa1e.jpg) +Figure 4a. For $N = 10^{11}$ , probability of at least one thumbprint match through history. + +![](images/90c2ff485f7a884c52b02df0981da7cf59816a8b74f9e1067691f4a2d6f845ce.jpg) +Figure 4b. Log-log plot of probability. + +# Strengths and Weaknesses of the Model + +# Strengths + +- Topological coordinate system: We take topological considerations into account, as demanded by Stoney and Thornton [1986]. +- Incorporation of ridge structure detail: We use this in addition to the minutiae detail that is the primary focus of most other models. +- Integration of nonuniform distributions: We allow for more-complex distributions of the ridge structure parameters, such as Gaussian distributions for singularity locations, and we consider that distribution of minutiae along ridges may depend on the location of the ridge in the overall structure. 
+- Accurate representation of minutia type and orientation: We follow models such as those developed by Roxburgh and Stoney in emphasizing the bidirectional orientation of minutiae along ridges, and we further consider the type of minutiae present as well as their location and orientation. Cruder models of minutiae structure [Osterburg et al. 1977; Pankanti et al. 2002] neglect some of this information. +- Flexibility in parameter ranges: We test a range of parameters in both ideal and practical scenarios and find that the model behaves as expected. + +# Weaknesses + +- Ambiguous prints, smearing, or partial matches: We assume that ambiguities in prints reflect ambiguities in physical structure and are not introduced by the printing. This is certainly not the case for actual fingerprints. +- Domain discontinuities: We have no guarantee of continuity between regions of flow; continuity requirements may affect the level-one matching probabilities significantly. + +- Nonuniform minutia distribution: We assume that the distribution of minutiae along a ridge is uniform. However, models should account for variations in minutiae density and clustering of minutiae [Stoney and Thornton 1986]. Although we have a mechanism for varying this distribution, we have no data on what the distribution should be. +- Left/right orientation distribution: We assume that the distribution of minutiae orientation is independent and uniform throughout the print. Amy notes, however, that the preferential divergence or convergence of ridges in a particular direction can lead to an excess of minutiae with a particular orientation [Stoney and Thornton 1986]. +- Level-three information: We neglect level-three information, such as pores and edge shapes, because of uncertainty about its reproducibility in prints. 
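Despite these caveats, the closed-form results above are easy to check numerically. A minimal sketch in Python, using only parameter values quoted in the text (the function names are ours):

```python
import math

# Sanity-checking the closed forms above: the level-two match probability
# p2 = ((5*(d*lam)**2 - 8*d*lam + 4)/4)**C and the probability of at least
# one match among N individuals, P = 1 - (1 - p)**binom(N, 2).

def p2(lam: float, delta2: float, cells: int) -> float:
    """Probability that two prints share their level-two configuration."""
    dl = delta2 * lam
    return ((5 * dl ** 2 - 8 * dl + 4) / 4) ** cells

def p_any_match(p: float, n: float) -> float:
    """Probability of at least one match among n individuals."""
    pairs = n * (n - 1) / 2
    # expm1/log1p preserve precision for the tiny probabilities involved
    return -math.expm1(pairs * math.log1p(-p))

print(p2(0.10, 1.0, 250))  # ~2.9e-23, low end of the quoted ideal-case range
print(p2(0.16, 1.0, 400))  # ~9.8e-60, high end of that range
# Combined with the level-one factor p1 = 0.00044 from (2):
print(p_any_match(0.00044 * p2(0.13, 1.0, 400), 1e11))  # effectively zero
```

With $N = 10^{11}$ there are roughly $5 \times 10^{21}$ pairs, so any total match probability far below $10^{-22}$ keeps the history-wide match probability negligible, consistent with Figure 4.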
+ +# Comparison with DNA Methods + +# DNA Fingerprinting + +The genetic material in living organisms consists of deoxyribonucleic acid (DNA), a macromolecule in the shape of a double helix with nitrogen-base "rungs" connecting the two helices. The configurations of these nitrogen bases encode the genetic information for each organism and are unique to the organism (except for identical twins and other cases in which an organism splits into multiple separate organisms). + +Direct comparison of base-pair sequences for two individuals is infeasible, so scientists sequence patterns in a person's DNA called variable number tandem repeats (VNTR), sections of the genome with no apparent genetic function. + +# Comparison of Traditional and DNA Fingerprinting + +While level-two data is often limited by print quality, we expect level-one detail to remain relatively unchanged unless significant sections of the print are obscured or absent. We use $p_1 = 10^{-3}$ from (2), allowing for a conservative loss of seven-eighths of the level-one information. Multiplying by this level-one factor $10^{-3}$ , all but the three worst probabilities are less than $10^{-9}$ . + +DNA fingerprinting has its flaws: False positives can arise from mishandling samples, but the frequency is difficult to estimate. The probability of two different patterns exhibiting the same VNTR by chance varies between $10^{-2}$ and $10^{-4}$ , depending on the VNTR [Roeder 1994; Woodworth 2001]. The total probability of an individual's DNA pattern occurring by chance is computed under the assumption that the VNTRs are independent, which has been verified for the ten most commonly used VNTRs [Lambert et al. 1995]. + +# Results and Conclusions + +We present a model that determines whether fingerprints are unique. We consider both the topological structure of a fingerprint and the fine detail present in the individual ridges. 
We compute probabilities that suggest that fingerprints are reasonably unique among all humans who have lived. + +Fingerprint evidence compares well with DNA evidence in forensic settings. Our model predicts that with even a reasonably small fingerprint area and number of features, the probability that a match between a latent print and a suspect's print occurs by chance is less than $10^{-9}$ . Both DNA evidence with few VNTRs and fingerprints of poor quality with few features can give inconclusive results, resulting in uncertainty beyond a reasonable doubt. + +# References + +Adams, Julian. 2002. DNA fingerprinting, genetics and crime: DNA testing and the courtroom. http://www.fathom.com/courese/21701758/. +Beeton, Mary. 2002. Scientific methodology and the friction ridge identification process. Identification Canada 25 (3) (September 2002). Reprinted in Georgia Forensic News 32 (3) (November 2002) 1, 6-8. http://www.ridgesandfurrows.homestead.com/files/Scientific_Methodology.pdf. +Cappelli, Raffaele, Alessandra Lumini, Dario Maio, and Davide Maltoni. 1999. Fingerprint classification by directional image partitioning. IEEE Transactions on Pattern Analysis and Machine Intelligence 21 (5): 402-421. +Epstein, Robert. 2002. Fingerprints meet Daubert: The myth of fingerprint "science" is revealed. Southern California Law Review 75: 605-658. +Galton, Francis. 1892. *Fingerprints*. London: Macmillan. +Haub, Carl. 1995. How many people have ever lived on Earth? Population Today 23 (2) (February) 4-5. Reprinted 30 (8) (November/December 2002) 4-5. http://www.prb.org/Content/ContentGroups/PTarticle/Oct-Dec02/How_Many_People_Have_Ever_Lived_on_Earth_.htm. +Jeffreys, A.J., V. Wilson, and S.L. Thein. 1985. Individual-specific fingerprints of human DNA. Nature 316: 76-79. +Lambert, J.A., J.K. Scranage, and I.W. Evett. 1995. Large scale database experiments to assess the significance of matching DNA profiles. International Journal of Legal Medicine 108: 8-13.
+Marcialis, Gian Luca, Fabio Roli, and Paolo Frasconi. 2001. Fingerprint classification by combination of flat and structural approaches. In International + +Conference on Audio- and Video-Based Biometric Person Authentication AVBPA01 (June 2001, Halmstad, Sweden), edited by J. Bigun and F. Smeraldi. Springer Lecture Notes in Computer Science 2091: 241-246. +National Center for State Courts (NCSC). 2004. http://www.ncsconline.org/. +German, Ed. 2003. The history of fingerprints. http://onin.com/fp/fphistory.html. +Osterburg, James W., T. Parthasarathy, T.E.S. Ragvahan, and Stanley L. Sclove. 1977. Development of a mathematical formula for the calculation of fingerprint probabilities based on individual characteristics. Journal of the American Statistical Association 72: 772-778. +Pankanti, S., S. Prabhakar, and A. Jain. 2002. On the individuality of fingerprints. IEEE Transactions on Pattern Analysis and Machine Intelligence: 1010-1025. http://citeseer.ist.psu.edu/472091.html. +Prabhakar, Salil. 2001. Fingerprint matching and classification using a filter-bank. Ph.D. thesis. East Lansing, Michigan: Michigan State University. +Roeder, K. 1994. DNA fingerprinting: a review of the controversy. Statistical Science 9: 222-247. +Sclove, Stanley L. 1979. The occurrence of fingerprint characteristics as a two-dimensional process. Journal of the American Statistical Association 74: 588-595. +Stoney, David A., and John I. Thornton. 1986. A critical analysis of quantitative fingerprint individuality models. Journal of Forensic Sciences 31 (4): 1187-1216. +Thompson, William. 2003. Examiner bias. http://www.bioforensics.com/conference/ExaminerBias/. +Watson, C.I., and C.L. Wilson. 1992. NIST Special Database 4, Fingerprint Database. U.S. National Institute of Standards and Technology. +Wayman, James L. 2000. Daubert hearing on fingerprinting: When bad science leads to good law: The disturbing irony of the Daubert hearing in the case of U.S. v. Byron C. Mitchell.
http://www.engr.sjsu.edu/biometrics/publications_daubert.html. +Woodworth, George. 2001. Probability in court—DNA fingerprinting. http://www.stat.uowa.edu/~gwoodwor/statsoc. + +# Can't Quite Put Our Finger On It + +Seamus Ó Ceallaigh + +Alva Sheeley + +Aidan Crangle + +University College Cork + +Cork, Ireland + +Advisor: James J. Grannell + +# Summary + +There are two main paths to identification through fingerprints. + +- Global analysis relies on the specific arrangement and characteristics of the ridges of a print. +- Local analysis assumes that the individuality of a print is based on the position and orientation of the two basic types of minutiae. + +We subdivide a print into a grid of square cells, consider the distribution of ridge features, and calculate probabilities from combinatorial analysis. We make predictions by refining parameters of the model, such as the number of minutiae required for a positive match between two generic prints, and the size of a cell in the main grid. We compare our results to previous studies and discuss the relation to DNA profiling. The simplicity of our model is its key strength. + +We conclude that it is extremely unlikely that any two randomly selected people have, or have ever had, the same set of fingerprints. + +Despite the apparently simplistic nature of fingerprinting, it is vastly more reliable in identification than a DNA comparison test. + +# Introduction + +In recent years, the scientific basis of fingerprint analysis has been questioned, most prominently in the U.S. Supreme Court's ruling in the Daubert case, which requires that the reliability of expert scientific testimony be established against the following five criteria: + +1. whether the particular technique or methodology in question has been subjected to statistical hypothesis testing; +2. whether its error rate has been established; +3. whether the standards controlling the technique's operation exist and have been maintained; +4.
whether it has been peer-reviewed and published; and +5. whether it has gained general widespread acceptance. + +Our model tries to address the first two issues. We aim to produce a probabilistic method to measure the uniqueness of a particular print. + +# Assumptions + +- A thumbprint is defined globally by ridge patterns and locally by a distribution of minutiae, which we refer to also as features. +- The area of interest on a typical thumbprint is a $20 \mathrm{~mm} \times 20 \mathrm{~mm}$ square grid. +- There are two significant types of minutiae, the bifurcation and the ridge ending; all other minutiae are compositions of these [Osterburg et al. 1977]. +- The probability of a minutia occurring in a grid box is .234 [Osterburg et al. 1977]. +- The orientation of the minutiae was not taken into account by Osterburg; we assign each minutia one of eight orientations, from $0^{\circ}$ to $157.5^{\circ}$ , in steps of $22.5^{\circ}$ . +- When comparing two prints, we know one print arbitrarily well. +- The number of people who have ever lived is $1.064 \times 10^{11}$ [Haub 1995]. + +# The Model + +# Global Analysis + +# Ridge Patterns And Orientation Fields + +Global analysis concerns ridge patterns, which distinguish prints into six main pattern groups: Arch, Tented Arch, Left Loop, Right Loop, Twin Loop, and Whorl. Each pattern is determined by an orientation field, which may have specific stationary points, known as the delta and the core. If a print contains 0 or 1 delta points and 0 or 1 core points, then it is classified as Lasso, and as Wirbel otherwise. + +The Lasso class consists of arch, tented arch, right loop, and left loop. + +- If the fingerprint has 0 delta points or 0 core points, then it is an arch. +- Otherwise, if the core point and the delta point are aligned in the vertical direction, then the fingerprint is an arch if the distance between the core point and the delta point is less than $2.5 \mathrm{~mm}$ and a tented arch otherwise.
+- Otherwise, if the core point is to the right of the delta point, the fingerprint is a right loop. +- Otherwise, the fingerprint is a left loop. + +The Wirbel class consists of the whorl and the twin loop classes: + +- If there are exactly two core points and exactly two delta points, then the fingerprint is a whorl if the two core points are aligned horizontally and a twin loop otherwise. +- Otherwise, the fingerprint is a whorl. + +The main aim of global analysis is to fit a vector field, or orientation field, to the ridge lines of a fingerprint. We must find suitable parameters for such functions that give rise to the different classes of ridge pattern. The most basic pattern, without stationary points, is the arch (Figure 1), modeled by the simple system + +$$ +\frac{dx}{dt} = \mu y, \qquad \frac{dy}{dt} = -\nu, +$$ + +with parameters $\mu$ and $\nu$ . The orientation fields for other ridge patterns are more complex, so the bulk of our model is directed at the print's local features. + +![](images/c799b70249d35cc40f06cc032e26bbee83eff3c659bf613a41513b0caaec013b.jpg) +Figure 1. Arch orientation field. + +# Local Analysis + +# Estimates + +For an initial estimate of the probability of any two people having the same thumbprint, we must consider: + +- the total number of people who have ever lived, estimated to be $1.064 \times 10^{11}$ [Haub 1995] (that is, about $6\%$ of all people who have ever lived are alive right now); and +- the total number of possible thumbprints that can be classified as "different." + +To decide whether or not one thumbprint is the same as another, one must first decide on what exactly is a thumbprint. In our model, we take a typical area of the print as $20\mathrm{mm}\times 20\mathrm{mm}$ , which is large enough to encompass the area of interest on any print. We divide this area into boxes $1\mathrm{mm}$ on a side, thus giving 400 boxes, each of area $1\mathrm{mm}^2$ . In principle, each box can be examined to determine whether or not it contains a minutia.
+ +Minutiae are the features on a thumbprint that are used by almost all identification techniques to distinguish between prints. There are from 10 [Galton 1892] to 16 [Optel Ltd. 2003] different types of minutiae, but they are all composed of two fundamental types: ridge endings and ridge bifurcations. In our analyses, we consider only these two types. Also, if there are two minutiae in the same cell, it is impossible to resolve them separately. + +Say that there are $i$ resolvable features on the print. The number of ways that we can insert these $i$ features into the 400 spaces in the grid is + +$$ +\binom{400}{i}. +$$ + +Remembering that there are two possibilities for each feature, the total number of combinations is then + +$$ +\binom{400}{i} 2^{i}. +$$ + +What is the value of $i$ ? We take the probability that any box contains a feature to be .234 [Osterburg et al. 1977]. Then $i$ has a binomial distribution: + +$$ +P(i = x) = \binom{400}{x} (.234)^{x} (1 - .234)^{400 - x}, +$$ + +with mean $\mu \approx 94$ and standard deviation $\sigma \approx 8$ , so that the average number of cells containing features is 94. + +Thus, the total number of possible thumbprints is + +$$ +N = \sum_{i = 0}^{400} \binom{400}{i} 2^{i}. +$$ + +The binomial distribution for $i$ , however, is concentrated mainly in the region $\mu - \sigma < i < \mu + \sigma$ , or $94 - 8 < i < 94 + 8$ . To be conservative, we consider only this range of numbers of minutiae; thus, there are approximately + +$$ +N \approx \sum_{i = 86}^{102} \binom{400}{i} 2^{i} \approx 1.19 \times 10^{128} +$$ + +different thumbprints "available" for any actual thumb to hold. So very roughly, + +$P(\text{two people ever having the same thumbprint}) = \frac{1.064 \times 10^{11}}{1.19 \times 10^{128}} \sim 10^{-117}$ .
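The sums above are easy to evaluate exactly with integer arithmetic. The following sketch is our own illustration (not part of the original paper); it assumes the text's window $86 \le i \le 102$ and $F = 2$ feature types.

```python
from math import comb

CELLS, F = 400, 2   # 1-mm cells on the 20 mm x 20 mm grid; 2 feature types
LO, HI = 86, 102    # one-sigma window of Binomial(400, .234)

# N = sum of C(400, i) * 2^i over the one-sigma window, as in the text.
N = sum(comb(CELLS, i) * F**i for i in range(LO, HI + 1))

exponent = len(str(N)) - 1   # floor(log10 N); the text quotes N ~ 1.19e128
people = 1.064e11            # cumulative world population [Haub 1995]
print(f"N ~ 10^{exponent}, P(shared thumbprint) ~ {people / float(N):.0e}")
```

Running this reproduces the order of magnitude quoted above; the ratio to the cumulative population lands within an order of magnitude of the text's $10^{-117}$.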
+ +# Comparison + +This figure is the (approximate) probability that there have ever been two people who have had the same thumbprint. How might we take into account the chance that, when compared, two prints will be judged to be the same? To do this, we consider two hypothetical prints: + +- A control print: an ideal known print, in which all $i$ features are seen. +- A sample print: a print with $i$ features in total, of which only $n$ are available for comparison. + +For two prints that are compared in a realistic circumstance, there will be at least $(i - n)$ features that are not included in the comparison. These features, in theory, could be in any combination of positions in the grid. The main question is: How many prints share the same $n$ matching features? In other words, how many different ways can the remaining $(i - n)$ features be inserted into the grid and still produce a match with the control print? Knowing that, we can estimate how likely it is that two thumbprints not actually the same will match. + +# Incorrect Matching + +We have $(i - n)$ features to distribute among $(400 - n)$ grid elements. The number of different ways to do this is, by previous reasoning, + +$$ +\sum_{i = 86}^{102} \binom{400 - n}{i - n} 2^{i - n}. +$$ + +In criminal proceedings, anywhere from 8 [Collins 1992] to 15 [Vacca 2002] matching minutiae are accepted as conclusive proof of identification. Our model predicts that for $n = 12$ , the total number of thumbprints that could have the same set of matching minutiae while not being the same print is $N = 1.3 \times 10^{117}$ . But expressed as a fraction of the total number of possible prints, the probability that a print selected at random is one of these is + +$$ +P(\text{false match}) = \frac{1.3 \times 10^{117}}{1.19 \times 10^{128}} \approx 1.09 \times 10^{-11}. \tag{1} +$$ + +This is an extremely low probability.
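Equation (1) can be checked numerically. This is our sketch, not the authors' code; it uses the text's parameters ($n = 12$, window $86 \le i \le 102$).

```python
from math import comb

n = 12            # minutiae required for a positive match
LO, HI = 86, 102  # one-sigma window of Binomial(400, .234)

# Prints whose i - n uncompared features land anywhere in the 400 - n free cells:
false_prints = sum(comb(400 - n, i - n) * 2**(i - n) for i in range(LO, HI + 1))
# All possible prints over the same window:
all_prints = sum(comb(400, i) * 2**i for i in range(LO, HI + 1))

print(f"P(false match) ~ {false_prints / all_prints:.2e}")  # text: ~1.09e-11
```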
+ +# Varying Parameters + +The result (1) depends on the parameters, which can be varied according to circumstance and also as a way of refining the model: + +- $p$ : the probability of finding a feature in a grid cell. We take $p = .234$ [Osterburg et al. 1977]. Others [Thai 2003; Kingston 1964; Stoney and Thornton 1987; Dankmeijer et al. 1980] give values in the range $.19 < p < .25$ . +- $N$ : the number of cells in the grid. If there are more cells, then, on average, more features will be observed, since $p(\text{feature})$ for a cell remains the same. +- $n$ : the number of minutiae that one takes for comparison. We take $n = 12$ . +- $i$ : a variable, determined by $p$ and $N$ , that gives reasonable bounds for the summation. +- $F$ : the number of different features that can appear in a grid cell. In our initial estimate, we take $F = 2$ . +- $L, A$ : the length of a side, and the area, of a grid cell. We take $L \approx 1$ mm, the average distance between features [Thai 2003]. + +If we wish to examine a thumbprint more closely, we should consider smaller and smaller areas of the print. It is not meaningful, however, to take $L$ less than $\sim 0.1 \mathrm{~mm}$ , since this is the typical ridge width. + +# The Dependence on $L$ + +We rework the model, taking the width of the generic grid cell to be $0.5\mathrm{mm}$ . Taking the overall area of the print to be the same, there are now 1,600 grid cells to consider, each with the same probability of having a feature. The binomial expression for $i$ , the number of features observed on the whole print, is now + +$$ +P(i = x) = \binom{1600}{x} (.234)^{x} (1 - .234)^{1600 - x}, +$$ + +with mean $\mu = 374$ and standard deviation $\sigma = 17$ . Thus, the region of relevance when summing is now $374 - 17 = 357 < i < 391 = 374 + 17$ . This means that when the thumbprint is examined on a scale half that of the initial, about four times as many minutiae will be observed.
Intuitively, the likelihood of a false match should decrease, since the number of possible prints is larger: + +$$ +N_{1600} = \sum_{i = 357}^{391} \binom{1600}{i} 2^{i} \approx 3.5 \times 10^{502}. +$$ + +The probability that any of the 100 billion people who have ever lived have had the same thumbprint is + +$P(2 \text{ people ever having the same thumbprint}) = \frac{1.064 \times 10^{11}}{3.5 \times 10^{502}} \sim 10^{-492}$ . + +We now determine the number of ways in which, when a certain number $n$ of minutiae are selected for comparison, the remaining minutiae can be arranged. Following the same logic as before, this figure is + +$$ +\sum_{i = 357}^{391} \binom{1600 - n}{i - n} 2^{i - n}, +$$ + +which evaluates to $3.4 \times 10^{491}$ for $n = 12$ . The probability that any two compared thumbprints, judged to be identical by the standards of comparison, are actually different is therefore + +$$ +P(\text{false match} \mid n = 12, N = 1600) = \frac{3.4 \times 10^{491}}{3.5 \times 10^{502}} \approx 10^{-11}. +$$ + +Interestingly, this is not much different from the previous estimate. The result therefore does not depend acutely on the value of $L$ , nor, by association, on the number $N$ of grid cells. That said, it is easy to examine details in any print down to a scale of $0.5 \mathrm{~mm}$ . + +# The Dependence on $p$ + +The probability of a feature in a cell is, as yet, a purely empirical figure. The broad process by which fingerprints form is known: The foetus, at about 6.5 weeks, grows eleven "volar pads"—pouches on various locations of the hand [Anonymous 2001]. These shrink at about 11 weeks; and when they are gone, beneath where they lay are fingerprints. However, the mechanism of formation of the specific features is unknown.
Genetic influences are present, but the environment is also crucial, as evidenced by the fact that identical twins—who have the same DNA genotype—do not have the same fingerprints. + +There is no way yet determined of predicting the frequency of occurrence of any type of minutia on the print of a particular person. Previous studies, cited in Table 1, show variation about $\sim 0.2$ minutiae/ $\mathrm{mm}^2$ . It is not unreasonable to propose that the density depends on the print classification (i.e., whorl, loop, arch, etc.). + +The range is $.204 < p < .246$ . For 1,600 boxes, we have + +$$ +P(\text{false match}) \approx 1.89 \times 10^{-12} \quad \text{for } p = .204, +$$ + +$$ +P(\text{false match}) \approx 1.78 \times 10^{-11} \quad \text{for } p = .246. +$$ + +The variation of $p$ changes the final prediction by no more than an order of magnitude. + +Table 1. Minutiae densities reported in previous studies.

| Source | Number of prints | Mean density (minutiae/mm$^2$) |
| --- | --- | --- |
| Osterburg et al. [1977] | | .234 |
| Dankmeijer et al. [1980] | 1,000 | .19 |
| Stoney and Thornton [1987] | 412 | .223 |
| Kingston [1964] | 100 | .246 |
| Thai [2003] | 30 | .204 |
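The parameter variations in this section can all be driven from one routine. The function below is our hedged sketch of the model (not the authors' code); parameter names mirror the text, and the one-sigma window is rounded to integers.

```python
from math import comb, sqrt

def p_false_match(cells, p=0.234, F=2, n=12):
    """P(false match): distribute the i - n uncompared features over the
    cells - n free boxes, summing i over the one-sigma binomial window."""
    m, s = round(cells * p), round(sqrt(cells * p * (1 - p)))
    num = sum(comb(cells - n, i - n) * F**(i - n) for i in range(m - s, m + s + 1))
    den = sum(comb(cells, i) * F**i for i in range(m - s, m + s + 1))
    return num / den

# Cell-size insensitivity: 400 boxes vs. 1,600 boxes, both ~1e-11.
print(p_false_match(400), p_false_match(1600))
# Density sensitivity at 1,600 boxes; the text quotes 1.89e-12 and 1.78e-11.
print(p_false_match(1600, p=0.204), p_false_match(1600, p=0.246))
```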
# The Dependence on $F$ + +In our initial estimate, we take the number $F$ of degrees of freedom of a feature in a print to be two: either a ridge ending or a ridge bifurcation. However, one can also consider the orientation of a feature. Each minutia lies on a ridge, which has a well-defined direction. We discretize this variable to one of eight possible directions, at angles from $0^{\circ}$ to $157.5^{\circ}$ in steps of $22.5^{\circ}$ . Thus each feature, instead of having 2 degrees of freedom, now has 16. + +The probability of a false match, taking 1,600 grid cells and a probability of occurrence $p = .234$ , is now + +$$ +P(\text{false match}) = \frac{\sum_{i = 357}^{391} \binom{1600 - n}{i - n} 16^{i - n}}{\sum_{i = 357}^{391} \binom{1600}{i} 16^{i}}. +$$ + +Taking $n = 12$ as before, we find that + +$$ +P(\text{false match}) \approx 2.6 \times 10^{-22}. +$$ + +This is an astonishingly smaller probability than the previous estimate of $10^{-11}$ . The orientation of a feature is no more difficult to determine in practice than its nature, so including it in the comparison process is a great improvement in efficacy with a modest increase in effort. + +# The Dependence on $n$ + +It is crucial to determine how many matching minutiae are necessary for a positive comparison. We have taken $n = 12$ in the preceding analyses; it is instructive to consider how the probability of a false match varies with $n$ . The graphs in Figure 2 show that the probability falls off sharply as soon as $n$ increases beyond 1. A value of $n \approx 5$ is quite sufficient. + +![](images/2e28dbade2d3dc0d430ec0e335d2e40c52727f816779eb66c17195ba585b07d46.jpg) +Figure 2. Probability of false match (linear and logarithmic scales) vs. $n$ .
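The $F = 16$ figure, and the sharp falloff with $n$ shown in Figure 2, can both be reproduced from the same two sums. This sketch is ours; it fixes the window $357 \le i \le 391$ from the text.

```python
from math import comb

def p_false(n, F=16, cells=1600, lo=357, hi=391):
    # Numerator: arrangements of the i - n uncompared features;
    # denominator: all prints over the same window.
    num = sum(comb(cells - n, i - n) * F**(i - n) for i in range(lo, hi + 1))
    den = sum(comb(cells, i) * F**i for i in range(lo, hi + 1))
    return num / den

print(f"n = 12: {p_false(12):.1e}")  # text: ~2.6e-22
for n in (1, 3, 5, 8):               # the falloff plotted in Figure 2
    print(f"n = {n:2d}: {p_false(n):.1e}")
```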
+ +# Conclusion + +# The Model + +Our model, in the most general form, is + +$$ +P(\text{false match}) = \frac{\sum_{i = \mu - \sigma}^{\mu + \sigma} \binom{N - n}{i - n} F^{i - n}}{\sum_{i = \mu - \sigma}^{\mu + \sigma} \binom{N}{i} F^{i}}, +$$ + +where + +- $F$ is the number of degrees of freedom and $N$ is the number of grid cells, +- $\mu$ and $\sigma$ are the mean and standard deviation of the binomial distribution determined by $N$ and $p$ , +- $p$ is the probability of there being a feature in a cell, and +- $n$ is the number of minutiae being used to make a comparison. + +The preliminary result returned from our model, for $n = 12$ (a typical threshold for positive identification in many countries), is $P \approx 10^{-11}$ . Further refinement of the parameters reduces this to $P \approx 7 \times 10^{-22}$ . We conclude that 12 is a very reasonable comparison criterion, and that $n = 5$ or 6 is quite damning for any suspect so compared. + +Thus, we conclude, to a very high degree of certainty, not only that no two people, now living or having ever lived in the past, have had the same thumbprint, but also that there is a vanishingly small chance that two prints are even close enough to be confused, given a small fraction of minutiae from their patterns to compare. + +# DNA Analysis + +DNA identity testing is based on aspects of the DNA patterns called loci. For a $100\%$ match, the FBI [Thompson et al. n.d.] recommends that 13 loci be used. Using STR (Short Tandem Repeat) markers ensures that the inheritance profile at one location does not influence the inheritance at other locations. Each locus has two alleles, so 26 alleles must match. The FBI says that the probability of a false match is $2.60 \times 10^{-9}$ , while other sources quote values between $10^{-9}$ and $10^{-12}$ .
+ +For two people chosen at random, the probability of a match based on the four most frequently analyzed alleles is between $1 \times 10^{-5}$ and $1 \times 10^{-8}$ . This is significantly higher than our estimated probability of a match for thumbprints. Hence, thumbprinting remains the most accurate form of biometric security known. + +# Strengths and Weaknesses + +# Strengths of the Model + +- Simplicity. Our model is based on easily understood principles and simply expressed assumptions. +- Realistic assumptions. +- Parameters. The parameters in the model, such as the size of the grid-box, the total area, and the number of minutiae needed to match two fingerprints, can be easily varied. +- Degrees of freedom. In specifying two different kinds of possible minutiae and 8 orientation ranges for each one, the number of degrees of freedom is 16, greatly increasing the number of possible configurations of thumbprints and so minimising the probability of misidentification. Other studies [Osterburg et al. 1977; Galton 1892] do not take into account the orientation of the minutiae. By discretizing the directions of the features, we again keep the model simple. +- Corroboration. The probabilities returned by our model tie in with those given by previous studies by experts in the field (Table 2). + +Table 2. Comparison of probabilities across studies.

| Study | Probability |
| --- | --- |
| Galton [1892] | $1.45 \times 10^{-11}$ |
| Osterburg et al. [1977] | $1.33 \times 10^{-27}$ |
| Stoney and Thornton [1987] | $3.5 \times 10^{-26}$ |
| our model | $10^{-11}$ to $10^{-22}$ |
# Weaknesses of the Model + +- Multiple entries. We assumed that in any given grid-box only one minutia can be present, which is sufficiently accurate for most types of minutiae. For example, both the bridge (consisting of two ridge bifurcations) and the spur (consisting of a ridge bifurcation and a ridge ending) have been defined [Osterburg et al. 1977] as being less than $2\mathrm{mm}$ in length. Thus, if the bridge or spur is more than $0.707\mathrm{mm}$ in length, its constituent endings and bifurcations appear in different boxes and are counted as two separate minutiae. + +However, for minutiae consisting of ridge endings and ridge bifurcations in very close proximity, there is a chance that the components will not be caught in different boxes. An example is a dot. The distance between the two ridge endings that make up a dot is so small that it is unlikely that our model would catch these two occurrences of ridge endings in different boxes. A dot has been defined [Osterburg et al. 1977] as being large enough to encompass one pore, whose size ranges from $0.088 \mathrm{~mm}$ to $0.22 \mathrm{~mm}$ [Roddy and Stosz 1997]. Therefore, the two ridge endings will not appear in different boxes but will instead be misidentified as a single minutia. + +- Independence of minutia occurrence. We assume that the placement of a minutia is completely unrelated to the placement of any others. This is not quite the case; there is a slight tendency for minutiae not to occur in direct proximity to each other. +- Global analysis. The overall ridge pattern of a thumbprint is entirely distinctive in its own right. We have not quantified this factor in our model. + +# Appendix: Classification + +# Minutiae + +The ridges in a fingerprint or thumbprint form various patterns, called minutiae. Ten different types are shown in Figure A1. + +- Ridge Ending. A ridge ending occurs when a ridge ends abruptly.
We define the orientation of a ridge ending as the direction the ridge came from. +- Bifurcation. A bifurcation is formed when two different ridges merge. We define the orientation as the direction from which the merged ridge came. +- Island. An island is a short ridge, composed of two ridge endings whose orientations are in opposing directions. Two ridge endings occurring in neighbouring boxes with opposite configuration indicate the presence of an island. + +![](images/1f6c98fe941059681b61c10901e4ec146550f41392cec53f2d3bfe56542357e6.jpg) +Figure A1. Ten types of minutia. + +- Dot. A dot is an island but on a smaller scale. +- Bridge. A bridge is formed when a ridge branches out and merges with another ridge within a short region. It is composed of two bifurcations. +- Spur. A spur is formed when a ridge branches out and does not merge with another ridge. It is composed of one bifurcation and one ridge ending. +- Eye. An eye is formed by a ridge branching out into two ridges, and then recombining again a short distance later. It consists of two bifurcations. +- Double Bifurcation. As the name suggests, this type of minutia contains two bifurcations in succession. +- Delta. This type of minutia is composed of a dot and a bifurcation, where the dot is between the merging ridges. +- Trifurcation. A trifurcation is a ridge that splits into three separate branches. It can be thought of as two bifurcations occurring in the same place. + +# Ridge Patterns + +Figure A2 shows examples of the different classifications of a fingerprint, including the presence of cores and delta points. + +![](images/f5aadc2d81e9396bc13e51281d940d7041418a0d3877603dce73beab6d864d58.jpg) +1. Arch + +![](images/da6dcce394be2787fdec5e2cd9fa57d75505a3a8cda20233ab401bcb90caa113.jpg) +2. Tented Arch + +![](images/340fdd80be7044ce2b3eb62ce5ed92ff0d4cd467c0e121b43fd8f0154e25bb81.jpg) +3. Left Loop + +![](images/7f3b27d55d1dbfc16ddc05156b8ad317b29882dd12a5ef02642d335782a12280.jpg) +4.
Right Loop + +![](images/59e85061e5a684fdf32db5fe231e3b620ac0d0f51f90631df2e43d522b244919.jpg) +5. Whorl + +![](images/ceb0264f8fb248407a8852a5e4855c0c9eac088b989a9391564c30663310183a.jpg) +6. Twin Loop +Figure A2. Features in a fingerprint. + +# References + +Anonymous. 2001. Friction skin growth. http://www.ridgesandfurrows.homestead.com/Friction_Skin_Growth.html. +Ballan, Meltem, F. Ayhan Sakarya, and Brian L. Evans. 1997. A fingerprint classification technique using directional images. In Proceedings of the IEEE Asilomar Conference on Signals, Systems, and Computers (Nov. 3-5, 1997, Pacific Grove, CA), vol. 1, 101-104. http://www.ece.utexas.edu/~bevans/papers/1997/fingerprints/. +Beeton, Mary. 2002. Scientific methodology and the friction ridge identification process. Identification Canada 25 (3) (September 2002). Reprinted in Georgia Forensic News 32 (3) (November 2002): 1, 6-8. http://www.ridgesandfurrows.homestead.com/files/Scientific_Methodology.pdf. +Collins, Martin W. "Marty". 1992. Realizing the full value of latent prints. California Identification Digest (March 1992): 4-11. http://wwwlatent-prints.com/realizing_the_full_value_of_late.htm. +Dankmeijer, J., J.M. Waltman, and A.G. de Wilde. 1980. Biological foundations for forensic identification based on fingerprints. Acta Morphologica Nederlando-scandinavica 18(1) (March 1980): 67-83. http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&list_uids=7395553&dopt=Citation. +Galton, F. 1892. *Fingerprints*. London: Macmillan. + +Haub, C. 1995. How many people have ever lived on Earth? Population Today 23 (2) (February): 4-5. Reprinted. 30 (8) (November/December 2002): 4-5. http://www.prb.org/Content/ContentGroups/PTarticle/Oct-Dec02/How_Many_People_Have_Ever_Lived_on_Earth_.htm. +Kingston, C.R. 1964. Probabilistic analysis of partial fingerprint patterns. D.Crim. dissertation, University of California, Berkeley. +Lindsay, Don. 2002. Don Lindsay archive. 
http://www.don-lindsay-archive.org/creation/definitions.htm. +Nilsson, Kenneth, and Josef Bigun. 2003. Localization of corresponding points in fingerprints by complex filtering. Pattern Recognition Letters 24 (13) (September 2003): 2135-2144. www.hh.se/staff/josef/public/publications/nilsson03prl.pdf. +Optel Ltd. 2003. New numerical methods of fingerprints' recognition based on mathematical description of arrangement of dermatoglyphics and creation of minutiea [sic]. http://www.optel.pl/software/english/method.htm. +Osterburg, James W., T. Parthasarathy, T.E.S. Raghavan, and Stanley L. Sclove. 1977. Development of a mathematical formula for the calculation of fingerprint probabilities based on individual characteristics. Journal of the American Statistical Association 72: 772-778. +Pankanti, Sharath, Salil Prabhakar, and Anil K. Jain. 2001. On the individuality of fingerprints. http://citeseer.ist.psu.edu/472091.html. +Roddy, A.R., and J.D. Stosz. 1997. Fingerprint features—Statistical analysis and system performances estimates. Proceedings of the IEEE 85 (9): 1390–1421. http://citeseer.ist.psu.edu/roddy99fingerprint.html. +Sclove, Stanley L. 1980. The occurrence of fingerprint characteristics as a two-dimensional process. Communications in Statistics (A) 9: 675-695. http://tigger.uic.edu/~slsclove/papers.htm. +Stoney, David A., and John I. Thornton. 1987. A systematic study of epidermal ridge minutiae. Journal of Forensic Sciences. +Thai, Raymond. 2003. Fingerprint image enhancement and minutiae extraction. Honours thesis, School of Computer Science and Software Engineering, University of Western Australia. http://www.csse.uwa.edu.au/~pk/studentprojects/raymondthai/. +Vacca, J. 2002. Biometric security solutions. Excerpt from Identity Theft. Indianapolis, IN: Prentice Hall PTR. http://www.informit.com/articles/article.asp?p=29801&redirect=1. +Thompson, William C., Simon Ford, Travis Doom, Michael Raymer, and Dan Krane. n.d.
Evaluating forensic DNA evidence: Essential elements of a competent defense review. www.cs.wright.edu/itri/EVENTS/SUMMER-INST-2003/SIAC03-Krane2.PDF. + +# Not Such a Small Whorl After All + +Brian Camley + +Pascal Getreuer + +Bradley Klingenberg + +University of Colorado at Boulder + +Boulder, CO + +Advisor: Anne M. Dougherty + +# Summary + +Fingerprint identification depends on the assumption that a person's fingerprints are unique. We assess the truth of this assumption by calculating the total number of distinct fingerprints. + +We assume accurate fingerprints (ignoring procedural error) that are defined by 12 points of detail or minutiae. The number of distinct fingerprints depends also on the number of potential positions of these minutiae. Two historical methods and a geometric analysis estimate there to be about 1,400 positions, a figure confirmed by our algorithm for counting ridges in a fingerprint. + +We create two models to estimate the number of unique fingerprints: + +- One model counts fingerprints as arrangements of minutiae; +- the other extrapolates the number of fingerprints from the Shannon entropy of the information that defines a fingerprint. + +These two models agree to within an order of magnitude that there are $5 \times 10^{33}$ unique fingerprints, a compelling validation of our general approach. + +To handle the large number of fingerprints, we implement an approximation for the calculation of probabilities. Given a cumulative world population of 120 billion [Catton 2000], the probability of two people ever having the same fingerprint is $1.4 \times 10^{-6}$ . + +The probability of two humans living today sharing a fingerprint is $3.5 \times 10^{-15}$ , which suggests that fingerprints are a theoretically more reliable method of identification than DNA analysis, which has a false-positive probability of $10^{-9}$ . None of these calculations takes into account procedural errors.
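The living-population collision figure in the summary is consistent with the standard birthday-problem approximation. The sketch below is ours, not the authors' implementation; the population $n \approx 5.9 \times 10^9$ and the $N = 5 \times 10^{33}$ distinct prints are assumptions chosen to match the quoted $3.5 \times 10^{-15}$.

```python
from math import expm1

N_PRINTS = 5e33   # distinct fingerprints, from the two models (assumed exact here)
n_alive = 5.9e9   # assumed living population at the time of writing

# Birthday approximation: P(some pair shares a print) = 1 - exp(-n(n-1) / 2N).
p_collision = -expm1(-n_alive * (n_alive - 1) / (2 * N_PRINTS))
print(f"{p_collision:.1e}")  # summary quotes 3.5e-15
```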
+ +# Introduction + +The possibility of duplicate fingerprints, or fingerprints likely to be mistaken for each other, has led to recent criticism of fingerprints as a means of identification [Pankanti et al. 2001]. The key problem is: + +How many distinct fingerprints are there? + +We approach this problem with two general methods: + +- "building" a fingerprint from the ground up, using different models; and +- using the information content of a fingerprint. + +# Assumptions + +- Fingerprint matching is done by comparing minutiae. Comparing small features (minutiae) such as ridge endings and bifurcations within a fingerprint is a typical method (used by the FBI) for recognition of identity, and it provides very good accuracy [Andrew 2002; Hrechak and McHugh 1990; Jain et al. 2001]. +- Two fingerprints with the same minutiae configuration are identical. +- The distribution of minutiae within a fingerprint is uniform. In fact, the minutiae are over-dispersed, but the uniform distribution is a good estimate for their locations [Stoney 1988]. +- Minutiae are statistically independent of one another. This assumption is justified by studies of fingerprint individuality (Galton and Henry) that assume independence [Stoney 1988; Stoney and Thornton 1986]. +- Minutiae are either directed along the flow of the ridge of a fingerprint, or against it; that is, there are only two possible directions. Attempting to measure more than two directions is very difficult [Stoney and Thornton 1986]. +- There is only one type of minutia, the bifurcation. We make this assumption to simplify the problem and to avoid dealing with minutiae (e.g., dots) that have no direction. +- There are no errors in collection—we are dealing with "true" fingerprints. The greatest source of error in fingerprint identification is not in recognition but in the training of employees and the condition of equipment [Fickes 2003]. In addition, determining the minutiae of a fingerprint is nontrivial.
+ +# Model 1: Designing from the Ground Up + +# The Worst-Possible Case + +There are no more than $10^{35,000}$ possible fingerprints. + +The FBI standard for storing and comparing fingerprint data uses 500 dpi (250,000 pixels per square inch), with 8 bits per pixel and an average size of 1.5 square inches per fingerprint [Aboufadel and Schlicker 1999]. If the image is stored as a bitmap, there are $250,000 \times 1.5 \times 8 = 3 \times 10^6$ bits of information in a fingerprint. This implies an absolute maximum of $2^{3,000,000}$ possible fingerprint images without a pixel-for-pixel match. + +However, the FBI does not store images in bitmapped form but instead uses the Cohen-Daubechies-Feauveau 9/7 or "Spline 9/7" wavelet for compression, by a factor of 26:1 with the thresholding used by the FBI. So there can be only $3 \times 10^{6} \div 26 = 1.15 \times 10^{5}$ bits of information within a stored fingerprint. We compressed several typical fingerprints [Bio-Lab 2000] to about 26:1 using our implementation of the wavelet. Comparison of edges in the original and compressed images shows that information about the minutiae is lost at 52:1 and higher levels of compression (Figure 1). + +![](images/145dca11973b78ad3304176b4dd35999ae1f656e446b908fbde23af831f0a8c1.jpg) +Figure 1. CDF $9/7$ wavelet compression. Minutiae detail, such as bifurcations, is lost at compression greater than 26:1. + +These results give the maximum number of fingerprint images as $2^{1.15 \times 10^5} \approx 10^{35,000}$ . + +# Limited Space for Minutiae + +In the previous subsection, we didn't make any assumptions about the image—it could have been a picture of a moose. If minutiae are limited to $L$ physical locations, with at most one per location, the number of distinct fingerprints determined by $n$ minutiae is $\binom{L}{n}$ . + +# The Lower Bound for Total Fingerprints + +Minutiae always occur on ridges, so it makes sense to represent the fingerprint as a set of ridges. 
We consider a typical fingerprint to be a set of 20 concentric ellipses. We assume that there are no minutiae closer than $5^{\circ}$ to each other—in other words, they are reasonably separated. Essentially, we are assuming that the minutiae are more or less uniformly distributed. These assumptions are equivalent to restricting minutiae to the intersections of 20 ellipses and 72 equally spaced radial lines (a simplified version of this is seen in Figure 2). This is similar to the empirical Roxburgh model [Stoney and Thornton 1986].

There are therefore $20 \times 72 = 1{,}440$ possible locations for minutiae.

![](images/6e0897d209208fcc8f5f1a7b4ba04767659d2f46f1362b72d798d6ec56d1ca0c.jpg)
Figure 2. Potential minutiae locations are on intersections of concentric ellipses and radial lines; 1,440 locations (far right) represent a fingerprint.

What number of minutiae should we choose? A typical fingerprint has 30 to 40 minutiae, but not all of these are significant. In fact, even as few as 6 minutiae may be important [Bhowmick et al. n.d.].

The number of distinct fingerprints that can be created by arrangements of 6 minutiae in 1,440 possible locations is

$$
\binom{1440}{6} = \frac{1440!}{6!\,(1440 - 6)!} \approx 1.23 \times 10^{16}.
$$

This is a lower bound on the total number of distinguishable fingerprints.

# Improving the Estimate

Though in some cases there are only 6 useful minutiae, typically there are about 30 to 40 minutiae in a fingerprint [Stoney and Thornton 1986]. If all of these were used for identification, and there were still only 1,440 possible locations for minutiae, then the value for the number of total fingerprints would be

$$
\binom{1440}{35} \approx 2.23 \times 10^{70}.
$$

However, the criteria for identity vary from about 10 to 16 matching features [Stoney and Thornton 1986], implying that using 35 features to define a fingerprint overestimates the number of fingerprints.
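Both binomial coefficients are quick to verify; a minimal sketch using Python's exact integer `math.comb` (our own check, not part of the original paper):

```python
import math

L = 1440  # minutiae locations: 20 concentric ellipses x 72 radial lines

n6 = math.comb(L, 6)    # only 6 significant minutiae (paper: ~1.23e16)
n35 = math.comb(L, 35)  # all ~35 typical minutiae   (paper: ~2.23e70)

print(f"6 minutiae:  {n6:.3e}")
print(f"35 minutiae: {n35:.3e}")
```

Because `math.comb` works with exact integers, there is no floating-point error in the counts themselves, only in the scientific-notation display.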
We assume that 12 features are required to match a fingerprint (the FBI's "quality assurance" standard [Duffy 2002]). Then the number of distinguishable fingerprints is

$$
\binom{1440}{12} \approx 1.59 \times 10^{29}.
$$

# Accounting for Direction of Minutiae

Allowing two orientations (with or against the flow of the ridge) for a minutia doubles the number of possible placements and increases the number of fingerprints to

$$
\binom{2880}{12} \approx 6.64 \times 10^{32}.
$$

Hence, the number of fingerprints is bounded between $1.23 \times 10^{16}$ and $2.23 \times 10^{70}$ but is most likely around $6.64 \times 10^{32}$.

We now narrow this range by using information theory.

# Model 2: Information Theory

# Clarification of Assumptions

We phrase our original assumptions in a new way:

- A single fingerprint contains $n$ minutiae.
- A fingerprint can be effectively mapped as $n$ minutiae falling into the squares of an $x \times x$ grid (at most one minutia per square). There are $x^2$ possible locations for the $n$ minutiae, so $x = \sqrt{L}$, where $L$ is the number of locations for minutiae (for $L = 1{,}440$, $x = \sqrt{1440} \approx 38$).

# Derivation of the Entropy of a Fingerprint

We can visualize a fingerprint as an $x$ by $x$ grid. If there is a minutia in a space, we mark it with an X; if not, we leave it blank (Figure 3).

We treat the two mutually exclusive states, minutia and non-minutia, as the elements of a two-letter alphabet, $a$. An $x$ by $x$ arrangement of this alphabet represents a fingerprint. Shannon's classic first-order equation gives the entropy of the alphabet:

$$
H = - \sum_{i=1}^{m} P(a_i) \log_2 P(a_i), \tag{1}
$$

![](images/ad8c1d03948d1f9763b7637532ad2446331f8b171d7cecad1a945def54622553.jpg)
Figure 3. A section of a possible fingerprint configuration.

where $P(a_{i})$ is the independent probability of state $i$ occurring in the fingerprint and $m$ is the length of the alphabet [Shannon 1948].

For us, $m = 2$. The respective probabilities of each state (minutia or non-minutia) occurring in the alphabet are:

$$
P(a_1) = \frac{\text{minutiae}}{\text{number of spaces}} = \frac{n}{x^2}, \qquad P(a_2) = \frac{\text{non-minutiae}}{\text{number of spaces}} = \frac{x^2 - n}{x^2}.
$$

Substituting back into (1), we are left with the following value for $H$:

$$
H = - \left[ \frac{n}{x^2} \log_2 \left(\frac{n}{x^2}\right) + \frac{x^2 - n}{x^2} \log_2 \left(\frac{x^2 - n}{x^2}\right) \right].
$$

# The Number of Fingerprints Based on Entropy

Because entropy is a measure of the minimum number of bits required to represent each element of the grid, it can be used to determine the total representative requirement of any fingerprint:

bits required for fingerprint $= H \times$ size of fingerprint $= Hx^{2}$.

There are $Hx^{2}$ bits of information in a fingerprint, so there should be $2^{Hx^{2}}$ possible fingerprints. However, in our definition of the grid, we ignored the direction of minutiae. Each minutia has two possible directions, resulting in a total of $2^{n}$ possible directional configurations.

Combining these numbers, we find $2^{Hx^2} \times 2^n = 2^{Hx^2 + n}$ possible fingerprints.

Using the values established earlier ($n = 12$ and $L = 1{,}440$, so $x = 38$ and $x^2 = 1{,}444$), we get

$$
H = - \left[ \frac{12}{38^2} \log_2 \left(\frac{12}{38^2}\right) + \frac{38^2 - 12}{38^2} \log_2 \left(\frac{38^2 - 12}{38^2}\right) \right] \approx 0.07,
$$

which leads to

$$
H \times x^2 \approx 0.07 \times 38^2 \approx 100
$$

and therefore $2^{Hx^2 + n} = 2^{100 + 12} \approx 5.19 \times 10^{33}$ fingerprints.
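The entropy estimate, and its agreement with the combinatorial value from Model 1, can be checked in a few lines (our own sketch; the paper rounds $Hx^2$ to 100, so the unrounded result below is slightly larger than $5.19 \times 10^{33}$):

```python
import math

n, L = 12, 1440                # minutiae used for matching; candidate locations
x = round(math.sqrt(L))        # grid side: 38, so x*x = 1444 ~ L

p = n / x**2                   # P(minutia) for a grid square
H = -(p * math.log2(p) + (1 - p) * math.log2(1 - p))  # Shannon entropy, eq. (1)

bits = H * x**2                # ~100 bits to describe minutia placement
entropy_est = 2 ** (bits + n)  # times 2^n directional configurations

comb_est = math.comb(2 * L, 12)   # Model 1 value with direction, ~6.64e32
print(f"H = {H:.4f}, bits = {bits:.1f}")
print(f"entropy estimate: {entropy_est:.2e}, combinatorial estimate: {comb_est:.2e}")
```

The two estimates land within an order of magnitude of each other, which is the consistency claim made in the next section.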

# Consistency of the Two Models

Experimentation with different values for $L$ and $n$—not only for reasonable ranges of $L$ (500–2,500) and $n$ (6–18) but also for truly ridiculous numbers—indicates that the values from the combinatorial model and from the information-theory model agree to within an order of magnitude.

# How Many Minutiae Locations Are There?

The physical dimensions of a minutia confirm the estimate of $L \approx 1{,}400$ in two different ways.

# The Kingston Method

A visual inspection of a $300 \times 300$ pixel fingerprint image [Bio-Lab 2000] reveals that a minutia can be contained in a $9 \times 9$ pixel square (Figure 4). This would suggest that we can put a maximum of

$$
\frac{300 \times 300}{9 \times 9} \approx 1{,}100
$$

minutiae into one image. This method for estimating the number of possible minutiae locations in a fingerprint recalls the Kingston method, which calculates this value based on the area occupied by a minutia [Stoney and Thornton 1986]. This value for $L$ confirms our initial geometric estimate.

![](images/6184055a80673676c819ea42c8f313f2adb1c0351af16271e32b15495050f9ce.jpg)
Figure 4. A minutia can typically be contained in a 9-by-9 pixel area.

# The Amy Method

Amy's method [Stoney and Thornton 1986] calculates the number of potential minutiae positions $L$ as

$$
L = (\mathcal{I} - \iota + 1)^2,
$$

where $\mathcal{I}$ is the number of ridge intervals on a side of a known fingerprint (see Figure 5). The studies of Roxburgh established the value of $\iota$, the size of a minutia, to be $\sqrt{5/2}$ ridge intervals [Stoney and Thornton 1986].

![](images/c04fecf50a30e3991d73f8548829ae00efab219f4101420ac8a7c2e318897c64.jpg)
Figure 5. The number of ridge intervals on a fingerprint.

The average value of $\mathcal{I}$ is 38 for a typical fingerprint, from our ridge-counting algorithm (see the Appendix).
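Both location estimates reduce to one-line computations; a quick sketch using the figures above (300-pixel image, 9-pixel minutia, $\mathcal{I} = 38$):

```python
import math

# Kingston-style estimate: image area / minutia footprint
image_side_px, minutia_px = 300, 9
L_kingston = (image_side_px / minutia_px) ** 2     # ~1,111

# Amy's method: L = (I - iota + 1)^2, iota = sqrt(5/2) ridge intervals
I_ridge = 38                                       # from the ridge-counting algorithm
iota = math.sqrt(5 / 2)
L_amy = (I_ridge - iota + 1) ** 2                  # ~1,400

print(f"Kingston: {L_kingston:.0f}, Amy: {L_amy:.0f}")
```

Both values land near the geometric prediction of $L = 1{,}440$.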
We calculate the number of potential minutiae locations as

$$
L = \left(38 - \sqrt{\frac{5}{2}} + 1\right)^2 \approx 1{,}400.
$$

This value is almost exactly the number that we predicted using only the initial geometry of the fingerprint!

The geometry of fingerprint images and Amy's method confirm, in two unrelated ways, our prediction of $L = 1{,}440$.

# Estimating the Odds of Duplicate Prints

# General Method

Select a random person from a group of $N$. The probability that a second person selected at random has a different fingerprint from the first is $(f - 1)/f$, where $f$ is the total number of fingerprints. Generalizing, as in the classic birthday coincidence problem, the probability $P(N)$ of picking $N$ unique fingerprints (that is, no duplication) is

$$
P(N) = \prod_{i=1}^{N-1} \left(\frac{f - i}{f}\right) = \frac{1}{f^{N-1}} (f - 1)(f - 2)(f - 3) \cdots (f - N + 1). \tag{2}
$$

We calculated the probability of a duplication by writing a C program, using arbitrary-precision arithmetic to deal with the fact that $f$ is very large. However, this calculation requires a lot of time for large $N$, so it is useful to have an easy-to-calculate approximation.

# Approximation

We express (2) as

$$
P(N) = \frac{1}{f^{N-1}} \left(f^{N-1} + c_1 f^{N-2} + c_2 f^{N-3} + \dots + c_{N-1}\right),
$$

where the $c_i$ are integer coefficients of the powers of $f$. We can then write

$$
P(N) = 1 + \frac{c_1}{f} + \frac{c_2}{f^2} + \dots + \frac{c_{N-1}}{f^{N-1}}.
$$

Since $f$ is large, $f^{-2} \ll f^{-1}$ and we can discard everything except the first two terms, as long as $N$ is not of the same order as $f$, getting

$$
P(N) \approx 1 + \frac{c_1}{f}.
$$

Now, what is the coefficient $c_1$? The product $(f-1)(f-2) \cdots (f-N+1)$ is a sum of terms created by choosing either $f$ or the constant from each binomial. A term in $f^{N-2}$ occurs when $f$ is chosen from every binomial except one. There are $N-1$ ways to do so, resulting in the coefficients $-1, -2, \ldots, -(N-1)$. Therefore,

$$
c_1 = (-1) + (-2) + \dots + (-(N-1)) = -\frac{N^2 - N}{2}.
$$

Hence

$$
P(N) \approx 1 - \frac{N^2 - N}{2f}.
$$

The probability of a fingerprint duplication is

$$
1 - P(N) \approx \frac{N^2 - N}{2f} \approx \frac{N^2}{2f}.
$$

Table 1. Probability of a fingerprint coincidence, for various numbers of fingerprints and population sizes.
| $N$ | $f = 1.23 \times 10^{16}$ | $f = 6.64 \times 10^{32}$ | $f = 5.19 \times 10^{33}$ |
|---|---|---|---|
| $10^5$ | $4 \times 10^{-7}$ | $10^{-23}$ | $10^{-24}$ |
| $10^6$ | $4 \times 10^{-5}$ | $10^{-21}$ | $10^{-22}$ |
| $10^7$ | $4 \times 10^{-3}$ | $10^{-19}$ | $10^{-20}$ |
| $10^8$ | $0.334$ | $10^{-17}$ | $10^{-18}$ |
| $10^9$ | $0.999$ | $10^{-15}$ | $10^{-16}$ |
| $6 \times 10^9$ (current world) | $1$ | $3 \times 10^{-14}$ | $3 \times 10^{-15}$ |
| $120 \times 10^9$ (cumulative world) | $1$ | $10^{-11}$ | $1.4 \times 10^{-12}$ |
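The exact product in (2) and the approximation can be compared numerically; the sketch below (in Python rather than the authors' arbitrary-precision C program) evaluates the product in log space, which is accurate enough at these magnitudes:

```python
import math

def dup_prob_exact(N, f):
    """1 - P(N): chance of at least one duplicate among N prints, from (2)."""
    log_p = sum(math.log1p(-i / f) for i in range(1, N))  # log of the product
    return -math.expm1(log_p)

def dup_prob_approx(N, f):
    """Leading-order approximation N^2 / (2 f)."""
    return N * N / (2 * f)

f = 1.23e16                    # lower-bound count of distinct fingerprints
for N in (10**5, 10**6):
    print(N, dup_prob_exact(N, f), dup_prob_approx(N, f))
```

For the lower-bound $f = 1.23 \times 10^{16}$, the exact and approximate duplication probabilities agree to a relative error of about $1/N$, consistent with the first column of Table 1.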

# Odds of Misidentification

Table 1 gives the probability of a fingerprint coincidence for various population sizes, for each of our estimates of the number of fingerprints.

Based on either our best value ($6.64 \times 10^{32}$) for the number of different prints or the information-theory estimate ($5.19 \times 10^{33}$), there is essentially no chance of duplicating a fingerprint.

# Conclusions

We use the basic geometry of a fingerprint and the known distribution of minutiae to determine that there are about 1,440 possible minutiae locations in a fingerprint. We confirm this by studying minutiae in digitized fingerprints.

Using this value, we calculate the number of possible distinct fingerprints given a certain number of minutiae to be used for identification. We choose the FBI "quality assurance" standard of 12 minutiae [Duffy 2002], which results in $6.64 \times 10^{32}$ possible fingerprints once minutiae direction is taken into account. This number is confirmed by the amount of information in a model of a fingerprint, which gives an estimate of $5.19 \times 10^{33}$ fingerprints using 12 minutiae.

If only six minutiae are used to determine a fingerprint, so that there are only $1.23 \times 10^{16}$ fingerprints, then the probability of a duplication in one billion people approaches unity. In fact, even in only 100 million people (the order of the FBI's fingerprint database), there would be a reasonable probability (0.33) of a fingerprint duplication.

However, using 12 minutiae, the probability of a fingerprint duplication in six billion people is only $3 \times 10^{-15}$; the probability of a duplication among the 120 billion people who have ever lived is only $1.39 \times 10^{-12}$. Therefore, there is little risk of mistaken identity based on 12 minutiae, given perfect fingerprints. In the real world, fingerprints are not perfect, and the largest sources of error are from mishandling and errors in the process [Fickes 2003].

DNA analysis has a theoretical probability of false positives on the order of $10^{-9}$, though this figure is often disputed [Thompson et al. 2003]. Fingerprint identification is thus theoretically more reliable than DNA testing.

# Strengths and Weaknesses of the Models

# Strengths

- Agreement of the models. The same general number of possible minutiae locations is calculated in three completely separate ways (by assuming uniform distribution, by looking at the size of a minutia, and by counting ridges and using Amy's method). This consistency suggests that our result is reasonable. In addition, our two vastly different models (combinatorial and information-theoretic) produce consistent numbers.

# Weaknesses

- Assumptions about the minutiae. We assume that there is only one type of minutia, when in fact there are at least three (bifurcations, endpoints, and dots) [Stoney and Thornton 1986]. Some of these are orientable and others are not, but we assume that all minutiae have two directions. In addition, though our assumption of uniform distribution is borne out by study of real fingerprints, correlations between minutiae could strongly skew the number of possible locations.

- The models work only in an ideal setting. Many factors can create errors in a fingerprint, such as dirt, operator error, and mechanical breakdowns. Our models do not address this and only set a maximum theoretical accuracy for fingerprinting.

# Appendix: Ridge-Counting Algorithm

Our ridge-counting algorithm, which estimates the number of concentric ridges in an image, supplies useful empirical results to support and validate our theoretical work. The steps of the algorithm are:

- edge detection and thresholding,
- selecting the ridge core location, and
- counting the maximum number of ridges around the core.

# Edge Detection and Thresholding

Given an image $X(x, y)$, we use edge-detection filters $f_x$ and $f_y$ to estimate the image gradient.
These are directional derivatives of the general Gaussian function (Figure 6).

$$
f_x(x, y) = -x \exp\left(-\frac{x^2}{2\sigma_x^2} - \frac{y^2}{2\sigma_y^2}\right), \qquad f_y(x, y) = -y \exp\left(-\frac{y^2}{2\sigma_y^2} - \frac{x^2}{2\sigma_x^2}\right).
$$

![](images/889d131002ed1bff40671f42a90851300758ae03f6da4f236d5a2a9489f5bdd6.jpg)
Figure 6. Edge detector filter.

With $f_x$ operating horizontally and $f_y$ vertically, the magnitude of the image gradient is

$$
\nabla X(x, y) \approx \sqrt{\left[(X * f_x)(x, y)\right]^2 + \left[(X * f_y)(x, y)\right]^2}.
$$

Thresholding the gradient yields a binarized image $E$ of the edges,

$$
E(x, y) = \begin{cases} 1, & \text{if } \nabla X(x, y) > \alpha; \\ 0, & \text{if } \nabla X(x, y) \leq \alpha, \end{cases} \qquad \min \nabla X < \alpha < \max \nabla X,
$$

where $\alpha$ is the threshold level.

# Selecting the Ridge Core Location

Most fingerprints are essentially concentric curves around a central part of the print, the ridge core. To find the number of ridges, one can begin at the core and move outwards, counting the ridges crossed along the path. Unfortunately, automatically determining the core location is difficult. One method estimates the direction of the ridges through the directional field [Chan et al. 2004]. In this method, the image is segmented into $w \times w$-pixel blocks and a least-squares orientation method finds the smoothed orientation field $O(x, y)$. The value $\sin(O(x, y))$ reflects the local ridge direction and indicates the core, which is where the direction curves fastest.

It is very easy to locate the core of a fingerprint by eye, and small deviations in the location of the core, like those expected from human error, do not make large differences in the measured number of ridges.
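The edge-detection and thresholding stage can be sketched in a few lines of NumPy. This is our own minimal illustration: the kernel size, the single isotropic $\sigma$, and the threshold are illustrative choices, not values from the paper.

```python
import numpy as np

def gaussian_deriv_kernels(size=7, sigma=1.5):
    """Kernels f_x, f_y: derivative-of-Gaussian edge filters, as in the text."""
    r = size // 2
    y, x = np.mgrid[-r:r + 1, -r:r + 1].astype(float)
    g = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return -x * g, -y * g

def edge_map(image, alpha, size=7, sigma=1.5):
    """Binarized edge image E: gradient magnitude thresholded at alpha."""
    fx, fy = gaussian_deriv_kernels(size, sigma)
    r = size // 2
    h, w = image.shape
    grad = np.zeros((h - 2 * r, w - 2 * r))
    for i in range(h - 2 * r):          # 'valid' 2-D correlation, no padding
        for j in range(w - 2 * r):
            patch = image[i:i + size, j:j + size]
            grad[i, j] = np.hypot(np.sum(patch * fx), np.sum(patch * fy))
    return (grad > alpha).astype(int)

# Toy example: a vertical step edge is flagged near the middle columns.
img = np.zeros((15, 15))
img[:, 8:] = 1.0
E = edge_map(img, alpha=1.0)
```

The gradient magnitude is zero on the flat regions (both terms of the correlation cancel over a constant patch), so only pixels whose neighborhood straddles the step survive the threshold.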

# Maximum Number of Ridges around the Core

The next step is to use the $E(x, y)$ image to count the ridges, beginning at the core and counting the number of changes between 1 and 0. A typical edge is counted twice, and a typical ridge has two edges; so dividing the count by 4 estimates the number of ridges. However, moving along only one path may miss the shorter ridges at the edge of the print or ridges with noisy edges. Instead, we count along many directions and use the highest count (Figure 5).

# Results from Implementation

We applied the algorithm to 80 optically scanned fingerprint images from Bio-Lab [2000]. The last nine images were too noisy for the edge detection and were discarded. The distribution of the remainder is roughly symmetric about a mean of 38.0 ridges, with a standard deviation of 5.3.

# References

Aboufadel, Edward, and Steven Schlicker. 1999. Discovering Wavelets. 1st ed. John Wiley and Sons.
Bhowmick, P., A. Bishnu, B.B. Bhattacharya, M.K. Kundu, C.A. Murthy, and T. Acharya. n.d. Determination of minutiae scores for fingerprint image applications. http://citeseer.ist.psu.edu/554334.html.
Bio-Lab, University of Bologna. 2000. FVC 2000: Fingerprint Verification Competition. http://bias.csr.unibo.it/fvc2000/.
Bolle, Ruud M., Andrew W. Senior, Nalini K. Ratha, and Sharath Pankanti. 2002. Fingerprint minutiae: A constructive definition. In Proceedings ECCV Workshop on Biometrics. http://citeseer.ist.psu.edu/591141.html.
Brualdi, Richard A. 1999. Introductory Combinatorics. 3rd ed. Upper Saddle River, NJ: Prentice Hall.
Caton, William R., Jr. 2000. Worse than foreseen by Malthus (even if the living do not outnumber the dead). http://desip.igc.org/malthus/Caton.html.
Chan, K.C., Y.S. Moon, and P.S. Cheng. 2004. Fast fingerprint verification using subregions of fingerprint images. IEEE Transactions on Circuits and Systems for Video Technology 14 (1) (January 2004): 95-101.
http://ieeexplore.ieee.org/xpl/abs_free.jsp?arNumber=1262035.
Cotton, R.W., and C.J. Word. 2003. Commentary on Thompson et al. [2003]. Journal of Forensic Sciences 48 (5): 1200.
Duffy, S.P. 2002. Experts may no longer testify that fingerprints "match." The Legal Intelligencer (9 January 2002). http://wwwtruthinjustice.org/print-match.com.
Fickes, Michael. 2003. Dirt: A fingerprint's weakest link. Access Control and Security Systems (1 February 2003). http://govtsecurity.securitysolutions.com/ar/security_dirt_fingerprints_weakest/.
Hrechak, A.K., and J. McHugh. 1990. Automated fingerprint recognition using structural matching. Pattern Recognition 23: 893-904.
Jain, Anil, Arun Ross, and Salil Prabhakar. 2001. Fingerprint matching using minutiae and texture features. In Proceedings of the International Conference on Image Processing (ICIP) (Thessaloniki, Greece), 282-285. http://citeseer.ist.psu.edu/jain01fingerprint.html.
Pankanti, Sharath, Salil Prabhakar, and Anil K. Jain. 2001. On the individuality of fingerprints. IEEE Transactions on Pattern Analysis and Machine Intelligence: 1010-1025. http://citeseer.ist.psu.edu/472091.html.
Shannon, C.E. 1948. A mathematical theory of communication. Bell System Technical Journal 27: 379-423, 623-656.
Stoney, D.A. 1988. Distribution of epidermal ridge minutiae. American Journal of Physical Anthropology 77: 367-376.
Stoney, D.A., and J.I. Thornton. 1986. A critical analysis of quantitative fingerprint individuality models. Journal of Forensic Sciences 31: 1187-1216.
Thompson, W.C., F. Taroni, and C.G.G. Aitken. 2003. How the probability of a false positive affects the value of DNA evidence. Journal of Forensic Sciences 48 (1): 47-54.

See Cotton and Word [2003] for a commentary.

# Rule of Thumb: Prints Beat DNA

Seth Miller

Dustin Mixon

Jonathan Pickett

Central Washington University

Ellensburg, WA

Advisor: Stuart F. Boersma

# Summary

We distinguish thumbprints by their global features (those that can be easily detected by the naked eye): the thumbprint's pattern, the ridge count, and the type lines. With this model, the probability that two thumbprints have the same global characteristics is at least $5.02 \times 10^{-4}$.

In a second model, we pay more attention to local features (those not easily detected by the naked eye), in particular the thumbprint's minutiae. We look at the number of minutiae, their location, and the direction in which their corresponding furrow endings point. With this stronger model, the probability that two thumbprints have the same local characteristics is approximately $6.41 \times 10^{-143}$.

With both models, we estimate the probability that every thumbprint in the history of mankind is unique to be virtually zero (though this contradicts what our logic dictates).

Since identical twins share the same DNA pattern, the probability that every DNA strand in the history of mankind is unique must be zero. Given two DNA strands taken at random from people, the probability that they are similar is approximately $8.29 \times 10^{-14}$. Since this probability is significantly more than the analogous probability for local thumbprint features, we conclude that DNA testing has a higher rate of misidentification.

# Judge's Commentary: The Outstanding Fingerprints Papers

Michael Tortorella

Dept. of Industrial and Systems Engineering

Rutgers University

Piscataway, NJ

mtortore@rci.rutgers.edu

# Introduction

The brief statement of this problem hid many layers of complication. Teams were challenged to find the mathematical kernel of a problem of interest to anthropologists, forensic scientists, attorneys and judges, and just plain folks. In essence, the problem reduces to

Estimate the probability that two humans who have ever lived have the same fingerprint.

After the MCM was over, the Wall Street Journal carried an article entitled "Despite its reputation, fingerprint evidence isn't really infallible" [Begley 2004]. The uncritical acceptance of fingerprint evidence that was common in the past is undergoing new scrutiny, and our examination of the question in the MCM was timely indeed, if unplanned.

# The Issues

# Philosophical Questions

The problem seems innocuous enough; but you get very quickly into some deep—even philosophical—questions that have to be addressed in modeling assumptions. For example, exactly how finely does nature model the real numbers? In mathematics, between every two real numbers there is another one; if between any two positions in physical space there is another one (distinguishable—by whom?—from the other two), then a homotopy between any two distinct fingerprints—however "fingerprint" is defined—produces an infinite number of distinct fingerprints. So the probability requested could reasonably be asserted to be zero, even if we say translations and rotations of a given fingerprint are not different from the original. Hence, a purely mathematical approach to the problem is not very interesting. On the other hand, the number of people $n$ who have ever lived is finite, so we may find the answer "zero" unsatisfying.

# A Simple Model

Here is a simple model that takes a next step: Assume that fingerprints (the actual skin patterns) are assigned at birth, at random from a pool of potential fingerprints. If we assume that the pool contains $N \gg n$ elements and selection is made on an equally likely basis, then the probability that there are no two fingerprints alike is the solution to a birthday problem with $n$ people and $N$ birthdays; namely, the probability of no match (denoted $Q_1 = 1 - P_2$ in Weisstein [2004]) is given by

$$
P(\text{no match}) = \frac{N}{N} \cdot \frac{N-1}{N} \cdots \frac{N-n+1}{N} = \frac{N!}{(N-n)!\,N^n} \approx e^{-n(n-1)/2N},
$$

which for fixed $n$ is asymptotic to 1 as $N \to \infty$.

This is about the simplest model that one could devise for this problem, and teams should use these sorts of simple models as a baseline against which to assess other more complicated efforts. One thing that we learn from this model is that, in effect, all the additional definitions of "fingerprint" serve essentially to shrink the pool of possible "fingerprints," that is, to restrict $N$ so that there is a chance that the probability of no match will be less than 1.

# Reinterpreting the Question

In fingerprint analysis, a human being, either with the unaided eye or with some tool(s), judges two fingerprints to be "identical." So a reasonable interpretation of the relevant question could be:

Determine the probability that there have never been two identical fingerprints, given the capability of the technology used to determine "different."

This is a little more interesting a question. Different assumptions can reasonably be made concerning this capability, which lead to different models and, usually, different answers.

# But First You Have to Define . . .

The first step in developing a model based on this question is to define:

- "fingerprint,"
- the probability space in which this experiment is conducted, and
- "distinct."

"Distinct" depends on who's looking; or, to put it more conventionally, resolution matters. All this is by way of scope delineation, so that when an answer is arrived at, the domain in which the answer is valid will be clear.

The definition of "fingerprint" is wrapped up in the definition of the probability space, because most teams made assumptions about the minimum spacing between ridges that could possibly occur.
This assumption is based on empirical evidence (at least from humans who have been alive in the last century or so) and is the first step down a road leading to consideration of only a finite number of potential fingerprints.

Additional assumptions of this nature included restriction of the mathematical model to the six common types of fingerprint patterns (loops, arches, whorls, etc.) and a few variations.

# The Importance of Interpretation

As always, interpretation is the key to success in modeling problems. The first key was to understand that the word "fingerprint," in addition to its usual semantic or prose usage, must be given a mathematical meaning in the context of a model. Successful papers began by providing a mathematical definition of "fingerprint," for example, as a rectangular area, 2 cm by 3 cm, containing alternating ridges and valleys arranged in one of six global patterns (arch, tented arch, left loop, right loop, whorl, and twin loop). Alternatively, one may distinguish between the fingerprint as a physical or biological entity on the body and a fingerprint as an image made on paper or another surface by a deliberate or accidental process. Any of these can lead to reasonable solutions, but the modeler's choice should be made clear.

Once that is accomplished, it begins to be possible to talk in quantitative terms about how two fingerprints may be distinguished. Most papers adopted the FBI criteria concerning the number and location of minutiae as their differentiating method. A minutia is a local feature of the fingerprint, for example, the end of a ridge line or an isolated very short ridge of approximately the same length and width. Again, the standard FBI categorization of minutiae was most often used. A grid of some size (typically 1 mm on a side) is imposed on the fingerprint and the presence or absence of a minutia in a grid square is recorded (at most one minutia per square is permitted).
Some papers noted that the size of the grid square should be approximately equal to, or slightly smaller than, the typical size of a minutia, so that the possibility of more than one minutia in a grid square is minimized. The feature-resolving capability of the instrument used to view the fingerprint also matters, because if infinite resolution is possible, then all fingerprints will look different. In fact, this observation implies that one can pose this problem

- "theoretically," treating the "fingerprint" as a mathematical construct and using only properties of the real numbers, etc., to form a solution; or
- "practically," where the aspects of detectability of differences by human or machine methods are central.

A good solution to this problem, like that of the paper from the team at University College Cork, treats both aspects and their interplay.

# At Last, a Model

Even with these few assumptions, a model is possible: the total number of possible distinct fingerprints implied (one bit for each of the 600 one-millimeter grid squares in a 2 cm by 3 cm print) is $2^{600} \approx 10^{181}$. The number of people who have ever lived is about $1.06 \times 10^{11}$; so, assuming that all $2^{600}$ patterns are equally likely, the probability that no two persons who have ever lived have the same fingerprint is approximately $1 - 10^{-159}/2$ (the latter computed from the "birthday problem" with $1.06 \times 10^{11}$ people and $2^{600}$ possible "birthdays," a point that many teams missed). The University College Cork team handled this approach about as well as could be.

It is easy to poke holes in this model. Empirically, it is clear that

- not every grid square has the same probability of containing a minutia,
- stochastic independence of the presence or absence of minutiae from grid square to grid square is not reasonable, and
- there are several different types of minutiae.
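The numbers in this baseline model can be checked directly; a sketch using the same $n(n-1)/2N$ birthday approximation quoted above:

```python
import math

n = 1.06e11          # people who have ever lived
squares = 600        # 2 cm x 3 cm print on a 1-mm grid
N = 2 ** squares     # equally likely minutia patterns (exact Python integer)

# log10(2^600) ~ 180.6, i.e. about 10^181 possible patterns
log10_patterns = squares * math.log10(2)

# Birthday approximation: P(some match) ~ n(n-1) / (2N), here ~10^-159
p_match = n * (n - 1) / (2 * N)

print(f"patterns ~ 10^{log10_patterns:.1f}, P(match) ~ {p_match:.1e}")
```

Note that $2^{601}$ still fits comfortably within a double's exponent range, so the final division can be done in ordinary floating point.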

Many teams overcame these objections by adding to the basic model assumptions covering several types of minutiae and various other refinements based on empirical observations of physical characteristics, such as ridge width, interridge distance, and frequency of occurrence of different types of minutiae in a grid square. For example, both the papers from Harvey Mudd College and University College Cork introduced orientation of minutiae as another distinguishing characteristic (although the University College Cork paper does not follow through on this additional detail, giving it the feeling of a dead end). In all cases, though, when such assumptions based on empirical observation are introduced, the modeler should attempt to bound the answers using a range of reasonable values for the inputs, because sampling error could affect the assumptions. One could argue that sampling error should be negligible in drawing inferences from a database containing millions of records, like most fingerprint databases, but most teams did not address this issue in any way.

Finally, the problem asks for comparison of the computed probability with the probability of misidentification by DNA evidence, a topic much in the public eye in the last decade. Some teams ignored this requirement. Others quoted popular anecdotes concerning the DNA misidentification probability. In the latter case, teams would be advised to bolster their contentions with at least one legitimate citation.

# As Always, Advice

- Make your paper easy to read. That means, at least, number the pages and the equations, check your spelling, and double-space the text (or at least use a font size large enough for easy readability). All three Outstanding papers shown here did a good job with this.
- Good organization will not make up for poor results, but poor organization can easily overwhelm good results and make them hard to dig out.
Organize the paper into sections corresponding to the parts of the problem.
- Define all terms that a reader might find ambiguous; in particular, any term used in the model that also has a common prose meaning should be carefully considered. The paper from University College Cork in particular does a very thorough job with this.
- Complete all the requirements of the problem. If the problem statement says certain broad topics are required, begin by making an outline based on those requirements.
- Read the problem statement carefully, looking for key words implying actions: design, analyze, compare, etc. (imperative verbs). These are keys to the sections your paper ought to contain.
- Address sensitivity to assumptions as well as the strengths and weaknesses of the model. That means that these topics should be covered separately in sections of their own.
- When you do strengths and weaknesses, or sensitivity analysis, go back to your list of assumptions and make sure that each one is addressed. This is your own built-in checklist aiding completeness; use it.
- Your summary should state the results that you obtained, not just what you did. Keeping the reader in suspense is a good technique in a novel, but it simply frustrates judges who typically read dozens of papers in a weekend. The University of Colorado paper has an excellent summary: crisp, concise, and thorough.

- Use high-quality references. Papers in peer-reviewed journals, books, and government websites are preferred to individuals' websites. Note also that it is not sufficient to copy, summarize, or otherwise recast existing literature; judges want to see your ideas. It's okay to build on the literature, but there must be an obvious contribution from the team.
- Verify as much as you can. For example, the total population of the earth should be readily verifiable. Make whatever sanity checks are possible: is the answer you get larger than the number of atoms in the known universe?
If it is, should it be?
+- Finally, an outstanding paper usually does more than is asked. For example, the University of Colorado team created two different models to attack the problem and compared the results from each approach; the reasonably good agreement they obtained showed that either
+
+- they were on the right track, or
+- they were victims of very bad luck in that both of the methods that they tried gave nearly the same bad answers!
+
+# References
+
+Begley, Sharon. 2004. Despite its reputation, fingerprint evidence isn't really infallible. *Wall Street Journal* (4 June 2004) B1.
+
+Weisstein, Eric. 2004. Birthday problem. http://mathworld.wolfram.com/BirthdayProblem.html.
+
+# About the Author
+
+![](images/fb8141eec3e3b7d681bbc10253047b0590932592f2698a46acb47d99ae2f8b00.jpg)
+
+Mike Tortorella is a Research Professor in the Department of Industrial and Systems Engineering at Rutgers, the State University of New Jersey, and the Managing Director of Assured Networks, LLC. He retired from Bell Laboratories as a Distinguished Member of the Technical Staff after 26 years of service. He holds the Ph.D. degree in mathematics from Purdue University. His current research interests include stochastic flow networks, information quality and service reliability, and numerical methods in applied probability. Mike has been a judge at the MCM since 1993 and has particularly enjoyed the MCM problems that have a practical flavor of mathematics and society. Mike enjoys amateur radio, the piano, and cycling; he is a founding member of the Zaftig Guys in Spandex road cycling team.
+
+# Practitioner's Commentary: The Outstanding Fingerprints Papers
+
+Mary Beeton
+
+A.F.I.S. Technician
+
+Durham Regional Police Service
+
+Oshawa, Ontario
+
+Canada
+
+mbeeton@drps.ca
+
+# Introduction
+
+I read with great interest the three Outstanding papers.
A mathematical model to assess the requested probability, that the thumbprint of every human who has ever lived is different, is beneficial to the science of identification from fingerprints.
+
+I offer a viewpoint from the forensic discipline of friction ridge identification. As an A.F.I.S. (Automated Fingerprint Identification System) Technician, I examine more than 125,000 fingerprints each year. On a daily basis, I determine if "friction ridge skin impressions" (also known as latent prints or latent marks) collected at crime scenes came from the same source as a known "rolled" inked fingerprint (also known as an exemplar). Mathematical models, such as those suggested by the Outstanding teams, could help resolve the hypothesis that a "small distorted fragment" of a friction ridge skin impression came from the same source as a known "rolled" inked fingerprint.
+
+# Uniqueness of Fingerprints
+
+Over 100 years of research in embryology, genetics, biology, and anatomy have documented the extensive genetic, biological, and random environmental occurrences that take place during fetal growth; all this work supports the premise that friction skin is unique, as does statistical research. Despite the oversimplification in each model from unrealistic assumptions such as
+
+- minutiae occur at uniform rates along a particular ridge,
+- only two types of minutiae exist,
+- all fingerprints are perfect rolled prints, and
+- ridge widths are consistent, so that pores and edge shapes are not significant,
+
+each of the teams' conclusions supports fingerprint uniqueness. Given the time constraint in the contest, oversimplification was likely unavoidable.
+
+Most legal professionals, scholars, and scientists support the view that "nature never repeats itself," and most do not dispute fingerprint uniqueness.
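The flavor of these simplified models can be shown with a toy calculation. The sketch below is my own illustration, not any team's actual model: it assumes a print reduces to $G$ independent grid squares, each holding a ridge ending, a bifurcation, or no minutia at assumed rates; multiplying per-square agreement probabilities gives a per-pair match probability, and a birthday-style bound then estimates the chance of any coincidental match among everyone who has ever lived.

```python
import math

# Toy Galton-style grid model (illustration only; all rates and sizes are
# assumed here, not taken from any of the Outstanding papers).
G = 250  # number of independent grid squares per print (assumed)
p_cell = {"none": 0.8, "ending": 0.1, "bifurcation": 0.1}  # assumed rates

# Probability that two independent random prints agree in one square,
# and then in all G squares.
p_agree_cell = sum(p * p for p in p_cell.values())   # = 0.66
p_match = p_agree_cell ** G                          # per-pair match probability

# Birthday-style bound: chance that at least one of the C(N, 2) pairs among
# the roughly 1e11 humans who have ever lived matches (Poisson approximation).
N = 1e11
expected_matches = (N * (N - 1) / 2) * p_match
p_any_match = -math.expm1(-expected_matches)  # = 1 - exp(-expected_matches)

print(f"per-pair match probability: {p_match:.2e}")
print(f"chance of any coincidental match: {p_any_match:.2e}")
```

Note how sensitive the bound is to the assumptions: with $G = 100$ squares instead of 250, the same arithmetic makes a coincidental match virtually certain, which is why such oversimplified models must be interpreted cautiously.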
+
+# Challenges of Fingerprint Identification
+
+Why is it, then, that the scientific reliability of friction ridge identification is frequently challenged in the courtroom? In 1993, the U.S. Supreme Court, in a ruling for the civil case Daubert v. Merrell Dow Pharmaceuticals, Inc., 509 U.S. 579 (1993), outlined specific "Daubert" criteria to be used by trial judges in assessing the admissibility of scientific evidence. While the Court's ruling is not necessarily binding on the individual states, many states have adopted the "Daubert" standard or the federal rules on which it was based. In 1999, the Assistant Federal Defender for the State of Pennsylvania was the first to challenge the admissibility of fingerprint evidence based on his own interpretation of the Court's ruling and the five "Daubert" criteria. Robert Epstein, defense counsel for Byron Mitchell on trial for armed robbery, posed the question:
+
+Is there a scientific basis for a fingerprint examiner to make an identification, of absolute certainty, from a small distorted latent fingerprint fragment, revealing only a small number of basic ridge characteristics such as nine characteristics identified by the FBI examiner at Mr. Mitchell's first trial?
+
+Friction ridge identifications usually involve smaller, frequently distorted friction ridge skin impressions that are compared to a usually larger rolled inked impression collected under a controlled environment. Therefore, it is necessary for the friction ridge identification specialist to analyze the latent print and assess if sufficient quantity and quality of unique friction ridge features can be observed. If sufficient information exists, the latent print is deemed to be "identifiable." Dr. Christophe Champod, co-author of a new book on fingerprint science [Champod et al. 2004], states that "The number of studies devoted to partial marks, taking into account realistic features and effects such as pressure distortion and clarity is very limited.
There is huge room for improvement here."
+
+A probability model, based on a statistical analysis of the frequency or rarity of all types of friction ridge features and modified to account for different types of distortion, might be used to quantify the apparent "sufficient observed uniqueness" in a latent print and hence help support a conclusion of a positive fingerprint identification or "individualization." However, given our knowledge of the morphology of friction ridge skin, statistical analysis may never encompass all of the significant friction ridge features that can be observed in the latent print by the friction ridge identification specialist and applied to the identification process. Nevertheless, it is certainly an endeavour worthy of considerable attention by the scientific community.
+
+# Certainty
+
+A key phrase in Robert Epstein's statement is "of absolute certainty." How can Friction Ridge Identification Specialists do what they say they can do with absolute certainty? Dr. Champod argues that the opinion of positive identification made by the friction ridge identification specialist "is based on inductive reasoning" and, therefore, "must be probabilistic" [Champod et al. 2004]. However, others suggest that the probability is so small that it can be disregarded and hence the latent print examiner's conclusion of $100\%$ certainty is acceptable. Even Dr. Champod supports the view that individualization cannot be achieved through statistics: "But statistics can do no more than provide a probability. It is for others to decide on whether that probability is small enough to conclude identity of source" [Champod et al. 2004].
+
+Dr. Champod further suggests that
+
+... the benefits from statistics applied to the fingerprint identification field will include a way to assess the statistical value of marks declared insufficient for identification.
A model should allow probabilities to be assigned to partial marks, e.g., assessing the chance of finding another finger showing a limited number of matching features.
+
+Unfortunately, there are inherent risks in bringing a probability model out of academia and into a courtroom. In one DNA "match" case that he discusses, Andre Moenssens [2000] states, "The odds of the arrestee's DNA being wrongly matched against that of the crime scene were said to be one in 37 million." Moenssens believes that it is a common misunderstanding among lawyers, judges, lay persons, and police that when a DNA "match" is reported with odds of one in 37 million, a like match in the DNA pattern exists once in 37 million people. This is clearly a misunderstanding of the statistics used by the experts. Moenssens continues by adding, "According to DNA scientist Keith Inman...it should be understood that the calculated frequency is an estimate, and can be off by an order of magnitude in either direction."
+
+# Crucial Considerations
+
+Correct interpretation of friction ridge features is critical to the friction ridge identification process. Whether these features are recognized, ignored, or given any significance can be seriously affected by any distortion present in the fingerprint. Another factor that should be considered is that the majority of crime scene fingerprints contain only $20\%$ of the information found in a rolled inked fingerprint. Therefore, the inability to accurately recognize friction ridge features or interpret them correctly may be detrimental to the reliability of the probability model. Unless the probability model accurately accounts for the effects of distortion in a crime scene print, the application of such a model could be detrimental to the judge or jury's assessment of the value of the fingerprint evidence.
+
+In my opinion, the conclusions of the Outstanding papers support the underlying premise that fingerprints are unique.
Time constraints may have prevented a more detailed comparison of the probability of a fingerprint misidentification with that of a misidentification by DNA evidence. Unfortunately, the assumptions used in the models preclude consideration of the significance of distortion and lack of clarity in the crime scene print. Other factors, such as the subjective nature of latent print analysis by specialists with varying levels of training, knowledge, and experience, also need to be examined in assessing the odds of a fingerprint misidentification.
+
+# Conclusion
+
+I agree with Dr. Champod that "Statistical data, even gathered through myopic models, can only help the discipline work toward more reliable and transparent methods of assessing evidential value." I highly recommend that anyone interested in pursuing further the application of statistical analysis and probability models to friction ridge identification read his book [Champod et al. 2004].
+
+I applaud the efforts of all this year's modeling teams in considering the important problems involved in the complex statistical analysis of friction ridge features and the advancement of the science of friction ridge identification.
+
+# References
+
+Beeton, Mary. 2004. Ridges and furrows. http://www.ridgesandfurrows.homestead.com/.
+
+Champod, Christophe. 2002. The Weekly Detail (7 January 2002) (weekly electronic newsletter; at Wertheim [2004]).
+
+_____, Chris Lennard, Pierre Margot, and Milutin Stoilovic. 2004. Fingerprints and Other Ridge Skin Impressions. Boca Raton, FL: CRC Press.
+
+Moenssens, Andre A. 2000. Mistaken DNA identification? What does it mean? Updated 20 October 2000. http://www.forensic-evidence.com/site/EVID/EL_DNAerror.html.
+
+Wertheim, Kasey. 2004. Complete latent print examination... a site FOR latent print examiners, BY latent print examiners. http://clpex.com.
+ +# About the Author + +![](images/16c723937662b8b0a573cf4de31d729edf5f79ed792a49f992de9ef33598910a.jpg) + +Mary Beeton was first introduced to the applied science of friction ridge identification through her training and education as an Automated Fingerprint Identification System Technician with the Durham Regional Police Service in Ontario, Canada. Her Website "Ridges and Furrows" is the culmination of many hours spent researching topics relating to the forensic discipline of friction ridge identification. Ms. Beeton frequently gives presentations on A.F.I.S., the history of friction skin identification, fingerprint patterns, and digit determination to police officers + +as part of their advanced training. Ms. Beeton is currently President of the Canadian Identification Society (C.I.S.), an organization with approximately 900 members from Canada, the United States, and other countries worldwide. The C.I.S. encourages forensic identification specialists to share their knowledge and experience and supports continuing research in all areas of forensic science. + +# Editor's Commentary: Fingerprint Identification + +Paul J. Campbell + +Mathematics and Computer Science + +Beloit College + +700 College St. + +Beloit, WI 53511 + +campbell@beloit.edu + +# Introduction + +Some problems from COMAP's Mathematical Contest in Modeling (MCM) and the Interdisciplinary Contest in Modeling (ICM) have arisen in very specific current situations, and it was not clear that specific ideas from the solution papers could have any immediate further application beyond the original setting. 
I am thinking here of MCM problems such as the
+
+- Emergency Facilities Location Problem (1986),
+- Parking Lot Problem (1987),
+- Midge Classification Problem (1989),
+- Helix Intersection Problem (1995),
+- Velociraptor Problem (1997),
+- Lawful Capacity Problem (1999),
+- Bicycle Wheel Problem (2001),
+- Wind and Waterspray Problem (2002),
+- Gamma Knife Treatment Problem (2003), and
+- Stunt Person Problem (2003).
+
+The same is true of some of the ICM problems, such as the Zebra Mussel Problem (2001) and the Scrub Lizard Problem (2002).
+
+Other contest problems have arisen from situations that society faces chronically but without urgency, yet the solution papers provide valuable ideas that could be put into practice. Here I include the
+
+- Salt Storage Problem (1987),
+- College Salaries Problem (1995),
+- Contest Judging Problem (1996),
+- Discussion Groups Problem (1997),
+- Grade Inflation Problem (1998), and
+- Quick Pass Problem (2004).
+
+Finally, some problems have touched on issues of immediate concern, and the solution papers offer important insights:
+
+- Emergency Power-Restoration Problem (1992),
+- Asteroid Impact Problem (1999),
+- Hurricane Evacuation Problem (2001)—eminently relevant in the multiple-hurricane season of 2004,
+- Airline Overbooking Problem (2002),
+- IT Security Problem (ICM 2003), and
+- Airport Security Problem (2004).
+
+Perhaps no problem has been as aptly timed, however, as the Fingerprints Problem of this year's MCM [Giordano 2004].
+
+# Previous Developments
+
+The Outstanding papers for the Fingerprints Problem [Amery et al. 2004; Camley et al. 2004; O'Ceallaigh et al. 2004] and the commentaries by contest judge Michael Tortorella [2004] and practitioner Mary Beeton [2004] note the recent questioning in U.S. courts of the reliability of fingerprint evidence. That questioning took place after the U.S.
Supreme Court set forth standards for admissibility of scientific testimony and evidence, the so-called Daubert criteria:
+
+1. that "the theory or technique" is one that "can be (and has been) tested";
+2. that "the theory or technique has been subjected to peer review and publication";
+3. "in the case of a particular scientific technique, the court ordinarily should consider the known or potential rate of error ... and the existence and maintenance of standards controlling the technique's operation"; and
+4. "general acceptance" in the "scientific community."
+
+509 U.S. at 593-594 (1993).
+
+The primary recent decisions about fingerprint evidence were made by Judge Pollak of the U.S. District Court for the Eastern District of Pennsylvania. In his first ruling, he agreed to the uniqueness and permanence of fingerprints and to allow the government to present evidence comparing latent prints and exemplars (in the terminology of Beeton [2004]) but disallowed any testimony that "a particular latent print is—or is not—the print of a particular person" [Pollak 2002a, 49]. He found that only the fourth Daubert criterion was fulfilled ("general acceptance within the American fingerprint examiner community"), and that the difficulty with the Daubert criteria arises at the point that a fingerprint specialist uses subjective judgment and criteria to assert that two prints came from the same person [2002a, 42-44].
+
+Judge Pollak granted a subsequent hearing to reconsider his ruling and subsequently reversed his own decision, allowing such testimony. His change of mind resulted from being convinced by evidence presented that fingerprint identification does satisfy the "peer review" criterion of Daubert and also the "rate of error"/"standards" criterion ("there is no evidence that the error rate of certified FBI fingerprint examiners is unacceptably high" [2002b, 36]). (Perhaps he would have a different opinion after reading the subsequent revelations by Heath [2004].)
However, he still regarded the testing criterion as not met. Nevertheless, "to postpone present in-court utilization of this 'bedrock forensic identifier' pending ... research would be to make the best the enemy of the good" [2002b, 49-50].
+
+# Developments Since the Contest
+
+As the contest papers and the commentaries point out, fingerprint identification is not solely a scientific enterprise but takes place in an environment where human error can prevail. Various recent events pointedly identify some such sources of error.
+
+DNA testing is subject to similar errors; and with DNA evidence, even further questions can be raised, about possible contamination and the interpretation of the "odds" offered by DNA analysts (see Wood [1991] for discussion of the latter).
+
+# Appeals Court Ruling
+
+In an appeals ruling on a different case, the court found fingerprinting "testable" (though not completely tested), the error rate very low (though not "precisely quantified"), but standards to be lacking. Nevertheless, it found in the case at hand (U.S. v. Byron Mitchell) that most factors in the Daubert principles supported admitting the government's latent fingerprint evidence [Barry et al. 2004].
+
+# Mistake
+
+Stephen Cowans, convicted in 1998 of shooting a police officer on the basis of a fingerprint match from a glass at the crime scene, was freed from prison in February 2004; reanalysis of the latent print showed that it did not match his prints [Mnookin 2004]. Fraud? Incompetence? Just plain error?
+
+# Misfiling
+
+Rene Ramon Sanchez was accused in an immigration court of being Leo Rosario and was arrested three times for Rosario's crimes, spending two months in custody. The reason: Sanchez's prints matched Rosario's. And they did match, perfectly; at least, they matched the prints that were on Sanchez's fingerprint card on record.
That was because when police had fingerprinted Sanchez earlier on another charge (later dropped), they put Sanchez's prints on a card with the name and data for Rosario. Finally, the authorities compared photos of the two. The aggrieved Sanchez says that he has never received an apology from any of the authorities involved [Weiser 2004].
+
+# Dueling Experts
+
+Brandon Mayfield, a lawyer in Portland, OR, was arrested and jailed in connection with the bombings of trains in Spain in April 2004. The basis was discovery of a fingerprint on a bag of detonators at the bomb scene, which three FBI fingerprint examiners concluded was a match to Mayfield's: a "100% positive identification."
+
+The FBI turned out also to be "100% wrong." Despite contentions all along by Spanish fingerprint experts that the match was "conclusively negative," the FBI maintained its position for five weeks. In a meeting of American and Spanish experts, the Americans maintained that the prints had 15 "Galton points" in common, while the Spaniards said there were only 7. (No specific minimum number of common "points" is required for an identification in the protocol used by the FBI.)
+
+Only after the Spaniards matched the print to Ouhnane Daoud, an Algerian, did the FBI admit that theirs had been a faulty match; subsequently, despite the match by the Spaniards, the FBI claimed that the print was unusable in the first place (i.e., the latent print was of poor quality).
+
+The Spaniards later expressed surprise at the FBI's single-minded pursuit of Mayfield, who had converted to Islam and had represented in a custody case an individual who was also a defendant in a terrorism case. "It seemed as though they [the FBI] had something against him, and they wanted to involve us." However, according to FBI authorities, the fingerprint examiners who made the mistaken match did not know Mayfield's name or anything about him [Kershaw 2004].
(Kershaw's article includes photos of the latent print and of Brandon Mayfield's; images are available at German [2004].)
+
+This is going to kill prosecutors for years every time they introduce a fingerprint ID by the FBI. The defense will be saying "is this a 100 percent match like the Mayfield case?"
+
+—U.S. Senate aide [Kershaw 2004, A13]
+
+# Philosophical Questions but Practical Implications
+
+As Tortorella [2004] notes, this MCM problem raises philosophical questions.
+
+# No Sound Statistical Foundation?
+
+One question that Tortorella mentions is about how to assign probabilities to the sample space of modeled fingerprints.
+
+The calculations in the Outstanding papers rest on assuming independence of features from one area of a fingerprint to another (thus enabling their multiplication of probabilities), despite obvious local dependence (ridges cross multiple cells).
+
+However, more dangerously, implicit in the papers' calculations of the probability of a match is the assumption of a uniform distribution: that all the many fingerprints are equally (un)likely. As Mnookin [2004] puts it:
+
+Fingerprinting ... currently lacks any valid statistical foundation.... The important question is how often two people might have fingerprints sufficiently similar that a competent examiner could believe they came from the same person. This problem is accentuated when analyzing a partial print.... How often might one part of someone's fingerprint strongly resemble part of someone else's print? No good data on this question exist.
+
+The growing size of computer fingerprint databases makes this issue still more acute. As a database grows in size, the probability that a number of people will have strikingly similar prints also grows....
+
+The FBI called the resemblance between Mayfield and Daoud's prints "remarkable."
What is truly remarkable is that we simply do not know how often different people's prints may significantly resemble one another, or how good examiners are at distinguishing between such prints.
+
+# Is Science Certain ... Enough?
+
+A second key issue is the status of scientific truth and of evidence obtained by technical means. In May 2004, Gov. Mitt Romney proposed a death-penalty statute for Massachusetts (the state does not currently have a death penalty). A death sentence would require "conclusive scientific evidence" of guilt.
+
+Romney's proposal highlights the philosophical questions:
+
+Can scientific evidence yield certainty? Should scientific evidence be regarded as more reliable than other evidence? Is it more reliable?
+
+Those who believe that scientific evidence is more reliable need to confront the fact that today's science may be tomorrow's alchemy—"today's certainty is tomorrow's question mark" [Daley 2004]. The last few years have seen DNA analysis bring many cases of wrongful conviction to light; but many of those wrongful convictions were based primarily on the best "science" of the time, including microscopic analysis of hairs and also fingerprints.
+
+What we say in forensic science is the more certain the scientist is, the less reliable the scientist is.... [O]ur society can easily be taken in by science, and that is worrisome.
+
+—James Starrs, Prof. of Law and Forensic Science, George Washington University [Daley 2004]
+
+# Subjectivity
+
+Fingerprint experts maintain, and the FBI agrees, that in the final analysis declaring a match of two fingerprints is a subjective decision, made by a human being based on training, experience, and all of the circumstances involved in the comparison. But what science is without subjective decisions, at some level?
Judge Pollak found that the techniques in fingerprint identification have not been subjected to sufficient testing—so it is high time that the work be done to put fingerprinting on as unimpeachable a scientific basis as it deserves!—but nevertheless he was willing to consider such matching as "scientific" evidence.
+
+However, caution about subjectivity is in order, and not just in the realm of fingerprint matching:
+
+[F]ingerprints are valuable forensic evidence.... But when the evaluation of that data rests on a because-I-said-so analysis, the door is wide open for injustice. And as Brandon Mayfield's case amply demonstrates, taking the government's say-so as definitive simply isn't enough. And when pseudoscience is turned loose in the context of the war on terror, the results may well terrify.
+
+—David Feige [2004]
+
+# References
+
+Amery, Steven G., Eric Thomas Harley, and Eric J. Malm. 2004. The myth of "the myth of fingerprints." *The UMAP Journal* 25 (3): 215-230.
+Barry, Becker, and Greenberg, Circuit Judges. 2004. Precedential. 29 April 2004. United States Court of Appeals for the Third Circuit. United States of America v. Byron Mitchell. No. 02-2859. caselaw.findlaw.com/data2/circs/3rd/022859p.pdf.
+Beeton, Mary. 2004. Practitioner's commentary: The outstanding fingerprints papers. *The UMAP Journal* 25 (3): 267-272.
+Camley, Brian, Pascal Getreuer, and Bradley Klingenberg. 2004. Not such a small whorl after all. *The UMAP Journal* 25 (3): 245-258.
+Daley, Beth. 2004. Foolproof forensics. Boston Globe (8 June 2004). http://www.boston.com/news/globe/health_science/articles/2004/06/08/foolproof_forensics.
+Ellement, John. 2004. Scientist rebuts reliance on fingerprints. Boston Globe (18 May 2004). http://www.boston.com/news/local/massachusetts/articles/2004/05/18/scientist_rebuts_reliance_on_fingerprints.
+Feige, David. 2004. The inexact science of fingerprint analysis. Slate (27 May 2004). http://slate.msn.com/id/2101379/.
+German, Ed. 2004.
Problem identifiers: Madrid erroneous identification. Last updated 23 September 2004. http://onin.com/fp/problemsidents.html#madrid.
+Giordano, Frank. 2004. Results of the 2004 Mathematical Contest in Modeling. *The UMAP Journal* 25 (3): 189-214.
+Heath, David. 2004. Bungled fingerprints expose problems at FBI. Seattle Times (7 June 2004). http://seattletimes.nwsource.com/html/localnews/2001949987_fingerprints07m.html.
+Kershaw, Sarah. 2004. Spain and U.S. at odds on mistaken terror arrest. New York Times (5 June 2004) (National Edition): A1, A13.
+Mnookin, Jennifer L. 2004. A blow to the credibility of fingerprint evidence. Boston Globe (2 February 2004). http://www.boston.com/news/globe.editorial_opinion/oped/articles/2004/02/02/a_blow_to_the_credibility_of_fingerprint_evidence.
+______. 2004. The Achilles' heel of fingerprints. The Washington Post (29 May 2004): A27. http://www.washingtonpost.com/wp-dyn/articles/A64711-2004May28.html.
+O'Ceallaigh, Seamus, Alva Sheeley, and Aidan Crangle. 2004. Can't quite put our finger on it. *The UMAP Journal* 25 (3): 231-244.
+Pollak, J. 2002a. Opinion. 7 January 2002. United States District Court for the Eastern District of Pennsylvania. United States of America v. Carlos Ivan Llera Plaza, Wilfredo Martinez Acosta, and Victor Rodriguez. Cr. No. 98-362-10, 11, 12. http://www.dartmouth.edu/~chance/chance_news/for_chance_news/ChanceNews12.05/Pollak.pdf.
+______. 2002b. Opinion [reversal]. 13 March 2002. http://www.dartmouth.edu/~chance/chance_news/for_chance_news/ChanceNews12.05/PollakReverse.pdf.
+Snell, J. Laurie. 2003. The controversy of fingerprints in the courts. http://www.dartmouth.edu/~chance/chance_news/recent_news/chance_news_12.05.html#item11.
+Tortorella, Michael. 2004. Judge's commentary: The outstanding fingerprints papers. *The UMAP Journal* 25 (3): 261-265.
+Weiser, Benjamin. 2004. Can prints lie? Yes, man finds to his dismay. New York Times (31 May 2004) (National Edition): A1, A17.
+Wood, John B.
1991. Paternity probability: An inappropriate artifact [with commentaries]. *The UMAP Journal* 12 (1): 7-42.
+
+# About the Author
+
+![](images/f2a14ba996956b0f97282461f316590760426e5d335be195831950606e391b0d.jpg)
+
+Paul Campbell graduated summa cum laude from the University of Dayton and received an M.S. in algebra and a Ph.D. in mathematical logic from Cornell University. He has been at Beloit College since 1977, where he served as Director of Academic Computing from 1987 to 1990. He is Reviews Editor for Mathematics Magazine and has been editor of The UMAP Journal since 1984.
+
+# A Myopic Aggregate-Decision Model for Reservation Systems in Amusement Parks
+
+Ivan Corwin
+
+Sheel Ganatra
+
+Nikita Rozenblyum
+
+Harvard University
+
+Cambridge, MA
+
+Advisor: Clifford H. Taubes
+
+# Summary
+
+We address the problem of optimizing amusement park enjoyment through distributing QuickPasses (QP), reservation slips that ideally allow an individual to spend less time waiting in line. After realistically considering the lack of knowledge faced by individuals and assuming a rational utility-oriented human-decision model and normally distributed ride preferences, we develop our Aggregate-Decision Model, a statistical model of waiting lines at an amusement park that is based entirely on the utility preferences of the aggregate.
+
+We identify in this model general methods for determining aggregate behavior and net aggregate utility and use these methods, along with complex but versatile QP accounting and allocation systems, to develop the Aggregate-Decision QuickPass Model. We develop criteria for judging QP schemes based on a total utility measure and a fairness measure, both of which the Aggregate-Decision QuickPass Model is able to predict.
Varying the levels of individual knowledge, the QP line-serving rates, the ability to cancel one's QP, and the QP allocation routines, we obtain a variety of different schemes and test them using real-life data from Six Flags: Magic Mountain as a case study. We conclude that the scheme in which individuals are able to cancel their QPs, know the time for which a QP will be issued, and are allocated to the earliest QP spot available provides park-goers with the greatest total utility while keeping unfairness levels relatively low.
+
+# Introduction
+
+As the number of park-goers increases, so do the waiting lines. In a QP system, rather than standing in a regular line, people can opt for a ticket to come back later and join a presumably faster line to the ride. People may hold only one active ticket at any given time.
+
+We develop a means to evaluate QP systems. We first develop a working economic understanding of myopic human decision-making in amusement parks. Applying this to all park-goers, we develop our Aggregate-Decision model, which predicts the statistical behavior of groups faced with queueing choices. We then include QP lines and develop the Aggregate-Decision QuickPass model, which describes large-group statistical decisions about joining a regular line or obtaining a QP ticket. We test various QP distribution schemes and compare them on the criteria of maximizing utility while maintaining an acceptable level of fairness.
+
+# Definitions and Key Terms
+
+- An amusement park is a collection of $n$ rides $R_{1},\ldots ,R_{n}$ associated with a number $P_{T}$ , representing the total population of the park (people in the park who are either looking, waiting, or are on a ride).
+- For the $i$ th ride, $l_i$ is the number of people in line to get on the ride and $k_i$ is the rate (persons/min) at which the line moves.
+- The fluid population $P_F$ of an amusement park is the number of people actively looking for a ride.
+
+- The utility of a ride is measured by how long people are willing to stand in line for it, given that the alternative provides them with zero utility. Individuals have utilities $t_i$ , while the utilities for each ride have distributions $\mathcal{T}_i$ with expected values $\mu_i$ and standard deviations $\sigma_i$ .
+- The preference that an individual has for a ride is normalized to $r_i = t_i / \sum_k t_k$ .
+- The popularity of a ride $R_{i}$ is determined by its popularity rating $\rho_{i} = \mu_{i} / \sum_{k=1}^{n} \mu_{k}$ (we use here a first-order approximation of $E[\mathcal{T}_i / \sum_{k=1}^n \mathcal{T}_k]$ [Brown 2001]).
+- A QuickPass system is a line-management scheme that allows a person to obtain a ticket for return later to a presumably faster QuickPass line.
+- A QuickPass is live when it can be used by a holder to gain access to the QuickPass line.
+- A QuickPass is active from the time of issue to the end of the time interval in which it is live.
+
+Table 1. Symbol table.
| Symbol | Definition | Units |
|--------|------------|-------|
| **Variables** | | |
| $l_i$ | Number of people waiting in the regular line for $R_i$ (such that $l_{i,\mathrm{QP}} + l_{i,\mathrm{NOQP}} = l_i$) | people |
| $l_{i,\mathrm{QP}}$ | Number of people waiting in the regular line for $R_i$ with an active QuickPass | people |
| $l_{i,\mathrm{NOQP}}$ | Number of people waiting in the regular line for $R_i$ without an active QuickPass | people |
| $q_i$ | Number of people waiting in the QP line for $R_i$ | people |
| $w_{i,\mathrm{ex}}$ | Expected free waiting time | min |
| $k_i$ | Regular line speed (in the non-QP model equal to $c_i$) | people/min |
| $d_i$ | QP line speed | people/min |
| $t_i$ | Measure of the utility for $R_i$ | min |
| $t$ | Collection of the $t_i$ | vector |
| $r_i$ | Preference an individual has for $R_i$, based on the $t_i$ | unitless |
| $\nu$ | Impatience measure for an individual, based on the $t_i$ | unitless |
| $\mathcal{T}_i$ | Random variable representing the distribution of $t_i$ about $\mu_i$ with standard deviation $\sigma_i$ | min |
| $\mathcal{T}$ | Collection of the $\mathcal{T}_i$ | vector |
| $\rho_i$ | Aggregate popularity of $R_i$, based on the $\mu_i$ | unitless |
| $\chi$ | Expected impatience measure, based on the $\mu_i$ | unitless |
| $U_{j,i}$ | Expected net utility provided by $R_j$, gauged with variables from $R_i$ | min |
| $U_{qp,j,i}$ | Expected net utility provided by $R_j$ (including QP waiting), gauged with variables from $R_i$ | min |
| $U_{qp,k,s}$ | Expected utility provided by $R_k$ (including QP waiting) for people with a QP for $R_k$ that becomes live between $sI$ and $(s+1)I$ time steps, gauged with variables from $R_k$ | min |
| $P_F$ | Fluid population of the park (such that $P_{F,\mathrm{QP}} + P_{F,\mathrm{NOQP}} = P_F$) | people |
| $P_{F,\mathrm{QP}}$ | Part of the fluid population with an active QuickPass | people |
| $P_{F,\mathrm{NOQP}}$ | Part of the fluid population without an active QuickPass | people |
| $\mathrm{QP}_{i,t}$ | Number of people with a QuickPass for $R_i$ that becomes live in $t$ time steps | people |
| $\mathrm{QP}_i$ | Number of people with a QuickPass for $R_i$ that becomes live in any number of steps | people |
| $\phi_i$ | Preference distribution function | function |
| **Constants** | | |
| $n$ | Number of rides | unitless |
| $m$ | Number of QuickPass machines | unitless |
| $R_i$ | Ride | unitless |
| $\mu_i$ | Expected value of $t_i$ over the population | min |
| $\sigma_i$ | Standard deviation of $t_i$ over the population | min |
| $c_i$ | Serving rate of $R_i$ | people/min |
| $P_T$ | Total park population | people |
| $e$ | Ratio of the disutility for free waiting to line waiting | unitless |
| $\kappa$ | Ratio of the time for free waiting to regular line waiting | unitless |
| $I$ | Length of the interval for which a QuickPass is issued | min |
| Cancel | Whether active QuickPasses can be canceled | boolean |
| DisplayTime | Whether the QuickPass kiosk displays to the public when the next QuickPass issued will become live | boolean |
| $\delta_{i,s}$ | Maximum number of QuickPasses issued for $R_i$ in interval $s$ | unitless |
- A time interval is the period of time over which a QuickPass is live.
- The free waiting time is the time from when a QuickPass becomes active to when the QuickPass becomes live.
- The net utility of a ride is the utility gained by taking the ride minus the disutility associated with waiting for it, minus (in the case of a QuickPass) the disutility associated with the free waiting time.

# General Assumptions

# Time Stepping

- The park runs continuously for a fixed amount of time (such as a working day), split into discrete time steps of length $\tau$. We use $\tau = 5$ min.
- In one time step, a person can either take one action or make one decision: move to a ride to consider it (we assume that all rides are an equal distance apart and people can on average cover that distance in time $\tau$) or actually be on a ride. While considering a ride, a person can decide to get in line, get a QP, or wander on to another ride in one time step $\tau$.
- A person in line will not leave before completing the ride.
- The serving rate $k_{i}$ of a ride is constant and independent of time and the length of the waiting line.
- Every ride takes one time step to run and lets out a batch of served people at the end of the time step in accordance with $k_{i}$. Thus, the number of people let out per time step is $\tau k_{i}$.
- People cannot trade QPs. (This assumption simplifies analysis.) Even if trading were allowed, it seems practically infeasible. For a different perspective on reservation trading, see Prado and Wurman [2002].
- The park population size "ramps up" to its target size over a small number of time steps. The ramp-up is consistent with arrivals at the park and follows an exponential distribution (Figure 1). After reaching its maximum, the park population stays constant until the end of the popular period. In reality, a ramp-down period follows; but as populations dwindle and lines shorten, QPs play a lesser role.
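The time-stepped ramp-up can be sketched numerically. This is an illustrative sketch, not the authors' code: the target size of 20,000 and the rate constant below are hypothetical choices; the model itself fixes only $\tau = 5$ min, an exponential approach, and a ramp-up of roughly 20 steps.

```python
import math

TAU = 5          # minutes per time step, as in the model
RAMP_STEPS = 20  # steps until the park is treated as full

def park_population(step, target=20_000, ramp_steps=RAMP_STEPS):
    """Total population P_T after `step` time steps: an exponential
    approach to `target` during the ramp-up, constant afterward
    (the ramp-down at day's end is ignored, as in the model).
    The rate constant 3/ramp_steps is a hypothetical choice putting
    the population at about 95% of target when step == ramp_steps."""
    if step >= ramp_steps:
        return target
    return int(target * (1.0 - math.exp(-3.0 * step / ramp_steps)))
```

Under these assumptions, halfway through the ramp-up (step 10, i.e., 50 minutes in) the park holds roughly three-quarters of its target population.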
![](images/98841952bb4151d1ecf30c1e3d3b03d631cce4f7ea6f1e8afb00c6b41e10f828.jpg)
Figure 1. Population levels as a function of time.

# Individual Behavior

- All things are measured in utilities, and individuals seek to maximize their utilities. We measure all utilities in terms of time. Thus, the utility $t_i$ for taking a particular ride is measured by the length of time that it is worth waiting for the ride, given that all alternatives provide zero utility. We also measure the disutilities of waiting as the total waiting time.
- An individual's enjoyment of a ride is fixed, not affected by waiting time.
- Disutility is linearly proportional to time waiting in a line.
- Individuals are myopic: They know information only about the ride where they are and thus determine their expectations for other rides based on their own preferences $r_i$ and the line-serving rates $c_i, d_i,$ and $k_i$. This is reasonable because in reality rides are a significant distance apart.
- An individual can immediately and accurately gauge how long a line is and its serving rate.
- Each individual knows $r_i$, the individual's own preferences.

# Aggregates

- The population's preference distribution $\phi_{i}$ for riding each $R_{i}$ is normal. This is reasonable, because for large populations the central limit theorem [Weisstein 2004] applies—the time that a person is willing to wait for a ride is a function of many random variables. Moreover, we assume further that in any subset of the population, the distribution $\phi_{i}$ applies with equal validity. This is reasonable if the population (and the aggregate) is sufficiently large. Thus, it makes sense to discuss the random normally-distributed preference variables $\mathcal{T}_i$ of an aggregate. The actual specifics of the distribution, such as $\mu_{i} = E[\mathcal{T}_{i}]$ and $\sigma_{i}$, can be estimated empirically, e.g., by taking surveys.
- The random preference variables $\mathcal{T}_i$ are independent of one another: Each person considers different rides independently. In reality, preferences might be correlated based on type of ride, age, and enjoyment of amusement parks in general, as well as other factors; but since there are many other factors within each ride, this assumption is reasonable.
- The population preference distributions $\phi_{i}$ are temporally invariant. That is, aggregate preferences do not change with time and with ride experience; in essence, there is no aggregate memory. An individual's preference for a ride is likely a function of the number of visits [Prado and Wurman 2002], but we can either assume that this function is the constant function (that people have unchanging utility functions) or that preference changes over time cancel out in the distribution. If one person prefers a ride less after riding it, another will prefer it more.
- The popularity $\rho_{i}$ of a ride corresponds to the fraction of the fluid population that goes to and considers $R_{i}$ at any given time step. Effectively this is the fraction of the population for whom $R_{i}$ is their favorite ride. This assumption is a result of defining $\rho_{i}$ as a first-order approximation to $E[\mathcal{T}_i / \sum_{k = 1}^n\mathcal{T}_k]$.

# The Aggregate-Decision Model

# Expectations of Our Model

- After the ramp-up period, the marginal utility for each time step for small-population amusement parks will be greater than for larger amusement parks. In large parks, there is more crowding and thus longer lines and more disutility at each time step.
- Increasing the sum of the $\mu_{i}$s should increase the cumulative utility over the course of the day.
- Increasing the popularity of a given ride should increase its line length.
- At a small park, people tend disproportionately toward the most popular rides.
This expectation is suggested by the second-order expansion for expected value [Browne 2001]. At a large park, people tend toward popular rides less than expected, because of increased disutility from waiting in longer lines.

# Individual Behavior with No QuickPass

Let $t = (t_{1}, \ldots, t_{n})$. We define $\nu = 1 / \sum_{k} t_{k}$ and $r_{i} = t_{i}\nu$. The $r_{i}$ measure the individual's preference for a ride over the alternatives. We call $\nu$ the impatience measure, since multiplying by it normalizes utility (willingness to wait) and thus neutralizes differences in patience. We assume that people seek rides that they prefer most. Thus, a person considers the ride with the highest $r_{i}$.

An individual's net utility from a ride is the utility that the ride provides minus the disutility from waiting. As before, let $k_{i}$ be the rate at which the line moves and $l_{i}$ the length of the line; the approximate waiting time is then $l_{i} / k_{i}$. Thus, for ride $R_{i}$, a person's utility is

$$
U_{i}(t_{1}, \ldots, t_{n}) = t_{i} - \frac{l_{i}}{k_{i}}.
$$

Since individuals cannot know the lengths and speeds of the other rides, they must estimate the utilities of those rides. Let $U_{j,i}$ be the utility that the individual estimates for $R_{j}$, using variables from $R_{i}$ (we assume that the individual is considering $R_{i}$). The person estimates $k_{j}$ and $l_{j}$ from their preferences towards $R_{j}$ and information about $R_{i}$. In our model, a person assumes that the population has the same preferences as their own; that is, the number of people at a ride is proportional to the individual's preference for that ride, so that the person estimates that $l_{j} = r_{j}P_{T}$. Furthermore, the person predicts the speeds of the lines to be roughly the same.
Thus, an individual at $R_{i}$ would reason that

$$
U_{j,i}(t_{1}, \ldots, t_{n}) = t_{j} - \frac{r_{j} P_{T}}{k_{i}} = t_{j} - \frac{P_{T}}{k_{i}} \frac{t_{j}}{\sum_{k} t_{k}}.
$$

Then the person, comparing the utility from $R_{i}$ with the expected utilities of the other rides, stays at $R_{i}$ if $U_{i}$ exceeds all the other expected utilities $U_{j,i}$. Thus, a person joins the line for the ride $R_{i}$ if $U_{i} \geq U_{j,i}$ for all $j \neq i$.

# Aggregate Behavior

In dealing with an aggregate, randomly selected members considering ride $R_{i}$ have as preference the random normally-distributed variables $\mathcal{T}_i$ instead of $t_i$, the case for the individual. Thus, because the utility functions $U_{i}$ and $U_{j,i}$ are functions of the $t_i$, they induce the following utility distribution variables on the aggregate population:

$$
\mathcal{U}_{i}\left(\mathcal{T}_{1}, \dots, \mathcal{T}_{n}\right) = \mathcal{T}_{i} - l_{i} / k_{i}, \quad \mathcal{U}_{j,i}\left(\mathcal{T}_{1}, \dots, \mathcal{T}_{n}\right) = \mathcal{T}_{j} - \frac{\mathcal{T}_{j}}{\sum_{k} \mathcal{T}_{k}} \frac{P_{T}}{k_{i}}. \tag{1}
$$

# The Formal Model

We develop an iterative process for determining how lines and utilities change as a function of time.

In our model, $P_T$ (and $P_F$) "ramps up" until approximately 20 5-min time steps are completed and the park is at full capacity. At each time step, $\rho_i P_F$ people consider entering the line for ride $R_i$. (This quantity may not be an integer; at our final calculation, we round.) The aggregate population considering $R_i$ has the utility distributions given by (1).

An individual stays if $U_{i} \geq U_{j,i}$ for all $j \neq i$.
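The stay-or-go comparison can be sketched directly in code (an illustrative sketch, not the authors' implementation; the numbers in the usage note are hypothetical):

```python
def stays_at_ride(i, t, line_len, serve_rate, P_T):
    """Decision rule for an individual considering ride i: join the line
    iff  U_i = t[i] - l_i/k_i  is at least
         U_{j,i} = t[j] - (P_T/k_i) * t[j]/sum(t)  for every j != i.
    `t` is the individual's utility vector (minutes); `line_len` and
    `serve_rate` are l_i and k_i observed at the current ride."""
    total = sum(t)
    u_i = t[i] - line_len / serve_rate
    return all(
        u_i >= t[j] - (P_T / serve_rate) * (t[j] / total)
        for j in range(len(t)) if j != i
    )
```

For example, with $t = (60, 30, 10)$ min, a 300-person line served at 30 people/min, and $P_T = 3000$, the individual stays; lengthening the line to 1,830 people flips the decision.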
Since the $\mathcal{T}_{i}$ are normal, the probability density function for each $\mathcal{T}_{i}$ is

$$
\phi_{i}(t_{i}) = \frac{1}{\sigma_{i} \sqrt{2 \pi}} e^{-(t_{i} - \mu_{i})^{2} / 2 \sigma_{i}^{2}}.
$$

We would like to find the probability that $\mathcal{U}_i(\mathcal{T}_1,\ldots ,\mathcal{T}_n)\geq \mathcal{U}_{j,i}(\mathcal{T}_1,\ldots ,\mathcal{T}_n)$ for all $j \neq i$. Define the domain $\Omega \subset \mathbb{R}^n$ as follows:

$$
\Omega = \left\{\left(t_{1}, \dots, t_{n}\right) \in \mathbb{R}^{n}: U_{i}\left(t_{1}, \dots, t_{n}\right) \geq U_{j,i}\left(t_{1}, \dots, t_{n}\right) \text{ for all } j \neq i \right\}.
$$

Then the probability that a person prefers ride $R_{i}$ to all other rides is

$$
\tilde{P} = P(\mathcal{U}_{i} \geq \mathcal{U}_{j,i} \text{ for all } j) = \int_{\Omega} \phi(\vec{t})\, d\vec{t},
$$

where the distribution function $\phi(t_1, \ldots, t_n) = \prod_{i=1}^{n} \phi_i(t_i)$ because the $\phi_i$ are independent of one another. So the number of people who get in line is the rounded value of the product of this probability and the number of people, or $\lfloor \tilde{P}\rho_iP_F \rfloor$. Since $\Omega$ may be a complicated domain, direct analytic integration is intractable. We compute this integral numerically, using a variant of the Monte Carlo method. The average utility gained for these people is

$$
\bar{U} = \langle U_{i} \rangle = \int_{\Omega} U_{i}(\vec{t}) \phi(\vec{t})\, d\vec{t},
$$

and the total utility gained is the product of this with the (rounded) number who get in line, or $\bar{U}\lfloor \tilde{P}\rho_{i}P_{F}\rfloor$. In a similar manner, we calculate the variance

$$
\sigma^{2} = \langle U_{i}^{2} \rangle - \langle U_{i} \rangle^{2}.
$$

The model proceeds by adjusting the line counts, removing back into the fluid population people who do not enter the line, and increasing the total (and fluid) populations if in the ramp-up period. Figure 2 gives a flowchart.
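The Monte Carlo computation of $\tilde{P}$ and the average utility can be sketched as follows (an illustrative sketch, not the authors' code; clipping negative draws at zero is a simplifying assumption):

```python
import random

def monte_carlo_stay(i, mus, sigmas, line_len, serve_rate, P_T,
                     samples=20_000, seed=1):
    """Estimate P~ = P(U_i >= U_{j,i} for all j != i) and the average
    utility over the winning region Omega by sampling independent
    normal preferences T_k ~ N(mu_k, sigma_k)."""
    rng = random.Random(seed)
    hits, util_sum = 0, 0.0
    for _ in range(samples):
        t = [max(rng.gauss(m, s), 0.0) for m, s in zip(mus, sigmas)]
        total = sum(t) or 1e-9  # guard against an all-zero draw
        u_i = t[i] - line_len / serve_rate
        if all(u_i >= t[j] - (P_T / serve_rate) * t[j] / total
               for j in range(len(t)) if j != i):
            hits += 1
            util_sum += u_i
    p_tilde = hits / samples
    avg_util = util_sum / hits if hits else 0.0
    return p_tilde, avg_util
```

Multiplying the estimated probability by $\rho_i P_F$ and flooring then gives the number of people who join the line at that step.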
# Model Validation

We programmed our model into a computer simulation. We simulate three parks of different sizes, with different $\mu_{i}$s (thus creating different preference distributions and measures of impatience). Then we process the resulting distributions of people and the cumulative utility throughout the day. We find that our model meets our expectations for a basic human-behavior model.

# The Aggregate-Decision QuickPass Model

We add QP machines at $R_{1}, \ldots, R_{m}$. QPs are given out for a constant time interval of $I$ units of our time step: A QP becomes live in one of the time intervals $[0, \tau I], [\tau I, 2\tau I]$, etc.

![](images/54ec3a565e72e754f8c4b012ae17fb052ee5ddc0ee9248a830c3b241c8bee16b.jpg)
Figure 2. A schematic depicting the steps occurring in the Aggregate-Decision Model.

# Additional Definitions

- A QP expires if
  - the individual does not enter the QP line during the QP's time interval (the individual forfeits the QP), or
  - the individual accepts another QP (in models that allow doing so).

Since our framework presents a different decision model for QP holders vs. non-QP holders, we track the population of QP holders.

- Let $q_{i}$ be the population of the QP line at ride $R_{i}$. Let $d_{i}$ be the rate at which $R_{i}$ draws from $q_{i}$ (people/min) and let $c_{i}$ similarly be the rate at which $l_{i}$ shortens—clearly, $d_{i} + c_{i} = k_{i}$.
- $l_{i,\mathrm{QP}}, l_{i,\mathrm{NOQP}}$ are the numbers of QP- and non-QP-holders in the line for $R_i$, respectively.
- Similarly, $P_{F,\mathrm{QP}}$, $P_{F,\mathrm{NOQP}}$ are the fluid populations of QP- and non-QP-holders, respectively.
- We do not track individuals, but we track the QPs handed out for each time interval. Let $\mathrm{QP}_{i,s}$ be the number of QP users with QPs for ride $R_{i}$ at the $s$th time interval $[sI, (s + 1)I]$.
The $\mathrm{QP}_{i,s}$ can decrease through forfeiting and increase according to the allocation routine of the QP scheme. + +# Assumptions about the QuickPass System + +We assume that $\mathrm{QP}_{i,s}$ is uniformly distributed throughout all lines and rides; that is, for any line, the people in that line with a QP are uniformly distributed throughout the line. However, the proportion of people with a QP can vary from ride to ride. + +There is a limit $\delta$ for the total number of QPs for a ride. + +# Formal Development of the Model + +The model relies on examining the populations with and without QPs, determining the proportion of each who take certain actions, and updating the populations and line counts. Figure 3 presents the intuitive summary of how our model works. We describe the model illustrating two different scheme factors: the case where QP holders may cancel their QP for another QP (the cancel model), and the case where they cannot (and must wait until their QP expires to obtain a new QP). + +![](images/b8ca44fdf11c87abf2a9fedff9cb207e88fb9a89cab6cd613d92b81d26f8e455.jpg) +Figure 3. A flowchart detailing the Aggregate-Decision QuickPass Model. + +# Aggregates without QuickPasses + +An individual without a QP must decide not only which ride to go to but also whether to get a QP for some ride, hence will examine the expected utility from staying at a particular ride vs. the expected utilities of other rides—just as in the non-QP model. However, with QPs, an individual also compares that with the expected utilities of obtainable QPs. + +As in the non-QP model, the individual estimates the utility of ride $j$ as + +$$ +\mathcal {U} _ {j i} (\mathcal {T} _ {1}, \ldots , \mathcal {T} _ {n}) = \mathcal {T} _ {j} - \frac {\mathcal {T} _ {j} P _ {T}}{c _ {i} \sum_ {k} \mathcal {T} _ {k}}. +$$ + +Now the person compares these utilities to the utility provided by a QP. 
Because the person cannot go on the ride immediately, there is a disutility proportional to the wait. We assume that the constant of proportionality $e$ is approximately the same for all individuals. Further, an individual assumes that the length of the QP line later will be approximately the current length. Therefore, the utility for a person getting a QP at ride $R_{i}$ (if available) is

$$
\mathcal{U}_{ii}^{qp}(\mathcal{T}_{1}, \ldots, \mathcal{T}_{n}) = \mathcal{T}_{i} - \frac{q_{i}}{d_{i}} - e w_{i},
$$

where $w_{i}$ is the time of the wait on the QP ticket. If there is free waiting-time clairvoyance, the individual will know $w_{i}$ in advance of getting the ticket; if not, the person approximates $w_{i}$ as proportional to the current length $l_{i}$ of the line. This is reasonable if we regard the $l_{i}$ and $w_{i}$ as correlated with the popularity of the ride. Thus, a person not knowing $w_{i}$ approximates it as $\kappa l_{i} / c_{i}$, for some constant of proportionality $\kappa$; for rides other than $R_{i}$, the approximation is $\kappa P_{T} r_{j} / c_{i}$. Thus, we have

$$
\mathcal{U}_{ii}^{qp} = \mathcal{T}_{i} - \frac{q_{i}}{d_{i}} - e \kappa l_{i} / c_{i}, \quad \mathcal{U}_{ji}^{qp} = \mathcal{T}_{j} - \frac{q_{i}}{d_{i}} - \frac{\mathcal{T}_{j}}{\sum_{k} \mathcal{T}_{k}} \frac{e \kappa P_{T}}{c_{i}}.
$$

So an individual decides to go on ride $R_{i}$ if the utility $U_{ii}$ exceeds each $U_{ji}$ and $U_{ji}^{qp}$. Similarly, a person gets a QP at ride $R_{i}$ if $U_{ii}^{qp}$ exceeds everything else (that is, if $U_{ii}^{qp}$ exists—if no QP is available, the individual cannot opt for one).
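The QP utility can be sketched numerically (an illustrative sketch; the defaults $e = 0.2$ and $\kappa = 3$ are the survey estimates used later in the case study, and the numbers in the usage note are hypothetical):

```python
def quickpass_utility(t_i, qp_line_len, qp_rate, line_len, reg_rate,
                      e=0.2, kappa=3.0, known_wait=None):
    """Net utility U^qp_ii = t_i - q_i/d_i - e*w_i of taking a QuickPass
    at the current ride.  With clairvoyance the displayed free waiting
    time is passed as `known_wait`; otherwise w_i is approximated by
    kappa * l_i / c_i, as in the text."""
    w_i = known_wait if known_wait is not None else kappa * line_len / reg_rate
    return t_i - qp_line_len / qp_rate - e * w_i
```

For instance, with $t_i = 48$ min, a 60-person QP line served at 12 people/min, and a 300-person regular line served at 20 people/min, the blind estimate gives $48 - 5 - 0.2 \cdot 45 = 34$ min; a displayed free wait of 20 min raises it to 39 min.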
To find the proportions of the aggregate that make these decisions, we must integrate over the domains $\Omega_{0}, \Omega_{1} \subset \mathbb{R}^{n}$ in which $U_{ii}$ is greater than all alternatives and $U_{ii}^{qp}$ is greater than all alternatives, respectively. Similarly, we calculate the average utility gained by entering the line.

# Aggregates with QuickPasses

People who already hold QPs provide an additional complication. As Figure 3 demonstrates, a QP-holder considering a ride can forfeit the QP and enter a line, simply enter the line, obtain a new QP (in our cancel model), or do nothing. Because the decision to forfeit depends on the time interval in which the user holds the QP, we must split up our aggregates into proportions corresponding to each $\mathrm{QP}_{i,s}$. Since each $\mathrm{QP}_{i,s}$ is uniformly distributed throughout the population of QP-holders, which in turn is uniformly distributed in line and in the fluid populations, the subset of $\mathrm{QP}_{i,s}$ that is fluid is $P_{F,\mathrm{QP}}\mathrm{QP}_{i,s} / \mathrm{QP}_{TOT}$; hence, we can say as before that $\rho_{i}P_{F,\mathrm{QP}}\mathrm{QP}_{i,s} / \mathrm{QP}_{TOT}$ people consider ride $R_{i}$.

Then these people consider the expected and actual utilities of obtaining a QP for each ride and for entering any ride line. Via integration, we calculate the proportion of people who choose another QP, with the added comparison against the remaining utility of the existing QP $U_{k,i,s'}^{qp}$, which only accounts for the remaining free waiting time (the lost time is sunk).
We calculate the proportion of people who enter a line (and the respective average utility) as in the previous section, with one caveat: If members of the QP population would forfeit their QPs by entering a line, then we must also ensure that the utility that they would gain by entering the line is greater than the remaining utility of their existing QP $U_{k,i,s}^{qp}$—an additional constraint on the domain of integration.

# Adjustments to Populations, Lines, and Utility

- At every ride, we must move people from lines and QP lines into the fluid populations $P_{F,\mathrm{NOQP}}$ and $P_{F,\mathrm{QP}}$. To account for the fact that our lines $l_i$ and $q_i$ may have fewer individuals than the numbers $d_i\tau$ and $c_i\tau$ drawn from them, respectively, we check whether $q_i$ or $d_i\tau$ (and whether $l_i$ or $c_i\tau$), respectively, is the minimum. Let $x_i = \min(q_i, d_i\tau)$, $y_i = \min(l_i, c_i\tau)$, $\eta_i =$ the number of people who leave from $l_i$, and $\zeta_i =$ the number who leave from $q_i$. Then

$$
\eta_{i} = \min\left(c_{i}\tau \frac{x_{i}}{d_{i}\tau} + k_{i}\tau \frac{d_{i}\tau - x_{i}}{d_{i}\tau},\; l_{i}\right),
$$

$$
\zeta_{i} = \min\left(d_{i}\tau \frac{y_{i}}{c_{i}\tau} + k_{i}\tau \frac{c_{i}\tau - y_{i}}{c_{i}\tau},\; q_{i}\right).
$$

Then from this ride, the new line and fluid population lengths become:

$$
P_{F,\mathrm{NOQP}} \leftarrow P_{F,\mathrm{NOQP}} + \left\lfloor \frac{l_{i,\mathrm{NOQP}}}{l_{i}} \eta_{i} \right\rfloor + \left\lceil \zeta_{i} \right\rceil
$$

$$
P_{F,\mathrm{QP}} \leftarrow P_{F,\mathrm{QP}} + \left\lceil \frac{l_{i,\mathrm{QP}}}{l_{i}} \eta_{i} \right\rceil
$$

$$
l_{i,\mathrm{NOQP}} \leftarrow l_{i,\mathrm{NOQP}} - \left\lfloor \frac{l_{i,\mathrm{NOQP}}}{l_{i}} \eta_{i} \right\rfloor
$$

$$
l_{i,\mathrm{QP}} \leftarrow l_{i,\mathrm{QP}} - \left\lceil \frac{l_{i,\mathrm{QP}}}{l_{i}} \eta_{i} \right\rceil
$$

$$
q_{i} \leftarrow q_{i} - \lceil \zeta_{i} \rceil.
$$

To see why this is so (up to our rounding), suppose that $q_{i} < d_{i}\tau$ (the argument for $l_{i} < c_{i}\tau$ is symmetrical). Then, since

$$
d_{i}\tau \frac{y_{i}}{c_{i}\tau} + k_{i}\tau \frac{c_{i}\tau - y_{i}}{c_{i}\tau} \geq d_{i}\tau
$$

(because $k_{i} \geq d_{i}$) and $q_{i} < d_{i}\tau$, we get $\zeta_{i} = q_{i}$: the whole QP line is served. And $\eta_{i}$ is as we would like, because $R_{i}$ takes people from $l_{i}$ at rate $c_{i}$ until $q_{i}$ is depleted, after which it takes from $l_{i}$ at the full rate $k_{i}$—capped at $l_{i}$ if this number would exceed the line length.

- For all of the QP populations free-waiting another unit, subtract a unit of utility $e\tau$.
- Once this process has been completed, for each ride $R_{i}$, for the population $\mathrm{QP}_{i,s}$ where $s$ is an interval that is currently live, move a certain number of people into the QP line $q_{i}$ for $R_{i}$. This number need not be the entire population $\mathrm{QP}_{i,s}$ but rather a random number predetermined by our model such that after the live period is over, the entire population has entered the QP line.
This corresponds to real life, in which the arrival times of humans at queues are erratic and can affect the dynamics of the queue. For each collection of individuals added to the queue, add the remaining utility of their QP—essentially $(t_{i} - q_{i} / d_{i})$.
- Multiply each of the average utilities by the number of people entering the corresponding line to get the approximate total net utility, and add these total net utilities to the cumulative utility.
- Use the proportions calculated and the aggregate size to determine the number of people entering a given line $l_{i}$, from the fluid members of each $\mathrm{QP}_{i,s}$ (and consequently $P_{F,\mathrm{QP}}$—these are added to $l_{i,\mathrm{QP}}$), and from the non-QP fluid populations (which are added to $l_{i,\mathrm{NOQP}}$). If the fluid members of $\mathrm{QP}_{i,s}$ entering the line are forfeiting their QPs, add them to $l_{i,\mathrm{NOQP}}$ instead, and remove that number of members from $\mathrm{QP}_{i,s}$.
- Finally, use the proportions calculated to determine how many people are given a new QP, whether from the fluid non-QP population or from another QP population $\mathrm{QP}_{j,s}$. Distribute this number among the $\mathrm{QP}_i$ using the assignment routine of the scheme.

# QuickPass Schemes

We propose alternatives for each of the four factors below:

- Free-Waiting-Time Clairvoyance We allow people to know the free waiting time prior to getting a QP ticket, so they can better gauge whether to get a QP or to wait in line for the ride.
- Cancellation Flag People can get a new QP while a previous one is active; doing so deactivates the old QP.
- Service Protocol The QP line is served at a rate proportional to the length of the QP line, instead of at a constant rate.
- Assignment Routine A QP is front-end-loaded if the time interval on the ticket is the closest time interval with an available spot open (whether from cancellation or otherwise).
Such loading is efficient but leads to anomalies such as two people getting QPs within minutes of each other but the second person having the shorter wait. A QP is queue-loaded if it is issued for the next time interval that is not fully populated, assigning the person to that time. This scheme is fair but does not take cancellations into account.

# Case Study

To test the schemes, we study the amusement park Six Flags Magic Mountain, in Los Angeles, CA. Even though Magic Mountain does not use the Fast-lane technology (an electronic modified version of QP), many other Six Flags parks of comparable size and type do [Six Flags Theme Parks 2004]. We estimate $R_{i}$, $k_{i}$, $\mu_{i}$, $\sigma_{i}$, and $P_{T}$.

Magic Mountain has six rides with long waits during the busiest time, 3:00-4:00 [Ahmadi 1997]; we give these rides QP lines. We do not give QP lines to two other rides with medium-length lines, and we combine the other 20 rides into two generic rides with minimal utility, high serving rates $k_{i}$, $\mu_{i} = 5$, and $c_{i} = 50$. We calculate all $k_{i}$ data (except for FreeFall) from the Roller Coaster DataBase [Marden 2004]; FreeFall's $k_{i}$ is calculated from the New Jersey Six Flags Freefall [Six Flags Great Adventure 2004]. Meanwhile, we estimate the expected utilities $\mu_{i}$ from the values for wait-line times between 3:00-4:00 [Ahmadi 1997]. Lastly, to introduce reasonable variation, we estimate the $\sigma_{i}$ to be one-fifth of the $\mu_{i}$. Table 2 summarizes the values.

Table 2. Line speeds and expected utility values for Case Study rides. The first group comprises rides with QP lines; the second group is without such lines.
| Ride | Name | $k_i$ (people/min) | $\mu_i$ (min) | $\sigma_i$ (min) |
|------|------|--------------------|---------------|------------------|
| **QP** | | | | |
| $R_1$ | Ninja | 26.67 | 48 | 9.6 |
| $R_2$ | Colossus | 43.33 | 80 | 16 |
| $R_3$ | Flashback | 18.33 | 60 | 12 |
| $R_4$ | FreeFall | 3.75 | 72 | 14.4 |
| $R_5$ | Psyclone | 20 | 46 | 9.2 |
| $R_6$ | Viper | 28.33 | 54 | 10.8 |
| **Not QP** | | | | |
| $R_7$ | Goldrusher | 29.17 | 25 | 5 |
| $R_8$ | Revolution | 6.67 | 35 | 7 |
| $R_9$ | Generic Ride 1 | 50 | 5 | 1 |
| $R_{10}$ | Generic Ride 2 | 50 | 5 | 1 |
We also must estimate $P_T$. Magic Mountain has daily attendance from 9,000 to 35,000 [Ahmadi 1997]; we estimate an expected $P_T$ of 20,000 people/day. From surveys, we estimate $e = 0.2$ (ratio of the disutility for free waiting to line waiting) and $\kappa = 3$ (ratio of the time for free waiting to regular line waiting). We assume that $\delta = d_i\tau I$ (maximum number of QPs issued for $R_i$ in an interval), which is the number of people that the QP line can handle in one time interval.

# Criteria for Judging Schemes

- Enjoyment: Measured in minutes, this quantity is the net total utility. A QP system should enhance total utility.
- Fairness: Fairness plays a major role in people's satisfaction [Larson 1987]. In a fair system, a person obtaining a QP before someone else should be serviced first. We keep track of all relevant data by recording the allocation of QPs to different time intervals for each time step.

# Representative Schemes and Simulation Results

We choose five representative schemes (plus a control case based on the Aggregate-Decision Model), to which we apply our two criteria for success. The various features of the test schemes are indicated in Table 3.

Table 3. Overview of properties of test schemes.
| Scheme # | 1 | 2 | 3 | 4 | 5 | Control |
|----------|---|---|---|---|---|---------|
| Waiting-time clairvoyance | No | No | No | Yes | Yes | n/a |
| Cancellation allowed | No | No | Yes | No | Yes | n/a |
| Service protocol (A: const. amount, P: const. proportion) | A | P | A | P | P | n/a |
| Assignment routine (Q: queue, FE: front-end) | Q | Q | FE | FE | FE | n/a |
# Conclusions and Extensions of the Problem

# Solving the Problem

In Figure 4 and in Table 4 we give the results, in terms of our criteria of enjoyment and fairness, for the five representative schemes plus the control case.

In enjoyment, Scheme 5 outperforms all of the other schemes (Figure 4). This result meshes with our expectations that increased knowledge and choices, as well as a more-efficient service protocol and assignment routine, result in higher utility.

![](images/212249b4da37f90e44f6cb939325524bad7358ec1dc643098fcfc2b7ecba3081.jpg)
Figure 4. A graph demonstrating cumulative utilities and thus enjoyment levels associated with representative test schemes. Scheme 5 dominates after 50 time steps.

Table 4. Overview of test scheme fairness, as measured by the ratio of QuickPass anomalies to assignments.
| Scheme # | 1 | 2 | 3 | 4 | 5 | Control |
|----------|---|---|---|---|---|---------|
| Ratio of QuickPass anomalies to QuickPass assignments | 0.00 | 0.00 | 0.36 | 0.03 | 0.04 | 0.00 |
Queue-loading systems do not allow anomalies in QP assignments and therefore are the most fair by that standard. In addition, to prevent unfair line-length distributions, we should use the constant-proportion service protocol. Table 4 shows that all schemes except Scheme 3 result in about equal fairness levels.

On both criteria, Scheme 5 performs nearly as well as (if not better than) the alternatives. Ideally, then:

- the waiting time should be displayed,
- people should be able to cancel a QP by activating another,
- the serving rate for the QP line should be proportional to the QP line length, and
- QPs should be assigned via front-end loading (i.e., for the earliest time interval with available space).

# Further Study

Our results have a recurring theme: The more knowledge and the greater the number of choices an individual has, the higher the cumulative utility tends to be.

We propose that every ride should have an electronic display of the waiting times for the normal and QP lines of all rides.

Our model does not allow QP selling and trading, but allowing it might improve cumulative utility and fairness, as discussed in Prado and Wurman [2002].

# Strengths of Model

The Aggregate-Decision model is a realistic and robust probabilistic framework that, for large population sizes, is statistically accurate at calculating aggregate behaviors. The utility functions are simple and realistic and reflect the processes of individual decision-making. The decisions, based on utility comparison, are reasonable and reflect the interest of the individual.

# Weaknesses of Model

The probabilistic nature of our model and related statistical flaws are the main source of weakness. Our model for the aggregate breaks down for small population sizes and small numbers of rides, where issues such as memory and changing preferences influence preference distributions.
We assume that the distributions remain constant over time, but preferences change with experience (whether riding or waiting). Our model also assumes that preference distributions are independent of one another, whereas in reality we expect that many of the people who enjoy one roller coaster also enjoy a similar one.

Whereas most humans would plan out a series of rides rather than considering just one ride at a time, our model does not have such forethought capabilities.

Lastly, our assumptions of uniform distances neglect the actual geometric configuration of the park and the effect of that geometry on ride considerations.

# Conclusion

Our model takes into account the limited knowledge that influences the decisions of park-goers, based on economic assumptions. We apply our understanding of individual decision-making to develop a versatile model of aggregate decision-making for parks with and without QPs.

We tested different QP schemes using data from Six Flags Magic Mountain. Factors in a QP scheme include:

- whether an individual has foreknowledge of the time interval for which a QP will be issued;
- whether people can cancel an active QP and obtain a new one;
- how people are fed onto the ride from the queues; and
- how QP times are allocated.

Our criteria for successful schemes were:

- Cumulative utility, summed over all people throughout the entire day, from taking rides and waiting in lines.
- Fairness, as measured by the ratio of the number of anomalous QP allocations to the total number of QPs.
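The fairness criterion can be made operational in code. Below is a minimal sketch (our own illustration, not the authors' implementation) that counts anomalies under the assumption that an anomaly is a guest who is issued a QP after another guest yet receives an earlier ride slot:

```python
def count_anomalies(issued_slots):
    """Count QP anomalies in a chronological list of issued slot times.

    Assumed definition of an anomaly: a guest served later receives an
    earlier slot than some guest served before them.
    """
    anomalies = 0
    latest_slot = float("-inf")  # latest slot handed out so far
    for slot in issued_slots:
        if slot < latest_slot:
            anomalies += 1
        latest_slot = max(latest_slot, slot)
    return anomalies


def anomaly_ratio(issued_slots):
    """Fairness metric: anomalous allocations / total QPs issued."""
    if not issued_slots:
        return 0.0
    return count_anomalies(issued_slots) / len(issued_slots)
```

A front-end-loading scheme that always hands out the earliest available slot produces a nondecreasing slot sequence and hence a ratio of 0.00, matching the queue-loading entries of Table 4.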
We compared QP schemes and found that the scheme with the greatest utility has the following properties:

- people know the time slots for which QPs are currently being issued (perhaps by way of an electronic sign posting this information),
- people can cancel their QPs by switching to another QP,
- the QP line moves at a rate proportional to its length, and
- when people are allocated QP tickets, they receive tickets for the first available time interval.

This scheme provides a cumulative utility (measured in minutes) of 7,000, while the next-highest cumulative utility was only 5,000 (the control had a cumulative utility of 2,500 minutes). This scheme had an anomaly-to-allocation ratio of 0.04, while other schemes had values as high as 0.36.

# References

Acklam, Peter J. 2004. An algorithm for computing the inverse normal cumulative distribution function. http://home.online.no/~pjacklam/notes/invnorm/.
Ahmadi, Reza H. 1997. Managing capacity and flow at theme parks. Operations Research 45 (1): 1-13. http://www.anderson.ucla.edu/x3241.xml.
Browne, John M. 2001. Probabilistic Design Course Notes. http://www.ses.swin.edu.au/homes/browne/probabilisticdesign/coursenotes/.
Larson, Richard C. 1987. Perspectives on queues: Social justice and the psychology of queueing. Operations Research 35 (6): 895-905.
Levine, Arthur. 2004. Participating Six Flags Parks. http://themeparks.about.com/.
Marden, Duane. 2004. Roller Coaster DataBase. http://www.rcdb.com/.
Prado, Jorge E., and Peter R. Wurman. 2002. Non-cooperative planning in multi-agent, resource-constrained environments with markets for reservations. AAAI 2002 Workshop on Planning with and for Multiagent Systems, Edmonton, July-August 2002, 60-66.
Press, William H., Brian P. Flannery, Saul A. Teukolsky, and William T. Vetterling. 1992. Numerical Recipes in C: The Art of Scientific Computing. New York: Cambridge University Press.
Six Flags Theme Parks. 2003.
Six Flags Great Adventure. http://www.sixflags.com/parks/greatadventure/index.asp.
Weisstein, Eric. n.d. Central Limit Theorem. MathWorld. http://mathworld.wolfram.com/CentralLimitTheorem.html.

# Theme-Park Queueing Systems

Alexander V. Frolkin
Frederick D.W. van der Wyck
Stephen Burgess
Merton College, Oxford University
Oxford, England

Advisor: Ulrike Tillmann

# Summary

We determine an optimal system for allocating QuickPasses (QPs) to theme-park guests, subject to key criteria that we identify. We recommend a specific system—a way of deciding, when a guest asks for a QP, whether they should get one, and if so for what time. On the other hand, we warn against some plausible systems that would actually worsen the queueing situation. We also explain why some theme parks use an unfair way of allocating QPs, where late guests can fare better than early arrivals.

The keys to our approach are two very different simulations with the same parameters. The Excel simulation breaks the day into 10-min intervals, works with groups of people, and is nonrandom. It is fast and allows us to test quickly many different QP allocation systems. The Perl simulation breaks the day into 1-min intervals, models individual people, and includes randomness. It is more realistic and flexible. Thus, the simulations are useful in different contexts. The fact that their results are consistent provides a strong safeguard against incorrect results caused by coding errors, a risk in large simulations. In addition, we carry out extensive tests of the stability of our model and the robustness of our recommendation.

We conclude that it is best to allocate lots of QPs for slots early and late in the day, and fewer for the peak demand in the middle.

We also explore modifications to the QP concept, including charging.
# Introduction

# The Problem

Theme parks have introduced two basic types of new queueing systems:

- In virtual queue systems, guests use a pager to register in a queue; it pages them when they are near the head of the queue and should come to the ride. For example, Lo-Q Plc. has developed such a system, used in Six Flags theme parks in the U.S. [Six Flags ... 2002].

- In QuickPass systems, rides can issue guests a QuickPass, allowing them to return to the ride at a specified time, when they can ride with minimal queueing. Examples of this type of system are Disney's FastPass® [Disney Tickets 2004] and Universal Studios Express Pass [Universal Studios n.d.].

Either type of system may or may not be free; Disney's FastPass® is free, but guests pay for Lo-Q's pagers.

We focus on the QuickPass system and analyse how to implement it effectively. We conclude with a brief comparison with virtual queue systems.

# Criteria for a Good QuickPass System

We take the following as general guidelines in choosing a QuickPass system.

- At no time should more than $50\%$ of a ride's capacity be QP users.
- No ride should have a queue longer than $45\mathrm{~min}$.
- The average waiting time should be as short as possible.
- Waiting times should be evenly distributed. (It is better that 100 people wait 20 min than that 50 people have no wait but another 50 wait 40 min.)
- The system should seem fair. People arriving later should not get QPs when previous guests have been refused them. Similarly, people arriving later should not be allocated earlier slots than previous guests.
- QPs should not be allocated for more than $4\mathrm{~h}$ in the future. (We assume, based on personal experience, that people stay for only about $5\mathrm{~h}$.)

# Summary of Our Approach

- We collect data and perform calculations to obtain reasonable initial modelling assumptions.

- We construct two computer simulations and modify them with further assumptions.
Eventually we find the behaviour in agreement with common sense and consistent between the two simulations.
- We use two very different simulations with the same parameters. The Excel simulation breaks the day into 10-min intervals, works with groups of people, and is nonrandom. The Perl simulation breaks the day into 1-min intervals, models individual people, and includes randomness. Each approach has its advantages. The fact that the two simulations give similar conclusions provides very powerful evidence for the validity of the conclusions.
- We list systems for allocating QPs that seem likely to work well and test them using the simulations.
- We analyse the results with graphical interpretations and summary statistics.
- We assess the stability of our model under variations in input parameters and the robustness of our recommendation under differing conditions.

# The Simulation Process

# Initial modelling assumptions

- We have in mind as an example a particular theme park, Thorpe Park, Surrey, UK, which we regard as a typical medium-sized park.
- We model a day running from 8 A.M. to 6 P.M. The number of people in the park varies over the day and has a key impact on queue lengths.
- Thorpe Park has 2 million visitors per year, and the park is open for around 200 days per year [Thorpe Park Guide n.d.]. So we assume that 10,000 people visit the park on a typical day. Most arrive late morning or early afternoon, and admissions stop well before closing time so that queues can subside.
- People who arrive early typically stay for about $5\mathrm{~h}$; later arrivals stay until a short time before closing.
- There is one overwhelmingly popular ride, the Big Ride, the only ride for which we issue QPs.
- We estimate from personal experience that popular rides take 40 people and leave every $5 \mathrm{~min}$, so the Big Ride has a capacity of 8 people/min.
- We offer QPs for free.
Since there is then no harm in taking a QP, whenever QPs are available, people take them.
- We do not give guests more than one QP at a time.

- For simplicity, we ignore the effect of people going around in groups; all guests behave independently of one another. The effect of grouping would be insignificant, because the size of each group is small compared to the total number of people.

# The Perl Simulation

The Perl simulation breaks the day into 1-min intervals, models individual people, and includes randomness. To implement it, we need only the following further very reasonable assumptions:

- We model arrivals as a Poisson process with rate $\lambda(t)$ per minute, where $\lambda(t)$ varies with the time of day:

  - 8 A.M.-11 A.M.: 13
  - 11 A.M.-3 P.M.: 27
  - 3 P.M.-4.30 P.M.: 13
  - 4.30 P.M.-6 P.M.: 0

  We chose these numbers roughly in accord with personal experience, with the aim of making the number of arrivals per day about 10,000, consistent with the modelling assumption.

- We model departures as follows:

  - If someone arrives before 12.30 P.M., their length of stay (in hours) is a normally distributed $\mathrm{N}(5, 0.5^2)$ random variable.
  - If someone arrives after 12.30 P.M., their length of stay is a $\mathrm{N}(k, 0.25^2)$ random variable, where $k$ is the time between arrival and 5.30 P.M. (So late arrivals typically leave at 5.30 P.M., and $97.5\%$ of them leave by 6 P.M.)

# The Excel Simulation

The Excel simulation breaks the day into 10-min intervals, works with groups, and uses expected values and no randomness. So it requires more-significant further assumptions:

- We model guests as either on the Big Ride, queueing for the Big Ride, or elsewhere. So we do not worry about the effect of other rides.
- We adopt the following distribution of arrivals every $10\mathrm{~min}$ to implement the arrival behaviour described above:

  - 8 A.M.-11 A.M.: 130
  - 11 A.M.-3 P.M.: 270
  - 3 P.M.-4.30 P.M.: 130
  - 4.30 P.M.-6 P.M.: 0

- We adopt the following departure distribution, in departures per $10\mathrm{~min}$, to implement the departure behaviour described above:

  - 8 A.M.-12.30 P.M.: 0
  - 12.30 P.M.-3 P.M.: 130
  - 3 P.M.-4.30 P.M.: 270
  - 4.30 P.M.-6 P.M.: 270-750 (increasing linearly)

The spreadsheet medium was not suited to modelling departures in the more realistic manner of the Perl simulation.

At the end of the day, queues shut at 6 P.M., and people then in the queues get a last ride after 6 P.M.

# Baulking

The initial simulations generated implausibly large queues, some more than a day long! We realised that we needed to introduce baulking—people being put off by long queues. Also, people with QPs for a ride should be much more easily discouraged from queueing for it than people without. (But if there is a small queue, QP holders should want to join it for an early extra ride.) We experimented with linear, exponential, and polynomial baulking models in both simulations, and found that the most realistic behaviour is given by using:

- for non-QP-holders: the inverse quartic model,

$$
\mathrm{P}(\text{choosing to queue}) = \left(1 + \frac{qd}{40(1+p)c}\right)^{-4},
$$

where

$q$ is the queue length (people),

$d$ is the ride duration (minutes),

$c$ is the ride capacity (people taken each time the ride runs), and

$p$ is the relative popularity of the ride (the probability of choosing it).

Figure 1 shows a graph from this function family. Any guest will join an empty queue, but only one-sixteenth are prepared to wait $40\mathrm{~min}$. The adjustment factor $(1 + p)$ makes people more likely to persevere for more popular rides.
- for QP-holders: the linear baulking model,

$$
\mathrm{P}(\text{choosing to queue}) = \max\left(0, 1 - \frac{qd}{15c}\right).
$$

An example graph is shown in Figure 2. Here, any guest will join an empty queue, but none are prepared to wait for more than $15\mathrm{~min}$, irrespective of the popularity of the ride. (After all, they have a QP to come back later.)

Finally, we found a problem in the Perl simulation. All rides in the simulation ran for 5 min and were "in sync." The result was an implausible 5-min-periodic behaviour in the Big Ride queue. Removing the synchronicity by staggering the ride departures (some rides at 8:01, 8:06, 8:11, ...; others at 8:02, 8:07, 8:12, ...) eliminated this problem.

![](images/d7d6a99c8271a0f579caddebe81904a7da99415cc22f3cf11ef8fbcbeea6db95.jpg)
Figure 1. An inverse quartic baulking function.

![](images/9043cfda60d2d6c33d6a8c59bde0829eb95c12d416327033af690ecd3eccdeb8.jpg)
Figure 2. A linear baulking function.

# Allocation Systems

A QP system can be represented by an allocation matrix, a $10 \times 10$ matrix indicating the maximum number of QPs that can be issued by the end of a given hour for a particular future hour. (At the detail level of our simulations, a finer-grained system would make no difference.)

For example, the matrix in Figure 3 indicates that by the end of the 08:00-09:00 hour, at most 100 QPs are issued for slots starting during the 11:00-12:00 hour and at most 50 for slots starting during the 12:00-13:00 hour. We always give out the earliest slots available at a given time, so that the system is seen to be fair.

![](images/c8261418d133c9f5ad7bc94c5d8bbc657fba14be919c3d85c906933e4f9830ea.jpg)
Figure 3. Sample allocation matrix.

For a different example, the matrix in Figure 4 indicates that by the end of the 08:00-09:00 hour, at most 100 QPs are issued for 13:00 slots, at most 150 for 14:00 slots, and so on.
The values in a column are cumulative; so if 100 people take QPs between 08:00 and 09:00, only $150 - 100 = 50$ remain for people arriving between 09:00 and 10:00.

![](images/28b50aa7e67c6fd6bdec9ade8ef09b324d057cbc890d34a2ad7a989232ce6fe4.jpg)
Figure 4. Another example of an allocation matrix.

Notice some features of the matrix:

- The lower-triangular entries are irrelevant—we cannot issue QPs for the past, and we allocate them only for at least an hour in the future.
- The columns are nondecreasing because the entries are cumulative.

- If a column is strictly increasing, the system is not fair. In Figure 4, if 120 people want QPs between 08:00 and 09:00, the last 20 will be refused them. But there will still be 50 available for people arriving between 09:00 and 10:00—later. Similarly, consider the matrix in Figure 5. Now if 120 people want QPs between 08:00 and 09:00, the last 20 get 14:00 slots, while the first 50 people arriving during 09:00–10:00—later—get 13:00 slots.

![](images/b43880f71a91eae01f469936ea6bffdf126c4e071e1b72683e42ab197de02c1d.jpg)
Figure 5. A third allocation matrix.

# How the Perl Simulation Works

The Perl simulation works in discrete time steps (usually 1 min long). Each person is assigned one of the states: Walking around, Queueing, Queueing with QuickPass, Riding, or Gone home.

At each time step:

- New arrivals are added to the people in the park; departures have their state set permanently to "Gone home."
- For each person in the park:

- If the person is walking or has just entered, they check whether they have overstayed their staying time, and if so, leave. If they have a QP for the current time, they move to the front of the queue for that ride; if not, they carry on walking with probability 0.8. (This is based on a geometric distribution with mean $5\mathrm{~min}$.) Otherwise, they choose a ride based on the rides' popularities and proceed as follows.
* If the ride does not have QPs, they decide whether to queue or to carry on walking, based on the queue length, using the inverse quartic baulking model.
* If the ride does have QPs, they try to get a QP if possible (i.e., if available and they do not already have a QP).

- If they obtain a QP, they decide whether to queue for the ride or carry on walking, using the linear baulking model.

- If they do not obtain a QP, they decide whether to queue or carry on walking, using the inverse quartic baulking model.

- If the person is queueing for a ride, they check if they have a QP for the current time, in which case they will leave their current queue and move to the front of the queue for their QP ride.

- For each ride, carry out the following:

- If it is time for the ride to take more people, all people on the ride are taken off (and put into the walking state), and the maximum number of people from the front of the queue (i.e., the ride's capacity, or the number of people queueing, if that is less) are put on the ride.

The Perl simulation allows the QP system to be easily modified, by simply changing the allocation matrix entries. In addition, all the following parameters can easily be adjusted:

- length of a time slot in the QP allocation matrix;
- length of a day;
- the function $\lambda(t)$ giving the mean number of people entering at time $t$;
- the function $\mu(t)$ giving the mean duration of a stay for a person entering at time $t$;
- the function $\sigma(t)$ giving the standard deviation of stay durations for people entering at time $t$;
- the probability that a walking person carries on walking at the next time step;
- the number of rides; and
- for each ride, the capacity, the duration, the popularity, and the start-time delay (e.g., whether the ride runs at 0, 5, 10, ... or at 2, 7, 12, ... time units).
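To make these ingredients concrete, here is a small Python sketch (our own illustration; the authors' Perl code is not reproduced in the paper) of the two baulking models and of the functions $\lambda(t)$, $\mu(t)$, and $\sigma(t)$ from the assumptions above:

```python
import random

def p_queue_non_qp(q, d, c, p):
    """Inverse quartic baulking model (non-QP-holders).
    q: queue length (people); d: ride duration (min);
    c: capacity (people per run); p: relative popularity.
    Note that q*d/c is the expected wait in minutes."""
    return (1 + q * d / (40 * (1 + p) * c)) ** -4

def p_queue_qp(q, d, c):
    """Linear baulking model (QP-holders): nobody waits over 15 min."""
    return max(0.0, 1 - q * d / (15 * c))

def arrival_rate(t):
    """lambda(t): mean arrivals per minute, t = minutes after 8 A.M."""
    if t < 180:      # 8 A.M.-11 A.M.
        return 13
    if t < 420:      # 11 A.M.-3 P.M.
        return 27
    if t < 510:      # 3 P.M.-4.30 P.M.
        return 13
    return 0         # 4.30 P.M.-6 P.M.

def stay_duration(t, rng):
    """Sample a stay (in minutes) for a guest arriving t minutes after
    8 A.M., implementing mu(t) and sigma(t): N(5 h, (0.5 h)^2) for
    arrivals before 12.30 P.M., otherwise N(k, (0.25 h)^2), where k is
    the time remaining until 5.30 P.M."""
    if t < 270:                 # before 12.30 P.M.
        return max(0.0, rng.gauss(5 * 60, 0.5 * 60))
    k = max(0.0, 570 - t)       # minutes until 5.30 P.M.
    return max(0.0, rng.gauss(k, 0.25 * 60))

# Sanity check: expected arrivals over the 600-min day.
print(sum(arrival_rate(t) for t in range(600)))  # 9990, i.e., about 10,000
```

With the Big Ride's parameters ($d = 5$, $c = 40$) and popularity $p = 0$, a queue amounting to a 40-min wait is joined with probability $(1+1)^{-4} = 1/16$, and no QP-holder joins a queue longer than a 15-min wait, matching the descriptions in the Baulking section.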
# How the Excel Simulation Works

The Excel simulation tabulates the number of people in different places at 10-min intervals. At each time, it knows:

- the number of people entering, the number of people leaving, and the total number of people in the park;
- the number of people in the queue for the Big Ride and hence its length;
- the number of people on the Big Ride;
- the number of people elsewhere without a QP for the Big Ride;
- the number of people elsewhere with a QP for the Big Ride; and
- the number of QPs issued for this slot and hence the remaining capacity on the Big Ride for this slot.

The number of people entering and leaving at each step is specified in advance. The main calculation at each step involves the following:

- Add people coming off the Big Ride from the previous step to "elsewhere."
- Check how many QPs can be issued now for slots later in the day and give them out to anybody elsewhere without a QP who is interested in the Big Ride (fixed interest level, no baulking); change their status to having a QP.
- Check how many people are in the queue for the Big Ride.
- Deduce via the appropriate baulking models what fraction of people elsewhere without a QP are willing to join the queue for the Big Ride, and what fraction with a QP are nonetheless willing to queue for an early go on the Big Ride. Add these people to the queue.
- Remove from the queue people taking the Big Ride without QPs.

The QP system can be easily modified (by changing the allocation matrix entries) and other parameters can be adjusted, but there is less flexibility than in the Perl simulation.

# Results and Interpretation

# Comparing the Two Simulations

We first ran the two simulations with no QP allocation (system QP1) to calibrate the simulations; in particular, this involved setting the Big Ride popularity parameter appropriately.
The Perl simulation gave generally longer wait times, but the overall qualitative behaviour of the simulations was the same.

Figure 6 shows the queue profile from one run of the Excel simulation, and Figure 7 shows a waiting-frequency bar chart, averaged over 3 runs of the simulation. The Perl simulation generates similar results, but people tend to wait a little longer. In both simulations, the queue builds up over the day. It is long over several hours, so many people get a long wait.

After the initial calibration, we did not modify the parameters; nonetheless, the simulations continued to give similar results, so we felt justified in using only the Excel simulation for testing allocation strategies.

![](images/c0a8f39f61b445bb9e624727e48bb1d621ce9fcfd9d2f59ce15ffcfaf3786a7c.jpg)
Figure 6. Results of run of Excel simulation for no QuickPasses: queue profile.

![](images/7aee477e9ed2e1d58e413a667c38bb2add33e0f4627bf49ebb2ef7b006aeac02.jpg)
Figure 7. Results of run of Excel simulation for no QuickPasses: waiting frequency.

# Key Results

We tried several models:

- System QP2 allocates the full capacity of the ride to QPs, giving them out as soon as people want them. The queue for the Big Ride becomes huge, and many people have to wait a very long time, because no capacity remains for non-QP-holders. This is a very bad way of allocating QPs.
- System QP3 allocates only half the capacity of the ride in QPs but still gives them out as soon as people want them. Now the queue stays short while QPs are issued (because people take a QP and tend to go away); but as soon as the QPs run out, the queue grows large, because the ride's capacity has been halved. Many people get a long wait. Things are worse than with no QPs, because the system shortens queues early (when they are short anyway), then reduces capacity at the busy time.
- System QP4 allocates QPs only up to $3 \mathrm{~h}$ in the future. This is better.
A lot of people have a short wait, and not many have an excessive one.
- System QP5 uses the allocation matrix in Figure 8 to issue QPs. It gives out many QPs for slots early in the morning and late in the afternoon, in a way carefully adapted to the arrival distribution. This system gives the best results, shown in Figures 9 and 10. The queue is roughly constant for much of the day, many people have a short wait (the QP users!), and nobody waits more than $40\mathrm{~min}$.

![](images/ff19283f9d5d584ed87dfac1279502a9c0d46c06d0b63b343ffec9ecf5427224.jpg)
Figure 8. Allocation matrix for scheme QP5.

# Recommendation

We recommend System QP5; it gives a lot of people a queue-free ride, while otherwise leaving the queue situation not much worse than without QPs. This is really the best we can hope for, because QPs cut normal capacity and so tend to worsen the normal queueing situation. Here is how QP5 performs on our criteria:

![](images/e453d7a017c68e544c6138336d6226a3a33f1c9e35a83e20a2862274c05c37a7.jpg)
Figure 9. QP5: queue profile.

![](images/88b7ee493fc2c061584d0e605b3254059ccbb9b80076be8c9fd95d092f377545.jpg)
Figure 10. QP5: waiting frequency.

- No more than $50\%$ of the ride is ever filled with QP users. (GOOD)
- The Big Ride never has a queue longer than $45\mathrm{~min}$. (GOOD)
- The average waiting time is similar to that without QPs. (OK)
- Waiting times are not very evenly distributed, although better than in some systems. (POOR)
- The system is not fair. (POOR)
- QPs are not allocated for slots far in the future. (GOOD)

# Assessment

# Stability

# Overview

A good model should be stable: A small variation in the input parameters should cause a small change in the results. In addition, the direction of the change should usually be consistent with common sense.

We varied the following parameters to see how much they changed the results and whether the changes matched what we expected.
- relative popularity of the rides: We increased the popularity of the less popular rides (but kept them less popular than the Big Ride). This had the expected effect of decreasing the queue lengths.
- total number of rides: We removed 10 of the less popular rides (out of 20 rides). This had the expected effect of increasing the queue lengths by $50\%$.
- probability of continuing to walk around: We increased it from 0.8 to 0.95. This had the effect of decreasing the queue length, because people were now more likely to carry on walking, so fewer people queued for rides.
- baulking models: We changed the fourth power in the inverse quartic model to the sixth power and changed the 15-min cutoff in the linear model to 20 min. The maximum queue length decreased from 50 min to 35 min.
- arrival rate: We changed the arrival rate for the first hour from 13 people/min to 50 people/min. This made the queue length grow much more steeply at the start of the day, as expected.

# Conclusion

The results of the tests above are all favourable. The model is stable and responds in the expected way to changes.

As a further check, we calculated the standard deviation of the mean waiting times from 40 runs of the Perl simulation of systems QP1 and QP5. These were $0.46\mathrm{~min}$ and $0.39\mathrm{~min}$, respectively, which is again favourable, as it indicates little change between runs.

# Robustness

We checked the impact of big changes in the distribution of arrivals at the park (caused, for example, by weather conditions).

We modified the arrival distribution to reflect a typical weekend day, or a day with very good weather. Queue lengths increased significantly in both cases, but were only slightly worse with QPs, which is good.

Next, we modified the arrival distribution to reflect a day with bad weather in the morning—most arrivals after lunch. Again, the QP system coped with the change.

So our recommendation is quite robust.
# Improvements and Extensions

In this section we discuss possible improvements to our model and extensions to the QP system.

# Guests with Memory

Our model assumes that a guest's behaviour at a given time is independent of their previous behaviour. In fact, people don't go on the same ride time and again. We could modify the Perl simulation to account for this: the more times a person takes a ride, the less likely they are to take it again (except perhaps after their first go, which could actually encourage them). The Excel simulation's structure makes it impossible to implement this change.

# Finer-Grained QuickPass Allocation

A larger QP allocation matrix could be used; this would be easy to implement, but such a level of detail would be incompatible with the limited accuracy of the simulations.

# Charging for QuickPasses

Charging for QPs would increase revenue to the park and (by reducing QP uptake) leave more capacity on the Big Ride for the normal queue.

We model the extent to which cost deters guests. If we make the probability of a guest paying for a QP constant, the net effect is the same as issuing fewer QPs.

A more plausible model has long queues making people more willing to pay; Figure 11 gives an example graph for a function from the family

$$
\mathrm{P}(\text{pay for QuickPass}) = \min\left[1, \frac{1}{k}\log_{10}\left(1 + \frac{qd}{2c}\right)\right],
$$

where

$q$ is the queue length (people),

$d$ is the ride duration (minutes),

$c$ is the ride capacity (people taken each time the ride runs), and

$k$ is an arbitrary constant, initially set to 2.

In the morning, fewer QPs sell; later, when there are long queues, they all sell.

![](images/3d192e48c3a5d4cd81ec2f377de666bde5fcb473db90380cb013c91ff48a7c70.jpg)
Figure 11. Logarithmic cost-reaction model: probability of paying for a QuickPass vs. queue length.
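To make the family concrete, here is a small Python sketch of the cost-reaction function (our own illustration; parameter names follow the definitions above):

```python
import math

def p_pay_for_quickpass(q, d, c, k=2):
    """Logarithmic cost-reaction model: probability that a guest pays
    for a QP, given queue length q (people), ride duration d (min),
    capacity c (people per run), and cost-sensitivity constant k."""
    return min(1.0, math.log10(1 + q * d / (2 * c)) / k)
```

With $d = 5$ and $c = 40$ (the Big Ride's parameters), an empty queue gives probability 0, while very long queues saturate at 1, matching the observation that in the morning few QPs sell and later they all sell.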
A bigger effect could be achieved by charging more, corresponding to increasing $k$. In principle, we could determine how $k$ varies with the cost of a QP; then, by running the simulation with different values of $k$ and recording the number of QPs sold, we could maximise revenue. Figure 12 shows the response of sales to price, assuming that a price of £$p$ per QP gives $k = p$.

The graph's slope is sufficiently shallow that the highest price gives the highest revenue. However, the assumption that a price of £$p$ gives $k = p$ is unrealistic; a subtler model is needed.

At any rate, charging certainly improves the queueing situation as well as raising money. On the other hand, it reduces guest satisfaction, a major drawback.

![](images/686fa75c7569a678eee708bb6572d781ea34a5155f7adbe951cdf03d7da81099.jpg)
Figure 12. Sales of QuickPasses vs. price.

# More than One Ride Offers QuickPasses

We modified the Perl simulation to test what happens if QPs are issued for more than one ride (but never more than one at a time per guest); the behaviour is much the same as for QPs for only one ride. So we recommend the use of QPs for all popular rides.

# More than One QuickPass at a Time

Issuing people more than one QP at a time would require a sophisticated allocation system to avoid clashes, but the potential benefit is to move the system toward full scheduling and efficiency. We did not have time to test this idea.

# Conclusion

# Review of Our Approach

Our two different simulations provide a strong safeguard against incorrect results. Extensive testing of the stability of the model and the robustness of our recommendation provides further support for our conclusions.

On the other hand, our model has limitations. Treating guests as having a memory and using a finer-grained QP allocation matrix are important possible improvements.
# Key Conclusions

We recommend a QP allocation system that allows many people a queue-free ride, without causing the normal queue to grow too much.

However, early guests may be given QPs for a late slot, or even refused a QP, while later arrivals fare better. Our investigations show that some such unfairness is intrinsic to a good QP system.

# References

Disney Tickets. 2004. http://www.disney-ticketshop.com/.
Grimmett, G., and D. Stirzaker. 2001. Probability and Random Processes. Oxford, UK: Oxford University Press.
Grimmett, G., and D. Welsh. 2001. Probability: An Introduction. Oxford, UK: Oxford University Press.
Six Flags brings Lo-Q technology to nine parks for 2002. 2002. http://www.ultimaterollercoaster.com/news/stories/021402_03.shtml.
Thorpe Park Guide. n.d. http://www.thorpeparkguide.com/.
Universal Studios. n.d. Universal Studios discount tickets. http://universalstudios-orlandoflorida.com/universalexpress.htm.

# Developing Improved Algorithms for QuickPass Systems

Moorea L. Brega

Alejandro L. Cantarero

Corry L. Lee

University of Colorado at Boulder

Boulder, CO

Advisor: Bengt Fornberg

# Summary

We model the arrivals at a "main attraction" of an amusement park by a Poisson process with constant rate; in a more advanced model, we vary the arrival rates throughout the day. The park is open $10\mathrm{~h}$/day, with a "peak arrival time" between 2.5 and $6\mathrm{~h}$ after the park opens.

We model how a group arriving at the attraction decides whether to enter the normal queue or to obtain a QuickPass (QP)—a pass to return later for a shorter wait. Their decision is governed by their desire to ride, the length of the normal queue, and the return time for the QP.

We explore several models for assigning QPs. The basic model, which gives absolute priority to the QP line, is problematic, since it can bring the normal line to a halt.
Our more advanced models avoid this problem by using a "boarding ratio"—either fixed or dynamically varying—for how many from each line to load onto the ride. We use polynomial regression to predict the behavior of the queues, and we determine a dynamic ratio that minimizes the wait times when the number of QPs that can be assigned per time interval is fixed. Finally, we combine these algorithms and determine dynamically both the boarding ratio and the number of QPs to issue throughout the day.

Our advanced models tend to be extremely robust to small perturbations in starting parameters, as well as to moderate variations in the number of arrivals. We avoid the problems attributed to the current QP system, as long as the ride is not "slammed" with substantially more guests than its capacity. In addition, our system cannot print shorter return times than have previously been issued. Averaging over a two-month period, the total wait times, the number of people in each queue when the park closes, and the number of QPs issued are consistent across all models.

# Introduction

In amusement parks, guests spend a great deal of time waiting in line, especially for popular rides. To reduce the time in line and hence increase overall enjoyment, a "QuickPass" system has been implemented at various locations. Rather than waiting in the normal queue for a ride, a guest can choose to enter a virtual queue during a one-hour time window later in the day.

Our model takes into account various factors, including the length of the normal queue, the number of people with a QP for the ride, and the percentage of the ride capacity that the QPs can commandeer.

# Disney's "FastPass" System

We base some of our system design on the "FastPass" system implemented by Disney theme parks for their most popular rides.
A guest who approaches a ride sees the projected wait time in the normal queue, as well as the current FastPass time window; if the guest chooses a FastPass, the system prints a ticket for that time window, which tells the guest when they can enter the FastPass line. Wait times in the FastPass queue tend to range from 5 to $10\mathrm{min}$ [R.Y.I. Enterprises 2004]. FastPasses are set to commandeer $40 - 90\%$ of the given ride's capacity [Jayne 2003]. A guest is allowed to get a FastPass every $45\mathrm{min}$ to $2\mathrm{h}$ , depending on how busy the park is. At popular attractions, FastPasses are often sold out before noon on busy days [Jayne 2003].

# Simplifying Assumptions

- At all times, we know the number of people in the amusement park (determined using turnstiles to count the entries and departures).
- At all times, we know the number of people in both the normal and the QP lines.
- Groups arriving together act together (e.g., all wait in the normal line or all obtain QPs).
- People who obtain QPs always return during their allotted time and enter the QP queue.
# Queue Flows and Wait Times

The flow rates (people/min) for the queues are determined as follows:

$$
f_{\mathrm{in}}^{\mathrm{NL}} = \frac{\sum_{i=1}^{L} N_{\mathrm{arrival}}^{\mathrm{NL}}(t + i\Delta t)}{\Delta t_{\mathrm{flow}}},
$$

$$
f_{\mathrm{out}}^{\mathrm{NL}} = \frac{\sum_{i=1}^{L} N_{\mathrm{exit}}^{\mathrm{NL}}(t + i\Delta t)}{\Delta t_{\mathrm{flow}}},
$$

$$
f_{\mathrm{in}}^{\mathrm{QP}} = \frac{\sum_{i=1}^{L} N_{\mathrm{arrival}}^{\mathrm{QP}}(t + i\Delta t)}{\Delta t_{\mathrm{flow}}},
$$

$$
f_{\mathrm{out}}^{\mathrm{QP}} = \frac{\sum_{i=1}^{L} N_{\mathrm{exit}}^{\mathrm{QP}}(t + i\Delta t)}{\Delta t_{\mathrm{flow}}},
$$

where

- $N_{\mathrm{arrival}}^{\mathrm{NL}}(t)$ is the number of people entering the normal queue at time $t$ ,
- $N_{\mathrm{arrival}}^{\mathrm{QP}}(t)$ is the number of people entering the QP queue at time $t$ ,
- $N_{\mathrm{exit}}(t)$ is the number of people leaving each queue at time $t$ ,
- $L$ is a fixed constant defining the size of the interval over which we wish to compute the flow, and
- $\Delta t_{\mathrm{flow}}$ is the total time over which the sum is computed, $L\,\Delta t$ .

Note that $L$ can vary over the day and the spacing $\Delta t$ is not necessarily uniform.

We use the flow rates to estimate the waiting times in the two queues. Because the flow rates can change suddenly, we use linear regression on the previous two flow values and the current flow value to help smooth the data.

# Basic Model

We begin with a basic model describing the important aspects of the "primary attraction."
We are interested in:

- the frequency of arrivals at the attraction,
- the number of people in each group that arrives at the attraction,
- the lengths of the normal and QP queues when the group arrives,
- how many groups obtain a QP, and
- the current state of the QP system (e.g., can it assign any more QPs today, or is it "sold out").

Figure 1 is a flowchart of the logic in the basic model. We first simulate the number of groups arriving at the attraction using a Poisson process with constant rate $\lambda$ .

![](images/0d8ca7871f97bdb3cd363b98fa69db9818027b4561e45a852c923a18845fa882.jpg)
Figure 1. Flowchart for the primary processes in the basic model.

We use a Poisson random variable with a mean of 3.6 to simulate the size of a group; the sampled value is added to the minimum group size of 1 person, resulting in a mean group size of 4.6 people. This choice of rate gives $84\%$ probability that groups have between 1 and 6 people.

If the wait-time of the normal queue is less than $30\mathrm{min}$ , the group enters the normal queue; if it is longer, they get a QP $50\%$ of the time (unless the QP system is sold out, in which case the group enters the normal queue).

The return time for the QP is $\max\left(t_{\mathrm{NL}}, t_{\mathrm{syst}}\right)$ , where $t_{\mathrm{NL}}$ is the predicted wait-time in the normal queue (based on the current number of people in both the queues and the ride capacity) and $t_{\mathrm{syst}}$ is the internal time of the QP system. The internal time is determined as follows:

1. When the QP system first turns on, the system time (and the start time for the first QP issued) is set to $t_{C} + t_{\mathrm{NL}}$ , where $t_{C}$ is the current clock time (say, 1 h after the park opens).
2.
From this point on, each time someone arrives at the attraction, we check:

(a) If the QP system has reached the maximum number of QPs issued for a given start time, we increment the system time by a fixed value (e.g., 5 min).
(b) If $t_C + t_{\mathrm{NL}} \leq t_{\mathrm{syst}}$ , we issue a QP with $t_{\mathrm{start}} = t_{\mathrm{syst}}$ ; otherwise
(c) if $t_{\mathrm{syst}} < t_C + t_{\mathrm{NL}}$ , then we issue a QP with $t_{\mathrm{start}} = t_C + t_{\mathrm{NL}}$ and update the system time to $t_{\mathrm{syst}} = t_C + t_{\mathrm{NL}}$ .

3. Once $t_{\mathrm{syst}} \geq T - 1.25 \, \mathrm{h}$ , where $T$ is the length of time that the park is open, we no longer issue QPs.

This system avoids the problem of the current system, where if the length of the normal line fluctuates drastically, a QP can be assigned for a time, say, $4\mathrm{h}$ away, and a short while later for a time only $1\mathrm{h}$ away. By resetting the system time to $t_{C} + t_{\mathrm{NL}}$ if this number is greater than the current system time, we guarantee that subsequent QPs always print a start time later than (or the same as) previous QPs.

The QP is issued to each guest with a time window from the specified $t_{\mathrm{start}}$ to that time plus one hour. We assume that all guests who obtain a QP return during their allotted time; their return is simulated using a uniform distribution over the hour for which the ticket is valid.

Once the current group enters the normal queue or obtains a QP, we check to see if any of the attraction's cars have left since the last group arrived and update the number of people in each queue.
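The internal-time rules above can be sketched as follows. This is a minimal Python illustration, not the authors' implementation; all times are in minutes from park opening, and `beta` is the maximum number of QPs issued per start time.

```python
class QuickPassClock:
    """Sketch of the monotone QuickPass clock described above.

    Because t_syst only ever moves forward, every printed start time is
    later than (or equal to) all previously printed start times.  Times
    are minutes from park opening; T is the park's operating time.
    """

    def __init__(self, t_clock, t_nl, beta=20, increment=5, T=600):
        self.t_syst = t_clock + t_nl       # rule 1: first start time
        self.issued_at_slot = 0
        self.beta = beta                   # max QPs per start time
        self.increment = increment         # minutes added when a slot fills
        self.cutoff = T - 75               # rule 3: stop 1.25 h before close

    def issue(self, t_clock, t_nl):
        """Return a start time, or None once the system has shut down."""
        if self.t_syst >= self.cutoff:     # rule 3
            return None
        if self.issued_at_slot >= self.beta:   # rule 2(a): slot is full
            self.t_syst += self.increment
            self.issued_at_slot = 0
        if t_clock + t_nl > self.t_syst:       # rule 2(c): clock jumps forward
            self.t_syst = t_clock + t_nl
        self.issued_at_slot += 1
        return self.t_syst                     # rule 2(b)
```

Since `t_syst` is never decreased, the pathological case in the problem statement (a 4-h ticket followed by a 1-h ticket) cannot occur.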
# Improvements on the Basic Model

# An Improved Decision Algorithm

We now include a decision algorithm that enables a group to make a choice based on three factors:

- their desire to ride the main attraction $(d \in [0,1])$ ,
- the length of the normal queue $(L_{\mathrm{NL}})$ , and
- the return-time for the QP $(L_{\mathrm{QP}})$ .

We define

$$
N \equiv L_{\mathrm{NL}} - \mu_{\mathrm{NL}}, \quad Q \equiv 0.3 L_{\mathrm{QP}} - \mu_{\mathrm{QP}}, \tag{1}
$$

where

$$
\mu_{\mathrm{NL}} \equiv \min\big(f(d), L_{\mathrm{NL}}\big), \qquad \mu_{\mathrm{QP}} \equiv \min\big(f(1 - d), L_{\mathrm{QP}}\big).
$$

The function $f(d)$ translates the group's desire to ride the attraction into the length of time that they are willing to wait in line. The function $f(1 - d)$ determines how much later in the day the group would be willing to return for the QP queue. Using $0.3 L_{\mathrm{QP}}$ instead of $L_{\mathrm{QP}}$ takes into account that people are more willing to wait in a "virtual" queue (where they can spend time riding other rides) than physically in line.

We define $f$ as a quadratic function passing through the points $(0, 0)$ , $(1, T)$ , and $(-1, T)$ , where $T$ is the length of time that the park is open. Thus, a person with zero desire would be willing to wait for no time, and a desire level of 1 would indicate that the group would wait all day for this ride. As a more realistic example, a desire level $d = 0.6$ indicates that the group is willing to wait in the normal line for $3.6 \, \text{h}$ ; they would prefer the QP option if the return time were less than $0.48 \, \text{h}$ .

The value for $d$ , the group's level of desire to ride, is taken from a normal distribution $N(\mu = 0.5, \sigma = 0.1938)$ , with $99.2\%$ of its area contained in [0, 1]. We compute $N$ and $Q$ from (1). The quantity with the minimum value determines whether the group enters the normal line or opts for the QP.
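A minimal Python sketch of this rule follows (ours, not the authors' code). Note that the unique quadratic through $(0,0)$ , $(1,T)$ , and $(-1,T)$ is $f(d) = T d^2$ ; the sketch transcribes (1) directly and breaks ties in favor of the normal line.

```python
from random import gauss

T = 10.0  # hours the park is open

def f(d):
    # Quadratic through (0,0), (1,T), (-1,T): tolerable wait f(d) = T*d**2.
    return T * d * d

def choose_queue(d, L_nl, L_qp):
    """Apply the rule in (1): the queue with the smaller value wins.

    d    : desire to ride, in [0, 1]
    L_nl : predicted wait in the normal queue (hours)
    L_qp : time until the QuickPass return window opens (hours)
    """
    mu_nl = min(f(d), L_nl)
    mu_qp = min(f(1.0 - d), L_qp)
    N = L_nl - mu_nl             # excess normal-queue wait beyond tolerance
    Q = 0.3 * L_qp - mu_qp       # "virtual" wait, discounted by the 0.3 factor
    return "normal" if N <= Q else "quickpass"

def random_desire():
    # d ~ Normal(0.5, 0.1938), clipped to [0, 1] (99.2% of mass lies inside).
    return min(1.0, max(0.0, gauss(0.5, 0.1938)))
```

For $d = 0.6$ this reproduces the $3.6$ -h normal-line tolerance quoted above.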
# Arrival Time at the Attraction

Amusement-park-goers know that the best times to ride the big attractions are early in the morning and late in the evening, when there are fewer people and the queues are shortest. In the basic model, we simulate the interarrival time between groups as an exponential random variable with constant rate $\lambda$ . In a more realistic model, $\lambda$ should vary with the time of day—people arrive less frequently near the beginning and near the end of the day. We define the mean interarrival spacing between groups as the piecewise continuous function

$$
\lambda = \left\{ \begin{array}{ll} \dfrac{1}{m_B} + \left(\dfrac{1}{M} - \dfrac{1}{m_B}\right) \dfrac{t}{a}, & 0 \leq t < a; \\[2ex] \dfrac{1}{M}, & a \leq t < b; \\[2ex] \dfrac{1}{M} + \left(\dfrac{1}{m_E} - \dfrac{1}{M}\right) \dfrac{t - b}{T - c - b}, & b \leq t < T - c; \\[2ex] \dfrac{1}{m_E}, & T - c \leq t < T, \end{array} \right. \tag{2}
$$

where

- $M$ is the expected number of groups per minute at the peak of the day,
- $m_B$ is the expected number of groups per minute at the start of the day,
- $m_E$ is the expected number at the end of the day,
- $a$ and $b$ are the beginning and end of the peak arrival time,
- $c$ is the amount of time before closing during which the interarrival spacing holds its constant value of $1/m_E$ , and
- $T$ is the number of hours that the park is open.

In our standard simulation, we take $M = 5$ , $m_B = 1.5$ , $m_E = 0.1$ , $a = 2.5$ h, $b = 6$ h, $c = 1$ h, and $T = 10$ h.

# Dynamically-Calculated Number of QuickPasses

In the simple model, the wait time for the QP queue is kept low by placing all QP guests on the ride before anyone from the normal queue. As a result, the normal queue may build up to a wait time of several hours.
A better system is to fix $\alpha$ , the ratio of the number of people allowed to board from the normal queue to the capacity of the ride; $\alpha = 0$ reverts to the simple model of boarding all QP guests before anyone from the normal queue.

To ensure that the wait time of the normal line remains reasonable, we need an educated guess of how many QPs to give out for future time intervals. Figure 2 shows the logic. The machine uses past and current information about the flow into and out of each queue to determine a projected wait time. The number of QPs for a time interval is determined by trying to keep the wait times of the queues below $20\mathrm{min}$ (QP) and $2.5\mathrm{h}$ (normal).

![](images/2ed031ff0d21020b91147be1ab3cab661d66e6add2fadedef59a9ff2f82c54b.jpg)
Figure 2. A flowchart of the QuickPass system that dynamically varies the number of QuickPasses offered for any one time interval.

Every $15\mathrm{min}$ , the QP machine calculates the average number of people entering and leaving each line during that time interval. Then, using a polynomial regression through the last three available flow rates, it calculates an estimate of the flow rates at $15\mathrm{min}$ past the start of the QP return time window, $t_f = t_r + 15\mathrm{min}$ . We approximate the flow rate from the current time $t_c$ to $t_f$ as the average of the current flow $f_c$ and the flow determined by polynomial regression: $\hat{f} = \frac{1}{2}(f_p + f_c)$ . An approximation to the number of people in the queue at time $t_f$ is

$$
N_f \approx \bar{N} - \hat{f}_{\mathrm{out}} t_f + \hat{f}_{\mathrm{in}} t_f = \bar{N} - \frac{1}{2}\left(f_{p,\mathrm{out}} + f_{c,\mathrm{out}}\right) t_f + \frac{1}{2}\left(f_{p,\mathrm{in}} + f_{c,\mathrm{in}}\right) t_f, \tag{3}
$$

where $\bar{N}$ is the estimated number of people currently in the queue.
The number of people in the queue depends only on the current and projected flow rates for that queue and the estimated number of people currently in the queue. If (3) produces a negative number (indicating an unrealistic approximation of the flow rate), we use instead the current flow rates,

$$
N_f = \bar{N} \frac{f_{c,\mathrm{in}}}{f_{c,\mathrm{out}}}.
$$

The projected wait time $t_{\mathrm{NL}}$ for the normal line is given by the projected number of people in the queue, $N_f^{\mathrm{NL}}$ , divided by the estimated future outflow rate:

$$
t_{\mathrm{NL}} = \frac{N_f^{\mathrm{NL}}}{f_{p,\mathrm{out}}^{\mathrm{NL}}}.
$$

The wait time for the QP queue is computed in a similar manner. If the wait times for the normal queue and the QP queue are below their maximum acceptable wait times and the flow for the QP line is zero, the number of QPs issued is determined using

$$
n = 4(1 - \alpha)\, f_{\mathrm{out}}^{\mathrm{NL}} \min\left(20\,\mathrm{min}, \frac{t_{\mathrm{NL}}}{3}\right).
$$

If the flow for the QP queue is nonzero, we use

$$
n = 4 f_{\mathrm{out}}^{\mathrm{QP}} \min\left(20\,\mathrm{min}, \frac{t_{\mathrm{NL}}}{3}\right).
$$

This calculation is done every 15 min or whenever all the QPs for the time interval have been issued.

# Dynamic Ride-Loading Ratio

In the previous model, we fix the number of guests from each queue who can enter the ride and dynamically vary the number of QPs issued for each time interval. Here, we consider the opposite idea: The QP system issues a fixed number of tickets for every time interval, and the parameter $\alpha$ (the number of people from the normal queue divided by the total capacity of the ride) varies. Figure 3 shows the logic chart for this system.

![](images/09299d4bf2ea1c18eab6feecfa60dbbdf59a9a9a00953003452db3c7.jpg)
Figure 3.
Regulating queue lengths by dynamically varying the ratio of people who board the ride from each queue.

We begin the day with an arbitrary $\alpha$ between 0 and 1. Once the QP queue forms, a new value is calculated by minimizing the dimensionless weighted sum of the wait times for each queue. First, the average wait time for each line is calculated by determining the average outflow rates for both lines and the average number of people in each queue during that time,

$$
t_w = \frac{\bar{N}}{\bar{f}_{\mathrm{out}}}.
$$

Each waiting time is then normalized by the maximum acceptable waiting time, $20\mathrm{min}$ for the QP queue and $2.5\mathrm{h}$ for the normal queue, to create the weighting factors

$$
\beta = \frac{t_w^{\mathrm{QP}}}{1200}, \qquad \eta = \frac{t_w^{\mathrm{NL}}}{9000},
$$

with times in seconds. We then determine the value for $\alpha = 1 - \kappa$ that minimizes the dimensionless wait time

$$
\text{dimensionless wait time} = \beta \frac{f_{\mathrm{in}}^{\mathrm{QP}}}{144 \kappa^2} + \eta \frac{f_{\mathrm{in}}^{\mathrm{NL}}}{144 (1 - \kappa)^2}
$$

by solving for the real root between 0 and 1 of the cubic polynomial

$$
0 = (\mu + \gamma) \kappa^3 - 3\gamma \kappa^2 + 3\gamma \kappa - \gamma,
$$

where

$$
\gamma = \beta \bar{N}_{\mathrm{QP}} f_{\mathrm{in}}^{\mathrm{QP}}, \qquad \mu = \eta \bar{N}_{\mathrm{NL}} f_{\mathrm{in}}^{\mathrm{NL}}.
$$

# Dynamic QPs and Boarding Ratio

In this model, we simultaneously vary both the number of QPs per time interval and the boarding ratio $\alpha$ . Hence, we need only ensure that the initial parameter value for $\alpha$ is reasonable. If the initial $\alpha$ is too large, the algorithm will indicate that the system should issue no QPs at all; the algorithm to update $\alpha$ will then continue using the same $\alpha$ because no one has arrived in the QP queue.
Hence, we expect this new model to be sensitive to the initial value of $\alpha$ .

# Three-Tiered Queueing

The final improvement to the system adds a third queue, the PriorityOnePass (POP) queue, which is to the QP queue as the QP queue is to the regular line. This new queue will be shorter than the QP (now Priority II) queue, with a wait no longer than the arrival time between cars on the ride, but also with a return window of only 15 min that can be booked for as soon as 45 min from the current time. In essence, this new queue allows guests to "make an appointment" to ride the attraction. We assume that everyone who takes a POP or QP returns during their designated time window.

The POP queue has a maximum of 600 tickets for the day: 15 for every 15-min interval throughout the day. However, because the POP machine starts only after the wait time for the regular queue has exceeded half an hour, some POP tickets may not be issued.

The decision algorithm for the queues is relatively basic: If the return times for both queues are about the same, guests take the QP ticket $70\%$ of the time because it allows more flexibility. The other cases are shown in detail in Figure 4. This model can give return times that are not in chronological order.

Once the group has decided to take a POP, their return time is chosen according to a $\chi^2(2)$ distribution, which favors return times closer to the current time while still allowing for times later in the day.

We did not fully implement this model; in comparing the models, we focus on the previously described systems.

![](images/01a3027a90b51c69c68b2975f8a567ca60f99c2a0ccf5074acd638cbdc0372d8.jpg)
Figure 4. Flowchart of the decision algorithm for choosing a QP or a POP.

# Results

# Basic Model

The primary parameter in the basic model is $\lambda$ , the rate for the Poisson process for the arrival of groups; changing $\lambda$ effectively changes the popularity of the ride.
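A quick back-of-the-envelope check (ours, not the authors') connects $\lambda$ to daily demand: over a 10-h day, groups arrive every $\lambda^{-1}$ seconds on average, each averaging 4.6 people.

```python
def expected_guests(mean_interarrival_s, hours_open=10, mean_group_size=4.6):
    """Expected daily arrivals at the attraction.

    A Poisson process whose groups arrive every `mean_interarrival_s`
    seconds on average produces (T / mean_interarrival_s) groups over a
    T-second day; each group averages 4.6 people.
    """
    seconds = hours_open * 3600
    return (seconds / mean_interarrival_s) * mean_group_size

# lambda^-1 = 10 s  ->  ~16,560 guests (roughly twice the ride's capacity)
# lambda^-1 = 23 s  ->  7,200 guests, exactly the ride's daily capacity
```

This matches the simulated totals in Table 1 below to within sampling noise.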
+ +In Table 1 we present daily totals for the basic model with a constant interarrival rate. Results are shown for various values of $\lambda$ , with a value of $\beta = 20$ QP tickets before incrementing the internal time by $5\mathrm{min}$ . The capacity of the ride is 7,200 people/day ( $\lambda^{-1} = 23$ ). + +As expected, when the total number of arrivals at the attraction is about $7,200 (\lambda^{-1} = 23)$ , the QP system never activates, because we never have a wait-time in the normal queue of longer than $30\mathrm{min}$ . For both $\lambda^{-1} = 10$ and + +Table 1. Results for the basic model for various values of the parameter $\lambda$ (the rate of arrival of groups at the attraction). + +
| $\lambda^{-1}$ | Guests | QPs | Total riders | QP riders | Normal line riders |
|---:|---:|---:|---:|---:|---:|
| 10 | 16,657 | 2,335 | 7,200 | 2,335 | 4,865 |
| 17 | 10,047 | 2,362 | 7,200 | 2,362 | 4,838 |
| 21 | 7,863 | 441 | 7,157 | 441 | 6,716 |
| 23 | 7,371 | 0 | 7,025 | 0 | 7,025 |
$\lambda^{-1} = 17$ , corresponding to numbers of guests that swamp the system, the basic model issues about the same number of QPs.

In addition to the day-end totals, it is interesting to look at the behavior of the queues (normal and QP), the flow-rates in and out of these queues, estimated waiting times in the queues, the expected wait-time for the ride (the total, at each time interval, of the predicted wait in the physical QP queue and that in the normal queue), start-times issued by the QP system, and the ratio of people choosing the QP queue over the normal queue, all as functions of time. [EDITOR'S NOTE: The authors' complete paper included many more graphs and analyses of these features; we cannot include them all here.]

# When Demand Is Near Capacity ...

The only parameter of the QP model that can be adjusted is $\beta$ , the maximum number of QPs that can be issued before the system's time clock ( $t_{\mathrm{syst}}$ ) advances. For example, with $\lambda^{-1} = 20$ (the mean interarrival time between groups is 20 sec), and with $\beta = 20$ QPs that can be issued before incrementing the system time by 5 min, we issue about 1,000 QPs. When we can distribute ten times as many— $\beta = 200$ QPs—before incrementing, we issue only about twice as many QPs; the limited number of people arriving during the 5-minute interval prevents a huge increase in the number of QPs issued (Figure 5).

# What Happens When Demand Swamps Capacity?

We compare the situation of $\lambda^{-1} = 20$ (the number of guests is approximately the capacity of the ride) with $\lambda^{-1} = 10$ (twice as many guests as the capacity), considering in each instance the cases $\beta = 20$ and $\beta = 200$ , the maximum number of QPs issued before the QP clock is advanced by $5\mathrm{min}$ .
For the $\lambda^{-1} = 20$ cases, the overall wait-time increases linearly from when the park opens up until the QP system goes online, at which point it plateaus; the wait times increase again once the QP system is sold out for the day. The wait-time of the QP queue stays very short ( $< 4\mathrm{min}$ ).

For $\lambda^{-1} = 10$ , guests begin entering the QP queue only an hour after the park opens, and QPs sell out 3 to $5\mathrm{h}$ after the park opens (5 h is halfway through the day), depending upon how many QPs are allowed.

![](images/cace6cead453f864a46976a818b44ea943579d6c356380c4c9c03c5826fac892.jpg)

![](images/22b6227940399ef118723938aeab62130b2f2e727f0e3a9eea269a60026b4f91.jpg)

![](images/54edd982c43906a0f01921811642e05cc6c53705f6ce8c4a2082b550f03c066a.jpg)
a. $\lambda^{-1} = 20, \beta = 20$ .

![](images/77c1c2f48e41562efe523da77735305ddb4dc3410854cf0d5726d9c88d095ef9.jpg)
b. $\lambda^{-1} = 20, \beta = 200$ .
Figure 5. Basic model: Inflow and expected wait time with $\lambda^{-1} = 20$ (8,000 guests), for two values of $\beta$ , the number of QPs that can be issued before the QuickPass system increments the start time by $5\mathrm{min}$ : a. $\beta = 20$ . b. $\beta = 200$ .

Because the number of total arrivals stays approximately constant throughout the day ( $\sim 30$ people/min), the high priority for QP riders substantially slows down the normal queue (even though when the QP system is active, the flow rate into the normal queue drops to $\sim 15$ people/min), increasing the average wait-time for people in the normal line and leaving an enormous number of people in the normal line when the park closes (Figure 6). This is clearly not an ideal system.

# Improved Model

The basic model has groups take a QP $50\%$ of the time when the normal queue has a predicted wait-time longer than $30\mathrm{min}$ .
In the improved model, the decision of which queue to enter is based on the group's desire to ride the attraction, the length of the normal queue, and how much later in the day the group would return if they took a QP.

![](images/7100fa42e85d9305f76bf7d6341dcb3276f918f2fe64d132ccda274a7134421.jpg)

![](images/a35a233fc12f958fe019fddbcb452b93f8d75b5a54b15d255a4dc3a37b553b45.jpg)

![](images/986e8d690ef32a09f887d3c037e90bbbdcf5074acd6d558c9ad23bd0516801cdaf.jpg)
a. $\lambda^{-1} = 20$ (8,000 guests), $\beta = 200$ max QPs before time increment.
Figure 6. Basic model: Inflow and length of normal queue with $\beta = 200$ , for varying arrival intensity. a. $\lambda^{-1} = 20$ (8,000 guests). b. $\lambda^{-1} = 10$ (16,000 guests).

![](images/4b920247051e6dff231bbffba9eb23050b49db88e5d4c126a1baea64ccffd9ca.jpg)

The behavior for the two decision algorithms is similar when the number of arrivals corresponds reasonably closely with the ride capacity ( $\lambda = 1/20$ ). When arrivals overwhelm capacity, however, the new decision algorithm tends to "flood the queue." The plots in Figure 7 give the predicted wait-time for the QP queue throughout the day with the new decision model, for two levels of ride demand. Because the number of arrivals is so much greater than the ride capacity when $\lambda^{-1} = 10$ , the normal queue has no outflow during the middle of the day and its length grows so long that more and more people take a QP, until the QPs sell out. Even though many more people choose QPs in this situation, the QP queue still empties out nicely by the end of the day.

# Basic Model, Variable Interarrival Rate

We revert to the original "dummy" decision algorithm, where people choose the QP $50\%$ of the time when the normal line's wait-time exceeds $30\mathrm{min}$ and the interarrival spacing is given in (2).
We choose a standard set of parameters that result in a rush of arrivals during the peak hours but do not cause a total demand greater than the ride's capacity. We define the peak arrival time to be between $t = 2.5$ and $t = 6$ h after the park opens. The standard parameters that we use are: $M = 5$ , $m_B = 1.5$ , and $m_E = 0.1$ groups/min, where $M$ is the maximum number of groups/min during the peak-times of the day, $m_B$ is the minimum number as seen at the beginning of the day, and $m_E$ is the minimum as seen at the end of the day. With the interarrival spacing dependent on the time of day (and hence on the occupancy of the park), the normal queue nearly empties by closing time.

![](images/8ef06409688f58b92377cc6252747cafafb408952fa61d62fdf07e9ddc0cd27a.jpg)
Figure 7. Basic model with improved decision algorithm: Expected wait time in the QuickPass queue with $\beta = 200$ QPs issued before the QP system increments $t_{\mathrm{start}}$ by 5 minutes, for two levels of arrival intensity. a. $\lambda^{-1} = 20$ . b. $\lambda^{-1} = 10$ .

![](images/549be7280090ea1fbad2a262cc579909746baa5db616ae7da40205841b2d31f3.jpg)

# Variable Rate and Improved Decision Algorithm

Using both of the improvements to the basic model, and approximately 7,400 people arriving at the attraction, we issue 1,500 QPs if we allow $\beta = 20$ QPs per $5\mathrm{min}$ and 3,000 if we allow $\beta = 200$ QPs per $5\mathrm{min}$ . The peak wait-times are respectively $3\mathrm{h}$ and $2\mathrm{h}$ .

# Dynamically Varying Parameters

[EDITOR'S NOTE: Although the authors presented results on separately varying the boarding fraction $\alpha$ and the number of QPs, space does not permit including those results here.]

# Varying Both $\alpha$ and the Number of QuickPasses

The final model allows both the boarding ratio and the number of QPs issued per time interval to vary dynamically.
Figure 8 shows the results of this model with our standard set of parameters describing arrival times throughout + +the day. We begin the day with $\alpha = 0.5$ . During the day, this value climbs as high as 0.6. The number of QuickPasses made available per 5 min period rises to 700 slightly more than $2\mathrm{h}$ after the ride opens, before declining to 0 about $2\mathrm{h}$ later. Although the expected wait time in the QP queue has a maximum at a little over $1\mathrm{h}$ , wait times in the normal queue reach $4\mathrm{h}$ , with the queue more than $1\mathrm{h}$ long at closing time. + +![](images/9881cf9d7df79f5aa903a6a773dc51521e400d0d1ca8a5662a6a748d2eb8ed0d.jpg) +Figure 8. Expected wait times in the queues with dynamically varying $\alpha$ (proportion of people boarding from the normal queue) and $\beta$ (the number of QuickPasses issued before incrementing the system time), for an initial $\alpha = 0.5$ . + +![](images/b9ab0ec54662b9976ee5873b724d04120ca030e2a5049db13ac18e3b64f52811.jpg) + +# Statistical Analysis + +We ran two-month trials for each QP system and summarized the overall performance of each QP system in terms of mean and standard deviation of various quantities. The average hourly wait-times are very similar for the four models, but with a larger variance (by as much as a factor of two) for the models that vary the number of QPs issued. Those tend to result in fewer people remaining in the queues when the ride shuts down for the day; they also tend to issue fewer QPs. The total and maximum queue wait-times are surprisingly consistent across the four models. + +# Strengths and Weaknesses of the Models + +# Strengths + +- Our models are fairly robust to changes in parameters, including the two most important parameters, the boarding ratio and the number of QPs to issue. +- Our QP system cannot move "backward" in time. 
For example, it will not print out a QP for four hours in the future and then a half hour later print a ticket for one hour in the future.

# Weaknesses

- All the models rely on flow data, which can vary rapidly over short time intervals. Thus, the wait times for the two queues can change rapidly, even when we use linear regression to better estimate the average flows. Because we use flows to determine wait time, the average wait time is only a rough approximation of the actual average; to obtain a better sense of the average, we would need to follow individuals through each queue and determine exactly how long each guest waits for the ride.
- The models assume that everyone who obtains a QP returns during their allotted window and that everyone in line stays in line until they reach the ride. In reality, some guests with QP tickets miss their window or decide not to return, and some in the normal queue get frustrated with the wait and leave.
- In addition, our models look at only a single ride. If several rides have a QP system, all ride systems must interact to determine how many QPs to give out for each ride in a single interval. A more complex model would have to take into consideration how people move between rides and how long they are willing to wait based on the lines of other rides in the park.

# Conclusion

We model the arrival of groups at an attraction and their decision process when faced with the option of obtaining a QP. We analyze the effect of different versions of a QP system, including dynamically adjusting the system. Our system avoids current problems, such as printing earlier return times than those previously issued.

For all our models, we obtain reasonable behavior when the number of people arriving at the attraction does not greatly exceed its capacity.
Averaging behaviour over a two-month period, we find that the total waiting time, the number of people in each queue at the time the park closes, and the number of QPs issued per day are consistent across all our models within their statistical errors. Results of individual days show larger differences when the ride is "slammed." + +Finally, we developed but did not implement a more sophisticated algorithm with an additional PriorityOnePass option. + +# References + +Burden, R.L., and J.D. Faires. 2001. Numerical Analysis. 7th ed. Belmont, CA: Brooks/Cole, Thomson Learning. +Jayne, A.W. 2003. How much time does Disney's Fast Pass save? http://members.aol.com/ajaynejr/fastsave.htm. +R.Y.I. Enterprises, LLC. 2004. Fastpass. http://allearsnet.com/tp/fastpass.htm. +Ross, S.M. 2002. A First Course in Probability. 6th ed. Upper Saddle River, NJ: Prentice Hall. +__________ 2003. Introduction to Probability Models. 8th ed. New York: Academic Press. +Strang, G. 1988. Linear Algebra and its Applications. 3rd ed. Philadelphia, PA: Harcourt College Publishers. +Tijms, H.C. 1994. Stochastic Models: An Algorithmic Approach. New York: John Wiley & Sons. +Wackerly, D.D., W. Mendenhall, and R.L. Scheaffer. 2002. Mathematical Statistics with Applications. 6th ed. Duxbury, CA: Duxbury, Thomson Learning. +Werner Technologies. n.d. Disney Fastpass information. http://www.wdwinfo.com/wdwinfo/fastpass.htm. +Yakowitz, S.J. 1977. Computational Probability and Simulation. Reading, MA: Addison-Wesley Publishing Company. + +# KalmanQueue: An Adaptive Approach to Virtual Queueing + +Tracy Clark Lovejoy +Aleksandr Yakovlevitch Aravkin +Casey Schneider-Mizell +University of Washington +Seattle, WA + +Advisor: James Allen Morrow + +# Summary + +QuickPass (QP) is a virtual queueing system to allow some theme-park guests to cut their waiting time by scheduling a ride in advance. We propose innovative QP systems that maximize guest enjoyment. 
Only a small portion of guests can effectively use QP, and a good system maximizes the size of this group subject to the constraints that regular users are not significantly affected and maximum waiting time for QP users is small.

We define and test a simple model for single-line formation and then develop two QP systems, GhostQueue and KalmanQueue. GhostQueue is intuitive and simple but would be far from optimal in practice. We then propose that the best model is one that adapts to its environment rather than trying to enforce rigid parameters. We implement KalmanQueue, a highly adaptive system that uses an algorithm inspired by the Kalman filter to adjust the number of QPs given today based on the maximum length of the QP line yesterday, while filtering out random noise. We simulate the KalmanQueue system with a C++ program and randomized input from our line-formation model. This system quickly converges to a nearly optimal solution. It is, however, sensitive to some parameters. We discuss the expected effectiveness of the system in a real environment and conclude that KalmanQueue is a good solution.

# Introduction

The underlying idea of how to reduce waiting time for some theme-park guests is simple: Rather than wait in line, guests get tickets that tell them when to come back; when they return, they wait in a shorter line before going on the ride.

Many such systems have been implemented, including QuickPass, FastPass, Freeway, Lo-Q, and Ticket-To-Ride; some have failed and others have thrived. The appeal to the parks is twofold: The systems increase guests' enjoyment, and guests spend more money in the park instead of waiting in line.

All systems that we researched assume that the number of QP users is small and manage the system either by restricting the total number of QPs or by charging a fee for them.

# Plan of Attack

We seek to maximize guest enjoyment.
- Define Terms: We state a definition of "guest enjoyment" and explain what affects it, why, and how we model it.
- State Assumptions: We restate the problem in a mathematical way.
- Describe a Good Model: An effective QP system has certain desirable characteristics. Describing these steers our model in the right direction.

We then present our models:

- Line-Formation Model: The QP system is a line-manipulation system, and we cannot hope to design it without first understanding line formation.
- GhostQueue: A Simple Model: We describe a simple but limited approach to a QP system.
- KalmanQueue: An Adaptive Algorithm: We propose, model, and test an adaptive algorithm as a solution to the QP system. We provide a simple implementation using an adapted Kalman filter and test it using randomly generated input. We then discuss the strengths and weaknesses of our specific implementation and of the model in general.

# Increasing Enjoyment

Guests enjoy wandering freely more than they enjoy waiting in a line. This is the basic assumption that makes virtual queueing potentially useful for increasing guest enjoyment. When not in line for a specific ride, guests can enjoy more of the park's attractions, including other rides, food courts, and shopping areas. We assume that enjoyment of the park increases as overall waiting time decreases.

- All guests must perceive that they are treated fairly. A QP system must operate logically and be comprehensible, at least in function, to the guests. A system that is perceived as random may cause discontent even if it minimizes waiting times. We see an example of this in the problem statement, where unexplained changes in scheduled times between adjacent tickets cause complaints.
- QP must not significantly detract from the enjoyment of those not using it. The population of QP users is small compared to the total park population.
Regardless of QP implementation, rides must operate at capacity as long as there is demand; otherwise, the general population is affected and upset. The QP system should be more enjoyable to those who use it and not significantly affect those who do not.

# Properties of a Good Model

- Solves the problem. Our model should maximize user enjoyment as we have defined it, subject to the constraints we defined. It need not be optimal, but it should be very good.
- Ease of implementation. We intend for this system to be used in an actual theme park. Thus, we aim for simplicity of implementation rather than mathematical complexity.
- Ease of use. We do not want a system that runs smoothly only when everybody shows up exactly on time but degenerates when this is not the case.
- Insensitivity to random events. Park attendance varies, and how guests use rides can be modeled by various probability distributions. We want the effectiveness of our QP system not to decrease due to chance.
- Adjustable and adaptive. We do not want a model with a large number of parameters that must be reset every day because of various conditions. We want a model that can easily be adjusted, or adjusts itself, based on its environment.

# Basic Queueing Theory: Is It Useful?

Queueing theory is a well-researched branch of mathematics, with applications ranging from grocery-store-line models to computer-processing event queues. We discuss its basic concepts and apply them to our problem.

The QP system operates during peak hours, when lines for major attractions are not increasing at a significant rate. Thus, we can assume that we are in a steady state. This assumption makes sense, because we expect guests to stop getting into lines if they grow too large.

In a steady state, we can make the following key assumptions:

1. Mean guests served per minute, $\mu$, is constant.
2. Mean guests arriving per minute, $\lambda$, is constant.
3.
On average, more guests are served per minute than arrive. That is, $\mu > \lambda$.

Assumptions 1 and 2 mean that neither services nor arrivals depend on other factors, most importantly time and pre-existing line length. For both of these parameters, only the time-averaged input and output rates are considered, but the time between any two consecutive arrivals or departures need not be the same. This randomness leads to nonintuitive conclusions (below). Assumption 3 is valid, since if it were not the case, the line would continue to grow.

Given these assumptions, the results are [Ruiz-Pala et al. 1967]:

$$
\text{mean number of guests in line} = \frac{\lambda^{2}}{\mu(\mu - \lambda)},
$$

$$
\text{mean waiting time} = \frac{1}{\mu - \lambda}, \tag{1}
$$

$$
\text{probability of waiting} = \rho = \frac{\lambda}{\mu}.
$$

Problems arise when $\lambda \approx \mu$; both the mean waiting time and the line grow arbitrarily large when $\rho$ is near 1. Consider the case of a ride with a wait time of $1\ \mathrm{h}$, like those in Table 1:

$$
60\ \text{min} = \frac{1}{\mu - \lambda} \quad \Longrightarrow \quad \mu = \lambda + \frac{1}{60}.
$$

To predict the waiting time accurately, even on the order of $1\ \mathrm{h}$, one must know $\lambda$ and $\mu$ to at least two decimal places. This may be possible given accurate statistics over a period of time; however, these figures are not easily found, perhaps due to competition in the theme-park business.

In short, we need a new model for long lines that can predict the long wait times shown in Table 1 and is not terribly sensitive to the parameters $\mu$ and $\lambda$.

We pursue this goal later; now we present a short example that suggests that queueing theory is useful when wait times and line lengths are small.

Table 1.
Statistics for 10 popular rides at Cedar Point Amusement Park (with somewhat tongue-in-cheek "Thrill rating") [Cedar Point Information 2003].
+ +
| Thrill rating (out of 5) | Average wait time (min) | Riders/hr | Ride |
|---|---|---|---|
| 3 | 15–30 | 1,400 | Blue Streak |
| 3 | 15–30 | 2,000 | Iron Dragon |
| 2 | 15 | 1,800 | Jr. Gemini |
| 4 | 30–45 | 2,000 | Magnum |
| 4.5 | 45 | 1,800 | Mantis |
| 5 | 60+ | 1,600 | Millennium Force |
| 4.5 | 45 | 1,800 | Raptor |
| 5 | 60–180 | 1,000 | Top Thrill Dragster |
| 4.5 | 45 | 1,000 | Wicked Twister |
| 3.5 | 30 | 1,800 | Wild Cat |
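As a numerical sanity check, formulas (1) can be evaluated directly. The sketch below uses hypothetical values of $\lambda$ and $\mu$ (not data from Table 1) and illustrates how sharply the predicted wait depends on the margin $\mu - \lambda$:

```python
def mm1_stats(lam, mu):
    """Steady-state quantities from formulas (1).

    lam: mean guests arriving per minute; mu: mean guests served per minute.
    Assumption 3 requires mu > lam."""
    if mu <= lam:
        raise ValueError("queue is unstable when mu <= lam")
    mean_in_line = lam ** 2 / (mu * (mu - lam))
    mean_wait = 1.0 / (mu - lam)   # minutes
    rho = lam / mu                 # probability of waiting
    return mean_in_line, mean_wait, rho

# Sensitivity near rho = 1: halving the margin mu - lam doubles the wait.
_, wait_a, _ = mm1_stats(10.0, 10.0 + 1 / 60)    # predicted wait ~ 60 min
_, wait_b, _ = mm1_stats(10.0, 10.0 + 1 / 120)   # predicted wait ~ 120 min
```

This makes concrete the point in the text: with $\lambda = 10$, distinguishing a 60-minute wait from a 120-minute wait requires knowing $\mu$ to better than one part in a thousand.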
# Queueing Theory in Our Cafeteria

We collected data during the noon lunch rush in our university's cafeteria (Figure 1), from which we find $\lambda = 1$, $\mu_{\mathrm{subs}} = 1.1$, and $\mu_{\mathrm{pizza}} = 4$, and

$$
\text{pizza mean wait time} = \frac{1}{4 - 1} \approx 0.3\ \text{min},
$$

$$
\text{sub mean wait time} = \frac{1}{1.1 - 1} = 10\ \text{min}.
$$

These figures agree with our experience. If our QP system has the same characteristics as the sub shop and the pizzeria, then we are in great shape.

# Long-Line Formation Model with Limited Sensitivity

We need to consider only one line, because we can treat every ride as independent. The number of visitors to a ride who have QPs for another ride is assumed to be small, so we neglect their impact.

The queueing-theory results assume that the average arrivals and the average rate of service are constant. Here we discard these assumptions in favor of a differential-equation approach to modeling a line. Then we selectively add assumptions as necessary to produce a realistic approach.

The rate of change of the length of a line should depend on the number of guests in the park, the probability that they want to join the line, and the constant service rate of the ride. For a line of length $L$, the rate of change is given by the input rate $I$ minus the output rate $O$:

$$
\frac{dL(t)}{dt} = I - O.
$$

![](images/a6e5985378b979c0edf20bd890d57c4265ad91ad7e0a7e176bbfc0b6809fd1ef.jpg)
Figure 1. Observed line lengths as a function of time in the cafeteria during the lunch rush.

The input is the number of guests who join the line. This is given by the product of the population $P$ who could get on the ride with the probability $\alpha$ that they are interested in it during one time interval. Hence,

$$
\frac{dL(t)}{dt} = \alpha P(t) - O. \tag{2}
$$

Here,

- $\alpha$, the probability that someone joins the line, is a function of the current length of the line and the perceived fun of that ride;
- $P(t)$, the number of guests in the park, is a function of time; and
- $O$ is constant, since rides run only as often as the machinery allows.

Let $\alpha$ be constant. Then, for the estimate of park attendance $P(t)$ shown in Figure 2, the solution to (2) is intuitive: The line has zero length until the park population reaches the $O/\alpha$ line, when it briefly has an increasing slope. Next, the slope is constant until park attendance begins to decrease. Only then does the line reach its maximum, as park attendance falls below $O/\alpha$.

The longest lines occur around the peak. This is also the flattest part of the line-length curve, varying on the order of $\pm 10\%$ during the time span about the peak. Since this is exactly when our model should be effective, we assume that the line length does not change greatly over time.

![](images/62f6ac6e189e1f93d975de47fbe8cceb6f3f0db03bdb495ec68727f72450ee4b.jpg)
Figure 2. Line length $L(t)$ as predicted by (2) for constant $\alpha$ and the park population $P(t)$ shown.

The differential equations assume a lack of variation on the part of guests and ride operators; if both act with clockwork precision, the approach is sufficient. However, queueing theory and common sense tell us that the differential-equation approach is too deterministic; a good model must thus take statistical deviations into account. Even so, the line length predicted in Figure 2 matches remarkably well with the data in Figure 1.

To incorporate statistical deviations, we introduce a computer simulation.

# Computer Simulation of Long Line Model

Our computer simulation dequeues (removes from the queue) at a fixed probability, enqueues (adds to the queue) at a probability dependent on time, and both are subject to noise from a random-number generator.
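A minimal sketch of such a discrete-time simulation follows; the triangular park-population profile, the rates, and the uniform noise are illustrative stand-ins for the actual implementation:

```python
import random

def simulate_line(n_steps=2000, service=1.0, alpha=0.001, seed=0):
    """Discrete-time line model: noisy enqueues driven by park population,
    constant dequeues. The triangular population profile is illustrative."""
    rng = random.Random(seed)
    length = 0.0
    history = []
    peak = n_steps / 2
    for t in range(n_steps):
        pop = 2000 * (1 - abs(t - peak) / peak)     # park population P(t)
        arrivals = rng.uniform(0, 2) * alpha * pop  # noisy input, mean alpha*P(t)
        length = max(0.0, length + arrivals - service)
        history.append(length)
    return history

history = simulate_line()
# The line stays empty until P(t) crosses O/alpha = 1000 and, on average,
# keeps growing until P(t) falls back below that level, as in Figure 2.
```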
We subdivide time into $N$ equal discrete time steps.

Figure 3 shows an example of the output for $N = 2,000$, an average of 1 guest dequeued per time step, and 0 to 2 guests enqueued per time step. The shape of the figure closely resembles the model for line growth in Figure 2 and our data in Figure 1.

![](images/61394b01393a09dfbe72f70e313616180ab9144768d6d5700f307768989350c2.jpg)
Figure 3. Simulated line length for the population input shown.

# GhostQueue

The GhostQueue process behaves as if the guest has a "ghost" who stands in line instead, calling the guest back only when the ghost reaches the front of the line. We find that GhostQueue works well at very limited capacity; but as capacity grows, it suffers from the same problem as a normal line.

We assume that the wait time for the normal line is known to the system. Many virtual-queueing systems, such as Disneyland's FastPass, display this information at the QP kiosk [O'Brien 2001].

The GhostQueue system works in the following way:

- The system checks the length of the normal line, computes the expected time when the guest would enter the ride, and gives the guest a ticket stamped with this time as the beginning of a short time window during which the guest can enter the GhostQueue line.
- The guest is free to roam about the park.
- When the guest's window is about to begin, the guest goes back to the ride.
- The ride takes guests first from the GhostQueue line, then from the normal line.

In theory, nothing changes for guests in the normal line; it acts exactly as if all guests were still present, even though only their ghosts wait. The average wait time $\bar{w}$ is given by

$$
\bar{w} = \frac{\text{total time waited}}{\text{total guests}}.
$$

With some guests not waiting, yet the total number of guests staying the same, $\bar{w}$ seems to go down with each new guest using GhostQueue. The optimal solution thus appears to be (but is not!)
assigning everyone a ghost and watching the average wait time drop to zero.

Why is this not the best approach? Guests returning at a predetermined time is a probabilistic rather than a deterministic process, so queueing-theory results apply. To run at capacity and not create a line, guests must arrive at the same rate that the ride is boarding. In terms of the parameters of queueing theory, this means $\lambda = \mu$. However, (1) tells us that

$$
\text{mean wait time for entry} = \frac{1}{\mu - \lambda} \xrightarrow{\lambda \rightarrow \mu} \infty.
$$

Thus, even if there is a balance between arrivals and departures, not only is the expected wait time not zero, but if we run the fully ghosted system at capacity, actual wait times get arbitrarily long!

Reducing the number of ghost spots is equivalent to reducing $\lambda$. Since we wish to keep the wait time short for users of the ghost queue, we must both

- make $1/(\mu - \lambda)$ small, and
- keep the system stable to variations in $\lambda$.

The second goal is met when

$$
\frac{d}{d\lambda} \frac{1}{\mu - \lambda} = \frac{1}{(\mu - \lambda)^{2}}
$$

is small. The first goal implies that we should make $\mu - \lambda$ as large as possible. Both goals thus encourage the same end result. However, if the ride is not always filled by the ghost queue (which occurs sometimes because of the random distribution of arrivals), then the ride runs at less than full capacity. Therefore, there must be a normal line to keep the ride full.

From the perspective of guests, the length of the visible normal line must be related to its wait time, or else they will view the line as unfair. For example, if there are only 20 guests in the normal line but only 1 guest per minute is boarding from that line (the rest coming from the ghost queue), then this would not be an attractive line in which to stand.
A balance needs to be created between perceived fairness and the average wait time. + +Hence, the GhostQueue system is feasible only if the number of guests who use it is kept low relative to the number using the normal line. + +In a similar system, Lo-Q at Six Flags amusement parks, the user limit comes from a fixed number of devices that must be rented to access the ghost-queueing feature. Based on the claim that 750,000 guests had used the system by October of 2001 [O'Brien 2001], and that the total 2001 attendance across the six parks was approximately 13 million visitors [O'Brien 2002], the utilization is $6\%$ , in agreement with what we would predict. + +# An Adaptive Algorithm + +# The Idea of an Adaptive Algorithm + +We introduce an adaptive model that does not try to maximize anything per se but instead adapts to make today's performance better than yesterday's. + +A simple adaptive algorithm might count the number of guests who wait more than $10\mathrm{min}$ in the QP line today and assign that many fewer QPs tomorrow. Problems with this algorithm are the sensitivity to random variations in attendance and that lumping the whole day into one block of time is too coarse to capture many subtleties of park attendance. We propose an algorithm that breaks the day into more blocks and is not as sensitive to random fluctuations. + +# Setup + +The setup is described largely in Figure 4. We have a traffic control box that knows how many guests are in each line and determines how many QP tickets to make available in each hour. This information, as well as the wait times for the normal and QP lines, is fed into the kiosk, which displays waiting times and gives guests options for when to return. In the figure, the options noon and 1 P.M. are blacked out, indicating that those time slots are full. + +![](images/e774b355cbd166b137f6e9f8c0b242fcbef2c8f0cb56bbea5aeff098efdc76a5.jpg) +Figure 4. 
Schematic diagram of the park showing the traffic control center, the kiosk, and, of course, the fun. + +# Assumptions for the Adaptive Algorithm + +- There is little day-to-day variation in the overall park attendance. The QP system is used during peak seasons and peak hours, when the pattern of line formation for a ride changes slowly. + +- We set the number and times of QPs at the beginning of the day. Guests can get these at any point during the day, even nonpeak hours, on a first-come-first-serve basis. The system is thus fair and logical and will not behave strangely if there is variation in line formation. +- Line speeds at peak hours are nearly constant. As long as the ride does not break down, guests are using it at a constant rate. +- We allot a percentage of ride seats for QP users. The number of such seats per unit time is the same as the rate at which the normal and QP lines can be mixed. The maximum feasible mixing rate $M$ could be found from a pilot study or just taken to be reasonably small (5–10%). +- We declare a target maximum QP queue length. + +# KalmanQueue + +The Kalman filter is a set of recursive equations that provides a computationally efficient solution to the least-squares method [Welch and Bishop 2001]. Kalman filters have many applications, notably in autonomous navigation systems. They are appropriate here because our model is a discrete-time controlled process, and also because Kalman filters should satisfy our requirements for a good model. We briefly describe general Kalman filters and then adapt them for use in our model. + +A Kalman filter estimates a state $X \in \Re^n$ at time $k + 1$ from the state $X$ at time $k$ and an observation $Z$ at time $k$ . + +Kalman filters are adaptive, yet they filter out random noise to yield a stable system. 
A general Kalman filter with no control input is described by

$$
X_{k+1} = A X_{k} + w_{k},
$$

$$
Z_{k} = H X_{k} + v_{k},
$$

where

- $A$ relates the previous state to the next state,
- $w_{k}$ is Gaussian process noise,
- $Z_{k}$ is the observation at time $k$,
- $H$ relates the magnitude of the state to the magnitude of the observation, and
- $v_{k}$ is Gaussian measurement noise.

Now we outline our model:

- State: the number of QPs available for a given block of time.
- Measurement $U_{k} = $ (Target QP length $-$ Observed QP length): If the difference is positive, we assign more QPs the next day, since the QP system is not running at capacity; if it is negative, the QP line is too long and we assign fewer QPs the next day.
- Finding $H$: The scalar $H$ relates the magnitudes of state and input. Since our state and our observation are on the same scale (guests in line), $H$ is simply 1.

We now give the recursive equations for our adaptive algorithm derived from the Kalman filter. We let $X_{k}$ be the number of QPs available on day $k$, while $P_{k}$ and $K_{k}$ determine how much we trust our observed data:

$$
K_{k+1} = \frac{R}{P_{k} + R},
$$

$$
X_{k+1} = X_{k} + K_{k} U_{k},
$$

$$
P_{k+1} = \left(P_{k} + V_{k}\right) \frac{P_{k}}{P_{k} + R}.
$$

Here $V_{k}$ is a measure of line fluctuation for day $k$, $R$ is the expected variance (from previous data), and $K_{k}$ is a scalar by which we weight the observation before we change state.

The variables $X$, $P$, and $K$ are related, since each adjusts per iteration depending on the others. The measurement $U_{k}$ is scaled by $K_{k}$, a measure of how much we trust the observation when computing the next state based on previous experience.
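One step of this recursion is easy to state in code. In the sketch below, the initial values and the inputs $U_k$, $V_k$, and $R$ are illustrative (not fitted to park data); the loop exhibits the limiting behavior of the gain $K_k$:

```python
def kalman_queue_step(X, P, K, U, V, R):
    """One day of the KalmanQueue update, following the recursion in the text.

    X: QPs available; P, K: trust variables; U: target minus observed
    QP line length; V: observed line fluctuation; R: expected variance."""
    X_next = X + K * U              # adjust QP count by the trusted error
    K_next = R / (P + R)            # gain applied to tomorrow's observation
    P_next = (P + V) * P / (P + R)  # update our confidence measure
    return X_next, P_next, K_next

# With no observed fluctuation (V = 0), P -> 0 and hence K -> 1:
# the filter comes to trust the observations completely.
X, P, K = 120.0, 50.0, 0.5
for _ in range(30):
    X, P, K = kalman_queue_step(X, P, K, U=0.0, V=0.0, R=25.0)
print(K)  # close to 1
```

Conversely, a very large $V$ drives $P$ up and $K$ toward 0, so wildly fluctuating observations are largely ignored.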
Naturally, we expect $K_{k}$ to go to 1 when there are no fluctuations, and this indeed happens; it also should approach 0 when the observed variance is very large, and it does this too. Additionally, if the observed variance is about the same as expected, then we trust the observation with scale factor of approximately $1/2$ . + +# Testing the Adaptive Algorithm + +Our algorithm takes as input three parameters: $P_0, K_0$ , and $R$ . The first two are self-adjusting, so the filter is not sensitive to their initial values; but $R$ strongly affects convergence of the model. Amusement parks closely guard their attendance data, so we do not have data for these parameters and must guess reasonable values. However, even with rough guesses, our adaptive algorithm settles quickly to equilibrium. + +We tested the filter by iterating our computer simulation. Given initial values, the first relevant output from the Kalman algorithm is the number of QPs assigned per hour block of time. We assume that guests arrive uniformly over their assigned block. We graph how many guests arrive as a function of time, exactly the input for our computer simulation. + +This test does not capture the true power of our adaptive algorithm, because we do not know the value of the parameter $R$ , and our model for line growth is a fairly rough and simple probability model. + +# The Test: + +1. We assume that the peak time of day is subdivided into 6 equal blocks. We "couple" the blocks by having the final queue length of block $i$ as the initial queue length for block $i + 1$ . +2. We use the same model for line formation as in our simulation. Additionally, we employ our algorithm on each block. +3. We guess the parameter $R$ and pick initial values for $K_{0}, P_{0}$ , and $X_{0}$ (the number of QPs per time block). +4. Subject to noise, we input this $X_0$ into our line-formation model. Our algorithm measures the deviation of the actual line from the ideal line. +5. 
The Kalman filter outputs new values for $X$, $K$, and $P$. We now have a new value for the number of QPs for every block.
6. We iterate this process 1,000 times. The input to step 4 is the output of step 5.
7. We confirm visually that regardless of initial values, the filter converges to a steady optimal QP number per block that results in a stable and optimal QP queue length over time.

A pictographic version of the results of a trial is shown in Figure 5; a plot of the actual results is shown in Figure 6. The system output was programmed to vary around 100 guests/h, but we input an initial value of 120 QPs/h.

We can think of each time step as a previous similar day; for a Saturday, we look to last Saturday's data. For the first day, the line grows rapidly, as the ride cannot service the demand. By day 10, the line has visibly deformed. The distribution on day 30 is almost as good as on day 900, so the algorithm converges quickly, even with a bad initial guess (in an actual implementation, we can start with very good initial values).

# Justification of Uniform Arrival Rate

Guests arrive with some probability distribution throughout their block. However, we should be able to overlap the blocks in such a way that the average arrival rate is constant. For example, Figure 7 shows how we overlap the blocks when guests arrive with a normal distribution about the center of their block.

![](images/94113fb4e8a01e5e56a0254d75a1db41ed5b2199f6732fb9f08cb7aed3a9e56b.jpg)

![](images/7b2df283db12bab0f5c88ad63df060df83e69b9522686facf34ddec348bf9c5e.jpg)

![](images/2e17fb639e56e2f4312739cd40adbe625c8d22f74190003d5ae3cc3cc0d1d9f3.jpg)

![](images/d9cf66ff160b52878228f57f5e2d6fcebd9b12776a46a149c123e22b360955f2.jpg)
Figure 5. Number of QPs allotted per hour (histogram) and QP line length (black line) for days 1, 10, 20, 30, and 900.
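The test steps above can be collapsed into a single feedback loop. In the sketch below, the line simulator is replaced by a hypothetical linear response (observed maximum QP line proportional to QPs issued, plus Gaussian noise); the target length, the initial values, and the constant fluctuation estimate $V_k = 9$ are all illustrative:

```python
import random

def run_days(days=200, target=30.0, R=25.0, seed=1):
    """Iterate the KalmanQueue update against a toy line response.

    Toy assumption (not the actual simulator): the observed maximum QP
    line is proportional to the QPs issued, plus Gaussian noise."""
    rng = random.Random(seed)
    X, P, K = 120.0, 50.0, 0.5                     # deliberately bad initial QP count
    for _ in range(days):
        observed = 0.3 * X + rng.gauss(0.0, 3.0)   # toy line response
        U = target - observed                      # measurement
        X = X + K * U                              # state update
        K = R / (P + R)                            # gain update
        P = (P + 9.0) * P / (P + R)                # V_k = 9.0 assumed constant
    return X

# run_days() settles near target / 0.3 = 100 QPs per day despite the
# wildly wrong starting value, mirroring the behavior in Figure 6.
```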
![](images/14bfb3c6aa5fa70b8af24b192e993c19193c147df5ecd0f1857df6ff2aef7a23.jpg)

![](images/98321dfd5f1f8d9809ded24c877541595c3a9549436478f442effdf8cb82f294.jpg)
Figure 6. A plot of QPs allotted day by day (left axis: QuickPasses distributed; right axis: line length in people), given wildly wrong initial values. Nevertheless, the algorithm stabilizes.

![](images/4fc8d5ed9c7fff6adda268d7554487908a3802bb65f7c4d4d619edb2844a88b0.jpg)
Figure 7. How to add normal distributions to achieve constant arrival.

# Conclusions

The GhostQueue decreases wait times and increases happiness, but in a way that is hard to optimize, and fairness is difficult to define and test. It is a good solution only if an external mechanism (such as selling access) keeps utilization low.

The KalmanQueue process meets the criteria for a good model, despite some sensitivity in its parameters. The dynamic optimization approach is also easy to use, since the system adjusts itself to meet an ideal line condition. Hence, we recommend use of a KalmanQueue system for rides with long lines.

# Strengths & Weaknesses

# General Model

# Strengths

- Our line-formation model agrees with our rough data, and our computer model agrees with both.
- Our line model incorporates the natural randomness of human behavior.

# Weaknesses

- Current line length is not taken into account by the line-formation model. In real life, a guest is more likely to join a short line than a long one.
- We ignore the effect of virtually-queued guests joining normal lines, thus increasing the normal-line wait times.
- Our model is valid only during peak hours.
- Our model lacks a rigorous definition of optimality.

# GhostQueue

# Strengths

- The return-time calculation is simple and comprehensible to the guest.
- No guest waits longer than without the GhostQueue.

# Weaknesses

- Utilization must be kept low for GhostQueue to be beneficial.
- Fairness is a dominant factor in optimal utilization; but our assumptions do not quantify fairness, so optimizing is beyond the scope of the model.

# KalmanQueue

# Strengths

- The primary input is the desired behavior of the QP line, and the model adjusts itself accordingly.
- The KalmanQueue process satisfies all the properties of a good model listed earlier.
- The Kalman filter is highly adaptive to changes in the queueing process (e.g., time-varying output rates), so the core framework of our Kalman-based algorithm is valid for a wide variety of situations.

# Weaknesses

- Our model assumes a constant mixing rate. An extension of the model (a two-dimensional Kalman filter) would allow for the determination of the mixing rate based on the relative lengths of the normal and QP lines.
- $R$, the random variance of the number of guests in line, must be determined accurately for the model to be useful.
- The Kalman filter tries to decrease the maximum QP line length. A future model should try to decrease the average QP line length.

# References

Cedar Point Information. 2003. The Point Online. www.thepointol.com.

Mouse Planet Inc. 2003. Disneyland Information Guide—What is Fastpass? http://www.mouseplanet.com/al/docs/fast.htm.

O'Brien, Tim. 2001. Six Flags debuts queue management. Amusement Business (5 March 2001). http://www.lo-q.com/Press/Amusement%20Business%20March%202001.htm.

———. 2002. North American parks down slightly from 2002. Amusement Business 115 (51): 9.

Ruiz-Pala, Ernesto, Carlos Avila-Beloso, and William W. Hines. 1967. Waiting-Line Models: An Introduction to Their Theory and Application. New York: Reinhold Publishing Co.

Welch, Greg, and Gary Bishop. 2001. An introduction to the Kalman filter. http://www.cs.unc.edu/~welch/kalman/kalmanIntro.html.
# Theme Park Simulation with a Nash-Equilibrium-Based Visitor Behavior Model

Andrew Spann

Daniel Gulotta

Daniel Kane

Massachusetts Institute of Technology

Cambridge, MA

Advisor: Martin Z. Bazant

# Summary

We build from the ground up a computer simulation of a fictional theme park, MATHCOT. We populate MATHCOT with visitors and define an "enjoyment function" in which visitors gain points for going on rides and lose points as they stand in line.

We propose two QuickPass systems. In the Appointment System, QuickPasses represent an appointment to visit the ride later that day. In the Placeholder System, a QuickPass represents a virtual place in line. We then choose test cases to represent both systems and run the computer simulation.

With each set of parameters, we adjust the probability weights that govern visitor behavior to fit a Nash equilibrium. The Nash equilibrium adapts the behavior of park visitors to a greedy equilibrium that is not optimal for the group but represents individuals weighing decisions based on immediate benefit.

Our results suggest that it is in the park's best interest to allocate a high percentage of ride capacity to QuickPass. Reserving too few seats for QuickPass users can result in lower average visitor enjoyment than without a QuickPass system. Both the Placeholder System and the Appointment System (with $75\%$ of ride capacity allocated to QuickPass users) show strong increases in visitor enjoyment.

Varying the length of the time window for the QuickPass has little effect on visitor enjoyment.

# Judges' Commentary: The Quick Pass Fusaro Award Paper

Peter Anspach

National Security Agency

Ft. Meade, MD

anspach@aol.com

Kathleen M. Shannon

Dept. of Mathematics and Computer Science

Salisbury University

Salisbury, MD 21801

kmshannon@salisbury.edu

The most distinctive feature of the Quick Pass Fusaro Award paper (summary on preceding p.
353) by the team from MIT was its creativity. The basic problem was a queueing problem, which the team members recognized and addressed. However, rather than choosing as their objective minimizing time spent in line, the team made a real effort to model human behavior and to maximize enjoyment. Although the judges questioned whether they had appropriately applied the Nash equilibrium, we were impressed by the idea of using game theory. The team referenced attempts by "real-world consultants" to simulate human behavior in virtual worlds.

Basically, the team simulated behavior by creating virtual visitors to their virtual theme park, giving them randomly generated preferences and tolerances. They then ran a simulation to find optimal parameters for the park itself, under various schemes for the QuickPass system. They treated visitors as individuals employing individual strategies but acknowledged that their assumption might not model reality fully, since people tend to come to theme parks in groups and group dynamics would definitely have an influence.

The team certainly developed one of the most sophisticated and detailed models to address the problem, made well-thought-out and well-explained assumptions, went through all of the steps of the modeling process, and presented a well-written report. The purpose of the Fusaro Award is to recognize just such activities.

# Statement of Ownership, Management, and Circulation
1. Publication Title: The UMAP Journal
2. Publication Number: 0197-3622
3. Filing Date: 11/8/2004
4. Issue Frequency: Quarterly
5. Number of Issues Published Annually: 4 + annual collection
6. Annual Subscription Price: $99.00
7. Complete Mailing Address of Known Office of Publication (Not printer): 57 Bedford St., Suite 210, Lexington MA 02420. Contact Person: Kevin Darcy. Telephone: 788-862-7878
8. Complete Mailing Address of Headquarters or General Business Office of Publisher (Not printer): SAME

9. Full Names and Complete Mailing Addresses of Publisher, Editor, and Managing Editor (Do not leave blank)

Publisher: Solomon Garfunkel, 57 Bedford St., Suite 210, Lexington MA 02420

Editor: Paul J. Campbell, Beloit College, 700 College St., Beloit WI 53511

Managing Editor: Pauline Wright, 57 Bedford St., Suite 210, Lexington MA 02420

10. Owner (Do not leave blank. If the publication is owned by a corporation, give the name and address of the corporation immediately followed by the names and addresses of all stockholders owning or holding 1 percent or more of the total amount of stock. If not owned by a corporation, give the names and addresses of the individual owners. If owned by a partnership or other unincorporated firm, give its name and address as well as those of each individual owner. If the publication is published by a nonprofit organization, give its name and address.)

| Full Name | Complete Mailing Address |
|---|---|
| Consortium for Mathematics and Its Applications, Inc. (COMAP, Inc.) | 57 Bedford St., Suite 210, Lexington, MA 02420 |
11. Known Bondholders, Mortgagees, and Other Security Holders Owning or Holding 1 Percent or More of Total Amount of Bonds, Mortgages, or Other Securities. If none, check box: None.
13. Publication Title: The UMAP Journal
14. Issue Date for Circulation Data Below: 11/30/2002

15. Extent and Nature of Circulation:

| Extent and Nature of Circulation | Average No. Copies Each Issue During Preceding 12 Months | No. Copies of Single Issue Published Nearest to Filing Date |
|---|---|---|
| a. Total Number of Copies (Net press run) | 900 | 950 |
| b(1). Paid/Requested Outside-County Mail Subscriptions Stated on Form 3541 (include advertiser's proof and exchange copies) | 750 | 817 |
| b(2). Paid In-County Subscriptions (include advertiser's proof and exchange copies) | 0 | 0 |
| b(3). Sales Through Dealers and Carriers, Street Vendors, Counter Sales, and Other Non-USPS Paid Distribution | 70 | 70 |
| b(4). Other Classes Mailed Through the USPS | 0 | 0 |
| c. Total Paid and/or Requested Circulation [sum of 15b(1), (2), (3), and (4)] | 820 | 887 |
| d(1). Free Distribution by Mail (samples, complimentary, and other free): Outside-County as Stated on Form 3541 | | |
| d(2). Free Distribution by Mail: In-County as Stated on Form 3541 | | |
| d(3). Free Distribution by Mail: Other Classes Mailed Through the USPS | 30 | 20 |
| e. Free Distribution Outside the Mail (carriers or other means) | 0 | 0 |
| f. Total Free Distribution (sum of 15d and 15e) | 30 | 20 |
| g. Total Distribution (sum of 15c and 15f) | 850 | 907 |
| h. Copies not Distributed | 50 | 43 |
| i. Total (sum of 15g and 15h) | 900 | 950 |
| j. Percent Paid and/or Requested Circulation (15c divided by 15g times 100) | 96 | 98 |

16. Publication of Statement of Ownership: Publication required. Will be printed in the third issue of this publication.
17. Signature and Title of Editor, Publisher, Business Manager, or Owner: [signature]. Date: 11/08/04
I certify that all information furnished on this form is true and complete. I understand that anyone who furnishes false or misleading information on this form or who omits material or information requested on the form may be subject to criminal sanctions (including fines and imprisonment) and/or civil sanctions (including civil penalties).

# Instructions to Publishers

1. Complete and file one copy of this form with your postmaster annually on or before October 1. Keep a copy of the completed form for your records.
2. In cases where the stockholder or security holder is a trustee, include in items 10 and 11 the name of the person or corporation for whom the trustee is acting. Also include the names and addresses of individuals who are stockholders who own or hold 1 percent or more of the total amount of bonds, mortgages, or other securities of the publishing corporation. In item 11, if none, check the box. Use blank sheets if more space is required.
3. Be sure to furnish all circulation information called for in item 15. Free circulation must be shown in items 15d, e, and f.
4. Item 15h, Copies not Distributed, must include (1) newsstand copies originally stated on Form 3541 and returned to the publisher, (2) estimated returns from news agents, and (3) copies for office use, leftovers, spoiled, and all other copies not distributed.
5. If the publication has Periodicals authorization as a general or requester publication, this Statement of Ownership, Management, and Circulation must be published; it must be printed in any issue in October or, if the publication is not published during October, in the first issue printed after October.
6. In item 16, indicate the date of the issue in which this Statement of Ownership will be published.
7. Item 17 must be signed.

Failure to file or publish a statement of ownership may lead to suspension of Periodicals authorization.
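The totals in item 15 of the form are simple arithmetic: 15c is the sum of 15b(1) through 15b(4), 15f is 15d plus 15e, 15g is 15c plus 15f, 15i adds back the undistributed copies, and 15j is 15c divided by 15g times 100. As a sanity check, the printed figures can be recomputed; this is an illustrative sketch (the function name and argument layout are my own; the numbers are the ones on the form):

```python
# Re-derive the computed totals on USPS Form 3526, item 15,
# from the line items printed in the Statement of Ownership above.
# circulation_totals() is a hypothetical helper, not part of the form.

def circulation_totals(b, d, e, h):
    """b: paid categories 15b(1)-(4); d: free-by-mail 15d(1)-(3);
    e: free distribution outside the mail (15e); h: copies not distributed (15h)."""
    c = sum(b)              # 15c: total paid and/or requested circulation
    f = sum(d) + e          # 15f: total free distribution
    g = c + f               # 15g: total distribution
    i = g + h               # 15i: total (should equal net press run, 15a)
    j = round(100 * c / g)  # 15j: percent paid and/or requested circulation
    return c, f, g, i, j

# Average issue during the preceding 12 months
print(circulation_totals([750, 0, 70, 0], [0, 0, 30], 0, 50))
# Single issue published nearest to the filing date
print(circulation_totals([817, 0, 70, 0], [0, 0, 20], 0, 43))
```

Both columns reproduce the printed totals: 820 and 887 paid copies, 850 and 907 total distribution, and 96% and 98% paid circulation.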
\ No newline at end of file
diff --git a/MCM/1995-2008/2005ICM/2005ICM.md b/MCM/1995-2008/2005ICM/2005ICM.md
new file mode 100644
index 0000000000000000000000000000000000000000..b8eafa0248e7f216c3899e22b20eb11e50c7770d
--- /dev/null
+++ b/MCM/1995-2008/2005ICM/2005ICM.md
@@ -0,0 +1,2736 @@

# The UMAP Journal

# Publisher

COMAP, Inc.

# Executive Publisher

Solomon A. Garfunkel

# Editor

Paul J. Campbell

Campus Box 194

Beloit College

700 College St.

Beloit, WI 53511-5595

campbell@beloit.edu

# ILAP Editor

Chris Arney

Associate Director, Mathematics Division

Program Manager, Cooperative Systems

Army Research Office

P.O. Box 12211

Research Triangle Park, NC 27709-2211

David.Arney1@arl.army.mil

# On Jargon Editor

Yves Nievergelt

Department of Mathematics

Eastern Washington University

Cheney, WA 99004

ynievergelt@ewu.edu

# Reviews Editor

James M. Cargal

Mathematics Dept.

Troy University—Montgomery Campus

231 Montgomery St.

Montgomery, AL 36104

jmcargal@sprintmail.com

# Chief Operating Officer

Laurie W. Aragon

# Production Manager

George W. Ward

# Director of Educ. Technology

Roland Cheyne

# Production Editor

Pauline Wright

# Copy Editor

Timothy McLean

# Distribution

Kevin Darcy

John Tomicek

# Graphic Designer

Daiva Kiliulis

# Vol. 26, No. 2

# Associate Editors

Don Adolphson

Brigham Young University

Aaron Archer

AT&T Shannon Research Laboratory

Chris Arney

Army Research Office

Ron Barnes

University of Houston-Downtown

Arthur Benjamin

Harvey Mudd College

Robert Bosch

Oberlin College

James M. Cargal

Troy State University—Montgomery Campus

Murray K. Clayton

University of Wisconsin—Madison

Lisette De Pillis

Harvey Mudd College

James P. Fink

Gettysburg College

Solomon A. Garfunkel

COMAP, Inc.

William B.
Gearhart + +California State University, Fullerton + +William C. Giauque + +Brigham Young University + +Richard Haberman + +Southern Methodist University + +Jon Jacobsen + +Harvey Mudd College + +Walter Meyer + +Adelphi University + +Yves Nievergelt + +Eastern Washington University + +Michael O'Leary + +Towson University + +Catherine A. Roberts + +College of the Holy Cross + +John S. Robertson + +Georgia Military College + +Philip D. Straffin + +Beloit College + +J.T. Sutcliffe + +St. Mark's School, Dallas + +# Membership Plus + +Individuals subscribe to The UMAP Journal through COMAP's Membership Plus. This subscription also includes print copies of our annual collection UMAP Modules: Tools for Teaching, our organizational newsletter Consortium, on-line membership that allows members to download and reproduce COMAP materials, and a $10\%$ discount on all COMAP purchases. + +(Domestic) #2520 $ 99 + +(Outside U.S.) #2521 $111 + +# Institutional Plus Membership + +Institutions can subscribe to the Journal through either Institutional Plus Membership, Regular Institutional Membership, or a Library Subscription. Institutional Plus Members receive two print copies of each of the quarterly issues of The UMAP Journal, our annual collection UMAP Modules: Tools for Teaching, our organizational newsletter Consortium, on-line membership that allows members to download and reproduce COMAP materials, and a $10\%$ discount on all COMAP purchases. + +(Domestic) #2570 $456 + +(Outside U.S.) #2571 $479 + +# Institutional Membership + +Regular Institutional members receive print copies of The UMAP Journal, our annual collection UMAP Modules: Tools for Teaching, our organizational newsletter Consortium, and a $10\%$ discount on all COMAP purchases. + +(Domestic) #2540 $198 + +(Outside U.S.) #2541 $220 + +# Web Membership + +Web membership does not provide print materials. Web members can download and reproduce COMAP materials, and receive a $10\%$ discount on all COMAP purchases. 
(Domestic) #2510 $41

(Outside U.S.) #2510 $41

To order, send a check or money order to COMAP, or call toll-free

1-800-77-COMAP (1-800-772-6627).

The UMAP Journal is published quarterly by the Consortium for Mathematics and Its Applications (COMAP), Inc., Suite 210, 57 Bedford Street, Lexington, MA, 02420, in cooperation with the American Mathematical Association of Two-Year Colleges (AMATYC), the Mathematical Association of America (MAA), the National Council of Teachers of Mathematics (NCTM), the American Statistical Association (ASA), the Society for Industrial and Applied Mathematics (SIAM), and The Institute for Operations Research and the Management Sciences (INFORMS). The Journal acquaints readers with a wide variety of professional applications of the mathematical sciences and provides a forum for the discussion of new directions in mathematical education (ISSN 0197-3622).

Periodical rate postage paid at Boston, MA and at additional mailing offices.

# Send address changes to: info@comap.com

COMAP, Inc. 57 Bedford Street, Suite 210, Lexington, MA 02420

Copyright 2005 by COMAP, Inc. All rights reserved.

# Vol. 26, No. 2 2005

# Table of Contents

# Editorial

Where Else to Publish Paul J. Campbell and Kunio Mitsuma 93

# Special Section on the ICM

Results of the 2005 Interdisciplinary Contest in Modeling Chris Arney 115

The Coming Oil Crisis Wei Deling, Chen Jie, and Xu Hui 127

Preventing the Hydrocalypse: A Model for Predicting and Managing Worldwide Water Resources Steven Krumholz, Frances Haugen, and Daniel Lindquist 145

The Petroleum Armageddon Jonathan Giuffrida, Palmer Mebane, and Daniel Lacker 163

Judge's Commentary: The Outstanding Exhaustible Resource Papers Ted Hromadka II 175

Author's Commentary: The Outstanding Exhaustible Resource Papers Paul J. Campbell 179

# Editorial

# Where Else to Publish

Paul J.
Campbell

Dept. of Mathematics and Computer Science

Beloit College

700 College St.

Beloit WI 53511-5595

campbell@beloit.edu

Kunio Mitsuma

Dept. of Mathematics and Computer Science

Kutztown University of Pennsylvania

Kutztown, PA 19530

mitsuma@kutztown.edu

# Introduction

Our Guide for Authors (Vol. 26, No. 1, pp. 91-92) advises that

The UMAP Journal focuses on mathematical modeling and applications of mathematics at the undergraduate level.

The editor also welcomes

- expository articles for the On Jargon column,
- reviews of books and other materials, and
- guest editorials on new ideas in mathematics education or on interaction between mathematics and application fields.

Major vehicles for achieving the goals of the Journal are

- UMAP Modules: A UMAP Module is a teaching/learning module with exercises and often a sample exam (with solutions), and in particular precise statements of

  - the target audience,
  - the mathematical prerequisites, and
  - the time frame for completion.

  UMAP Modules are designed for class use to learn about applications of mathematics but are also often useful for independent study and student projects.

- ILAP Modules: An ILAP (Interdisciplinary Lively Application Project) is an interdisciplinary student group project, jointly authored by faculty from mathematics and a partner department. The project usually includes

  - some instructional material, and
  - requirements that the student teams must fulfill, including preparing a report.

- Minimodules: While a UMAP Module or ILAP may run from 12 to over 50 pages, a Minimodule is usually 6 pages or less.
- the annual undergraduate Mathematical Contest in Modeling and Interdisciplinary Contest in Modeling. The $2\%$ or so of submitted papers rated by the judges as Outstanding are published in special sections of the Journal, including an entire issue devoted to the Mathematical Contest in Modeling.
- Articles: Articles do not need to be tightly focused toward direct classroom use, nor must they include exercises (though they may). Simply being in a field that has traditionally been designated as "applied mathematics" (e.g., numerical analysis or differential equations) does not meet the needs of our readers; articles, like the modular materials, must treat mathematical modeling or applications of mathematics.
- On Jargon columns: An On Jargon column explains a mathematical term or concept (which need not be related directly to modeling or applications). (The Notices of the American Mathematical Society recently adopted this idea in the form of their "What Is..." column, which treats concepts that arise at a much higher level.) On Jargon columns have appeared in the Journal since its first issue but have been relatively rare in recent issues. The editor especially encourages readers to submit material for this department of the Journal.

Most UMAP Modules and ILAP Modules appear in the Journal, and all Modules (including ones too long for a regular issue of the Journal) appear in the Journal's annual supplement UMAP Modules: Tools for Teaching.

For UMAP Modules, ILAP Modules, and Minimodules, the ideals are as follows:

- The occasion is a real-world situation (with real data), usually outside the mathematical sciences, that is described in some detail.
- The problems are solvable by undergraduates, using mathematical techniques and modeling aids (graphing calculators, computer algebra systems, statistical packages, simulation, differential equations solvers, etc.).
- The modeling concludes by returning to the application to discuss how the modeling leads to insight about the application.

The corpus of UMAP Modules is available on a CD-ROM with a companion database; purchasers are authorized to duplicate Modules for their students only ($299, PROD #7954; see the COMAP Website at www.comap.com or call (800) 77-COMAP).

# Where Else to Publish?
The major reason for this Journal to reject a manuscript is that it does not emphasize modeling and grounding in applications.

The editor tries to steer authors of unsuitable manuscripts to other journals. Below we give a list of journals, including each journal's focus and editor's address, completely updated (with numerous additions and deletions, including electronic journals) from a list published in Vol. 17, No. 1, 1-14.

Our aim in providing this list is not to dissuade authors from submitting manuscripts to the Journal! Far from it! The Journal can always use more good papers to help realize its mission. Rather, there are relatively few outlets for mathematical exposition at the undergraduate level compared with the plenitude of research journals. This list exhibits more opportunities than most authors realize; we offer it so that they can more readily find suitable outlets for their work.

We welcome suggestions from readers for additions or revisions to this list.

# Criteria

We list only journals that

- are familiar to us,
- publish articles in English, and
- specifically focus on mathematics (including computing aspects and statistics) at the undergraduate college/university level.

We exclude departmental journals that usually include authors from only the home institution (e.g., Eureka at Cambridge University, or the Journal of Undergraduate Mathematics at the University of Puget Sound).

Appearance of any journal in this list, or failure to list any particular journal, does not imply any endorsement or judgment by us about the quality of the journal, nor about the suitability of that journal for any author's manuscript. Descriptions of the journals are derived largely from their own published statements of purpose.
+ +More journals from throughout the world are listed in the Source Journal Index (Zeitschriften-verzeichnis) of Zentralblatt für Didaktik der Mathematik (International Reviews on Mathematical Education), available from + +Gerhard Koenig + +Fachinformationszentrum Karlsruhe + +Gesellschaft für wissenschaftlich-technische Information mbH + +D-7514 Eggenstein-Leopoldshafen 2 + +Germany + +Links to many journals on computer science education and the uses of computers and information technology in education can be found at + +http://www.cs.washington.edu/research/edtech/pubs-orgs/. + +# List of Journals + +$(\ast)$ : indicates electronic distribution only. + +AMATYC Review + +The AMATYC Review + +Barbara S. Rives, Editor + +Lamar State College-Orange + +410 Front Street + +Orange, TX 77630 + +rivesbs@gt.rr.com + +http://www.amatyc.org/Review/index.html + +A semi-annual publication of the American Mathematical Association of Two-Year Colleges. Its purpose is to provide an avenue of communication for all mathematics educators concerned with the views, ideas, and experiences pertinent to two-year college teachers and students. + +# American Mathematical Monthly + +Bruce Palka, Editor + +Department of Mathematics + +University of Texas at Austin + +1 University Station + +Austin, TX 78712-1082 + +monthly@math.utexas.edu + +http://www.maa.org/pubs/monthly.html + +Problems or Solutions to: + +Doug Hensley, Monthly Problems + +Department of Mathematics + +Texas A&M University + +College Station, TX 77840 + +hwaldman@maa.org + +Articles, as well as notes and other features, about mathematics and the profession. Its readers span a broad spectrum of mathematical interests, and include professional mathematicians as well as students of mathematics at all collegiate levels. Authors are invited to submit articles and notes that bring interesting mathematical ideas to a wide audience of Monthly readers. 
# Bulletin of Mathematics Books and Computer Software

Steven Roman, Publisher

The Roman Press

8 Night Star

Irvine, CA 92715

Disseminates publishers' and reviewers' information about mathematics books and software.

# Chance

Dalene Stangl, Editor

Box 90251, ISDS

Duke University

Durham, NC 27708

dalene@stat.duke.edu

http://www.stat.duke.edu/chance/

Jointly published by the American Statistical Association and Springer-Verlag, about statistics and the use of statistics in society. It is "intended for everyone who has an interest in the analysis of data. Chance features articles that showcase the use of statistical methods and ideas in the social, biological, physical, and medical sciences. It also presents material about statistical computing and graphical presentation of data. Through its regular departments and columns, Chance will keep its readers informed about developments and ideas in a variety of areas including government statistics and sports. The goal is to promote the field of statistics and make its contributions accessible to a broad audience."

# (*) Chance News

J. Laurie Snell, Editor

jlsnell@dartmouth.edu

http://www.dartmouth.edu/~chance/

Reviews articles in the news that teachers of probability and statistics might want to use in their classes. Please send suggestions to jlsnell@dartmouth.edu. You are encouraged to include your comments on the article.

# College Mathematics Journal

Lowell Beineke

Mathematics Department

Indiana University-Purdue University Ft. Wayne

Ft. Wayne, IN 46805-1499

http://www.maa.org/pubs/cmj.html

Seeks lively, well-motivated articles that can enrich undergraduate instruction and enhance classroom learning. The CMJ also invites expository papers that stimulate the thinking and broaden the perspectives of those who teach undergraduate-level mathematics, especially the first two years.
Articles involving all aspects of mathematics are welcome: history, philosophy, problem solving, applications, computer-related mathematics, and so on.

Classroom Capsules

Michael Kinyon

Dept. of Mathematical Sciences

Indiana University South Bend

South Bend, IN 46634

Fallacies, Flaws, and Flimflam

Ed Barbeau

Department of Mathematics

University of Toronto

Toronto, Ontario

Canada M5S 1A1

Problems and Solutions

James Bruenin

Department of Mathematics

Southeast Missouri State University

Cape Girardeau, MO 63702

Student Research Projects

Brigitte Servatius

Dept. of Mathematical Sciences

Worcester Polytechnic Institute

Worcester, MA 01609-2280

Media Highlights

Warren Page

Department of Mathematics

New York City Technical College

300 Jay Street

Brooklyn, NY 11201

Software Reviews

L. Carl Leinbach

Dept. of Mathematics and Computer Science

Gettysburg College

Gettysburg, PA 17325

Proofs without words, letters to the editor, quotations, verse, cartoons, mathematical facetiae, and all other material

Lowell Beineke

Mathematics Department

Indiana University-Purdue University Ft. Wayne

Ft. Wayne, IN 46805-1499

# Computers & Graphics: An International Journal of Systems & Applications in Computer Graphics

José L. Encarnação

Technical University Darmstadt, GRIS

Fraunhoferstrasse 5

D-64283 Darmstadt

Germany

jle@igd.fhg.de

http://authors.elsevier.com/JournalDetail.html?PubID=371&Precis=DESC

Research and applications, tutorial papers, state-of-the-art papers, and information on innovative uses of computer graphics, including computers and graphics in education.

# Computers & Mathematics with Applications

E.Y. Rodin

Dept. of Systems Science and Mathematics

Box 1040

Washington University

St.
Louis, MO 63130

http://www.elsevier.com/wps/find/journaldescription.cws_home/301/description#description

The journal pays particular attention to applications in "non-classified" fields, such as environmental science, ecology, biology, urban systems, and also to appropriate papers in applied mathematics.

# (*) Convergence

Victor Katz, Editor

The Mathematical Association of America

1529 18th St. N.W.

Washington, DC 20036-1385

vkatz@udc.edu

http://convergence.mathdl.org/jsp/index.jsp

"Sponsored by the Mathematical Association of America with the cooperation of the National Council of Teachers of Mathematics, Convergence is intended to be a resource and forum for teachers of grades 9-14 mathematics who are interested in using mathematics history as a learning/teaching tool.

- Expository articles on aspects or concepts from the history of mathematics that the author feels possess a special pedagogical or learning appeal.
- A sharing of classroom experiences.
- Animated mathematical demonstrations that can be downloaded for classroom use.
- Translations and commentaries of mathematical works that shed particular light on mathematical discovery and understanding.
- Discussions of particular problems from an historical context.
- Reviews of materials, books, websites and teaching aids that lend themselves to historical enrichment."

# Crux Mathematicorum with Mathematical Mayhem

James Totten, Editor-in-Chief

Department of Mathematics & Statistics

University College of the Cariboo

Kamloops, BC V2C 5N

crux-editors@cms.math.ca

http://journals.cms.math.ca/CRUX/

A problem-solving journal at the secondary and university undergraduate levels.
# Educational Studies in Mathematics

Editor, Social Sciences Division

Kluwer Academic Publishers

P.O. Box 17

3300 AA Dordrecht

The Netherlands

http://www.kluweronline.com/issn/0013-1954/contents

"Presents new ideas and developments of major importance to those working in the field of mathematical education. It seeks to reflect both the variety of research concerns within this field and the range of methods used to study them. It deals with didactical, methodological and pedagogical subjects, rather than with specific programmes for teaching mathematics."

# Elemente der Mathematik

Jürg Kramer, Managing Editor

Humboldt-Universität zu Berlin

Institut für Mathematik

Unter den Linden

D-10099 Berlin

kramer@mathematik.hu-berlin.de

http://www.springeronline.com/sgw/cda/frontpage/0,10735,5-10042-70-1176268-detailsPage%253Djournal%257Cdescription%257Cdescription,00.html

Survey articles about important developments in the field of mathematics; stimulating shorter communications that tackle more specialized questions; and papers that report on the latest advances in mathematics and applications in other disciplines.

# L'enseignement mathématique

Case postale 240

CH-1211 Geneva 24

Switzerland

http://www.unige.ch/math/EnsMath/EM_en/welcome.html

Articles on teaching mathematics, in French, English, German, or Italian.

# Experimental Mathematics

Rafael de la Llave, Editor-in-Chief

A K Peters, Ltd.

888 Worcester Street

Suite 230

Wellesley, MA 02482

exmath@akpeters.com

http://www.expmath.org

Formal results inspired by experimentation, conjectures suggested by experiments, descriptions of algorithms and software for mathematical exploration, surveys of areas of mathematics from the experimental point of view, and general articles of interest to the community.

# Fibonacci Quarterly

Gerald E.
Bergum, Editor

South Dakota State University

Box 2201

Brookings, SD 57007-1596

bergumg@mg.sdstate.edu

http://www.mathpropress.com/problemColumns/fq/fqInfo.html

Articles that are intelligible, yet stimulating, to its readers, most of whom are university teachers and students. These articles should be lively and well motivated, with new ideas that develop enthusiasm for number sequences or the exploration of number facts. Illustrations and tables should be wisely used to clarify the ideas of the manuscript. Unanswered questions are encouraged, and a complete list of references is absolutely necessary.

Elementary Problems

Stanley Rabinowitz

12 Vine Brook Rd.

Westford, MA 01886-4212

Fibonacci@MathPro.com

Advanced Problems

Raymond E. Whitney

Mathematics Dept.

Lock Haven University

Lock Haven, PA 17745

rwhitney@LHUP.edu

# (*) Furman University Electronic Journal of Undergraduate Mathematics

Mark Woodard, Editor

Department of Mathematics

Furman University

Greenville, SC 29613-0448

mark.woodard@furman.edu

http://math.furman.edu/~mwoodard/fuejum/content/toc.html

"The Journal accepts papers of significant mathematical interest written by students containing work done prior to the students' obtaining a Bachelor's degree. Papers of all types will be considered, including technical, historical, and expository papers. Each paper must be sponsored by a mathematician familiar with the student's work, a full-time faculty member willing to endorse the student's work. The sponsor is largely responsible for ensuring the quality and veracity of the student's work."

# Historia Mathematica

Editorial Office

525 B Street, Suite 1900

San Diego, CA 92101-4495

hm@elsevier.com

http://authors.elsevier.com/JournalDetail.html?PubID=622841&Precis=DESC

Historical scholarship on mathematics and its development in all cultures and time periods.
In particular, the journal encourages informed studies on mathematicians and their work in historical context, on the histories of institutions and organizations supportive of the mathematical endeavor, on historiographical topics in the history of mathematics, and on the interrelations between mathematical ideas, science, and the broader culture. + +# Humanistic Mathematics Network Journal + +Sandra and Philip Keith, Managing Editors + +St. Cloud State University + +St. Cloud, MN 56301 + +szkeith@stcloudstate.edu + +http://www2.hmc.edu/www_common/hmnj/ + +Essays, book reviews, syllabi, and letters on mathematics as a humanistic endeavor. + +# International Journal of Mathematical Education in Science and Technology + +M.C. Harrison, Editor + +Department of Mathematical Sciences + +Mathematics Education Centre + +Loughborough University + +Loughborough + +Leicestershire + +LE11 3TU + +U.K. + +m.c.harrison@lboro.ac.uk + +http://www.tandf.co.uk/journals/titles/0020739X.asp + +"A medium by which a wide range of experience in mathematical education can be presented, assimilated and eventually adapted to everyday needs in schools, colleges, polytechnics, universities, industry and commerce. Contributions will be welcomed from lecturers, teachers and users of mathematics at all levels on the contents of syllabuses and methods of presentation. Increasing use of technology is being made in the teaching, learning, assessment and presentation of mathematics today; original and interesting contributions in this new area will be especially welcome. Mathematical models arising from real situations, the use of computers, new teaching aids and techniques also form an important feature. Discussion will be encouraged on methods of widening applications throughout science and technology. The need for communication between teacher and user will be emphasized and reports of relevant conferences and meetings will be included." 
+ +# Journal for Research in Mathematics Education + +Steven R. Williams, Editor + +JRME + +Department of Mathematics Education + +Brigham Young University + +P.O. Box 26537 + +Provo, UT 84602-6537 + +williams@mathed.byu.edu + +http://my.nctm.org/eresources/journal_home.asp?journal_id=1 + +Promotes and disseminates disciplined scholarly inquiry into the teaching and learning of mathematics at all levels, including research reports, book reviews, and commentaries. + +# Journal of Computers in Mathematics and Science Teaching + +Association for the Advancement of Computing in Education + +P.O.Box 3728 + +Norfolk, VA 23514 + +pubs@aace.org + +http://www.aace.org/pubs/jcmst/default.htm + +"Offers an in-depth forum for the interchange of information in the fields of science, mathematics, and computer science. JCMST is the only periodical devoted specifically to using information technology in the teaching of mathematics and science." + +# Journal of Educational Computing Research + +Robert H. Seidman, Editor + +New Hampshire College Graduate School + +2500 North River Road + +Manchester, NH 03106 + +seidmaro@nhc.edu + +http://www.epicent.com/journals/journals/j_ed_comp_research.html + +"Articles of value and interest to the educator, researcher, scientist. Designed to convey the latest in research reports and critical analyses to both theorists and practitioners." + +# (*) Journal of Online Mathematics and its Applications + +David A. Smith, Editor-in-Chief + +das@math.duke.edu + +http://www.joma.org/about.html + +Takes advantage of the World Wide Web as a publication medium for materials containing dynamic, full-color graphics; internal and external hyperlinks to related resources; applets in Java, Flash, Shockwave, or other languages; MathML, SVG, and other XML overlays; audio and video clips; and other Web-based features. + +# Journal of Recreational Mathematics + +Baywood Publishing Company, Inc. + +26 Austin Ave. 
P.O. Box 337

Amityville, NY 11701

http://www.ashbacher.com/jrecmath.stm

Articles, book reviews, an alphametics problem section, and a problems and conjectures section.

Book Reviews

Charles Ashbacher

Charles Ashbacher Technologies

Box 294

119 Northwood Drive

Hiawatha, IA 52233

Problems and Solutions

Steven Kahan

41 St Quentin Drive

Sheffield

S17 4PN

U.K.

# (*) Journal of Statistics Education

W. Robert Stephenson, Editor

327 Snedecor Hall

Dept. of Statistics

Iowa State University

Ames, IA 50011-1210

wrstephe@iastate.edu

http://www.amstat.org/publications/jse/

"The intended audience includes anyone who teaches statistics, as well as those interested in research on statistical and probabilistic reasoning.

"Possible topics for manuscripts include, but are not restricted to: curricular reform in statistics, the use of cooperative learning and projects, innovative methods of instruction, assessment, and research (including case studies) on students' understanding of probability and statistics, research on the teaching of statistics, attitudes and beliefs about statistics, creative and tested ideas (including experiments and demonstrations) for teaching probability and statistics topics, the use of computers and other media in teaching, statistical literacy, and distance education. Articles that provide a scholarly overview of the literature on a particular topic are also of interest. Reviews of software, books, and other teaching materials will also be considered, provided these reviews describe actual experiences using the materials.

"In addition, JSE also features departments called 'Teaching Bits: A Resource for Teachers of Statistics' and 'Datasets and Stories.' 'Teaching Bits' summarizes interesting current events and research that can be used as examples in the statistics classroom, as well as pertinent items from the education literature. 
The 'Datasets and Stories' department not only identifies interesting datasets and describes their useful pedagogical features, but enables instructors to download the datasets for further analysis or dissemination to students." + +# (*) The MAA Online Book Review + +Fernando Gouvêa, Editor + +Department of Mathematics + +Colby College + +Waterville, ME 04901 + +fqgouvea@colby.edu + +http://www.maa.org/reviews/reviews.html + +Book reviews in the following categories: + +- Books on mathematics intended for the "general public". +- Books intended for a general mathematical audience. For example, expository works on mathematical subjects, particularly if they are accessible to people who have an undergraduate background in mathematics. +- Books designed or usable as supplements to classroom instruction in mathematics. For example, this includes problem books and books with mathematics-related readings. +- Books on the history and philosophy of mathematics, if they are of broad appeal. This includes books on the mathematical community, biographies of mathematicians and scientists whose work is closely related to mathematics, etc. + +- Books on mathematics teaching, especially those focusing on undergraduate teaching. We are particularly interested in seeing more books in this area. +- Innovative textbooks, especially those covering topics not usually taught at the undergraduate level. +- Science books whose topics are closely related to mathematics, especially if they can help mathematics professors learn about new points of contact between mathematics and other disciplines. + +# Math Horizons + +Art Benjamin, Co-Editor + +Harvey Mudd College + +1250 N. Dartmouth Ave + +Claremont, CA 91711 + +benjamin@hmc.edu, jquinn@oxy.edu + +http://www.maa.org/mathhorizons/ + +Intended primarily for undergraduates interested in mathematics. "Our purpose is to introduce students to the world of mathematics outside the classroom. 
Thus, while we especially value and desire to publish high quality exposition of beautiful mathematics, we also wish to publish lively articles about the culture of mathematics. We interpret this quite broadly—we welcome stories of mathematical people, the history of an idea or circle of ideas, applications, fiction, folklore, traditions, institutions, humor, puzzles, games, book reviews, student math club activities, and career opportunities and advice."

# Mathematical Gazette

Gerry Leversha, Editor

The Mathematical Association

259 London Road

Leicester, LE2 3BE

U.K.

gazette@m-a.org.uk

http://www.m-a.org.uk/resources-periodicals/the_mathematical_gazette/

Articles about the teaching and learning of mathematics, with a focus on the 15-20 age range, and expositions of attractive areas of mathematics. Regular sections include letters, extensive book reviews, and a problem corner.

# Mathematical Scientist

Executive Editor

School of Mathematics and Statistics

University of Sheffield

Sheffield S3 7RH

U.K.

http://www.shef.ac.uk/uni/companies/apt/tms.html

- Research papers of general interest in the mathematical sciences, particularly those where the use of mathematical theory, methods and models provides insight into phenomena studied in the engineering, physical, biological and social sciences.
- Review papers and historical surveys.
- Expository papers on any branch of mathematics which is of general interest.
- Reports on applications of the mathematical sciences to problems in real life.
- Abstracts and proceedings of appropriate conferences.
- Unsolved problems, letters to the editor and readers' comments on any branch of mathematics of general interest.

# Mathematical Spectrum

David W. Sharpe, Editor

Hicks Building

University of Sheffield

Sheffield S3 7RH

U.K. 
+ +http://www.appliedprobability.org/ms.html + +Articles from all branches of mathematics, as well as regular features on mathematics in the classroom, a computer column, letters, problems and solutions, book and software reviews. + +# Mathematics and Computer Education + +George M. Miller, Editor-in-Chief + +P.O.Box 158 + +Old Bethpage, NY 11804 + +http://www.macejournal.org/index.html + +- Critical evaluation and dissemination of articles. +- Development of materials for the improvement of classroom effectiveness in the first years of college. +- Encouragement of high academic standards. + +# Mathematics & Informatics Quarterly + +Jordan Tabov + +Bulgarian Academy of Sciences + +Institute of Mathematics with Computer Center + +bl. 8, Acad. G. Bontchev Str. + +1113 SOFIA + +Bulgaria + +banmath@bgearn.bitnet + +http://olympiads.win.tue.nl/loi/misc/miq.html + +Articles, notes, problems and solutions in school mathematics and informatics. + +# Mathematics Magazine + +Allen J. Schwenk, Editor + +Western Michigan University + +Department of Mathematics + +Kalamazoo, MI 49008-3899 + +http://www.maa.org/pubs/mathmag.html + +"Articles submitted to the Magazine should be written in a clear and lively expository style. The Magazine is not a research journal; papers in a terse "theorem-proof" style are unsuitable for publication. The best contributions provide a context for the mathematics they deliver, with examples, applications, illustrations, and historical background. We especially welcome papers with historical content, and ones that draw connections among various branches of the mathematical sciences, or connect mathematics to other disciplines." + +# Mathematics Today + +(formerly Bulletin of the Institute of Mathematics and Its Applications) + +Gayna Leggott, Editorial Officer + +The Institute of Mathematics and its Applications + +Catherine Richards House + +16 Nelson Street + +Southend-on-Sea + +Essex SS1 1EF + +U.K. 
+ +http://www.ima.org.uk/institute/mathstoday.htm + +"Mathematics Today is a general interest mathematics publication aimed primarily at Institute members. It contains articles, reviews, reports and other news on developments in mathematics and its applications. Authors are encouraged to discuss proposed articles with Gayna Leggott (gayna.leggott@ima.org.uk) before submission." + +# The Missouri Journal of Mathematical Sciences + +Shing So, Coordinating Editor +Department of Mathematics and Computer Science +Central Missouri State University +Warrensburg, MO 64093 +so@cmsu1.cmsu.edu +http://www.math-cs.cmsu.edu/~mjms/mjms.html + +- "Commentaries on issues pertaining to mathematics/computer science or the teaching/learning of mathematics/computer science. +- Articles concerning the teaching/learning of mathematics or computer science. +- Research or survey articles in any of the mathematical sciences. +- Interesting mathematical problems and solutions." + +# The Pentagon + +Steve Nimmo, Editor +Morningside College +1501 Morningside Ave. +Sioux City, IA 51106 +sdn001@alpha.morningside.edu +http://www.kme.eku.edu/pentagon.html + +Articles of interest to undergraduate mathematics students are included, assisting the Society in achieving its objectives. + +# Philosophia Mathematica + +R.S.D. Thomas, Editor +Department of Mathematics +The University of Manitoba +Winnipeg, Manitoba +Canada R3T 2N2 +thomas@cc.umanitoba.ca +http://www.umanitoba.ca/pm/ + +Work in the philosophy of pure and applied mathematics including computing. + +# Pi Mu Epsilon Journal + +Brigitte Servatius, Editor +Department of Mathematics +Worcester Polytechnic Institute +Worcester, MA 01609 +bservat@wpi.edu +http://www.pme-math.org/journal/overview.html + +Research or expository papers by undergraduates, with occasional articles by faculty members. + +# PRIMUS + +"Problems, Resources and Issues in Mathematics Undergraduate Studies" + +Brian J. 
Winkel, Editor

Department of Mathematical Sciences

United States Military Academy

West Point, NY 10996

ab3646@usma2.usma.edu

http://www.dean.usma.edu/math/pubs/primus/

A forum for the exchange of ideas in mathematics education at the college level.

# (*) Rose-Hulman Institute of Technology Undergraduate Math Journal

Editor, Rose-Hulman Undergraduate Mathematics Journal

Rose-Hulman Institute of Technology

Terre Haute, IN 47803

mathjournal@rose-hulman.edu

"Devoted entirely to papers written by undergraduates on topics related to mathematics; the work must have been completed before graduation. Although the paper need not contain original research in mathematics, it must be interesting, well-written, and at a level that is clearly beyond a typical homework assignment. Readers of the journal should expect to see new results, new and interesting proofs of old results, historical developments of a theorem or area of mathematics, relationships between areas of mathematics and/or other fields of study, or interesting applications of mathematics."

# SIAM Review

Margaret H. Wright, Editor-in-Chief

Computer Science Department

Courant Institute of Mathematical Sciences

New York University

New York, NY 10012

mhw@cs.nyu.edu

http://www.siam.org/journals/sirev/sirev.htm

Consists of five sections, all containing articles of broad interest:

- Survey and Review features papers with a deliberately integrative and up-to-date perspective on a major topic in applied or computational mathematics or scientific computing.
- Problems and Techniques contains focused, specialized papers about interesting problems, techniques, and tools, including descriptions of mathematical formulations, solution methods, and open questions.
- SIGEST highlights a recent paper from one of SIAM's nine specialized research journals, chosen on the basis of exceptional interest to the entire SIAM community and revised and condensed as needed for greater accessibility. 
+- Education consists primarily of modules that are self-contained presentations of specific topics in applied mathematics, scientific computation, or their applications; each module provides the primary material needed to teach a given topic as well as supplementary material. Editor: Bobby Schnabel, bobby@cs.colorado.edu. +- The Book Reviews section contains a featured review that provides an overview of several books in a subject area. Shorter reviews of individual books are also included. + +# Significance + +Helen Joyce, Editor + +Royal Statistical Society + +12 Errol Street + +London + +EC1Y 8LX + +U.K. + +significance@rss.org.uk + +http://www.blackwellpublishing.com/journal.asp?ref=1740-9705 + +"New quarterly magazine for anyone interested in statistics and the analysis and interpretation of data. Its aim is to communicate and demonstrate in an entertaining and thought-provoking way the practical use of statistics in all walks of life and to show how statistics benefit society. Articles will be largely non-technical and hence accessible and appealing, not only to members of the profession, but also to all users of statistics. As well as promoting the discipline and covering topics of professional relevance, Significance will contain a mixture of statistics in the news, case-studies, reviews of existing and newly developing areas of statistics, the application of techniques in practice and problem solving, all with an international flavour." + +# Stats: The Magazine for Students of Statistics + +Allan Rossman and Beth Chance + +Department of Statistics + +Cal Poly State University + +San Luis Obispo, CA 93407 + +arossman@calpoly.edu, bchance@calpoly.edu + +http://www.amstat.org/publications/stats/ + +Contributions of statisticians to important and interesting problems. This generally includes presenting a scientific problem and the nature of interaction between the statistician and others working on the problem. 
+ +# Symmetry: Art and Science + +(continuation of Symmetry: Culture and Science) + +Denes Nagy + +Institute for the Advancement of Research + +Australian Catholic University + +Locked Bag 4115 + +Fitzroy, Victoria 3065 + +Australia + +d.nagy@patrick.acu.edu.au + +or + +George Lugosi + +ISIS-Symmetry Melbourne Centre (Symmetrion) + +2 Union Street + +Kew 3101 + +Victoria + +Australia + +g.lugosi@hfi.unimelb.edu.au + +http://www.mi.sanu.ac.yu/vismath/isis6.htm + +"Symmetry, or the lack of symmetry, fulfils an important methodological function in modern art and science. Inspired by various cultural traditions, from Europe to Africa and from the Far-East to America, symmetry can bridge different branches of science and art, as well as different human cultures, and thus avoid overspecialization and some related problems. This process, matured by the end of the 1980s, became the starting point of a remarkable intellectual movement." + +# Teaching Statistics + +Gerald Goodall, Editor + +Royal Statistical Society + +12 Errol St. + +London EC1Y 8LX + +U.K. + +Gerald.Goodall@brunel.ac.uk + +http://science.ntu.ac.uk/rsscse/TS/ + +"Aimed at teachers and students aged up to age 19 who use statistics in their work. The emphasis is on teaching the subject and addressing problems which arise in the classroom. The journal seeks to support not only specialist statistics teachers but also those in other disciplines, such as economics, biology and geography, who make widespread use of statistics in their teaching. Teaching Statistics seeks to inform, enlighten, stimulate, correct, entertain and encourage. Contributions should be light and readable. Formal mathematics should be kept to a minimum." + +# The UMAP Journal + +"Undergraduate Mathematics Applications Project" + +Paul J. Campbell, Editor + +Campus Box 194 + +Beloit College + +700 College St. 
Beloit, WI 53511-5595

campbell@beloit.edu

http://cs.beloit.edu/campbell/umap

Articles on mathematical modeling and applications of mathematics at the undergraduate level. The editor also welcomes expository articles for the On Jargon column, reviews of books and other materials, and guest editorials on new ideas in mathematics education or on interaction between mathematics and application fields.

# Undergraduate Mathematics Journal

Roger Lautzenheiser, Editor-in-Chief

Rose-Hulman Institute of Technology

Terre Haute, IN 47803

mathjournal@rose-hulman.edu

http://www.rose-hulman.edu/mathjournal/

Papers written by undergraduates on topics related to mathematics.

# (*) Visual Mathematics

Slavik Jablan, Co-editor

Knez Mihailova 35, P.O. Box 367

YU-11001 Belgrade

Yugoslavia (Serbia)

jablans@mi.sanu.ac.yu

http://www.mi.sanu.ac.yu/vismath/vm.htm

"A forum for the dialogue between artists and scientists. VM publishes original works in the following sense:

- mathematical research papers with new results and some attractive illustrations,
- artistic papers with new pieces of visual information and some mathematical links,
- mathematical-educational papers with new methods or approaches,
- mathematical-historic papers with new facts or new interpretations,
- survey papers with new approaches.

The main goal of *VM* is to show the beauty of mathematics in a broad artistic-scientific context. As a secondary aim, *VM* tries to correct the negative tendency that led to the unpopularity of mathematics in school and the lack of public understanding of this field."

This online journal supplements the printed journal Symmetry: Art and Science (see above). Some papers are published in both electronic form and in print. 
+ +# About the Authors + +![](images/40f6c1f7d0ae997965a0c41c5b93d1744821f79988415c85e7ff5d20e678f44f.jpg) + +Paul Campbell graduated summa cum laude from the University of Dayton and received an M.S. in algebra and a Ph.D. in mathematical logic from Cornell University. He has been at Beloit College since 1977, where he served as Director of Academic Computing from 1987 to 1990. He is Reviews Editor for Mathematics Magazine. He has been editor of The UMAP Journal since 1984. + +![](images/c371f8b9226cc79569365fdb9731b1492328abe9d3d2442d7964e1df72227322.jpg) + +Kunio Mitsuma was born in Tokyo, Japan. After finishing college as a mathematics major, he moved to the U.S. for his master's (West Virginia University) and Ph.D. (Pennsylvania State University) degrees in mathematics. He is currently on the mathematics faculty at Kutztown University of Pennsylvania. + +# Modeling Forum + +# Results of the 2005 Interdisciplinary Contest in Modeling + +Chris Arney, ICM Co-Director + +Division Chief, Mathematical Sciences Division + +Program Manager, Cooperative Systems + +Army Research Office + +PO Box 12211 + +Research Triangle Park, NC 27709-2211 + +David.Arney1@arl.army.mil + +# Introduction + +A total of 164 teams of undergraduates, from 88 institutions in 4 countries, spent a weekend in February working on an applied mathematics problem in the 7th Interdisciplinary Contest in Modeling (ICM). + +This year's contest began at 8:00 P.M. (EST) on Thursday, Feb. 3, and ended at 8:00 P.M. (EST) on Monday, Feb. 7. During that time, the teams of up to three undergraduates or high-school students researched, modeled, analyzed, solved, wrote, and submitted their solutions to an open-ended complex interdisciplinary modeling problem involving the depletion of a nonrenewable or exhaustible resource. After the weekend of challenging and productive work, the solution papers were sent to COMAP for judging. 
Three of the top papers, which were judged to be Outstanding by the panel of judges, appear in this issue of The UMAP Journal. Results and winning papers from the first six contests were published in special issues in 1999 through 2004.

COMAP's Mathematical Contest in Modeling and Interdisciplinary Contest in Modeling are unique among modeling competitions in that they are the only international contests in which students work in teams to find a solution. Centering its educational philosophy on mathematical modeling, COMAP supports the use of mathematical tools to explore real-world problems. It serves society by developing students as problem solvers, so that they become better informed—and prepared—citizens, consumers, workers, and leaders.

This year's nonrenewable resource problem was particularly challenging. It required teams to select a nonrenewable or exhaustible resource and model its depletion over time. The problem contained economic, demographic, political, environmental, security, and technological issues to be analyzed, along with several challenging requirements needing scientific and mathematical analysis. The problem also included the ever-present requirements of the ICM to use thorough data analysis, creativity, approximation, precision, and effective communication. The authors of the problem were Paul Campbell, editor of *The UMAP Journal* and Professor of Mathematics and Computer Science at Beloit College, and geoscientist and civil engineer Ted Hromadka, who served on the panel of final judges. The problem originated from their research and interest in resource management. Commentaries from both Dr. Campbell and Dr. Hromadka appear in this issue of *The UMAP Journal*.

All 164 of the competing teams are to be congratulated for their excellent work and dedication to scientific modeling and problem solving. This year's judges remarked that the quality of the papers was high and the modeling very robust. 
The 2005 ICM was managed by COMAP via its information system connected to the World Wide Web, where teams registered, obtained contest materials, and downloaded the problem at the appropriate time through COMAP's ICM Website.

Start-up funding for the ICM was provided by a grant from the National Science Foundation (through Project INTERMATH) and COMAP. Additional support is provided by the Institute for Operations Research and the Management Sciences (INFORMS).

# The Exhaustible Resource Problem

Select a vital nonrenewable or exhaustible resource (water resources, mineral, energy source, food source, etc.) for which your team can find appropriate worldwide historic data on its endowment, discovery, annual consumption, and price.

The modeling tasks are to:

1. Using the endowment, discovery, and consumption data, model the depletion or degradation of the commodity over a long horizon using resource modeling principles.
2. Adjust the model to account for future economic, demographic, political and environmental factors. Be sure to reveal the details of your model, provide visualizations of the model's output, and explain the limitations of the model.
3. Create a fair, practical "harvesting/management" policy, which may include economic incentives or disincentives, that sustains the usage over a long period of time while avoiding severe disruption of consumption, degradation or rapid exhaustion of the resource.
4. Develop a "security" policy that protects the resource against theft, misuse, disruption, and unnecessary degradation or destruction of the resource(s). Other issues that may need to be addressed are political and security management alternatives associated with these policies.
5. Develop policies to control any short or long-term "environmental effects" of the harvesting. Be sure to include issues such as pollutants, increased susceptibility to natural disasters, waste handling and storage, and other factors you deem appropriate.
6. 
Compare this resource(s) with any other alternatives for its purpose. What new science or technologies could be developed to mitigate the use and potential exhaustion of this resource? Develop a research policy to advance these new areas. + +# The Results + +Solution papers were coded at COMAP headquarters so that names and affiliations of authors were unknown to the judges. Each paper was then read preliminarily by at least two "triage" judges at the U.S. Military Academy at West Point, NY. At the triage stage, the summary, the model description, and overall organization are the primary elements in judging a paper. If the judges' scores diverged for a paper, the judges conferred; if they still did not agree on a score, additional triage judges evaluated the paper. + +Final judging by a team of modelers, analysts, and subject-matter experts took place on March 4 and 5, again at West Point, NY. The judges classified the papers as follows: + +
| | Outstanding | Meritorious | Honorable Mention | Successful Participation | Total |
|---|---|---|---|---|---|
| IT Security | 3 | 26 | 89 | 46 | 164 |
+ +The three papers that the judges designated as Outstanding appear in this special issue of The UMAP Journal, together with commentaries by the author and a final judge. We list those teams and the Meritorious teams (and advisors) below; the list of all participating schools, advisors, and results is in the Appendix. + +# Outstanding Teams + +
| Institution and Advisor | Team Members |
|---|---|
| **"The Coming Oil Crisis"** | |
| East China University of Science and Technology | Wei Deling |
| Shanghai, China | Chen Jie |
| Ni Zhongxin | Xu Hui |
| **"Preventing the Hydrocalypse: A Model for Predicting and Managing Worldwide Water Resources"** | |
| Franklin W. Olin College of Engineering | Steven Krumholz |
| Needham, MA (INFORMS Prize winner) | Frances Haugen |
| Burt Tilley | Daniel Lindquist |
| **"The Petroleum Armageddon"** | |
| Maggie Walker Governor's School | |
| Richmond, VA | Jonathan Giuffri |
| John Barnes | Palmer Mebane |
| | Daniel Lacker |
# Meritorious Teams (26 teams)

Beijing Language and Culture University, China (Rou Song)

Central University of Finance and Economics, China (Weihong Yu)

Dalian Maritime University, China (Guoyan Chen)

Dalian Nationalities University, China (Xiangdong Liu)

Dalian University of Technology, China (Mingfeng He)

East China University of Science & Technology, China (Lu Yuanhong)

Jinan University, China (Daiqiang Hu)

Maggie Walker Governor's School, Richmond, VA (John Barnes)

Nanjing University of Posts & Telecommunications, China (LiWei Xu)

National University of Defense Technology, China (Mengda Wu)

Ningbo Institute of Technology, China (Jufeng Wang)

Päivölä College, Finland (Esa Lappi)

Peking University, China (3 teams) (Yulong Liu: 2 teams; Huang Hai)

Shandong University, China (2 teams) (Jiahua Ma) (Baodong Liu)

Sun Yat-Sen University, China (Qi-Ru Wang)

United States Military Academy, West Point, NY (2 teams) (Bart Stewart) (Michael Smith)

University of Science & Technology of China, Hefei, China (Qiang Meng)

University of Virginia, Charlottesville, VA (Robert Hirosky)

Wuhan University, China (Zhong Liuyi)

Xidian University, China (Xiaogang Qi)

Zhejiang University City College, China (2 teams) (Xusheng Kang) (Waibin Huang)

# Awards and Contributions

Each participating ICM advisor and team member received a certificate signed by the Contest Directors and by the Head Judge. Additional awards from the Institute for Operations Research and the Management Sciences (INFORMS) were presented to the Olin College of Engineering team advised by Burt Tilley.

# Judging

Contest Directors

Chris Arney, Mathematical Sciences Division, Army Research Office, Research Triangle Park, NC

Gary W. Krahn, Dept. of Mathematical Sciences, U.S. Military Academy, West Point, NY

Associate Director

Richard Cassady, Dept. 
of Industrial Engineering, University of Arkansas, Fayetteville, AR

Judges

Laura Hromadka, Hromadka and Associates, Costa Mesa, CA

Theodore V. Hromadka, Hromadka and Associates, Costa Mesa, CA

V. Frederick Rickey, Dept. of Mathematical Sciences, U.S. Military Academy, West Point, NY

Triage Judges

Dept. of Mathematical Sciences, U.S. Military Academy, West Point, NY:

Scott Billie, Mason Crow, David Ellison, Andrew Glen, Alex Heidenberg, John Jackson, Michael Johnson, Gary Krahn, Gary Lambert, Amy Lin, Keith McClung, Barbara Melendez, Fernando Miguel, Joe Myers, Mike Phillips, Jack Picciuto, Frederick Rickey, Tyge Rugenstein, Bart Stewart, Rodney Sturdivant, Frank Wattenberg, and Brian Winkel.

Materials Division, Army Research Laboratory, Aberdeen, MD: William de Rosset.

# Source of the Problem

The Exhaustible Resources Problem was contributed by Paul J. Campbell (Dept. of Mathematics and Computer Science, Beloit College, WI) and Ted Hromadka (Hromadka and Associates, Costa Mesa, CA).

# Acknowledgments

We thank:

- the Institute for Operations Research and the Management Sciences (INFORMS) for its support in judging and providing prizes for the winning team;
- all the ICM judges and ICM Board members for their valuable and unflagging efforts;
- the staff of the Dept. of Mathematical Sciences, U.S. Military Academy, West Point, NY, for hosting the triage and final judgings.

# Cautions

To the reader of research journals:

Usually a published paper has been presented to an audience, shown to colleagues, rewritten, checked by referees, revised, and edited by a journal editor. Each of the student papers here is the result of undergraduates working on a problem over a weekend; allowing substantial revision by the authors could give a false impression of accomplishment. So these papers are essentially au naturel. 
Light editing has taken place: minor errors have been corrected, wording has been altered for clarity or economy, style has been adjusted to that of The UMAP Journal, and the papers have been edited for length. Please peruse these student efforts in that context.

To the potential ICM Advisor:

It might be overpowering to encounter such output from a weekend of work by a small team of undergraduates, but these solution papers are highly atypical. A team that prepares and participates will have an enriching learning experience, independent of what any other team does.

# Editor's Note

As usual, the Outstanding papers were longer than we can accommodate in the Journal, so space considerations forced me to edit them for length. It was not possible to include all of the many tables and figures.

In editing, I endeavored to preserve the substance and style of the paper, especially the approach to the modeling.

Paul J. Campbell, Editor

# Appendix: Successful Participants

KEY:

P = Successful Participation

H = Honorable Mention

M = Meritorious

O = Outstanding (published in this special issue)
| INSTITUTION | CITY | ADVISOR | |
|---|---|---|---|
| **CALIFORNIA** | | | |
| California State Univ., Monterey Bay | Seaside | Jeffrey Groah | P |
| | | Hongde Hu | H |
| Harvey Mudd College | Claremont | Hank Krieger | H |
| **COLORADO** | | | |
| Regis University | Denver | Jim Seibert | H |
| University of Colorado | Colorado Springs | Radu Cascaval | H |
| | Denver | Lynn Bennethum | H |
| **INDIANA** | | | |
| Earlham College | Richmond | Charlien Peck | H |
| **IOWA** | | | |
| Simpson College | Indianola | James Bohy | H |
| **KENTUCKY** | | | |
| Asbury College | Wilmore | David Coulliette | H |
| | | Kenneth Rietz | H |
| | | Duk Lee | H |
| Thomas More College | Crestview Hills | Steven Lameier | P |
| **MARYLAND** | | | |
| Villa Julie College | Stevenson | Eileen McGraw | P |
| **MASSACHUSETTS** | | | |
| Olin College of Engineering | Needham | Burt Tilley | O |
| **MONTANA** | | | |
| Carroll College | Helena | Mark Parker | H,P |
| **NEW YORK** | | | |
| United States Military Academy | West Point | Bart Stewart | M |
| | | Michael Smith | M |
| **NORTH CAROLINA** | | | |
| Duke University | Durham | David Kraines | H |
| **OHIO** | | | |
| Youngstown State University | Youngstown | George Yates | H,H |
| | | Scott Martin | |
| INSTITUTION | CITY | ADVISOR | |
|---|---|---|---|
| **PENNSYLVANIA** | | | |
| Clarion University of Pennsylvania | Clarion | John Heard | P |
| | | Curt Foltz | P |
| **SOUTH CAROLINA** | | | |
| Midlands Technical College | West Columbia | Richard Bailey | P |
| **VIRGINIA** | | | |
| Maggie Walker Governor's School | Richmond | John Barnes | O,M |
| | | Martha Hicks | H |
| James Madison University | Harrisonburg | David Walton | H |
| University of Virginia | Charlottesville | Robert Hirosky | M |
| **WASHINGTON** | | | |
| University of Washington | Seattle | Sara Billey | H |
| | | Sandor Kovacs | H,H |
| **CHINA** | | | |
| **Anhui** | | | |
| Anhui University | Hefei | Wang Jian | P |
| Hefei University of Technology | Hefei | Bao Chaowei | H |
| | | Liang Weizhong | H |
| | | Gong Kun | H |
| University of Science and Technology of China | Hefei | Cheng Yezeng | H |
| | | Meng Qiang | M |
| | | Wang Huiwen | H |
| **Beijing** | | | |
| Beihang University | Beijing | Wu Sanxing | P |
| Beijing Institute of Technology | Beijing | Li Bingzhao | H,P |
| | | Chen Yihong | H |
| Beijing Jiaotong University, School of Science | Beijing | Wang Xiaoxia | P |
| | | Feng Guochen | P |
| | | Ren Liwei | H |
| Beijing Language and Culture University | Beijing | Song Rou | M |
| Beijing University of Chemical Technology | Beijing | Liu Hui | H |
| | | Huang Jinyang | H |
| | | Cheng Yan | P |
| Beijing Univ. of Posts and Telecommunications | Beijing | Ding Jinkou | H |
| | | Sun Hongxiang | H |
| | | Wu Yunfeng | H |
| | | Zhang Wenbo | H |
| Beijing University of Technology | Beijing | Xue Yi | H,P |
| | | Chang Yu | H |
| Central University of Finance and Economics | Beijing | Yu Weihong | M |
| Peking University | Beijing | Wang Ming | H |
| | | Liu Yulong | M,M |
| | | Huang Hai | M |
| (Earth and Space Science) | | Liu Chuxiong | H |
| (Health Science Center) | | Shu Xue | H,H |
| Tsinghua University | Beijing | Ye Jun | H |
| | | Xie Jinxing | P |
| (School of Science) | | Huang Hongxuan | P |
| **Chongqing** | | | |
| Chongqing University | Chongqing | Li Zhiliang | H |
| | | Gong Qu | H |
| **Guangdong** | | | |
| Jinan University | Guangzhou | Hu Daiqiang | M |
| | | Fan Suohai | P |
| | | Zhang Chuanlin | P |
| South-China Normal University | Guangzhou | Wang Henggeng | P |
| South China University of Technology | Guangzhou | Tao Zhi Sui | P |
| | | Pan Shao Hua | P |
| | | Liang Man Fa | H |
| Sun Yat-Sen University | Guangzhou | Wang Qi-Ru | M |
| | | Bao Yun | P |
| | | Li Cai Wei | H |
| **Heilongjiang** | | | |
| Harbin Engineering University | Harbin | Gao ZhenBin | H,P |
| | | Zhang XiaoWei | H |
| | | Yu Fei | H |
| | | Yu Tao | P |
| | | Shen JiHong | H |
| Harbin Institute of Technology | Harbin | Zhang Yunfei | H,P |
| Harbin University of Science and Technology | Harbin | Wang Shuzhong | H |
| | | Chen Dongyan | P |
| | | Li Dongmei | H |
| Northeast Agricultural University | Harbin | Tang Yanya | H |
| **Hubei** | | | |
| Wuhan University | Wuhan | Zhong Liuyi | M |
| | | Liu Dichen | H |
| **Hunan** | | | |
| Central South University | Changsha | Hou Muzhou | H,H |
| Hunan University | Changsha | Li Xiaopei | H |
| National University of Defense Technology | Changsha | Wu Mengda | M,H |
| **Jiangsu** | | | |
| China University of Mining and Technology | Xuzhou | Zhou Shengwu | H |
| Nanjing University of Post and Telecommunication | Nanjing | Xu LiWei | M,H |
| Nanjing University of Science & Tech. | Nanjing | Zhang Haifei | H |
| | | Huang Zhenyou | P |
| Southeast University | Nanjing | Chen Enshui | H |
| | | Wang Feng | H,P |
| **Jilin** | | | |
| Jilin University | Changchun | Huang Qingdao | P |
| | | Ji Youqing | P |
| | | Cao Chunling | H |
| **Liaoning** | | | |
| Dalian Maritime University | Dalian | Chen Guoyan | M |
| Dalian Nationalities University (Economics and Management) | Dalian | Liu Xiangdong | M |
| | | Zhang Hengbo | P |
| | | Ge Rendong | P |
| Dalian University (Information and Engineering) (Physics) | Dalian | Gang Jiatai | H |
| | | Wang Yanchun | H |
| Dalian University of Technology | Dalian | He Mingfeng | M |
| | | Wang Yi | H |
| (Institute of University Students' Innovation) | | He Mingfeng | H,H |
| | | Pan Qiuhui | P |
| (School of Software) | | Yu Changliang | H,P |
| **Shaanxi** | | | |
| North University of China | Taiyuan | Le Yingjie | H |
| Northwestern Polytechnical University | Xi'an | Lv Quanyi | H |
| Xi'an Jiaotong University | Xi'an | Zhou Yicang | P |
| | | Dai Yonghong | H |
| Xidian University | Xi'an | Zhou Shuisheng | H |
Qi XiaogangM
Feng HailinH
Shandong
Shandong UniversityJinanLiu BaodongM
Ma JianhuaM
Shanghai
Donghua UniversityShanghaiMa BiaoH
Ma Yu-fangH
East China University of Science and TechnologyShanghaiLu YuanhongM
Ni ZhongxinO
Fudan UniversityShanghaiCao YuanH
Cai ZhijieH
Shanghai Jiao Tong UniversityShanghaiSong BaoruiH
Sichuan +Univ. of Electronic Science and Tech. of ChinaChengduZhang YongH
Du HongfeiH,H
Tianjin +Tianjin UniversityTianjinLin DanH
Song ZhanjieH
Zhejiang +Academy of ScienceHangzhouShi GuoshengP
Zhejiang Gongshang UniversityHangzhouZhu LingP,P
Zhao HengP
Zhejiang University (Applied Mathematics) +(Mathematics)HangzhouYang QifanP
Tan ZhiyiP
(College of Science)HangzhouJiang YiweiH
(City College)Kang XushengM
Wang GuiH
Huang WaibinM
(Chu Kechen Honors College)HangzhouWu JianH
(Ningbo Institute of Technology)NingboLi ZheningP
Wang JufengM,P
FINLAND +Päivölä CollegeTarttilaJukka IlmonenP
Esa LappiM,H
INDONESIA +Institut Teknologi BandungBandungEdy SoewonoP
Agus GunawanH
# Editor's Note

Unless otherwise specified, the sponsoring department is the Dept. of Mathematics, Applied Mathematics, Mathematical Sciences, or Mathematics and Computer Science.

For team advisors from China, we have endeavored to list family name first.

# The Coming Oil Crisis

Wei Deling

Chen Jie

Xu Hui

East China University of Science and Technology

Shanghai, China

Advisor: Ni Zhongxin

# Summary

We model the depletion of oil, a typical vital nonrenewable resource. Based on the theory of supply and demand, we establish a differential equation system that includes demand, supply, and price, and derive explicit formulas for the three variables. We modify the model to reflect exponentially increasing worldwide oil demand.

We fit the modified model to worldwide oil demand data for 1970-2003. We conclude that all oil will be used up by 2032 without countermeasures. We then take economic, demographic, political, and environmental factors into account.

To meet the needs of people today without compromising those of future generations, we establish a criterion of rational oil allocation between generations and model optimal oil allocation under this criterion, with an illustration.

We provide a strategy for oil exploitation to reduce the possibility of disasters in the short term.

Finally, according to marginal-utility replacement rules, we study the tradeoff between oil and its alternatives. Since our model is based on supply-demand theory and the intrinsic law of nonrenewable resources, it can be applied to nonrenewable resources in general.

# Task 1: Modeling the Depletion of Oil

Under the following assumptions, no restriction is imposed to conserve oil, so it is exhausted as fast as possible.

# Assumptions

- Oil refining capacity is adequate.
- All undiscovered oil is available when necessary—as long as there is demand, there is supply, until all the oil on Earth is completely used up.
# Notations

- $U(t)$: oil undiscovered in year $t$.
- $R(t)$: oil discovered but not yet used (reserves) in year $t$.
- $D(t)$: worldwide oil demand in year $t$ (in thousands of barrels per day (bpd)).
- $S(t)$: worldwide oil supply in year $t$.
- $P(t)$: oil price in year $t$.
- $P_0$: equilibrium price of oil.

# Modeling

From the above definitions, $U(t) + R(t)$ is the total remaining oil on Earth in year $t$, and $\sum_{i = t}^{n} D(i)$ is the total demand from year $t$ through year $n$.

To learn when the total remaining oil will be used up, we find $n$ such that

$$
\sum_{i = t}^{n} D(i) \leq U(t) + R(t) < \sum_{i = t}^{n + 1} D(i); \tag{1}
$$

then oil will be depleted between year $n$ and year $n + 1$.

# Data

- Estimated undiscovered oil worldwide in 1997 was 180 billion barrels, that is, $U(1997) = 180 \times 10^{9}$ bbl [Campbell 1997].
- Worldwide oil reserves in 1997 were 1,018.5 billion barrels, that is, $R(1997) = 1{,}018.5 \times 10^{9}$ bbl [Energy Information Administration 2004].
- Worldwide oil demand $D(i)$ for $i = 1970, \ldots, 2003$ is shown in Table 1 [Energy Information Administration 2004].

To predict future demand, we consider the following system of first-order linear ordinary differential equations that express supply-demand principles:

Table 1. Worldwide oil demand, 1970-2003 (thousands of barrels/day (bpd)). Source: Energy Information Administration [2004].
| Year | Demand | Year | Demand | Year | Demand | Year | Demand |
|------|--------|------|--------|------|--------|------|--------|
| 1970 | 46,808 | 1980 | 63,108 | 1990 | 66,443 | 2000 | 76,954 |
| 1971 | 49,416 | 1981 | 60,944 | 1991 | 67,061 | 2001 | 78,105 |
| 1972 | 53,094 | 1982 | 59,543 | 1992 | 67,273 | 2002 | 78,439 |
| 1973 | 57,237 | 1983 | 58,779 | 1993 | 67,372 | 2003 | 79,813 |
| 1974 | 56,677 | 1984 | 59,822 | 1994 | 68,679 | | |
| 1975 | 56,198 | 1985 | 60,087 | 1995 | 69,955 | | |
| 1976 | 59,673 | 1986 | 61,825 | 1996 | 71,522 | | |
| 1977 | 61,826 | 1987 | 63,104 | 1997 | 73,292 | | |
| 1978 | 64,158 | 1988 | 64,963 | 1998 | 73,932 | | |
| 1979 | 65,220 | 1989 | 66,092 | 1999 | 75,826 | | |
$$
\frac{dS}{dt} = a\tilde{P}, \tag{2}
$$

$$
\frac{d\tilde{P}}{dt} = -b(S - D), \tag{3}
$$

$$
\frac{dD}{dt} = -c\tilde{P}, \tag{4}
$$

where $\tilde{P} = P(t) - P_0$ and $a, b, c$ are positive constants.

Eq. (2) means that if the oil price is greater than its equilibrium price, output will increase accordingly, and vice versa. Eq. (3) says that if oil supply exceeds demand, the price will decline. Eq. (4) indicates that when the price goes up or down, demand shrinks or expands accordingly.

After careful calculation, we get the solution of the system:

$$
\tilde{P}(t) = k \sin(\omega t + \phi), \tag{5}
$$

$$
S(t) = S_0 - \frac{ak}{\omega} \cos(\omega t + \phi), \tag{6}
$$

$$
D(t) = D_0 + \frac{ck}{\omega} \cos(\omega t + \phi), \tag{7}
$$

where

- $k = \sqrt{\tilde{c}_1^2 + \tilde{c}_2^2}$,
- $\phi = \arctan(\tilde{c}_1 / \tilde{c}_2)$,
- $\omega = \sqrt{b(a + c)}$, and
- $S_0 = D_0$, $\tilde{c}_1$, and $\tilde{c}_2$ are parameters to be determined.

We are particularly interested in (7). It implies that oil demand is periodic. However, as time passes, the world population is expanding exponentially, and the demand for oil increases accordingly. Therefore, we modify (7) to reflect this intrinsic tendency to increase. We add to the right-hand side of (7) an exponential term $k_{1} \exp\bigl(k_{2}(t - t_{0})\bigr)$, where $k_{1}, k_{2}, t_{0}$ are constants, getting

$$
D(t) = a_1 + a_2 \cos(a_3 t + a_4) + a_5 \exp(a_6 t). \tag{8}
$$

Fitting (8) to the data in Table 1, we get the curve in Figure 1, for the function

$$
D(t) = 31950 + 556.7 \cos(1.605\, t - 3159.659) + 1.239 \times 10^{-16} \exp(0.02366\, t).
$$

![](images/8dc3c1dc77531549884841711cfc5145ec8b4e6365c14b9756e313d362f06699.jpg)
Figure 1. Data and fitted curve for oil demand per day; vertical scale is in $10^{7}$ bpd.
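As a sanity check on the closed form, one can integrate the system (2)-(4) numerically and compare against (5)-(7). The sketch below does this with a hand-rolled fourth-order Runge-Kutta step; the parameter values are arbitrary illustrations, not the fitted constants of the paper.

```python
import math

# Arbitrary illustrative parameters (not fitted values).
a, b, c = 0.5, 0.2, 0.3
k, phi = 1.0, 0.4
D0 = 70.0                       # and S0 = D0
omega = math.sqrt(b * (a + c))  # frequency from the closed-form solution

def closed_form(t):
    """Closed-form (P~, S, D) from (5)-(7)."""
    P = k * math.sin(omega * t + phi)
    S = D0 - (a * k / omega) * math.cos(omega * t + phi)
    D = D0 + (c * k / omega) * math.cos(omega * t + phi)
    return P, S, D

def rhs(state):
    """Right-hand sides of (3), (2), (4) for state (P~, S, D)."""
    P, S, D = state
    return (-b * (S - D), a * P, -c * P)

# RK4 integration from the closed form's initial state, t = 0 to 10.
state, t, h = list(closed_form(0.0)), 0.0, 0.01
for _ in range(1000):
    k1 = rhs(state)
    k2 = rhs([s + 0.5 * h * d for s, d in zip(state, k1)])
    k3 = rhs([s + 0.5 * h * d for s, d in zip(state, k2)])
    k4 = rhs([s + h * d for s, d in zip(state, k3)])
    state = [s + h / 6 * (d1 + 2 * d2 + 2 * d3 + d4)
             for s, d1, d2, d3, d4 in zip(state, k1, k2, k3, k4)]
    t += h

err = max(abs(u - v) for u, v in zip(state, closed_form(t)))
print(err)
```

The numerical trajectory agrees with the closed form to integrator accuracy, confirming that (5)-(7) indeed solve the system.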
With the passage of time, the third term $[a_5 \exp(a_6 t)]$ on the right-hand side of (8) will play a more important role, and the second term $[a_2 \cos(a_3 t + a_4)]$ can be neglected. Thus, for the sake of convenience, we reduce (8) to

$$
D(t) = a_1 + a_5 \exp(a_6 t).
$$

For comparison, we also do a linear fit, plus an unvarying-demand case in which future demand is the same as in 2003. Fitting to Table 1 gives, for $t \geq 2004$:

$$
\text{Exponential fit:} \quad D(t) = 29820 + 2.265 \times 10^{-15} \exp(0.02223\, t),
$$

$$
\text{Linear fit:} \quad D(t) = 771.2\, t - 1.467 \times 10^{6}.
$$

The predicted demand is shown in Figure 2 as average daily demand.

All oil on Earth will be used up by 2032 and 2033 according to the exponential and linear fits, respectively, and by 2037 if future demand remains at the level of 2003.

With an increase in oil demand, its price will accordingly rise. As shown by the broken lines in Figure 2, the rising price will lead to a decline in demand; we discuss this phenomenon in detail later.

![](images/7e54d26964a403f1a71f9c807fd900e203f1be2eba84b2133ee9f858e16bc1ae.jpg)
Figure 2. Estimated future oil demand, for several scenarios; vertical scale is in $10^{7}$ bbl/d.

# Sensitivity Analysis

It is difficult to get an accurate value for $U(t)$, the undiscovered oil on Earth in year $t$. But we can vary the estimate to see whether the variation greatly changes $n$. Varying $U(1997)$ by $\pm 10\%$, for each demand model, the change in $n$ is less than one year.

# Task 2: Other Factors

We modify the exponential model to include other factors.

# Assumptions

- Annual demand for oil reflects annual consumption.
- We do not take into account interactions between factors.
- We ignore small fluctuations in the future consumption of oil.

# Economic Factors

We use GDP as the measure of the economy.
Table 2 shows recent data for world total GDP and the corresponding oil consumption.

The correlation between world GDP and oil consumption is .9930, with linear regression equation

$$
\text{consumption} = 1183 \times \mathrm{GDP} + 38140. \tag{9}
$$

Table 2. World total GDP ($10^{12}$ US dollars) and world oil consumption ($10^3$ bpd), 1995-2003.
| Year | 1995 | 1996 | 1997 | 1998 | 1999 | 2000 | 2001 | 2002 | 2003 |
|------|------|------|------|------|------|------|------|------|------|
| GDP | 27.134 | 28.247 | 29.433 | 30.257 | 31.377 | 32.85 | 33.64 | 34.6487 | 36 |
| Oil | 69,955 | 71,522 | 73,292 | 73,932 | 75,826 | 76,954 | 78,105 | 78,439 | 79,813 |
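The correlation and the regression (9) can be recomputed from the Table 2 values as transcribed above; a plain least-squares sketch follows (the coefficients may differ slightly from the paper's, depending on the exact GDP figures used).

```python
import math

# GDP (10^12 $) and oil consumption (10^3 bpd), 1995-2003, from Table 2.
gdp = [27.134, 28.247, 29.433, 30.257, 31.377, 32.85, 33.64, 34.6487, 36.0]
oil = [69955, 71522, 73292, 73932, 75826, 76954, 78105, 78439, 79813]

n = len(gdp)
mx, my = sum(gdp) / n, sum(oil) / n
sxy = sum((x - mx) * (y - my) for x, y in zip(gdp, oil))
sxx = sum((x - mx) ** 2 for x in gdp)
syy = sum((y - my) ** 2 for y in oil)

slope = sxy / sxx               # compare with 1183 in (9)
intercept = my - slope * mx     # compare with 38140 in (9)
r = sxy / math.sqrt(sxx * syy)  # compare with the reported .9930
print(round(slope), round(intercept), round(r, 4))
```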
Using (9), we predict cumulative consumption. We take 2001 as the starting point, when the total remaining oil (undiscovered plus known reserves) was $U(2001) + R(2001) = 1.1178 \times 10^{12}$ bbl. We calculate the time to oil exhaustion with GDP growing at $10\%$, $5\%$, $3\%$, and $1\%$ (Figure 3).

![](images/9ed83e2eb4f7f7ba375d4771bfee62d1f40253974d5584ed7a9734aacdc95133.jpg)
Figure 3. Cumulative oil consumption under some rates of GDP growth; vertical scale is in $10^{12}$ bbl.

The horizontal line denotes the total remaining oil in 2001. The $x$-coordinate of the intersection of the horizontal line and a curve denotes the oil exhaustion time. The faster the GDP growth rate, the larger the oil consumption and the sooner the time of exhaustion. For $10\%$ growth, oil will be depleted in 2020; for $5\%$, in 2026; for $3\%$, in 2029; and for $1\%$, oil will be used up in 2035.

# Demographic Influence

We resort to a logistic model to predict the world population $x(t)$:

$$
x(t) = \frac{k}{1 + \left(\frac{k}{x_0} - 1\right) e^{-r(t - t_0)}},
$$

where

- $t$ is time, with initial time $t_0 = 1980$;
- $x(t)$ is the population, in billions of people, with $x_0 = x(1980) = 4.4585$;
- $k$ is the carrying capacity—the maximum population that the Earth can accommodate—in billions of people, and we take $k = 10$; and
- $r$ is the intrinsic growth rate of the population, determined from data.

We use population data from 1980, 1990, and 2000 to fit the equation and get

$$
x(t) = \frac{10}{1 + \left(\frac{10}{4.4585} - 1\right) e^{-0.0327(t - 1980)}}. \tag{10}
$$

Using (10), we predict the future population, as shown in Table 3.

Table 3. World population (billions) estimated from the logistic model.
| Year | 1980 | 1990 | 2000 | 2010 | 2020 | 2030 | 2040 | 2050 | 2060 |
|------|------|------|------|------|------|------|------|------|------|
| Population | 4.4585 | 5.2736 | 6.0744 | 6.8212 | 7.4849 | 8.0495 | 8.5126 | 8.8811 | 9.1560 |
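The fitted logistic (10) is easy to evaluate directly; the short sketch below reproduces the Table 3 predictions from the stated constants.

```python
import math

# Evaluate the fitted logistic model (10): world population in billions,
# with carrying capacity k = 10, x(1980) = 4.4585, growth rate r = 0.0327.
def population(t, k=10.0, x0=4.4585, r=0.0327, t0=1980):
    return k / (1 + (k / x0 - 1) * math.exp(-r * (t - t0)))

# Reproduce the decade-by-decade predictions of Table 3.
for year in range(1980, 2070, 10):
    print(year, round(population(year), 4))
```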
Similarly, we obtain a relationship between consumption and total population. The correlation coefficient is .9877, with linear regression

$$
\text{consumption} = 1443 \times \text{population} - 11170. \tag{11}
$$

The time of oil exhaustion, based on the logistic growth of the population, is 2033.

# Political Influence

Here, we mainly discuss the influence of wars. An exponential fit to historical consumption gives

$$
D(t) = 9.14 \times 10^{-11} \times e^{0.01718\, t}.
$$

The annual rate of growth of consumption is

$$
r = \frac{D(t + 1)}{D(t)} - 1 = e^{0.01718} - 1 = 1.73\%.
$$

Figure 4 shows the annual growth rate for oil consumption during the past decades, plus a horizontal line at $r$. The growth rate declined sharply in 1974, 1980, and 1990, coinciding with the fourth Middle East War (1973), the Iran-Iraq War (1980), and the Gulf War (1990), all in the Middle East, the center of oil production. So wars strongly affect the price of oil, and consequently demand.

![](images/3a5440781a0692117e35e603e465240aced3b57b7cbedbc46a3e67f6c6873e42.jpg)
Figure 4. Historical growth rate of oil consumption.

# Environmental Influence

The use of oil inevitably leads to environmental pollution. To protect the environment against excessive pollution, governments can adopt measures to limit the use of oil, thus curbing oil demand. We take the amount of carbon dioxide discharged by oil consumption as the scale by which to measure environmental pollution; world data are shown in Table 4.

Table 4. World carbon dioxide emissions from the consumption of oil ($10^6$ metric tons).
| Year | 1993 | 1994 | 1995 | 1996 | 1997 | 1998 | 1999 | 2000 | 2001 |
|------|------|------|------|------|------|------|------|------|------|
| CO₂ | 9,220 | 9,284 | 9,388 | 9,586 | 9,691 | 9,766 | 9,939 | 10,138 | 10,292 |
The correlation between oil consumption and carbon dioxide emission from oil consumption is 0.9937, with the regression

$$
\text{consumption} = 10.09 \times (\mathrm{CO}_2 \text{ from oil}) - 25320. \tag{12}
$$

Using (12), we determine the amount of consumption to allow under different controlled annual emission growth rates. Figure 5 shows the results and the corresponding dates for exhaustion of oil, for emission growth rates of $1\%$ and $3\%$.

![](images/461db9dc971a737439867dcee7ab9ce176fef26b51b21e4cd7137e45f414be5c.jpg)
Figure 5. Cumulative oil consumption under various rates of $\mathrm{CO}_2$ emission growth.

# Limitations

The above models are based on the assumption that all other factors are fixed when modeling a specific factor. This cannot be true in reality, because one factor may interact with others. Thus, interactions should be taken into account in further study.

# Task 3: Sustainable Use

To prevent excessive consumption and rapid depletion, and to take into account our offspring's interests, we should allocate consumption rationally between generations.

# Assumptions

- Annual demand for oil truly reflects oil consumption.
- Oil consumption in year $t$ cannot be far less than that in year $t - 1$.
- We must provide a rational consumption allocation between generations.
- A generation consists of $n$ years.

# Allocation of Oil Between Generations

The total remaining oil in year $t$ is $U(t) + R(t)$, so

$$
U(t) + R(t) = m_1 + m_2,
$$

where $m_1$ is the amount of oil for people today and $m_2$ is the amount of oil left for our offspring. We define the degree of rational consumption allocation for oil as

$$
\eta = \frac{m_1}{m_2} \times 100\% \quad (0 \leq \eta \leq \infty).
$$

If the value of $\eta$ is too low, the amount of oil for contemporary human beings is too small to meet their needs; if it is too high, too little oil is left for future generations.
# Modeling the Rational Consumption Allocation

We expect that future oil demand will not undergo a sharp decline, and we want oil to be allocated fairly among generations. Meanwhile, we want the resource to be used in the most efficient way.

We model an interval of $n$ years, i.e., one generation. We have the following linear programming problem:

$$
\max \sum_{i = 1}^{n} c_i d_i \tag{13}
$$

such that

$$
\frac{r}{\sum_{i = 1}^{n} d_i} \geq \eta',
$$

$$
d_i \geq \alpha d_{i - 1}, \quad i = 1, 2, \dots, n,
$$

$$
d_i \geq 0, \quad i = 1, 2, \dots, n,
$$

where

- $c_i$ is the utilization rate of oil (crude oil available divided by refinery capacity) in year $i$;
- $\eta'$ is the degree of rational consumption allocation of oil between generations;
- $d_i$ is oil consumption in year $i$ (with $d_0$ the oil consumption at the initial time);
- $r$ is the total remaining oil in the first year of the generation; and
- $\alpha$ is a set percentage such that the oil consumption in a given year must not be less than $\alpha$ times the consumption in the previous year, with $\alpha$ close to 1.

The objective is maximum utilization over the $n$ years. The first constraint assures a rate of allocation between generations, while the second assures that oil consumption in year $i$ is not less than $\alpha$ times the consumption in year $i - 1$. When $\eta'$, $c_i$, $r$, and $\alpha$ are given, we can obtain the optimal consumption allocation of oil over $n$ years by solving the linear programming problem (13). As for estimating $c_i$, we believe that the utilization rate should increase as time passes but should always be smaller than 1. Thus, we should have

$$
c_i = 1 - a_1 e^{a_2 i}, \tag{14}
$$

where $a_1$ and $a_2$ are constants determined by fitting historical data.

We give an illustration.
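An LP of this shape can be handed to any linear-programming routine. The sketch below uses SciPy's HiGHS-backed `linprog`; every numerical value (the horizon, $\alpha$, $\eta'$, $r$, $d_0$, and the constants in the assumed utilization-rate curve) is an illustrative guess, not a fitted quantity from the paper.

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative parameters (assumptions), in units of 10^9 bbl/yr:
n = 20            # years in one generation
alpha = 1.0       # consumption may not drop below the previous year
eta = 1.67        # allocation degree eta'
r = 1000.0        # total remaining oil at the start (10^9 bbl)
d0 = 29.13        # base-year consumption (10^9 bbl)
i = np.arange(1, n + 1)
c = 1 - 0.3 * np.exp(-0.05 * i)   # assumed increasing utilization, c_i < 1

# Variables d_1..d_n; linprog minimizes, so negate the objective (13).
A_ub = [np.ones(n)]               # sum d_i <= r / eta'
b_ub = [r / eta]
for j in range(1, n):             # alpha*d_j - d_{j+1} <= 0
    row = np.zeros(n)
    row[j - 1], row[j] = alpha, -1.0
    A_ub.append(row)
    b_ub.append(0.0)
bounds = [(alpha * d0, None)] + [(0, None)] * (n - 1)  # d_1 >= alpha*d_0

res = linprog(-c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=bounds, method="highs")
d = res.x
```

With these numbers the optimum holds consumption flat at $d_0$ and pushes the remaining budget into the final, highest-utilization year, reproducing the late jump the paper observes in its illustration.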
We set $n = 20$, $\alpha = 1$, $\eta' = 1.67$, $r = 1.0 \times 10^{12}$, and year 2004 as the base year, so that $d_0$ is oil consumption in 2004. Figure 6 shows oil consumption under optimal allocation from 2005 on. Oil consumption under optimal allocation is far less than under exponential growth. The optimal consumption varies smoothly until the late phase of the prediction, when it jumps sharply. This may be because we chose an inappropriate $\eta'$-value. However, choosing the $\eta'$-value is rather difficult, because it should incorporate many factors, such as population, price, and the specific economic environment.

# Implementation

- We could levy a relatively heavy tax on oil compared to other resources.
- We could encourage the development of alternatives to oil.

# Task 4: The "Security" Policy for Oil

We believe that the problem of oil security arises mainly from the different utilization rates among countries. If a country with low oil utilization is assigned a surplus of oil while a country with high utilization gets an insufficient amount, there will be great waste. We establish a model to find the optimal distribution of oil among countries with different utilization rates.

![](images/2201dae3112b774f0d5331196f3ae09e3626c274e3a5b35d71c51137b9216106.jpg)
Figure 6. Oil consumption under optimal allocation (dotted line), compared with exponential growth (solid line).

# Assumptions

- The annual oil consumption of the whole world follows the optimal oil allocation model of Task 3.
- We do not take trade barriers into account, and hence assume that reallocation of oil among countries is feasible.

# Modeling

Suppose there are $n$ main oil-consuming countries in the world.
Given the year $t$, we can establish the following linear program:

$$
\max \sum_{i = 1}^{n} l_i(t)\, x_i(t) \tag{15}
$$

such that

$$
\begin{array}{l}
\sum_{i = 1}^{n} x_i(t) = d(t), \\
x_i(t) \geq \alpha_i(t)\, x_i(t - 1), \quad i = 1, 2, \dots, n, \\
x_i(t) \geq 0, \quad i = 1, 2, \dots, n, \\
\end{array}
$$

where

- $l_i(t)$ is the oil utilization rate of country $i$ in year $t$;
- $x_i(t)$ is the oil use of country $i$ in year $t$;
- $d(t)$ is the worldwide oil consumption allocation in year $t$, which can be obtained using the model in Task 3; and
- $\alpha_i(t)$ is the minimum ratio of $x_i(t)$ to $x_i(t - 1)$, as a percentage.

The first constraint means that total oil consumption by all countries in a given year should equal total oil consumption under the optimal oil allocation. The second constraint means that oil consumption of a country in a particular year is not less than a certain proportion of the previous year's consumption.

# Limitations of the Model

In reality, countries would more likely consider their own interests, leading to trade barriers and making it impossible to attain the optimal distribution.

# Implementation

For countries with low utilization rates, we could levy a relatively heavy tax on oil or set a limit on annual oil consumption.

# Task 5: Natural Disasters

An oil field occupies a large area, destroys vegetation in the vicinity, changes the composition of the soil, and degrades the nearby environment, to the detriment of animals' habitat. Exploitation of the field influences the groundwater and causes desertification. And then there are oil spills, which often lead to the pollution of nearby waters. However, we mainly consider short-term effects of natural disasters on oil exploitation.

# Assumptions

- All oil produced is consumed.
- The total amount of oil produced meets the needs of economic development.
# Short-term Effects

Let $n$ be the number of oil fields on Earth. For sustainable development of the economy, we must keep the total output of all oil fields the same as worldwide oil consumption under the optimal allocation of Task 3. Thus, we have

$$
\sum_{i = 1}^{n} x_i(t) = d(t),
$$

where $x_i(t)$ is the output of oil field $i$ in year $t$, and $d(t)$ is the worldwide oil consumption under the optimal oil allocation of Task 3.

We believe that the susceptibility of an oil field to disasters is a function of its output in a given year and the ratio of its cumulative output to its initial oil reserve. Naturally, the more oil produced, the greater the likelihood of disasters; we believe that this relationship is linear. Of course, a new oil field and an old one will have different effects on the environment; this difference is captured by the ratio of cumulative output to initial oil reserve. We introduce a penalty function $e^{-a[1 - \lambda_i(t)]}$ and set

$$
p_i(t) = k_i\, x_i(t)\, e^{-a[1 - \lambda_i(t)]},
$$

where

- $p_i$ is the susceptibility to disasters;
- $k_i$ is a proportionality coefficient, determined by the age and exploitation method of the oil field—a small value of $k_i$ represents a young field with an advanced exploitation method; and
- $\lambda_i(t)$ is the ratio for oil field $i$ of its cumulative output through year $t$ to its initial oil reserve.

We wish to minimize the total susceptibility of the oil fields under the condition that worldwide oil demand is satisfied. That is:

$$
\min \sum_{i = 1}^{n} p_i(t) \quad \text{such that} \quad \sum_{i = 1}^{n} x_i(t) = d(t).
$$

In the solution, oil fields with small $k_i$ tend to have larger outputs, and vice versa. We also tentatively increase the value of $n$, i.e., increase the number of oil fields, and find that the total susceptibility decreases.
This is mainly because, during the early exploitation of an oil field, the penalty function damps down the risk of disaster, thus favoring the development of new oil fields and decreasing the likelihood of disasters.

# Implementation

We increase the output of oil fields with small $k_i$ (young fields with advanced methods of extraction) and reduce the output of those with large $k_i$ (older fields with obsolete methods of extraction). Also, if possible, we should exploit as many new oil fields as possible and decrease the exploitation of old oil fields, so as to control the susceptibility to disaster.

# Task 6: The Development of Alternatives

Even if we control the use of oil, its use can be prolonged only 4-5 years beyond an exhaustion horizon of 30 years. We urgently need an alternative to oil. For the sake of sustainable development, we must gradually accelerate consumption of the alternative as oil is depleted. The question is how to develop the alternative so as to keep the economy stable during the transition period.

# Assumptions

- We consider a single kind of alternative.
- Oil and its alternative are interchangeable as energy resources.
- The measure of oil and its alternative is their contribution to GDP.
- The quantity of oil needed to produce a unit of energy does not change with time.

# Analysis

Let the cost for oil to produce a unit of energy be $C_1$, and that of its alternative be $C_2$, with $C_1 \leq C_2$ (the cost for oil to produce a unit of energy is the lowest among resources [Vernon 1976]). But as the total amount of remaining oil declines, the price of oil will correspondingly increase. On the other hand, with advances in the technology for the alternative, its price will fall. The general tendency is shown in Figure 7.

With rising cost, oil consumption will decrease, while demand for the alternative will increase, until the day when oil is completely replaced.
The question is, how fast should oil be replaced?

# Modeling

From our model for optimal oil distribution among countries, we concluded that consumption in future years will increase slowly. Let $G$ be the value of GDP, $x$ the consumption of oil, $y$ the consumption of the alternative, and $t$ time. Let $G = G(x, y)$, so that

$$
\frac{dG}{dt} = \frac{\partial G}{\partial x} \frac{dx}{dt} + \frac{\partial G}{\partial y} \frac{dy}{dt}. \tag{16}
$$

![](images/106afa0fc4c91183d876c529f2f9b60ef1c7371b1894b639a7f2e6821efcd44c.jpg)
Figure 7. Trend of cost of oil (lower curve) and of its alternative (upper curve), per unit of energy.

We think that a smooth exponential decline in the use of oil is reasonable, so we let

$$
x(t) = x(t_0)\, e^{-b(t - t_0)} \quad (t > t_0,\ b > 0),
$$

so that

$$
\frac{dx}{dt} = -x(t_0)\, b\, e^{-b(t - t_0)}, \tag{17}
$$

where $t_0$ is the year when oil demand begins to decline. Substituting (17) into (16) and solving for $dy/dt$, we get

$$
\frac{dy}{dt} = \frac{\frac{dG}{dt} + x(t_0)\, b\, e^{-b(t - t_0)} \frac{\partial G}{\partial x}}{\frac{\partial G}{\partial y}},
$$

where

- $dy/dt$ is the replacement rate,
- $\partial G / \partial x$ is the contribution rate of oil to GDP, and
- $\partial G / \partial y$ is the contribution rate of the alternative to GDP.

Knowing the other quantities, we can determine $dy/dt$ and hence the consumption of the alternative needed to guarantee a stable economy.

We choose 2010 as $t_0$ and simulate the consumption of oil and the alternative. The result is shown in Figure 8.

# Potential Oil Substitutes

Potential oil substitutes include solar energy, wind power, geothermal energy, hydroelectricity, and tidal power, as well as compressed natural gas, biofuels (biodiesel and ethanol), and gas hydrates.
![](images/fc5bf4e90f126a0d8717eb3fa883af16a194d4785d54a21c456e27e2af2c7dac.jpg)
Figure 8. Consumption of oil (curve from upper left) and of its alternative (starting in 2010).

Biofuels are produced from agricultural crops that assimilate carbon dioxide from the atmosphere. The carbon dioxide released this year from burning these fuels will, in effect, be recaptured next year by crops grown to produce more biofuel. Also, biofuels contain no sulfur, aromatic hydrocarbons, or metals. The absence of sulfur means a reduction in acid rain; the lack of carcinogenic aromatics (benzene, toluene, xylene) means reduced impact on human health.

Gas hydrate is an ice-like crystalline solid; its basic unit is a gas molecule surrounded by a cage of water molecules. Gas hydrates are found in suboceanic sediments in the polar regions (shallow water) and in continental-slope sediments (deep water).

Although these substitutes may provide some relief from the oil crisis, whether any of them—or all together—can solve the problem completely is unknown.

# Conclusion

Considering the concurrent problems of population size and the adjustment of economies and lifestyles, the challenge of conversion to alternative energy resources is both urgent and formidable. A realistic appraisal should encourage people to prepare for the future. Delay in dealing with the issues will surely result in unpleasant surprises. Let us get on with the task of moving in an orderly way into the post-petroleum paradigm.

# References

Campbell, C.J. 1997. The Coming Oil Crisis. Multi-Science Publishing Co.

Energy Information Administration, U.S. Dept. of Energy. 2004. Table 4.6. World oil demand, 1970-2003. http://www.eia.doe.gov/emeu/ipsr/t24.xls .

Keeling, C.D., T.P. Whorf, and the Carbon Dioxide Research Group. 2002. Atmospheric CO2 concentrations (ppmv) derived from in situ air samples collected at Mauna Loa Observatory, Hawaii. http://serc.carleton.edu/files/introgeo/interactive/examples/mlco2.doc .
Ramirez, Vincent. 1999. Oil crises delay—A world price forecast. http://www.betterworld.com/getreallist/article.php?story=20040214014558571 .

Vernon, Raymond. 1976. The Oil Crisis. Toronto, Canada: George J. McLeod Limited.

![](images/1881f72fb887c19411e8cb718a40db40d1d840870bf4e9b4d1f91f3155872e54.jpg)

From left to right: Chen Jie, Xu Hui, advisor Ni Zhongxin, and Wei Deling.

# Preventing the Hydrocalypse: A Model for Predicting and Managing Worldwide Water Resources

Steven Krumholz

Frances Haugen

Daniel Lindquist

Franklin W. Olin College of Engineering

Needham, MA

Advisor: Burt Tilley

# Summary

We examine and model trends in water withdrawal throughout the world and develop plans to prevent using water beyond its renewable capacity.

We look at the three major components of water consumption: agricultural, industrial, and municipal uses. We formulate a differential model to account for the rates of change of these uses and how the changes affect the overall consumption of water within the studied region. We also incorporate feedback based on economic and political stimuli that force a decrease in water usage as it approaches dangerous consumption levels.

Using historical data from the United States, we determine initial conditions for our model and project U.S. water usage to the year 2025. The model simulates how a country could react to water scarcity without drawing from nonrenewable water sources.

In addition to the model, we also discuss policies for effective water management that reduce freshwater usage and prevent tapping into nonrenewable resources. By predicting problem areas and suggesting methods of improving water usage in those areas, we can hope to prevent the "hydrocalypse."

# Introduction

Two-thirds of the Earth is covered in water, but only $2.5\%$ of it is freshwater.
Fortunately, each year some seawater is naturally desalinated by evaporating from the ocean and precipitating on continents and islands. Globally, this supply of freshwater from rain is plentiful—humanity does not use it all, and most of it simply washes back into the oceans—but locally, water can be very scarce. Some regions use vastly more water than is naturally supplied to them each year. To make up for this deficit, fossil water sources are tapped and exploited. Many communities are walking a dangerous road, as they may exhaust their fossil water sources within the next 20 to 50 years.

# The Basic Model

We model past data for water use with linear and logistic functions. The two models correspond to continuing withdrawal vs. eventual plateauing, due to factors such as availability, population growth, and arable land. A more complex model follows that considers these factors, as well as possible political and economic influences.

For simplicity, we model net water withdrawal in the United States.

Table 1 shows data for net water withdrawal of the United States from 1950 to 2000 [Shiklomanov 1999].

Table 1. U.S. water withdrawal 1950-2000.
| Year | Water withdrawal (km³/yr) |
|------|---------------------------|
| 1950 | 247 |
| 1960 | 347 |
| 1970 | 470 |
| 1980 | 538 |
| 1990 | 492 |
| 1995 | 503 |
| 2000 | 512 |
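The two trend curves in Figures 1 and 2 can be reproduced directly from Table 1. The sketch below is ours, not the authors' code; the three-parameter logistic form and its starting guesses are assumptions, and both fits use ordinary least squares:

```python
# Fit linear and logistic trends to the Table 1 withdrawal data
# and extrapolate to 2050, as in Figures 1-2 (illustrative sketch).
import numpy as np
from scipy.optimize import curve_fit

t = np.array([50, 60, 70, 80, 90, 95, 100])        # years after 1900
W = np.array([247, 347, 470, 538, 492, 503, 512])  # withdrawal, km^3/yr

# Linear trend, extrapolated to 2050 (t = 150).
slope, intercept = np.polyfit(t, W, 1)
linear_2050 = slope * 150 + intercept

# Logistic trend W(t) = K / (1 + exp(-r (t - t0))); p0 is a guess.
logistic = lambda t, K, r, t0: K / (1 + np.exp(-r * (t - t0)))
(K, r, t0), _ = curve_fit(logistic, t, W, p0=[550, 0.1, 65], maxfev=10000)
logistic_2050 = logistic(150, K, r, t0)

print(f"linear 2050: {linear_2050:.0f} km^3/yr; "
      f"logistic 2050: {logistic_2050:.0f} km^3/yr (plateau K = {K:.0f})")
```

The gap between the two extrapolations is the point of Figures 1 and 2: the linear fit keeps climbing, while the logistic fit levels off near the 1990s plateau.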
The models show trends but have two major limitations.

- Most importantly, they fail to incorporate any external factors, such as population and technological growth, as well as economic and political influences. The models are likely to fail if a country's water withdrawal rates approach the amount of renewable water available, as increased prices and political regulation drive down the amount of additional water consumed.
- The models assume that the area modeled has a stable enough past for the trend to predict the future accurately. Even for the U.S., there is enough variability in the data that we cannot convincingly extrapolate.

![](images/b422eeb08fc359fdc00b7bc4baeebcd436a3ca7e84d57d2f20e7fb05e248c554.jpg)
Figure 1. Linear regression of U.S. water withdrawal 1950-2000, extrapolated to 2050.

![](images/587148fc93c17e04ffaae7f97e36edddb5094101fa198b50ea321b22433f4ff8.jpg)
Figure 2. Logistic fit of U.S. water withdrawal 1950-2000, extrapolated to 2050.

# A Better Model

While a simple data fit may indicate trends, a better model should take into account how the water is used, how the use of water is changing over time, and what other influences affect the use of water. The three major categories of human water use are agricultural, industrial, and municipal. Each has its own trends in growth and water use and can be affected differently by the economy or by political influences. Thus, it is important to consider them separately.

Table 2 shows the variables that we use, their definitions, and measurement units.

Table 2. Variables used in the model.
| Symbol | Definition | Unit |
|--------|------------|------|
| $R$ | Renewable water | km³/yr |
| $C$ | Water withdrawn | km³/yr |
| $A$ | Size of agriculture | 10³ ha |
| $I$ | Size of industry (GDP in 1990) | \$US/person |
| $M$ | Size of the municipality | 10³ people |
| $C_A$ | Water withdrawn by agriculture | km³/yr |
| $C_I$ | Water withdrawn by industry | km³/yr |
| $C_M$ | Water withdrawn by the municipality | km³/yr |
| $t$ | Time | yr |
# Agriculture

We discuss the influence on agriculture and extrapolate our conclusions to industry and municipality. As with the simple model, we use the U.S. as an example.

First, we consider the rate at which agriculture changes. We quantify agriculture as the net irrigation area within the region. We employ a logistic model, since there is a fixed amount of arable land available, and as land use approaches that limit, the net increase in agriculture will tend towards zero. We arrive at the logistic model by linear regression of the change in irrigation area on the logarithm of time. Using data for the U.S. 1970-1995 [Shiklomanov 1999], we find

$$
\frac{dA}{dt} = 15326 \ln t - 49039 \qquad (R^2 = 0.889).
$$

To calculate the additional demand for water that increased agriculture will place on the U.S., we multiply current water consumption due to agriculture, $C_A$, by the rate of change of agriculture, and normalize by dividing by the current amount of agriculture:

$$
\frac{dC_A}{dt} = \frac{dA}{dt} \frac{C_A}{A}. \tag{1}
$$

Finally, we must adjust this for political and economic factors.

- Consider the case when net consumption of water (from all three categories) does not approach the amount of renewable resource. There should be little, if any, political or economic inhibition of water use.
- Now consider the case when water use approaches available resources. In this scenario, the price of water will increase, and the government will likely place restrictions on each of the three sectors to help keep water consumption within the limits of the resource.
- Finally, consider the scenario when consumption exceeds the resource. Ideally, in this scenario, political and economic factors will actively drive the use of water down over time, decreasing net use and returning the region to a stable state.
This set of circumstances can be modeled by factoring the following coefficient into (1):

$$
\text{Political influence} = \left(1 - \frac{C}{R}\right)^{P_A},
$$

where $R$ is the amount of renewable water, $C$ is the amount of water withdrawn, and the parameter $P_A$ is the economic and political influence on the agricultural sector of the region. This parameter can easily be scaled to simulate the economy's and government's response to changes in environmental factors, such as a drought. When $C > R$, the influence on water usage is negative (the government inhibits further water usage), and when $C < R$, it is positive (the government encourages further water usage).

Combining these two equations, the change in water consumption over time due to agriculture is

$$
\frac{dC_A}{dt} = \left(1 - \frac{C}{R}\right)^{P_A} \frac{dA}{dt} \frac{C_A}{A}. \tag{2}
$$

# Industry

The reasoning for the industrial use of water falls along lines similar to agriculture. We quantify industry as the region's Gross Domestic Product (GDP) per person. Working from GDP data from the Groningen Growth & Development Centre [2005] and population data from UNESCO [Shiklomanov 1999] for 1970-1995, we fit a linear regression of GDP/person for the U.S. to time, since industry tends to grow at a steady rate, despite the nonlinear dynamics of economy and population. We find

$$
\frac{dI}{dt} = 392.99\,t - 12929 \qquad (R^2 = 0.998).
$$

Since the rate of change of water consumption by industry is likely to follow the same trends as agriculture, just with a different power of the political and economic scaling factor, we use the same differential equation, replacing $P_A$ with $P_I$. Thus:

$$
\frac{dC_I}{dt} = \left(1 - \frac{C}{R}\right)^{P_I} \frac{dI}{dt} \frac{C_I}{I}. \tag{3}
$$

# Municipality

Municipal water consumption is probably the easiest of the three to model; population is the best indicator of the size of a municipality. Further, population growth tends to be logistic—it will plateau at a certain level. Fitting a logistic model to U.S. population 1970-1995 gives

$$
\frac{dM}{dt} = 194163 \ln t - 616608 \qquad (R^2 = 0.991).
$$

Similarly, only the power of the political and economic scaling factor differentiates consumption of the municipality from that of the other two sectors, so

$$
\frac{dC_M}{dt} = \left(1 - \frac{C}{R}\right)^{P_M} \frac{dM}{dt} \frac{C_M}{M}. \tag{4}
$$

# Bringing it Together

We combine the rates of change of water consumption for the three sectors into one governing equation. Since total consumption is the sum of the consumption of these sectors, the rate of change of total consumption is simply the sum of the rates of change of the three sectors, equations (2), (3), and (4). Therefore, our final governing equation is

$$
\frac{dC}{dt} = \frac{dC_A}{dt} + \frac{dC_I}{dt} + \frac{dC_M}{dt}. \tag{5}
$$

# Derivation of the Values of the Parameters $P$

We use initial conditions to identify the values of the parameters $P$ (powers of the political influence). We rearrange (2) to get

$$
P_A = \frac{\ln\left(\dfrac{dC_A/dt}{(dA/dt)\,(C_A/A)}\right)}{\ln\left(1 - \dfrac{C}{R}\right)}.
$$

Using data for the U.S. 1990-1995 [Shiklomanov 1999], we solve for the country's three political and economic constants. We find $dC_A/dt$ by calculating the change in water consumption due to agriculture over the five years; $C$ and $R$ are also known for the U.S. in 1990 ($C = 492~\mathrm{km}^3/\mathrm{yr}$, $R = 2930~\mathrm{km}^3/\mathrm{yr}$); $dA/dt$ can be found from the logistic fit for agriculture; and $C_A$ and $A$ are both known for 1990 [Shiklomanov 1999]. Plugging in these values (and their counterparts for industry and municipality), we find

$$
P_A = 1.052, \qquad P_I = 7.36, \qquad P_M = 3.45.
$$

# Running the Model

To solve the differential equations in our model, we apply the ODE45 numerical integrator in MATLAB to (5), with the results in Figure 3. Agricultural, industrial, and municipal withdrawal rates each increase steadily, as does the total withdrawal rate. The economic and political scaling factor makes virtually no attempt to curb the increasing water usage, since the U.S. has a significant surplus of renewable water sources.

![](images/7603508ff11547d5480f4d87c1bd68e20e663e11e281453fa97950a10f07899b.jpg)
Figure 3. Projected water usage in the U.S. 1990-2050.

The second example, seen in Figure 4, is a simulation of a water-stressed country that is currently stable but approaching dangerously low levels of renewable water resources. As total withdrawal approaches the total available, the economic and political scaling factor becomes negative and forces a reduction in water use, even though the population is still increasing. This fictional country is similar in agricultural, industrial, and municipal trends to the U.S.; but because it has greatly decreased water renewability, the outlook is particularly bleak.

![](images/756a49cdb7238bf5af769d40b962f7fa40bc789590e1b1fd933abd6d8ce5d652.jpg)
Figure 4. A theoretical country on track for water problems. Initial withdrawal is $360~\mathrm{km}^3/\mathrm{yr}$ with total renewable resources $430~\mathrm{km}^3/\mathrm{yr}$.

In our model, we treat each of the values of the parameters $P$ as a regional constant derived from past data. However, it is much more likely that each value changes dynamically.
The assumption of constant $P$ prevents our model from adapting to radically changing times.

Figure 5 shows how the value of $P_A$ drastically alters the political and economic response to water scarcity. For our fictional country, $P_A$ is 1.05, based on our initial values. Values this small do very little to stymie agricultural withdrawal until it is essentially too late, whereas larger values exhibit a very strong damping effect, keeping agricultural withdrawal low. $P_I$ and $P_M$ affect industrial withdrawal and municipal withdrawal in similar ways.

# Limitations of the Model

- The model will have difficulty adjusting to a drastic change in one of the three sectors that instantaneously throws the region from stable to unstable. This is because we chose our political and economic factors to be a constant property of the region. If these values were adjustable, the model would likely be able to adjust for a catastrophe.

![](images/3d09c110ca660eea0d5966a0a6ab8423af99dd4b458a30e206717350212fbecb.jpg)
Figure 5. Graph illustrating the damping effects of deliberate variation of the $P_A$ constant.

- Since the model relies on trends in population, GDP, and agricultural data, a country with an unstable past might not be able to extrapolate domestic trends accurately enough for the model to be useful.
- We must consider the scope of the region being modeled. More often than not, applying aggregated data for a country to all regions in the country will result in inaccurate assumptions about those regions. For example, even though the U.S. as a whole has an overabundance of water, the Southwest uses more water than is naturally renewed. A smaller region can apply its own historical data to this model to attain a more accurate representation of its current and projected water situation.
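The coupled system (2)-(5) is small enough to integrate directly. The sketch below uses SciPy's RK45 integrator as a stand-in for the MATLAB ODE45 routine the paper uses. Two assumptions are ours, not the paper's: the 1990 split of total U.S. withdrawal across the three sectors, and reading the printed fits as closed-form trends $A(t)$, $I(t)$, $M(t)$ (their values at $t = 90$ match 1990 U.S. figures), with derivatives taken analytically:

```python
# Sketch of the sector model (2)-(5) for the U.S., integrated with
# SciPy's RK45 (analogue of MATLAB's ODE45). Trend curves and P
# exponents are from the paper; the 1990 sector split y0 and the
# closed-form reading of the fits are assumptions for this sketch.
import numpy as np
from scipy.integrate import solve_ivp

R = 2930.0                       # renewable water, km^3/yr (U.S.)
PA, PI, PM = 1.052, 7.36, 3.45   # political/economic exponents

# Trend curves (t = years after 1900), treated as A(t), I(t), M(t).
A = lambda t: 15326 * np.log(t) - 49039     # irrigated area, 10^3 ha
I = lambda t: 392.99 * t - 12929            # GDP per person, $US
M = lambda t: 194163 * np.log(t) - 616608   # population, 10^3 people
dA = lambda t: 15326 / t                    # analytic derivatives
dI = lambda t: 392.99
dM = lambda t: 194163 / t

def rhs(t, y):
    CA, CI, CM = y
    C = CA + CI + CM
    # Signed influence factor keeps the power real when C > R.
    infl = lambda P: np.sign(1 - C / R) * abs(1 - C / R) ** P
    return [infl(PA) * dA(t) * CA / A(t),    # eq. (2)
            infl(PI) * dI(t) * CI / I(t),    # eq. (3)
            infl(PM) * dM(t) * CM / M(t)]    # eq. (4); sum is eq. (5)

# Assumed 1990 split of the 492 km^3/yr total U.S. withdrawal.
y0 = [195.0, 235.0, 62.0]
sol = solve_ivp(rhs, (90, 150), y0, method="RK45", dense_output=True)
print(f"Projected U.S. withdrawal in 2050: {sol.y.sum(axis=0)[-1]:.0f} km^3/yr")
```

Because the U.S. starts far below $R$, the influence factors stay near 1 and withdrawal climbs steadily, consistent with Figure 3; rerunning with a small $R$ reproduces the forced leveling-off of Figure 4.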
# New Approaches to Water Harvesting

Only $2.5\%$ of the Earth's water is freshwater, and less than $1\%$ of that amount is renewed each year by natural means [Sayegh 2004]. Growing communities, when faced with the need for freshwater, rely more and more heavily upon fossil water sources, aquifers, and wells to remedy their water deficits. While these sources can naturally refill over long periods of time, people are drawing from them at a rate that is far too fast for the sources to regenerate.

Localities must change how they conceptualize water acquisition. In the status quo, too much emphasis is placed on finding the cheapest water source to fuel economic growth. Instead, communities should focus on strategies to use naturally renewable water sources efficiently and prefer those over nonrenewable sources.

# Individual Responsibility

To reduce the strain on centralized water acquisition and distribution systems, the responsibility for water harvesting must first be shifted from the community to the individual. Domestic rainwater harvesting systems would provide a feasible alternative to the inefficiencies of centralized water systems or well-drilling, by allowing individual households to supply a substantial fraction of their water needs. Furthermore, implementing such rain-harvesting technologies is not as far-fetched as one might think. Modern rainwater-harvesting systems range in complexity from inexpensive rain barrels to contractor-designed and -installed systems costing thousands of dollars. By using locally harvested water, individuals can mitigate water's growing environmental and economic costs and avoid health concerns regarding its source and treatment [Texas Water Development Board 1997]. In addition, by collecting their own water, citizens can further appreciate the efforts that are necessary to harvest water, and hopefully be more willing to embrace the concept of water conservation.
Already, island states such as Hawaii and entire continents such as Australia promote rainwater harvesting as the principal means of supplying household water. Throughout the Caribbean, public buildings, private houses, and resorts collect and store rainwater. Rainwater harvesting can even be used in urban areas with high population densities. In Hong Kong, skyscrapers collect and store rainwater to help supply the buildings' water requirements [Jiwarajka 2002]. Municipalities should require the installation of rainwater harvesting devices in all new construction and encourage the retrofitting of older properties through subsidies or tax incentives.

# Municipal Strategies

While residential and commercial rain-harvesting systems will take significant pressure off municipal water grids, they are unlikely to provide a sufficient or reliable source of water year-round. Municipalities should begin positioning their current water supplies as the "fallback" for when individual water harvesting is not sufficient, and as a result charge more for municipal water. Prices should be intentionally set at a premium over the cost of harvesting the water, to encourage people to conserve and to promote improvements in water use efficiency by returning the excess capital to the community in the form of grants.

Similarly, municipalities must also work to improve the efficiency of their water use. The City of New York has a long tradition of investing in the protection and improvement of much of the watershed from which it receives the 1.3 billion gallons of water it needs every day. As a result of this careful planning, the city uses no fossil water, relying solely on its network of 19 reservoirs in a 1,969-square-mile watershed that extends 125 miles north and west of New York City [City of New York ... 2002].

Some cities, however, may not have an abundant supply of renewable water.
When naturally renewable water sources seem to be exhausted, municipalities should prioritize investing in additional technology to account for their shortfall of water, instead of turning to fossil-water pumping. This may be more expensive in the short term; but investing in efficiency programs, improving water purification and desalination techniques, or buying and rerouting water from other regions with abundant natural sources will save in the long run, by lowering dependence on fossil water and preventing exhaustion of nonrenewable water sources.

# Agricultural Rain Harvesting

In agriculture, rainfall can be captured, diverted, and stored for plant use. If fields are plowed so that the plowing contours wrap around the terrain rather than run down inclines, a higher fraction of the water can infiltrate into the ground. This method also reduces water pollution by preventing soil erosion and thus the contamination of usable water downstream. Similar effects can be achieved by precision leveling of fields, eliminating inclines and thus the means for wasteful runoff. Neither of these techniques requires sophisticated machinery; both simply modify current practices. Improving agricultural water efficiency in the United States alone would save over half a cubic kilometer of water per year, enough to satisfy the needs of 3.6 million people [Pacific Institute 2002].

# Last Resort: Fossil Water

Communities and individuals should turn to fossil water as a last resort and should take steps to protect and maintain the aquifers or wells they harvest from. In addition, drawing from these wells should come at an added premium, to further discourage use. Fortunately, many regions with seasonal water scarcities also have a "rainy season." By installing new methods of groundwater management, such as artificial recharge or injection of surface waters during seasonal wet periods, it is possible to extend the life of many freshwater aquifers.
Such practices have already been successful in the U.S. [Slattery 2004].

For an illustration of the ideal progression of water use, see Figure 6.

![](images/698baedf7f121aa1fb290a1ab11ff249789b443013cc525bf4b89b75cc5f7db4.jpg)
Figure 6. Ideal progression of water usage.

# Developing Alternatives to Water

# Salt Water

Though salt water has limited applications in agriculture and industry, due to the corrosive nature of salt, China already uses some salt water to conserve freshwater. A major domestic use of water is flushing toilets (30–35% of domestic water). However, there is little reason that toilets must contain freshwater. Instead, some Asian cities have begun experimenting with parallel plumbing in houses, replacing the freshwater in toilets with seawater. Using seawater in this way not only allows freshwater supplies to be stretched over a longer period of time, it is also more cost-effective. In China, residents pay only 0.5 yuan (U.S. \$0.06) per ton for processed seawater, about one-eighth the price of fresh tap water [A liter here ... 2004].

# Greywater

Greywater—wastewater except toilet wastes and food wastes from garbage grinders—can serve as a substitute in some applications; $30 - 50\%$ of all water used domestically is greywater, most of which can be easily reused in a variety of applications. Industrially, greywater can be used for air conditioning, cooling, general washdown, and street cleaning. Fire protection is another potential use, as are construction activities such as making cement [Emmerson 1998]. Strategies to reuse greywater do not have to change radically how people interact with water. For example, washing industrial parts can be done in stages, starting with greywater and using progressively cleaner water [Accepta 2005]. Greywater, after filtering, can also be used for landscaping or agricultural irrigation.
Though it may require more upfront infrastructure, greywater reuse can ultimately save municipalities money by reducing sewage flows and reducing the demand on potable water supplies [Martin 1997].

# Desalination of Salt Water

Regrettably, desalination of seawater is not the answer to the world's freshwater needs, because it is highly energy-intensive. Since most of the world's energy is generated from fossil fuels, intensive desalination would simply replace one nonrenewable resource (water) with another (coal, oil, gas). However, desalination is still an option for locations that desperately need additional water.

# Regulation of Harvesting

Almost every body of water in the world has been negatively impacted by human water harvesting, whether from over-harvesting, nutrient enrichment, agricultural runoff, or toxic pollution. Mexico's Lake Chapala, the country's largest body of freshwater, lost $75\%$ of its original volume as a result of overharvesting of water from its tributaries, for irrigation and for the water supply of Guadalajara [Living Lakes 2003]. In Central Asia, the freshwater Aral Sea, which lies astride Kazakhstan and Uzbekistan, was once the fourth-largest lake in the world. Today, the lake is only $20\%$ of its original volume as a result of Soviet diversion for agriculture of water from its tributaries. In Africa, Lake Chad, which spanned $25,000~\mathrm{km}^2$ of surface area in 1963, has shrunk to $1,350~\mathrm{km}^2$ today as a result of aggressive expansion of irrigated agricultural projects [Coe 1998]. In each case, local wildlife paid the price of the water diversions as the salinity of the lakes increased and available habitats were destroyed.

Large lakes are natural gauges of water use: Use a lot of water, and they go down; use too much water, and they die.
In Los Angeles, simple water reduction and reclamation measures were implemented to compensate for the water that would have been used from Mono Lake, saving it from annihilation.

Water storage is an important issue in many regions throughout the world. During dry seasons, such localities often experience water shortages, while during the rainy season, usable water is lost to flooding. Aquifer recharging provides a simple and effective way to store water during the plentiful rainy season. Excess rain water is channeled into recharge basins, where it naturally filters through hundreds of feet of earth before entering the groundwater aquifer. In this way, groundwater supplies can be naturally recharged and available for future use during the dry season [City of Peoria ... 2003]. Actively maintaining these natural freshwater storage regions is essential to securing freshwater for much of the world. Underground aquifers store $97\%$ of the world's unfrozen freshwater, and they provide drinking water to almost one-third of the world's people. In Asia alone, more than a billion people rely on groundwater for drinking, and in Europe it is estimated that $65\%$ of public water supplies come from groundwater sources [Ramsar Convention ... 1995].

Another danger of excessive water harvesting is greater susceptibility to droughts. As communities harvest more water from renewable sources, their members grow accustomed to elevated levels of water availability. However, during a drought, the amount of naturally renewable water is much less than typical. If the drought drops renewable water levels below the threshold of water needed by the community or nation, a water crisis may result [Dykstra 1999]. Minimizing water needs (and water harvesting) through conservation or efficiency improvements can insulate communities from droughts, since communities will not have grown accustomed to, or dependent on, unnecessarily generous water use policies.
Beyond basic water dependency, excessive water harvesting also increases vulnerability to droughts by altering the water table and the distribution of water.

# Protecting the World's Water

Unfortunately, water is not as abundant as it may seem—by 2025, the United Nations projects that 1 in 3 people in the world will not have adequate freshwater for life [CNN.com 2000]. Thus, it is imperative that the governments of the world take steps to help prevent degradation of the world's freshwater supply. Such measures can be taken by focusing efforts in three different areas:

- effective international allocation of water,
- building consciousness among those who may not be aware of the need to preserve water, and
- prevention of lost water due to pollution.

# International Allocation

Probably the area of most contention, and the one that requires the most governmental regulation, is allocation of international waters. Over the past 50 years, there have been 1,800 international incidents concerning the use of international freshwater waterways. More than 500 were conflicts, and 21 resulted in military action [Cosgrove 2003, 68-70]. Generally, what causes these disputes is one country monopolizing a waterway, preventing the flow of some water to neighboring countries. Thus, it is crucial that the international community help countries work together to solve disputes over water.

Another way to solve international water allocation disputes is purification technology. One example of technology resolving a conflict is Israel's use of desalination. Of the freshwater for the West Bank of the Palestinian Territory, $80\%$ is owned by Israel. To preempt conflict, Israel (with U.S. help) will construct a large desalination plant to purify water from the Mediterranean Sea and pump it to regions in the West Bank [Pearce 2004].

Other water purification technologies can drastically help prevent degradation of the world's freshwater supply.
Currently, many countries discharge waste directly into freshwater sources. While this is the easiest and least expensive solution to wastewater disposal, every cubic meter of waste discharged in this way pollutes about 8-10 cubic meters of consumable freshwater [Rosegrant et al. 1995, 255]. This unnecessary pollution could be prevented by building basic sewage treatment systems, perhaps with international help.

# Building Consciousness

Governments can provide some answers to water conservation, but it is also important for individual citizens to be educated about the world's water problems. Generally, when people feel as though a resource is abundant enough to last forever, they use it with reckless abandon. However, if people were to realize that water is a resource to be conserved, drastic improvements in the amount of wasted consumable water could be seen. In the U.S., despite a growing population, per capita use of water has decreased steadily since 1995; and net water use in many countries, including highly populated ones such as China, is beginning to plateau [Gleick 2000, 290-293]. These changes come from domestic and economic reforms of the countries' governments but also from increased societal awareness about water conservation. From domestic improvements such as low-flush toilets to improved agricultural water-saving techniques, countries are beginning to conserve water in daily life.

# Avoiding Water Pollution

Governmental regulations have helped control negative side-effects of industrial water use. For more drastic reductions, new regulations or social initiatives need to "shift the corporation's thinking from [pollution] compliance to pollution prevention" [Greer et al. 1999]. The company itself can save substantially with reduced resource consumption and waste [Greer et al. 1999].
An initiative to support such organizations, and universalize their scientific wastewater reduction procedures, would serve as yet another mechanism to decrease the continuing abuse of the world's freshwater supplies. + +# References + +Accepta. 2005. Water efficiency in the textile & leather Industry. http://www.accepta.com/Industry_Water_Treatment/Water_efficiency_textile_industry.asp. Accessed: 4 Feb 2005 +Bokek, E. 2002. Proceedings of the two parallel conferences: "The Dead Sea—Between Life and Death" and "Learning from Other Lakes." http://www.livinglakes.org/images/200210proceedings.pdf. Accessed: 4 Feb 2005 +A liter here, a liter there, is water saved. 2004. China Daily http://www.chinadaily.com.cn/english/doc/2004-03/23/content_317082.htm. + +City of New York Department of Environmental Protection. 2002. New York City's water supply system: Watershed agreement. http://www.ci.nyc.ny.us/html/dep/html/agreement.html. Accessed 4 Feb 2005. +City of Peoria, AZ. 2003. Recharging Groundwater. Water Report 2003. http: //www.peoriaaz.com/Water03/Water-Report-recharge.asp. Accessed: 4 Feb 2005 +CNN.com. 2000. Global water supply central issue at Stockholm conference. cnn.com http://archives.cnn.com/2000/NATURE/08/14/water.shortage. reut/. Accessed 4 Feb 2005 +Coe, M. 1998. Lake Chad has shrunk into "a ghost." Afrol News http:// www.afrol.com/Categories/Environment/env062_lake_chad.htm. Accessed: 4 Feb 2005. +Cosgrove, William J. 2003. Water Security and Peace: A Synthesis of Studies Prepared under the PCCP-Water for Peace Process. Paris, France: UNESCO. +Dunne, George W., and Roland F. Eisenbeis. 1973. Droughts. Nature Bulletin No. 488-A, 7 April 1973. Forest Preserve District of Cook County (Illinois). http://www.newtondep.anl.gov/natbltn/400-499/nb488.htm. +Dykstra, P. 1999. Droughts come and go, but growing demand for water remains. http://www.cnn.com/SPECIALS/viewes/y/1999/08/dykstra.water.aug12/. +Emmerson, G. 1998. 
Every Drop Is Precious: Greywater as an Alternative Water Source. Brisbane, Australia: Queensland Parliamentary Library Publications. http://www.parliament.qld.gov.au/Parlib/Publications_pdfs/books/rb0498ge.pdf. Accessed 4 Feb 2005 +Engelman, R., P. LeRoy, and T. Gardner-Outlaw. 1993. Sustaining Water Population and the Future of Renewable Water Supplies. Washington, DC: Population Action International. +Farley, Maggie. 2001. Report predicts thirstier world. Los Angeles Times (August, 2001). http://www.commondreams.org/cgi-bin/print.cgi?file=/headlines01/0814-03.htm. Accessed: 2/05/2005. +Food and Agriculture Organization of the United Nations. 2003. Review of World Water Resources by Country. Water Reports 23. Rome, Italy: United Nations. http://www.fao.org/documents/show_cdr.asp?url_file= /DOCREP/005/Y4473E/Y4473E00.htm. +Gleick, Peter H. 2000. The World's Water, 2000-2001: The Biennial Report on Freshwater Resources. Washington, DC: Island Press. +Greer, Linda. 1999. Preventing industrial pollution at its source: A final report of the Michigan Source Reduction Initiative. National Resources Defense Council. http://www.nrdc.org/water/pollution/msri/msriinx.asp. + +Groningen Growth & Development Centre. 2005. Total Economy Database: Real Gross Domestic Product. University of Groningen. http://www.ggdc.net/dseries/gdp.shtml. Accessed: 2/07/2005. +Japan Aerospace Exploration Agency. 2004. Shrinking sea in the desert: The Aral Sea. http://www.eorc.jaxa.jp/en/imgdata/topics/2004(tp040217.html. Accessed 4 Feb 2005. +Jiwarajka, Shri Sushil Kumar. 2002. Rainwater harvesting in urban habitats: Issues & implications. Welcome address, Seminar on Rainwater Harvesting, 7 November 2002. http://www.cleantechindia.com/eicnew/MUMBAI/jiwarajka'sSpeechforRainwaterharvesting\%20seminar.htm. Accessed: 4 Feb 2005. +Living Lakes. 2003. Mexico's largest lake is shrinking fast—International Living Lakes network supports protection of Lake Chapala. Global Nature Fund Press Releases. 
http://www.globalnature.org/docs/02\vorlage.asp?id=13873\&sp=E\&m1=11089\&m2=13812\&m3=13873\&m4=\&domid=1011\&newsid=1168.
Martin, A., G. Ho, and K. Mathew. 1997. Greywater reuse: Some options for Western Australia. Permaculture Association of Western Australia. http://www.rosneath.com.au/ipc6/ch08/anda/. Accessed: 4 Feb 2005.
Nelson, L.C., W.K. Sawyer, and L.Z. Shuck. 1999. Recharging Appalachian aquifers using watershed specific technology and methodology and technical considerations relative to mountain top removal. In 1999 Appalachian Rivers II Conference Proceedings.
Pacific Institute for Studies in Development, Environment, and Security. 2002. Water: Facts, trends, threats, and solutions. http://www.pacinst.org/reports/waterFactsheet/water_factsheet.pdf.
Pearce, Fred. 2004. Israel lays claim to Palestine's water. New Scientist Online. http://www.newscientist.com/article.ns?id=dn5037. Accessed: 2/05/2005.
Ramsar Convention on Wetlands. 1995. Groundwater replenishment—Fact sheet 2 of 10 on wetland values and functions. Background papers on Wetland Values and Functions. http://www.ramsar.org/values-groundwater_e.htm. Accessed: 4 Feb 2005.
Rosegrant, M.W. 1995. Dealing with water scarcity in the next century. 2020 Vision Brief. http://www.ifpri.org/2020/briefs/number21.htm. Accessed: 5 Feb 2005.
_____, X. Cai, and S.A. Cline. 2002. World Water and Food to 2025: Dealing with Scarcity. Appendix A: Model Formulation and Implementation: The Business-as-Usual Scenario. Washington, DC: International Food Policy Research Institute and the International Water Management Institute.

Sayegh, J. 2004. Hydropolitics and geopolitics in Africa: Impact of water on communities, food security, transnational relationships, and development. http://www.sas.upenn.edu/African_Studies/Current_Events/hydro0405.html.
Shiklomanov, Igor A. 1999. World Water Resources and Their Use. State Hydrological Institute. St. Petersburg, Russia.
http://espejo.unesco.org.uy/part_3/read_me.html.
________. 2000. Appraisal and assessment of world water resources. Water International 25(1): 11-32.
Slattery, R.N. 2004. Recharge to and discharge from the Edwards Aquifer in the San Antonio Area, Texas, 2003. U.S. Geological Survey. http://tx.usgs.gov/reports/dist/dist-2004-01. Accessed 4 Feb 2005.
Texas Water Development Board. 1997. Texas Guide to Rainwater Harvesting. 2nd ed. Austin, TX. http://www.twdb.state.tx.us/publications/reports/RainHarv.pdf.

![](images/0509ae015f06de8be123e6d2c0fc7b5b40a1d774f496f9598e22bcf46c3b8ea8.jpg)
Frances Haugen, Daniel Lindquist, Burt Tilley, and Steven Krumholz

# The Petroleum Armageddon

Jonathan Giuffrida

Palmer Mebane

Daniel Lacker

Maggie Walker Governor's School

Richmond, VA

Advisor: John Barnes

# Summary

We describe the depletion of petroleum, a vital nonrenewable resource, over the next few decades. Petroleum is the fossil fuel that drives our industries, heats our buildings, and powers our automobiles; plastics and fertilizers are also derived from it. The production of a nonrenewable resource that cannot be returned to the environment is generally considered to follow a bell-shaped curve: as interest and demand increase, the rate of production likewise increases until the world is producing at capacity, after which the rate decreases as the resource is slowly exhausted.

Our model assumes that production and consumption are equivalent; it does not account for stockpiles of oil or for the delay caused by shipping and distribution. We also assume that total discoveries of new reserves and total production follow logistic curves; this assumption is strongly supported by professional opinion and by our data.

Our approach includes four major functions of time: total production, total known oil, total remaining oil, and total demand.
"Total" means cumulative; the derivatives of these functions give the rates of production, discovery, and demand at any time (total remaining oil excepted). The equations include parameters for production, discovery, and demand, which allow our functions to track historical data. The demand function is based heavily on total production, in accordance with the law of supply and demand; production in turn depends on total known oil (as a carrying capacity) and on demand.

By varying the parameters, the model is flexible enough to accommodate technological advance, economic limits and incentives, natural or manmade disaster, and increases or decreases in demand. The model also includes a management policy for future production, involving government limits on production that enable production at a nearly constant rate well into the 22nd century, by which point a viable alternative should be available. Policies for increasing the security of the oil supply, decreasing the impact on the environment, and developing an alternative to oil are suggested as part of this management policy.

Strengths of the model include its ability to adjust to virtually any factor influencing production, even when factors overlap. Prominent among its weaknesses is its dependence on the assumptions. Another weakness is the possibility of a change in total recoverable oil, which would severely affect all four functions, although the model could easily be adjusted.

# Introduction

The United States per capita GNP (gross national product) rose by a factor of 7.5 between 1870 and 1980. The fuel for such unparalleled growth was petroleum. Nonrenewable fuels now supply almost $90\%$ of the energy produced domestically. But petroleum is a nonrenewable resource with a limited supply: we have used almost half of the world's total, demand is increasing, and world production will soon peak.

In 1956, near the height of the growth rate of the U.S. oil industry, geologist M.K.
Hubbert drew a bell-shaped curve to depict production of oil in the U.S. over the coming decades. Despite a large unforeseen find in Alaska, Hubbert predicted with remarkable accuracy that domestic oil production would peak in 1970. In 1989, the United States imported more fuel than it produced domestically; currently, it gets $60\%$ of its fuel from imports. The lack of recent discoveries is even more disconcerting. We have already located over 1,600 billion barrels of oil in the world, whether already produced or still underground. If we accept the preferred estimate of 1,800 billion barrels as the total amount of oil recoverable at a profit, then only 200 billion barrels remain to be discovered, which would add only about $20\%$ to current reserves.

Many people believe that once oil prices rise high enough due to scarcity, it will become profitable for oil companies to harvest reserves that are not yet economically viable. However, a constraint more fundamental than the price in dollars operates here: the price in energy. In Hubbert's own words, "When the energy cost of recovering a barrel of oil becomes greater than the energy content of the oil, production will cease no matter what the monetary price may be" [Ecosystems 2005].

The fundamental question is: How long will our oil supply last? World production is predicted to peak between 2000 and 2020. The world supply can be expected to run out between 40 and 60 years from now.

Our model addresses depletion of oil over a long horizon by using historical data from 1930 onward. The model is flexible enough to account for almost any economic, political, or natural factor.

# Historical Data

Figure 1 shows oil discovered over the last 70 years, together with a logistic curve fitted by least squares.

![](images/e2e880d01867ba45fd58dd4800e17076bcb8fd444ff844d8bdb72abd70e4c632.jpg)
Figure 1. Total oil discovered with fitted logistic curve. Data source: Campbell [n.d.].
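A least-squares logistic fit like the one in Figure 1 can be sketched as follows. The data below are synthetic stand-ins (the paper's actual series comes from Campbell [n.d.]), and the fixed carrying capacity, grid ranges, and parameter values are illustrative assumptions, not the paper's fitted values:

```python
import math

def logistic(t, K, r, t_mid):
    """Cumulative logistic curve K / (1 + exp(-r (t - t_mid)))."""
    return K / (1.0 + math.exp(-r * (t - t_mid)))

def fit_logistic(ts, ys, K):
    """Coarse grid-search least-squares fit of (r, t_mid), with K held fixed."""
    best = None
    for i in range(1, 30):                            # candidate growth rates 0.01..0.29
        r = i / 100.0
        for t_mid in range(min(ts), max(ts) + 1):     # candidate midpoint years
            sse = sum((logistic(t, K, r, t_mid) - y) ** 2
                      for t, y in zip(ts, ys))
            if best is None or sse < best[0]:
                best = (sse, r, t_mid)
    return best[1], best[2]

# Synthetic "cumulative discovery" series generated from known parameters so
# the fit can be checked; 1,800 Gb is the paper's assumed ultimate recovery.
K = 1800.0
ts = list(range(1930, 2001, 10))
ys = [logistic(t, K, 0.08, 1965) for t in ts]
r_fit, t_mid_fit = fit_logistic(ts, ys, K)   # recovers r = 0.08, t_mid = 1965
```

A production fit would refine this with a finer grid or a proper nonlinear least-squares routine, but the sum-of-squared-errors objective is the same.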

This logistic curve has a carrying capacity of $1,800\mathrm{Gb}$ (gigabarrels = billion barrels), the total of known oil (harvestable at a profit of both money and energy with modern technology). This number is one of the most disputed among scientists in this field; 1,800 Gb is approximately the median estimate [Campbell 1997].

The logistic curve models cumulative production; its derivative models actual production, or the rate of harvest. The derivative of a logistic curve is a symmetric bell-shaped curve; it is not a true normal distribution, but a normal curve approximates it closely. Figure 2 shows a bell curve fitted by least squares to production data. The rise above expected levels in the early 1970s is due to the OPEC price increase. The decline from the peak at 1979 is due to concern over oil supplies following the Iranian revolution. In the data set, the post-WWII economic boom and the early-1980s recession are clearly visible.

# Assumptions

- Production and consumption are identical; that is, there is no delay between production and consumption.
- The demand function must obey the economic laws of supply and demand: supply and price are inversely proportional, and demand and price are directly proportional.
- The model year $0$ ($t = 0$) is 2000.
- Oil cannot be artificially produced.

![](images/6404f7c4a317b0998c3f348f97df76e3ba9535dfdf52678f0095bac0d086b415.jpg)
Figure 2. Oil production with fitted bell curve. Data source: Ramirez [1999].

- Harvesting of oil will follow the bell curve based on past data, and discovery the logistic curve indicated, although actual past production and discoveries do not fit the curves exactly, nor can future figures be expected to. According to our model, the midpoint of cumulative oil production will occur in early 2006, with peak production of $28\mathrm{Gb}$/yr. By the end of 2072, $99\%$ of the total oil will have been produced.
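The relation between a logistic curve and its bell-shaped derivative can be checked with a pure-logistic sketch (an idealization; the paper's full model couples discovery, harvest, and demand). The peak rate of a logistic with growth rate $r$ and capacity $K$ is $rK/4$, so the quoted peak of 28 Gb/yr with $K = 1800$ Gb implies $r \approx 0.062$; the midpoint year and the derived $r$ here are illustrative, not the paper's calibrated values:

```python
import math

K = 1800.0                   # ultimate recovery, Gb (paper's assumed total)
peak_rate = 28.0             # quoted peak production, Gb/yr
r = 4.0 * peak_rate / K      # peak of a logistic derivative is r*K/4
T_MID = 6.0                  # midpoint of cumulative production: early 2006

def cumulative(t):
    """Cumulative production Q(t), logistic in model years (year 0 = 2000)."""
    return K / (1.0 + math.exp(-r * (t - T_MID)))

def production(t):
    """Hubbert curve Q'(t) = r Q (1 - Q/K): bell-shaped, maximal at T_MID."""
    Q = cumulative(t)
    return r * Q * (1.0 - Q / K)
```

At the midpoint, `cumulative` returns $K/2 = 900$ Gb and `production` returns exactly the 28 Gb/yr peak; the rate falls off symmetrically on either side.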

# Model

Our model has four main functions:

- $S(t)$ , the cumulative amount of oil discovered by time $t$ ;
- $H(t)$ , the cumulative amount of oil harvested by time $t$ ;
- $D(t)$ , the cumulative amount of oil demanded by time $t$ ; and
- $M(t)$ , the total amount of untapped oil at time $t$ .

Both $S(t)$ and $H(t)$ correspond to data; both $D(t)$ and $M(t)$ depend on $H(t)$ .

We model the discovery function $S(t)$ as growing logistically toward a carrying capacity $M_0$ , with a growth rate that also depends on demand:

$$
S ^ {\prime} = \frac {d S}{d t} = k D S \left(1 - \frac {S}{M _ {0}}\right),
$$

where $k$ is a constant.

The total amount of oil ever harvested by time $t$ , $H(t)$ , also follows a logistic curve. It too should increase with demand.

The carrying capacity to which $H(t)$ levels off is $S(t)$ , since oil harvested cannot exceed oil discovered. Thus, we have the following differential equation:

$$
H ^ {\prime} = \frac {d H}{d t} = b D H \left(1 - \frac {H}{S}\right),
$$

where $b$ is a constant.

Total world oil, $M(t)$ , is given by

$$
M (t) = M _ {0} - H (t).
$$

However, to accommodate natural disasters and outside manipulations, it is more relevant and more practical to express $M(t)$ differentially:

$$
\frac {d M}{d t} = M ^ {\prime} = - \frac {d H}{d t} = - b D H \left(1 - \frac {H}{S}\right).
$$

By the basic economic laws of supply and demand, supply and price are inversely proportional, and price and demand are directly proportional. Transitively, supply and demand are inversely proportional, or $D(t) = c / H(t)$ for some constant of proportionality $c$ . Differentiating this relationship and substituting the equation for $H'$ , we get

$$
D ^ {\prime} = \frac {d D}{d t} = - \frac {c}{H ^ {2}}\, \frac {d H}{d t} = - \frac {c b D}{H} \left(1 - \frac {H}{S}\right).
$$

We used these four functions and the improved Euler's method to create a spreadsheet to project estimates from known initial values.
The tangent (slope) at the current point is calculated from the differential equations; then the slope at the point a step $h$ farther along the $t$-axis is calculated at the Euler-predicted value, and the two slopes are averaged to advance the solution. As $h \to 0$ , the estimate becomes increasingly accurate.

Figure 3 illustrates the depletion and cumulative discovery, harvest, and demand of oil. For many purposes, the derivatives of these functions are more relevant: at time $t$ , $H'$ is the production rate, $D'$ the demand rate, and $S'$ the discovery rate. The interplay among these rates is illustrated in Figure 4.

Production noticeably lags behind discovery but follows a similar bell curve. Because of the sensitivity of the demand function, only a very low production causes a perceptible increase in demand.

To customize the model, we add several more factors.

- We implement a limiting factor $L(t)$ on $H'$ in the simple linear form $L(t) = mt + r$ , where $m$ and $r$ are constants. When $H'(t) > L(t)$ , we use the value of $L(t)$ instead of $H'(t)$ . Doing so allows the model to simulate governmental or other external restrictions on the rate of harvesting.
- We make the constant $b$ in the differential equation for $H'$ more flexible by splitting it into two factors: $b$ itself and a second harvesting constant that takes effect at a specified starting time. This allows the model to be modified easily to simulate the effects of a future technological innovation or other external change in the harvesting rate. The difference between the limiting factor and the harvest constant is that the limiting factor caps the rate of harvesting, while the harvest constant sets no such cap but simply changes the rate of harvesting from a certain time onward.

![](images/277a9c2c96586c38091cb18ea70d3a03bcc8dac22b69a57caba90c71cffa9b52.jpg)
Figure 3. Depletion of oil (year $0 = 2000$ ).
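The improved Euler (Heun) scheme described above can be sketched for the coupled $S$, $H$ system, with $D = c/H$ evaluated directly rather than integrated. The constants, initial values, and horizon below are illustrative assumptions, not the paper's calibrated parameters:

```python
M0 = 1800.0                        # total recoverable oil, Gb
k, b, c = 2e-5, 4e-5, 5e4          # discovery, harvest, demand constants (assumed)

def rhs(S, H):
    """Right-hand sides S' = kDS(1 - S/M0), H' = bDH(1 - H/S), with D = c/H."""
    D = c / H
    dS = k * D * S * (1.0 - S / M0)
    dH = b * D * H * (1.0 - H / S)
    return dS, dH

def heun_step(S, H, h):
    """Average the slope at the current point and at the Euler predictor."""
    dS1, dH1 = rhs(S, H)
    dS2, dH2 = rhs(S + h * dS1, H + h * dH1)
    return S + h * (dS1 + dS2) / 2.0, H + h * (dH1 + dH2) / 2.0

S, H = 60.0, 30.0                  # assumed cumulative totals at the start year
h = 0.1                            # step size in years
for _ in range(int(170 / h)):      # march 170 model years forward
    S, H = heun_step(S, H, h)
M = M0 - H                         # untapped oil remaining
```

The ordering $0 < H < S < M_0$ is preserved at every step, since harvest levels off at $S$ and discovery levels off at $M_0$, mirroring the carrying-capacity structure of the equations.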

![](images/1c2309e258ef61b305a6943eaa93c812f08f792ae5cef6296d894bc53e7024e2.jpg)
Figure 4. Rates of harvesting, demand, and discovery (year $0 = 2000$ ).

# Manipulations

To apply this model to hypothetical real-world situations, we manipulate the customization parameters. First, we imagine a moderate limit of 12 Gb/yr on the consumption rate, beginning in 2010. Figure 5 shows the result.

![](images/55e480be80af55d55bfa5481cd9260a2502c9df01d2efb94dda55367d6d9611a.jpg)
Figure 5. Depletion of oil with moderate annual limit of $12\mathrm{Gb / yr}$ .

Such a worldwide limit would be difficult to implement. Eventually, no matter how production is limited, the oil supply will run out (unless, of course, production is halted completely); in this scenario, the oil is depleted almost as quickly as without a limit. All that can be manipulated is short-term versus long-term satisfaction of demand. A harsh limitation on harvesting would satisfy less of the demand for a longer period of time, while a less restrictive limitation would satisfy more of the demand but for a shorter period. Additionally, the sharp drops in the rate of production in 2010 and 2075 would damage the world economy and deprive a large percentage of the population of the oil it needs. Thus, the problem of oil depletion cannot be mitigated, only manipulated.

An alternative, though less effective, policy would be a $60\%$ downgrade in the efficiency of oil-harvesting methods or technologies, occurring or imposed suddenly in 2010. Mathematically, we multiply the harvest constant $b$ by 0.4. Such a restriction would conserve oil for a longer period while causing a sharp drop in current production; however, the effect would be more gradual, producing an economic recession rather than economic collapse. A corresponding increase in the efficiency of oil harvesting, due to technological innovation, would accelerate depletion.
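The two customization hooks, a linear cap $L(t) = mt + r$ on the harvest rate and a harvest constant that switches value at a start time, can be sketched as below. The numbers ($m = 0$, $r = 12$ Gb/yr, the factor 0.4, $t_0 = 10$, $b = 4 \times 10^{-5}$) echo the scenarios discussed above but are used here purely for illustration:

```python
def capped_rate(dH, t, m=0.0, r=12.0):
    """Replace H'(t) by L(t) = m*t + r whenever H'(t) exceeds the cap."""
    return min(dH, m * t + r)

def harvest_constant(t, b=4e-5, factor=0.4, t0=10.0):
    """From model year t0 (2010) on, multiply b by `factor` (a 60% downgrade)."""
    return b * factor if t >= t0 else b

# A 28 Gb/yr uncapped rate in model year 12 is cut to the 12 Gb/yr limit:
capped = capped_rate(28.0, 12.0)
# Before 2010 the harvest constant is b; afterward it is 0.4 * b:
b_before, b_after = harvest_constant(5.0), harvest_constant(15.0)
```

In a spreadsheet or time-stepping loop, `capped_rate` is applied to $H'$ after evaluating the differential equation, while `harvest_constant` is applied before, which is exactly the distinction the paper draws between the two hooks.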
To simulate a natural disaster, we make manual adjustments to $S(t)$ and $M(t)$ . Figure 6 illustrates the effects of a disaster in 2010 that destroys 400 billion barrels of known but unharvested oil (imagine wiping out a very productive oil field). After the disaster, the supply approaches a new carrying capacity of $M_0 - 400$ .

![](images/229d55a2c19e67cd06a29b6576ccb86d292ba2b93253934cfec4d411ad9e322a.jpg)
Figure 6. Depletion of oil with natural disaster occurring in 2010 to unharvested oil.

Theft, terrorism, or any waste of oil (such as an oil spill) would have a similar effect, but on already-harvested rather than unharvested oil. Thus, $H$ would decrease by the same amount as $S$ did in the previous example, but $M(t)$ would not change. This kind of disaster would not cause the world's oil supply to deplete more rapidly, as the natural disaster did; in fact, it decreases the rate of depletion, though it eventually reaches the same end state.

For a future technological development necessitating more oil, that is, a sudden increase in demand, the model shows the expected effect of shortening the horizon to oil exhaustion (Figure 7); the opposite occurs with a development, such as the introduction of an oil substitute, that reduces demand.

# Future Alternatives

We must develop another fuel source, or combination of sources, to replace oil. This fuel source need not be renewable; the U.S. has enough coal reserves to last for centuries. But there must ultimately be a switch to renewable fuel sources (such as nuclear, hydroelectric, solar, and wind). We assume that a conglomeration of renewable and nonrenewable fuel sources ultimately replaces oil completely, well beyond the model's range.

![](images/0c47da5dbf244d614846d51a3cfbc248dcb0dcc6fc1e550bacbbb824160f9040.jpg)
Figure 7. Depletion of oil with sudden increase in demand.
We create a complex management policy to govern the harvesting of oil for the next century, starting in 2010. From then on, scientific alternatives to oil would be encouraged by any means necessary: government funding, taxes, etc. The world energy crisis would be given precedence above all other projects. In the hope that this scientific development would be near completion by the turn of the century, the policy conserves oil so that in 2100 no less than $10\%$ of the initial world oil supply remains: $0.1M_0$ . With $M_0 = 1800$ , the policy would require $H(100) = 1620$ .

We manipulate the harvest constant rather than enforce a production limit. The total deficit is the same; the difference is when the deficit occurs and how quickly it grows. Imposing a rate limit fixes production until the rate limit exceeds the default $H'$ ; thus, short-term deficit and future deficit are equal. Decreasing the harvest constant instead makes the short-term deficit less than the future deficit, since production decreases over time. We assume a preference for short-term production over long-term production, since the sudden drop in oil production in 2010 caused by a limit would be much harder to cope with than a gradual drop caused by a decreased harvest constant.

We find that the optimal reduction of the harvest constant is by $70.5\%$ , making $H(100)$ as close as possible to 1620.

As soon as an alternative to oil is available and marketable, demand for oil will drop (we assume by a factor of 20). As long as production is not too low by that point, the limited oil supply will have sustained the demand. For example, Figure 8 displays the result if an alternative to oil is introduced and widely accepted in 2050, making the management policy obsolete.

![](images/12cf3765ba3b8a8dbd5bbbacaf75eba88c82354c3fe08a79a9f32a539d5aad51.jpg)
Figure 8.
Effects of management policy with $95\%$ decrease in demand in 2050.

Figure 9 shows the corresponding production scenario. Production drops suddenly by more than $75\%$ at the beginning of the management policy and stays low until the appearance of an oil substitute. The drops caused by the change in the harvest constant and by the drop in demand are visible in 2010 and 2050.

The sudden drop in 2010 could be mitigated by severe conservation without reduced harvesting, so that when production plummets there are large storehouses of unused petroleum that can come onto the market over the next few years and soften the economic crash and bankruptcies. Because such a policy affects actual use instead of production, it cannot be represented in the model, which does not distinguish between production and consumption. The sudden drop in production is necessary if any management policy is to take effect quickly; even a gradual drop in production is bound to be accompanied by failing industries and economic recessions.

![](images/022892a4cab3a42129817bf201647098271d33beec65ec63605a37806bd06156.jpg)
Figure 9. Production under management policy that decreases demand $95\%$ in 2050.

# References

Campbell, C.J. 1997. The Coming Oil Crisis. Multi-Science Publishing Co.

______. n.d. Discovery trend: Cumulative discovery by region. http://www.hubbertpeak.com/campbell/images/983fig5.gif.

Ecosystems. 2005. M. King Hubbert. http://www.hubbertpeak.com/hubbert/.

Gever, John, Robert Kaufmann, David Skole, and Charles Vorosmarty (eds.). 1986. Beyond Oil: The Threat to Food and Fuel in the Coming Decades. Cambridge, MA: Ballinger. Reprinted 1991, Boulder, CO: University of Colorado Press.

Ramirez, Vincent. 1999. Oil crises delay—A world oil price forecast. http://members.aol.com/vrex/oil/price_forecast.htm.

![](images/2d1319dabc735990e697dcb027ec730605764ac47624748951f6a3368f02bd27.jpg)
Daniel Lacker, Palmer Mebane, and Jonathan Giuffrida with advisor John Barnes (behind, middle).

# Judge's Commentary: The Outstanding Exhaustible Resource Papers

Ted Hromadka II

Hromadka & Associates

3151 Airway Ave, Suite H-2

Costa Mesa, CA 92626

thromadka@hromadka-associates.com

# Introduction

The Interdisciplinary Contest in Modeling (ICM) provides an exciting and competitive environment for time-constrained innovative thinking. The judging of the papers was accomplished in several stages of review, culminating in a final set of papers reviewed for placement in the top two categories, Outstanding Winners and Meritorious Winners. As a result, each paper ranked in the top two categories was refereed by at least 7 reviewers.

This year's ICM problem examined the eventual depletion of a nonrenewable or exhaustible resource, with each team selecting the resource to be analyzed. Teams therefore considered several different topics, ranging from oil resources to the availability of lumber. Participating teams prepared several exciting investigations that demonstrated innovative thinking and good research into the underpinnings of the resource selected. To rank the contributions across the many selected topics analyzed, judges assessed the following qualities:

Summary: Adequacy of the one-page summary in describing the paper, its results, and its methodology. The summary was a very important factor in a paper's overall score.

Science: Thoroughness of research into the literature regarding the nature and handling of the selected resource, alternatives to it, and new technology for increasing use efficiency or providing alternatives.

Modeling: Assumptions used, documentation of assumed parameter values, appropriateness of the governing mathematical equations, and construction of the conceptual model.

Analysis: Adequacy of model calibration to historical data and trends, analysis of model strengths and weaknesses, sensitivity analysis of modeling components, and variation in model predictions under different societal reactions to continuing depletion of the resource.

Presentation: Quality of report text, graphics, and mathematical development. Clarity of presentation. Use of proper references and citations.

Each judge independently scored each paper on these qualities and then determined an overall score. There was little variability among judges' scores. A large majority of the papers demonstrated an in-depth investigation of the selected resource, and innovative thinking was frequently applied to solving the resulting governing mathematical equations. The judges were uniformly impressed with the quality of research presented in the papers and the amount of work achieved in the very short time frame.

# The Problem

The 2005 ICM Problem considered a highly relevant issue, the fate of a nonrenewable or exhaustible resource, incorporating linkages of consumption to economic, political, environmental, security, demographic, emerging-technology, and other factors. The first task for a participating team was to choose a resource and to understand its nature and its interdependencies with the factors that affect its rate of consumption. For example, such a resource typically has alternatives which, although possibly more expensive to utilize, could extend its life.

Oil was the most popular resource selected among the teams. Other resources considered included potable water, lumber, natural gas, minerals, uranium, and living space.

# Modeling Approach

Ideally, once a conceptual model of supply and demand is developed and calibrated to the history of discovery, development, and consumption, the model can be used to predict the future of the resource under various conditions. Teams also researched emerging technologies that provide alternatives to, or more efficient use of, the resource. Using historical data, teams used regression to assess demand and supply trends.

Teams typically noted that historical consumption was increasing with time, and at an increasing rate. The mathematical analogs recommended were typically of the exponential type, with parameters calibrated by a least-squares fit to data. Interestingly, although teams considered different resources, the resulting mathematical equations tended to be similar.

Many teams examined world population trends and developed relationships between resource consumption and world population. They paid little attention, however, to the resource-relevant distribution of the population growth; most consumption has occurred in developed nations, which have a different population growth trend than developing countries do. In any case, teams readily noted that at some point demand will exceed supply. For oil, this "undersupply" was estimated to occur between 2015 and 2050.

Many teams went no further; they did not focus on how their model's predictions would change under different global conditions and reactions to decreasing availability of the resource. In other words, they assumed that the future will reflect the past and that the world will not react to decreasing availability of the resource. However, a few teams did examine global influences on their model. For example, one team quantified the effects of an oil embargo by correlating the impacts of past embargos on oil consumption.

Many teams researched their resource and investigated alternative methods to improve efficiency.
Probably due to the limited time frame, however, they paid little attention to modeling the effect of implementing these alternatives or efficiency improvements in delaying a possible "undersupply point in time."

# Presentation of Results

The judges were impressed with the hard work that went into the write-ups. Excellent graphs and presentation of equations were typical. However, the presentation of model development and modeling results varied greatly. In a few cases, the equations presented were not appropriate for the model described in the text of the paper; possibly these incidents were simply carelessness or typographical errors. The top-ranked papers were of the highest quality in research into the literature, development of the mathematical model, approximation or solution of the governing equations, analysis of the recommended model, and presentation of the results.

# Conclusions and Recommendations

For me, being involved with the review and judging of the 2005 ICM Problem was an enjoyable experience. The judging demonstrated to me once again the continuing potential for young people to absorb new technology and to accept challenges to improve themselves by independent work. It gives me comfort to know that perhaps a few of the 2005 ICM Problem teams will be interested and challenged by this very relevant problem, and may one day discover new technology or policies that postpone the so-called "undersupply point in time," or find an alternative technology that does not use nonrenewable or exhaustible resources.

The following are recommendations for future ICM Problem solvers:

- Write-ups: Check your equations to avoid a typographical error resulting in a relationship that is inconsistent with the relevant written description.
- Clearly state modeling assumptions and their limitations, and cite references to justify specific choices (such as ranges for modeling parameter values).
- Provide a relevant list of references that are clearly used in the text. Don't list references that you don't cite in the report.
- Do sensitivity testing of your model and discuss the results.
- Evaluate your modeling results and discuss their implications. If your results agree with the literature, say so and cite references; if not, state the disagreement and cite references.
- Double-check your grammar and spell-check the report.

# About the Author

Ted Hromadka II has three Ph.D. degrees, in applied mathematics, civil engineering (water resources emphasis), and advanced computational modeling. He is a Certified Hydrologist in both surface water and groundwater, a registered civil engineer in California, Nevada, Arizona, and Hawaii, and a licensed Geoscientist in Texas. He has held academic and consulting positions concurrently since 1973, including Research Hydrologist at the USGS, Research Associate at Princeton University, and Professor (now Emeritus) in the Departments of Mathematics, Geological Sciences, and Environmental Studies at California State University. He currently holds an adjunct position at the Wessex Institute of Technology, England, and is a Principal Hydrologist at the consulting firm Hromadka & Associates. Before that, he founded and was Practice Director of the Hydrology & Atmospheric Sciences practice at Exponent Failure Analysis & Associates. You can learn more about Ted at http://www.hromadka.net.

# Author's Commentary: The Outstanding Exhaustible Resource Papers

Paul J. Campbell

Mathematics and Computer Science

Beloit College

Beloit, WI 53511

campbell@beloit.edu

# Introduction

This modeling problem was inspired by revisions that I was making to the last chapter of COMAP's overwhelmingly successful college-level textbook in applied mathematics for liberal arts students [Garfunkel et al. 2006].
That chapter, "The Economics of Resources," applies concepts and formulas from simple finance to assess how large the Earth's population may become, how long nonrenewable resources can last, and why renewable resources are extinguished in pursuit of economic gain. The revision includes M.K. Hubbert's model for exhaustion of oil and ends with a retelling of the ecological and human tragedy of the despoilment of Easter Island [Diamond 1995; Diamond 2005].

The compound interest formula serves as a basic model for growth of a biological population. The chapter also considers the logistic model, used by many teams in this year's ICM. The other main formula, for savings at interest, provides a way to estimate cumulative usage of a nonrenewable resource whose rate of use is increasing at a fixed rate.

The chapter introduces concepts and terminology of Michael Olinick (Middlebury College) [1991]: The static reserve of a resource is how long a fixed supply $S$ will last at a constant annual rate of use $U$ , namely, $S / U$ years. The exponential reserve is how long the supply will last at an initial rate of use $U$ that is growing at rate $r$ per year (that is, growing exponentially), namely,

$$
\frac {\ln \left(1 + \frac {r S}{U}\right)}{\ln (1 + r)}.
$$

In keeping with the spirit of the book, the chapter illustrates these concepts with data on real resources. For example, as several teams in this year's ICM noted, the U.S. has recoverable reserves of coal that would last about 250 years—at the current rate of use. However, the coal will last only 85 years if the rate of use increases at $2.25\%$ per year, as it did from 2002 to 2003. Exercises ask students to calculate the exponential reserves for oil and natural gas under the curious projections of the U.S. Geological Survey that world consumption will increase at respectively $1.9\%$ and $2.2\%$ per year through 2025.
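Olinick's two reserve measures can be checked numerically. The sketch below uses the coal example quoted above (a 250-year static reserve with use growing at 2.25% per year) and confirms the 85-year figure:

```python
import math

def static_reserve(S, U):
    """Years a fixed supply S lasts at a constant annual rate of use U."""
    return S / U

def exponential_reserve(S, U, r):
    """Years S lasts when use starts at U and grows at rate r per year:
    cumulative use U*((1+r)**n - 1)/r = S solved for n."""
    return math.log(1.0 + r * S / U) / math.log(1.0 + r)

# Coal example: supply equal to 250 years of current use, growth 2.25%/yr.
static_years = static_reserve(250.0, 1.0)             # 250 years
exp_years = exponential_reserve(250.0, 1.0, 0.0225)   # about 85 years
```

The striking gap between the two figures (250 versus roughly 85 years) is the chapter's point: modest exponential growth in consumption collapses a seemingly vast reserve.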

Perhaps the most eye-opening exercise for students is the one that asks, "Can our energy problems be solved by increasing the supply?" It asks students to compare the exponential reserve for an amount $S$ of a resource with those for $10S$ and $100S$ , under a rate of consumption that is increasing $2.5\%$ per year (as U.S. oil use has been doing since 1973) [Nering 2001].

Exercise. U.S. oil consumption in 2004 was 7.5 billion barrels (bbl), of which almost $60\%$ was imported. As I write this in spring 2005, the U.S. House of Representatives has approved harvesting oil from the Alaska National Wildlife Refuge (ANWR) and the Senate is about to take up the matter. Seismic estimates put the amount of recoverable oil in the ANWR at 5 to 8 billion bbl [Korpela 2002], which could become available starting in 5 years or so (perhaps in 2010). How many months of supply of oil for the U.S. will the entire output of ANWR oil amount to in 2010, if U.S. usage continues to increase at $2.5\%$ per year? (Predicts Korpela [2002], "That it will be drilled one day is a foregone conclusion, for when shortages appear every argument against drilling it will be dismissed by the public's clamour for oil.... The most appealing argument is to save it as long as possible, for once all efforts have been made to shift into a thrifty living, any oil from it would go further.")

Searching for data for the book's examples and exercises led me to the literature on models for exhaustion of nonrenewable resources.

# The Outstanding Papers

The Outstanding papers go into these matters far more deeply than can a book that presumes no knowledge of calculus or differential equations.

The paper from the Olin School of Engineering examines data on U.S. water withdrawals and correctly concludes that the data are too variable to allow extrapolation. It then uses a common parametric family of models to model water withdrawal in the agricultural, industrial, and municipal sectors.
This model incorporates a factor of "economic and political influence"—the fraction of under/over-draw of water relative to the renewable amount—taken to a power that is a parameter for the sector. This feature is a clever idea that avoids directly considering pricing, lets the three sectors "compete" for the same water, and (via different powers of the fraction) lets the model adapt to differential "pressures" in the three sectors. Using linear regressions of change in irrigation area and of population growth on log time, and of GDP on time, plus data on U.S. water usage 1990-1995, the paper identifies the power parameters. These values vary considerably in size, but the paper does not explore why. The paper then projects water usage for the U.S. and (with the same power parameter values) for an imaginary country with different initial withdrawal amounts and a different amount of renewable water. The projections are point projections, without any specification of range of variability. The paper concludes with a thorough roundup of literature on alternatives to drawing down nonrenewable water sources.

The other two papers treat oil. The paper from the Maggie Walker Governor's School follows the ideas of Hubbert [Laherrère 2000; Deffeyes 2001; 2005], fitting a logistic curve to oil discovery and a normal-distribution curve to oil production (a Hubbert curve is the derivative of a logistic curve). The paper uses a spreadsheet and Euler's method to project estimates for cumulative discovered, harvested, demanded, and remaining untapped oil. A key assumption that avoids considering price (and its volatility) is simple inverse proportionality of cumulative oil discovered and cumulative demand for oil, in the form $y = c/x$. The rationale offered for this assumption is simply the law of supply and demand; but the paper does not try to argue for the functional form used, nor for why past discoveries and past demand should be so affected.
The paper concludes with applications of the model to various alternative scenarios, including disasters and alternative fuels.

The paper from East China University of Science and Technology begins with a simple system of linear first-order differential equations involving oil supply, demand, and price over time. Surprisingly, the (analytic) solution for demand fluctuates; with addition of an exponential forcing term, the demand function fits historical data well. The paper then fits demand as just an exponential plus a constant but does not return to the system to examine the consequences for supply and price (they are exponentially driven, too). The paper uses data for 1995-2003 and finds linear regressions of demand on world GDP, on population, and (rather obviously) on carbon dioxide emission from oil consumption. The linear fits are excellent in part because the time interval is short enough to mask the exponential trend in demand fitted earlier. The paper projects the date of oil exhaustion under various assumptions of growth in GDP and under a logistic model of population growth, as well as the allowable oil consumption under various rates of growth of carbon dioxide emissions. Utterly innovative is the paper's idea to allocate oil between generations (setting aside some oil for the future, by smoothly decreasing the amount of oil used) and between countries in terms of refinery capacity. Balancing conservation with development also suggests optimizing for GDP produced per barrel of oil. Countries vary enormously in the energy used per dollar of GDP; China uses 3 times as much as the U.S., and Ukraine uses 17 times as much. But some differences are unavoidable because natural resources (e.g., aluminum) help determine industries (energy-intensive smelting of ore, as in Jamaica). Countries also differ in how severely changes in the price of oil would affect their GDP (growth or decrease) [Bacon 2005, 48-52].
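The Hubbert-style relationship that the Maggie Walker paper builds on can be sketched in a few lines: cumulative production follows a logistic curve, and the production rate (the bell-shaped Hubbert curve) is its derivative, peaking when half the resource is gone. The parameter values below (total recoverable oil, steepness, peak year) are illustrative placeholders, not the team's fitted values.

```python
import math

def cumulative(t, q_max=2000.0, k=0.06, t_peak=2005.0):
    """Logistic cumulative production Q(t), in billion bbl (illustrative parameters)."""
    return q_max / (1.0 + math.exp(-k * (t - t_peak)))

def rate(t, q_max=2000.0, k=0.06, t_peak=2005.0):
    """Hubbert curve: dQ/dt, via the logistic identity Q' = k Q (1 - Q/q_max)."""
    q = cumulative(t, q_max, k, t_peak)
    return k * q * (1.0 - q / q_max)

# Production rate peaks exactly when half of q_max has been extracted:
assert abs(cumulative(2005.0) - 1000.0) < 1e-9
assert rate(2005.0) > rate(1990.0) and rate(2005.0) > rate(2020.0)
```

The same structure underlies the spreadsheet-and-Euler's-method projections described above: stepping the logistic rate forward in time accumulates the discovered and harvested totals.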
The two Outstanding papers on oil model the world as a whole; a finer analysis would disaggregate the world into geographical sectors, as the Olin paper did with water and economic sectors.

One concept only implicit in the Outstanding papers is price elasticity of demand [Nievergelt 1987]. How elastic is the demand for oil, water, or other exhaustible resources? For example, by how much did the 2004 increase in oil price from \$30 to over \$50 per barrel lessen demand in the U.S.? How much would a gasoline tax of \$4/gallon (as in Europe) affect demand? For price $P$ and demand $Q$, the price elasticity of demand is the relative change in quantity demanded divided by the relative change in price:

$$
\epsilon = \frac{dQ/Q}{dP/P} = \frac{dQ}{dP}\frac{P}{Q}.
$$

The quantity is always nonpositive; values farther from 0 correspond to greater elasticity of demand; values between $-1$ and 0 indicate inelastic demand. For the U.S., elasticity of demand for oil is $-0.06$, about the same as for coffee or cigarettes; demand for oil is more elastic than for pet food or breakfast cereal, but less elastic than for ice cream, beer, or wine. Elasticity is less when there is no good substitute (not yet for oil in transport), when consumers spend only a little on it at a time (as for gasoline), and when it is seen as a necessity (as gasoline and home heating oil are) [Besanko and Braeutigam 2005, 44-52].

The elasticity of $-0.06$ is for the short term, during which consumers don't have time to adjust completely to the price change. In the long run, U.S. price elasticity for oil is $-0.45$, reflecting the opportunity to plan for reduced use of oil.

Another consideration that the Outstanding papers do not handle (and it would be very difficult to do so) is that the market for oil is not completely subject to supply and demand principles.
The market is partly manipulated by OPEC, whose members account for about $40\%$ of production and (if they cooperate) can adjust their output to meet targets for world supply and price. Despite the run-up in oil prices from 2002 (\$20/bbl) through 2005 (over \$50/bbl), OPEC revenues today are far less than in 1980 (\$66/bbl in 2004 dollars). One result—about whose other consequences one can speculate—is that per capita income in Saudi Arabia declined by $70\%$ from 1980 to 1999 [Wikipedia 2005], the worst ever for any nation in history. Oil prices are denominated in U.S. dollars; the decline of the dollar since 2002 would have required a $30\%$ increase in the price of oil just to maintain purchasing power of the producers in other currencies. A key question for producers is how to optimize present value of future revenues from oil—and over how long a time frame. As I explain in the chapter in For All Practical Purposes, if economic returns from other investments are expected to be higher, it is more profitable to pump oil now (all of it, if possible) and invest the proceeds (e.g., buy the U.S. economy). On the other hand, if the cost of oil can be expected to rise faster than the returns on other investments, it pays to keep it in the ground as long as possible.

# Action?

The U.S. faces no countrywide shortage of water, but over 30 years ago it received a "wake-up" call about oil. America has adjusted to rising energy costs (gas, coal, and wood all go up with oil) by gradually improving efficiencies of industrial production, home heating, and home appliances—but not cars.

Americans feel they have a right to cheap gasoline and are highly averse to increased taxes on it; the American Automobile Association (AAA) demands that such taxes go for highways.
The \$0.50/gal energy conservation tax recommended in 1992 by presidential candidates Paul Tsongas and Ross Perot helped sink their candidacies, and Bill Clinton settled in 1993 for a \$0.04/gal increase.

Last fall, I estimated the cost and economic benefit of putting photovoltaic cells or solar water heating on the roof of our house in Wisconsin. Despite enough sun and some incentives from Wisconsin, neither is a "good investment," and fewer than 10 of each are installed each year in the state. Is economics the only basis for economic decision-making? Are there peculiar economic, political, and particularly religious [Moyers 2005] considerations in U.S. culture that lead us to focus on short-term profits, economies, and pleasures?

But we are not alone in our indifference and in our ambivalence about providing for the future. I am currently living in a building in a Western European country completely dependent on oil imports, where gasoline and electricity cost almost three times as much as in the U.S. This country produces half as much solid waste per person, and uses half as much energy per dollar of GDP, as the U.S. An enormous solar collector field built by farmers south of town is economically viable because the government encourages expansion of non-fossil-fuel electric capacity by requiring electric companies to buy such power at twice what they can sell it for (there is probably a governmental subsidy to the utilities). Yet in our brand-new "energy-efficient" building (it has amazingly good insulation), the hall lights are not on timers, the light bulbs in our apartment are not compact fluorescents but incandescents and halogens (120 to 180 W—I won't go into their short lifetimes), and the environmental organizations on the floor below do not always completely separate waste office paper from trash.
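The elasticity figures quoted earlier ($-0.06$ short run, $-0.45$ long run) suggest why price alone changes American driving habits so slowly. As a rough constant-elasticity sketch of a European-style \$4/gal tax: the \$2/gal baseline price here is my assumption for illustration, not a figure from the text.

```python
# Constant-elasticity demand: Q_new / Q_old = (P_new / P_old) ** epsilon.
# Baseline price of $2/gal is an assumed illustration; the elasticities
# are the short-run (-0.06) and long-run (-0.45) U.S. values quoted above.

def demand_ratio(p_old, p_new, elasticity):
    """Fraction of old demand remaining after a price change."""
    return (p_new / p_old) ** elasticity

p_old, tax = 2.0, 4.0  # $/gal before tax; European-style tax of $4/gal
for label, eps in (("short run", -0.06), ("long run", -0.45)):
    drop = 1.0 - demand_ratio(p_old, p_old + tax, eps)
    print(f"{label}: demand falls about {drop:.0%}")
```

Under these assumptions, tripling the pump price trims demand only about $6\%$ in the short run but roughly $39\%$ in the long run, once consumers have time to adjust.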
Meanwhile, are Americans still debating the question, from my mother's generation or earlier, and long since definitively settled [Greenfield 2004; Rea et al. 1987]: Should they shut off the light if they leave a room for a few minutes?

# References

Bacon, Robert. 2005. The impact of higher oil prices on low income countries and on the poor. International Bank for Reconstruction and Development, United Nations Development Program, World Bank Energy Sector Management Assistance Programme (ESMAP) Formal Report 299/05. http://wbln0018.worldbank.org/esmap/site.nsf/files/299-05_HigherOilPrices_Bacon.pdf.

Besanko, David, and Ronald R. Braeutigam. 2005. *Microeconomics*. 2nd ed. New York: Wiley.

Deffeyes, Kenneth S. 2001. *Hubbert's Peak: The Impending World Oil Shortage*. 2nd ed. 2003. Princeton, NJ: Princeton University Press.

_____. 2005. *Beyond Oil: The View from Hubbert's Peak*. New York: Hill and Wang.

Diamond, Jared. 1995. Easter's end. *Discover* 16 (8) (August 1995): 63-69.

_____. 2005. *Collapse: How Societies Choose to Fail or Succeed*. New York: Viking.

Garfunkel, Solomon, et al. 2006. *For All Practical Purposes: Mathematical Literacy in Today's World*. 7th ed. New York: W.H. Freeman. Editions of this book have been used by more than half a million students since the first edition in 1988, and translations have appeared in German, Greek, and Spanish.

Greenfield, Alan. 2004. A state of enlightenment. *Actuary Australia* (June 2004): 4-6. www.actuaries.asn.au/PublicSite/pdf/actaust0406.pdf.

Korpela, Seppo A. 2002. Oil depletion in the United States and the world. http://greatchange.org/ov-korpela.US_and_world_depletion.html.

Laherrère, J.H. 2000. The Hubbert curve: Its strengths and weaknesses. *Oil and Gas Journal* (18 February 2000). http://dieoff.org/page191.htm.

Moyers, Bill. 2005. There is no tomorrow. *Minneapolis Star Tribune* (30 January 2005) AA1, AA5. http://www.informationclearinghouse.info/article7960.htm.
Correction noted by the Washington Post: "Although that statement ["after the last tree is felled, Christ will come back"] has been widely attributed to Watt [James Watt, first Secretary of the Interior under Ronald Reagan], there is no historical record that he made it."

Nering, Evar D. 2001. The mirage of a growing fuel supply. *New York Times* (4 June 2001) Op-Ed page.

Nievergelt, Yves. 1987. Price elasticity of demand: Gambling, heroin, marijuana, whiskey, prostitution, and fish. UMAP Modules in Undergraduate Mathematics and Its Applications, Unit 674. *The UMAP Journal* 8 (1987): 31-61. Reprinted in *UMAP Modules: Tools for Teaching 1987*, edited by Paul J. Campbell, 151-181. Lexington, MA: COMAP, Inc., 1987.

Olinick, Michael. 1991. Modelling depletion of nonrenewable resources. *Mathematical and Computer Modelling* 15 (6): 91-95.

Rea, M.S., R.F. Dillon, and A.W. Levy. 1987. The effectiveness of light switch reminders in reducing light usage. *Lighting Research & Technology* 19 (3): 81-85.

Wikipedia. 2005. Saudi Arabia. http://en.wikipedia.org/wiki/Saudi_Arabia.
\ No newline at end of file
diff --git a/MCM/1995-2008/2005MCM/2005MCM.md b/MCM/1995-2008/2005MCM/2005MCM.md
new file mode 100644
index 0000000000000000000000000000000000000000..c1fdff0761d33f2504683a328ce5d7fb7b539e90
--- /dev/null
+++ b/MCM/1995-2008/2005MCM/2005MCM.md
@@ -0,0 +1,4876 @@

# The U

# M

Publisher

COMAP, Inc.

Executive Publisher

Solomon A. Garfunkel

ILAP Editor

Chris Arney

Associate Director,

Mathematics Division

Program Manager,

Cooperative Systems

Army Research Office

P.O. Box 12211

Research Triangle Park,

NC 27709-2211

David.Arney1@arl.army.mil

On Jargon Editor

Yves Nievergelt

Department of Mathematics

Eastern Washington University

Cheney, WA 99004

ynievergelt@ewu.edu

Reviews Editor

James M. Cargal

Mathematics Dept.

Troy University—Montgomery Campus

231 Montgomery St.
+ +Montgomery, AL 36104 + +jmcargal@sprintmail.com + +Chief Operating Officer + +Laurie W. Aragon + +Production Manager + +George W. Ward + +Director of Educ. Technology + +Roland Cheyney + +Production Editor + +Pauline Wright + +Copy Editor + +Timothy McLean + +Distribution + +Kevin Darcy + +John Tomicek + +Graphic Designer + +Daiva Kiliulis + +# AP Journal + +Vol. 26, No. 3 + +# Editor + +Paul J. Campbell + +Campus Box 194 + +Beloit College + +700 College St. + +Beloit, WI 53511-5595 + +campbell@beloit.edu + +# Associate Editors + +Don Adolphson + +Chris Arney + +Aaron Archer + +Ron Barnes + +Arthur Benjamin + +Robert Bosch + +James M. Cargal + +Murray K. Clayton + +Lisette De Pillis + +James P. Fink + +Solomon A. Garfunkel + +William B. Gearhart + +William C. Giauque + +Richard Haberman + +Jon Jacobsen + +Walter Meyer + +Yves Nievergelt + +Michael O'Leary + +Catherine A. Roberts + +John S. Robertson + +Philip D. Straffin + +J.T. Sutcliffe + +Brigham Young University + +Army Research Office + +AT&T Shannon Research Laboratory + +University of Houston-Downtown + +Harvey Mudd College + +Oberlin College + +Troy University—Montgomery Campus + +University of Wisconsin—Madison + +Harvey Mudd College + +Gettysburg College + +COMAP, Inc. + +California State University, Fullerton + +Brigham Young University + +Southern Methodist University + +Harvey Mudd College + +Adelphi University + +Eastern Washington University + +Towson University + +College of the Holy Cross + +Georgia Military College + +Beloit College + +St. Mark's School, Dallas + +# Membership Plus + +Individuals subscribe to The UMAP Journal through COMAP's Membership Plus. This subscription also includes a CD-ROM of our annual collection UMAP Modules: Tools for Teaching, our organizational newsletter Consortium, on-line membership that allows members to download and reproduce COMAP materials, and a $10\%$ discount on all COMAP purchases. + +(Domestic) #2520 $ 99 + +(Outside U.S.) 
#2521 $111 + +# Institutional Plus Membership + +Institutions can subscribe to the Journal through either Institutional Plus Membership, Regular Institutional Membership, or a Library Subscription. Institutional Plus Members receive two print copies of each of the quarterly issues of The UMAP Journal, our annual collection UMAP Modules: Tools for Teaching, our organizational newsletter Consortium, on-line membership that allows members to download and reproduce COMAP materials, and a $10\%$ discount on all COMAP purchases. + +(Domestic) #2570 $456 + +(Outside U.S.) #2571 $479 + +# Institutional Membership + +Regular Institutional members receive print copies of The UMAP Journal, our annual collection UMAP Modules: Tools for Teaching, our organizational newsletter Consortium, and a $10\%$ discount on all COMAP purchases. + +(Domestic) #2540 $198 + +(Outside U.S.) #2541 $220 + +# Web Membership + +Web membership does not provide print materials. Web members can download and reproduce COMAP materials, and receive a $10\%$ discount on all COMAP purchases. + +(Domestic) #2510 $41 + +(Outside U.S.) #2510 $41 + +To order, send a check or money order to COMAP, or call toll-free + +1-800-77-COMAP (1-800-772-6627). + +The UMAP Journal is published quarterly by the Consortium for Mathematics and Its Applications (COMAP), Inc., Suite 210, 57 Bedford Street, Lexington, MA, 02420, in cooperation with the American Mathematical Association of Two-Year Colleges (AMATYC), the Mathematical Association of America (MAA), the National Council of Teachers of Mathematics (NCTM), the American Statistical Association (ASA), the Society for Industrial and Applied Mathematics (SIAM), and The Institute for Operations Research and the Management Sciences (INFORMS). The Journal acquaints readers with a wide variety of professional applications of the mathematical sciences and provides a forum for the discussion of new directions in mathematical education (ISSN 0197-3622). 
+ +Periodical rate postage paid at Boston, MA and at additional mailing offices. + +# Send address changes to: info@comap.com + +COMAP, Inc. 57 Bedford Street, Suite 210, Lexington, MA 02420 + +Copyright 2005 by COMAP, Inc. All rights reserved. + +# Table of Contents + +# Editorial + +Back to the Future + +Solomon A. Garfunkel. 185 + +About This Issue (and Others to Come) 187 + +# Special Section on the MCM + +Results of the 2005 Mathematical Contest in Modeling + +Frank Giordano 189 + +Abstracts of the Outstanding Papers and the Fusaro Papers 217 + +From Lake Murray to a Dam Slurry + +Clay Hambrick, Katie Lewis, and Lori Thomas 229 + +Through the Breach: Modeling Flooding from a Dam Failure in + +South Carolina + +Jennifer Kohlenberg, Michael Barnett, and Scott Wood 245 + +Analysis of Dam Failure in the Saluda River Valley + +Ryan Bressler, Christina Polwarth, and Braxton Osting 263 + +Judge's Commentary: The Outstanding Flood Planning Papers + +Daniel Zwillinger 279 + +The Booth Tolls for Thee + +Adam Chandler, Matthew Mian, and Pradeep Baliga 283 + +A Single-Car Interaction Model of Traffic for a Highway Toll Plaza + +Ivan Corwin, Sheel Ganatra, Nikita Rozenblyum 299 + +Lane Changes and Close Following: Troublesome Tollbooth Traffic + +Andrew Spann, Daniel Kane, and Dan Gulotta 317 + +A Quasi-Sequential Cellular-Automaton Approach to + +Traffic Modeling + +John Evans and Meral Reyhan 331 + +The Multiple Single Server Queueing System + +Azra Panjwani, Yang Liu, and HuanHuan Qi 345 + +Two Tools for Tollbooth Optimization + +Ephrat Bitton, Anand Kulkarni, and Mark Shlimovich 355 + +For Whom the Booth Tolls + +Brian Camley, Bradley Klingenberg, and Pascal Getreuer 373 + +Judge's Commentary: The Outstanding Tollbooths Papers + +Kelly Black..... + +![](images/6de243d8ac552a7e20a7e2b722628aee8a565c0e28857164f4f9709e17ad5dc8.jpg) + +# Publisher's Editorial Back to the Future + +Solomon A. Garfunkel + +Executive Director + +COMAP, Inc. 
+ +57 Bedford St., Suite 210 + +Lexington, MA 02420 + +s.garfunkel@mail.comap.com + +First, a mea culpa. I recently attended a showing of the movie "Good Night and Good Luck" and was taken by the courage of Edward R. Murrow, Fred Friendly, et al. at CBS during the McCarthy era. And I felt embarrassed. For a number of years, I have watched several of my colleagues in mathematics and mathematics education work hard to destroy much of the progress that we have made—and I have been silent. The pressure to stay quiet is strong. COMAP lives in part on our ability to secure grants from the National Science Foundation. NSF doesn't enjoy controversy. Moreover, any number of the people that I see as destructive sit on proposal review panels from time to time. And they are very political. Single-issue politics is always quite ugly. I suspect that most if not all of this group would consider themselves to be liberals; but they have no trouble working with this most conservative of administrations as long as their views of mathematics education prevail—even if that means the end of science education at NSF. + +In the early 1970s, when I first became seriously involved in mathematics education and the world of proposals, grants, etc., there was a constant complaint from program officers at NSF: Mathematicians simply did not write proposals, and when called upon to review proposals, they invariably savaged their mathematical colleagues. Hence mathematics education grants were well below the numbers that the importance of the subject justified. NSF program officers came to mathematics conferences and all but begged us to ask them for money. They also pleaded with us to speak with one voice, that is, to iron out our differences, figure out for ourselves what the priorities should be, and get to work together on attacking the important problems rather than each other. 
+ +And believe it or not, by the end of the 1980s, mostly because of the courage of the NCTM leadership and the good offices of MSEB, we got our collective act together. We all quoted the Standards and A Nation at Risk, and Everybody Counts, + +and we wrote proposals to make the vision of those documents come to pass. We began to receive funding at a level that made real change possible—and we are reaping the benefits today with increased NAEP and SAT scores. + +So what did we decide to do? Shoot ourselves in whatever foot we could find! We began the math wars; we went back to savaging our colleagues; we made NSF look bad in the eyes of Congress; we gave succor and ammunition to our enemies; and we lost our intellectual honesty in the name of winning political favor. The result is an NSF science education budget that is greatly reduced and skewed against mathematics. Funding has moved dramatically from curriculum- and staff-development to "research" in mathematics education. People and organizations directly responsible for the demonstrated successes of the past two decades are being told that there is no room at the inn. In the name of practicality, we reward mediocrity. + +There are consequences of these actions. Just as we learned that natural disasters require competent people and institutions (and not simply political hacks at the helm), there will be real and serious consequences for mathematics and mathematics education because of the present funding environment. Of course, these consequences are several years away and so safe from existing politicians' blame or attention. Nevertheless, we are headed for disaster unless we have the courage to stand up and fight—today. This is precisely equivalent to the global warming debate. We are poisoning our profession just as we poison our atmosphere. And we are running out of time, because systems take as long to fix as they do to break. 
+ +I call on the staff of NSF to take credit for their own successes and tell the community about what they know works—even if the current administration wants to tell a different story. I call on the private foundations to step into the coming void and fill in until the current federal policies turn around. And I call on members of our community who understand what's at stake to stand up and be counted. + +# About the Author + +Sol Garfunkel received his Ph.D. in mathematical logic from the University of Wisconsin in 1967. He was at Cornell University and at the University of Connecticut at Storrs for 11 years and has dedicated the last 25 years to research and development efforts in mathematics education. He has been the Executive Director of COMAP since its inception in 1980. + +He has directed a wide variety of projects, including UMAP (Undergraduate Mathematics and Its Applications Project), which led to the founding of this Journal, and HiMAP (High School Mathematics and Its Applications Project), both funded by the NSF. For Annenberg/CPB, he directed three telecourse projects: For All Practical Purposes (in which he also appeared as the on-camera host), Against All Odds: Inside Statistics (still showing on late-night TV in New York!), and In Simplest Terms: College Algebra. He is currently co-director of the Applications Reform in Secondary Education (ARISE) project, a comprehensive curriculum development project for secondary school mathematics. + +# Important Note from the Editor + +# About This Issue (and Others to Come) + +Paul J. Campbell + +Editor + +This issue of The UMAP Journal represents a departure from past practice and a further step toward electronic publishing. + +For its first five years, the Journal published 512 pp/yr with a supplementary volume of UMAP Modules: Tools for Teaching (at extra cost to subscribers) that ran between 1044 and 1258 pp. Those were typescript pages that held only half the content of a page of today's Journal. 
In 1984, the Mathematical Contest in Modeling (MCM) was founded, the Journal began to be typeset, and the annual UMAP Modules: Tools for Teaching supplement was bundled into the COMAP membership option for receiving the Journal. That supplemental volume, which libraries shelve as a separate serial, collected together UMAP Modules—and later also ILAP Modules—from the year's issues of the Journal, together with additional Modules (particularly longer ones) for which there was no room in the Journal.

We aimed for four 92-page issues per year and devoted one issue to the MCM. The size of the MCM issue has varied with the number of Outstanding teams, the length of their papers (sometimes the size of a small telephone book), and my ability to edit the papers down to a modeling core. The last year that we had only 368 pp in the Journal was 1992; the MCM issue in fact became a double issue (and more). In 2000, we had 530 pp, and the combined total for the Journal and the Tools volume has varied over the past 10 years between 642 and 715 pp. This year, the MCM yielded a record 10 Outstanding teams.

Meanwhile, costs for publishing on paper have risen faster than the Journal's income.

Hence, spurred by the desire to control costs, but also by the intention to make the contents of the Journal and the Tools volume more usable by members, we have settled on the following plan:

COMAP members will receive four 92-page issues of the Journal, plus a CD-ROM bundled into the MCM issue.

Here are the particulars:

- As with other COMAP electronic products, the files on the CD-ROM will be Adobe Acrobat PDF files. In particular, color images that are rendered only in black and white in the printed copy will appear in the PDF files in color.
- The ICM and MCM issues, like the other two issues, will be limited to 92 print pages each.
- If an issue runs longer than 92 pp, some articles will not appear in print but only on the CD-ROM.
However, all articles on the CD-ROM will appear in the printed table of contents and are regarded as published in the Journal. Paging will run continuously, including in sequence articles that do not appear in printed form. So, if you notice that, say, page 350 in the printed copy is followed by page 403, your copy is not necessarily defective! The articles corresponding to the intervening pages should be on the CD-ROM.
- The CD-ROM will contain an entire year of Journal issues.
- The Tools volume will no longer appear as a printed volume but only in electronic form on the CD-ROM.
- We remind readers of COMAP's policy concerning usage of material appearing in the Journal, which applies to all material on the CD-ROM. The policy appears as a footnote on the first page of each article:

Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice. Abstracting with credit is permitted, but copyrights for components of this work owned by others than COMAP must be honored. To copy otherwise, to republish, to post on servers, or to redistribute to lists requires prior permission from COMAP.

We hope that you will find this arrangement, if not entirely satisfying, at least satisfactory. It will mean that we will not have to procrusteanize the content of the Journal to fit a fixed number of allocated pages. For example, we might otherwise need to select only two or three of the MCM Outstanding papers to publish (a hard task indeed!). Instead, we can continue to bring you the full content as in the past.
# Modeling Forum

# Results of the 2005 Mathematical Contest in Modeling

Frank Giordano, MCM Director

Naval Postgraduate School

1 University Circle

Monterey, CA 93943-5000

frgiorda@nps.navy.mil

# Introduction

A total of 664 teams of undergraduates, from 259 institutions and 306 departments in 10 countries, spent the second weekend in February working on applied mathematics problems in the 21st Mathematical Contest in Modeling (MCM).

The 2005 MCM began at 8:00 P.M. EST on Thursday, February 3 and ended at 8:00 P.M. EST on Monday, February 7. During that time, teams of up to three undergraduates were to research and submit an optimal solution for one of two open-ended modeling problems. Students registered, obtained contest materials, downloaded the problems at the appropriate time, and entered completion data through COMAP's MCM Website. After a weekend of hard work, solution papers were sent to COMAP on Monday. The top papers appear in this issue of The UMAP Journal.

Results and winning papers from the first 20 contests were published in special issues of Mathematical Modeling (1985-1987) and The UMAP Journal (1985-2004). The 1994 volume of Tools for Teaching, commemorating the tenth anniversary of the contest, contains all of the 20 problems used in the first 10 years of the contest and a winning paper for each year. Limited quantities of that volume and of the special MCM issues of the Journal for the last few years are available from COMAP. That volume is available on COMAP's special Modeling Resource CD-ROM (http://www.comap.com/product/?idx=613). In addition, COMAP will shortly release a new volume, The MCM at 21, which will contain all of the 20 problems from the second 10 years of the contest and a winning paper for each year.

This year's Problem A asked teams to develop a model showing the consequences of a massive dam failure.
Problem B asked teams to propose a model to help determine the optimal number of tollbooths in a barrier-toll plaza. The 10 Outstanding solution papers are published in this issue of *The UMAP Journal*, along with commentary from problem authors, contest judges, and outside experts. + +In addition to the MCM, COMAP also sponsors the Interdisciplinary Contest in Modeling (ICM) and the High School Mathematical Contest in Modeling (HiMCM). The ICM, which runs concurrently with MCM, offers a modeling problem involving concepts in operations research, information science, and interdisciplinary issues in security and safety. Results of this year's ICM are on the COMAP Website at http://www.comap.com/undergraduate/contests; results and Outstanding papers appeared in Vol. 26 (2005), No. 2. The HiMCM offers high school students a modeling opportunity similar to the MCM. Further details about the HiMCM are at http://www.comap.com/highschool/ contests. + +# Problem A: Flood Planning + +Lake Murray in central South Carolina is formed by a large earthen dam, which was completed in 1930 for power production. Model the flooding downstream in the event there is a catastrophic earthquake that breaches the dam. + +Two particular questions: + +1. Rawls Creek is a year-round stream that flows into the Saluda River a short distance downriver from the dam. How much flooding will occur in Rawls Creek from a dam failure, and how far back will it extend? +2. Could the flood be so massive downstream that water would reach up to the S.C. State Capitol Building, which is on a hill overlooking the Congaree River? + +# Problem B: Tollbooths + +Heavily-traveled toll roads such as the Garden State Parkway in New Jersey, Interstate 95, and so forth, are multilane divided highways that are interrupted at intervals by toll plazas. Because collecting tolls is usually unpopular, it is desirable to minimize motorist annoyance by limiting the amount of traffic disruption caused by the toll plazas. 
Commonly, a much larger number of tollbooths is provided than the number of travel lanes entering the toll plaza. Upon entering the toll plaza, the flow of vehicles fans out to the larger number of tollbooths; upon leaving the toll plaza, the flow of vehicles is required to squeeze back down to the same number of travel lanes as before the toll plaza. Consequently, when traffic is heavy, congestion increases upon departure from the toll plaza. When traffic is very heavy, congestion also builds at the entry to the toll plaza because of the time required for each vehicle to pay the toll.

Make a model to help you determine the optimal number of tollbooths to deploy in a barrier-toll plaza. Explicitly consider the scenario where there is exactly one tollbooth per incoming travel lane. Under what conditions is this more or less effective than the current practice? Note that the definition of "optimal" is up to you to determine.

# The Results

The solution papers were coded at COMAP headquarters so that names and affiliations of the authors would be unknown to the judges. Each paper was then read preliminarily by two "triage" judges at either Appalachian State University (Flood Planning Problem) or the National Security Agency (Tollbooths Problem). At the triage stage, the summary and overall organization are the basis for judging a paper. If the judges' scores diverged for a paper, the judges conferred; if they still did not agree on a score, a third judge evaluated the paper.

This year, an additional Regional Judging site was again operated at the U.S. Military Academy to accommodate the growing number of contest submissions.

Final judging took place at Harvey Mudd College, Claremont, California. The judges classified the papers as follows:
|  | Outstanding | Meritorious | Honorable Mention | Successful Participation | Total |
|---|---|---|---|---|---|
| Flood Planning Problem | 3 | 25 | 50 | 94 | 172 |
| Tollbooths Problem | 7 | 60 | 145 | 280 | 492 |
| Total | 10 | 85 | 195 | 374 | 664 |
The 10 papers that the judges designated as Outstanding appear in this special issue of *The UMAP Journal*, together with commentaries. We list those teams and the Meritorious teams (and advisors) below; the list of all participating schools, advisors, and results is in the Appendix.

# Outstanding Teams

Institution and Advisor; Team Members

# Flood Planning Papers

"From Lake Murray to a Dam Slurry"

Harvey Mudd College

Claremont, CA

Jon Jacobsen

Clay Hambrick

Katie Lewis

Lori Thomas

"Through the Breach: Modeling Flooding from a Dam Failure in South Carolina"

University of Saskatchewan

Saskatoon, SK, Canada

James Brooke

Jennifer Kohlenberg

Michael Barnett

Scott Wood

"Analysis of Dam Failure in the Saluda River Valley"

University of Washington

Seattle, WA

Rekha Thomas

Ryan Bressler

Christina Polwarth

Braxton Osting

# Tollbooths Papers

"The Booth Tolls for Thee"

Duke University

Durham, NC

William G. Mitchener

Adam Chandler

Matthew Mian

Pradeep Baliga

"A Single-Car Interaction Model of Traffic for a Highway Toll Plaza"

Harvard University

Cambridge, MA

Clifford H. Taubes

Sheel Ganatra

Ivan Corwin

Nikita Rozenblyum

"Lane Changes and Close Following: Troublesome Tollbooth Traffic"

Massachusetts Institute of Technology

Cambridge, MA

Martin Bazant

Andrew Spann

Daniel Kane

Dan Gulotta

"A Quasi-Sequential Cellular Automaton Approach to Traffic Modeling"

Rensselaer Polytechnic Institute

Troy, NY

Peter Kramer

John Evans

Meral Reyhan

"Two Tools for Tollbooth Optimization"

University of California, Berkeley

Berkeley, CA

L. Craig Evans

Ephrat Bitton

Anand Kulkarni

Mark Shlimovich

"The Multiple Single-Server Queueing System"

University of California, Berkeley

Berkeley, CA

Jim Pitman

Azra Panjwani

Yang Liu

Huan Qi

"For Whom the Booth Tolls"

University of Colorado

Boulder, CO

Anne Dougherty

Brian Camley

Bradley Klingenberg

Pascal Getreuer

# Meritorious Teams

Flood Planning Papers (25 teams)

Albion College, Albion, MI (Darren Mason)

Bucknell University, Lewisburg, PA (Karl Voss)

Carroll College, Helena, MT (Sam Alvey)

China University of Mining and Technology, Xuzhou, Jiangsu, China (Zhang Xingyong)

College of Science, Southeast University, Nanjing, Jiangsu, China (Jia Xingang)

Duke University, Durham, NC (Owen Astrachan)

Fudan University, Shanghai, Shanghai, China (Cai Zhijie)

Harvey Mudd College, Claremont, CA (Jon Jacobsen)

James Madison University, Harrisonburg, VA (James Sochacki)

Lewis and Clark College, Portland, OR (Robert Owens)

McGill University, Montreal, Quebec, Canada (Nilima Nigam)

Midlands Technical College, West Columbia, SC (John Long)

Nanjing University, Nanjing, Jiangsu, China (Bo Wen)

National University of Defense Technology, Changsha, Hunan, China (Yi Wu)

NC School of Science & Mathematics, Durham, NC (Daniel Teague)

United States Military Academy, West Point, NY (John Jackson)

University of Delaware, Newark, DE (Louis Rossi)

University of Electronic Science and Technology of China, Chengdu, Sichuan, China (Gao Qing)

University of Washington, Seattle, WA (James Morrow)

Western Washington University, Bellingham, WA (Saim Ural)

Wuhan University, Wuhan, Hubei, China (Deng Aijiao)

Wuhan University, Wuhan, Hubei, China (Hu Xinqi)

Youngstown State University, Youngstown, OH (Angela Spalsbury)

Zhejiang Gongshang University, Hangzhou, Zhejiang, China (Ding Zhengzhong)

Zhejiang University City College, Hangzhou, Zhejiang, China (Huang Huang)
Tollbooths Papers (60 teams)

Albertson College, Caldwell, ID (Michael Hitchman)

Asbury College, Wilmore, KY (David Couliette)

Asbury College, Wilmore, KY (Kenneth Rietz)

Beijing Normal University, School of Mathematical Sciences, Beijing, China (Huang Haiyang)

Bethel University, St. Paul, MN (William Kinney)

California Polytechnic State University, San Luis Obispo, CA (Jonathan Shapiro)

Central Washington University, Ellensburg, WA (Stuart Boersma)

Chongqing University, Chongqing, China (Li Fu)

Chongqing University, Chongqing, China (He Renbin)

College of Mount St. Joseph, Cincinnati, OH (Scott Sportsman)

Cornell University, Ithaca, NY (Alexander Vladimirsky)

Davidson College, Davidson, NC (Malcolm Campbell)

Davidson College, Davidson, NC (Mark Foley)

Duke University, Durham, NC (Owen Astrachan)

Duke University, Durham, NC (William Mitchener)

Eastern Oregon University, La Grande, OR (Anthony Tovar)

Harbin Institute of Technology Science Faculty, Harbin, Heilongjiang, China (Shang Shouting)

Harvey Mudd College, Claremont, CA (Ran Libeskind-Hadas) (two teams)

Hastings College, Hastings, NE (Dave Cooke)

Jiangsu University, Zhenjiang, Jiangsu, China (Gang Xu)

Kansas State University, Manhattan, KS (David Auckly)

Lafayette College, Easton, PA (Ethan Berkove)

Luther College, Decorah, IA (Reginald Laursen) (two teams)

Nanchang University, Nanchang, Jiangxi, China (Liao Chuangrong)

Northern Kentucky University, Highland Heights, KY (Gail Mackin)

Northwest University, Xi'an, Shaanxi, China (Wang Liantang)

School of Economics & Management, Tsinghua University, Beijing, China (Xie Qun)

School of Financial Mathematics, Peking University, Beijing, China (Lan Wu)

School of Mathematical Sciences, Peking University, Beijing, China (Liu Xufeng)

School of Science, Beijing University of Posts and Telecommunications, Beijing, China (Sun Hongxiang)

Shanghai Jiao Tong University, Shanghai, China (Song Baorui)

Shanghai Jiao Tong University, Shanghai, China (Huang Jianguo)

South China University of Technology, Guangzhou, Guangdong, China (Liu Shen Quan)

South-China Normal University, Guangzhou, Guangdong, China (Wang Henggeng)

Southeast University, Nanjing, Jiangsu, China (Dan He)

Southeast University, Nanjing, Jiangsu, China (Wang Liyan)

Truman State University, Kirksville, MO (Steve Smith)

Tsinghua University, Beijing, China (Hu Zhiming)

Tsinghua University, Beijing, China (Lu Mei)

University College Cork, Cork, Ireland (Donal Hurley)

University of California, Berkeley, Berkeley, CA (Lawrence Evans)

University of Colorado at Boulder, Boulder, CO (Michael Ritzwoller)

University of Delaware, Newark, DE (Louis Rossi)

University of Pittsburgh, Pittsburgh, PA (Christopher Earls)

University of Pittsburgh, Pittsburgh, PA (Jonathan Rubin)

University of Puget Sound, Tacoma, WA (DeWayne Derryberry)

University of Richmond, Richmond, VA (Kathy Hoke) (two teams)

University of Saskatchewan, Saskatoon, SK, Canada (James Brooke)

University of Western Ontario, London, ON, Canada (Allan MacIsaac)

Wake Forest University, Winston-Salem, NC (Miaohua Jiang) (two teams)

Wesleyan College, Macon, GA (Joseph Iskra)

Western Washington University, Bellingham, WA (Saim Ural)

Worcester Polytechnic Institute, Worcester, MA (Suzanne Weekes)

Wuhan University, Wuhan, Hubei, China (Chen Wenyi)

Wuhan University of Technology, Wuhan, Hubei, China (Huang Wei)

Zhejiang University, Hangzhou, Zhejiang, China (Yong He)

# Awards and Contributions

Each participating MCM advisor and team member received a certificate signed by the Contest Director and the appropriate Head Judge. 
INFORMS, the Institute for Operations Research and the Management Sciences, recognized the teams from the University of Washington (Flood Planning Problem) and the University of California, Berkeley (Advisor: Jim Pitman) (Tollbooths Problem) as INFORMS Outstanding teams and provided the following recognition:

- a letter of congratulations from the current president of INFORMS to each team member and to the faculty advisor;
- a check in the amount of $300 to each team member;
- a bronze plaque for display at the team's institution, commemorating their achievement;
- individual certificates for team members and faculty advisor as a personal commemoration of this achievement;
- a one-year student membership in INFORMS for each team member, which includes their choice of a professional journal plus the OR/MS Today periodical and the INFORMS society newsletter;
- a one-year subscription to the COMAP modeling materials Website for the faculty advisor.

The Society for Industrial and Applied Mathematics (SIAM) designated one Outstanding team from each problem as a SIAM Winner. The teams were from Harvey Mudd College (Flood Planning Problem) and Rensselaer Polytechnic Institute (Tollbooths Problem). Each team member was awarded a $300 cash prize, and the teams received partial expenses to present their results in a special Minisymposium at the SIAM Annual Meeting in New Orleans in July. Their schools were given a framed hand-lettered certificate in gold leaf.

The Mathematical Association of America (MAA) designated one Outstanding team from each problem as an MAA Winner. The teams were from the University of Saskatchewan (Flood Planning Problem) and Duke University (Tollbooths Problem). With partial travel support from the MAA, both teams presented their solutions at a special session of the MAA Mathfest in Albuquerque, NM in August. Each team member was presented a certificate by Richard S. Neal, Co-Chair of the MAA Committee on Undergraduate Student Activities and Chapters.

# Ben Fusaro Award

Two Meritorious papers were selected for the Ben Fusaro Award, named for the Founding Director of the MCM and awarded for the second time this year. It recognizes an especially creative approach; details concerning the award, its judging, and Ben Fusaro are in *The UMAP Journal* 25 (3) (2004): 195-196. The Ben Fusaro Award teams were from McGill University (Flood Planning Problem) and the University of California, Berkeley (Advisor: Lawrence Evans) (Tollbooths Problem). Each team received a plaque from COMAP.

# Judging

Director

Frank R. Giordano, Naval Postgraduate School, Monterey, CA

Associate Directors

Robert L. Borrelli, Mathematics Dept., Harvey Mudd College, Claremont, CA

Patrick J. Driscoll, Dept. of Systems Engineering, U.S. Military Academy, West Point, NY

Contest Coordinator

Kevin Darcy, COMAP Inc., Lexington, MA

# Flood Planning Problem

Head Judge

Marvin S. Keener, Executive Vice-President, Oklahoma State University, Stillwater, OK (MAA)

Associate Judges

Peter Anspach, National Security Agency, Ft. Meade, MD (Triage)

Courtney Coleman, Mathematics Dept., Harvey Mudd College, Claremont, CA (SIAM)

Ben Fusaro, Mathematics Dept., Florida State University, Tallahassee, FL

Jerry Griggs, Mathematics Dept., University of South Carolina, Columbia, SC

John Kobza, Mathematics Dept., Texas Tech University, Lubbock, TX (INFORMS)

Michael Moody, Olin College of Engineering, Needham, MA

Kathleen M. Shannon, Dept. of Mathematics and Computer Science, Salisbury University, Salisbury, MD (MAA)

Daniel Zwillinger, Newton, MA (SIAM)

# Tollbooths Problem

Head Judge

Maynard Thompson, Mathematics Dept., Indiana University, Bloomington, IN

Associate Judges

William C. Bauldry, Chair, Dept. 
of Mathematical Sciences, Appalachian State University, Boone, NC (Triage)

Kelly Black, Mathematics Dept., University of New Hampshire, Durham, NH (SIAM)

Karen D. Bolinger, Mathematics Dept., Clarion University of Pennsylvania, Clarion, PA (SIAM)

J. Douglas Faires, Youngstown State University, Youngstown, OH (SIAM)

William P. Fox, Mathematics Dept., Francis Marion University, Florence, SC

Mario Juncosa, RAND Corporation, Santa Monica, CA (retired)

Don Miller, Mathematics Dept., St. Mary's College, Notre Dame, IN

John L. Scharf, Mathematics Dept., Carroll College, Helena, MT

Dan Solow, Mathematics Dept., Case Western Reserve University, Cleveland, OH (INFORMS)

Michael Tortorella, Dept. of Industrial and Systems Engineering, Rutgers University, Piscataway, NJ

Marie Vanisko, Dept. of Mathematics, California State University, Stanislaus, CA (MAA)

Richard Douglas West, Francis Marion University, Florence, SC

# Regional Judging Session

Head Judge

Patrick J. Driscoll, Dept. of Systems Engineering

Associate Judges

Darrall Henderson, Dept. of Mathematical Sciences

Steven Henderson, Dept. of Systems Engineering

Steven Horton, Dept. of Mathematical Sciences

Michael Jaye, Dept. of Mathematical Sciences

—all of the U.S. Military Academy, West Point, NY

# Triage Sessions

# Flood Planning Problem

Head Triage Judge

Peter Anspach, National Security Agency (NSA), Ft. Meade, MD

Associate Judges

Dean McCullough, High Performance Technologies, Inc.

Robert L. Ward (retired)

Blair Kelly, Craig Orr, Brian Pilz, Eric Schram, and other members of NSA

# Tollbooths Problem

Head Triage Judge

William C. Bauldry, Chair

Associate Judges

Terry Anderson, Mark Ginn, Jeff Hirst, Rick Klima, Katie Mawhinney, and Vickie Williams

—all from Dept. 
of Mathematical Sciences, Appalachian State University, Boone, NC

# Fusaro Award Committee

Flood Planning Problem:

Peter Anspach, National Security Agency, Ft. Meade, MD

Michael Moody, Olin College of Engineering, Needham, MA

Tollbooths Problem:

William C. Bauldry, Chair, Dept. of Mathematical Sciences, Appalachian State University, Boone, NC

Kathleen M. Shannon, Dept. of Mathematics and Computer Science, Salisbury University, Salisbury, MD

# Sources of the Problems

The Flood Planning Problem was contributed by Jerry Griggs (Mathematics Dept., University of South Carolina, Columbia, SC).

The Tollbooths Problem was contributed by Michael Tortorella (Dept. of Industrial and Systems Engineering, Rutgers University, Piscataway, NJ).

# Acknowledgments

Major funding for the MCM is provided by the National Security Agency and by COMAP. We thank Dr. Gene Berg of NSA for his coordinating efforts. Additional support is provided by the Institute for Operations Research and the Management Sciences (INFORMS), the Society for Industrial and Applied Mathematics (SIAM), and the Mathematical Association of America (MAA). We are indebted to these organizations for providing judges and prizes.

This year we have two new sponsors, whom we thank for their involvement and support:

- IBM Business Consulting Services, Center for Business Optimization; and
- Two Sigma Investments. (This group of experienced, analytical, and technical financial professionals based in New York builds and operates sophisticated quantitative trading strategies for domestic and international markets. The firm is successfully managing several billion dollars using highly automated trading technologies. For more information about Two Sigma, please visit http://www.twosigma.com.)

We thank the MCM judges and MCM Board members for their valuable and unflagging efforts. Harvey Mudd College, its Mathematics Dept. staff, and Prof. Borrelli were gracious hosts to the judges. 
# Cautions

To the reader of research journals:

Usually a published paper has been presented to an audience, shown to colleagues, rewritten, checked by referees, revised, and edited by a journal editor. Each of the student papers here is the result of undergraduates working on a problem over a weekend; allowing substantial revision by the authors could give a false impression of accomplishment. So these papers are essentially au naturel. Editing (and sometimes substantial cutting) has taken place: Minor errors have been corrected, wording has been altered for clarity or economy, and style has been adjusted to that of *The UMAP Journal*. Please peruse these student efforts in that context.

To the potential MCM Advisor:

It might be overpowering to encounter such output from a weekend of work by a small team of undergraduates, but these solution papers are highly atypical. A team that prepares and participates will have an enriching learning experience, independent of what any other team does.

COMAP's Mathematical Contest in Modeling and Interdisciplinary Contest in Modeling are the only international modeling contests in which students work in teams. Centering its educational philosophy on mathematical modeling, COMAP uses mathematical tools to explore real-world problems. It serves the educational community as well as the world of work by preparing students to become better-informed and better-prepared citizens.

# Appendix: Successful Participants

KEY:

P = Successful Participation

H = Honorable Mention

M = Meritorious

O = Outstanding (published in this special issue)

INSTITUTIONCITYADVISORAB
CALIFORNIA
Cal Poly PomonaPomonaHubertus von BremenP
Ioana MihailaP
Peter SiegelP
California Baptist U.RiversideCatherine KongP
Calif. Poly. State U.San Luis ObispoJonathan ShapiroM,P
Calif. State Poly. U.PomonaKurt VandervoortP
Calif. State U.SeasideHongde HuP
Jeffrey GroahP
Hartnell CollegeSalinasKelly LockeP
Harvey Mudd College (CS)ClaremontJon JacobsenO,M
Ran Libeskind-HadasM,M
Pomona CollegeClaremontAmi RadunskayaP
Univ. of California (Stat)BerkeleyL. Craig EvansO,M
Jim PitmanO
COLORADO
Colorado CollegeColorado SpringsDavid BrownP
Colorado State Univ.PuebloBruce LundbergP
Regis UniversityDenverDavid BahrH,P
USAF AcademyUSAFTimothy CooleyPH
James RolfP
Univ. of ColoradoBoulderAnne DoughertyO
Bengt FornbergH
Michael RitzwollerM
DenverLynn BennethumH
Michael JacobsonH
U. of Northern Colo.GreeleyNathaniel MillerP
CONNECTICUT
Southern Conn. State U.New HavenRoss GingrichH
Western Conn. State U.DanburyJosephine HamerP
DELAWARE
Univ. of DelawareNewarkLouis RossiMM
FLORIDA
Embry-Riddle UniversityDaytona BeachGreg SpradlinH
Jacksonville UniversityJacksonvilleRobert HollisterH
GEORGIA
Georgia Southern Univ.StatesboroLaurene FausettHP
State Univ. of West GeorgiaCarrolltonScott GordonH
Wesleyan CollegeMaconCharles BeneshP
Joseph IskraM,H
IDAHO
Albertson CollegeCaldwellMichael HitchmanPM
Idaho State UniversityPocatelloRobert Van KirkP
ILLINOIS
Greenville CollegeGreenvilleGeorge PetersP
Illinois Institute of Tech.ChicagoMichael PelsmajerP
Monmouth CollegeMonmouthHoward DwyerH
Christopher FasanoH
Northern Illinois Univ.DeKalbChris HurlburtH,H
Wheaton CollegeWheatonPaul IsiharaP
INDIANA
Earlham College (CS)RichmondMic JacksonPP
Charlie PeckP
Franklin CollegeFranklinJohn BoardmanP
Rose-Hulman Inst. of Tech.Terre HauteDavid RaderH,H
Saint Mary's CollegeNotre DameJoanne SnowH,P
IOWA
Grinnell CollegeGrinnellCharles CunninghamP
Karen ShumanP,P
Luther CollegeDecorahSteve HubbardP
Reginald LaursenM,M
Mt. Mercy CollegeCedar RapidsK.R. KnoppP
Simpson CollegeIndianolaJames BohyH
Jeff ParmeleeP
Murphy WaggonerH,P
Wartburg CollegeWaverlyBrian BirgenP
KANSAS
Emporia State UniversityEmporiaBrian HollenbeckP
Kansas State UniversityManhattanDavid AucklyM,P
KENTUCKY
Asbury CollegeWilmoreDavid CoullietteM
Kenneth RietzM
Brescia UniversityOwensboroChris TiahrtP
Morehead State UniversityMoreheadMichael DobranskiP
Northern Kentucky UniversityHighland HeightsGail MackinPM
Thomas More CollegeCrestview HillsRobert RiehemannP
MAINE
Colby CollegeWatervilleJan HollyP
MARYLAND
Hood CollegeFrederickBetty MayfieldH
Johns Hopkins UniversityBaltimoreGreg EyinkH
Fred TorcasoH,P
Loyola CollegeBaltimoreJiyuan TaoP
Mount St. Mary's UniversityEmmitsburgFred PortierPP
Salisbury UniversitySalisburyMichael BardzellP
Villa Julie CollegeStevensonEileen McGrawP
Washington CollegeChestertownEugene HamiltonP
MASSACHUSETTS
College of the Holy CrossWorcesterGareth RobertsP
Harvard UniversityCambridgeClifford TaubesO
MITCambridgeMartin BazantO
Olin College of EngineeringNeedhamBurt TilleyH
Salem State CollegeSalemKenny ChingP
Simon's Rock CollegeGreat BarringtonAllen AltmanP,P
Michael BergmanPP
Smith CollegeNorthamptonRuth HaasH
University of MassachusettsLowellJames Graham-EagleP
Western New England CollegeSpringfieldLorna HanesP
Worcester Polytechnic InstituteWorcesterSuzanne WeekesM,P
MICHIGAN
Albion CollegeAlbionDarren MasonMP
Ferris State UniversityBig RapidsHolly PriceH
Lawrence Technological UniversitySouthfieldRuth FavroHP
Valentina TobosH
Siena Heights UniversityAdrianPamela WartonP,P
Tim HusbandH
MINNESOTA
Bethel UniversitySt. PaulWilliam KinneyM,P
Minnesota State UniversityMoorheadEllen HillP
Saint John's UniversityCollegevilleRobert HesseH
MISSOURI
Drury UniversitySpringfieldBruce CallenP
Bob RobertsonHP
Northwest Missouri State UniversityMaryvilleRussell EulerH
Saint Louis UniversitySt. LouisJames DowdyH
Southeast Missouri State UniversityCape GirardeauRobert SheetsP
Truman State UniversityKirksvilleSteve SmithM
MONTANA
Carroll CollegeHelenaSam AlveyMP
Kelly ClineH,P
NEBRASKA
Hastings CollegeHastingsDave CookeM
NEW JERSEY
New Jersey Institute of TechnologyNewarkRoy GoodmanP
Rowan UniversityGlassboroHieu NguyenH,H
NEW MEXICO
New Mexico TechSocorroBrian BorchersP
NEW YORK
Clarkson UniversityPotsdamKathleen FowlerHH
William HesseP
Colgate UniversityHamiltonWarren WeckesserH
Concordia CollegeBronxvilleJohn LoaseH,P
Cornell UniversityIthacaAlexander VladimirskyPM
Hobart and William Smith CollegesGenevaScotty OrrP
Ithaca CollegeIthacaJohn MaceliH
Nazareth CollegeRochesterDaniel BirmajerH
Rensselaer Polytechnic InstituteTroyPeter KramerO,P
Roberts Wesleyan CollegeRochesterGary RadunsP
United States Military AcademyWest PointJ. BillieH
John JacksonM
Sakura TherrienP
Westchester Community CollegeValhallaMarvin LittmanP
NORTH CAROLINA
Appalachian State UniversityBooneHolly HirstP
Davidson CollegeDavidsonMalcolm CampbellM
Tim ChartierH,H
Mark FoleyM
Duke UniversityDurhamWilliam MitchenerO,M
(CS)Owen AstrachanMM
Meredith CollegeRaleighCammey ColeP
NC School of Science & Math.DurhamDaniel TeagueMP
Wake Forest UniversityWinston-SalemMiaohua JiangM,M
OHIO
Bowling Green State Univ.Bowling GreenJuan BesP
College of Mount St. JosephCincinnatiScott SportsmanM
Malone CollegeCantonDavid HahnP
Miami UniversityOxfordDoug WardP
University of DaytonDaytonYoussef RaffoulH
Youngstown State University (CS)YoungstownAngela SpalsburyMH
Michael CrescimannoP
OKLAHOMA
Oklahoma State UniversityStillwaterLisa MantiniH
OREGON
Eastern Oregon UniversityLa GrandeDavid AllenP
Anthony TovarM
Lewis and Clark CollegePortlandRobert OwensMP
Linfield CollegeMcMinnvilleJennifer NordstromP,P
Pacific UniversityForest GroveChristine GuentherP
Southern Oregon UniversityAshlandKemble YatesH
Western Oregon UniversityMonmouthMaria FungP
PENNSYLVANIA
Bloomsburg UniversityBloomsburgKevin FerlandHP
Bucknell UniversityLewisburgKarl VossM
Clarion Univ. of PennsylvaniaClarionDana MadisonH
Drexel UniversityPhiladelphiaHugo WoerdemanP
Gannon UniversityErieMichael CaulfieldH,P
Gettysburg CollegeGettysburgBogdan DoytchinovP
Juniata CollegeHuntingdonJohn BukowskiH
Lafayette CollegeEastonEthan BerkoveM,P
Slippery Rock UniversitySlippery RockRichard MarchandP
University of Pittsburgh (Eng)PittsburghJonathan RubinHM
Christopher EarlsM
Westminster CollegeNew WilmingtonBarbara FairesH
SOUTH CAROLINA
Benedict CollegeColumbiaBalaji IyangarP
Francis Marion UniversityFlorenceThomas FitzkeeP
Midlands Technical CollegeWest ColumbiaJohn LongM,H
SOUTH DAKOTA
Mount Marty CollegeYanktonBonita GacnikP
Stephanie GruverPP
James MinerP
SD School of Mines and TechnologyRapid CityRobert KowalskiP
Kyle RileyH
TENNESSEE
Austin Peay State UniversityClarksvilleNell RayburnP
TEXAS
Angelo State UniversitySan AngeloKarl HavlakP
Austin CollegeShermanJohn JaromaP
Trinity UniversitySan AntonioRichard CooperP
Allen HolderP
VIRGINIA
Eastern Mennonite UniversityHarrisonburgCharles CooleyP
Leah BoyerH,H
James Madison UniversityHarrisonburgHasan HamdanH
Caroline SmithP
James SochackiM
Maggie Walker Governor's SchoolRichmondJohn BarnesP,P
Harold HoughtonP,P
Roanoke CollegeSalemJeffrey SpielmanP
University of RichmondRichmondKathy HokeM,M
Virginia Western Community CollegeRoanokeSteve HammerP
Ruth ShermanP
WASHINGTON
Central Washington UniversityEllensburgStuart BoersmaM
Heritage UniversityToppenishRichard SwearingenPH
Pacific Lutheran UniversityTacomaDaniel HeathP,P
University of Puget SoundTacomaDeWayne DerryberryM
University of WashingtonSeattleJames MorrowMH
Rekha ThomasOH
Western Washington UniversityBellinghamSaim UralMM
Tjalling YpmaP,P
WISCONSIN
Northland CollegeAshlandWilliam LongP,P
CANADA
Dalhousie UniversityHalifaxDorothea PronkP,P
McGill UniversityMontrealAntony HumphriesH
Nilima NigamM
University of SaskatchewanSaskatoonJames BrookeOM
University of Western OntarioLondonAllan MacIsaacM
York UniversityTorontoHongmei ZhuP
Huaiping ZhuHP
CHINA
Anhui
Anhui UniversityHefeiWang XuejunH
Zhu XiaobaoP
Zhang QuanbingP
Wu YunqiP
Anhui Univ. of Technology and ScienceWuhuWang ChuanyuP
Hefei University of TechnologyHefeiGu JunliH
Zheng QiP
Du XueqiaoP
Huang YouduP
Univ. of Science and Technology of ChinaHefeiLiu YanjunP
Huang ZhangjinP
Yang ZhouwangP
(CS)Sun GuangzhongP
Beijing
BeiHang UniversityBeijingWu SanxingH
Beijing Institute of Science and TechnologyBeijingSun HuafeiH
Beijing Institute of TechnologyBeijingWang HongzhouHP
Yan GuifengHP
Beijing Jiaotong UniversityBeijingWu FaenP
Wang XiujuanP,P
(Eng)Deng XiaoqinP
(Info)Wang BingtuanP,P
(Sci)Feng GuochenP
Liu MinghuiHP
Wang XiaoxiaH
Beijing Language and Culture University (CS)BeijingLiu GuilongPP
Beijing Materials InstituteBeijingLi ZhenpingPP
Cheng XiaohongP,P
Beijing Normal UniversityBeijingCui HengjianPH
Shen FuxingH
He QingP,P
Peng FanglinP
Wang JiayinH
Huang HaiyangM
Liu LaifuP
Beijing University of Chemical TechnologyBeijingLiu DaminP
Jiang GuangfengH
Yuan WenyanP
Jiang XinhuaP
Beijing University of Posts and Telecomm.BeijingDing JinkouH
Zhang WenboP
Wu YunfengH
(Sci)Sun HongxiangM
He ZuguoHH
(Applied Sci)Xue YiP
Beijing University of TechnologyBeijingChang JingangP
Guo SiliP
Yang ShilinPP
Central University of Finance and EconomicsBeijingGe BinhuaHH
Huang HuiqingH,P
China Agricultural UniversityBeijingZou HuiH,P
Peking UniversityBeijingWang MingP
Deng MinghuaH,P
(CS)Tang HuazhongP
(Econ)Lan WuM
Liu XufengM,P
Renmin University of China (Statistics)BeijingJin YangP
Tsinghua UniversityBeijingHu ZhimingM,H
Lu MeiM,H
(Econ)Xie QunM
Chongqing
Chongqing UniversityChongqingLi ChuandongP
Liu QiongfangP
He RenbinM
Duan ZhengminH
Wang ZongliP
(Chem)Li ZhiliangH
(CS)Fu LiM
Fujian
Xiamen University (Info)XiamenZheng XiaolianH
Guangdong
Jinan UniversityGuangzhouHu DaiqiangH
Fan SuohaiP
(CS)Luo ShizhuangH
(Electronics)Ye ShiqiP
Shandong UniversityJinanMa JianhuaP
South-China Normal UniversityGuangzhouWang HenggengM
(CS)Li HunanPP
(Info)Yu JianhuaH
(Phys)Liu XiuxiangH,P
South China University of TechnologyGuangzhouLiang Man FaP
Liu Shen QuanM
Qin Yong AnP
Liu Xiao LanH
Sun Yat-Sen UniversityGuangzhouFeng GuoH
Jiang Xiao LongH
Chen Ze PengP
Yuan ZhouH
Hebei
Hebei Polytechnic UniversityTangshanWan XinghuoH
Xiao JixianP
Tan YiliH
North China Electric Power UniversityBaodingGu GendaiH
Liu JinggangH
Shi HuifengH
Zhang PoP
Shijiazhuang University of EconomicsShijiazhuangPeng JianpingP
Kang NaP
Heilongjiang
Jia Mu-Si UniversityJia Mu-siFan WeiH
Zhang HongPP
Harbin Engineering UniversityHarbinYu FeiP
Zhang XiaoWeiP
Luo Yue ShengP
Harbin Institute of TechnologyHarbinShang ShoutingHM
Zhang ChipingP,P
Jiao GuanghongP,P
Liu KeanPH
Wang XilianHP
(Econ)Wei ShangH
(Sci)Hong GeP,P
Harbin Medical UniversityHarbinWang QiangHuP,P
Harbin University of Science and TechnologyHarbinLi DongmeiH
Chen DongyanH
Tian GuangyueH
Wang ShuzhongH
Northeast Agricultural UniversityHarbinLi FanggeP
Hubei
China University of Geosciences (CS)WuhanLuo WenqiangP
Cai ZhihuaP
Huazhong University of Science & TechnologyWuhanYuan LinjieP
Wang YongjiP
Wuhan UniversityWuhanDeng AijiaoMH
Zhong LiuyiH
Chen WenyiM,H
Hu XinqiM
Yi XumingP
(Eng)Luo ZhuangchuP
Wuhan University of TechnologyWuhanChen YeH
Chu JieP
He LangH
Huang WeiM
Li GuangH,P
Hunan
Central South UniversityChangshaHe WeiH
Yi KunnanP
Zhang HongyanP
(Bio)Zhang DianzhongP
Hunan UniversityChangshaLi XiaopeiP
(Applied Math.)Ma BolinP
(Info)Ma ChuanxiuP
(Stat)Luo HanP
National University of Defense TechnologyChangshaDuan XiaojunP
Mao ZiyangP
(Math. & System Science)Cheng LizhiP
Wu YiM
Inner Mongolia
Inner Mongolia UniversityHohhotWang MeiP
Ma ZhuangP
Jiangsu
China Univ. of Mining and TechnologyXuzhouWu ZongxiangH
Zhang XingyongM
Zhu KaiyongH,P
HoHai UniversitySuzhouRong ShenP
Jiangsu UniversityZhenjiangXu GangM,H
Li YiminH,P
Nanjing UniversityNanjingWu ZhaoyangP
Chunying DuanP
Yao TianxingH,H
(Phys)Wen BoM
Nanjing Univ. of Finance and EconomicsNanjingWang GengP
Nanjing Univ. of Posts and Telecomm.NanjingHe MingH,H
Nanjing University of Sci. & Tech.NanjingXu ChungenH
Liu LiweiP
Chen PeixinH
Zhang ZhengjunP
Southeast UniversityNanjingHe DanM
Wang LiyanM
Zhang ZhiqiangPP
(Sci)Jia XingangM,P
Sun ZhizhongP,P
Xuzhou Institute of TechnologyXuzhouJiang YingziHH
Jiangxi
East China Inst. of Tech. (Foreign Lang.)FuzhouCai YingP
Jiangxi Normal UniversityNanchangWu GengxiuH
XiongjunP
Nanchang UniversityNanchangChen TaoH
Chen YujuP
Liao ChuangrongM
Ma Xinsheng MaP
Jilin
Jilin UniversityChangchunZou YongkuiH,P
(Bio)Zhou LaiH
(Eng)Fang PeichenH,H
Pei YongchenP,P
Northeast Normal UniversityChangchunLi ZuofengPP
Liaoning
Dalian Maritime University (CS)DalianZhang YunjiePP
Yang ShuqinP,P
Dalian Nationalities University (CS)DalianGuo QiangPH
Li XiaoniuH,H
Dalian University (Info)DalianTan XinxinH,P
Gang JiataiP
Dalian University of TechnologyDalianYu HongquanH,P
Liu JianguoP
Zhao LizhongH,H
Wang YiH
Li LianfuP
Gao XubinP,P
(Inst. of Univ. Students' Innovation)Zhou QiH,H
Pan QiuhuiH
Liaoning High Police Academic SchoolDalianShen CongP,P
Northeastern University (Info)ShenyangSun PingH,P
Hao PeifengH,P
He XuehongH,H
(Eng) (CS)Cui JianjiangPH
Liu HuilinPH
Shenyang Institute of Aero. EngineeringShenyangShan, FengP,P
Zhu LimeiH,H
Shaanxi
Northwestern Polytechnical University, Xi'an: Zhao Xuanmin, H
Sun Hao, P
Liu Xiaodong, P
(Chem) Peng Guohua, P
Zhang Shenggui, P
(Phys) Shi Yimin, H
Xiao Huayong, H
Northwest University, Xi'an: Dou Jihong, P
He Ruichan, P
Wang Liantang, M
Xi'an Communication Institute (Info), Xi'an: Wang Hong, P
Kang Jinlong, P
Song Xiaofeng, P
(Sci) Li Guo, P
Yang Dongsheng, P
Zhang Jianhang, P
Jiang Yan, P
Xi'an Jiaotong University, Xi'an: He Xiaoliang, H,H
Dai Yonghong, H
(Applied Math.) Zhou Yicang, P
Xidian University, Xi'an: Liu Hongwei, P
Bo Liefeng, P
Ye Feng, P
Tang Houjian, P
Shandong
Shandong University, Jinan: Liu Baodong, P
Huang Shuxiang, P
Huang Shuxiang, P,P
Ma Jianhua, P,P
Huang Shuxiang, P
(CS) Liu Dong, P
Shanghai
Donghua University, Shanghai: You Surong, P
Chen Chao, P
He Guoxin, P
Wang Zhijie, P
East China University of Sci. and Technology, Shanghai: Liu Zhaohui, P
Qin Yan, H
Su Chunjie, P
Sun Jun, P
Wang Haitao, P
(Bio) Chen Haoming, P
Fudan University, Shanghai: Cao Yuan, P
Cai Zhijie, M
Jiading No. 1 Middle School, Shanghai: Xie Xilin and Fang Yunping, P,P
Shanghai Foreign Language School, Shanghai: Pan Liquin, H,H
Sun Yu, H,P
Shanghai Jiao Tong University, Shanghai: Song Baorui, M
Huang Jianguo, M,P
(Minhang Branch), Shanghai: Zhou Gang, P,P
Zhou Guobiao, P,P
Shanghai Normal University, Shanghai: Liu Rongguan, P
Guo Shenghuan, P
Shi Yongbing, H
Zhang Jizhou and Zhu Detong, P
Shanghai University of Finance and Economics, Shanghai: Dong Dong-cheng, P
Yu Juntai, H
Yin Chenyuan, H
Li Tao, H
Shanghai Xiangming High School, Shanghai: Feng Qiang, P,P
Shanghai Youth Centre of Sci. and Tech. Educ., Shanghai: Chen Gan, P
Shanghai Yucai High School, Shanghai: Li Zhengtai, P
Tongji University, Shanghai: Zhang Hualong, P
Chen Xiongda, H
Gui Zipeng, P
Sichuan
Chengdu University of Technology, Chengdu: Yuan Yong, P
Wei Youhua, P
Sichuan University, Chengdu: Niu Hai, P
Zhou Jie, H
Univ. of Electronic Sci. and Tech. of China, Chengdu: Gao Qing, M,H
Qin Siyi, H
Xu Quanzi, H
Southwest Jiaotong University, E'mei: Zhao Lianwen, P,P
Tianjin
Nankai University, Tianjin: Wang Yi, P
Zhang Chunsheng, P
Chen Dianfa, P
Zhou Xingwei, H
Wang Zhaojun, P
Tianjin University, Tianjin: Liang Fengzhen, H
Xu Genqi, P,P
Lan Guoliang, H
Rong Xin, H
Zhejiang
Zhejiang Gongshang University, Hangzhou: Ding Zhengzhong, M,P
Hua Jiukun, H,P
Zhejiang Sci-Tech Univ. (Academy of Science), Hangzhou: Hu Jueliang, H
Luo Hua, H
Zhejiang University, Hangzhou: Yang Qifan, H
He Yong, M,H
Tan Zhiyi, P
(City College) Huang Waibin, M
Wang Gui, P
Kang Xusheng, P
Zhao Yanan, P,P
(Chu Kechen Honors College) Wu Jian, H
Chen Lingxi, H,P
Zhou Yongming, H
(Ningbo Institute of Technology), Ningbo: Sun Haina, P
Tu Lihui, P,P
Li Zhening, P
Zhejiang Univ. of Finance and Economics, Hangzhou: Wang Fulai, P
Luo Ji, P
Zhejiang University of Technology, Hangzhou: Zhou Minghua, P
Wu Xuejun, P
(Jianxing College) Wang Shiming, P,P
FINLAND
Helsinki Mathematical High School, Helsinki: Terhi Olkkonen, P,P
Päivölä College, Tarttila: Merikki Lappi, P,P
GERMANY
International University Bremen, Bremen: Peter Oswald, H
Universität Karlsruhe, Karlsruhe: Lars Behnke, P
HONG KONG
City University of Hong Kong, Hong Kong: Ho To Ming, P
Hong Kong Baptist University, Kowloon: C.S. Tong, P
Wai Chee Shiu, P
KOREA
Korea Adv. Inst. of Sci. and Tech. (KAIST), Daejeon: Chang-Ock Lee, H,H
INDONESIA
Institute of Technology Bandung, Bandung: Kuntjoro Sidarto, H
Rieske Hadianti, P
IRELAND
Trinity College Dublin, Dublin: Conor Houghton, P
University College Cork, Cork: Andrew Usher, H
Donal Hurley, M
James Grannell, H
SOUTH AFRICA
University of Stellenbosch, Stellenbosch: Jan van Vuuren, H,P
Abbreviations for Organizational Unit Types (in parentheses in the listings)
(none) = Mathematics: M; Pure M; Applied M; Computing M; M and Computer Science; M and Computational Science; M and Information Science; M and Statistics; M, Computer Science, and Statistics; M, Computer Science, and Physics; Mathematical Sciences; Applied Mathematical and Computational Sciences; Natural Science and M; M and Systems Science; Applied M and Physics
Bio = Biology: B; B Science and Biotechnology; Biomathematics; Life Sciences
Chem = Chemistry: C; Applied C; C and Physics; C, Chemical Engineering, and Applied C
CS = Computer: C Science; C and Computing Science; C Science and Technology; C Science and (Software) Engineering; Software; Software Engineering; Artificial Intelligence; Automation; Computing Machinery; Science and Technology of Computers
Econ = Economics: E; E Mathematics; Financial Mathematics; Financial Mathematics and Statistics; Management; Business Management; Management Science and Engineering
Eng = Engineering: Civil E; Electrical E; Electronic E; Electrical and Computer E; Electrical E and Information Science; Electrical E and Systems E; Communications E; Civil, Environmental, and Chemical E; Propulsion E; Machinery and E; Control Science and E; Mechanisms; Operations Research and Industrial E; Automatic Control
Info = Information: I Science; I and Computation(al) Science; I and Calculation Science; I Science and Computation; I and Computer Science; I and Computing Science; I Engineering; Computer and I Technology; Computer and I Engineering; I and Optoelectronic Science and Engineering
Phys = Physics: P; Applied P; Mathematical P; Modern P; P and Engineering P; P and Geology; Mechanics; Electronics
Sci = Science: S; Natural S; Applied S; Integrated S
Stat = Statistics: S; S and Finance; Mathematical S; Probability and S
+ +EDITOR'S NOTE: For team advisors from China, I have endeavored to list family name first. For their advice in that connection, I thank Wang Meng and Jiang Liming of Fudan University, exchange students at Beloit College. + +# From Lake Murray to a Dam Slurry + +Clay Hambrick +Katie Lewis +Lori Thomas +Harvey Mudd College +Claremont, CA + +Advisor: Jon Jacobsen + +# Summary + +We predict the extent of flooding in the Saluda river if a large earthquake causes the Lake Murray dam to break. In particular, we predict how high the water would be when it reached Columbia and how far the flooding would spread up tributaries of the Saluda like Rawls Creek. We base our model on the Saint-Venant equations for open-channel water flow. We use a discrete version of them to predict the water level along the length of the river. Our model takes into account the width of the floodplain, the slope of the river, the size of the break in the dam, and other factors. We estimate parameters for Lake Murray, its dam, and the Saluda River and calculate the flood results. + +The South Carolina State Capitol is safe under even the most extreme circumstances, since it sits on a hill well above the highest possible water level. However, flood waters could still reach $17\mathrm{m}$ at Columbia and even higher upstream. Buildings in Columbia close to the water would be inundated, but there should be enough warning time for residents to escape. Both our model and local evacuation plans suggest that low-lying areas for miles around would be covered with water. + +The text of this paper appears on pp. 229-244. + +# Through the Breach: + +# Modeling Flooding from a + +# Dam Failure in South Carolina + +Jennifer Kohlenberg + +Michael Barnett + +Scott Wood + +University of Saskatchewan + +Saskatoon, SK, Canada + +Advisor: James Brooke + +# Summary + +The Saluda Dam, separating Lake Murray from the Saluda River in South Carolina, could breach in the event of an earthquake. 
+ +We develop a model to analyze the flow from four possible types of dam breaches and the propagation of the floodwaters: + +- instant total failure, where a large portion of the dam erodes instantly; +- delayed total failure, where a large portion of the dam slowly erodes; +- piping, where a small hole forms and eventually opens into a full breach; and +- overtopping, where the dam erodes to form a trapezoidal breach. + +We develop two models for the spread of the downstream floodwaters. Both use a discrete-grid approach, modelling the region as a set of cells, each with an elevation and a volume of water. The Force Model uses cell velocities, gravity, and the pressure of neighbouring cells to model water flow. The Downhill Model assumes that flow rates are proportional to the height differences between the water in adjacent cells. + +The Downhill Model is efficient, intuitive, flexible, and could be applied to any region with known elevation data. Its two parameters smooth and regulate water flow, but the model's predictions depend little on their values. + +For a Saluda Dam breach, the total extent of the flooding is $106.5\mathrm{km}^2$ ; it does not reach the State Capitol. The flooding in Rawls Creek extends $4.4\mathrm{km}$ upstream and covers an area of $1.6 - 2.4\mathrm{km}^2$ . + +The text of this paper appears on pp. 245-261. + +# Analysis of Dam Failure in the Saluda River Valley + +Ryan Bressler + +Christina Polwarth + +Braxton Osting + +University of Washington + +Seattle, WA + +Advisor: Rekha Thomas + +# Summary + +We identify and model two possible failure modes for the Saluda Dam: gradual failure due to an enlarging breach, and sudden catastrophic failure due to liquefaction of the dam. + +For the first case, we describe the breach using a linear sediment-transport model to determine the flow from the dam. 
We construct a high-resolution digital model of the downstream river valley and apply the continuity equations and a modified Manning equation to model the flow downstream. + +For the case of dam annihilation, we use a model based on the Saint-Venant equations for one-dimensional flood propagation in open-channel flow. Assuming shallow water conditions along the Saluda River, we approximate the depth and speed of a dam break wave, using a sinusoidal perturbation of the dynamic wave model. + +We calibrate the models with flow data from two river observation stations. + +We conclude that the flood levels would not reach the Capitol Building but would intrude deeply into Rawls Creek. + +The text of this paper appears on pp. 263-278. + +# Catastrophic Consequences of Earthquake Destruction of the Saluda Dam + +Miika Klemetti + +Colin McNally + +Chris Payette + +McGill University + +Montréal, Québec, Canada + +Advisor: Nilima Nigam + +# Summary + +We model the flow of water in the Saluda river valley to determine the extent of flooding resulting from a failure of the Saluda Dam due to an earthquake. The model is divided into two parts: the flow of water in the river, and the evolution of the dam breach. We consider two questions in detail: How far up Rawls Creek, $3.3\mathrm{km}$ from the dam, will the flooding extend? And will the State Capitol in Columbia, $14\mathrm{km}$ downriver from the dam, get wet? + +We assume that the dam fails as a result of overtopping after the dam slumps due to soil liquefaction. We model the shape of the breach as an enlarging trapezoid. This model provides the essential time-varying boundary conditions for the flow in the river and results in the dam collapsing in 3 to $4\mathrm{min}$ . + +The model for the water flow is based on dividing the river into sections of varying sizes. Tunable parameters for each section allow shaping of the valley along the river. 
The geometry of each cross section is modeled as a piecewise-linear function with three parameters (two for the slopes, one for the length). In addition, the length of each section of the river can be adjusted to obtain greater resolution for regions of interest. We model the flow of the water by the transfer of momentum and volume between the sections of the river. The equations governing these exchanges comprise a low-order finite-volume advection scheme. For our geometry, the flow is sub-critical and momentum-dominated, allowing the above simplified physics model for the flow. + +We check convergence and stability of the results by varying the time resolution. + +The simulations of the model indicate major flooding in Rawls Creek up to $2.4\mathrm{km}$ from the Saluda River, but flooding will not extend to the State Capitol. + +[EDITOR'S NOTE: This Meritorious paper won the Ben Fusaro Award for the Flood Planning Problem. Only this abstract of the paper appears in this issue of the Journal.] + +# The Booth Tolls for Thee + +Adam Chandler + +Matthew Mian + +Pradeep Baliga + +Duke University + +Durham, NC + +Advisor: William G. Mitchener + +# Summary + +We determine the optimal number of tollbooths for a given number of incoming highway lanes. We interpret optimality as minimizing "total cost to the system," the time that the public wastes while waiting to be processed plus the operating cost of the tollbooths. + +We develop a microscopic simulation of line-formation in front of the toll-booths. We fit a Fourier series to hourly demand data from a major New Jersey parkway. Using threshold analysis, we set upper bounds on the number of tollbooths. This simulation does not take bottlenecking into account, but it does inform a more general macroscopic framework for toll plaza design. + +Finally, we formulate a model for traffic flow through a plaza using cellular automata. 
Our results are summarized in the formula for the optimal number $B$ of tollbooths for $L$ lanes: $B = \lfloor 1.65L + 0.9\rfloor$.

The text of this paper appears on pp. 283-297.

# A Single-Car Interaction Model of Traffic for a Highway Toll Plaza

Ivan Corwin

Sheel Ganatra

Nikita Rozenblyum

Harvard University

Cambridge, MA

Advisor: Clifford H. Taubes

# Summary

We find the optimal number of tollbooths in a highway toll plaza for a given number of highway lanes: the number of tollbooths that minimizes the average delay experienced by cars.

Making assumptions about the homogeneity of cars and tollbooths, we create the Single-Car Model, describing the motion of a car in the toll plaza in terms of safety considerations and reaction time. The Multi-Car Interaction Model, a real-time traffic simulation, takes into account global car behavior near tollbooths and merging areas.

Drawing on data from the Orlando-Orange County Expressway Authority, we simulate realistic conditions. For high traffic density, the optimal number of tollbooths exceeds the number of highway lanes by about $50\%$, while for low traffic density the optimal number of tollbooths equals the number of lanes.

The text of this paper appears on pp. 299-315.

# Lane Changes and Close Following: Troublesome Tollbooth Traffic

Andrew Spann

Daniel Kane

Dan Gulotta

Massachusetts Institute of Technology

Cambridge, MA

Advisor: Martin Zdenek Bazant

# Summary

We develop a cellular-automaton model to address the slow speeds and emphasis on lane-changing in tollbooth plazas.
We make assumptions about car-following, based on distance and relative speeds, and arrive at the criterion that cars maximize their speeds subject to

$$
\mathrm{gap} > \left\lfloor \frac{V_{\mathrm{car}}}{2} \right\rfloor + \frac{1}{2}\left(V_{\mathrm{car}} - V_{\mathrm{front\,car}}\right)\left(V_{\mathrm{car}} + V_{\mathrm{front\,car}} + 1\right).
$$

We invent lane-change rules for cars to determine if they can turn safely and if changing lanes would allow higher speed. Cars modify these preferences based on whether changing lanes would bring them closer to a desired type of tollbooth. Overall, our assumptions encourage people to be a bit more aggressive than in traditional models when merging or driving at low speeds.

We simulate a 70-min period at a tollbooth plaza, with intervals of light and heavy traffic. We look at statistics from this simulation and comment on the behavior of individual cars.

In addition to determining the number of tollbooths needed, we discuss how tollbooth plazas can be improved with road barriers to direct lane expansion or by assigning the correct number of booths to electronic toll collection. We set up a generalized lane-expansion structure to test configurations.

Booths should be ordered to encourage safe behavior, such as putting faster electronic booths together. Rigid barriers affect wait time adversely.

Under typical traffic loads, there should be at least twice as many booths as highway lanes.

The text of this paper appears on pp. 317-330.

# A Quasi-Sequential Cellular-Automaton Approach to Traffic Modeling

John Evans
Meral Reyhan
Rensselaer Polytechnic Institute
Troy, NY

Advisor: Peter Kramer

# Summary

The most popular discrete models to simulate traffic flow are cellular automata, discrete dynamical systems whose behavior is completely specified in terms of their local regions.
Space is represented as a grid, with each cell containing some data, and the cells act in accordance with a set of rules at each time step. Of particular interest for this problem are sequential cellular automata (SCA), in which the cells are updated sequentially at each time step.

We develop a discrete model with a grid to represent the area around a toll plaza and cells to hold cars. The cars are modeled as 5-dimensional vectors, with each dimension representing a different characteristic (e.g., speed). By discretizing the grid into different regimes (transition from highway, tollbooth, etc.), we develop rules for cars to follow in their movement. Finally, we model incoming traffic flow using a negative exponential distribution.

We plot the average time for a car to move through the grid vs. incoming traffic flow rate for three cases: 4 incoming lanes with 4, 5, or 6 tollbooths. In each plot, we note that at certain values of the flow rate there is a boundary layer in our solution. As we increase the ratio of tollbooths to incoming lanes, this boundary layer shifts to the right. Hence, the optimum solution is to pick the minimum number of tollbooths for which the maximum expected flow rate lies to the left of the boundary layer.

The text of this paper appears on pp. 331-344.

# The Multiple Single Server Queueing System

Azra Panjwani

Yang Liu

HuanHuan Qi

University of California, Berkeley

Berkeley, CA

Advisor: Jim Pitman

# Summary

Our model determines the optimal number of tollbooths at a toll plaza as the number that minimizes the time that a car spends in the plaza.

We treat the toll collection process as a network of two exponential queueing systems, the Toll Collection System and the Lane Merge System. The random, memoryless nature of successive car interarrival and service times allows us to conclude that both are exponentially distributed.
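The memoryless structure described here is what makes the analysis tractable. A minimal single-server sketch shows an exponential queue behaving as theory predicts; the arrival rate, service rate, and car count below are invented for illustration, not the team's data.

```python
import random

# Single-server queue with exponential (memoryless) interarrival and
# service times -- the building block of a "Multiple Single Server"
# network. All rates here are illustrative test values.

def mm1_mean_time(lam, mu, n_cars=200_000, seed=1):
    """Average time a car spends in the system (waiting + service)."""
    rng = random.Random(seed)
    t_arrive = 0.0   # arrival time of the current car
    t_free = 0.0     # time at which the booth next becomes free
    total = 0.0
    for _ in range(n_cars):
        t_arrive += rng.expovariate(lam)     # next arrival
        start = max(t_arrive, t_free)        # wait if booth is busy
        t_free = start + rng.expovariate(mu) # service completes
        total += t_free - t_arrive
    return total / n_cars

# Theory: mean time in an M/M/1 system is 1 / (mu - lam).
sim = mm1_mean_time(lam=0.5, mu=1.0)
print(sim, 1 / (1.0 - 0.5))  # simulated value is close to 2.0
```

Chaining two such stages, toll collection followed by lane merge, gives a network of the kind the team analyzes.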
+ +We use properties of single server and multiple server queuing systems to develop our Multiple Single Server Queuing System. We simulate our network in Matlab, analyzing the model's performance in light, medium, and heavy traffic for tollways with 3 to 6 lanes. The optimal number of tollbooths is roughly double the number of lanes. + +We also evaluate a single tollbooth vs. multiple tollbooths per lane. The optimal number of booths improves the processing time by $22\%$ in light traffic and $61\%$ in medium traffic. In heavy traffic, one tollbooth per lane results in infinite queues. + +Our model produces consistent results for all traffic situations, and its flexibility allows us to apply it to a wide range of toll-plaza systems. However, the minimum time predicted is an average value, hence it does not reflect the maximum time that an individual may spend in the network. + +The text of this paper appears on pp. 345-354. + +# Two Tools for Tollbooth Optimization + +Ephrat Bitton + +Anand Kulkarni + +Mark Shlimovich + +University of California, Berkeley + +Berkeley, CA + +Advisor: L. Craig Evans + +# Summary + +We determine the optimal number of lanes in a toll plaza to maximize the transit rate of vehicles through the system. We use two different approaches, one macroscopic and one discrete, to model traffic through the toll plaza. + +In our first approach, we derive results about flows through a sequence of bottlenecks and demonstrate that maximum flow occurs when the flow rate through all bottlenecks is equal. We apply these results to the toll-plaza system to determine the optimal number of toll lanes. At high densities, the optimal number of tollbooths exhibits a linear relationship with the number of toll lanes. + +We then construct a discrete traffic simulation based on stochastic cellular automata, a microscopic approach to traffic modeling, which we use to validate the optimality of our model. 
Furthermore, we demonstrate that the simulation generates flow rates very close to those of toll plazas on the Garden State Parkway in New Jersey, which further confirms the accuracy of our predictions. + +Having the number of toll lanes equal the number of highway lanes is optimal only when a highway has consistently low density and is suboptimal otherwise. For medium- to high-density traffic, the optimal number of toll lanes is three to four times the number of highway lanes. Both models demonstrate that if a tollway has lanes in excess of the optimal, flow will not increase or abate. + +Finally, we examine how well our models can be generalized and comment on their applicability to the real world. + +[EDITOR'S NOTE: This Outstanding paper won the Ben Fusaro Award for the Tollbooths Problem. The text of the paper appears on pp. 355-371.] + +# For Whom the Booth Tolls + +Brian Camley + +Bradley Klingenberg + +Pascal Getreuer + +University of Colorado + +Boulder, CO + +Advisor: Anne Dougherty + +# Summary + +We model traffic near a toll plaza with a combination of queueing theory and cellular automata in order to determine the optimum number of tollbooths. We assume that cars arrive at the toll plaza in a Poisson process, and that the probability of leaving the tollbooth is memoryless. This allows us to completely and analytically describe the accumulation of cars waiting for open tollbooths as an $\mathrm{M}|\mathrm{M}|n$ queue. We then use a modified Nagel-Schreckenberg (NS) cellular automata scheme to model both the cars waiting for tollbooths and the cars merging onto the highway. The models offer results that are strikingly consistent, which serves to validate the conclusions drawn from the simulation. + +We use our NS model to measure the average wait time at the toll plaza. From this we demonstrate a general method for choosing the number of toll-booths to minimize the wait time. 
For a 2-lane highway, the optimal number of booths is 4; for a 3-lane highway, it is 6. For larger numbers of lanes, the result depends on the arrival rate of the traffic. + +The consistency of our model with a variety of theory and experiment suggests that it is accurate and robust. There is a high degree of agreement between the queueing theory results and the corresponding NS results. Special cases of our NS results are confirmed by empirical data from the literature. In addition, changing the distribution of the tollbooth wait time and changing the probability of random braking does not significantly alter the recommendations. This presents a compelling validation of our models and general approach. + +The text of this paper appears on pp. 373-390. + +# From Lake Murray to a Dam Slurry + +Clay Hambrick +Katie Lewis +Lori Thomas +Harvey Mudd College +Claremont, CA + +Advisor: Jon Jacobsen + +# Summary + +We predict the extent of flooding in the Saluda river if a large earthquake causes the Lake Murray dam to break. In particular, we predict how high the water would be when it reached Columbia and how far the flooding would spread up tributaries of the Saluda like Rawls Creek. We base our model on the Saint-Venant equations for open-channel water flow. We use a discrete version of them to predict the water level along the length of the river. Our model takes into account the width of the floodplain, the slope of the river, the size of the break in the dam, and other factors. We estimate parameters for Lake Murray, its dam, and the Saluda River and calculate the flood results. + +The South Carolina State Capitol is safe under even the most extreme circumstances, since it sits on a hill well above the highest possible water level. However, flood waters could still reach $17\mathrm{m}$ at Columbia and even higher upstream. Buildings in Columbia close to the water would be inundated, but there should be enough warning time for residents to escape. 
Both our model and local evacuation plans suggest that low-lying areas for miles around would be covered with water.

# Introduction

In central South Carolina, a lake is held back by a 75-year-old earthen dam. What would happen if an earthquake breached the dam? The concern is based on an earthquake in 1886 at Charleston that scientists believe measured 7.3 on the Richter Scale [Federal Energy Regulatory Commission 2002]. The location of fault lines almost directly under Lake Murray [SCIway 2000; South Carolina Geological Survey 1997; 1998] and the frequency of small earthquakes in the area led authorities to consider the consequences of such a disaster.

Our task is to predict how water levels would change along the Saluda River, from the Lake Murray Dam to Columbia, if an earthquake on the same scale as the 1886 event breaches the dam. In particular, how far up the tributary Rawls Creek would water back up, and how high would the water rise near the State Capitol in Columbia, South Carolina?

![](images/56710e2ca5071b4c924f6bc3739286d79aee68ea9543966b948d72cc12d67090.jpg)
Figure 1. Topographical map of the Saluda River from the base of Lake Murray to the Congaree River [Topozone 2004].

We lay out our assumptions and set up a submodel of Lake Murray and the Lake Murray Dam to simulate the overflow when the dam breaks.

We then build a model based on the Saint-Venant equations [Moussa and Bocquillon 2000], using conservation of water and momentum to capture the nature of a flood in which the main water channel overflows into the surrounding area. We convert the model to a system of difference equations and feed the dam outflow into the beginning of the river.

To increase accuracy, we measure along the river the ratio of the floodplain width to the river width and use these values to modify the equations. We then use data from Lake Murray and the Saluda River to model several scenarios.
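As a rough illustration of what "converting to a system of difference equations" involves, here is a minimal explicit upwind update for the water-conservation part of the system, holding the water speed fixed rather than coupling it to the momentum equation (with constant speed the equation reduces to eta * dy/dt + V * dy/dx = 0, where eta is the floodplain-to-channel width ratio mentioned above). The grid spacing, time step, speed, and inflow depth are all invented values, not the paper's.

```python
# Explicit upwind step for the simplified conservation law
#   eta * dy/dt + V * dy/dx = 0
# on a 1-D river grid. All parameter values are illustrative only.

def step(y, V, eta, dx, dt, inflow):
    """Advance water depths y one time step; inflow sets the upstream cell."""
    new = y[:]
    new[0] = inflow  # dam outflow fixes the depth at the upstream boundary
    for i in range(1, len(y)):
        # upwind difference: information travels downstream with the flow
        new[i] = y[i] - (V * dt / (eta[i] * dx)) * (y[i] - y[i - 1])
    return new

eta = [2.0] * 50   # floodplain twice the channel width everywhere
y = [1.0] * 50     # uniform initial depth (m)
for _ in range(200):
    y = step(y, V=2.0, eta=eta, dx=100.0, dt=10.0, inflow=5.0)
print(round(y[0], 2), round(y[-1], 2))  # flood front has not yet reached the end
```

The stability condition for this scheme is the usual CFL restriction, V * dt / (eta * dx) <= 1; the values above give 0.1.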
Finally, we discuss the implications of our model, analyze its strengths and weaknesses, and discuss how the model could be extended.

# Background of Earthquake Effects

- Effect on the dam

- How the dam is compromised (size and shape of the initial breach)
- Interaction between the lake water and the initial breach
- Breach size and shape over time

![](images/d53883adfedb714a24b2c3dad79a58f6e364d4905c4994d4374c3b63ca05f0a8.jpg)
Figure 2. Schematic of the earth dam and the new planned dam at Lake Murray [Lake Murray 2005].

- Effect on the water

- Earthquake's effect on the lake insofar as it affects the dam

- Effect on the surrounding countryside

- Whether earthquake alteration of the landscape opens or closes available floodplains
- Whether earthquake damage could divert the Saluda River
- Whether earthquake damage would make the Saluda's path choppier and slow down the water

The situation looks something like this: A large earthquake compromises the Lake Murray Dam. Earthen dams do not usually fail completely or instantaneously [U.S. Army Corps of Engineers 1997]. Instead, the dam begins to leak. Over time, the water causes further erosion, allowing more and more water to flow out of the lake, until the lake and dam reach a new equilibrium. Depending on the initial breach and the dam construction, the final equilibrium may take anywhere from a few minutes to a few hours to reach [U.S. Army Corps of Engineers 1997]. The fully formed breach usually has a width somewhere between half the height of the dam and three times the height of the dam [U.S. Army Corps of Engineers 1997; 1980]. At about a mile wide (1,609 m) and 208 ft (63 m) tall [SouthCarolinaLakes.net], the Lake Murray Dam is about 25 times as wide as it is tall, which suggests a breach width much smaller than the dam width.

Below the dam, the increasing flow of water puts stress on the countryside, with flooding and hillside carving.
Water back-flows up smaller creeks such as Rawls Creek and pools in the flatter sections. Far downstream, either the water pools enough to stay within normal channels, or excess water creates its own channel, or excess water continues flowing from river to river to the sea. + +# Assumptions + +# Earthquake + +- Aftershocks disregarded: Earthquakes generally consist of a main shock and smaller aftershocks. Although an aftershock is itself a significant event and might cause a spike in dam destruction, for simplicity we neglect aftershocks. +- Dam breach only: The earthquake could affect the dam, the water involved, and the landscape. The earthquake's effect on the water matters only if the water damages the terrain or escapes from the lake or riverbed. Thus, by assuming that the earthquake affects only the dam, we bundle any effect of the earthquake on the water into the water's effect on other things. The earthquake could significantly affect the terrain, but such changes are unpredictable and we assume no significant terrain changes take place. + +# Weather and Terrain + +- No effect from wind: The effects of wind here are minuscule in comparison with other forces. +- Low land near river would flood: We assume that the river would overflow its banks and fill the surrounding floodplain. + +# Lake Murray and Dam + +- Lake has a simple shape: We assume that the lake has perfectly vertical sides and a flat bottom. +- Dam breach is rectangular: We can thus model a variety of dam breaches, since we can vary the height and width independently. +- Washed-out dam materials are negligible compared to water flooding: Since we already assume that the breach does not erode, there is no new source of earth after that initial point. This assumption should work well when the breach is small but less so when the breach is large. 
# Saluda River

- River channel has constant width: The Saluda River widens slightly after $11.4\mathrm{km}$ [Topozone 2004]; but to model it simply, we assume that it has a constant width.
- River has steady elevation loss: Due to limits of our topographical data, we assume that the height of the river drops off steadily.
- River has constant initial depth: Because we assume that the river drops off steadily, there are no pockets where water could pool. Since the river starts in equilibrium, we assume that the depth is uniform from start to finish.
- River is straight: The curvature of a river contributes somewhat to slowing the flow of water, and some models include a curvature parameter; but given how straight the Saluda is [Topozone 2004], it is reasonable to approximate it as a linear river.

# Dam Model

We use a submodel to simulate what happens on the lake and at the dam after an earthquake causes a breach. The submodel provides information about the volume and speed of water leaving the dam at any given time. This information depends on the volume of water in the lake, the surface area of the lake, and the size of the breach in the dam.

We model the breach as a rectangular opening in the dam. We assume that water would flow out of the bottom of this breach and that its energy would be conserved. The potential energy is converted into kinetic energy, and so from the equation

$$
\frac{1}{2} m s^2 = m g h
$$

we get

$$
s = \sqrt{2 g h},
$$

where

$s$ is the speed of the water,

$m$ is the mass of the water,

$g$ is acceleration due to gravity, and

$h$ is the height of the water—the difference in height between the lake and the bottom of the breach.

We assume that all water leaves at the maximum speed, a slight overestimate. We can write this equation in terms of our model as

$$
s_{\text{water leaving}} = \sqrt{2 g \left(h_{\text{lake}} - h_{\text{dam}}\right)}.
$$

The volume of water leaving in each time step is the area of the breach times the velocity of the water times the size of the time step:

$$
v_{\text{water leaving}} = w_{\text{breach}} \left(h_{\text{lake}} - h_{\text{dam}}\right) s_{\text{water leaving}} \, t_{\text{time step}},
$$

where $v$ is volume, $w$ is width, $h$ is height, $s$ is speed, and $t$ is time.

We assume in effect that the lake is a large straight-sided holding tank, so its area doesn't change when the water height does. This means that the height of the lake is simply the volume divided by the area, or

$$
h_{\text{lake}} = \frac{v_{\text{lake}}}{a_{\text{lake}}},
$$

where $h$ is the height, $v$ is the volume, and $a$ is the area. This assumption can be changed to make the area of the lake a function of the amount of water in it; for example, we could model the lake as a shallow cone.

We also assume that the breach in the dam stays the same size throughout the simulation, though it would be simple to make the width and depth of the breach increase as a function of the amount and speed of the water flowing through. Doing so would mimic erosion caused by the force of the water traveling through the gap.

# Saint-Venant Model

Our primary model is based on the Saint-Venant system of (partial differential) equations. This choice was inspired by Moussa and Bocquillon [2000], who describe how to use them (slightly modified) to model floods. These equations govern open-channel fluid flow that is nonuniform and nonconstant, and they take into account variations in velocity, the topography of the river and surroundings, and friction with the ground. This makes the Saint-Venant system much preferable to simpler models, especially since friction is a dominant force in flood behavior (the floodwaters cover uneven ground with many obstacles—trees, houses, etc.).
The (modified) Saint-Venant system consists of a water conservation equation,

$$
\eta \frac{\partial y}{\partial t} + y \frac{\partial V}{\partial x} + V \frac{\partial y}{\partial x} = 0,
$$

and a linear momentum equation,

$$
\frac{\partial V}{\partial t} + V \frac{\partial V}{\partial x} + g \left(\frac{\partial y}{\partial x} + S_f - S\right) = 0,
$$

where

$y$ is the height of the water;

$x$ is the distance along the river;

$t$ is time;

$V$ is the speed of the water;

$S$ is the river slope;

$\eta$, the new parameter introduced by Moussa and Bocquillon [2000], is the relative floodplain width (see below); and

$S_{f}$ is the so-called energy-line slope.

The energy-line slope represents how much friction the flowing water must overcome; it is calculated from the velocity and flow radius of the water via the Manning formula [Moussa and Bocquillon 2000],

$$
S_f = n^2 k V^2 R^{-4/3},
$$

where $R$, the hydraulic or flow radius, is given by $R = W_{1}y / (W_{1} + 2y)$, where $W_{1}$ is the width of the channel. There are two constants: $n$ is the dimensionless "roughness parameter" characterizing the land that the water flows over, while $k$ is a constant equal to $1\,\mathrm{s}^2/\mathrm{m}^{2/3}$.

But what does the introduction of the parameter $\eta$ do? The model assumes that outside the river channel there is a floodplain that has a very high fluid resistance (e.g., trees, houses). This means that the downstream flow of water in this area is negligible. However, the floodplain serves as a sink for water, so $\partial y / \partial t$ is modified by the factor $\eta$, the ratio of the floodplain width to the channel width. This way, when the water rises, the actual height change in the channel is attenuated by $\eta$, since some water is absorbed by the floodplain. We make $\eta$ a function of the distance along the river by measuring the width of the floodplain at various points.
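The energy-line slope calculation above translates directly into code. This is a minimal sketch; the flow speed, depth, and channel width below are illustrative values we chose, not figures from the paper.

```python
def energy_line_slope(V, y, W1, n=0.01, k=1.0):
    """Manning formula S_f = n^2 * k * V^2 * R^(-4/3),
    with hydraulic radius R = W1*y / (W1 + 2*y)."""
    R = W1 * y / (W1 + 2.0 * y)
    return n ** 2 * k * V ** 2 * R ** (-4.0 / 3.0)

# Illustrative values (ours): 2 m/s flow, 1.2 m deep, 50 m wide channel,
# with the paper's effective in-channel roughness n = 0.01.
sf = energy_line_slope(2.0, 1.2, 50.0)
```

Note how strongly the result depends on $n$: with the standard large-river value $n = 0.03$ the friction slope is nine times larger, which is consistent with the sensitivity the paper reports.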
To model numerically, we turn this PDE system into a difference-equation system. As is common with numerical PDEs, and in particular fluid dynamics problems, special care must be taken to ensure the stability of the algorithm [Trefethen 1996]. We use the Lax-Wendroff difference formula,

$$
u_j^{n+1} = u_j^n + \frac{1}{2} \lambda \left(u_{j+1}^n - u_{j-1}^n\right) - \frac{1}{2} \lambda^2 \left(u_{j+1}^n - 2 u_j^n + u_{j-1}^n\right).
$$

Here the upper indices represent time and the lower, space; $\lambda$ is the ratio of the time to the space step size. (Our model converts distance and time to model units, so the step size in each is 1.) The $\lambda^2$ term acts to damp out spikes, since it looks at how much each point differs from the points on either side of it, and compensates.

We find that the model is highly sensitive to the roughness parameter $n$ (note that this is the effective roughness in the channel only). When $n$ is large (even at 0.03, the standard value for large rivers), there is high resistance to the water flow, and the floodwater tends to pile up. This leads to excessive steepness in the water-depth profile and tends to make the model break down. Fortunately, we can assume a smaller value for $n$, since we are considering only the water in the channel area, which is bounded on the sides not by rocks and grass (as a river is normally), but by other floodwater (covering the floodplain), which is moving a bit slower (in fact, we assume that it is stationary) but should be smoother than stationary rocks. Therefore, we take $n = 0.01$.

Further, the model is increasingly unstable at higher rates of lake outflow. This is presumably because the Saint-Venant equations are essentially perturbations about steady flow, so they tend to break down in massive flooding. We resort to periodic averaging of neighboring water depths (every 20 time steps, for the most part).
This does not seem to affect the results much.

# Rawls Creek Back-Flooding

Our initial idea for computing the back-flooding at Rawls Creek was to use the same Saint-Venant modeling technique as for the Saluda, adjusted for the different parameters of Rawls, and using the water depths calculated at the creek's mouth for the "dam". However, it is unclear what the initial speeds should be, since the back-flow water moves more or less perpendicularly to the main flood. Moreover, the model displays severe instability with the relevant data. Hence, we take the water height at the mouth and use the topographical map Topozone [2004] to find the matching place upstream. Though highly simplistic, this method is consistent with a modified Saint-Venant system, since it assumes that there is no flow outside the main channel and that the floodplain area is filled instantaneously along with the channel. The Rawls Creek valley is simply a wider section of floodplain (and we include it in calculating the floodplain widths).

# Parameters

# Lake Murray Dam

We use the following parameter values:

- $g = 9.8 \, \mathrm{m/s}^2$, the gravitational constant
- $h_{0_{\text{lake}}} = 60 \, \text{m}$ [SouthCarolinaLakes.net]. This is the initial height of water in the lake.
- $v_{0_{\mathrm{lake}}} = 3 \times 10^{9} \, \mathrm{m}^{3}$ [Publications 2004]. This is the initial volume of water in the lake.
- $a_{0_{\mathrm{lake}}}$, the area of the lake assuming that the sides are exactly vertical.

# Saluda River

- length_river = 16200 m, the length of the Saluda River as measured on the topographic map in Figure 1.
- $h_{\text{BedUpstream}} = 0 \, \text{m}$, the height of the stream bed just after the dam, compared to the base of the dam.
- $h_{\text{BedDownstream}} = -10 \, \text{m}$, the height of the stream bed as it joins the Congaree River outside Columbia.
We obtain this value by comparing the height above sea level at the beginning and the end of the Saluda River on the topographic map in Figure 1.
- $h_{0_{\text{water}}} = 1.2 \, \text{m}$ [South Carolina Department of Natural Resources], the initial water depth along the river, assumed uniform.

# Floodplain

Water flowing out from the dam would not stay entirely within the Saluda River bed. To model accurately the ratio of the river channel to the floodplain surrounding it, we examine topographical maps. The river has an elevation of approximately 170 ft (52 m). The area near the river rises gradually to approximately 200 ft (61 m), before nearby hills start. We assume that this area between the river and the hills is the approximate floodplain. We measure the width of this plain every $600\mathrm{m}$. We assume that the width varies linearly between these measurements and interpolate plain widths for distances downstream that we didn't measure directly. This assumption allows us a much more accurate model than if we simply assume that the floodplain has constant width.

# Results

The Lake Murray Dam is roughly $800\mathrm{m}$ long (in the highest region) by $60\mathrm{m}$ high [Topozone 2004], so any breach up to this size is at least theoretically possible.

# No Breach

# Breach width: $0\mathrm{m}$, breach height: $0\mathrm{m}$

Tested with no breach, the model performs as expected, with the water level staying very nearly constant, since replacement water from the ordinary hydroelectric pipes is included in the model.

![](images/bc76329d0208f2d30e8a21ec00634cb97dc0c09cc01838e0123851df34343634.jpg)
Figure 4. Ratio of the width of the river channel and the floodplain as a function of distance along the river. The two spikes are tributaries; the left one is Rawls Creek. The widening at the end is the mouth of the Saluda where it enters the Congaree.
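The piecewise-linear interpolation of floodplain widths described above can be sketched as follows. The 600 m measurement spacing matches the paper, but the width values, channel width, and names here are illustrative assumptions of ours, not the paper's data.

```python
def floodplain_width(x, xs, ws):
    """Piecewise-linear interpolation of measured floodplain widths.
    xs: measurement points (m downstream), sorted; ws: measured widths (m).
    Outside the measured range, the nearest measurement is used."""
    if x <= xs[0]:
        return ws[0]
    if x >= xs[-1]:
        return ws[-1]
    for i in range(len(xs) - 1):
        if xs[i] <= x <= xs[i + 1]:
            f = (x - xs[i]) / (xs[i + 1] - xs[i])
            return ws[i] + f * (ws[i + 1] - ws[i])

# Illustrative measurements every 600 m (values ours, not the paper's).
xs = [0, 600, 1200, 1800]
ws = [300.0, 450.0, 400.0, 500.0]

# eta at 900 m downstream, assuming a 50 m wide channel:
eta = floodplain_width(900, xs, ws) / 50.0
```

Each point along the river then gets its own $\eta$ value, which is what produces the spikes at the tributaries in Figure 4.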
+ +# Realistic Breach + +# Breach width $800\mathrm{m}$ , breach height $10\mathrm{m}$ + +The most common earthquake failure mode for an earthen dam is for the underwater side to simply landslide down, producing a wide but shallow breach. + +In this scenario, flooding crests in the Rawls after $1.1\mathrm{h}$ at a height of $7.1\mathrm{m}$ . This means that the creek backfloods for some $2.4\mathrm{km}$ along its course, as measured on the topographical map Topozone [2004]. Crest at Columbia (where the Saluda flows into the Congaree) is reached after $7.5\mathrm{h}$ at a height of $4.15\mathrm{m}$ . Since the Capitol sits some $50\mathrm{m}$ above the river, it is in no danger. + +# Alternative Breach + +# Breach width $133\mathrm{m}$ , breach height $60\mathrm{m}$ + +To explore the effect of breach shape as well as size, we run a scenario with a breach of the same cross-section as the previous case but with the opposite rectangular shape. Since the breach is deeper, the speed of the escaping water is higher than before and more water escapes also, since the lake can drain to a lower level. + +The water crests at the Rawls after $1.4\mathrm{h}$ at $9.11\mathrm{m}$ . The backflooding extends for $3.0\mathrm{km}$ . Crest at Columbia occurs after $7.0\mathrm{h}$ at $6.23\mathrm{m}$ . + +![](images/73e8313325d1fd722a85d321e7dd71c79f6ef90b7f6830d39f75baa9e93189f6.jpg) +Figure 5. Contour map of the water level along the river from the start of the simulation to the end. The $x$ -axis is distance (m) along the river and the $y$ -axis is time (s) into the simulation. The color bar gives the scale for the height (100s of m) of the water. + +![](images/6cd3e12e1ee270a1e425688c0d6263717fef6a12d1fc607acba68af6bbdb6f77.jpg) +Figure 6. Water level where Rawls Creek joins the Saluda River, from the start of the simulation to the end. The $x$ -axis is time (s) into the simulation and the $y$ -axis is the height (m) of the water. 
![](images/f101d3b95ecfd813f9968ca938abbc07e975ff142c466f788d002158c79fb0ef.jpg)
Figure 7. Water level where the Saluda River meets the Congaree River in Columbia, from the start of the simulation to the end. The $x$-axis is time (s) into the simulation and the $y$-axis is the height (m) of the water.

# Maximum Breach

# Breach width: $800\mathrm{m}$, breach height: $60\mathrm{m}$

What if the entire dam simply vanishes? Both the model and our assumptions are overextended by this scenario. Despite more frequent smoothing (every 5 time steps), the numbers consistently exploded after $3\mathrm{h}$ of simulated time. Fortunately, this was long enough for cresting at the Rawls and for a pretty good guess at the Columbia crest. Unfortunately, the water rises so high in the early sections of the river that our values for $\eta$ are no longer valid—the flood simply expands outside the normal floodplain. This means that the water would not actually be as high as the model indicates.

The Rawls crest occurs after $0.4\mathrm{h}$ at $34.35\mathrm{m}$. This height of water causes backflooding as much as $5\mathrm{km}$ upstream (a strong indication that our $\eta$ values are indeed too low for this level of flooding). The Columbia crest appears after 4 to $5\mathrm{h}$ and is no more than $17\mathrm{m}$. The Capitol is still safe, by a large margin.

# Interpretation

While the Capitol is safe in all scenarios, massive flooding nonetheless occurs in low-lying areas and in the homes and businesses along the river in Columbia. Happily, based on the flood scenarios above, if a warning system is in place, there should be enough time to escape before the flood water arrives.

# Analysis of Model

# Strengths and Weaknesses

Our model is built on trade-offs. One weakness is the transformation of PDEs into difference equations; the latter are prone to instability in extreme scenarios.

Our assumptions represent other trade-offs.
The floodplain, though a vast improvement over an extremely simple model where all water stays in the channel, requires us to assume that the water instantaneously drains from the river and immediately stops moving. Extending the system to be fully three-dimensional, with water flowing both downstream and outward from the riverbed, would represent a great improvement (and indeed, is performed admirably by various commercial software packages). + +On the other hand, we implement equations designed specifically to model situations like the one on the Saluda River and use data specific to Lake Murray and the Saluda River. + +# Comparison to Other Predictions + +The company that owns the dam provides an evacuation map that shows where the water is expected to go during a flood. This map seems to agree roughly with our worst-case model predictions. + +# Future Work + +- We could model an expanding trapezoidal breach, representing erosion of the original breach, using values from the literature [U.S. Army Corps of Engineers 1980] to select appropriate slope and time intervals. +- We could acquire data on the normal width of the Saluda River at intervals along its course between Lake Murray and the Congaree, rather than assuming a constant stream width. +- We could collect data on the elevation of the stream at regular intervals. For instance, the river might have a waterfall, which could affect the flood pattern. +- We could consider information on the distribution of the lake's water. In real life, the lake has large areas that are shallow, with a smaller deep region. +- We could move from the straight-stream assumption to a two-dimensional analysis; some momentum is lost in bends in the river. 
- Our assumptions (the earthquake affects just the dam, aftershocks can be disregarded, wind has no effect, and washed-out dam materials can be disregarded) are sturdy enough that an upgrade of the water-flow modeling technique used (Saint-Venant) should be attempted before correcting these assumptions.

# Conclusion

Dam-breach flooding is a rare but very serious problem, especially when the dam sits less than $20\mathrm{km}$ above a major city. We create a hydrodynamical model that gives the downstream results of both likely and possible earthquake-driven dam-breach scenarios. Since the city of Columbia sits mainly on a hill, the predicted flood levels of $4\mathrm{m}$ to $17\mathrm{m}$ would flood only the few blocks closest to the river. However, upstream areas such as Rawls Creek would experience levels $7\mathrm{m}$ to $34\mathrm{m}$ higher. The water could arrive at Rawls in as little as half an hour, and flood $2.5 - 5\mathrm{km}$ upstream; so an early-warning system for dam breaches along the Saluda is a vital protective measure.

Our model produces results that make intuitive sense when we vary the parameters: The flooding increases with a larger breach, and a deeper breach floods more than a shallower one of the same area. The water height falls off downstream, as some of the water is held in the floodplain, and this attenuation varies with the width of the floodplain.

# References

Castro, Gonzalo. 1999. Seismic stability and deformations of embankment dams. 2nd US-Japan Workshop on Dam Earthquake Engineering. http://www.geiconsultants.com/images/library/sub31.pdf.
Dutch, Steven. 2003. Faults and earthquakes. http://www.uwgb.edu/dutchs/EarthSC202Notes/quakes.htm.
Federal Energy Regulatory Commission. 2002. News release: Commission moves to protect public near South Carolina dam. http://www.ferc.gov/press-room/pr-archives/2002/2002-2/4-9-saludazz.pdf.
Jones, Lucile M. 1995. Foreshocks, mainshocks, and aftershocks.
Southern California Earthquake Center. http://www.data.scec.org/eqcountry/aftershock.html.
Lake Murray. 2005. The Lake Murray home page. http://www.lakemurray.com/. Accessed 12 Feb 2005.
Moussa, Roger, and Claude Bocquillon. 2000. Approximation zones of the Saint-Venant equations for flood routing with overbank flow. Hydrology and Earth System Sciences 4 (2): 251-261.
Publications, Gardener. 2004. Lakeside living. http://www.gardenerguides.com/ColumbiaPage-Neighborhoods-LakeM.htm.
Scana. 2005. Emergency information: Lake Murray evacuation map. http://www.scana.com/SCEG/For+Living/Lake+Murray/emergency_information.htm.
SCIway. 2000. South Carolina county maps. http://www.sciway.net/maps/cnty/.
South Carolina Department of Natural Resources. Lake and stream data. http://www.dnr.state.sc.us/pls/hydro/river.home.
South Carolina Geological Survey. 1997. Simplified map showing faults and related geologic structures. http://www.dnr.state.sc.us/geology/earthqua2.htm.
_____. 1998. Structural features of South Carolina. http://water.dnr.state.sc.us/geology/structur.htm.
SouthCarolinaLakes.net. Lake Murray. http://www.southcarolinalakes.net/murray.htm.
Topozone. 2004. http://www.topozone.com/map.asp?lat=34.042&lon=-81.2&s=50&size=1&symshow=n&datum=nad83&layer=DRG25.
Trefethen, Lloyd N. 1996. Finite Difference and Spectral Methods for Ordinary and Partial Differential Equations. Unpublished textbook available at http://web.comlab.ox.ac.uk/oucl/work/nick.trefethen/pdetext.html.
U.S. Army Corps of Engineers. 1997. Engineer Manual 1110-2-1420. Engineering and Design—Hydrologic Engineering Requirements for Reservoirs. http://www.usace.army.mil/inet/usace-docs/eng-manuals/em1110-2-1420/.
_____, Hydrologic Engineering Center. 1980. Flood Emergency Plans: Guidelines for Corps Dams. RD-13, HEC. http://www.hec.usace.army.mil/publications/pub_download.html.
+ +![](images/e5a1b4cf784ba28c188f24406f597f87c396303761d35eabbd9367f8be5294ce.jpg) + +Lori Thomas, Clay Hambrick, Katie Lewis, and Jon Jacobsen (advisor). + +# Through the Breach: Modeling Flooding from a Dam Failure in South Carolina + +Jennifer Kohlenberg + +Michael Barnett + +Scott Wood + +University of Saskatchewan + +Saskatoon, SK, Canada + +Advisor: James Brooke + +# Summary + +The Saluda Dam, separating Lake Murray from the Saluda River in South Carolina, could breach in the event of an earthquake. + +We develop a model to analyze the flow from four possible types of dam breaches and the propagation of the floodwaters: + +- instant total failure, where a large portion of the dam erodes instantly; +- delayed total failure, where a large portion of the dam slowly erodes; +- piping, where a small hole forms and eventually opens into a full breach; and +- overtopping, where the dam erodes to form a trapezoidal breach. + +We develop two models for the spread of the downstream floodwaters. Both use a discrete-grid approach, modelling the region as a set of cells, each with an elevation and a volume of water. The Force Model uses cell velocities, gravity, and the pressure of neighbouring cells to model water flow. The Downhill Model assumes that flow rates are proportional to the height differences between the water in adjacent cells. + +The Downhill Model is efficient, intuitive, flexible, and could be applied to any region with known elevation data. Its two parameters smooth and regulate water flow, but the model's predictions depend little on their values. + +For a Saluda Dam breach, the total extent of the flooding is $106.5\mathrm{km}^2$ ; it does not reach the State Capitol. The flooding in Rawls Creek extends $4.4\mathrm{km}$ upstream and covers an area of $1.6 - 2.4\mathrm{km}^2$ . 
# Variables and Assumptions

Table 1 shows the variables used in the design and simulation of the flooding model, and Table 2 lists the parameters in the simulation program.

Table 1. Variables used in the model.

| Variable | Definition |
|---|---|
| | *Volume flow rates from the dam* $(\mathrm{m}^3/\mathrm{s})$: |
| $Q_{\mathrm{TF1}}$ | For instant total failure |
| $Q_{\mathrm{TF2}}$ | For delayed total failure |
| $Q_{\mathrm{PIPE}}$ | For piping failure |
| $Q_{\mathrm{OT}}$ | For overtopping failure |
| $Q_{\mathrm{peak}}$ | Maximum flow rate |
| | *Times when water ceases to flow through the dam*: |
| $t_{\mathrm{TF1}}$ | For instant total failure |
| $t_{\mathrm{TF2}}$ | For delayed total failure |
| $t_{\mathrm{PIPE}}$ | For piping failure |
| $t_{\mathrm{OT}}$ | For overtopping failure |
| $\Delta V$ | Total volume of water displaced from Lake Murray by flooding |
| $\mathrm{Vol}_{\mathrm{LM}}$ | Normal volume of Lake Murray |
| $\mathrm{Area}_{\mathrm{LM}}$ | Normal area of Lake Murray |
| $d_{\mathrm{breach}}$ | Depth of the breach from the top of the dam |
| $t_{\mathrm{breach}}$ | Time from when the breach begins to form until its final formation |
| $m$ | Slope of the sides of the cone approximating Lake Murray |
# General Assumptions

- Normal water level is present in the lake prior to a dam breach.
- No seasonal variation of flows occurs in waterways.
- Volume of water in Lake Murray can be accurately approximated by a right circular cone (Figure 1).

# Dam Assumptions

- Saluda Dam fails in one of four ways:

- instant total failure,
- delayed total failure,

Table 2. Parameters used in the simulation program.

| Parameter | Typical value | Meaning |
|---|---|---|
| BREACH_TYPE | varies | One of INSTANT_TOTAL_FAILURE, DELAYED_TOTAL_FAILURE, PIPING, or OVERTOPPING |
| $\Delta T$ | 10.0 | Length of one time step (s) |
| MIN_DEPTH | 0.0001 | Depth below which a cell is considered empty (m) |
| TFINAL | 100000 | Time for the breach to empty completely the affected portion of the reservoir (s) |
| $T_b$ | 3600 | Time until breach reaches maximum size (s) |
| $Q_{\mathrm{peak}}$ | 25000 | Maximum flow rate of the breach ($\mathrm{m}^3/\mathrm{s}$) |
| $d_{\mathrm{breach}}$ | 30 | Maximum depth of breach below initial reservoir level (m) |
| $\mathrm{Volume}_{\mathrm{LM}}$ | $2.714 \times 10^9$ | Initial volume of Lake Murray ($\mathrm{m}^3$) |
| $\mathrm{Area}_{\mathrm{LM}}$ | $202 \times 10^6$ | Initial area of Lake Murray ($\mathrm{m}^2$) |
| $k$ | 0.504 | Spreading factor (regulates amount of water exchanged between two cells) |
| MAX_LOSS_FRAC | 0.25 | Maximum fraction of a cell's water that it can donate in a single time step |
![](images/1152091a8add00def99dc145aa2971dfdb6f04c41b5de3a5fd9ab07882a239ae.jpg)
Figure 1. Reservoir approximation using a right circular cone.

- piping, or
- overtopping.

- Composition of the earthen dam is uniform throughout.
- Width of the base of the breach is between the height of the dam and three times the height of the dam [U.S. Army Corps of Engineers 1997].
- No human attempt is made to prevent dam breaching.

# Downstream Assumptions

- Resistance to water flow due to structures such as bridges and buildings is negligible.
- Water does not alter the terrain significantly as it flows over the floodplain.
- Water does not make alluvial deposits as it flows over the floodplain.
- A negligible amount of water is present in the valley before flooding.
- Negligible water inflow occurs from sources other than the dam breach.
- No human attempt will be made to prevent flooding.

# Accepted Facts

- Area of Lake Murray: $200 \mathrm{~km}^{2}$
- Volume of Lake Murray: $2.710 \times 10^{9} \mathrm{~m}^{3}$
- Height of dam: $63.4 \mathrm{~m}$ (crest at 370 ft above sea level)
- Length of dam: $2.4 \mathrm{~km}$
- Elevation of surface of Lake Murray: 106.5–110 m above sea level

# Model Design

# Dam Breach

Each type of dam breach is described by flow rate as a function of time, with corresponding parameters.

# Instant Total Failure

A model of flow rate for instant total failure is right triangular [U.S. Army Corps of Engineers 1997] (Figure 2). The parameters are breach depth and peak volume outflow, with values

$$
d_{\text{breach}} = 20 \, \mathrm{m}, \quad Q_{\text{peak}} = 30{,}000 \, \mathrm{m}^3/\mathrm{s}.
$$

![](images/eadd4689f2790f14f858e2737be2065e8690189c5ae53abed238eaf3b9f44241.jpg)
Figure 2. Flow rate for an instant total failure.
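The right-triangular hydrograph can be sketched as a simple function of time. This is our illustrative reading of the triangular model: we fix the end time $t_{\mathrm{TF1}}$ so that the area under the triangle equals the displaced volume $\Delta V = \frac{1}{2} Q_{\mathrm{peak}} t_{\mathrm{TF1}}$; the $\Delta V$ value below is an assumed number, not the paper's.

```python
def instant_failure_flow(t, q_peak, delta_v):
    """Right-triangular hydrograph: Q(0) = q_peak, falling linearly to 0
    at t_tf1 = 2*delta_v/q_peak, so the area under the triangle is delta_v."""
    t_tf1 = 2.0 * delta_v / q_peak
    if t < 0 or t >= t_tf1:
        return 0.0
    return q_peak * (1.0 - t / t_tf1)

# Illustrative: Q_peak = 30,000 m^3/s, assumed displaced volume 1.5e9 m^3,
# giving t_TF1 = 100,000 s.
q0 = instant_failure_flow(0, 30000.0, 1.5e9)
qh = instant_failure_flow(50000, 30000.0, 1.5e9)
```

Per time step, the simulation injects $Q(t)\,\Delta T$ cubic meters of water into the breach cells.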
# Delayed Total Failure

An isosceles triangle model makes sense for delayed failure because it takes half of the total volume of water removed from the lake to erode the dam and the flow rate does not peak until the erosion is complete [U.S. Army Corps of Engineers 1997] (Figure 3). Also, for an earthen dam, the erosion time may be longer than for other types of dams, such as concrete.

This model has the same parameters and same values:

$$
d_{\text{breach}} = 20 \, \mathrm{m}, \quad Q_{\text{peak}} = 30{,}000 \, \mathrm{m}^3/\mathrm{s}.
$$

![](images/bbbd90925f09c394d9fbce27afc4193f9bfe988579be46eabe6654d118759a99.jpg)
Figure 3. Flow rate for a delayed total failure.

# Piping Failure

For a piping failure, the breach begins in the middle of the dam face and grows until the material above the pipe collapses [Sedimentation and River Hydraulics Group 2004]. As the breach grows, the flow rate increases exponentially; the peak flow rate occurs when the material above the pipe collapses. From that point, the flow through the breach is similar to a total failure. We select an exponential decay so as to observe a different effect from the linear decay of the total failure models (Figure 4).

We choose the growth rate so that the peak flow rate occurs at the breach time, and the decay rate so that the flow rate is less than $1\%$ of the peak flow rate at the final time. The parameters are the breach depth, the peak volume outflow of the dam, and the breach time, with values

$$
d_{\text{breach}} = 20 \, \mathrm{m}, \quad Q_{\text{peak}} = 30{,}000 \, \mathrm{m}^3/\mathrm{s}, \quad t_{\text{breach}} = 50{,}000 \, \mathrm{s}.
$$

![](images/7599bb1a08c4009f5fa4a5da3fdacccce2ef10048c65898c09a84dcd3abe762c.jpg)
Figure 4. Flow rate for a piping failure.

To demonstrate better the change in flow rate with time when the breach begins to form, we plot over a shorter range of time in Figure 5.
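The piping hydrograph described above (exponential growth to the peak at the breach time, then exponential decay reaching under 1% of peak at the final time) can be sketched as follows. The starting flow of 1% of peak at $t = 0$ is our assumption for illustration; the paper only says the growth is exponential.

```python
import math

def piping_flow(t, q_peak, t_breach, t_final, frac=0.01):
    """Exponential growth until the pipe roof collapses at t_breach,
    then exponential decay reaching frac*q_peak at t_final.
    The initial flow frac*q_peak at t=0 is an assumed boundary value."""
    if t < 0:
        return 0.0
    if t <= t_breach:
        r = math.log(1.0 / frac) / t_breach      # growth rate to hit q_peak
        return frac * q_peak * math.exp(r * t)
    lam = math.log(1.0 / frac) / (t_final - t_breach)  # decay rate
    return q_peak * math.exp(-lam * (t - t_breach))

# Illustrative: peak 30,000 m^3/s at t_breach = 50,000 s, t_final = 100,000 s.
q_mid = piping_flow(50000, 30000.0, 50000, 100000)
q_end = piping_flow(100000, 30000.0, 50000, 100000)
```

By construction the flow peaks exactly at the breach time and has fallen to 1% of peak at the final time, matching the shape in Figure 4.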
# Overtopping Failure

For an overtopping failure, the water begins flowing over the top of the breach, eroding the dam from above. We found little information about overtopping failures. From the piping failure, we estimate that the flow rate increases according to a parabolic shape until dam erosion is complete (Figure 6). After this point, which corresponds to the breach time, the flow rate behaves as in a total failure.

The parameters are again breach depth, peak volume outflow of the dam, and breach time, with values

$$
d_{\text{breach}} = 20 \, \mathrm{m}, \quad Q_{\text{peak}} = 30{,}000 \, \mathrm{m}^3/\mathrm{s}, \quad t_{\text{breach}} = 30{,}000 \, \mathrm{s}.
$$

![](images/7ef75c82c8ff8ade12664b81d918709a00f21a97b3fccc496e3769858b4eba4e.jpg)
Figure 5. Flow rate for beginning of a piping failure.

![](images/6eb8a67d633a663726d5ca51a9953bf607c067375bb271c2eb42ece9dde15060.jpg)
Figure 6. Flow rate for an overtopping failure.

# Downstream Flow

We model the behaviour of the water in the region downstream of the breach using a discrete approach. The Force Model uses a physical analogy based on the Bernoulli equation for fluid flow; the Downhill Model uses a simpler, more intuitive mechanism for water flow. The Force Model produces unphysical results; therefore, we use the Downhill Model in the analysis of the flooding.

For both models, the region surrounding the Saluda Dam is divided into a grid of square cells. Each cell covers a surface area of $210\mathrm{m}$ by $210\mathrm{m}$ and has an associated elevation above sea level and a volume of water (based on the mean depth of water in the cell). The elevation data are adapted from the U.S. Geological Survey's National Elevation Data [2004] by averaging together groups of $7\times 7$ cells (to reduce processing time). Each model simulates the
Each model simulates the + +propagation of water among cells; the models differ in how neighbouring cells determine how much water to exchange per unit time. + +# Force Model + +# Design + +This model performs a force analysis on the water contained in the model cells. Each cell has an associated elevation above sea level, mean depth of water contained in the cell, and mean velocity ( $x$ - and $y$ -components) of water within the cell. The force acting on a particular cell is assumed to be due to two effects only: the pressure force exerted by the four cells in direct contact with it, and the gravitational force that accelerates the water to places of lower elevation (that is, downhill). + +The main principles of the model are: + +- Volume flow between cells is proportional to the difference in pressure between the four adjacent cells with a common face. +- Pressure difference in cells is proportional to the difference in mean depth of each cube. + +As demonstrated in the Figures 7 and 8, the mean pressure exerted by a cell is assumed to be the pressure at half the depth of the cell, or $P = \frac{1}{2}\rho gd$ . + +![](images/8f3cedc9ac7b2be94e52b7e41bb014721a65ab20f9a275e581c7a6635abe67bb.jpg) +Figure 7. Pressure forces acting on cell matrix. + +![](images/208b2e1b4fd863a613bb6b17ea78d76bc409cc2022597d54e38c14fd0ee9940e.jpg) +Figure 8. Gravitational forces acting on cell matrix. + +We assume that the area over which the pressure acts is the mean depth of the two adjacent cells, so that the depth of water varies linearly between them, with the depth at the boundary the average depth of the two. The force exerted by a neighbouring cell is the mean pressure times the area between the two cells. To find acceleration, we divide the force by the mass of water in the cell, taken to be the volume of the cell times the water density: + +$$ +a _ {x} = \frac {g (d _ {\mathrm {0 x}} ^ {2} - d _ {\mathrm {2 x}} ^ {2})}{4 w d _ {1}}. 
+$$ + +We calculate the acceleration due to gravity by estimating the gradient of the ground of the current cell and its four immediate neighbours. We determine the horizontal component of the acceleration geometrically (Figure 9) to be + +$$ +a _ {g} = \frac {2 \Delta h w g}{4 w ^ {2} + \Delta h ^ {2}}. +$$ + +The model iterates through a large number of time steps, typically each of 1 s duration. At the beginning of each time step, water is injected into the cells containing the dam breach; the amount is determined by the breach models described above. For each time step, the acceleration $(x-$ and $y$ -components) is calculated for each cell in the region, and the velocity of water in the region is updated according to + +$$ +v _ {\text {n e w}} = v _ {\text {o l d}} + a \Delta t. +$$ + +![](images/5331518901f1ca4597c04081154c726ab20c0260dfb03f256d3fa52f1122cbd6.jpg) + +![](images/4189c669bb86ba47e32803060ba8735c59d30c9b1cba90e5b0ce33cc3b9a3eca.jpg) +Figure 9. Determination of tangential component of gravity. + +The direction of the velocity determines in which direction each cell donates: For $v_{x} > 0$ , the cell donates to the right; for $v_{x} < 0$ , the cell donates to the left. The amount of water donated is proportional to the speed in that direction, so that the change in water depth of the current cell is + +$$ +d _ {\mathrm {d o n a t e d}} = \frac {d _ {\mathrm {a v g}} \Delta t}{2 w}. +$$ + +The water depth of the neighbouring cell receiving the donation is also updated, so that the total amount of water in the model is conserved (except for donations off the edges of the map and the water injected at the breach cell). + +For very large velocities, a cell can donate more water than it has. Specifically, if the speed times the time step size is larger than the cell width, the donation would be greater than the cell's current volume. 
If this occurs, the cell + +is assumed to donate all its water, and the donations in the $x$ - and $y$ -directions are scaled to account for this. + +# Justification + +This model is intuitively appealing: It models the behaviour of the water using a simple but meaningful physical analogy. The force analysis used is equivalent to taking the gradient of the Bernoulli equation and modeling the fluid discretely. + +The model computes and saves velocity information, allowing modeling of the manner in which regions are flooded. For example, the model could predict the speed of the water as it struck a particular building in Columbia, such as the State Capitol. + +# Reasons for Rejecting the Model + +The results from the model are unrealistic. Since cells with large volumes of water have small accelerations, these cells tend to empty very slowly, even if adjacent to completely empty cells; for the same reason, small cells tend to empty too quickly. The result is a checkerboard pattern: Large cells grow larger and their small neighbours grow smaller. This error relates to our assumption that all water within a cell has the same velocity: A single cell cannot spread out in all directions. For a simpler terrain (such as a simple downhill channel), this would not be a problem; however, this terrain is highly complicated and requires the water to propagate in several directions. + +Another problem with this model is its complexity. The model juggles a large number of parameters for each cell, making tuning and troubleshooting difficult. + +# Downhill Model + +# Design + +The Downhill Model assumes that the flow rate between two cells is proportional to the height difference between the centers of mass of those cells multiplied by the effective area between them. The model allows water to be donated in multiple directions by a single cell, if it is higher than several of its neighbours. 
As in the Force Model, the program iterates through time steps, adding water each step to the cells containing the dam breach.

For each time step, each cell (except those on the bottom and right boundaries of the map, which are handled later) exchanges water with the two cells immediately below and to the right. This ensures that each cell exchanges water with its four neighbours exactly once per time step. To exchange with a neighbour, a cell changes its height according to the formula

$$
d_{\mathrm{donated}} = k d_{\mathrm{avg}} (h_{0} - h_{2}),
$$

where $h_{0}$ and $h_{2}$ are the water-surface heights of the donating cell and its neighbour and $d_{\mathrm{avg}}$ is the average water depth of the two cells.

The value of $k$ is based on the assumption that the water speed at the breach during the peak flow rate is $30\mathrm{m/s}$. We later describe the model's response to a change in $k$.

The neighbouring cell then changes its height by the negative of this value. To ensure consistency, the changes in height are not applied until the end of the time step, after all cell height changes have been calculated. If a cell would donate more than the fraction MAX_LOSS_FRAC of the water that it originally contained, then its donations are scaled down so that it donates exactly this fraction. The factor MAX_LOSS_FRAC is used to prevent sloshing: Large cells tend to empty completely into empty neighbours, which then donate back on the next turn, so that half of the cells are empty at any one time.

For cells along the boundary, donations on their side(s) against the edges of the map are assumed to be equal to their donations on the opposite sides. Since these cells are far away from the breach or areas of interest, their precise behaviour is less important. Our approach ensures that water reaching the edges of the map leaves smoothly, without piling up unphysically.

# Justification

This model affords rapid computation and uses a simple principle that is easy to troubleshoot.
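The exchange rule described above (pairwise donation $d_{\mathrm{donated}} = k\,d_{\mathrm{avg}}(h_0 - h_2)$, deferred updates, and MAX_LOSS_FRAC rescaling) can be sketched in a few lines of Python. The grid values, `k`, and `max_loss_frac` below are illustrative assumptions, not the paper's calibrated settings.

```python
import numpy as np

def downhill_step(ground, depth, k=0.001, max_loss_frac=0.25):
    # One time step of a Downhill-Model-style exchange (illustrative sketch).
    # ground, depth: 2-D arrays of ground elevation and water depth (m).
    h = ground + depth                         # water-surface height of each cell
    rows, cols = depth.shape
    pairs = []
    loss = np.zeros_like(depth)                # total water each cell would give away
    for dr, dc in ((1, 0), (0, 1)):            # neighbour below and neighbour to the right
        src = (slice(0, rows - dr), slice(0, cols - dc))
        dst = (slice(dr, rows), slice(dc, cols))
        d_avg = 0.5 * (depth[src] + depth[dst])
        give = k * d_avg * (h[src] - h[dst])   # d_donated = k * d_avg * (h0 - h2)
        pairs.append((src, dst, give))
        loss[src] += np.maximum(give, 0.0)
        loss[dst] += np.maximum(-give, 0.0)
    # Rescale so no cell donates more than MAX_LOSS_FRAC of its water (anti-slosh).
    cap = max_loss_frac * depth
    scale = np.divide(cap, loss, out=np.ones_like(depth), where=loss > cap)
    delta = np.zeros_like(depth)               # changes are applied all at once, at step end
    for src, dst, give in pairs:
        give = np.where(give > 0, give * scale[src], give * scale[dst])
        delta[src] -= give
        delta[dst] += give
    return np.maximum(depth + delta, 0.0)      # guard against tiny negative depths
```

Because all changes are accumulated in `delta` and applied at once, the update is order-independent, and total water is conserved on a closed grid.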
Although the equation governing the water exchange between cells lacks a direct physical analogy, it produces results consistent with physical expectations. Water travels most quickly downhill or across the nearly flat floodplain, and creeps uphill only as water levels rise.

# Testing and Results

# Testing

To test our models, we use National Elevation Data from the U.S. Geological Survey [USGS 2004]. The data are a set of elevation values (in meters above sea level) arranged into rows and columns. Each element represents a square with sides $30\mathrm{m}$ in length. To reduce computation, we averaged groups of $7\times 7$ cells together, so that the cells that we used were squares $210\mathrm{m}$ on a side.

We tested both the Force Model and the Downhill Model by placing the breach cell just in front of the dam face and modeling the spread of water for several choices of breach type (instant total failure, delayed total failure, piping, overtopping) and time period (360, 1800, 6240 s). We tested the $k$ dependence and MAX_LOSS_FRAC dependence by using the instant total failure breach model.

To prevent errors from very small volumes in cells, we treated a cell as empty if its mean depth was less than $0.0001\mathrm{m}$. This cutoff was especially important in the Force Model, where such small cells acquire enormous velocities ($>10^{6}\mathrm{m/s}$) when placed next to a cell with a significant amount of water.

We tested the Downhill Model for robustness by running the instant total failure model for 50,000 time steps. The model behaves poorly beyond 40,000 time steps, when the flow rate out of the map becomes much greater than the flow rate of the dam breach (which has slowed by this point). We also tested the model for very large flow rates.
For rates that increased the height of the breach cell by more than $10\mathrm{m}$ per time interval (more than 30 times as large as any flow rate in the simulation), the simulation lasts only 1000 time steps before becoming unstable.

# Results

# Flooding Extent

The extent of the flooding is largely independent of the type of breach (Figure 10); the difference between breach types is in how quickly flooding spreads. For an instant total failure breach, the flooding has a maximum extent of $106.5\mathrm{km}^2$. The flooding is greatest in the Saluda and Congaree valleys, which are quite flat and broad. The flooding in the city of Columbia itself, which is elevated above these valleys, is minimal. We did not model the effects of the flooding farther down the Congaree, but we expect those to be comparable to the flooding within the region simulated.

![](images/ed3ad1f92bbd3986376604fcb316d1f88ec60e75a5f63447e5caab3df87b24fa.jpg)
Overtopping Failure

![](images/d333480985a91084c6d2cda9ca7bc2761f7d7ec2e443a121a67ed4ddc390e888.jpg)
Total Instantaneous Failure

![](images/b212a1203ec78d4e0b003eaebbed5021d9be19eb6bae438419f15afbd2ea9612.jpg)
Piping Failure
Figure 10. Comparison of dam breach scenarios: flooding area after $24\mathrm{h}$.

![](images/81cbe8a16ba4f141bd986bc0d43c11197f4216d36a80341a9e1ac0b667c8685f.jpg)
Total Slow Failure

# Rawls Creek

The flooding in Rawls Creek is extensive in area but not in upstream extent. Although it is difficult to establish where the flooding of the Saluda River ends and the flooding of Rawls Creek begins, we estimate the flooded area around Rawls Creek at between 1.6 and $2.4\mathrm{km}^2$. The farthest flooded point is $4.4\mathrm{km}$ upstream.

# State Capitol

Flooding does not reach the State Capitol, even for the most extreme case, instant total failure.

# Error Analysis, Sensitivity, and Robustness

The model depends on the factor $k$ to scale the amount of water donated.
Its value is based on the assumption that the water speed at the breach during the peak flow rate is $30\mathrm{m/s}$. However, the value of $k$ does not affect the simulation greatly; the total flooded area after 1800 time steps varies by just $17\%$ when $k$ is varied by a factor of 100.

The factor MAX_LOSS_FRAC is used to prevent each cell from donating too much water. However, the extent of the flooding is not strongly dependent on its value, which we took as 0.25 to produce reasonably smooth water distributions.

# Strengths and Weaknesses

# Strengths

The model is independent of the site simulated: Given elevation data for a region and an equation governing the flow rate of water from a dam breach, it calculates the behaviour of flooding.

The Downhill Model is intuitive. It relies on a simple exchange rule between cells, making it easy to tune and troubleshoot. Tuning may be needed to account for problems associated with more extreme flooding cases, a need to extract additional results from the model, or other unforeseen demands.

The algorithm is efficient; the computation of a single time step is linear in the number of cells in the region. This efficiency makes it possible to model many variations on breach types and flow rates in a short period of time.

The model produces three data sets of grids: 0/1 values describing which cells are flooded; the water depth in each cell, to determine the severity of the flooding in a region; and the water depth plus elevation for each cell. From plots of these data sets, the extent and severity of the flooding are easy to see.

# Weaknesses

The primary weakness of this model is the tendency of water in the deeper regions of the flooded area to slosh. It should be possible to eliminate this, perhaps by introducing a depth dependence into MAX_LOSS_FRAC.

Another weakness that could be corrected with more analysis is the time scale.
Since the $k$-dependence—the only place where the duration of the time step is used explicitly—is weak, the model's time scale is not easily changeable. The time scale could be calibrated by running simulations of an analytical system, such as the propagation of water down a channel, and determining the speed of the water and hence the time scale. Since the time scale was not needed to analyze the extent of the flooding, we did not perform this calibration.

# References

Chauhan, Sanjay S., et al. 2004. Do current breach parameter estimation techniques provide reasonable estimates for use in breach modeling? Utah State University and RAC Engineers & Economists. www.engineering.usu.edu/uwr1/www/faculty/DSB/breachparameters.pdf. Accessed 3 February 2005.
Cheremisinoff, Nicholas P. 1981. Fluid Flow: Pumps, Pipes, and Channels. England: Buttersworth.
Federal Energy Regulatory Commission (FERC). 2002. Saluda Dam remediation: Updated frequently-asked questions and answers. www.ferc.gov/industries/hydropower/safety/saluda/saluda_qa.pdf. Accessed 4 February 2005.
Fread, D.L. 1998. Dam-Breach Modeling and Flood Routing: A Perspective on Present Capabilities and Future Directions. Silver Spring, MD: National Weather Service, Office of Hydrology.
Mayer, L. 1987. Catastrophic Flooding. Boston, MA: Allen & Unwin.
Munson, Bruce R., et al. 2002. Fundamentals of Fluid Mechanics. 4th ed. New York: John Wiley & Sons.
Sedimentation and River Hydraulics Group. 2004. Comparison between the methods used in MIKE11, FLDWAV 1.0, and HEC-RAS 3.1.1 to compute flows through a dam breach. U.S. Department of the Interior.
Smith, Alan A. Hydraulic theory: Kinematic flood routing. Alan A. Smith Inc. http://www.alanasmith.com/theory-Kinematic-Flood-Routing.htm. Accessed 3 February 2005.
U.S. Army Corps of Engineers. 1997. Engineering and design—Hydrologic engineering requirements for reservoirs. Department of the Army Publication EM 1110-2-1420, Ch. 16.
U.S.
Geological Survey (USGS). 2004. Seamless Data Distribution System, National Center for Earth Resources Observation and Science. seamless.usgs.gov. Accessed 4 February 2005.
Wahl, Tony L. 1997. Predicting embankment dam breach parameters—A needs assessment. Denver, CO: U.S. Bureau of Reclamation. http://www.usbr.gov/pmts/hydraulics_lab/twahl/publications.html. Accessed 3 February 2005.
Williams, Garnett P. 1978. Hydraulic geometry of river cross sections—Theory of minimum variance. Geological Survey Professional Paper. Washington, DC: U.S. Government Printing Office.

# Appendix: Dam Breach Model Equations

For an instant total failure:

$$
Q_{\mathrm{TF1}}(t) = \left\{ \begin{array}{ll} -\frac{Q_{\mathrm{peak}}(t - t_{\mathrm{TF1}})}{t_{\mathrm{TF1}}}, & t < t_{\mathrm{TF1}}; \\ 0, & t_{\mathrm{TF1}} \le t, \end{array} \right.
$$

where $t_{\mathrm{TF1}} = \frac{2\Delta V}{Q_{\mathrm{peak}}}$.

For a delayed total failure:

$$
Q_{\mathrm{TF2}}(t) = \left\{ \begin{array}{ll} \frac{2 Q_{\mathrm{peak}} t}{t_{\mathrm{TF2}}}, & t \le \frac{1}{2} t_{\mathrm{TF2}}; \\ \frac{2 (t_{\mathrm{TF2}} - t) Q_{\mathrm{peak}}}{t_{\mathrm{TF2}}}, & \frac{1}{2} t_{\mathrm{TF2}} < t < t_{\mathrm{TF2}}; \\ 0, & t_{\mathrm{TF2}} \le t, \end{array} \right.
$$

where $t_{\mathrm{TF2}} = \frac{2\Delta V}{Q_{\mathrm{peak}}}$.

For a piping breach that turns into a total failure:

$$
Q_{\mathrm{PIPE}}(t) = \left\{ \begin{array}{ll} (Q_{\mathrm{peak}} + 1)^{t/t_{\mathrm{breach}}} - 1, & t \le t_{\mathrm{breach}}; \\ Q_{\mathrm{peak}} \exp\left[ \frac{-5 (t - t_{\mathrm{breach}})}{t_{\mathrm{PIPE}} - t_{\mathrm{breach}}} \right], & t_{\mathrm{breach}} < t < t_{\mathrm{PIPE}}; \\ 0, & t_{\mathrm{PIPE}} \le t, \end{array} \right.
$$

where

$$
t_{\mathrm{PIPE}} = \Delta V - t_{\mathrm{breach}} \left[ \frac{5 \left(\frac{2 + Q_{\mathrm{peak}}}{\ln(Q_{\mathrm{peak}} + 1)} - 1\right)}{Q_{\mathrm{peak}} (1 - e^{-5})} + 1 \right].
$$

For an overtopping breach:

$$
Q_{\mathrm{OT}}(t) = \left\{ \begin{array}{ll} Q_{\mathrm{peak}} + 15 \left(t^{2} + 2 t t_{\mathrm{breach}} - t_{\mathrm{breach}}^{2}\right) \times 10^{-6}, & t \le t_{\mathrm{breach}}; \\ \frac{Q_{\mathrm{peak}} (t - t_{\mathrm{OT}})}{t_{\mathrm{breach}} - t_{\mathrm{OT}}}, & t_{\mathrm{breach}} < t < t_{\mathrm{OT}}; \\ 0, & \text{otherwise}, \end{array} \right.
$$

where

$$
t_{\mathrm{OT}} = \frac{2 (\Delta V + 0.000005\, t_{\mathrm{breach}}^{3} - t_{\mathrm{breach}} Q_{\mathrm{peak}})}{Q_{\mathrm{peak}}} + t_{\mathrm{breach}}.
$$

![](images/e65bfc2eaa47c7ce674e93cb7074b62490363c1758ff1c4a69db5561b1348a1b.jpg)

Michael G. Barnett, Dr. James Brooke (advisor), Scott J. Wood, and Jennifer Dale Kohlenberg.

# Analysis of Dam Failure in the Saluda River Valley

Ryan Bressler

Christina Polwarth

Braxton Osting

University of Washington

Seattle, WA

Advisor: Rekha Thomas

# Summary

We identify and model two possible failure modes for the Saluda Dam: gradual failure due to an enlarging breach, and sudden catastrophic failure due to liquefaction of the dam.

For the first case, we describe the breach using a linear sediment-transport model to determine the flow from the dam. We construct a high-resolution digital model of the downstream river valley and apply the continuity equations and a modified Manning equation to model the flow downstream.

For the case of dam annihilation, we use a model based on the Saint-Venant equations for one-dimensional flood propagation in open-channel flow.
Assuming shallow water conditions along the Saluda River, we approximate the depth and speed of a dam break wave, using a sinusoidal perturbation of the dynamic wave model.

We calibrate the models with flow data from two river observation stations.

We conclude that the flood levels would not reach the Capitol Building but would intrude deeply into Rawls Creek.

# Introduction

The Saluda Dam, located $20\mathrm{km}$ above Columbia, South Carolina, impounds the almost 3-billion-cubic-meter Lake Murray [South Carolina Electric & Gas Company 1995]. It is a large earthen dam of a type that has failed in earthquakes before [Workshop 1986]. In such a failure, the water in Lake Murray would rush down the Saluda River Valley towards Columbia, its 100,000 residents, and the State Capitol.

We present a comprehensive mathematical description of the resulting flood, including its intrusion into Columbia and the tributaries of the Saluda. See Figure 1 for an overview of the local topography.

![](images/77432f7041a4e9360becbf079e77051ed0951b0685a4f8498371c96e9a42d0de.jpg)
Figure 1. An overview of Lake Murray and the Saluda River Valley generated from the NCRS topographical data [National Geophysical Data Center 2005]. (a) Lake Murray. (b) Saluda River. (c) Rawls Creek. (d) State Capitol Building.

A brief survey of earthquake-related earthen dam incidents [Workshop 1986] reveals that failure can follow two distinct courses:

- A crack or breach forms in the dam, causing gradual failure due to erosion.
- The dam is completely annihilated, resulting in the formation of a surge.

To describe both of these situations accurately, we apply two different models.

**Gradual Failure** The relatively gradual rate at which water is introduced into the downstream valley suggests that the dispersion of the flood may be modeled using classical open-channel hydraulics.
We divide the downstream river course into basins or reaches and then use the Manning formula and the continuity equation to describe the movement of water between them.

We determine the flow into the first basin using a model for the destruction of the dam due to breach erosion.

We create a three-dimensional topographical model of each basin using $3^{\prime}$ resolution data from the NGDC Coastal Relief Model [National Geophysical Data Center 2005]. (Figure 1 was generated using these models.) This lets us estimate the relationship between the volume in each basin and the cross-sectional area of its outflow channel. The Manning formula and the continuity equation yield a system of coupled first-order differential equations. We integrate this system numerically and calibrate it using data for normal flow in the Saluda River from river observation stations just below the Saluda Dam and just above Columbia City [USGS 2005].

**Rapid Failure** The flood wave is described as a sinusoidal perturbation to the steady-state solution of the Saint-Venant equations. We apply the dynamic wave model of Ponce et al. [1997] to determine the surge's propagation.

We represent the Saluda River Valley as a prismatic channel of rectangular cross section. We use a small surge recorded by the USGS river observation station in the Saluda Valley [USGS 2005] to calibrate the frictional constant governing the rate of attenuation of the flood waves.

Finally, we address the results of the two models and their consequences for Rawls Creek, the Capitol, and the residents of Columbia.
# Gradual Failure

Our model for downstream flooding depends on the conservation of matter as described by the continuity equation, which states that for any given reach of the river, the change in volume equals the difference between flow in and flow out:

$$
\frac{dV}{dt} = Q_{\text{in}} - Q_{\text{out}}, \tag{1}
$$

where $V$ is the volume of the reach, $t$ is time, and $Q_{\mathrm{in}}$ and $Q_{\mathrm{out}}$ are the flows.

We divide the Saluda River Valley into four reaches. Since the amount of water involved in a dam-failure flood would be significantly greater than that contributed by any other source, we simplify our model by assuming that all flow into and out of a reach would occur along the Saluda. We set $Q_{\mathrm{in}}$ of each reach equal to $Q_{\mathrm{out}}$ of the reach above it, ignoring all tributaries. Eq. (1) becomes

$$
\frac{dV_{n}}{dt} = Q_{n-1} - Q_{n}, \quad n = 1, \dots, 4, \tag{2}
$$

where $V_{n}$ is the volume in the $n$th reach (numbered downstream from the dam) and $Q_{n}$ is the flow out of the $n$th reach; $Q_{0}$ is the flow into reach 1 through the breach in the dam.

To evaluate (2), we must estimate several parameters and relations:

- the flow out of the reservoir (into the first reach) resulting from a dam breach,
- the flow through each reach, and
- the topographical profiles of the reaches.

# Flow Through the Breach

Dam-breach erosion is an interaction between the flooding water and the material of the dam. Once a breach has formed, the discharging water further erodes the breach. Enlargement of the breach increases the rate of discharge, which in turn increases the rate of erosion. This interaction continues until the reservoir water is depleted or until the dam resists further erosion.
We assume that the pre-breach flows into and out of the reservoir can be ignored, since they are of opposite sign and of negligible magnitude compared to the flooding waters. The breach outflow discharge $Q_{0}$ equals the product of the rate at which the water level is dropping and the surface area at that height, $A_{s}(H)$. Also, the breach outflow discharge is related to the mean water velocity $u$ and the breach cross-sectional area $A_{b}$ by the continuity equation:

$$
A_{s}(H) \frac{dH}{dt} = -Q_{0} = -u A_{b}. \tag{3}
$$

Experimental observations show that the flow of water through a breach can be simulated by the hydraulics of broad-crested weir flow [Chow 1959; Pugh et al. 1984]:

$$
u = \alpha (H - Z)^{\beta}, \tag{4}
$$

where $Z$ is the breach bottom height measured from the bottom of the lake. For critical flow conditions, $\alpha = [(2/3)^{3} g]^{1/2} = 1.7\ \mathrm{m}^{1/2}/\mathrm{s}$ and $\beta = 1/2$ [Singh 1996].

We further assume that the surface area of the reservoir, $A_{s}$, is independent of the height (i.e., the reservoir has vertical sides). Combining (3) and (4) yields

$$
A_{s} \frac{dH}{dt} = -u A_{b} = -\alpha (H - Z)^{\frac{1}{2}} A_{b}. \tag{5}
$$

We describe erosion in the breach using the simplest method that has been used to model dam breaks accurately in the past [Singh 1996] and assume that

$$
\frac{dZ}{dt} = -\gamma u^{\phi} = -\gamma \alpha^{\phi} (H - Z)^{\phi \beta}, \tag{6}
$$

where $\gamma$ and $\phi$ are determined from experimental analysis of the dam material and $u$ is given by (4). Because we do not have access to the dam, we assume that $\phi = 1$ (linear erosion) and approximate $\gamma$ as 0.01. This value of $\gamma$ has given good results for linear erosion in the past [Singh 1996]. Eqs. (5)-(6) are coupled first-order differential equations governing the elevation of the water surface and the elevation of the breach bottom as functions of time.
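The coupled system (5)-(6) is also easy to integrate numerically. The sketch below uses forward Euler with the text's $\alpha = 1.7$, $\gamma = 0.01$, and $\phi = 1$; the reservoir area `A_s`, breach width `b`, and initial heights are illustrative assumptions, and the breach is taken as vertical-walled with constant width, so $A_b = b(H - Z)$.

```python
alpha, gamma = 1.7, 0.01   # weir coefficient (m^(1/2)/s) and erosion constant from the text
A_s = 2.0e8                # assumed reservoir surface area (m^2)
b = 100.0                  # assumed constant breach width (m)
H, Z = 50.0, 45.0          # assumed initial water-surface and breach-bottom heights (m)
dt, t = 10.0, 0.0          # Euler time step (s)

Q_peak = t_of_peak = 0.0
for _ in range(200_000):   # roughly 23 days of simulated time
    head = max(H - Z, 0.0)
    Q = alpha * b * head ** 1.5                 # Q0 = u*A_b = alpha*sqrt(H-Z) * b*(H-Z)
    if Q > Q_peak:
        Q_peak, t_of_peak = Q, t
    H -= dt * (alpha * b / A_s) * head ** 1.5   # eq. (5) with A_b = b*(H - Z)
    if Z > 0.0:                                 # erosion stops once the breach reaches Z = 0
        Z = max(Z - dt * gamma * alpha * head ** 0.5, 0.0)   # eq. (6) with phi = 1
    t += dt
```

The run shows the expected qualitative behaviour: discharge grows while the breach bottom erodes toward $Z = 0$, then decays as the reservoir drains.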
To evaluate them, we must determine the shape of the breach. + +Breaches in dams are typically modeled as triangles, trapezoids, or rectangles; but rectangles are used most often, since the resulting ODEs (5)-(6) are solved relatively easily [Singh 1996]. For simplicity, we model the breach as a rectangle with constant width $b$ such that it erodes only in the vertical direction. Thus, the area of the breach is given by + +$$ +A _ {b} = b (H - Z). \tag {7} +$$ + +Substituting (7) into (5) and rewriting (6) with $\phi = 1$ and $\beta = 1/2$ gives + +$$ +\frac {d H}{d t} = - \frac {\alpha b}{A _ {s}} (H - Z) ^ {\frac {3}{2}}, \quad \frac {d Z}{d t} = - \gamma \alpha (H - Z) ^ {\frac {1}{2}}. \tag {8} +$$ + +Equations (8) admit the solution + +$$ +H (Z) = Z + q + C \exp \left(\frac {- \left(Z _ {0} - Z\right)}{q}\right), \tag {9} +$$ + +$$ +t (Z) = \frac {2 \sqrt {q} \operatorname {a r c t a n h} \left(\sqrt {\frac {H (Z) - Z}{q}}\right)}{\gamma \alpha} - D, +$$ + +where $q = A_s \gamma / b$ and $C = H_0 - Z_0 - q$ , $H_0$ and $Z_0$ are the initial values of $H$ and $Z$ at $t = t_0$ , and $D$ is a constant of integration determined from the initial conditions. The quantity $Z(t)$ is defined implicitly by (9), and $H(t)$ can then be recovered from (9). Then the flow through the breach, $Q_0$ can be determined from (3) and (5): + +$$ +Q _ {0} = - \alpha \left[ q + C \exp \left(\frac {- \left(Z _ {0} - Z\right)}{q}\right) \right] ^ {1 / 2} A _ {b}. \tag {10} +$$ + +When $Z(t) = 0$ at some time $\tilde{t}$ , the dam must stop eroding and from (8) we obtain + +$$ +\frac {d H}{d t} = - \frac {\alpha b}{A _ {s}} H ^ {3 / 2}, \tag {11} +$$ + +resulting in + +$$ +H (t) = \left(\frac {1}{\sqrt {H _ {0}}} + \frac {\alpha b}{2 A _ {s}} \left(t - t _ {0}\right)\right) ^ {- 2} \quad \text {f o r} t \geq \tilde {t}. \tag {12} +$$ + +Figure 2 graphs $Q$ and $Z$ vs. time. 
The discontinuity of the derivative at time $\tilde{t} \approx 2\mathrm{h}$ is the transition between these two solutions. + +![](images/d6b64a50f3c096df87c680cbb18232195da5e80650bad7ac8abf11cb79aeec43.jpg) +Figure 2. Flow through dam and height of breach bottom as breach enlarges. + +![](images/a2f0cf047e0bf66a7130a10634ceccdbc52c53948d88b1058003109afbda1e83.jpg) + +# Flow Between Reaches and the Manning Equation + +We select reaches so that the river valley at their junctions is relatively prismatic and narrow. Assuming that the flow in a flood would be regulated by the rate at which water can flow through these narrows, we model the river as a series of pools, one flowing into the next. + +Traditionally, flow in a floodplain is analyzed as the flow in a prismatic channel using the Manning equation + +$$ +u = \frac {1}{n _ {m}} \left(\frac {A}{P _ {w}}\right) ^ {2 / 3} \sqrt {S}, \tag {13} +$$ + +where + +$u$ is the mean flow velocity, + +$n_m$ is determined experimentally for each channel, + +$A$ is the cross-sectional area of the flow channel, + +$P_{w}$ is the wet perimeter of the channel cross section, and + +$S$ is the slope of the energy line. + +There is no theoretical basis for the Manning equation; however, it has been extensively verified experimentally. Its primary advantage is the amount of information available on estimating Manning coefficients, $n_m$ [Chanson 2004]. We apply it in our model because we can estimate $n_m$ for the Saluda from data for other floodplains. The prismatic nature of the narrows means that we can apply the Manning equation without correcting for channel irregularities. Typical values of $n_m$ are 0.5 for a brush-covered floodplain and 1.5 for a tree-covered one [Chanson 2004]. Assuming that our floodplain is somewhere between, we choose a moderate value of $n_m = 1$ . + +We set $S$ , the slope of the energy line, equal to the slope of the valley floor. 
This is equivalent to assuming that the depth and speed of the water are constant with respect to flow direction in each narrows. Because of this, our model will be most accurate when radical changes in volume occur on a time scale greater than the time required to pass through the narrows. From the propagation rates observed, the time required for the water to pass through each of the narrows is on the order of $0.1\mathrm{h}$ . The flood that we wish to consider rises sharply for $0.5\mathrm{h}$ , stays steady for $1.5\mathrm{h}$ , and then trails off gradually (see Figure 2). Our model is least accurate for the steepest part of the initial rise but ultimately describes most of the flood well. + +We estimate the slope of the channel out of each reach from USGS topographical maps [USGS 1971; 1994; 1997]: + +$$ +S _ {1} = \frac {1}{1 2 0 0}, \quad S _ {2} = \frac {1}{8 0 0}, \quad S _ {3} = \frac {1}{6 0 0}, \quad S _ {4} = \frac {1}{8 0 0}. \tag {14} +$$ + +We estimate $S_4$ , the slope of the final outflow channel, conservatively so as to produce a worst-case scenario of the flooding of the basin that contains Columbia. + +Our topographical models of the river basin allow us to establish one-to-one correspondences between the volume of water in each reach, the cross-sectional area and wet perimeter of its outlet, and the height of the water in the reach. These correspondences define the cross-sectional area and wet perimeter of the outflow narrows as functions of volume; we designate these functions as $A_{n}(V)$ and $P_{n}(V)$ . 
Noting that for a given channel cross section the flow $Q$ satisfies

$$
Q = u A_{n}, \tag{15}
$$

where $u$ is the mean water velocity, (13) can be restated as a constraint on (2):

$$
Q_{n} = A_{n} u_{n} = \frac{A_{n}}{n_{m}} \left(\frac{A_{n}}{P_{n}}\right)^{2/3} \sqrt{S_{n}}, \tag{16}
$$

where $A_{n} = A_{n}(\rho V)$, $P_{n} = P_{n}(\rho V)$, and $V = V(t - \zeta)$.

We introduce the parameters $\rho$ and $\zeta$ to calibrate the model; we determine them subsequently from observational data.

$\rho$ describes how friction and surface features of the reach prevent the entire volume of water from flowing downstream.

$\zeta$ describes the amount of time that it takes water to pass through a reach. We assume $\zeta$ to be constant because our reaches have constant length.

# Selection and Analysis of Reaches

We use $3^{\prime}$ topographical data [National Geophysical Data Center 2005] to construct a topographical model. To establish correspondences between the volume $V_{n}$ of water contained in each basin and the area $A_{n}$ and wet perimeter $P_{n}$ of their outflow channels, we intersect the topographical model of each basin with a plane representing the water level and integrate numerically over the appropriate regions. We construct a database of these values in terms of height, to be used as we simulate (2). Figure 3 displays one such profile, for reach 4, with volume and area next to the topographical map of the basin.

Our accuracy is limited by the 0.2-m height resolution of the NGDC data. This does not significantly affect the accuracy of our volume estimates, but the area and wet perimeter estimates display noticeable discontinuities for small volumes. (The oscillatory behavior seen later in Figure 4 is caused by this.) Our model could be improved by conducting better surveys of the outflow channel of each reach; since we are primarily interested in large volumes, we proceed.
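Once profile functions $A_n(V)$ and $P_n(V)$ are available, integrating (2) under the Manning constraint (16) is straightforward. In the sketch below, the power-law profiles, initial volumes, and inflow hydrograph are illustrative stand-ins for the database built from the topographical data; only $n_m = 1$, the slopes (14), and $\rho = 0.01$ follow the text.

```python
import math

n_m = 1.0                          # Manning coefficient for the floodplain
S = [1/1200, 1/800, 1/600, 1/800]  # outflow-channel slopes, eq. (14)
rho = 0.01                         # volume-scaling calibration parameter

def A_of(v):   # assumed outlet cross-section area (m^2) as a function of volume (m^3)
    return 0.001 * v ** 0.7

def P_of(v):   # assumed wet perimeter (m) as a function of volume (m^3)
    return 2.0 * v ** 0.35

def outflow(V, s):
    # Manning constraint (16): Q_n = (A_n/n_m) * (A_n/P_n)^(2/3) * sqrt(S_n)
    A, P = A_of(rho * V), P_of(rho * V)
    return 0.0 if P == 0.0 else (A / n_m) * (A / P) ** (2 / 3) * math.sqrt(s)

def route(Q_in, dt=60.0, steps=5000):
    # Euler integration of dV_n/dt = Q_(n-1) - Q_n for the four reaches;
    # Q_in(t) plays the role of Q_0, the flow through the dam breach.
    V = [1.0e6] * 4                # assumed initial reach volumes (m^3)
    for i in range(steps):
        Q_prev = Q_in(i * dt)
        for n in range(4):
            Q_n = outflow(V[n], S[n])
            V[n] = max(V[n] + dt * (Q_prev - Q_n), 0.0)
            Q_prev = Q_n           # outflow of reach n is inflow of reach n+1
    return V

# e.g. a steady 60 m^3/s inflow, a typical river flow at the dam
V_final = route(lambda t: 60.0)
```

With real $A_n(V)$, $P_n(V)$ tables this same loop would reproduce the paper's pool-to-pool routing; here it only illustrates the structure of the computation.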
+ +To summarize, our model places the following requirements on the selection of the reaches: + +- The inflow and outflow channels must be narrower than the rest of the reach. +- The channels must also be prismatic. +- Water should take approximately the same amount of time to flow down each reach. + +To satisfy these conditions, we construct reaches as follows: + +- Reach 1: The 6-km section from Saluda Dam to the narrows at Correly Island. +- Reach 2: The 6-km section from Correly Island to the narrows just below Interstate 20. +- Reach 3: The 6-km section from just below Interstate 20 to the narrows just above the Saluda's outlet into the Congaree river. +- Reach 4: A large section of the Congaree River Basin including the area around the Capitol and a 6-km stretch of downstream channel. + +Reaches 1, 2, and 3 satisfy our requirements extremely well. The Congaree River valley widens rapidly into a floodplain below Columbia and there are no natural narrows. We end our basin at a point that is somewhat narrow and satisfies the requirement for water flow time. A large portion of the broad river valley is included to allow for upstream flooding. + +# Calibration and Sensitivity Analysis + +The US Geological Survey (USGS) has river observation stations just below the dam (at $34^{\circ}03'03''$ N $81^{\circ}12'35''$ W) and just above Columbia (at $34^{\circ}00'50''$ N $81^{\circ}05'17''$ W). Each logs the last 31 days' flows [USGS 2005]. On 6 January 2005, the station at Saluda Dam registered a surge of $30,000\mathrm{m}^3$ . Flow rates jumped from $27\mathrm{m}^3/\mathrm{s}$ to $700\mathrm{m}^3/\mathrm{s}$ and then receded over a 5-h period. A similar surge + +![](images/724ac58e5151307a04b4082f187262e66bb405bb1436847e014132a0d67d1a28.jpg) +Figure 3. Topographical map and volume and outlet channel area profiles for Reach 4. 
![](images/8247079f753b1c44f741be5e752fd1462f59af9b1a60fa44e877edc2b0cc2f80.jpg)

![](images/f1d684448f9bcb070acd899d1eb1a2c3a73b8ca93b341e83a9e28340193fe7c9.jpg)

![](images/79a1f8783572b4341950e13f37e57b8db2ecf4b547fb056097c09c0d8a39fe47.jpg)
Surge down the Saluda River on 6 January 2005. The solid line is flow at the upstream station and the dotted line is flow at the downstream station.
Figure 4. Actual surge and simulated surge.

![](images/ac0af9e5e78e155511bc3461576e83f3cd06d78334c9f019afe7bf84eea8c06a.jpg)
Our simulation of a similar surge.

was recorded by the Columbia observation station $1.5\mathrm{h}$ later (see Figure 4). We use this event to calibrate our model.

We first calibrate the model to reproduce a typical river flow at the dam of $60\mathrm{m}^3/\mathrm{s}$ and systematically vary $\rho$. We find that our model displays stable but oscillatory behavior for $\rho < 0.1$. The oscillations can be traced to jumps in the flow rates between the reaches, and we attribute them to inaccuracy of our channel profiles for small volumes.

For $\rho = 0.1$, our system becomes unstable when large volumes are introduced. Since we are indeed interested in a large flood, we set $\rho = 0.01$. This value is consistent with the idea that the ground-cover density, and thus the amount of water stored in the ground cover, increases with distance from the main river channel. The small size of $\rho$ reflects the fact that in our equation it scales volume.

Once our model is stable for typical flow volumes, we introduce a "flood" in the form of a Gaussian bump in $Q_{0}$ of similar shape to the 6 January event. We adjust $\zeta$ until this event arrives at the bottom of reach 3 in $1.5\mathrm{h}$. This occurs when $\zeta = -0.5$, consistent with the three reaches that must be traversed. In calibrating our model for a large flood by using a small one, we assume that the effect of $\zeta$ is independent of flood size.
A better calibration could be achieved by analyzing observations of a larger flood, but such data are not available from the observation stations [USGS 2005]. + +# Predictions + +Our model predicts that the flood waters would travel slowly down the Saluda River Valley, producing extremely high levels of flooding in the upper reaches of the Saluda near Rawls Creek (reach 1) and near Columbia (reach 4). Our results are summarized in Figure 5 and in Tables 1-2. Our numerical simulations suggest that Rawls Creek would flood approximately $32\mathrm{m}$ but the State Capitol building would remain dry. + +Table 1. The maximum flood volumes in each reach and their corresponding elevations above sea level. + +
| Reach | Max. Flood Vol. ($\times 10^8 \mathrm{m}^3$) | Max. Flood Elev. (m) | Avg. River Elev. (m) |
|-------|----------------------------------------------|----------------------|----------------------|
| 1     | 19                                           | 87                   | 58                   |
| 2     | 15                                           | 79                   | 55                   |
| 3     | 3.5                                          | 68                   | 50                   |
| 4     | 2.4                                          | 68                   | 45                   |
+ +Table 2. Elevation above sea level of points of interest. + +
| Point of Interest | Elevation (m) |
|-------------------|---------------|
| Lake Murray | 120 |
| Saluda River (Just Below Dam) | 60 |
| Rawls Creek (Reach 1) | 55 |
| Saluda River (Bottom of River) | 45 |
| Capitol (Reach 4) | 100 |
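The headline predictions follow directly from these two tables: a landmark floods when the maximum flood elevation of its reach exceeds the landmark's own elevation. A minimal check of that comparison, using the tabulated values:

```python
# Maximum flood elevation per reach (Table 1) and landmark elevations
# (Table 2), all in metres above sea level.
max_flood_elev = {1: 87, 2: 79, 3: 68, 4: 68}
landmarks = {"Rawls Creek": (1, 55), "Capitol": (4, 100)}

for name, (reach, elev) in landmarks.items():
    rise = max_flood_elev[reach] - elev
    if rise > 0:
        print(f"{name}: floods by {rise} m")   # Rawls Creek: floods by 32 m
    else:
        print(f"{name}: stays dry")            # Capitol: stays dry
```

This reproduces the roughly $32\mathrm{m}$ rise at Rawls Creek and the dry Capitol reported in the text.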
+ +![](images/38da2a143c95c2b4423d9c97a272fe1d3d3b0d7da6b5d7ffb8eadf56c80f2cc1.jpg) +Figure 5. Volumes predicted in each reach as a gradual flood propagates. + +# Rapid Failure: Dam Break Wave + +The complete annihilation of a dam results in a highly turbulent, unsteady flow that is commonly known as a dam break wave. The removal of the dam results in the creation of a retreating (negative surge) wave front in response to the sudden reduction in flow depth [Chanson 2004]. In the case of a dam separating two bodies of water, the intersection of the resulting negative surge with a relatively slow moving body of water results in a discontinuity of velocity. Since momentum must be preserved, these two bodies of water cannot intersect without the creation of a second wave moving in a direction opposite to that of the first wave; this second wave is a positive surge (see Figure 6). + +The propagation properties of the wave resulting from the intersection of the positive and negative surges can be described using equations developed by Saint-Venant. These equations form a coupled system of one-dimensional quasi-linear hyperbolic partial differential equations describing varied unsteady + +![](images/7eee80ea86a195fd0cbef4a8b4f6a0e6c75be6a5c613bf030c7f65e058ff70d8.jpg) +Figure 6. The shape of the wave just after the dam fails. The dam is located at $x = 0$ . Note the discontinuity between the positive and negative surges at $x = 100 \, \text{m}$ . 
channel flow [Fread 1971]:

$$
\frac{1}{g} \frac{\partial u}{\partial t} + \frac{u}{g} \frac{\partial u}{\partial x} + \frac{\partial d}{\partial x} + \left(S_{f} - S_{0}\right) = 0, \tag{17}
$$

$$
\frac{\partial d}{\partial t} + \frac{\partial (du)}{\partial x} = \frac{\partial d}{\partial t} + d \frac{\partial u}{\partial x} + u \frac{\partial d}{\partial x} = 0,
$$

where $u$ is the mean velocity of the wave, $d$ is the flow depth, $x$ is the direction of propagation, $t$ is time, and $g$ is the acceleration due to gravity. $S_{f}$ is the friction slope and $S_{0}$ is the slope of the canal.

The first equation is known as the equation of motion and describes the contribution of various forces to wave propagation, each of which is represented by a separate term:

- the first term describes the local inertia of the wave,
- the second term describes the convective inertia of the wave,
- the third term describes the pressure differential, and
- the fourth term describes the friction and bed slope.

The second equation, known as the equation of continuity, expresses conservation of mass.

The Saint-Venant equations assume the following [Chanson 2004; Fread 1971]:

- The flow is one dimensional; motion occurs only in the direction of propagation.
- Vertical acceleration is negligible, resulting in a hydrostatic pressure distribution.
- Water is incompressible.
- Flow resistance is the same as for uniform flow, $S_f = S_0$.

We are interested in describing the flood wave attenuation. In our model, we assume that the total volume of water impounded by the Saluda Dam is released as a single giant surge. The final value to which the peak discharge is attenuated is independent of the magnitude of the initial peak discharge [Ponce et al. 2003]. This allows for generalization of results calculated by our model to waves of arbitrary size.

# Solutions to the Saint-Venant Equations

Ponce et al.
[2003] derive a solution to (17) in the case of a dam failure through sinusoidal perturbation of the steady-state solution. Using spectral analysis, it can be shown that the peak discharge at position $X$ has magnitude

$$
q_{p} = q_{p_{0}} \exp \left(\frac{-\alpha X}{L_{0}}\right), \tag{18}
$$

where

$$
\begin{array}{l} \alpha = \frac{2\pi}{m^{2}} \Big(\frac{L_{0} d_{0} B}{V_{w}}\Big) \Big[\zeta - \big(\frac{C - A}{2}\big)^{1/2}\Big], \quad A = \frac{1}{F_{0}^{2}} - \zeta^{2}, \quad C = \Big[A^{2} + \zeta^{2}\Big]^{\frac{1}{2}}, \\ \zeta = \frac{1}{\sigma F_{0}^{2}}, \qquad\qquad \sigma = \left(\frac{2\pi}{L}\right) L_{0}, \quad F_{0} = \frac{u_{0}}{\sqrt{g d_{0}}}, \\ \end{array}
$$

with

$F_{0}$ the Froude number,

$u_{0}$ the steady equilibrium mean flow velocity,

$L$ the perturbation wavelength,

$L_{0}$ the reference channel length,

$d_0$ the steady equilibrium flow depth,

$B$ the average reach width,

$V_{w}$ the reservoir volume,

$g$ the acceleration due to gravity, and

$m$ the Manning friction coefficient.

The equation for unit width discharge (speed $\times$ depth) is

$$
q = \frac{N}{N + 1} V_{\max} d, \tag{19}
$$

where $N = 0.4\sqrt{8/f}$, $f$ is the Darcy friction factor, $V_{\mathrm{max}}$ is the maximum flow velocity, and $d$ is the flow depth.

We compute the wave speed using the empirical data in Figure 4 and from it estimate the Darcy friction factor $f$ for the Saluda River Valley. From (19), we also estimate the depth of the wave.

# Predictions

Using estimated values of the depth of the water impounded by the dam, the depth of the Saluda River in close proximity to the dam, and the volume of the Saluda River Basin, we approximate the depth of a dam break wavefront as a function of distance from the damsite. Figure 7 displays the results.
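Equation (18) predicts exponential decay of the peak with distance from the dam. A minimal numeric sketch of the corresponding depth attenuation follows; as a hypothetical shortcut, the decay constant is back-fitted to the endpoint depths shown in Figure 7 rather than computed from $\alpha$ and $L_{0}$.

```python
import math

def wave_depth(x_km, d0=65.0, d_ref=4.0, x_ref=20.0):
    # Exponential attenuation in the spirit of equation (18). The decay
    # rate k is back-fitted (hypothetically) so depth(x_ref) = d_ref;
    # in the model proper it would come from alpha and L0, which depend
    # on the channel parameters listed above.
    k = math.log(d0 / d_ref) / x_ref
    return d0 * math.exp(-k * x_km)
```

Here `wave_depth(0.0)` gives the 65 m initial depth and `wave_depth(20.0)` gives approximately the 4 m depth at the Capitol's distance from the dam.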
The depth of the dam break wave decreases exponentially from an initial value of $65\mathrm{m}$ to a final value of approximately $4\mathrm{m}$ at a distance of $20\mathrm{km}$ from the dam site. This distance roughly corresponds to the distance between the Saluda Dam and the Capitol Building.

Since the Capitol Building sits approximately $50\mathrm{m}$ above the Saluda River, the possibility of the wave reaching the Capitol Building is extremely small. The probability is further decreased by the simplistic geometry of our model, which approximates the river bed as rectangular and of uniform width and texture. In reality, the river exhibits numerous contractions and expansions and is far from uniform in texture. These qualities would further attenuate the flow depth of the propagating wave.

![](images/85bbffb0c709eac4917aab8066df3c74ec546ccef816486a4ed8c372d39357c7.jpg)
Figure 7. Predicted maximum depth of the floodwave for the upper Saluda River (left) and the entire Saluda River (right).

![](images/653a371b08358ed6d07053d6cd5ebd213305159fc12a41f7376c92cc4f90c523.jpg)

Our model predicts a wave $40\mathrm{m}$ high in the vicinity of Rawls Creek. A rapid dam failure would cause significant intrusion of flood waters into the Rawls Creek basin.

# Conclusions

In either gradual or rapid failure of the Saluda Dam, the effect on the downstream areas would be severe. Our models predict that the waters near Rawls Creek could rise by as much as $40\mathrm{m}$ (rapid dam failure) or $32\mathrm{m}$ (gradual dam failure), protruding far into the Rawls Creek basin and other drainages. The waters would not be as high near Columbia and would not reach the Capitol Building. However, damage to low-lying areas would be severe, since the water might rise as much as $23\mathrm{m}$.

Several improvements could be made to the models:

# Gradual Failure Model

- This model successfully describes small surges in the Saluda River.
However, extrapolating small events to larger events is inherently problematic; so for a flood of the magnitude that we are considering, we should test the model against larger events in the Saluda and/or large events in comparable rivers.
- Estimates in our erosion model could be strengthened with better information about the material from which the dam is constructed.
- Better profiles of the outlet channel of each reach would allow us to apply the Manning Equation more accurately.

# Rapid Failure Model

- We calibrate this model, too, from a small surge in the Saluda River. A more comprehensive study of waves from other breached dams would provide better data for calibration of this model for large events.
- Access to the river site would provide better estimates for friction factors of the floodplain.
- This model is intended to place an upper bound on the magnitude of the flood wave. Further consideration of factors such as turns in the flood course would increase the accuracy of this model.

# References

Chanson, Hubert. 2004. The Hydraulics of Open Channel Flow: An Introduction. 2nd ed. Amsterdam, Holland: Elsevier.

Chow, V.T. 1959. Open-Channel Hydraulics. New York: McGraw-Hill.

Fread, D.L. 1971. Transient hydraulic simulation: Breached earth dams. Doctoral dissertation, University of Missouri-Rolla.

National Geophysical Data Center Coastal Relief Model. 2005. GEODAS Grid Translator—Design-a-Grid. http://www.ngdc.noaa.gov/mgg/gdas/gd designingagrid.html.

Ponce, V.M., and D.B. Simons. 1977. Shallow wave propagation in open channel flow. Journal of the Hydraulics Division 103 (12): 1461-1476.
Ponce, V.M., A. Taher-shamsi, and A.V. Shetty. 2003. Dam-breach flood wave propagation using dimensionless parameters. Journal of the Hydraulics Division 129 (10): 777-782.
Pugh, C.A., and F.W. Gray, Jr. 1984. Fuse Plug Embankments in Auxiliary Spillways—Developing Design Guidelines and Parameters. Bureau of Reclamation Reports.
Denver, CO: U.S. Dept. of the Interior.
Singh, Vijay P. 1996. Dam Breach Modeling Technology. Dordrecht, Holland: Kluwer Academic Publishers.
South Carolina Electric & Gas Company. 1995. Lake Murray Shoreline Management Program. http://www.scana.com/NR/rdonlyres/e56porb5wwxxtjqfz3i4dci3pc7j3o2ans3t7c2ydaoh5m7myuigj5dz3cnmdw7 qpplewtvnq4j3qy3lhhrmjvmqawh/shorelinemanagementplan.pdf.
Workshop on Seismic Analysis and Design of Earth and Rockfill Dams. 1986. Lecture notes. 2 vols. New Delhi, India: Central Board of Irrigation and Power.
USGS (Department of the Interior, U.S. Geological Survey). 1971. Irmo Quadrangle South Carolina 7.5 Minute Series (Topographic). Photorevised 1990. Denver, CO: Department of the Interior, U.S. Geological Survey.
_______. 1994. South West Columbia Quadrangle South Carolina 7.5 Minute Series (Topographic). Denver, CO: Department of the Interior, U.S. Geological Survey.
_______. 1997. Columbia North Quadrangle South Carolina 7.5 Minute Series (Topographic). Denver, CO: Department of the Interior, U.S. Geological Survey.
_______. 2005. USGS Real-Time Water Data for the Nation. Stations 02168504 (Saluda River below Lake Murray Dam...) and 02169000 (Saluda River near Columbia, SC). NWISWeb Data for the Nation. http://waterdata.usgs.gov/nwis/current/?type=flow.

![](images/ad659de2965b81052488a8e556e2f63043404c9443f1af42938ea750533db85f.jpg)
Prof. Rekha Thomas, Ryan Bressler, Christina Polwarth, Braxton Osting, and Prof. Jim Morrow. Prof. Morrow coached the other MCM team from the University of Washington.

# Judge's Commentary: The Outstanding Flood Planning Papers

Daniel Zwillinger

Raytheon

528 Boston Post Road

Sudbury, MA 01776

# Introduction

Flood planning for dams is a real-life activity. An analysis of the Saluda Dam at Lake Murray determined that the area below the dam needed to be protected in the event of dam failure, and a second dam is being built. Investigation conclusions can be found on the Web.
Since flood plan analysis is complex, mathematical modeling is appropriate and useful. A sequence of models is typically used to understand a phenomenon. For the Saluda Dam, a first model might have a straight river, the dam disappearing instantaneously, and a simple model of water flow. More detailed effects could then be added: the riverbed bends, the riverbed gradient is not uniform, perhaps the dam breaks slowly, perhaps the dam is breached in the center, etc. Starting with a complicated model may make it difficult to determine if the results are reasonable, since there may be little to validate against. A series of models that allows additional effects to be incorporated sequentially is preferable; it may facilitate creation of a sensitivity analysis.

Water flow in open channels has traditionally been modeled by the Saint-Venant equations, which are nonlinear partial differential equations. Many teams started by numerically solving these equations and got immersed in details. (The MCM is not a contest in computation!) Often these teams focused only on the water flow and spent little effort modeling the dam break itself. Although there are many models for dam failure, a dam "vanishing" completely is rather simplistic. (There are a few well-defined dam failure mechanisms. Teams that considered different mechanisms tended to do better than teams that used simplistic assumptions.)

Many teams started with static models, but most recognized that these models do not yield reasonable results. The dam-break problem seemed to require a dynamic approach. The approaches varied considerably, but included:

- Continuous technique: use of sophisticated equations, such as Saint-Venant's equation. (Note: Copying an equation derivation achieves little. Pointing out assumptions needed to obtain an equation may be useful.)
- One-dimensional discrete techniques: breaking the Saluda River up into prisms and computing flow from one to the next.
Rectangles and trapezoids were popular choices. +- Two-dimensional discrete techniques: cellular automata using USGS data and computing flows from neighboring cells. The cellular approach can be difficult to understand and to implement correctly. + +Widely varying techniques obtained approximately the same result. Teams that used more than one approach tended to do better. The usual answers to the specific test questions are: No, the State Capitol doesn't flood, and Rawls Creek backs up about 2.5 miles. + +The outstanding papers are remarkable in that each used a fundamentally different technique: + +- The University of Washington team pursued an analytic approach. They considered two models, obtained real data, and calibrated their model. +- The Harvey Mudd team numerically solved the Saint-Venant equations. +- The University of Saskatchewan team considered a model, rejected it as being unrealistic, and then numerically solved a dynamic model that they created themselves. + +Some overall comments on the submissions: + +- Several teams validated their results from evacuation plans and recorded flood events. Many others did not do enough reality checking; a back-of-the-envelope computation frequently would have helped. +- Many teams had perhaps overly complicated models, involving many variables and parameters. +- The reference for a Web page should list the date of access. + +# About the Author + +Daniel Zwillinger attended MIT and Caltech, where he obtained a Ph.D. in applied mathematics. He taught at Rensselaer Polytechnic Institute, worked in industry (Sandia Labs, Jet Propulsion Lab, Exxon, IDA, Mitre, BBN), and has been managing a consulting group for the last dozen years. He has worked in many areas of applied mathematics (signal processing, image processing, communications, and statistics) and is the author of several reference books. 
# The Booth Tolls for Thee

Adam Chandler

Matthew Mian

Pradeep Baliga

Duke University

Durham, NC

Advisor: William G. Mitchener

# Summary

We determine the optimal number of tollbooths for a given number of incoming highway lanes. We interpret optimality as minimizing "total cost to the system," the time that the public wastes while waiting to be processed plus the operating cost of the tollbooths.

We develop a microscopic simulation of line-formation in front of the tollbooths. We fit a Fourier series to hourly demand data from a major New Jersey parkway. Using threshold analysis, we set upper bounds on the number of tollbooths. This simulation does not take bottlenecking into account, but it does inform a more general macroscopic framework for toll plaza design.

Finally, we formulate a model for traffic flow through a plaza using cellular automata. Our results are summarized in the formula for the optimal number $B$ of tollbooths for $L$ lanes: $B = \lfloor 1.65L + 0.9\rfloor$.

# Previous Work in Traffic Theory

Most models for traffic flow fall into one of two categories: microscopic and macroscopic.

Microscopic models examine the actions and decisions made by individual cars and drivers. Often these models are called car-following models, since they use the spacing and speeds of cars to characterize the overall flow of traffic.

Macroscopic models view traffic flow in analogy to hydrodynamics and the flow of fluid streams. Such models assess "average" behavior, and commonly-used variables include steady-state speed, flux of cars per time, and density of traffic flow.

Some models bridge the gap between the two kinds, including the gas-kinetic model, which allows for individual driving behaviors to enter into a macroscopic view of traffic, much as ideal gas theory can treat both individual particles and the collective gas [Tampere et al. 2003].
+ +The tollbooth problem involves no steady speed, so macroscopic views may be tricky. On the other hand, bottlenecking is complex and tests microscopic ideas. + +An $\mathbf{M}|\mathbf{M}|n$ queue seems appropriate at first: Vehicles arrive with gaps (determined by an exponential random variable) at $n$ tollbooths, with service at each tollbooth taking an exponential random variable amount of time [Gelenbe 1987]. We incorporate aspects of the situation from a small scale into a larger-scale framework. + +# Properties of a Successful Model + +A successful toll-plaza configuration should + +- maximize efficiency by reducing customer waiting time; +- suggest a reasonably implementable policy to toll plaza operators; and +- be robust enough to handle efficiently the demands of a wide range of operating capacities. + +# General Assumptions and Definitions + +# Assumptions + +- All drivers act according to the same set of rules. Although the individual decisions of any given driver are probabilistic, the associated probabilities are the same for all drivers. +- Bottlenecking downstream of the tollbooths does not hinder their operation. Vehicles that have already passed through a tollbooth may experience a slowing down due to the merging of traffic, but this effect is not extreme enough to block the tollbooth exits. +- The number of highway lanes does not exceed the number of tollbooths. +- All tollbooths offer the same service, and vehicles do not distinguish among them. +- The amount of traffic on the highway is dictated by the number of lanes on the highway and not by the number of tollbooths. Changing the number of tollbooths does not affect "demand" for the roadway. +- The number of operating tollbooths remains constant throughout the day. + +# Terms and Definitions + +- A "highway lane" is a lane of roadway in the original highway before and after the toll plaza. +- Influx is the rate (in cars/min) of cars entering all booths of the plaza. 
- Outflux is the rate (in cars/min) of cars exiting all booths of the plaza.

# Optimization

We seek to balance the cost of customer waiting time with toll plaza operating costs.

- The daily cost $C$ of a tollbooth is the total time value of the delays incurred for all individuals (driver plus any passengers) plus the cost of operation of the booth. The tolls and the startup cost of building the plaza are not part of this cost.
- $a$ is the average time-value of a minute for a car occupant.
- $\gamma$ is the average car occupancy.
- $N$ is the total number of (indistinct) tolls paid over the course of one day.
- $L$ is the number of lanes entering and leaving a plaza.
- $W$ is the average waiting time at a tollbooth, in minutes.
- $B$ is the number of booths in the plaza.
- $Q$ is the average daily operating cost of a human-staffed tollbooth.

We seek a number of tollbooths $B$ that minimizes cost $C$.

The total waiting time over all cars is $WN$, so the total cost incurred by waiting time is $WaN\gamma$. General human time-value is cited as \$6/hour, or $a = \$0.10$ per minute [Boronico 1998]. The cost to operate a booth for a day is $QB$. The average annual operation cost for a human-staffed tollbooth is \$180,000, so we set $Q = \$180{,}000/365.25$ per day [Sullivan et al. 1994].

Reasoning that $W$ depends on $B$, we have

$$
C (B) = W a N \gamma + Q B.
$$

# Car Entry Rate

We fit a curve to mean hourly traffic-flow data from Boronico [1998]. To interpolate an influx rate for every minute during the day, we fit a Fourier series approximation, whose advantage is its periodicity, with period of one day. [EDITOR'S NOTE: We omit the table of data from Boronico [1998] and the 17-term expression for the approximating series.]

# Model 1: Car-Tracking Without Bottlenecks

# Approach

We seek an upper bound on the optimal number of booths for a particular number of lanes.
# Assumptions

- Each vehicle is looking to get through the toll plaza as quickly as possible, and the only factor that may cause Car A, which arrives earlier than Car B, to leave later than B is the random variable of service time at a tollbooth. In other words, cars do not make bad decisions about their wait times.
- Customers are served at a tollbooth at a rate defined by an exponential random variable (a common assumption in queueing theory [Gelenbe 1987]) with mean $12\mathrm{~s}$/vehicle, or 5 cars/min.
- Traffic influx occurs on a "per lane" basis, meaning that influx per lane is constant over all configurations with varying number of lanes.
- Bottlenecking occurs more frequently when there are more tollbooths, given a particular number of lanes. This implies that omitting bottlenecking from our model will cause us to overestimate the optimal number of tollbooths.
- There exists a time-saving threshold such that if the waiting time saved by adding another tollbooth is under this threshold, it is not worth the trouble and expense to add the tollbooth. We assume that if an additional tollbooth does not reduce the maximum waiting time over all cars by the same amount as the average time that it takes to serve a car at a tollbooth $(12\mathrm{~s} = 0.2\mathrm{~min})$, then it is unnecessary.
- An incoming car chooses the tollbooth that will be soonest vacated, if all are currently occupied. If only one is vacant, the car chooses that tollbooth. If multiple tollbooths are vacant, the car chooses the one vacated the earliest.
- Cars make rational decisions with the goal of minimizing their wait times.

# Expectations of the Model

- An additional booth should not increase waiting time.
- Each additional tollbooth offers diminishing returns of time saved.

# Development of Model

Cars arrive at the toll plaza at a rate described by the Fourier series approximation of the data from Boronico [1998].
Cars are considered inside the toll plaza (meaning that we begin to tabulate their waiting times) when they are either being served or waiting to be served. + +Service time does not count as waiting time; so if a car enters the toll plaza and there is a vacant tollbooth, its waiting time is 0. If there are no vacant toll-booths, cars form a queue to wait for tollbooths, and they enter new vacancies in the order in which they entered the toll plaza. Once a car has been served, it is considered to have exited its tollbooth and the toll plaza as a whole. + +Our car-tracking model does not factor in the cost of tollbooth operation. + +# Simulation and Results + +For $L$ highway lanes, $L \in \{1..8, 16\}$ , we ran the simulation for numbers $B$ of tollbooths up to a point where additional booths no longer have any noticeable effect on waiting time. We exhibit results for a 6-lane highway in Table 1. + +Table 1. Waiting times (in minutes) for six-lane simulations. + +
| Booths | Ave. wait | Ave. wait for wait > 0 | Max. wait | Marginal utility |
|--------|-----------|------------------------|-----------|------------------|
| 6  | 28    | 43   | 99   | N/A  |
| 7  | 12    | 28   | 55   | 44   |
| 8  | 6     | 17   | 32   | 23   |
| 9  | 2     | 8    | 16   | 16   |
| 10 | 0.25  | 1.22 | 2.78 | 13   |
| 11 | 0.02  | 0.17 | 0.75 | 2    |
| 12 | 0.004 | 0.07 | 0.31 | 0.44 |
| 13 | 0.001 | 0.04 | 0.27 | 0.04 |
The column "Marginal Utility" shows how much each additional booth reduces maximum waiting time. For the 13th booth, this value is $0.04\mathrm{min}$. To choose an optimal number of booths by threshold analysis, we seek the first additional booth that fails to reduce the maximum waiting time for a car by at least the length of the average tollbooth service time $(0.2\mathrm{min})$. So, based on our assumptions, it is unnecessary to build a 13th tollbooth for a toll plaza serving 6 lanes of traffic. Thus, we set $B = 12$ for $L = 6$. Table 2 shows the optimal number of tollbooths for various other numbers of lanes.

Table 2. Optimized number of booths.
| Lanes  | 1 | 2 | 3 | 4 | 5  | 6  | 7  | 8  | 16 |
|--------|---|---|---|---|----|----|----|----|----|
| Booths | 4 | 5 | 7 | 8 | 10 | 12 | 13 | 16 | 29 |
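The threshold analysis used to read Table 1 can be made explicit in a few lines; this sketch applies the rule to the simulated maximum waits for six lanes:

```python
# Maximum wait (min) by number of booths, from Table 1 (L = 6).
max_wait = {6: 99, 7: 55, 8: 32, 9: 16, 10: 2.78, 11: 0.75, 12: 0.31, 13: 0.27}

def optimal_booths(max_wait, threshold=0.2):
    # Stop once an extra booth fails to cut the maximum wait by at
    # least one mean service time (0.2 min).
    booths = sorted(max_wait)
    for prev, cur in zip(booths, booths[1:]):
        if max_wait[prev] - max_wait[cur] < threshold:
            return prev
    return booths[-1]

print(optimal_booths(max_wait))  # 12, as chosen in the text
```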
We also explore the situation of one booth per lane. Regardless of the number of lanes, we find average wait times of around $30\mathrm{min}$ (over $40\mathrm{min}$ for cars that wait at all), and maximum wait time of around $100\mathrm{min}$.

# Discussion

Our results match our expectations. The optimal number of booths increases with the number of lanes, each additional booth reduces waiting time, and additional booths yield diminishing returns in reducing waiting time.

The benefits of this rather simplistic model are its speed and the definite upper bounds that it offers.

# Model 2: Cost Minimization

# Approach

This model is concerned less with the details of individual vehicular motion and decision-making than with the general aggregate effect of the motions of the cars. We monitor traffic over the course of one day.

For instance, there is no need for this model to decompose analytically the situation of two cars trying to merge into the same lane. Instead, it recognizes that beyond a certain threshold of outflux from the booths, some bottlenecks will occur.

Also, this model addresses the cost of daily operation of the plazas.

# Assumptions, Variables, and Terms

- The average waiting time per car in the toll plaza, $W$, comprises time in line $(W_{1})$, service time $(W_{2})$, and bottlenecking $(W_{3})$.
- $F_{i}$ (a function of time) is the influx (cars/min) to the plaza from one lane; $F_{o}$ is the outflux (cars/min) from the booths.
- $r$ is the maximum potential service rate (cars/min).
- There is an outflux barrier, $K$ (cars/min), above which bottlenecking takes place. We take it to be linear in $L$ and independent of $B$, and we call it the bottlenecking threshold.

# Development

From the definitions, we have $W_{2} = 1/r$. Both $W_{1}$ and $W_{3}$ depend on $B$.

The average time in line, $W_{1}$, begins to accumulate when the influx of traffic exceeds the toll plaza capacity (see Figure 1).
The influx is $LF_{i}(t)$ and the maximal rate of service is $Br$. We integrate over time to calculate how many cars were forced to wait in line.

![](images/fe52ece59b5d6185d9c557e9c6ebdcbd953f4d5e788f1ef765af4c53f7643b7a.jpg)
Figure 1. The area below the curve and above the line represents cars in line.

Integrating again (over time) gives us the total waiting time for all those cars (with 3600 as a scale factor for time units), and dividing by the total number of cars gives the average waiting time:

$$
W_{1} = \frac{3600}{N} \int_{0}^{24} \int_{0}^{t} \max \big(L F_{\mathrm{i}}(\tau) - B r, 0\big) \, d\tau \, dt.
$$

We obtain $W_{3}$ in similar fashion:

$$
W_{3} = \frac{3600}{N} \int_{0}^{24} \int_{0}^{t} \max \left(F_{\mathrm{o}}(\tau, B) - K, 0\right) d\tau \, dt.
$$

The problem is to determine the variable(s) that $K$ depends on. First, $K$ is not directly dependent on $B$, since bottlenecking should only be a result of general outflux from the booths into $L$ lanes. Instead, $K$ depends indirectly on the number of booths, because $K$ depends on outflux $F_{0}$, which in turn depends on $B$. Also, $K$ can be considered a linear function of $L$, because $L$ is directly proportional to influx, which, by the law of conservation of traffic, must equal outflux in the aggregate.

# Simulation and Results

We use the same data and Fourier series for traffic influx as in Model 1. We focus our attention on the case $L = 6$; other values are analogous.

We use Mathematica to integrate numerically the expression for $W_{1}$ for a given $L$ and $r = 5$ cars/min. ($N$ comes from integration of the influx expression.) We do the calculation for values of $B$ ranging from $L$ to $L + 7$ (since $L + 7$ is usually greater than the upper bound from Model 1) with step size 0.25. We fit a quartic polynomial to the resulting points $(B, W_{1}(B))$ to get $W_{1}$ as a function of $B$.
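The double integral for $W_{1}$ is straightforward to evaluate numerically (the paper uses Mathematica). The sketch below uses a plain Riemann sum; the influx $F_i$ here is a hypothetical smooth stand-in for the omitted 17-term Fourier series, so the numbers it produces are illustrative only.

```python
import math

def W1(B, L=6, r=5.0, steps_per_hour=60):
    # Riemann-sum version of the W1 double integral (average minutes in
    # line per car). F_i is a hypothetical stand-in (cars/min per lane)
    # for the paper's fitted Fourier-series influx.
    def F_i(t):
        return 3.0 + 2.0 * math.sin(2.0 * math.pi * (t - 7.0) / 24.0)

    h = 1.0 / steps_per_hour
    N = sum(L * F_i(k * h) * 60.0 * h for k in range(24 * steps_per_hour))
    queue, total = 0.0, 0.0
    for k in range(24 * steps_per_hour):
        t = k * h
        queue += max(L * F_i(t) - B * r, 0.0) * h  # inner integral: cars queued
        total += queue * h                          # outer integral over t
    return 3600.0 * total / N
```

By construction, $W_{1}$ falls as booths are added and vanishes once $Br$ exceeds the peak influx.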
We illustrate for $L = 6$. We find $N = 92,355$. We plot $W_{1}$ for values of $B$ from 6 to 13, in steps of 0.25, together with the best-fit quartic, in Figure 2.

![](images/a47c9da9705637c6ff506f6367a7952610b78f0a1a92ef67daab75b2d3cbf1f6.jpg)
Figure 2. $W_{1}(B)$ for $B$ ranging from 6 to 13 (actual points with quartic fit).

The recipe for $W_{3}(B)$ is somewhat less straightforward, since $F_{0}(t)$ is generated from a stochastic distribution, unlike the deterministic $F_{\mathrm{i}}(t)$. Also, $F_{0}(t)$ depends on $B$, a significant complication. We ran at least 20 trials of each case $(L, B)$ under the first model; the average of their outflux functions is the outflux that we use in this model's simulation, henceforth referred to just as $F_{0}(t, B)$.

We use surface-fitting software (Systat's TableCurve3D) to generate an expression for outflux as a function of time and the number of booths and use this expression in the compound integral for $W_{3}$ in integrating numerically. As before, we generate a scatter plot of points $(B, W_{3}(B))$ and fit a quartic polynomial.

The values of $R^2$ for the surface fits all fall between .84 and .95, which are acceptable values. All of the quartic fits have $R^2$ near 1.

For the case $L = 6$, Figure 3 shows the surface fit and Figure 4 shows the function $W_{3}$.

With $W_{3}$ a quartic polynomial in $B$, minimization via calculus yields a solution. For 6 lanes, Mathematica's numeric solver gives the minimum at $B = 10.84$. Values for various numbers of lanes are summarized in Table 3.
![](images/2df8738446e5b6c31a305804b0d6107507ff4bf04382e36f3deb6e909c38e1c4.jpg)
Figure 3. $L = 6$: Surface fit for the outflux function in terms of time (h) and number of booths (TableCurve3D rank-4 fit, $r^2 = 0.898$).

![](images/84ab832d0520d1500464226f6096b4b8432c17ec0ace1599c96d09c2cbe5ae91.jpg)
Figure 4. $L = 6$: Plot of $W_{3}(B)$ for $B$ ranging from 6 to 13 (actual points plus quartic fit).

Table 3.
Optimized number of booths—final recommendations from Model 2.
| Lanes  | 1 | 2 | 3 | 4 | 5 | 6  | 7  | 8  | 16 |
|--------|---|---|---|---|---|----|----|----|----|
| Booths | 3 | 5 | 6 | 7 | 9 | 11 | 12 | 14 | 27 |
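The minimization step (choosing $B$ to minimize $C(B) = WaN\gamma + QB$ once $W$ is available as a quartic) can be sketched numerically with a golden-section search. The quartic $W$ and the occupancy $\gamma = 1.5$ below are hypothetical stand-ins; $a$, $Q$, and $N$ are the values used in the text.

```python
import math

a, gamma, Q = 0.10, 1.5, 180000.0 / 365.25   # $/min, occupancy, $/day
N = 92355                                     # tolls/day for L = 6

def W(B):
    # Hypothetical quartic with the right qualitative shape (decreasing
    # and convex on the search range), standing in for the actual fit.
    return 0.002 * (14.0 - B) ** 4

def C(B):
    return a * gamma * N * W(B) + Q * B       # total daily cost

def argmin(f, lo, hi, tol=1e-6):
    # Golden-section search for the minimum of a unimodal function.
    g = (math.sqrt(5.0) - 1.0) / 2.0
    x1, x2 = hi - g * (hi - lo), lo + g * (hi - lo)
    while hi - lo > tol:
        if f(x1) < f(x2):
            hi, x2 = x2, x1
            x1 = hi - g * (hi - lo)
        else:
            lo, x1 = x1, x2
            x2 = lo + g * (hi - lo)
    return 0.5 * (lo + hi)

B_opt = argmin(C, 6.0, 13.0)
```

With these stand-in values the search returns an interior minimum between 12 and 13; the paper's fitted quartics give $B = 10.84$ for $L = 6$.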
# Discussion

This model calculates total waiting time for drivers based on general ideas of traffic flow. The results are reasonable and satisfy many of our expectations for a successful model. The recommended $B$-values increase monotonically with $L$ and are all less than the upper bounds produced in Model 1. One booth per lane is nowhere near optimal: as we can see from the graphs of $W_{1}$ and $W_{3}$, although bottlenecking is then zero, waiting time in line is much higher and far outweighs the savings from avoided bottlenecking.

Given the model's success, it may be disheartening to acknowledge its lack of robustness. Any adjustments to fine-scale aspects of traffic, such as the addition of a potential E-ZPass lane (to be discussed later), would be impossible to implement. Perhaps the rate of service $r$ could be adjusted higher for such a scenario, but changing lanes before the tollbooths would be difficult to capture with this model.

# Model 3: Cellular Automata

# Motivation

What effect does the discreteness of traffic have on the nature and solution of the problem? A continuous model of traffic may neglect the very factors that give rise to traffic congestion and jamming. To address this possibility, we turn to cellular automata theory to develop a discrete, microscopic model.

# Approach

Each cell is designated as a vehicle, a vacancy, or a barrier to traffic flow. The model follows individual vehicles through the plaza and computes the waiting time for each. The total waiting time measures the plaza's efficiency.

In any particular time step, a vehicle advances, changes lanes, or sits still. Vehicles enter the plaza from a stretch of road containing a specific number of lanes. As a vehicle approaches the string of tollbooths, the road widens to accommodate the booths (given that there are more tollbooths than lanes). There is a specific delay associated with using a tollbooth.
Once a vehicle leaves a booth, it merges into a roadway with the original number of lanes. + +# Assumptions + +- The plaza consists of occupied cells, vacant cells, and "forbidden" cells. +- Cells represent a physical space that accommodates a standard vehicle with buffer regions on both sides. +- All vehicles are the same size. + +# Governing Dynamics + +Cars move through the toll plaza according to rules. Each vehicle has options, each with an associated probability. For each time step, the following rules are applied in sequential order: + +1. Starting at the front of the traffic and moving backward (with respect to the flow), vehicles advance to the cell directly in front of them with probability $p$ ; if the next cell is not vacant, the vehicle does not advance and is flagged. This probability is meant to simulate the stop-and-go nature of slowly moving traffic. We can think of $p$ as a measure of driver attentiveness; $p = 1$ corresponds to the case where drivers are perfectly attentive and move forward at every opportunity, while $p = 0$ represents the extreme case where drivers have fallen asleep and fail to move forward at all. +2. Using an influx distribution function, the appropriate number of new vehicles is randomly assigned to lanes at the initial boundary (see next section). +3. Starting at the front of traffic and moving backward, vehicles flagged in step 1 are given the opportunity to switch lanes. For each row of traffic, the priority order for switching is determined by a random permutation of lanes. Switching is attempted with probability $q$ . If switching is attempted, left and right merges are given equal probability to be attempted first. If a merge in one direction (i.e. left or right) is impossible (meaning that the adjacent cell is not vacant), then the other direction is attempted. If both adjacent cells are unavailable, the vehicle is not moved. +4. 
Total waiting time for the current time step is computed by determining the number of cells in the system containing a vehicle.
5. The number of vehicles advancing through the far boundary (end of the simulation space) is tabulated and added to the total output. This number is later used to confirm conservation of traffic.

# Population Considerations

The Fourier series for the daily influx distribution of cars is still valid for the automata model, but the influx values must be scaled to reflect the effective influx over a much smaller time interval (a single time step). The modified influx function, $F_{\mathrm{in}}$, is computed as follows:

$$
F _ {\mathrm {i n}} (\tau) = \min \left(\left\lfloor \frac {F _ {\mathrm {i n}} (t)}{\eta} \right\rfloor , L\right),
$$

where $\eta$ is a constant factor required for the conversion from units of $t$ to those of $\tau$ and $L$ is the number of initial travel lanes.

# Computing Wait Time

Wait time is determined by looking through the entire matrix at each time step and noting the number of cells with positive values. The only cells containing positive values are those representing vehicles. Thus, by counting the number of vehicles in the plaza at any given time, we are also counting the amount of time spent by vehicles in the plaza (in units of time steps).

At time step $i$, total cumulative waiting time is computed as follows:

$$
W _ {i} = W _ {i - 1} + \sum_{x, y} \mathbf{1}\left(\operatorname {plaza} (x, y) > 0\right),
$$

where $\mathbf{1}(\cdot)$ denotes the indicator function, $\operatorname{plaza}$ denotes the matrix of cells, and the sum runs over all cells.

# Simulation and Results

The cost optimization method defines total cost as

$$
C _ {\text {total}} = \alpha \gamma N W (B, L) + B Q.
$$

Using the cellular automata model, we compute waiting time as a function of both the number of lanes and the number of tollbooths. For fixed $L$, we compare all values of $C_{\mathrm{total}}$ and choose the lowest one. The results are presented in Table 4.
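The comparison step can be written as a small argmin over candidate booth counts. A minimal sketch, not the authors' code: the function and parameter names (`n_cars` for $N$, `booth_cost` for $Q$) are ours, mirroring the symbols in the cost formula.

```python
def optimal_booths(wait, candidates, alpha, gamma, n_cars, booth_cost):
    """Return the booth count B minimizing the total cost
    C_total = alpha * gamma * N * W(B, L) + B * Q, for a fixed number of lanes L.

    `wait` maps each candidate B to the simulated waiting time W(B, L).
    """
    return min(candidates,
               key=lambda b: alpha * gamma * n_cars * wait[b] + b * booth_cost)
```

As waiting time falls with each added booth while the booth cost term grows linearly, the minimum picks out the point where an extra booth no longer pays for itself.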
+ +Table 4. Optimization for cellular automata model. + +
| Highway lanes | Optimal booths, typical day | Optimal booths, rush hour |
|---------------|-----------------------------|---------------------------|
| 1             | 1                           | 2                         |
| 2             | 4                           | 4                         |
| 3             | 5                           | 6                         |
| 4             | 7                           | 7                         |
| 5             | 8                           | 9                         |
| 6             | 10                          | 11                        |
| 7             | 12                          | 13                        |
| 8             | 14                          | 15                        |
| 16            | 27                          | 29                        |
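A single update of the automaton can be sketched as follows. This is a minimal illustration, not the authors' code: the cell encoding (1 vehicle, 0 vacancy, -1 forbidden), the handling of the far boundary, and the omission of the influx rule (rule 2) are our assumptions.

```python
import random

def step(grid, p, q, rng):
    """One time step of the toll-plaza automaton (rules 1, 3, and 4).

    grid[r][l] holds 1 (vehicle), 0 (vacancy), or -1 (forbidden cell);
    row 0 is the front of traffic.  Returns (vehicles leaving through
    the far boundary, waiting time accrued this step).
    """
    rows, lanes = len(grid), len(grid[0])
    flagged, out = [], 0
    # Rule 1: front to back, advance with probability p if the next cell is vacant.
    for r in range(rows):
        for l in range(lanes):
            if grid[r][l] != 1 or rng.random() >= p:
                continue                    # no vehicle here, or driver inattentive
            if r == 0:
                grid[r][l] = 0              # vehicle crosses the far boundary
                out += 1
            elif grid[r - 1][l] == 0:
                grid[r - 1][l], grid[r][l] = 1, 0
            else:
                flagged.append((r, l))      # blocked: may switch lanes in rule 3
    # Rule 3: flagged vehicles attempt a lane switch with probability q,
    # trying left and right in random order.
    for r, l in flagged:
        if rng.random() >= q:
            continue
        dirs = [-1, 1]
        rng.shuffle(dirs)
        for d in dirs:
            if 0 <= l + d < lanes and grid[r][l + d] == 0:
                grid[r][l + d], grid[r][l] = 1, 0
                break
    # Rule 4: waiting time this step = number of occupied cells.
    wait = sum(cell == 1 for row in grid for cell in row)
    return out, wait
```

Iterating `step` while injecting new vehicles at the rear boundary (rule 2) and accumulating `wait` reproduces the total waiting time $W_i$ defined earlier.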
Each value in Table 4 represents approximately 20 trials. Through these trials, we noted a remarkable stability in our model: Despite the stochastic nature of our algorithm, each number of lanes was almost always optimized to the same number of tollbooths. There were a handful of exceptions, occurring exclusively for small numbers of highway lanes ($< 3$ lanes).

# Sensitivity Analysis

Our cellular automata model is relatively insensitive to both $p$ and $q$. Changes of $\pm 11\%$ in $p$ and $\pm 5.2\%$ in $q$ have no effect on the optimal number of tollbooths for a six-lane highway. On the other hand, increasing the delay time by $25\%$ shifts the optimal number of booths from 10 to 11 ($10\%$), while decreasing the delay by $25\%$ has no effect on the solution. Additional work elucidating the relation between delay and optimal booth number could help stabilize the cellular automata model.

# Comparison of Results from the Models

Table 5 shows the optimal number of booths.

Table 5. Comparison of final recommendations for three models.
| Lanes | Car-tracking model | Macroscopic | Automata |
|-------|--------------------|-------------|----------|
| 1     | 4                  | 3           | 2        |
| 2     | 5                  | 5           | 4        |
| 3     | 7                  | 6           | 5        |
| 4     | 8                  | 7           | 7        |
| 5     | 10                 | 9           | 8        |
| 6     | 12                 | 11          | 10       |
| 7     | 13                 | 12          | 12       |
| 8     | 16                 | 14          | 14       |
| 16    | 29                 | 27          | 27       |
The car-tracking model serves as an upper bound for the optimal number of booths, because it omits bottlenecking; the table confirms this. The cellular automata model, on the other hand, incorporates bottlenecking. Because it examines each car and each time step waited, we regard the cellular automata model as the most accurate of the three for determining the optimal number of booths.

The optimal values for each model are fit very well ($r^2 > .996$) by a straight line, with slopes between 1.6 and 1.7.

# Conclusion

We use three models—the car-tracking model, the macroscopic model for total cost minimization, and the cellular automata model—to determine the optimal (per our definition) number $B$ of tollbooths for a toll plaza of $L$ lanes.

The car-tracking model uses a simple orderly lineup of cars approaching tollbooths and ignores bottlenecking after the tollbooths; it provides a strong upper bound on $B$ for any given $L$.

The macroscopic model looks at the motion of traffic as a whole. It tabulates waiting time in line before the tollbooths by considering times when traffic influx into the toll plaza is greater than tollbooth service capacity. It also finds bottlenecking time by assuming that there exists a threshold of outflux above which bottlenecks occur, and noting when outflux exceeds that threshold. This model is much more accurate than the car-tracking model, and it provides us with reasonable solutions for $B$ in terms of $L$.

The cellular automata model looks at individual vehicles and their cell-by-cell, lane-by-lane motion through a toll plaza made up of cells. With a probabilistic model of how drivers advance and change lanes, it captures far better than the other models both the waiting time in line and the bottlenecking after the tollbooths.

Thus, we recommend values closer to those provided by the automata model than the macroscopic one.
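The linear trend can be reproduced directly from Table 5. The following pure-Python sketch (our illustration, with the automata column transcribed from the table) recovers a slope in the stated range of 1.6 to 1.7:

```python
def leastsq_line(xs, ys):
    """Ordinary least-squares fit y ≈ slope*x + intercept (pure Python)."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return slope, intercept

# Cellular-automata column of Table 5: optimal booths versus number of lanes.
lanes = [1, 2, 3, 4, 5, 6, 7, 8, 16]
booths = [2, 4, 5, 7, 8, 10, 12, 14, 27]

slope, intercept = leastsq_line(lanes, booths)
mean_b = sum(booths) / len(booths)
sse = sum((b - (slope * l + intercept)) ** 2 for l, b in zip(lanes, booths))
sst = sum((b - mean_b) ** 2 for b in booths)
r2 = 1 - sse / sst
```

For the automata column this gives a slope of about 1.67 and $r^2 \approx 0.998$, consistent with the slopes of 1.6 to 1.7 and $r^2 > .996$ reported above.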
In order to write $B$ explicitly in terms of $L$, we invoke the linearity of the results. Also, to preserve integer values for $B$, we use the floor function and determine that

$$
B = \lfloor 1.65 L + 0.9 \rfloor .
$$

# Potential Extension and Further Consideration

Our models assume that all booths are identical. However, systems such as E-ZPass allow a driver to pay a toll electronically from an in-car device without stopping at a tollbooth. If all E-ZPass booths also double as regular teller-operated booths, our models remain largely the same, except that the average service rate might increase. The trouble comes when the booths are not all the same and drivers may need to change lanes upon entering the plaza. This directed lane-changing was not implemented in any of the models presented here but could easily become a part of the automata model. Exclusive E-ZPass booths would also drastically reduce a booth's operating cost, since an operator's salary (\$16,000 to \$180,000 annually) would not need to be paid [Sullivan 1994].

# References

Boronico, Jess S., and Philip H. Siegel. 1998. Capacity planning for toll roadways incorporating consumer wait time costs. *Transportation Research A* 32 (4): 297-310.

Daganzo, C.F., et al. 1997. Causes and effects of phase transitions in highway traffic. ITS Research Report UCB-ITS-RR-97-8 (December 1997).

Gartner, Nathan, Carroll J. Messer, and Ajay K. Rathi. 1992. *Traffic Flow Theory: A State of the Art Report*. Revised monograph. Special Report 165. Oak Ridge, TN: Oak Ridge National Laboratory. http://www.tfhrc.gov/its/tft/tft.htm.

Gelenbe, E., and G. Pujolle. 1987. *Introduction to Queueing Networks*. New York: John Wiley & Sons.

Jost, Dominic, and Kai Nagel. 2003. Traffic jam dynamics in traffic flow models. Swiss Transport Research Conference. http://www.strc.ch/Paper/jost.pdf. Accessed 4 February 2005.

Kuhne, Reinhart, and Panos Michalopoulos. 1992. Continuum flow models.
Chapter 5 in Gartner et al. [1992].

Schadschneider, A., and M. Schreckenberg. 1993. Cellular automaton models and traffic flow. *Journal of Physics A: Mathematical and General* 26: L679-L683.

Sullivan, R. Lee. 1994. Fast lane. *Forbes* 154 (4 July 1994): 112-115.

Tampere, Chris, Serge P. Hoogendoorn, and Bart van Arem. Capacity funnel explained using the human-kinetic traffic flow model. http://www.kuleuven.ac.be/traffic/dwn/res2.pdf. Accessed 4 February 2005.

![](images/7634dc22b6b8ea019ba0bcda6a5be964c0e7e94b4528a44a32831094726a6d0c.jpg)

Matthew Mian, William G. Mitchener (advisor), Pradeep Baliga, and Adam Chandler.

# A Single-Car Interaction Model of Traffic for a Highway Toll Plaza

Ivan Corwin

Sheel Ganatra

Nikita Rozenblyum

Harvard University

Cambridge, MA

Advisor: Clifford H. Taubes

# Summary

We find the optimal number of tollbooths in a highway toll plaza for a given number of highway lanes: the number of tollbooths that minimizes the average delay experienced by cars.

Making assumptions about the homogeneity of cars and tollbooths, we create the Single-Car Model, describing the motion of a car in the toll plaza in terms of safety considerations and reaction time. The Multi-Car Interaction Model, a real-time traffic simulation, takes into account global car behavior near tollbooths and merging areas.

Drawing on data from the Orlando-Orange County Expressway Authority, we simulate realistic conditions. For high traffic density, the optimal number of tollbooths exceeds the number of highway lanes by about $50\%$, while for low traffic density the optimal number of tollbooths equals the number of lanes.

# Definitions and Key Terms

- A toll plaza with $n$ lanes is represented by the space $[-d,d] \times \{1,\dots ,n\}$, where members of the set $\{0\} \times \{1,\ldots ,n\}$ are called tollbooths and $d$ is called the radius of the toll plaza. Denote the tollbooth $\{0\} \times \{i\}$ by $\tau_{i}$.
The subspace $[-d,0) \times \{1,\dots ,n\}$ is known as the approach region and $(0,d] \times \{1,\dots ,n\}$ is known as the departure region.

- A highway/toll plaza pair is represented by the space $H = (-\infty, -d) \times \{1, \dots, m\} \cup [-d, d] \times \{1, \dots, n\} \cup (d, \infty) \times \{1, \dots, m\}$, where the toll plaza is (as above) the subspace $[-d, d] \times \{1, \dots, n\}$ and the stretches of highway are the subspaces $(-\infty, -d) \times \{1, \dots, m\}$ and $(d, \infty) \times \{1, \dots, m\}$. Elements of the sets $\{1, \dots, m\}$ and $\{1, \dots, n\}$ are highway lanes and tollbooth lanes respectively, and elements of $\mathbb{R}$ are highway positions. In practice, we take $n \geq m$.
- The fork point of a highway/toll plaza pair, given by the highway position $-d$, is the point at which highway lanes turn into toll lanes. Similarly, the merge point of a highway/toll plaza pair, given by the highway position $d$, is the point at which toll lanes turn back into highway lanes (Figure 1).

![](images/fa66fd377d8c38eefcb9b94e55ffe2c4ce4193426e0da721454996f89e4ef93a.jpg)
Figure 1. A depiction of the highway/toll plaza pair.

- A car $C$ is represented by a 4-tuple $(L, a_{+}, a_{-}, a_{\mathrm{brake}})$ and a position function $p = (x, k): \mathbb{R} \to H$, where $x(t)$ is smooth for all $t$. Here, $x(t)$ gives the highway position of the front tip of $C$ and $k(t)$ is the (tollbooth or highway) lane number of $C$. Let $L$ be the length of $C$ in meters, $a_{+}$ the constant comfortable positive acceleration, $a_{-}$ the constant comfortable brake acceleration, and $a_{\mathrm{brake}}$ the maximum brake acceleration. At a fixed time, the region of $H$ in front of $C$ is the portion of $H$ with greater highway position than $C$, while the rear of $C$ is the region of $H$ with highway position at most the position of $C$ minus $L$.
+- The speed limit $v_{\mathrm{max}}$ of $H$ is the maximum speed at which any car in $H$ can travel. +- The traffic density $\rho(t)$ of $H$ at time $t$ is the average number of cars per lane per second that would pass highway position 0 if there were no toll plaza. +- The average serving rate $s$ of tollbooth $\tau_{i}$ is the average number of cars that can stop at $\tau_{i}$ , pay the toll, and leave, per second. + +Table 1. Variables, definitions, and units. + +
| Variable | Definition | Units |
|----------|------------|-------|
| $n$ | Number of tollbooths | unitless |
| $\rho$ | Traffic density | cars/s |
| $T$ | Total delay time | s |
| $x$ | Position | m |
| $v$ | Velocity | m/s |
| $x_{\mathrm{o}}$ | Position of initial deceleration | m |
| $t_{\mathrm{o}}$ | Time of initial deceleration | s |
| $x_f$ | Position upon returning to speed limit | m |
| $t_f$ | Time upon returning to speed limit | s |
| $x_1$ | Position of car $C$ | m |
| $x_2$ | Position of car $C'$ | m |
| $v_1$ | Velocity of car $C$ | m/s |
| $v_2$ | Velocity of car $C'$ | m/s |
| $x_1'$ | Position of car $C$ after time step | m |
| $x_2'$ | Position of car $C'$ after time step | m |
| $v_1'$ | Velocity of car $C$ after time step | m/s |
| $v_2'$ | Velocity of car $C'$ after time step | m/s |
| $G$ | Safety gap | m |
| $G'$ | Safety gap after time step | m |
| $t$ | Time | s |
| $t'$ | Additional time | s |
| $\alpha_C$ | Compensation deceleration from car/safety-gap overlap | m/s² |
| $\alpha_{\mathrm{o}}$ | Compensation deceleration from obstacle/safety-gap overlap | m/s² |
| $c_i$ | Size of tollbooth line $i$ | cars |
| $l_i$ | Length of tollbooth line $i$ | m |
| $t_{\text{serve}}$ | Time $C$ enters departure area | s |
| $t_{\text{merge}}$ | Time $C$ passes the merge point | s |
| $v_{\text{out}}$ | Velocity of car $C$ upon passing the merge point | m/s |
+ +Table 2. Constants, definitions, and units. + +
| Constant | Meaning | Units |
|----------|---------|-------|
| $d$ | Toll plaza radius | m |
| $m$ | Number of highway lanes | unitless |
| $a_+$ | Comfortable acceleration | m/s² |
| $a_-$ | Comfortable deceleration | m/s² |
| $a_{\text{brake}}$ | Hard-brake deceleration | m/s² |
| $L$ | Car length | m |
| $v_{\max}$ | Speed limit | m/s |
| $s$ | Mean serving rate | cars/s |
| $\sigma$ | Standard deviation of serving time | s/car |
| $\Delta t$ | Expected reaction time | s |
| $\gamma$ | Unexpected reaction time | s |
| $\epsilon$ | Line spacing distance | m |
+ +# General Assumptions + +# Time + +- Time proceeds in discrete time steps of size $\Delta t$ . + +# Geometry of the Toll Plaza + +- The highway is straight and flat and extends in an infinite direction before and after the toll plaza. The highway is obstacle-free with constant speed limit $v_{\mathrm{max}}$ . The assumption of infinite highway is based on toll plazas being far enough apart that traffic delays at one toll plaza don't significantly affect traffic at an adjacent one. +- A car's position is determined uniquely by a lane number and a horizontal position. Thus, on a stretch of road with $m$ operating lanes, the position of a car is given by the ordered pair $(x, i) \in \mathbb{R} \times \{1, \dots, m\}$ . + +# Tollbooths and Lines + +- A car comes to a complete stop at a tollbooth. +- The time required to accelerate and decelerate to move up a position in a waiting line is less than the serving time of the line. Thus, average time elapsed before exiting a line is simply a function of average serving time and line length measured in cars. +- A car cannot enter a tollbooth until the entire length of the car in front of it has left the tollbooth. +- All tollbooths have the same normally distributed serving time with mean $1 / s$ and standard deviation $\sigma$ . + +# Fork and Merge Points + +- Transitions between the highway and tollbooth lanes are instantaneous. +- When transitioning at the fork point into a tollbooth lane, cars enter the lane with the shortest tollbooth lines. +- There is no additional delay associated with the division of cars into toll-booths. +- The process of transitioning at the merge points from the tollbooth lanes, called merging, does incur delay due to bottlenecking because we assume that there are at least as many tollbooth lanes as highway lanes. 
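The tollbooth assumptions above (shortest-line assignment at the fork point; normally distributed serving times with mean $1/s$ and standard deviation $\sigma$) can be sketched directly. A minimal illustration, not the authors' code; the positive floor in `serving_time` is our own assumption, added so that a Gaussian draw can never be negative.

```python
import random

def assign_to_shortest(lines):
    """Fork-point rule: an arriving car joins the currently shortest line.

    `lines` holds the car count of each tollbooth line; returns the index chosen.
    """
    i = min(range(len(lines)), key=lambda j: lines[j])
    lines[i] += 1
    return i

def serving_time(s, sigma, rng):
    """Draw a normally distributed serving time with mean 1/s (s = serving
    rate in cars/s) and standard deviation sigma, truncated below at 0.5 s."""
    return max(0.5, rng.gauss(1.0 / s, sigma))
```

Assigning ten cars to four empty lines this way leaves line sizes that differ by at most one, matching the assumption that line lengths stay roughly equal.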
# Optimality

Measures of optimality for a toll plaza include minimal average delay, standard deviation of average delay, accident rate, and proportion of cars delayed [Edie 1954]. We assume that optimality occurs when cars experience minimal average delay. Specifically, for a car $C$, let $x_{\mathrm{o}}$, $t_{\mathrm{o}}$ be the position and time at which $C$ first decelerates from the speed limit to enter the tollbooth line, and let $x_f$, $t_f$ be the position and time at which, having merged onto the highway once more, $C$ reaches the speed limit. Then the delay $T$ experienced by the car, or the time cost associated with passing through the toll plaza instead of travelling unhindered, is given by

$$
T = t _ {f} - t _ {\mathrm {o}} - \frac {x _ {f} - x _ {\mathrm {o}}}{v _ {\max }}. \tag {1}
$$

We secondarily prefer toll plaza configurations with minimal construction and operating cost, i.e., configurations with fewer tollbooths. Specifically, for a given highway, if two values of $n$ (the number of tollbooths) give sufficiently close average delay times (say, within 1 s), we prefer the lower $n$.

We rephrase the problem as follows:

Given a highway configuration with $m$ lanes and a model of traffic density, what is the least number of tollbooth lanes $n$ that minimizes the average delay (within 1 s) experienced by cars travelling through the toll plaza?

# Expectations of Our Model

- For sufficiently low traffic density, the delay time per car is relatively constant and near the theoretical minimum, because the tollbooth line does not grow and there are no merging difficulties. We expect that for low density the optimal number of tollbooths equals or slightly exceeds the number of lanes.
- For high traffic density, the delay time per car is very large and continues to grow, because the tollbooth queue is unable to move fast enough to handle the influx of cars; waiting time increases approximately linearly in time.
We expect that for high density, the optimal number of tollbooths significantly exceeds the number of lanes. +- An excessive number of tollbooths leads to merging inefficiency, causing great delay in the departure region. + +# The Single-Car Model + +# Additional Definitions and Assumptions + +- An obstacle for a car $C$ is a point in the highway/toll plaza pair which $C$ must slow down to avoid hitting. The only obstacle that we consider is the merge point under certain conditions. +- At a fixed time, the closest risk to a car $C$ is the closest obstacle or car in front of $C$ . +- The unexpected reaction time $\gamma$ is the amount of time a car takes between observing an unexpected occurrence (a sudden stop) and physically reacting (braking, accelerating, swerving, etc.). The expected reaction time $\Delta t$ is the amount of time between observing an expected occurrence (light change, car brake, tollbooth) and physically reacting. +- Cars are homogeneous; that is, all have the same $L, a_{+}, a_{-}$ , and $a_{\text{brake}}$ . +- All cars move in the positive direction. +- All cars observe the speed limit $v_{\mathrm{max}}$ . Moreover, unless otherwise constrained, a car travels at this speed or accelerates to it. In particular, outside a sufficiently large neighborhood of the toll plaza, all cars travel at $v_{\mathrm{max}}$ . +- Cars accelerate and decelerate at constant rates $a_{+}$ and $a_{-}$ unless otherwise constrained. +- Cars do not attempt to change lanes unless at a fork or merge point. That is to say, for a car $C$ , $k(t)$ is piecewise constant, changing only at $t$ such that $x(t) = -d$ or $d$ . +- A car $C$ prefers to keep a certain quantity of unoccupied space between its front and its closest risk, of size such that if $C$ were to brake with maximum deceleration, $a_{\text{brake}}$ , $C$ would always be able to stop before reaching its closest risk [Gartner et al. 1992, §4]. We refer to this quantity as the safety gap $G$ . 
Given the position of a car $C$ , the position corresponding to distance $G$ in front of $C$ is the safety position with respect to $C$ . If the safety position with respect to $C$ does not overlap the closest risk, we say $C$ is unconstrained. +- A car can accurately estimate the position and velocity of itself and of the car directly in front of it and its distance from the merging point. +- If a car $C$ comes within a sufficiently small distance $\epsilon$ of a stopped car, $C$ stops. This minimum distance $\epsilon$ is constant. +- For each car, there is a delay, the reaction time, between when there is a need to adjust acceleration and when acceleration is actually adjusted. Green [2000] splits reaction times into three categories; the ones relevant to us are + +expected reaction time $\Delta t$ and unexpected reaction time $\gamma$ , which are defined above. Although these times vary with the individual, we make the simplifying assumption that all cars have the same values, $\Delta t = 1$ s and $\gamma = 2$ s. Reaction times provide a motivation for discretizing time with time step $\Delta t$ ; drivers simply do not react any faster. + +# The Safety Gap + +We develop an expression for the safety gap $G$ of car $C$ , which depends on the speed of the closest risk $C'$ . Let the current speeds of $C$ and $C'$ be $v_{1}$ and $v_{2}$ . Now suppose that $C'$ brakes as hard as possible and thus decelerates at rate $a_{\mathrm{brake}}$ . In time $v_{2} / a_{\mathrm{brake}}$ , car $C'$ stops; meanwhile it travels distance + +$$ +v _ {2} \frac {v _ {2}}{a _ {\mathrm {b r a k e}}} - \frac {1}{2} a _ {\mathrm {b r a k e}} \left(\frac {v _ {2}}{a _ {\mathrm {b r a k e}}}\right) ^ {2} = \frac {v _ {2} ^ {2}}{2 a _ {\mathrm {b r a k e}}}. +$$ + +If $C$ starts braking after a reaction time of $\gamma$ , it takes total time $\gamma + v_1 / a_{\text{brake}}$ to stop and travels distance + +$$ +\gamma v _ {1} + \frac {v _ {1} ^ {2}}{2 a _ {\mathrm {b r a k e}}}. 
+$$ + +Thus, in the elapsed time, the distance between $C$ and $C'$ decreases by + +$$ +\gamma v _ {1} + \frac {v _ {1} ^ {2} - v _ {2} ^ {2}}{2 a _ {\mathrm {b r a k e}}}. +$$ + +Therefore, this must be the minimum distance between the front of $C$ and the back of $C'$ in order to avoid collision. Accounting for the length of $C'$ , the minimum distance between $C$ and $C'$ , and thus the safety gap, must be + +$$ +G = L + \gamma v _ {1} + \frac {v _ {1} ^ {2} - v _ {2} ^ {2}}{2 a _ {\mathrm {b r a k e}}}. +$$ + +Now suppose that the closest risk is an obstacle, in particular the merge point. Rather than braking with deceleration $a_{\mathrm{brake}}$ , $C$ will want to keep a safety gap that allows for normal deceleration of $a_{-}$ . Because deceleration on approach is expected, $C$ will opt to decelerate at a comfortable rate, $a_{-}$ . Moreover, since $C$ is reacting to an expected event, the reaction time is given by $\Delta t$ . Since the length and velocity of the obstacle are both 0, the safety gap must be + +$$ +G = \Delta t v _ {1} + \frac {v _ {1} ^ {2}}{2 a _ {-}}. +$$ + +# Individual Car Behavior + +An individual car $C$ can be in one of several positions: + +- No cars or obstacles are within its safety gap, that is, $C$ is unrestricted. Consequently, $C$ accelerates at rate $a_{+}$ unless it has velocity $v_{\max}$ . +- The tollbooth line is within braking distance. Since this is an expected occurrence, the car brakes with deceleration $a_{-}$ . +- Another car $C'$ is within its safety gap, so $C$ reacts by decelerating at some rate $\alpha_{C}$ so that in the next time step, $C'$ is no longer within the safety gap. $C$ chooses $\alpha_{C}$ based on the speeds $v_{1}, v_{2}$ and positions $x_{1}, x_{2}$ of both cars. 
If $C$ assumes that $C'$ continues with the same speed, then after one time step $\Delta t$ the new positions and speeds are

$$
x _ {1} ^ {\prime} = x _ {1} + v _ {1} \Delta t - \frac {1}{2} \alpha_ {C} (\Delta t) ^ {2}, \qquad x _ {2} ^ {\prime} = x _ {2} + v _ {2} \Delta t,
$$

$$
v _ {1} ^ {\prime} = v _ {1} - \alpha_ {C} \Delta t, \qquad v _ {2} ^ {\prime} = v _ {2},
$$

and the new safety gap is

$$
G ^ {\prime} = \gamma v _ {1} ^ {\prime} + \frac {v _ {1} ^ {\prime 2} - v _ {2} ^ {\prime 2}}{2 a _ {\mathrm {brake}}}.
$$

For the new position of $C'$ not to be within the new safety gap, we must have

$$
x _ {2} ^ {\prime} - x _ {1} ^ {\prime} - L = G ^ {\prime}.
$$

Substituting into this equation, we find:

$$
x _ {2} - x _ {1} + v _ {2} \Delta t - v _ {1} \Delta t + \frac {1}{2} \alpha_ {C} (\Delta t) ^ {2} - L = \gamma v _ {1} - \gamma \alpha_ {C} \Delta t + \frac {(v _ {1} - \alpha_ {C} \Delta t) ^ {2} - v _ {2} ^ {2}}{2 a _ {\mathrm {brake}}}.
$$

Solving this equation for $\alpha_{C}$ and taking the root corresponding to the situation that $C$ trails $C'$, we find that

$$
\begin{aligned}
\alpha_{C} = \frac{1}{\Delta t} \Bigg( \frac{\Delta t\, a_{\mathrm{brake}}}{2} + v_{1} + \gamma a_{\mathrm{brake}}
 - \frac{1}{2} \Big[ & (\Delta t\, a_{\mathrm{brake}})^{2} - 4 v_{1} \Delta t\, a_{\mathrm{brake}} + 4 \Delta t\, a_{\mathrm{brake}}^{2} \gamma + (2 \gamma a_{\mathrm{brake}})^{2} \\
& + 8 (x_{2} - x_{1}) a_{\mathrm{brake}} + 8 v_{2} \Delta t\, a_{\mathrm{brake}} - 8 L a_{\mathrm{brake}} + 4 v_{2}^{2} \Big]^{1/2} \Bigg).
\end{aligned}
$$

- The merge point is within its safety gap. The safety gap equation differs from the car-following case by using $a_{-}$ instead of $a_{\mathrm{brake}}$ and $\Delta t$ instead of $\gamma$ and by leaving out the $L$.
Therefore, by the same argument as in the previous paragraph, the deceleration is

$$
\alpha_{\mathrm{o}} = \frac{1}{\Delta t} \left( v_{1} + \frac{3 \Delta t\, a_{-}}{2} - \frac{1}{2} \sqrt{ (\Delta t\, a_{-})^{2} - 4 v_{1} \Delta t\, a_{-} + 8 (\Delta t)^{2} a_{-}^{2} + 8 (x_{2} - x_{1}) a_{-} + 8 v_{2} \Delta t\, a_{-} + 4 v_{2}^{2} } \right).
$$

Finally, once we have determined the new acceleration $\alpha$ of $C$, we update its position and velocity for the next time step as follows (letting $x, v$ and $x', v'$ be the old and new position and velocity, respectively):

$$
v ^ {\prime} = v + \alpha \Delta t, \qquad x ^ {\prime} = x + v \Delta t + \frac {1}{2} \alpha \Delta t ^ {2}.
$$

# Calculating Delay Time

We calculate the delay time $T$ for a car $C$ moving through a toll plaza by breaking the process into several steps, tracing the car from the moment it starts slowing down before the tollbooth until it merges back into a highway lane and accelerates to the speed limit.

Recalling our assumptions that cars do not change lanes, that they are evenly distributed among the lanes, and that there is no time loss associated with the distribution of cars into tollbooth lanes at the fork point, we break the period of approach to a tollbooth down as follows:

- Deceleration from speed limit to stopping. We assume that a car comes to a complete stop upon joining a tollbooth line as well as upon reaching the tollbooth. Therefore, the first action taken by a car approaching a toll plaza is to decelerate to zero; at constant deceleration $a_{-}$, it takes time $v_{\max} / a_{-}$ to go from the speed limit to zero, over distance $v_{\max}^2 / 2a_{-}$.
- Line Assignment. As a car approaches the toll plaza, it is assigned to the currently shortest line. Let $c_{i}$ be the number of cars in line $i$.
The cars are spaced equidistantly throughout the line, with distance $\epsilon$ between cars. Thus, as long as the length of the line is less than $d$, the length of the line is $l_{i} = c_{i}(L + \epsilon)$, where $L$ is the length of a car. If $c_{i}(L + \epsilon) > d$, then the line extends back past the fork point, where there are $m$ lanes instead of $n$. Assuming that the line lengths are roughly the same, increasing the minimum line length by one car increases the total number of cars by about $n$, so each of the $m$ lanes gains an additional $n / m$ cars. It follows that

$$
l _ {i} = \begin{cases} c _ {i} (L + \epsilon), & \text {if } c _ {i} (L + \epsilon) < d; \\ d + \dfrac {n \left[ c _ {i} (L + \epsilon) - d \right]}{m}, & \text {otherwise.} \end{cases} \tag {2}
$$

- Movement through a Tollbooth Line. A car $C$ joins the tollbooth line to which it was assigned if such a line exists, that is, if the line length $l_{i}$ is positive. In this case, $C$ must wait for the entire line ahead to be serviced before $C$ reaches the tollbooth. Let $t_{\text{serve}}$ be the time when $C$ enters the departure area, after it has been serviced. If there is an overflow of cars from the merge line such that $C$ cannot leave the tollbooth, $t_{\text{serve}}$ is the time when the car actually leaves, after the line in front has advanced sufficiently.
- Movement through the Departure Region. Several scenarios can occur in the departure region.

- Once $C$ enters the departure area, it accelerates forward until either another car or the merge point enters its safety gap.
- If another car $C'$ enters the safety gap of $C$, $C$ slows down and follows $C'$ until $C'$ merges, at which time the merge point will overlap the safety gap of $C$.
- When the safety position of $C$ reaches the merge point and $C$ does not have the right of way, $C$ slows down so as to prevent the merge point from overlapping its safety gap, treating the merge point as an obstacle. This allows other cars that have already begun to merge to finish doing so before $C$ merges.
- Upon gaining the right of way, $C$ merges and accelerates unconstrained through the departure region until reaching the speed limit. Let $t_{\text{merge}}$ be the time at which $C$ merges and $v_{\text{out}}$ its speed at that time. Then

$$
t _ {f} = t _ {\mathrm {merge}} + \frac {v _ {\max} - v _ {\mathrm {out}}}{a _ {+}},
$$

$$
x _ {f} = d + v _ {\mathrm {out}} \frac {v _ {\max} - v _ {\mathrm {out}}}{a _ {+}} + \frac {(v _ {\max} - v _ {\mathrm {out}}) ^ {2}}{2 a _ {+}}.
$$

Thus by (1), the delay experienced by $C$ is

$$
T = t _ {\mathrm {merge}} - t _ {\mathrm {line}} - \frac {l _ {i} (t _ {\mathrm {line}}) + d}{v _ {\max}} + \frac {v _ {\max} - v _ {\mathrm {out}}}{a _ {+}} - \frac {3 v _ {\max}}{2 a _ {-}} - \frac {v _ {\mathrm {out}} (v _ {\max} - v _ {\mathrm {out}})}{a _ {+} v _ {\max}},
$$

where $t_{\mathrm{line}}$ is the time at which $C$ joins the tollbooth line.

# The Multi-Car Interaction Model

We now determine the average delay time for a group of cars entering the toll plaza over a period of time. We simulate a group of cars arriving according to an arrival schedule and average their respective delay times. There are two complications: determining the arrival schedule (the distribution of individual cars over which to average) and determining the two variables $t_{\text{merge}}$ and $v_{\text{out}}$ (used in the delay-time formula above).

To determine the average delay time computationally, we must use the traffic density function $\rho(t)$ to produce a car arrival schedule. We create the arrival schedule by randomly assigning arrival times based on $\rho$.
Using this schedule, we determine which cars begin to slow down at a given time step. Unfortunately, this task is not as straightforward as checking whether a car's arrival time is less than the present time step. The arrival schedule provides the time at which a car would reach position 0 (the tollbooth) if unconstrained, while we wish to know when a car reaches the front of the tollbooth line. Essentially, given that a car would be at a set position (say, 0, at the tollbooth) at time $t$, we seek the time $t'$ when that car would have passed the front of the tollbooth line. This reduces to a question of Galilean relativity, and we find that

$t' = t - l_i(t) / v_{\mathrm{max}}$. Now, given $l_i(t)$, we can determine exactly when cars join the tollbooth lines. We use (2) and the difference equation for car flow

$$
\frac{\Delta c_{i}}{\Delta t} = \frac{m}{n}\rho\left(t - \frac{l_{i}}{v_{\mathrm{max}}}\right) - s_{i}
$$

to keep track of the length of the tollbooth line, increasing it as cars join and decreasing it as cars are served.

As a car's arrival time (adjusted to the line length) is reached, we immediately assign it to the currently shortest tollbooth line. We introduce normally distributed serving times with mean $\frac{1}{s}$ (where $s$ is the serving rate) and standard deviation $\sigma$, which we assume to be $\frac{1}{6s}$.

The second consideration in simulating many cars is how to determine $t_{\text{merge}}$ and $v_{\text{out}}$ for each car. Our time-stepping model allows us to update every car recursively and thus to determine the actions of a single car at each time step. Following the rules in the previous section, we know exactly when and by how much to accelerate $(a_{+})$ and decelerate $(\alpha_{c}, \alpha_{o})$. Furthermore, we observe that when a car that is first in its tollbooth lane approaches the merge point, it joins a merging queue (with at most $n$ members).
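The bookkeeping just described—adjusted join times, shortest-line assignment, and normally distributed serving times—can be sketched as follows (our illustration; the function names and the truncation of serving times at zero are our choices):

```python
import random

def join_time(t, line_length, v_max=30.0):
    # A car that would reach the tollbooth (position 0) at time t if
    # unconstrained passes the front of a line of length l at
    # t' = t - l / v_max.
    return t - line_length / v_max

def assign_line(line_lengths):
    # Index of the currently shortest tollbooth line.
    return min(range(len(line_lengths)), key=lambda i: line_lengths[i])

def serving_time(s=0.2, rng=random):
    # Normally distributed serving time: mean 1/s, standard deviation
    # 1/(6s), truncated at zero so a draw is never negative.
    return max(0.0, rng.gauss(1.0 / s, 1.0 / (6.0 * s)))
```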
The only time when a car on the merging queue does not treat the queue as an obstacle (and consequently slow down) is when a highway lane clears and the car is taken from the queue and allowed to accelerate across the merge point and onto free road. A lane is clear once the car in it accelerates $L + \epsilon$ past the merge point.

With this model, we thus have a method, given a highway with $m$ lanes, a certain traffic density function, and values for various constants, to calculate the optimal number of tollbooth lanes $n$. We can estimate a finite range of values of $n$ in which the optimal number must lie. For each value of $n$, we run our model, calculating the delay experienced by each car and averaging these delays. We then compare the average delays for all $n$, choosing the minimal $n$ whose average delay is within 1 s of the minimum.

# Case Study

We need reasonable, specific values for our constants and density function for use in our tests. We take most of these from the Orlando-Orange County Expressway Authority [2004] and a variety of reports on cars. We begin with a few simplifying assumptions about our traffic density function.

- To determine optimal average delay, it suffices to calculate the average delay over a suitably chosen day, as long as this day has periods of both high and low density. This is reasonable because over most weekdays, traffic tends to follow similar patterns. Therefore, we limit the domain of $\rho$ to the interval of seconds $[0, 3600 \times 24]$.
- The function $\rho(t)$ is piecewise constant, changing value on the hour. This is reasonable: Since cars are discrete, $\rho(t)$ really is an average over a large amount of time and thus must already be piecewise constant.
- The length of the time interval between an arriving car and the next car is normally distributed.
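A density function with these properties can be sketched as follows (our illustration; the hourly volumes and the scale factor are placeholders, not values from the report):

```python
def make_density(hourly_volumes, scale=1.0):
    """Piecewise-constant traffic density rho(t) in cars/s, constant on
    each hour of the day.  hourly_volumes is a list of 24 hourly car
    counts (cars/hour); `scale` extrapolates a profile to a different
    number of lanes (e.g. m/4 for a 4-lane profile scaled to m lanes)."""
    def rho(t):
        hour = int(t // 3600) % 24
        return scale * hourly_volumes[hour] / 3600.0
    return rho
```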
The Orlando-Orange County Expressway Authority's report on plaza characteristics [2004] allows us to construct a realistic traffic density function $\rho$ for the purposes of testing. The report gives hourly traffic volumes on several highways in Florida, which we use along with our assumption about normal arrival times to develop an arrival schedule for cars on the highway.

We assume several realistic values for constants defined earlier (Table 3).

Table 3. Constant values used in testing.
| Constant name | Symbol | Value |
| --- | --- | --- |
| Comfortable acceleration | $a_{+}$ | $2\ \mathrm{m/s^2}$ |
| Comfortable deceleration | $a_{-}$ | $2\ \mathrm{m/s^2}$ |
| Hard-braking deceleration | $a_{\mathrm{brake}}$ | $8\ \mathrm{m/s^2}$ |
| Car length | $L$ | 4 m |
| Speed limit | $v_{\mathrm{max}}$ | 30 m/s |
| Line spacing | $\epsilon$ | 1 m |
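As a quick sanity check (our sketch, not the authors' code), the delay formula $T$ from the single-car section can be evaluated with the Table 3 constants; the function name and the use of the $d = 250$ m plaza radius from the case study are our choices:

```python
def delay(t_merge, t_line, line_len, v_out,
          v_max=30.0, a_plus=2.0, a_minus=2.0, d=250.0):
    """Single-car delay T (in s) from the single-car section, with the
    Table 3 constants as defaults: line_len is l_i(t_line) in meters,
    v_out the speed at the merge time in m/s."""
    return (t_merge - t_line
            - (line_len + d) / v_max
            + (v_max - v_out) / a_plus
            - 3.0 * v_max / (2.0 * a_minus)
            - v_out * (v_max - v_out) / (a_plus * v_max))
```

For example, a car that joins a 50-m line at $t = 50$ s, merges at $t = 100$ s, and exits at the full 30 m/s incurs a delay of 17.5 s under these constants.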
Our model assumes that every tollbooth operates at a mean rate of approximately $s$ cars/s. But each type of tollbooth—human-operated, machine-operated, and beam-operated (such as an E-ZPass reader)—has a different service rate. We approximate the heterogeneous-tollbooth case by making $s$ a composite of the respective service rates. According to Edie [1954], the average holding time (the inverse of the service rate) for a human-operated tollbooth is 12 s/car, while according to the Orlando-Orange County Expressway Authority [n.d.], the average holding time for their beam-operated tollbooths (the E-Pass) is 2 s/car. Similarly, a report for the city of Houston [Texas Transportation Institute 2001] places the holding time for a human-operated tollbooth at 10 s/car and for a machine-operated tollbooth at 7 s/car. Looking at these times, we find that a reasonable average holding time could be 5 s/car, giving us an average service rate $s = 0.2$ cars/s.

For verification, we consider hourly traffic volumes for six Florida highways, with 2 to 4 lanes and varying traffic volumes [Orlando-Orange County Expressway Authority 2004]. We use the data to obtain $\rho(t)$ and test various components of our model. After model verification, we use our model to determine the optimal tollbooth allocations.

We look at two typical cases. A toll plaza radius of $d = 250 \, \text{m}$ [Orlando-Orange County Expressway Authority 2004] is fairly standard. The hourly traffic densities of the six highways take a standard form; they differ mostly in amplitude, not in shape. Therefore, we model our density functions on two such standard highways, 4-lane Holland West (high density) and 3-lane Bee Line (low density) (Figure 2). We extrapolate their traffic volume profiles to profiles for highways with 1 through 7 lanes. For $m$ lanes, we scale the traffic volume by $m/4$ (Holland West) or $m/3$ (Bee Line).
Doing so maintains the shape of the profile and the density of cars per lane while increasing the total number of cars approaching the toll plaza.

![](images/e2b2600f7a1c71fa7da80866d55e4159e2343030f11942e8b042fe9475bd39d4.jpg)
Figure 2. Traffic volume as a function of time for Holland West (top) and Bee Line (bottom).

# Verification of the Traffic Simulation Model

Based on the optimality criteria, for various test scenarios we determine the minimal number of tollbooths with average delay time within 1 s of the minimal average delay. We show the results in Table 4.

Model results for three toll plazas match the actual numbers, and the other three differ only slightly. In the case of Dean Road, having 4 tollbooths (the actual case) instead of 5 leads to a significantly longer average delay time (70 s vs. 25 s). For Bee Line and Holland West, the difference is at most 1 s. These results suggest that our model agrees generally with the real world.

Table 4. Comparison of model-predicted optimal number of tollbooths and real-world numbers for six specific highway/toll plaza pairs.
| Highway | Tollbooths (optimal) | Tollbooths (actual) | Comparison |
| --- | --- | --- | --- |
| Hiawassee | 4 | 4 | same |
| John Young Parkway | 4 | 4 | same |
| Dean Road | 5 | 4 | mismatch |
| Bee Line | 3 | 5 | close |
| Holland West | 7 | 6 | close |
| Holland East | 7 | 7 | same |
# Results and Discussion

Using real-world data from the Orlando-Orange County Expressway Authority [2004], we create 14 test scenarios: high and low traffic density profiles for highways with 1 to 7 lanes. For each scenario, we run our model for a number of tollbooths $n$ ranging from the number of highway lanes $m$ to $2m + 2$ (empirically, we found it unnecessary to search beyond this bound) and determine at which $n$ the average delay time is least—this is the optimal number of tollbooths. We present our optimality findings in Table 5. For high traffic densities and more than two lanes, the optimal number of tollbooths tends to exceed the number of highway lanes by about $50\%$, a figure that seems to match current practice in toll plaza design; for low densities, the optimal number of tollbooths equals the number of highway lanes.

Table 5. The optimal number of tollbooths for 1 to 7 highway lanes, by traffic density.
| Lanes | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Tollbooths (high density) | 3 | 4 | 5 | 6 | 8 | 9 | 11 |
| Tollbooths (low density) | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
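The selection rule stated earlier—the smallest $n$ whose average delay is within 1 s of the minimum—can be sketched as follows (our illustration, with hypothetical delay values):

```python
def optimal_tollbooths(avg_delay, tolerance=1.0):
    """Smallest number of tollbooths whose average delay is within
    `tolerance` seconds of the overall minimum.  avg_delay maps the
    number of booths n to the simulated average delay in seconds."""
    best = min(avg_delay.values())
    return min(n for n, t in avg_delay.items() if t - best <= tolerance)
```

For instance, with simulated delays `{4: 70.0, 5: 25.3, 6: 24.8, 7: 24.9}`, the rule picks 5 booths: 25.3 s is within 1 s of the 24.8 s minimum, so the cheaper configuration wins.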
For high traffic density but only as many tollbooths as lanes, the average delay time is roughly 500 s, almost 20 times as long as the average delay of 25 s for the optimal number of tollbooths. So we strongly discourage construction of only as many tollbooths as lanes if high traffic density is expected during any portion of the day. However, when there is low traffic density, this case is optimal, with an average delay time of 22 s.

# Further Study

To simulate real-world conditions more accurately, we could

- consider the effect of heterogeneous cars and tollbooths;
- allow for vehicles other than cars, each with its own size and acceleration constants;
- consider the effect of changing serving rates, since research shows that average serving time decreases significantly with line length [Edie 1954]; or
- vary the toll plaza radius.

# Strengths of Model

The main strength of the Multi-Car Interaction Model stems from our comprehensive and realistic development of single-car behavior. The intuitive notion of a car's safety gap and its relation to acceleration decisions, as well as the effects of reaction times associated with expected and unexpected occurrences, all find validation in traffic flow theory [Gartner et al. 1992]. The idea of a merge point and a car's behavior approaching that point mimics the practices of yielding right-of-way and of cautiously approaching lane merges. Our choice of time step realistically approximates the time that normal decision-making requires, allowing us to capture the complete picture of a toll plaza both on a local, small scale and on the scale of overall tendencies. Finally, by allowing for certain elements of normally distributed randomness in serving time and arrival time, we capture some of the natural uncertainty involved in traffic flow.

A great strength of our model lies in the accuracy of its results.
Our model meets all of our original expectations and furthermore predicts optimal tollbooth line numbers very close to those actually used in the real world, suggesting that our model approximates real-world practice.

Finally, the Multi-Car Interaction Model provides a versatile framework for additional refinements, such as modified single-car behavior, different types of tollbooths, and nonuniform serving rates.

# Weaknesses of Model

In the real world, a car in the center lane has an easier time merging into the center lanes than a car in a peripheral lane, but this behavior is not reflected in our model. We also disallow lane-changing except at fork and merge points, though cars often switch lanes upon realizing that they are in a slow tollbooth line. Our method of determining car arrival times may be flawed, since Gartner et al. [1992, §8] suggest that car volume is not uniformly distributed over a given time block but rather increases in pulses.

Perhaps the two greatest weaknesses of our model are that all cars behave the same and all tollbooth lanes are homogeneous. While we believe that we capture much of the decision-making process of navigating a toll plaza, we recognize that knowledge is imperfect, decisions are not always rational, and not all tollbooth lanes, and not all cars (or their drivers), are created equal.

# References

Adan, Ivo, and Jacques Resing. 2001. Queueing Theory. http://www.cs.duke.edu/~fishhai/misc/queue.pdf.
Chao, Xiuli. 2000. Design and evaluation of toll plaza systems. http://www.transportation.njit.edu/nctip/final_report/Toll_Plaza_Design.htm.
Edie, L.C. 1954. Traffic delays at toll booths. Journal of the Operations Research Society of America 2: 107-138.
Gartner, Nathan, Carroll J. Messer, and Ajay K. Rathi. 1992. Traffic Flow Theory: A State of the Art Report. Revised Monograph. Special Report 165. Oak Ridge, TN: Oak Ridge National Laboratory. http://www.tfhrc.gov/its/tft/tft.htm.
Green, Marc. 2000.
"How long does it take to stop?" Methodological analysis of driver perception-brake times. *Transportation Human Factors* 2 (3): 195-216.
Insurance Institute for Highway Safety. 2004. Maximum Posted Speed Limits for Passenger Vehicles as of September 2004. http://www.iihs.org/safety_facts/state_laws/speed_limit_laws.htm.
Malewicki, Douglas J. 2003. SkyTran Lesson #2: Emergency braking: How can a vehicle shorten it's [sic] stopping distance in an emergency. http://www.skytran.net/09Safety/02sfty.htm.
Orlando-Orange County Expressway Authority. n.d. E-Pass. http://www.expresswayauthority.com/trafficstatistics/epass.html.
______ . 2004. Mainline plaza characteristics. http://www.expresswayauthority.com/assets/STD&Stats%20Manual/Mainline%20Toll%20Plazas.pdf.
Texas Transportation Institute. 2001. Houston's travel rate improvement program. http://mobility.tamu.edu/ums/trip/toolbox/increase_system_efficiency.pdf.
UK Department of Transportation. 2004. The Highway Code. http://www.highwaycode.gov.uk/.

![](images/630d142b80c4b3276bf9b4f04267b94a8d0bcb82b082d7710bdafc75213b5d48.jpg)

Ivan Corwin, Nikita Rozenblyum, and Sheel Ganatra.

# Lane Changes and Close Following: Troublesome Tollbooth Traffic

Andrew Spann

Daniel Kane

Dan Gulotta

Massachusetts Institute of Technology

Cambridge, MA

Advisor: Martin Zdenek Bazant

# Summary

We develop a cellular-automaton model to address the slow speeds and emphasis on lane-changing in tollbooth plazas. We make assumptions about car-following, based on distance and relative speeds, and arrive at the criterion that cars maximize their speeds subject to

$$
\mathrm{gap} > \left\lfloor \frac{V_{\mathrm{car}}}{2} \right\rfloor + \frac{1}{2}\left(V_{\mathrm{car}} - V_{\mathrm{frontcar}}\right)\left(V_{\mathrm{car}} + V_{\mathrm{frontcar}} + 1\right).
$$

We invent lane-change rules for cars to determine if they can turn safely and if changing lanes would allow higher speed. Cars modify these preferences based on whether changing lanes would bring them closer to a desired type of tollbooth. Overall, our assumptions encourage people to be a bit more aggressive than in traditional models when merging or driving at low speeds.

We simulate a 70-min period at a tollbooth plaza, with intervals of light and heavy traffic. We look at statistics from this simulation and comment on the behavior of individual cars.

In addition to determining the number of tollbooths needed, we discuss how tollbooth plazas can be improved with road barriers to direct lane expansion or by assigning the correct number of booths to electronic toll collection. We set up a generalized lane-expansion structure to test configurations.

Booths should be ordered to encourage safe behavior, such as putting faster electronic booths together. Rigid barriers affect wait time adversely.

Under typical traffic loads, there should be at least twice as many booths as highway lanes.

# Definitions and Conventions

Car/Driver. Used interchangeably; "cars" includes trucks.

Tollbooth lane and highway lane. There are $n$ highway lanes and $m$ tollbooths. The tollbooth lane is the lane corresponding to a particular tollbooth after lane expansion.

Default lane. In the lane-expansion region, each highway lane is assigned a tollbooth lane such that following the highway lane without turning leads by default to the tollbooth lane, and following that in the lane-contraction region leads to the corresponding highway lane. Other tollbooth lanes begin to exist at the start of lane expansion and become dead ends at the end of lane contraction.

Delay. The time for a car to traverse the entire map of our simulated world, which stretches 250 cells before and after the tollbooth.

Gap.
We represent a lane as an array; the gap between two adjacent cars is the difference of their array indices.

# Assumptions and Justifications

# Booths

A booth is manual, automatic, or electronic. A manual booth has a person to collect the toll, an automatic booth lets drivers deposit coins, and an electronic booth reads a prepaid radio frequency identification tag as the car drives by. A booth may allow multiple types of payment.

The cost of operating the booths is negligible. Compared to the cost of building the toll plaza or of maintaining the stretch of highway for which the toll is collected, this expense is insignificant, particularly since automated and electronic booths require less maintenance than manual booths.

Booth delays. Cars with an electronic pass can cross electronic tollbooths at a speed of 2 cells per time increment ( $\approx$ 30 mph). A car with an electronic pass can also travel through a manual or automatic booth but must wait 3-7 s for the gate to rise. A car without an electronic pass is delayed 8-12 s at an automatic booth and 13-17 s at a manual booth. We draw these delays uniformly at random from the stated intervals to ensure that cars do not exit tollbooths in sync.

# Cars/Drivers

Cars are generated according to a probability distribution. We start them at 1 mi from the tollbooth and generate for a fixed amount of simulated time (usually about 1 h), then keep running until all cars have reached the end of the simulated road 1 mi beyond the tollbooth. There are no entry or exit ramps in the 1-mi section leading to the tollbooth. Some vehicles are classified as trucks, which function identically but must use manual tollbooths if they do not have an electronic pass.

Drivers accurately estimate distances and differences in speed.

Car acceleration and deceleration are linear and symmetric.
In reality, a car can accelerate much faster from 0 to $15\mathrm{mph}$ than from 45 to $60\mathrm{mph}$ , and the distances for braking and acceleration are different; but this is a standard assumption for cellular-automaton models. + +Cars pack closely in a tollbooth line. Drivers don't want people from other lanes to cut into their line, so they follow at distances closer than suggested on state driver's license exams. + +Dissatisfaction from waiting in line is a nondecreasing convex function. An especially long wait is a major annoyance. In other words, it is better to spread wait times uniformly than to have a high standard deviation. + +# Lanes + +Toll collectors can set up new rigid barriers in the lane-expansion region. Doing so would make certain lane changes illegal in designated locations. Since adding an extra tollbooth can be cost-prohibitive, setting up barriers to promote efficient lane-splitting and merging is important. + +Signs are posted telling drivers what types of payment each lane accepts. If drivers benefit from a certain type of booth (e.g., electronic), they will tend to gravitate toward it. + +No highway lane is predisposed to higher speeds than others. Which lanes are "fast" or "slow" is dictated by the types of tollbooths that they most directly feed into. + +The lane-expansion region covers about 300 ft. The lane contraction section is also assumed to be 300 ft. + +# Criteria for Optimal Tollbooth Configuration + +Cars slowing or stopping at tollbooths make for bottlenecks. Since the speed of a car through a tollbooth must be slower than highway speed, adding tollbooths is an intuitive way to compensate. + +Our goal is a configuration of lanes and tollbooths that minimizes delay for drivers. Mean wait time is the simplest criterion but not the best. Consider the case where there are no electronic passes and traffic is very heavy. 
In this limiting but plausible case, lines are constantly present and cars pass through each booth at full capacity. For a fixed number of tollbooths, the total wait time should be similar regardless of tollbooth-lane configuration; but if one lane is moving notably faster than the others, then the distributions of wait times will differ. Because we assume that dissatisfaction is a convex function, we give more weight to people who are stuck a long time. Klodzinski and Al-Deek [2002, 177] suggest that the 85th percentile of delays is a good criterion.

At the same time, we do not wish to ignore drivers who go through quickly. Therefore, we take the mean of the data that fall between the 50th and 85th percentiles for each type of vehicle. This puts an emphasis on cars that are stuck during times of high traffic but does not allow outliers to hijack the data. We consider separately the categories of cars, trucks, and vehicles with electronic passes, take the mean of the data that fall between the 50th and 85th percentiles for each, and take the weighted average of these means according to the percentages of vehicles in the three categories.

We also wish to analyze the effect of toll plaza layout. We therefore record the incidence of unusually aggressive lane changes, excessive braking, and cars without an electronic pass getting "stuck" in the electronic lane.

# Setting Up a Model

In the Nagel-Schreckenberg cellular-automaton model of traffic flow [Nagel and Schreckenberg 1992, 2222], cars travel through cells that are roughly the size of a car with speeds of up to 5 cells per time increment. This model determines speeds with the rules that cars accelerate if possible, slow down to avoid other cars if needed, and brake with some random probability. The model updates car positions in parallel.
Such models produce beautiful simulations of general highway traffic, but less research has been done using the tight speed constraints and high emphasis on lane-changing that a tollbooth offers.

Creating models for multiple lanes involves defining lane-change criteria, such as: change lanes if there is a car too close in the current lane, if changing lanes would improve this, if there are no cars within a certain distance back in the lane to change into, and if a randomly generated variable falls within a certain range [Rickert et al. 1996, 537]. Even with only two lanes, one gets interesting behavior and flow-density relationships that match empirical observations [Chowdhury et al. 1997, 423]. Huang and Huang even incorporate tollbooths into the Nagel-Schreckenberg model, but their treatment of lane expansion assumes that each lane branches into two (or more) tollbooths dedicated to that lane [2002, 602]. In real life, highways sometimes add tollbooth lanes without distributing the split evenly among highway lanes.

To allow for various setups, we develop a generalized lane-expansion structure. In the tollbooth scenario, low speeds are more common than on general stretches of highway, and there is a need to address more than two lanes. We change car-following and lane-change rules to fit a congested tollbooth area.

In the Nagel-Schreckenberg model, cars adjust their speeds based on the space in front. The tollbooth forces a universal slowdown in traffic. At these slow speeds, it is possible to follow cars more closely than at faster speeds. To reflect the real world, we consider the speed of the car in front in addition to its distance away. Random braking is needed in the Nagel and Schreckenberg model to prevent the cars from reaching a steady state. However, in a tollbooth scenario, the desire not to let other cars cut in line predisposes drivers to follow the car in front more closely than expected.
Thus, we do not use random braking but incorporate randomness into the arrival of new cars and lane-change priority orders. Instead of Nagel and Schreckenberg's rules, we propose the following rules for a cellular-automaton model simulating a tollbooth scenario:

1. Cars have a speed of 0 to 5 cells per time increment. In a single time increment, they can accelerate or decelerate by at most 1 unit.
2. Drivers go as fast as they can, subject to the constraint that the distance to the car in front is enough so that if it brakes suddenly, they can stop in time.
3. Cars change lanes if doing so would allow them to move faster. They modify the increased speed benefits of changing lanes by checking whether the lane leads to a more desirable tollbooth type or whether they face an impending lane merger. Before changing lanes, cars check that the gap criterion of rule 2 applies to both the gap in front of and the gap behind the driver after the lane change.
4. At each time step, we update positions and speeds from front to back.

We examine the rules in detail. Let us say that

- there are 250 cells in a mile (a little over 21 ft/cell); and
- each time step represents about 1 s.

Rule 1's maximum speed of 5 cells per time step corresponds to $72\mathrm{\ mph}$ (each unit of velocity is just under $15\mathrm{\ mph}$), which is about the expected highway speed. The numbers for length and time increments do not need to be precise, since we can fix one and scale the other; what is important is that the length of a cell is a little more than the average length of a car (about 15 ft).

For any pair of following cars, we want the rear car to be able to decelerate at a rate of at most 1 unit per time step and still avoid collision with the front car, even if the front car begins decelerating at a rate of 1 unit per time step squared.
If for a given time step the rear and front cars have speeds $V_{\mathrm{car}}$ and $V_{\mathrm{frontcar}}$ and immediately begin decelerating at a rate of 1 unit per time step squared until they stop, the total distances that they travel are

$$
V_{\mathrm{car}} + (V_{\mathrm{car}} - 1) + \dots + 1 = \frac{1}{2}V_{\mathrm{car}}(V_{\mathrm{car}} + 1),
$$

$$
V_{\mathrm{frontcar}} + (V_{\mathrm{frontcar}} - 1) + \dots + 1 = \frac{1}{2}V_{\mathrm{frontcar}}(V_{\mathrm{frontcar}} + 1).
$$

Our condition is equivalent to the car in back remaining behind the car in front, so the gap between the cars must satisfy

$$
\mathrm{gap} > \frac{1}{2}V_{\mathrm{car}}(V_{\mathrm{car}} + 1) - \frac{1}{2}V_{\mathrm{frontcar}}(V_{\mathrm{frontcar}} + 1) = \frac{1}{2}(V_{\mathrm{car}} - V_{\mathrm{frontcar}})(V_{\mathrm{car}} + V_{\mathrm{frontcar}} + 1).
$$

Thus, at each update, the rear car checks whether it can increase its speed by 1 and still satisfy this inequality, or whether it must decrease its speed to maintain the inequality, and acts accordingly.

However, according to this model, if two cars are going the same speed, then they can theoretically touch. Besides being a safety problem, this also contradicts the observations of Hall et al. that the flow of cars as a function of percent occupancy of a location increases sharply until about $20\%$ and decreases thereafter [1986, 207]. With the inequality above, we could generate initial conditions with high occupancy and high flow. Before we discard our model, though, let us first check whether these conditions would actually show up in the simulation.

We add the rule that a car tries to leave at least $\lfloor V_{\mathrm{car}} / 2 \rfloor$ empty spaces in front of it; this would still let cars tailgate at low speeds.
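The stopping-distance bound and the $\lfloor V_{\mathrm{car}}/2 \rfloor$ buffer just introduced combine into a speed update that can be sketched as follows (our illustration; the paper gives no code, and the function names are ours):

```python
def stopping_distance(v):
    # Distance to stop from speed v, decelerating 1 cell/step^2:
    # v + (v - 1) + ... + 1 = v(v + 1)/2 cells.
    return v * (v + 1) // 2

def gap_ok(gap, v, v_front):
    # Safe-following criterion: the gap must exceed the floor(v/2)
    # buffer plus the difference of the two cars' stopping distances.
    return gap > v // 2 + stopping_distance(v) - stopping_distance(v_front)

def next_speed(gap, v, v_front, v_max=5):
    # Fastest speed within 1 unit of the current speed (capped at
    # v_max) that still satisfies the criterion; otherwise decelerate.
    for v_new in range(min(v + 1, v_max), max(v - 2, -1), -1):
        if gap_ok(gap, v_new, v_front):
            return v_new
    return max(v - 1, 0)
```

For example, a car at speed 3 with a gap of 10 cells behind another car at speed 3 may accelerate to 4, while a car at speed 1 with no gap at all must brake to 0.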
For high speeds, this would be a somewhat unsafe distance but consistent with aggressive merging; yet we expect high speeds to be rare near the tollbooth during moderate or high congestion. Thus, our final criterion for rule 2 is that a car looks at the number of empty spaces in front of it and adjusts its speed (upward if possible) so that it still meets the inequality

$$
\mathrm{gap} > \left\lfloor \frac{V_{\mathrm{car}}}{2} \right\rfloor + \frac{1}{2}\left(V_{\mathrm{car}} - V_{\mathrm{frontcar}}\right)\left(V_{\mathrm{car}} + V_{\mathrm{frontcar}} + 1\right).
$$

When changing lanes (rule 3), cars ask, "If I changed lanes, how fast could I go this time step?" Cars avoid making lane changes that could cause a collision, as determined by the gap criterion. When given an opportunity to change lanes, a car compares the values of the maximum speeds that it could attain if it were in each lane but adds modifiers. Lanes have penalties in valuation for leading to tollbooths that the driver cannot use ($-2$ per lane away from a usable lane before the lanes branch and $-20$ after) or that are suboptimal ($-1$ per lane away from an optimal lane), where suboptimal means, for instance, a car with an electronic pass in a lane that does not accept it. Leaving the tollbooth, drivers try hard to get out of dead-end lanes ($-3$ or $-5$, depending on how far it is to the end). If drivers value the lane they are in and a separate lane equally, they do not change. If drivers value the lanes on their left and right equally, and both more than their current lane, they pick one at random.

We update speeds in each lane from front to back, with lanes chosen in a random order. A consequence of this is that information can propagate backwards at infinite speed if, for example, the head of a string of cars all going at speed 1 comes to a complete stop.
This infinite wave-speed problem could be fixed by introducing random braking, but at slow speeds we find it more acceptable to have people inching forward continuously than to have people braking from 1 to 0 at inopportune times. This also has consequences for lane changes, in that randomly giving lanes an update priority will have different results from processing all lane changes in parallel. Although some cellular-automaton traffic models in the literature update in parallel, we use serial updating because it makes handling the arrays easier and eliminates the problem of having people from two different lanes trying to change into the lane between them at the same time. The random update priority ordering for the lanes is changed every time increment, so that there is less systematic asymmetry in lane changing.

# Generalized Lane-Expansion Structure

We develop a system to describe easily a large number of different tollbooth setups. The road both starts and ends as an $n$-lane highway and contains $m$ tollbooths in the middle. The lane dividers are labeled from 1 to $n + 1$ for the highway lanes and from 1 to $m + 1$ for the tollbooth lanes. A rigid barrier consists of an $(x, y)$ pair, where the $x$ coordinate represents a highway-lane divider and the $y$ coordinate represents a tollbooth-lane divider. Figure 1 shows the case $n = 4$, $m = 6$, with rigid barriers at $(1, 1)$, $(2, 3)$, $(3, 4)$, and $(5, 7)$. Cars may not make lane changes across a rigid barrier.

![](images/b9d4143752a355b11df1364c327568560bd998a447ab91e9d29a1e30b16c9805.jpg)
Figure 1. Generalized lane-expansion scheme.

Suppose that we have an ordered set of rigid barriers $\{(x_1, y_1), \ldots, (x_k, y_k)\}$, where $x_{i+1} > x_i$ for $i = 1, \ldots, k - 1$.
Then the set of rigid barriers must obey the following rules:

- you cannot drive off the road: $(1,1)$ and $(n + 1, m + 1)$ must be rigid barriers; and
- rigid barriers do not cross each other: for $i = 1, \ldots, k - 1$, we have $y_{i + 1} > y_i$.

The dotted lines in Figure 1 can be crossed as normal lane changes. In the lane-expansion region, each highway lane is assigned a "default" tollbooth lane that it most naturally feeds into. In Figure 1, highway lanes 1, 2, 3, and 4 feed tollbooth lanes 1, 3, 4, and 5. If there is an adjacent lane not blocked by a rigid barrier, a car can enter that lane. The default tollbooth lane then feeds back into the highway lane after the tollbooth, and the other tollbooth lanes are treated as dead ends. Cars are given an incentive to change out of these dead-end lanes ahead of time. We assume that rigid barriers and default lanes are symmetric between lane expansion and lane contraction. Additionally, no lane changes are allowed in the five cells immediately preceding and following the tollbooth cell.

# Results

We simulate a 70-min period in which incoming traffic starts light, increases for $40\mathrm{\ min}$, then decreases again. Figure 2 shows the generation rates for light, normal, and heavy traffic. For a four-lane highway, these settings correspond to volumes of about 2200, 3000, and 3600 cars over the 70-min period.

![](images/9a135cc55196fa331c3b590006cd5e907d30f95752f30d8e39042f376c6095cb.jpg)
Figure 2. Traffic generation rates.

We test two cases of allocation and arrangement of tollbooths: no barriers, or else each highway lane branches into an equal number of tollbooth lanes. In both cases, we make the odd-numbered lanes the default lanes. We test both of these cases for different orderings of the tollbooths. For a 4-lane highway, we use 2 electronic booths, 4 automatic booths, and 2 manual booths. Half of the vehicles have electronic passes, and $10\%$ of the vehicles are trucks (with no electronic pass).
First we cluster all booths of the same type in some permutation, then we alternate types. Figure 3 shows data averaged over 10 runs.

![](images/0d3fd42d7b3b203748deca95640d35c97034f1c222dd451d09d434fe87a46423.jpg)
Figure 3a. Clustered lanes.

![](images/fcef5b13d5001c772ec94cf14a747a7ae5636f69396ffffa49d27b82f0fdb635.jpg)
Figure 3b. Alternating lanes.
Figure 3. Average delays for two lane configurations and with vs. without barriers. All situations are for normal traffic load, 4 highway lanes, and 8 booths: 2 electronic, 4 automatic, 2 manual.

The $x$-axis gives configurations, and the $y$-axis is adjusted average delay time (the mean of the 50th through 85th percentiles of travel time through the 250 cells before and after the tollbooth). If there were no tollbooth, then a car at full speed would have delay 100 s. Barriers are slightly worse than just allowing people to change lanes.

We note from Figure 3a that each of the clustered lane types is different from its mirror image, and this phenomenon is reproducible, which is puzzling. We think that it is caused by our handling of the concept of default lane, where some lanes feed directly into tollbooth lanes; with unrestricted lane expansion, this might make some lane changes easier than others.

The alternating tollbooth configurations appear to have slightly less delay, but they also generate more warning flags about dangerous turns and cars becoming stuck in tollbooth lanes that they cannot use. For each clustered configuration, either it or its mirror image gives times equivalent to the alternating tollbooth configurations. Therefore, for safety reasons, we suggest using the clustered configurations.

We next determine how many of each type of booth to use for a 4-lane highway with 8 tollbooths and a lane-expansion region with no rigid barriers. We put the electronic booths on the left and the manual booths on the right.
We vary the numbers of each type of tollbooth for different distributions of cars, trucks, and vehicles with electronic passes and run the simulation under normal traffic loads. Unless the percentage of trucks is very low, allocating only 1 manual booth for the trucks leaves a large number of trucks stuck in lanes that they cannot use. The number of electronic booths should be 2 or 3, depending on whether cars with electronic passes outnumber vehicles without them. Since 8 to 12 tollbooths is a reasonable size for a 4-lane highway, we recommend very roughly one-fourth electronic, one-half automatic, and one-fourth manual tollbooths.

How many tollbooths are needed for different levels of traffic? We round down the number of manual and electronic tollbooths and round up the number of automatic tollbooths from the above proportions. Using the light, normal, and heavy traffic loads defined in Figure 2 above, we arrive at Figure 4.

![](images/2fb59e91db3cc6c12e83f1d223e023161765b0d1227fe63de0209d02dbd29bf1.jpg)
Figure 4. Delay vs. number of tollbooths.

Finally, we consider the limiting case of no trucks and no electronic passes (all tollbooths are automatic). This is the standard case directly comparable to other models in the literature. Under light, normal, and heavy traffic loads, we find that delay times are as in Figure 5.

Without electronic tollbooths, cars experience much longer delays, since each must stop at a tollbooth. With only one tollbooth per lane, the normal traffic load (3000 vehicles over 70 min) forces many people to wait over 45 min! It takes about 12 lanes to reach minimal delay in this situation, instead of the 9 in the situation with automatic and electronic lanes.

# Do the Cars Behave Reasonably?

In Figure 6, we graph travel time vs. arrival time for all cars, under normal traffic loads with no barriers, 4 highway lanes, and 8 tollbooths (from left to right: 2 electronic, 4 automatic, 2 manual).
There are two main features:

- The line represents cars with electronic passes. Even under a heavy traffic load, they are not terribly delayed, since they can pass through their booth without stopping.
- Along the top, we see the cars without electronic passes. The distribution of their wait times looks like the graph of their generation rate shifted over by about 10 min. One can even see the graph split into several "lanes," which shows the difference between the slower truck lanes (manual tollbooths) and the normal cars (automatic tollbooths).

![](images/cdd8661220745ad28403909e6644e905551b79ba1957ecbeef1b176f9ef1678f.jpg)
Figure 5. Delay vs. number of tollbooths—no trucks or electronic passes.

![](images/edc7e0feceec427489a7f4998a118c44bea43a3bc28159818ce7a1510bec5024.jpg)
Figure 6. Travel time vs. arrival time.

We are also interested in what configurations lead to potential accidents. We ran setups under the default parameters of normal traffic load, $50\%$ electronic passes, and $10\%$ trucks. Figure 7 shows the number of occurrences of several types of these behaviors, out of about 3000 cars total.

![](images/58d660c6987714d2277fb652360d7a5a2c1e60c0f63236c2d56962c7b6a12028.jpg)
Figure 7. Incidents of dangerous behavior.

We see in the leftmost configuration that having only one manual lane leads to trucks stuck in the wrong booth. Trucks joining the wrong booth also seems to lead to an increase in hard braking; this appears to be an artifact of cars traveling at speeds of 1 or 2 not decelerating properly when nearing a tollbooth. In our experiments, this phenomenon tends to be correlated with inefficient lane-changing schemes. The second configuration from the left is our recommended configuration. The third and fourth configurations show the difference that barriers make: They cause fewer tollbooth mistakes but lead to dangerous turns and hard braking, which are probably related.
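The adjusted delay statistic used in these figures (the mean of the 50th through 85th percentiles of travel time) can be sketched as follows; the function name and sample data are ours, not the authors':

```python
def adjusted_average_delay(travel_times):
    """Mean of the 50th-85th percentile band of travel times.

    Trimming the fastest half and the slowest 15% suppresses both
    free-flowing cars and extreme outliers (our reading of the metric).
    """
    times = sorted(travel_times)
    n = len(times)
    lo = int(0.50 * n)   # index of the 50th percentile
    hi = int(0.85 * n)   # index of the 85th percentile
    band = times[lo:hi] or times[-1:]  # guard for degenerate tiny samples
    return sum(band) / len(band)

# Illustrative: 100 synthetic travel times from 100 s upward.
times = [100 + i for i in range(100)]
print(adjusted_average_delay(times))  # 167.0 (mean of times[50:85])
```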
# Sensitivity to Parameters

Changing the length of the lane-expansion and -contraction regions did not have a statistically significant effect on either wait times or logged bad behaviors. The percentages of cars with electronic passes and of trucks can be changed by a fair amount before they affect the results. For a general number $n$ of highway lanes and rigid barriers, the marginal return of adding a new tollbooth after $2n$ or $2n + 1$ is small unless the traffic load is exceptionally large.

# Strengths and Weaknesses

# Strengths

Can handle a wide variety of possible setups. It is hard to add new tollbooths but easier to change the type of tollbooth or set up barriers.

Captures important features of the actual situation.

Behavior based on simple procedures meant to accomplish natural goals. We avoid introducing artificial effects by basing drivers' behaviors on simple methods of accomplishing natural goals, such as avoiding collisions and getting into a better lane.

# Weaknesses

Need to obtain real-world parameters. If we were acting as consultants for a particular highway, we would collect data.

More complicated than simple models in the literature. Our model may introduce some artificial behavior. Cellular-automaton models are supposed to have complex behavior emerge from simple assumptions, not the other way around.

Infinite speed of information propagation. Due to the order in which cars are updated in our model, information about obstacles can propagate backwards at infinite speed, an effect that can lead to inaccuracies.

# Conclusion

Cellular-automaton models are one effective means of studying traffic simulations. Other approaches use partial differential equations motivated by kinetics or fluid mechanics [Chowdhury et al. 2000, 213-225].

Our cellular-automaton model gives us valuable insight into the tollbooth traffic problem. We can see cars flowing through the tollbooths and piling up during rush hour.
We can follow the motions of individual cars and collect statistics on their behaviors. From our experiments, we make the following recommendations:

Tollbooths should be ordered based on encouraged behavior. Safety considerations should take precedence; putting faster booths on the left and slower booths on the right accomplishes this.

No barriers. Barriers prevent drivers from getting to lanes that they need to use.

The distribution of types of cars should determine the number of tollbooths. Traffic density has little effect on the number of tollbooths needed to minimize delay; the distribution of types of cars has a much larger effect.

An effective ratio of tollbooths is 1 electronic : 2 automatic : 1 manual.

# References

Chowdhury, Debashish, Ludger Santen, and Andreas Schadschneider. 2000. Statistical physics of vehicular traffic and some related systems. Physics Reports 329: 199-329.
Chowdhury, Debashish, Dietrich E. Wolf, and Michael Schreckenberg. 1997. Particle hopping models for two-lane traffic with two kinds of vehicles: Effects of lane-changing rules. Physica A 235: 417-439.
Haberman, Richard. 1977. Mathematical Models: Mechanical Vibrations, Population Dynamics, and Traffic Flow. Englewood Cliffs, NJ: Prentice Hall. 1998. Reprint. Philadelphia: SIAM.
Hall, Fred L., Brian L. Allen, and Margot A. Gunter. 1986. Empirical analysis of freeway flow-density relationships. Transportation Research Part A: General 20 (3): 197-210.
Huang, Ding-wei, and Wei-neng Huang. 2002. The influence of tollbooths on highway traffic. Physica A 312: 597-608.
Klodzinski, Jack, and Haitham Al-Deek. 2002. New methodology for defining level of service at toll plazas. Journal of Transportation Engineering 128 (2): 173-181.
Nagel, Kai, and Michael Schreckenberg. 1992. A cellular automaton model for freeway traffic. Journal de Physique I 2 (12): 2221-2229.
Rickert, M., K. Nagel, M. Schreckenberg, and A. Latour. 1996.
Two lane traffic simulations using cellular automata. Physica A 231: 534-550.

![](images/2468c10ddff1d8c74443d5fd901f726c5cb091c057e7be32a8cc398043a15d09.jpg)
Martin Bazant (advisor), Daniel Kane, Andrew Spann, and Daniel Gulotta.

# A Quasi-Sequential Cellular-Automaton Approach to Traffic Modeling

John Evans
Meral Reyhan
Rensselaer Polytechnic Institute
Troy, NY

Advisor: Peter Kramer

# Summary

The most popular discrete models to simulate traffic flow are cellular automata, discrete dynamical systems whose behavior is completely specified in terms of local regions. Space is represented as a grid, with each cell containing some data, and these cells act in accordance with some set of rules at each temporal step. Of particular interest for this problem are sequential cellular automata (SCA), in which the cells are updated in a sequential manner at each temporal step.

We develop a discrete model with a grid to represent the area around a toll plaza and cells to hold cars. The cars are modeled as 5-dimensional vectors, with each dimension representing a different characteristic (e.g., speed). By discretizing the grid into different regimes (transition from highway, tollbooth, etc.), we develop rules for cars to follow in their movement. Finally, we model incoming traffic flow using a negative exponential distribution.

We plot the average time for a car to move through the grid vs. incoming traffic flow rate for three cases: 4 incoming lanes with 4, 5, and 6 tollbooths. In each plot, we note that at a certain value of the flow rate there is a boundary layer in our solution. As we increase the ratio of tollbooths to incoming lanes, this boundary layer shifts to the right. Hence, the optimum solution is to pick the minimum number of tollbooths for which the maximum expected flow rate is located to the left of the boundary layer.
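The selection rule just stated can be written down directly: given the boundary-layer location measured for each candidate booth count, take the smallest count whose boundary lies beyond the expected peak flow. The boundary values below are illustrative placeholders, not the paper's measurements:

```python
def min_tollbooths(expected_peak_q, boundary_by_booths):
    """Smallest booth count whose boundary layer exceeds the peak flow.

    boundary_by_booths maps a booth count to the flow rate q at which
    its boundary layer (sharp rise in travel time) sets in.
    """
    for booths in sorted(boundary_by_booths):
        if expected_peak_q < boundary_by_booths[booths]:
            return booths
    return None  # no candidate handles the expected demand

# Illustrative boundary-layer positions (cars per 2 s per lane).
boundaries = {4: 0.37, 5: 0.48, 6: 0.60}
print(min_tollbooths(0.45, boundaries))  # 5
```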
# Introduction

![](images/360dd8b09cb60fcd6ef772c42cdee8bf3870aba2e3aa45dd4b0e90e06fa066ab.jpg)
Figure 1. The New Jersey Turnpike (I-95) at night.

Models for traffic flow can be broken down into two basic types.

- The first type treats space and time as a continuum; both the traffic and time are continuous in nature.
- The second type, discrete models, treats space as a lattice and time discretely. A common discrete model is a cellular automaton, where space is modeled by a lattice and each lattice site represents a state of the system. The lattice sites are updated and their states change. For traffic flow, the state of a lattice site represents whether or not a car is present at that spatial location.

Near a tollbooth, cars must stop to pay before moving on. Since each car affects the other cars in its direct neighborhood, it is not reasonable to model cars as a continuum. Discrete time also allows us to control the movement of the cars at each individual time step. Finally, discrete models in general are much easier to understand and to implement on modern computing resources.

# Assumptions

- Upon nearing a toll plaza, a driver maneuvers based on local congestion to minimize travel time.
- Within 100 ft of the toll plaza, a driver remains in a lane and slows down to an average speed of about 5-10 mph. We base the speed of the cars on what is suggested in most driver's manuals: Car separation should be one car length for every 10 mph of speed.
- Once a driver pays the toll, they maneuver to a highway lane and accelerate to highway speeds.
- Drivers do not cooperate. While the drivers are not directly competing against one another, they are affecting each other and hence act as indirect opponents.
- Vehicles are of constant length (17.5 ft).
- It takes about 4 s for a tollbooth employee to process a motorist [Chao n.d.].
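These assumptions fix the model's space and time quanta: with the 25-ft cell and 2-s step used later, advancing one cell per step corresponds to roughly 8.5 mph. A quick check (the helper name is ours):

```python
CELL_LENGTH_FT = 25.0   # one grid cell, about one car length plus gap
TIME_STEP_S = 2.0       # one temporal step
FT_PER_MILE = 5280.0

def cells_per_step_to_mph(cells):
    """Convert a speed in grid cells per time step to miles per hour."""
    ft_per_s = cells * CELL_LENGTH_FT / TIME_STEP_S
    return ft_per_s * 3600.0 / FT_PER_MILE

print(round(cells_per_step_to_mph(1), 2))  # 8.52 mph
print(round(cells_per_step_to_mph(3), 2))  # 25.57 mph, near the model's cap
```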
# A Quasi-SCA Model of Toll Plaza Dynamics

# Case 1: Equal Numbers of Lanes and Booths

# Preliminaries

Cellular automata (CA) are discrete dynamical systems whose behavior is completely specified locally. Space is represented as a uniform grid, with each cell containing data. Time advances in discrete steps, and the laws of the universe are expressed in a look-up table relating each cell to nearby cells to compute its new state. The system's laws are local and uniform.

The basic one-dimensional cellular-automaton model for highway traffic flow is CA rule 184, as classified by Wolfram [Nagel et al. 1998; Jiang 2003; Wolfram 2002]. CA 184 is a discrete-time process with state space $\{0,1\}^{\mathcal{Z}}$ and the following evolution rule: If $\eta \in \{0,1\}^{\mathcal{Z}}$ is the state at time $n$, then the state $\eta'$ at time $n + 1$ is defined by

$$
\eta'(x) := \left\{ \begin{array}{ll} 1, & \text{if } \eta(x) = \eta(x + 1) = 1; \\ 1, & \text{if } \eta(x - 1) = 1 - \eta(x) = 1; \\ 0, & \text{otherwise}, \end{array} \right.
$$

where $\eta(x)$ denotes the value of $\eta: \mathcal{Z} \to \{0,1\}$ at the coordinate $x$. That is, a site holds a car at the next step if its car is blocked by a car ahead, or if the car behind it moves in.

In this model, cars march to the right in a rather uniform manner, and all nodes execute their moves in parallel.

Toll plaza dynamics, while related to open-highway traffic dynamics, are quite different.

- Toll plazas cannot be approximated as covering an infinite domain.
- Drivers must make decisions based on who moves in front of them. In this sense, we use ideas from sequential cellular automata (SCA) [Tosic and Agha n.d.] instead of the classical schemes.
- Cells are updated in a slightly different manner than in classical cellular automata. To model car movement properly, "cars" are moved through cells one at a time.

Our model is like a board game. For these reasons, we dub our model a "Quasi-SCA Model of Toll Plaza Dynamics."
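Rule 184 can be simulated directly: a site keeps its car when the cell ahead is occupied, and receives a car when its left neighbor holds one and it is empty. A minimal sketch on a periodic lattice (the ring boundary and helper name are our own conventions):

```python
def rule_184_step(state):
    """One synchronous update of CA rule 184 on a periodic lattice.

    A site is occupied at time n+1 if it held a blocked car
    (eta(x) = eta(x+1) = 1) or a car moved in from the left
    (eta(x-1) = 1 and eta(x) = 0).
    """
    n = len(state)
    return [
        1 if (state[x] == 1 and state[(x + 1) % n] == 1)
          or (state[(x - 1) % n] == 1 and state[x] == 0)
        else 0
        for x in range(n)
    ]

# Cars (1s) march to the right whenever the cell ahead is free.
state = [1, 1, 0, 0, 1, 0, 0, 0]
print(rule_184_step(state))  # [1, 0, 1, 0, 0, 1, 0, 0]
```

Note that the update conserves cars: the blocked car at site 0 stays put while the cars at sites 1 and 4 each advance one cell.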
We divide a multilane highway into equally partitioned lanes. Each cell is approximately 25 ft long and contains information on whether it contains a car and, if it does, certain information about the car. Furthermore, there are specialized cell characteristics for different regimes, as shown in Figure 2. In our model, we also move forward in discrete time steps. For convenience, this time step is set to be 2 s in length.

![](images/a94b79bd85c2b345166f40ddc6579a57560a0406fb98abc55175f34a66add2bc.jpg)
Figure 2. Possible regimes.

![](images/fb07a41cd1dc170674f2e46ea7583df1d834e8b3b99ad15a6b5935d7009a5684.jpg)

To implement our model, we exploit the object-oriented features of C++. We create a car class, with certain variables associated with it, as shown in Table 1.

Table 1. Car class variables in C++.

| Variable | Description |
| --- | --- |
| Occupied | 1 = car, 0 = null |
| Congestion | Percent measure of local congestion |
| Speed | Measure of car speed |
| TotalTimeOnGrid | Counter measuring time on grid |
| TotalTimeInToll | Counter measuring time in toll |

The highway is represented as a large $50 \times n$ array of car variables, where $n$ is the number of lanes. When initialized, this array contains empty grid spaces. As cars enter from the left, grid spaces are activated and infused with information about the cars. Then, with this information, the state of the system at the next time step can be determined.

# Vehicle Speed

The speeds of cars not in the tollbooth regime are dictated by car separation having to be one car length for every 10 mph of speed. Since our model is discrete in both space and time, this criterion must be quantized. Moving one grid space ahead in one temporal step corresponds to a speed of about 8.5 mph. If we approximate one grid space as one car length and $8.5~\mathrm{mph} \sim 10~\mathrm{mph}$, we can generalize the speeds of the cars in the following manner:

$$
s(i, j, t) := \left\{ \begin{array}{ll} 0, & \text{if } \min_{x > i} \{x \mid o(x, j, t) = 1\} = i + 1; \\ 1, & \text{if } \min_{x > i} \{x \mid o(x, j, t) = 1\} = i + 2; \\ 2, & \text{if } \min_{x > i} \{x \mid o(x, j, t) = 1\} = i + 3; \\ 3, & \text{otherwise.} \end{array} \right.
$$

We enforce 25.6 mph as an upper limit to speed, since the vehicles must slow down as they approach the toll. At each time step, the speed for a car is updated just before it initiates movement.

# Congestion

Since a driver is far more forward-focused than rearward-focused, we consider congestion to be determined only by the cars immediately in front, in particular the five cells immediately ahead. We write the congestion for the car located in grid cell $(i, j)$ at time $t$ as

$$
c(i, j, t) := \frac{1}{5} \sum_{k = 1}^{5} o(i + k, j, t),
$$

where

$$
o(i, j, t) := \left\{ \begin{array}{ll} 1, & \text{if grid cell } (i, j) \text{ contains a car at time } t; \\ 0, & \text{otherwise} \end{array} \right.
$$

# Sequencing

Cells are updated sequentially as opposed to simultaneously, because cars make decisions based on the cars in front of them. Furthermore, in a given column of our array (that is, one spatial location across the four lanes), the car with the largest speed has the first initiative; the car with the second-largest speed moves second, etc. In the case of a tie, the car closer to the top of the grid moves first.

# Movement

# Transition Regimes

Transition regimes are regions where traffic comes in from the highway or leaves to the highway. In these regimes, drivers maneuver so as to optimize travel time while minimizing effort. Thus, movement possibilities in the transition regimes can be described by Figure 3.

The optimal maneuver is to move forward, but a driver will enter a lane to the right or left if the move minimizes congestion.

In two locations of the transition regimes, there are special considerations.

![](images/738936d1d6840f8fdcd0ab35faf4eadbdf23882083aa2842b6dc5bffb1c51ade.jpg)
Figure 3. Movement in transition regimes: center lane, far left lane, far right lane.

- The transition-from-highway regime: There must be some way to depict the arrival of traffic from the highway. We discuss later how we do this.
- The tail end of the transition-to-highway regime: Provided a car has sufficient speed, we eliminate the car from the grid. We also record its TotalTimeOnGrid variable.

# Tollbooth Regime

In the tollbooth regime, drivers no longer veer to the right or left. Instead, they move forward in line until they reach the tollbooth. In this region, spanning the 100 ft in front of the tollbooth, cars move at a maximum rate of one grid space per temporal element. Once in the tollbooth, they wait two entire temporal elements solely in the booth (about 4 s) until they move on to the transition regime.
This is implemented by incrementing a car's TotalTimeInToll variable (initialized to zero when the vehicle enters the map) every temporal step that the car is in the booth (for the entire step) and checking whether it is greater than 2. Often in this region, lines will form. As soon as a car emerges from the tollbooth, all of the cars behind move forward immediately. The dynamics of this regime are quite different from, and simpler than, the dynamics of the transition regime.

We illustrate this situation in Figure 4. The red cars in lanes one and four are stopped, waiting behind cars located in the booth. The green cars ahead of the toll are transitioning to the highway regime. The yellow car is moving into the tollbooth, and the blue car is moving further inside the region. The green car before the toll is just now moving into the tollbooth region. While its current speed is 25.6 mph, once inside the region, it decelerates to 8.5 mph.

![](images/53bbe289dfaefb5722f98758bd44122c989b963d1dce9ec8f32b39c549b0de83.jpg)
Figure 4. Movement in the tollbooth regime.

# Modeling the Incoming Traffic Flow

To make our model more accurate, we use a statistical distribution to predict incoming flow. Two commonly used distributions are the Poisson and the negative exponential. However, the Poisson distribution fits well only for light traffic [Aston 1966]. The negative exponential distribution is a good fit for heavy traffic; it is used to model the variation of gap lengths in a traffic stream and random arrivals. It has probability density function

$$
f(t) = q e^{-qt},
$$

where $t$ is the time (s) between arrivals and $q$ is the rate of arrival (cars/s), and cumulative distribution function

$$
F(t) = 1 - e^{-qt}. \tag{1}
$$

To implement these arrival times in our simulation, we assign them to a site of entry (a space) into the grid.
A random number generator creates a random fraction $R$; using (1), we solve for $t = -\ln R / q$. The value $t$ is assigned to a "spawn site," a place where "cars" are created. We use a counter to keep track of the time since the last spawning of a car. If this counter is greater than $t$ and the "spawn site" is empty (contains a null car), then a car is created at the spawning site. Otherwise, the counter is incremented until both of these conditions are met. Cars "arrive" in each lane of the simulation using this method. We use a modified $q$ in units of cars per 2 s per lane.

# Results

We simulate for varying values of $q$, the flow rate in cars per 2 s per lane, for a 4-lane highway with 4 tollbooths. We let $q$ vary from 0.01 cars/(2 s)/lane (0.02 cars/s overall) to 1 car/(2 s)/lane (2 cars/s overall). Figure 6 outlines a time evolution for a small value of $q$.

![](images/1615db47373ab45d1d01ffe0c17956ca7fa41f6bd9ab573165680f37eb522732.jpg)
Figure 6. Time evolution of a simulation, four temporal steps.

The time that cars take to move through the grid (or toll plaza) is an appropriate measure of congestion. Thus, we plot in Figure 7 the average time for getting through the grid versus the flow rate. The average time is obtained from a simulation accounting for one hour of traffic. We also plot, for each flow value, the maximum amount of time that anyone spent getting through the grid.

For $q$ in [0.01, 0.37] cars/(2 s)/lane (0.02-0.74 cars/s overall), drivers enjoy an average time through the grid below 50 s. We consider this an optimal situation. However, at around $q = 0.37$ cars/(2 s)/lane, there appears to be a boundary layer. For $q > 0.37$, it takes drivers an average of 2 min or more to get through the quarter-mile-long grid, corresponding to less than 10 mph. We demonstrate later that by adding more tollbooths, we shift the boundary layer and lower the average time for larger $q$.
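The spawn-site mechanism amounts to inverse-CDF sampling of (1); a minimal sketch, with our own variable and function names:

```python
import random
from math import log

def next_interarrival(q, rng=random):
    """Draw a headway t from F(t) = 1 - exp(-q t) by inverting the CDF.

    q is the arrival rate. Using r = 1 - R (with R uniform on [0, 1))
    keeps the argument of log strictly positive; since 1 - R is also
    uniform, t = -ln(r)/q is exponential with mean 1/q either way.
    """
    r = 1.0 - rng.random()
    return -log(r) / q

# A spawn site waits next_interarrival(q), then emits a car once its
# cell is empty; the sample mean should approach 1/q.
q = 0.5  # cars per time unit per lane
rng = random.Random(0)
samples = [next_interarrival(q, rng) for _ in range(100_000)]
print(round(sum(samples) / len(samples), 1))  # close to 1/q = 2.0
```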
Thus, a good strategy to determine the number of tollbooths is to estimate the anticipated maximum flow rate and choose a number of lanes for which $q$ never goes beyond the boundary layer.

Congestion is at its worst during rush hour, when toll plazas serve as bottlenecks. But what do these congestion levels mean in total time through the plaza? Is the number of tollbooths optimal?

The Hiawassee M/L Toll Plaza in Florida uses a 4-tollbooth plaza. In October 2003, the eastbound car count for 7-8 A.M. was 3403 cars [Orlando-Orange County Expressway Authority 2003], so cars arrived at a rate of 0.945 cars/s, or about 0.24 cars/s/lane. With our assumption that a car is 17.5 ft long, clearly four tollbooths are not enough to handle this heavy demand.

However, EZPass and other such programs allow one to minimize the time at a tollbooth. If even a small portion of the cars use the EZPass system, the value of $q$ at which the boundary layer occurs grows vastly. If we were to accurately determine an optimal number of tollbooths for a certain value of $q$ for a highway using such a system, we would have to approach the problem in a slightly different fashion. In particular, we would have to vary the time drivers spend at the booth and designate certain lanes as having a quick-pass system.

![](images/6f8cd9b17c5e93f549560d9821b3665aa0dbe1e2f4cf76d8f25146345c8afc83.jpg)
Figure 7. Average time through grid vs. flow rate.

![](images/6a0b727fd23c32687dbbab51acaf41da38c9db811bb6e33390579ad2d95dbe65.jpg)
Figure 8. Maximum time through grid vs. flow rate.

# Case 2: More Tollbooths than Lanes

# Preliminaries

The situation changes quite a bit if there are more tollbooths than incoming lanes. Drivers in the far left and right lanes start moving into the new tollbooth lanes. Hence, we introduce a new scheme, as presented in Figure 9.

![](images/9cec54abca3aad1826865be2dce6b1c622fc6aaff57ee1a54dfb62c5616fc329.jpg)
Figure 9. Possible regimes.
# Movement in the Expansion Regime

The expansion regime is where the incoming traffic lanes fan out to a greater number of tollbooth lanes. For the center lanes, movement is identical to the transition regimes. On the outer lanes, however, movement is slightly different. The movement possibilities are outlined in Figure 10.

![](images/aee816edd77d6150f9d57f96dba330b39ac8231cbef26be72d801ea4f23e0937.jpg)
Figure 10. Movement in the expansion regime.

For a driver in an outside lane, the optimal maneuver is to move into one of the newly created tollbooth lanes, unless the congestion is less in the current lane. Another new feature is that the driver will not try to move into one of the inner lanes, more for psychological than practical reasons: in the model, the driver assumes that the outside lanes are the least dense (and fastest), since they did not exist on the highway. Drivers in the newly created lanes are allowed to move only forward in our model. While a driver might move to an outside lane just to move back again, we consider the chance of this occurring to be very slim.

# Movement in the Compression Regime

The compression regime is where a greater number of tollbooth lanes collapse onto a smaller number of highway lanes. We have the movement possibilities presented in Figure 11.

![](images/e3ff02a37b94b38ed2597a9cadc966626e5bf6e419794013b9f34720044029c7.jpg)
Figure 11. Movement in the compression regime.

A driver in a tollbooth lane that continues as a highway lane follows the same rules as in the earlier tollbooth regime. A driver not in such a lane, however, tries to move back onto a highway lane; if this is not possible, they keep driving forward and trying again until they are forced to stop at the end of the tollbooth lane. This protocol can produce some hectic situations.

# Results

We simulate our second model for varying values of $q$ for 4 highway lanes with 5 and 6 tollbooths.
The range for $q$ is the same as in our first model. Figures 12-13 show the results for these two cases, for which we take the expansion and compression regimes to be 125 ft long.

![](images/9a42367dd7bbd5fe473e92da2bb8a99d8febea6873cbb3c9cdee21cacaca0856.jpg)
Figure 12. Average time through grid vs. flow rate, for 5 tollbooths.

![](images/7b7c2ecad0a568cb23a1689efb732b70054d5de7a4a427736b8475c96d048ea9.jpg)
Figure 13. Average time through grid vs. flow rate, for 6 tollbooths.

The boundary layer moves to the right as the number of toll lanes increases. Furthermore, the average time for values of $q$ to the right of the boundary layer decreases with more toll lanes. Thus, as suggested, one should choose the number of tollbooths with this behavior in mind. If one expects a certain maximum flow rate, one can run the simulation for various numbers of tollbooths and choose the least number such that the maximum flow is to the left of the boundary layer.

However, with an increased number of toll lanes comes an increased maximum individual travel time: At times, people become stuck in the toll lanes and have to wait for an opportune moment to move over. In our model, this is reflected in the fact that while the four-tollbooth case results in a maximum travel time of about 3.4 min, the 5- and 6-tollbooth cases sometimes have a maximum individual travel time near 4 min.

# Model Improvements and Discussion

Drivers do not always move in a predictable manner. A probabilistic model taking into account the unpredictable nature of humans could further improve our model.

Our model also does not take into account the possibility of accidents. An accident model would surely improve our model.

While we do take into account the random nature of incoming traffic flow, we could develop an even better model to approximate the flow rate.
Lastly, our model could include a probabilistic model for the time that a car waits at a tollbooth.

# Conclusion

We develop a quasi-SCA model for toll plaza dynamics that treats time and space in a discrete manner to capture the motivation and actions of drivers. We use a negative exponential distribution for the incoming flow rate of cars. We compute the average waiting time for different traffic flow rates.

At a certain flow rate, there is a boundary layer at which travel time increases sharply with flow rate. Thus, an optimal solution to the tollbooth problem is to choose the minimum number of tollbooths such that the expected rate of incoming flow corresponds to a point before the boundary layer.

# References

Aston, Winifred. 1966. The Theory of Traffic Flow. New York: John Wiley & Sons.
Campbell, Paul (ed.). 1999. Special Issue: The 1999 Mathematical Contest in Modeling. The UMAP Journal 20 (3). Lexington, MA: COMAP.
———. 2000. Special Issue: The 2000 Mathematical Contest in Modeling. The UMAP Journal 21 (3). Lexington, MA: COMAP.
Chao, Xiuli. n.d. Design and evaluation of toll plaza systems. http://www.transportation.njit.edu/nctip/finalreport/TollPlazaDesign.htm.
TranSafety Inc. 1997. Designing traffic signals to accommodate pedestrian travel. http://www.usroads.com/journals/p/rej/9710/re971002.htm.
Drew, Donald. 1968. Traffic Flow Theory and Control. New York: McGraw-Hill.
Edie, L.C. 1954. Traffic delays at tollbooths. Journal of the Operations Research Society of America 2: 107-138.
Hristova, Hristina. n.d. Lecture notes. MIT OpenCourseWare. http://ocw.mit.edu/OcwWeb/Mathematics/18-306Spring2004/LectureNotes/.
Jiang, Henry. 2003. Traffic flow with cellular automata. NKS SJSU (A New Kind of Science at San Jose State University). http://sjsu.rudyrucker.com/~han.jiang/paper/.
Nagel, Kai, Dietrich Wolf, Peter Wagner, and Patrice Simon. 1998. Two-lane traffic rules for cellular automata: A systematic approach.
*Physical Review E* 58 (2) (August 1998). http://www.sim.inf.ethz.ch/papers/nagel-etc-2lane/nagel-etc-2lane.pdf.
Orlando-Orange County Expressway Authority. 2003. Mainline plaza characteristics. http://www.expresswayauthority.com/assets/STD&Stats%20Manual/Mainline%20Toll%20Plazas.pdf.
Rauch, Erik. 1996. Locality. http://www.swiss.ai.mit.edu/~rauch/dapm/node1.html.
Sveshnikov, A.A. 1968. *Problems in Probability Theory, Mathematical Statistics, and Theory of Random Functions*. New York: Dover Publications.
Tosic, Predrag, and Gul Agha. n.d. Concurrency vs. sequential interleavings in 1-D threshold cellular automata. http://osl.cs.uiuc.edu/docs/ipdps04/ipdps.pdf.
Traffic simulations with cellular automaton model. April 30, 2000. http://www.newmedialab.cuny.edu/traffic/gridCA1/sld001.htm.
Wainer, Gabriel, and Norbert Giambiasi. n.d. Timed cell-DEVS: Modeling and simulation of cell spaces. http://www.sce.carleton.ca/faculty/wainer/papers/timcd.PDF.
Wikipedia. 2005. Toll roads. http://en.wikipedia.org/wiki/Tollroad. Accessed 2 February 2005.
———. 2005. Traffic congestion. http://wwwanswers.com/traffic+congestion\&r=67. Accessed 2 February 2005.
Wolfram, Stephen. 2002. *A New Kind of Science*. Champaign, IL: Wolfram Media.

![](images/9b6d6cd11d4a2093764d1abdb2cd43d8da4c2aedf62ce526c872b54351af9cb8.jpg)

John Evans, Peter Kramer (advisor), and Meral Reyhan.

# The Multiple Single Server Queueing System

Azra Panjwani

Yang Liu

HuanHuan Qi

University of California, Berkeley

Berkeley, CA

Advisor: Jim Pitman

# Summary

Our model determines the optimal number of tollbooths at a toll plaza by minimizing the time that a car spends in the plaza.

We treat the toll collection process as a network of two exponential queueing systems, the Toll Collection System and the Lane Merge System. The random, memoryless nature of successive car interarrival and service times allows us to conclude that both are exponentially distributed.
We use properties of single-server and multiple-server queueing systems to develop our Multiple Single Server Queueing System. We simulate our network in Matlab, analyzing the model's performance in light, medium, and heavy traffic for tollways with 3 to 6 lanes. The optimal number of tollbooths is roughly double the number of lanes.

We also evaluate a single tollbooth vs. multiple tollbooths per lane. The optimal number of booths improves the processing time by $22\%$ in light traffic and $61\%$ in medium traffic. In heavy traffic, one tollbooth per lane results in infinite queues.

Our model produces consistent results for all traffic situations, and its flexibility allows us to apply it to a wide range of toll-plaza systems. However, the minimum time predicted is an average value; hence, it does not reflect the maximum time that an individual may spend in the network.

# General Definitions

The Network: The point at which the car enters the queue for toll collection to the point at which the car is able to drive off with current traffic speed. It consists of two systems of queues.

Toll-Collection System: The point at which cars arrive at the toll plaza and form queues to the point at which they exit the booth after toll collection.

Lane Merge System: The point at which cars leave the tollbooth to enter the queue to merge back into the tollway lanes, to the point at which they can drive off with current speed.

Single Server Queueing System: A system with one queue and one server.

Multiple Server System: A system with one queue and multiple servers such that a customer has the freedom to choose any server available.

Arrival rate: The number of cars per minute per lane that arrive to a network or system.

Departure rate: The number of cars per minute per lane that depart from a network or system.

Service or processing: The act of toll collection.

Service rate: The number of cars per minute per booth being served.
Merge rate: The number of cars per minute per lane that merge back into the tollway lanes.

Total time: The time for a car to pass through the network.

Optimal time: The minimum feasible total time.

Idle time: The time interval during which the attendant is not serving anyone.

# General Assumptions

- Car arrival times are independent, identically distributed nonnegative random variables.
- Cars are served first-come, first-served.
- The service times for individual cars are independent, identically distributed nonnegative random variables with no correlation to the arrival process.
- In the long run, the rate at which cars are served is greater than the rate at which cars enter the network; otherwise, there would be infinite queues.
- There is no limit to the number of cars that can enter the network, because from the point of view of the network, the road length is arbitrarily large.
- Motorists tend to join the shortest queue in the vicinity; hence, in the long run, the queue length is about the same at every tollbooth.

Table 1. Table of variables.
| Variable | Description |
|----------|-------------|
| $S$ | The network of the Toll Collection System and the Lane Merge System |
| $S_1$ | The Toll Collection System |
| $S_2$ | The Lane Merge System |
| $\lambda_1$ | Average car arrival rate per lane to the $S_1$ queue |
| $\lambda_2$ | Average car arrival rate per lane to the $S_2$ queue |
| $\mu_1$ | Average service rate per lane in $S_1$ |
| $\mu_2$ | Average merge rate per lane in $S_2$ |
| $W$ | Total expected time spent by a car in $S$ |
| $W_1$ | Expected time spent by a car in $S_1$ |
| $W_2$ | Expected time spent by a car in $S_2$ |
| $\ell$ | Average length of a vehicle plus the safety distance in front of it |
| $\nu$ | Traffic speed on the road, independent of tollbooth collection |
| $n$ | Number of lanes in a tollway before the toll plaza |
| $m$ | Number of tollbooths in a toll plaza |
| $k$ | Number of lanes in a tollway after the toll plaza |
# Our Approach

We assume that the cars arrive according to a Poisson process. The arrival of a car at a time $t$ does not affect the probability distribution of what occurred prior to $t$; hence the system is memoryless [Pitman 1993]. A driver's decision to drive on a road at a particular time is independent of that of any other driver, so the time periods between successive arrivals of vehicles are independent exponential random variables. If the tollbooth attendant is idle, the driver "goes into service"; otherwise, the car joins the queue to be served.

Similarly, the server processes cars with successive service times also being independent exponential random variables. From probability theory, we know that the minimum of two independent exponential random variables with rates $\lambda$ and $\mu$ is another exponential random variable, with rate $\lambda + \mu$.

We apply the theory of exponential queueing systems to develop a model that predicts the value of $m$ that minimizes $W$.

# General Model

A queueing system often consists of "customers" arriving at random times to some facility where they receive service. They depart from the facility at the same rate at which they arrive. The network $S$ consists of two systems: the toll-collection system, $S_1$, where cars arrive and join the queue, and the lane-merge system, $S_2$, where people join the queue to receive the "service" of merging.

# Multiple Single Server Queueing System

We employ queueing theory together with continuous-time Markov chains to build our Multiple Single Server Queueing System, based on the following reasoning.

When cars get to a toll barrier, they determine which queue to join. In theory, they can join the shortest queue. In practice, however, they are unlikely to change too many lanes to join a shorter queue if there are other cars on the road.
In most cases, they are limited to entering the queue directly in front of them, or a queue to their immediate left or right. Furthermore, under the assumption that the queue lengths are approximately the same for all the queues, they are most likely to join the queue directly ahead.

The process is similar to a single-server queueing system, but the fact that drivers have some choice of tollbooth also gives this process properties of a multiple-server queueing system. However, multiple-server queueing systems allow for only one queue and the freedom to choose any server that is not occupied. Our system does not fall exactly under either one of the two categories; hence, we coin the name "Multiple Single-Server Queueing System" for the systems in our network, which has the following properties:

- It consists of several parallel single-service queues.
- Each queue has a "super server" with a processing rate of $\mu_{1} \times m / n$.

In Figure 1, each colored box represents the probability that a car in a lane uses a tollbooth of that color. The bigger the box, the higher the chance of choosing that tollbooth. Most drivers use the tollbooth right ahead of them, though a few would choose the tollbooth to the left or right (with equal probability). The probability that a car uses a tollbooth that is farther away is negligible. As we can see from Figure 1, the total areas of the colored boxes representing the probabilities of going through the tollbooths are ultimately the same from lane to lane. This implies that the service rate is the same for every tollbooth. Hence, in the long run, each lane is processed at the rate $\mu_{1} \times m / n$.

Similarly, the process of waiting in a queue to merge back into the $k$ lanes of the tollway after paying the toll can also be considered as a Multiple Single-Server Queueing System with processing rate $\mu_2 \times k / m$.
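The exponential building block behind these rates can be spot-checked numerically. The Monte Carlo sketch below is ours, not part of the paper; the rates $\lambda = 2$ and $\mu = 3$ are illustrative. It verifies the standard fact that the minimum of independent exponential variables with rates $\lambda$ and $\mu$ is exponential with rate $\lambda + \mu$, so its mean is $1/(\lambda + \mu)$:

```python
import random

random.seed(42)
lam, mu = 2.0, 3.0        # illustrative rates (our choice), in cars per minute
n_samples = 200_000

# Empirical mean of min(Exp(lam), Exp(mu)); theory predicts 1/(lam + mu) = 0.2.
mean_min = sum(
    min(random.expovariate(lam), random.expovariate(mu))
    for _ in range(n_samples)
) / n_samples
```

With 200,000 samples, the empirical mean lands within about 1% of $0.2$, consistent with the memoryless machinery used throughout the model.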
To allow for more flexibility in our model, $k$ may or may not equal $n$.

![](images/0698614389f5328f7cbcfd1a66179815a3b94d0d74f017d25a3aa31d2ce8a5c9.jpg)
Figure 1. Each lane has equal probability over all cars.

# Model Development

The total waiting time $W$ is the sum of the times to pass through the two systems, i.e., $W = W_{1} + W_{2}$.

Based on the queueing theory equation [Ross 2003]

$$
W = \frac {1}{\mu - \lambda},
$$

and the discussion of service rates, we find

$$
W = W _ {1} + W _ {2} = \frac {1}{\frac {m \mu_ {1}}{n} - \lambda_ {1}} + \frac {1}{\frac {k \mu_ {2}}{m} - \lambda_ {2}}.
$$

# Derivation of the Service Rates

We assume that on average each tollbooth attendant takes a fixed amount of time $t$ to collect a toll, so $\mu_1 = 1 / t$.

For heavy traffic situations, we also take into account driver reaction time $r$ before stepping on the gas and moving up to the booth. We incorporate this delay into the service time to get

$$
\mu_ {1} = \frac {1}{t + r}.
$$

We estimate $t \approx 5.5$ s and $r \approx 2.5$ s.

Calculating $\mu_{2}$ is a little trickier. We take into account $\nu$, which we consider to be determined independently from the toll plaza system. This is justifiable: whether or not a toll plaza interrupts a tollway, $\nu$ varies considerably with the traffic situation. Since $\nu$ is in miles per hour, and we're interested in cars per minute, we first transform the velocity into meters per minute. We also consider the fact that the car must accelerate from $0\mathrm{mph}$ up to $\nu$; hence we use the average speed of the car, $\nu / 2$, during the time that it must catch up to the tollway traffic. We then divide this speed by $\ell$, which depends on $\nu$, because the safety distance needed for cars at high speed is much greater than that for low speeds. Thus, we obtain

$$
\mu_ {2} = \frac {\nu}{2 \ell}.
$$

Since cars from the $m$ lanes of the toll plaza must merge back into the $k$ lanes of the highway, we calculate the overall merge rate per lane, $\mu_{2} \times k / m$, as described earlier, to be

$$
\frac {k}{m} \frac {\nu}{2 \ell}.
$$

# Derivation of the Second Arrival Rate

Since drivers join $S_{2}$ as soon as they depart $S_{1}$, the rate $\lambda_{2}$ is the same as the departure rate from $S_{1}$. Now, consider the departure rate from $S_{1}$. If there are $n$ lanes in the system and $n\lambda_{1} \geq m\mu_{1}$, then all $m$ servers are busy. Since each server works at rate $\mu_{1}$, the total departure rate is $m\mu_{1}$. On the other hand, if $n\lambda_{1} < m\mu_{1}$, then the servers can keep pace with arrivals and the total departure rate equals the total arrival rate, $n\lambda_{1}$. Since cars emerging from the tollbooth must merge into $k$ lanes in $S_{2}$, each of which has arrival rate $\lambda_{2}$, we have

$$
k \lambda_ {2} = n \lambda_ {1} \Longrightarrow \lambda_ {2} = \frac {n \lambda_ {1}}{k}.
$$

# Final Formula

Based on the discussion above, we get

$$
W = W _ {1} + W _ {2} = \frac {1}{\frac {m \mu_ {1}}{n} - \lambda_ {1}} + \frac {1}{\frac {k \nu}{2 m \ell} - \frac {n \lambda_ {1}}{k}}.
$$

Since the problem statement stipulates that under most situations $k = n$, we simplify this formula to

$$
W = W _ {1} + W _ {2} = \frac {1}{\frac {m \mu_ {1}}{n} - \lambda_ {1}} + \frac {1}{\frac {n \nu}{2 m \ell} - \lambda_ {1}}.
$$

# The Range of Feasibility

Our model can calculate the optimal number of tollbooths needed only if the denominators for both $W_{1}$ and $W_{2}$ are greater than zero. Therefore,

$$
\frac {m}{n} \mu_ {1} > \lambda_ {1} \quad \text {and} \quad \frac {n}{m} \frac {\nu}{2 \ell} > \lambda_ {1}.
$$

Hence the feasible range for the number of tollbooths is

$$
\left(\frac {\lambda_ {1} n}{\mu_ {1}}, \frac {n \nu}{2 \lambda_ {1} \ell}\right).
$$

For a single tollbooth per lane, we set $m = n$; the resulting $W$ is

$$
W = W _ {1} + W _ {2} = \frac {1}{\mu_ {1} - \lambda_ {1}} + \frac {1}{\frac {\nu}{2 \ell} - \lambda_ {1}}.
$$

The model is still a system of two queues. Though the merge factor $n / m$ reduces to 1, the cars must still catch up to traffic speed and may have to wait in a queue to do so.

# Data Analysis

We implement our algorithm for $W$ in Matlab using $n = 3, 4, 5,$ and 6, corresponding to most tollways. We vary $\lambda_{1}$ from 0.5 to 5 cars/minute for light traffic, from 5 to 10 cars/minute for medium traffic, and from 10 to 15 cars/minute for heavy traffic. We establish the range of feasibility for $m$ for each traffic situation. We then determine the number that gives minimal $W$.

# Parameter Values

We set $\mu_{1} = 11$ cars/min for light and medium traffic; we set $\mu_{1} = 7.5$ cars/min for heavy traffic, to account for the service time plus the reaction time of the cars waiting in queue.

To determine $\mu_{2}$, we set $\nu = 60\mathrm{mph}$ for light traffic situations, since most heavily trafficked tollways have speed limits between 50 and $70\mathrm{mph}$. We set $\nu = 46\mathrm{mph}$ for medium traffic and $\nu = 32\mathrm{mph}$ for heavy traffic. The average car length is between 3.5 and $5.5\mathrm{m}$ [Edwards and Hamson 1990]; hence we set the car length in our model to $4\mathrm{m}$. We set the safety distance to $20\mathrm{m}$ for light traffic, $14\mathrm{m}$ for medium traffic, and $8\mathrm{m}$ for heavy traffic. The optimal numbers of tollbooths for the different levels of traffic and numbers of highway lanes are shown in Table 2.

Table 2. Optimal numbers of tollbooths.
| Traffic | 3 lanes | 4 lanes | 5 lanes | 6 lanes |
|---------|---------|---------|---------|---------|
| Light | 5 | 7 | 9 | 10 |
| Medium | 5 | 9 | 9 | 11 |
| Heavy | 9 | 9 | 11 | 13 |
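The feasibility scan that produces such a table is easy to reproduce. The sketch below is ours, in Python rather than the authors' Matlab; it evaluates the final formula over integer $m$ with $k = n$, treating infeasible denominators as infinite waits. The mph-to-m/min conversion and the parameter values follow the text ($\mu_1 = 11$ cars/min, $\nu = 60$ mph, and $\ell = 4$ m car length $+$ 20 m safety distance for light traffic):

```python
def avg_time(m, n, lam1, mu1, v_mph, ell_m, k=None):
    """W = W1 + W2 from the final formula; None when m is infeasible."""
    k = n if k is None else k
    mu2 = (v_mph * 1609.344 / 60) / (2 * ell_m)  # nu/(2*ell) in cars/min per lane
    spare_tolls = m * mu1 / n - lam1             # denominator of W1
    spare_merge = k * mu2 / m - n * lam1 / k     # denominator of W2
    if spare_tolls <= 0 or spare_merge <= 0:
        return None                              # outside the range of feasibility
    return 1 / spare_tolls + 1 / spare_merge

def optimal_booths(n, lam1, mu1, v_mph, ell_m):
    feasible = {m: w for m in range(1, 50)
                if (w := avg_time(m, n, lam1, mu1, v_mph, ell_m)) is not None}
    return min(feasible, key=feasible.get)

# Light traffic on a 6-lane tollway: lam1 = 5 cars/min, mu1 = 11 cars/min,
# nu = 60 mph, ell = 24 m.
best = optimal_booths(n=6, lam1=5, mu1=11, v_mph=60, ell_m=24)
```

With these values the scan returns $m = 10$, which matches the light-traffic, six-lane entry of Table 2.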
Regardless of the traffic level, the optimal number of tollbooths is always greater than the number of highway lanes. However, for light traffic, the difference between the average wait for the optimal number of tollbooths vs. the average wait for $m = n$ is only about 2 s.

For medium traffic, though, the differences $(\approx 15\mathrm{s})$ are large enough to conclude that having extra tollbooths would be a wise decision.

For heavy traffic, a single tollbooth per lane would result in infinite waiting queues for all situations examined.

# Detailed Analysis of a Six-Lane Tollway

We conduct a detailed study for a six-lane tollway. The general trends observed for this dataset are typical for any number of lanes. We generate plots for the three traffic levels with the number $m$ of tollbooths as independent variable and $W$ as dependent variable. We keep $\lambda$ constant for each curve; hence we produce a set of level curves that show the optimal value of $m$ for each value of $\lambda$.

As the traffic gets heavier, the region of feasibility for $m$ gets smaller. This is because having too few tollbooths causes an infinite waiting time at $S_{1}$, while having too many tollbooths causes an infinite wait at $S_{2}$, due to the influx of cars processed in $S_{1}$.

For light traffic, the difference in $W_{\mathrm{ave}}$ for $m = 5$ and $m = 18$ is merely $2\mathrm{s}$. For medium traffic, a shift from the optimal number $m = 10$ causes a more dramatic increase in the time spent in the network. For heavy traffic, the range of feasibility reduces to a small region centered around the optimal number $m = 13$: namely 12, 13, or 14 (Figure 2). The onset of heavy traffic both before and after the tollbooth excludes more extreme values of $m$ from the feasible range. The beauty of the results is that the optimal number of tollbooths is the same for varying arrival rates.

# Conclusion

It is better to have more than one tollbooth per lane.
But having too many tollbooths per lane is just as bad. We recommend that for frequently traveled roads, the number of tollbooths available should be the maximum of all the optimal tollbooth numbers generated by our algorithm. The number of booths open can then vary with the traffic flow during the day.

For toll roads that usually have light traffic, having a single tollbooth per lane reduces the cost of building and running the toll plaza; the reduction in waiting time does not justify more tollbooths.

![](images/316614ff8542b723f572a29cd6b6855db03b38af8a8be23adc1d6a1df2437c18.jpg)
Figure 2. Wait vs. number of tollbooths for heavy traffic on a 6-lane tollway, for various arrival rates.

# Strengths of Our Model

- Our model withstands many variations in parameters.
- Given reasonable values for the parameters, the algorithm generates realistic results for the optimal number of tollbooths.
- When we vary the parameters within the range of a specific traffic situation, the optimal solution is consistent within that situation.
- The optimal number of tollbooths differs among traffic levels, reflecting the fact that varying the number of tollbooths has a significant impact on waiting time.
- The algorithm, though rich in theory, is very easy to implement and test.

# Weaknesses

- We assume that the arrival rate is less than the service rate at each system. In the long run, this assumption must hold in order to avoid infinite queues; but there can be intervals during which arrivals overwhelm the service rate. Hence, though the average waiting time for the optimal solution may be small, the maximum waiting time for some cars may be rather large.
- Our model's range of feasibility is limited by the rates at which the cars are served at the two systems.
- Our model predicts the optimal tollbooth numbers based on the minimal time, but this may not be the most cost-effective solution.
+- We don't incorporate the electronic payment passes that many toll systems use to minimize waiting time. + +# References + +Edwards, Dilwyn, and Mike Hamson. 1990. Guide to Mathematical Modeling. Boca Raton, FL: CRC Press. + +Pitman, Jim. 1993. Probability. New York: Springer-Verlag. + +Ross, Sheldon M. 2003. Introduction to Probability Models. San Diego, CA: Academic Press. + +![](images/295e03e1a4169c06c64b25a51b4f080e7765bb6095dcdbd3e1cdaf25968e8b5a.jpg) +Azra Panjwani, Jim Pitman (advisor), HuanHuan Qi, and Yang Liu. + +# Two Tools for Tollbooth Optimization + +Ephrat Bitton + +Anand Kulkarni + +Mark Shlimovich + +University of California, Berkeley + +Berkeley, CA + +Advisor: L. Craig Evans + +# Summary + +We determine the optimal number of lanes in a toll plaza to maximize the transit rate of vehicles through the system. We use two different approaches, one macroscopic and one discrete, to model traffic through the toll plaza. + +In our first approach, we derive results about flows through a sequence of bottlenecks and demonstrate that maximum flow occurs when the flow rate through all bottlenecks is equal. We apply these results to the toll-plaza system to determine the optimal number of toll lanes. At high densities, the optimal number of tollbooths exhibits a linear relationship with the number of toll lanes. + +We then construct a discrete traffic simulation based on stochastic cellular automata, a microscopic approach to traffic modeling, which we use to validate the optimality of our model. Furthermore, we demonstrate that the simulation generates flow rates very close to those of toll plazas on the Garden State Parkway in New Jersey, which further confirms the accuracy of our predictions. + +Having the number of toll lanes equal the number of highway lanes is optimal only when a highway has consistently low density and is suboptimal otherwise. 
For medium- to high-density traffic, the optimal number of toll lanes is three to four times the number of highway lanes. Both models demonstrate that if a tollway has lanes in excess of the optimal number, flow neither increases nor abates.

Finally, we examine how well our models can be generalized and comment on their applicability to the real world.

# Statement of Problem

We are asked for a model that determines the optimal number of tollbooths in a toll plaza located on an $n$-lane tollway. The two criteria that we use for evaluating optimality are total throughput in number of cars and average transit time for individual cars to pass through the plaza.

# Definitions

Number of highway lanes, $n$: The number of lanes on the highway entering and leaving the plaza.

Number of transit lanes, $m$: The number of tollbooths and lanes in the toll plaza.

Entry zone: The $m$-lane region of the toll plaza between the entry tollway and the tollbooths.

Merge zone: The $m$-lane region of the toll plaza between the tollbooths and the exit tollway.

Flow or throughput, $q$: The number of cars per second that pass through a given point $x$ in our system.

Backlog, $B$: The number of queued cars waiting to enter the tollbooths or exit the plaza.

Tollbooth processing time, $\tau_{i}$: The number of seconds required, on average, for a car to pull into, pay, and exit a tollbooth $i$.

Density, $\rho(x)$: The number of vehicles per square meter in a given region.

Optimal number of toll lanes, $m^{*}$: The optimal number of tollbooth lanes.

Bottleneck capacity, $q_{b}$: The maximum number of cars per second that can pass through a given bottleneck $b$.

# Assumptions

- A toll plaza consists of $n$ highway lanes diverging into $m$ toll lanes and converging back into $n$ highway lanes. The toll plaza is sufficiently long to permit cars to reach all of the $m$ tollbooths.
- Each tollbooth controls one lane and can serve at most one car at a time.
- Exit from the tollbooths is not metered.

- Drivers seek to move through the toll plaza as quickly as possible while maintaining safety.
- Within the toll plaza, all vehicles move at the safest possible maximum speed for a given density, since drivers seek to avoid accidents.

# Model Development

# Motivations

There are two general approaches to modeling traffic motion:

Macroscopic approaches begin with some observations about aggregate traffic behavior and attempt to approximate traffic behavior as a continuous flow over some large region or large time period.

Microscopic approaches attempt to model driver and car behavior and use this information in aggregate as the basis for modeling the large-scale behavior of traffic.

The microscopic approach often hinges on a large number of parameters that may be difficult to model accurately. For example, driver decision-making strategies, driving styles, preferred following distances, and the physical parameters of individual vehicles are highly variable.

We first pursue the macroscopic approach. Such approaches are traditionally used for modeling traffic behavior over long stretches of highway and for modeling traffic jams, so it may seem that such an approach is inapplicable to a setting where the highway length is not large. However, there are two advantages:

- Properties that vary significantly between drivers and vehicles are averaged out if we let the system run for a sufficiently long time so that it approaches a steady-state equilibrium.
- At equilibrium, we can use the total throughput of cars through the toll plaza over a given duration as a metric for the disruption it causes.

We construct a theoretical flow model of the tollbooth plaza to determine the effect of varying the number of toll lanes for a given number of highway lanes, and use this to predict optimal conditions.
The downside of the macroscopic approach is that traffic flow is not necessarily continuous, so approximations made in the model may not reflect reality. The best way to check them is to contrast them with real traffic data. As a result, we eventually construct a full microscopic approach to generate realistic data to test our continuous model: We design a cellular-automata simulation, constructed with an independent set of driver behaviors, to verify our macroscopic model. To represent the effect of unknown variables, we introduce a small random component to the simulation.

# Flow in the Plaza

# Initial Observations and Conservation of Flow

We begin by defining the flow $q$ of traffic as the number of cars per second that pass through a perpendicular cross-section $dx$ across all lanes of the highway. By definition, flow follows the equation

$$
q = \rho v,
$$

where $v$ is the average vehicle speed and $\rho$ is the average vehicle density.

Two bottlenecks limit the flow in every toll plaza: the first is at the tollbooths, caused by the time required for cars to stop and pay the toll, and the second occurs when the lanes exiting the tollgates merge back into the highway.

All traffic that enters the plaza must eventually exit the plaza. Treating the motion of vehicles through the plaza as a continuous flow of traffic, we can represent this fact with the following lemma.

Lemma. For any given cross-sectional slice $dx$ of the system,

$$
\int_ {\text {all time}} \left(q _ {\text {out}} - q _ {\text {in}}\right) d t = 0.
$$

Using the relation $q = \rho v$, we can arrive at the following relationship, following the method of Kuhne and Michalopoulos [2002, Ch. 5, 5-8]:

$$
\frac {\partial \rho}{\partial t} + \frac {\partial q}{\partial x} = 0. \tag {1}
$$

This is the conservation of traffic flow equation given by Kuhne and Michalopoulos [2002, Ch. 5, 5-8], among others.
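Equation (1) can be illustrated with a crude discretization. The sketch below is our illustration, not part of the paper, and all numbers in it are assumed: an upwind update of $\rho$ at constant $v$ on a ring road, under which the total car count is conserved exactly, as the lemma requires.

```python
v = 20.0             # average speed, m/s (assumed constant)
dx, dt = 10.0, 0.25  # cell size (m) and time step (s); v*dt/dx = 0.5 meets the CFL bound
N = 200              # 200 cells of 10 m: a 2 km ring road
rho = [0.1 if 80 <= i < 120 else 0.0 for i in range(N)]  # a 400 m platoon, 0.1 cars/m

cars_initial = sum(rho) * dx                      # 40 cars
for _ in range(1000):
    q = [r * v for r in rho]                      # q = rho * v
    rho = [rho[i] - dt / dx * (q[i] - q[i - 1])   # discrete rho_t + q_x = 0 (upwind)
           for i in range(N)]
cars_final = sum(rho) * dx
```

Because the cell fluxes telescope around the ring, `cars_final` equals `cars_initial` to machine precision; on an open road, the running difference between inflow and outflow is exactly the backlog counted next.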
# Bottlenecking Constraints on Flow

Bottlenecks along the highway restrict the maximum rate of traffic flow, as described in the following theorem:

Bottleneck Theorem. The flow of vehicles through any system void of sources and sinks is bounded by the minimum of the bottleneck capacities along the path.

Proof: Suppose that the flow at some point $dx$ along the highway is in excess of the minimum of all bottleneck capacities ahead of it, i.e.,

$$
q (x) > \min \{q _ {\text {bottleneck} _ {1}}, \dots , q _ {\text {bottleneck} _ {i}} \},
$$

where $i$ is the number of bottlenecks ahead of $dx$. Since in the steady-state model all points flow at the same rate, the flow through the tightest bottleneck would also equal $q(x)$, exceeding that bottleneck's capacity, which is a contradiction.

This result is used several times in the construction of our model.

# A Queueing Model Based on Flow

Since the rate of flow is constrained by the bottleneck of minimum capacity, it follows that the throughput is determined by the relative rates of the bottleneck at the tollbooths and at the point where the highway retreats back to its original size (i.e., the "merge point").

This observation reduces the problem to examining throughput solely at the endpoints of the "problem zone," without need to consider the behavior of traffic flow between those points. Thus, we can proceed simply by modeling the behavior of traffic at these two points.

# Calculating Backlogs

We find the number of cars at each of these bottlenecks at any given time, so as to determine when a backlog occurs. For an arbitrary segment of the $m$-lane section of the toll plaza, we integrate the conservation of flow equation (1) with respect to $x$ over the length of the road segment (with $m$ lanes). This gives the instantaneous number of vehicles within the segment, $B(t)$:

$$
B (t) = \int_ {x} m \rho (x, t) \, d x.
$$

From this we obtain the rate at which the backlog or pileup in the merge zone is growing:

$$
\frac {d B (t)}{d t} = \begin{cases} q _ {\text {arrival}} - q _ {\text {departure}}, & \text {if } q _ {\text {arrival}} > q _ {\text {departure}}; \\ 0, & \text {otherwise}, \end{cases} \tag {2}
$$

where $q_{\text{arrival}}$ is the rate (in cars/s) at which cars enter the segment and $q_{\text{departure}}$ is the rate at which they exit. It then follows that:

Theorem (Flow Equilibrium). To prevent congestion from building at a bottleneck (i.e., to keep $B(t) = 0$) while maintaining maximum system throughput, it must be that

$$
q _ {\text {arrival}} \leq q _ {\text {departure} _ {\max}},
$$

where $q_{\text{departure}_{\max}}$ is the bottleneck capacity and thus the maximum system throughput.

Proof: Consider the following three possible cases:

Case 1: Let $q_{\text{arrival}} > q_{\text{departure}_{\max}}$. Then by (2), the number of cars building up in the system will increase at a rate of $q_{\text{arrival}} - q_{\text{departure}} > 0$.

Case 2: Let $q_{\text{arrival}} < q_{\text{departure}_{\max}}$. By (2), the rate of increase in the number of cars in the system is 0, and system throughput is $q_{\text{arrival}}$.

Case 3: Let $q_{\text{arrival}} = q_{\text{departure}_{\max}}$. As in Case 2, a backlog does not grow, and the system throughput is $q_{\text{arrival}}$. Note, however, that $q_{\text{arrival}}$, and thus system throughput, is at a maximum while preventing congestion at the bottleneck; therefore, this is clearly the optimal case.
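Equation (2) can be exercised on a toy rush hour; the numbers below are ours and purely illustrative. Arrivals exceed a bottleneck's capacity for half an hour, the backlog grows at $q_{\text{arrival}} - q_{\text{departure}}$, and it drains only once arrivals fall back below capacity:

```python
q_max = 1.0   # bottleneck capacity, cars/s (assumed)
dt = 1.0      # time step, s
B = 0.0       # backlog, cars
peak = 0.0
for t in range(3600):
    q_arrival = 1.2 if t < 1800 else 0.6  # half-hour surge, then light traffic
    # A bottleneck with queued cars departs at capacity; otherwise it passes arrivals.
    q_departure = q_max if (B > 0 or q_arrival >= q_max) else q_arrival
    B = max(0.0, B + dt * (q_arrival - q_departure))  # Eq. (2)
    peak = max(peak, B)
```

The backlog peaks at $(1.2 - 1.0) \times 1800 = 360$ cars when the surge ends and takes $360 / 0.4 = 900$ s to clear, illustrating Case 1 of the theorem and the recovery once $q_{\text{arrival}}$ drops below the bottleneck capacity.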
# Applications to Toll Plaza System

Adapting this general result to our model, we define

$q_{\text{in}_i}$ to be the flow in cars/second entering the system in lane $i$,

$q_{\text{toll}_i}$ to be the flow or turnover rate of tollbooth $i$, and

$q_{\text{out}_i}$ to be the flow leaving the system (at or after the merge point) in lane $i$.

Our model considers the interaction between the two bottlenecks. Upon investigation, two observations become apparent:

1. The maximum flow $q_{\max}$ through a cross-sectional slice $dx$ of the highway is independent of the road structure before that point. In other words, the maximum flow capacity is fixed solely by the number of lanes at that point and not by the number of lanes merging or diverging into it.

2. The only cross-sectional slice $dx$ at which the maximal flow can be varied (by the model) is at the tollbooths; this is done by changing the number of tollbooths, which directly results in a change in the number of cars that can be processed per unit time.

With this in mind, we apply the Flow Equilibrium Theorem. We divide the system into two segments, the first from $(-\infty, x_{\text{tolls}})$ and the second from $(x_{\text{tolls}}, x_{\text{merge}})$, where $x_{\text{tolls}}$ is the point $x$ along the highway where the tollbooths are and $x_{\text{merge}}$ is the point where the $m$ lanes of the toll plaza merge into the $n$ lanes of the highway.

The $q_{\text{arrival}}$ of the first segment (into the tollbooths) is simply $q_{\text{in}}$, and $q_{\text{departure}} = q_{\text{tolls}}$. For the second segment, $q_{\text{arrival}} = q_{\text{tolls}}$ and $q_{\text{departure}} = q_{\text{out}}$.

Since $q_{\text{tolls}} = m / \tau$, only the number of tollbooths and their individual turnover rates $\tau$ determine the flow entering the merge zone.
By observation (1), we note that the bottleneck capacity of the merge point, $q_{\mathrm{out}_{\max}}$, is independent of $m$ and $q_{\mathrm{tolls}}$; it is merely a property of an $n$-lane highway.

We are therefore interested in how $q_{\mathrm{tolls}}$ affects the flow of cars through the merge zone. By (2), when $q_{\mathrm{tolls}} > q_{\mathrm{out}}$ the backlog increases at a rate of

$$
\frac{dB(t)}{dt} = q_{\mathrm{tolls}} - q_{\mathrm{out}}.
$$

The backlog continues to grow until the entire merge zone is filled, and then it spills out into the segment before the tolls. This buildup does not fully dissipate until $q_{\mathrm{in}}$ falls below $q_{\mathrm{out}_{\max}}$, or in other words, until the incoming flow rate is below the capacity of the tightest bottleneck (such as at the end of rush hour).

To prevent this effect, let $q_{\mathrm{tolls}} \leq q_{\mathrm{out}}$, which allows traffic to flow through the merge zone without causing backlog. However, when $q_{\mathrm{tolls}} < q_{\mathrm{out}}$, the merge point is not operating at maximum flow; therefore, letting $q_{\mathrm{tolls}} = q_{\mathrm{out}}$ is optimal. Surprisingly, however, we show later that this is actually a lower bound on $q_{\mathrm{tolls}}$.

From this result, we get

$$
q_{\mathrm{out}} = n\, q_{\mathrm{out}_{\max}} = q_{\mathrm{tolls}} = \frac{m^{*}}{\tau},
$$

where $m^{*}$ is the optimal number of tollbooths for an $n$-lane highway. Solving for $m^{*}$, we get

$$
\boxed{m^{*} = n \tau q_{\mathrm{out}_{\max}}}.
$$

# Performance When the Number of Tollbooths Exceeds $m^*$

We now consider toll plaza performance when the number of tollbooths $m$ exceeds the predicted optimum $m^*$. This investigation is necessary.
For example, if our model were to predict $m^*$ slightly above the actual value, a backlog would build within the merge zone, but it might build so slowly that by the time its size became critical, the rush-hour mass of vehicles would already have dissipated.

As previously shown, when $m > m^{*}$ and $q_{\mathrm{in}} > q_{\mathrm{tolls}_{\max}}$, a backlog builds in the merge zone at a rate of $(q_{\mathrm{tolls}_{\max}} - q_{\mathrm{out}})$. Until the merge zone fills completely with vehicles (when vehicle density is at a maximum), the tollbooths continue to process vehicles at their maximum rate, $q_{\mathrm{tolls}_{\max}} = m/\tau$. In this case, the backlog at the tolls grows at a rate of $(q_{\mathrm{in}} - q_{\mathrm{tolls}_{\max}})$.

As a result, the effective backlog growth is the sum of the backlog growth rates at each bottleneck:

$$
\begin{aligned}
\frac{d}{dt}\,\mathrm{backlog}_{\mathrm{effective}} &= \frac{d}{dt}\,\mathrm{backlog}_{\mathrm{tolls}} + \frac{d}{dt}\,\mathrm{backlog}_{\mathrm{merge}} \\
&= \left(q_{\mathrm{in}} - q_{\mathrm{tolls}_{\max}}\right) + \left(q_{\mathrm{tolls}_{\max}} - q_{\mathrm{out}}\right) \\
&= q_{\mathrm{in}} - q_{\mathrm{out}}.
\end{aligned}
$$

Interestingly, this result implies that the total backlog of the system depends entirely on the rate at which vehicles enter and the maximum rate at which they can leave (i.e., the capacity of the tightest bottleneck). Therefore, as long as the tollbooths do not limit the total flow capacity of the system, the exact rate at which the tollbooths process vehicles does not affect the system flow. This line of reasoning leads us to the following theorem:

Theorem (Lower bound on $m^*$).
The optimal number of tollbooths for an arbitrary $n$-lane highway is greater than or equal to $m^* = n\tau q_{\mathrm{out}_{\max}}$.

# Maximum Flow on an $n$-Lane Highway

Garber and Hoel [1999] observe empirically an inverse relationship between $\rho$ and $v$, and several models have been proposed to describe this behavior. It is generally accepted that any such model must exhibit the following properties:

- Flow is zero when density is zero.
- As density increases to some critical value, so does flow.
- Past this critical density, the flow begins to decrease.
- Flow cannot decrease beyond some minimum value.

Let $v_{\max}$ be the maximum velocity of cars traveling freely on the highway (generally the speed limit), and let $\rho_{\max}$ be the maximum (i.e., jam-packed) number of cars per unit area of highway (a constant).

One of the more popular models, proposed by Greenshields [Garber and Hoel 1999], establishes a linear relationship between the two:

$$
v = v_{\max} \left(1 - \frac{\rho}{\rho_{\max}}\right) \quad \Rightarrow \quad q = \rho v_{\max} \left(1 - \frac{\rho}{\rho_{\max}}\right).
$$

A second popular model, introduced by Greenberg [Garber and Hoel 1999], proposes a logarithmic relationship:

$$
v = v_{\max} \ln\left(\frac{\rho_{\max}}{\rho}\right) \quad \Rightarrow \quad q = \rho v_{\max} \ln\left(\frac{\rho_{\max}}{\rho}\right). \tag{3}
$$

While accurate in certain cases, these models do not seem to represent effectively the motion of cars through toll plazas, because they both model the flow as zero when density has reached its maximum.
Although density will tend to some maximum, flow will asymptotically approach but never reach zero, since some number of cars will still flow out of the system over a long enough time interval. This discrepancy is a direct result of the limitations inherent in treating traffic as a continuous flow. Determining the precise relationship between $\rho$ and $v$ is a relatively complex modeling task beyond the scope of this paper, so we accept Greenberg's model with the restriction that if $\rho = \rho_{\max}$, flow approaches some low constant value instead.

To determine a rough estimate of the maximum flow through the merge point (or any $n$-lane highway, for that matter), we use Greenberg's model. To find the maximum flow $q_{\max}$, we differentiate (3) and set the derivative to zero:

$$
\frac{d}{d\rho}\, q(\rho) = v_{\max} \ln\left(\frac{\rho_{\max}}{\rho}\right) - v_{\max} = 0.
$$

Solving for $\rho$, we get $\rho = \rho_{\max}/e$. Therefore, flow is maximized at density $\rho_{\max}/e$, which gives

$$
q_{\max} = \frac{\rho_{\max} v_{\max}}{e}.
$$

# Streamlined Flow Model

We assumed in the previous section that flow is distributed uniformly among all lanes, that is, that an equal number of cars pass through each lane per second. However, in real toll plaza systems there are conditions when this does not apply. For example, on New Jersey's Garden State Parkway, users are restricted to movement between sets of individual highway lanes, before and after the tollbooths, by lane dividers. As a result, sets of lanes operate independently of each other. Moreover, one or more lanes may be reserved for low-speed vehicles (recreational vehicles or large trucks) or high-speed traffic (electronic toll collection, motorcycles, buses, or carpools).

To generalize our model, we relax this assumption to account for varying flows between lanes of traffic.
We divide the total flow through the tollbooths, and the total outgoing flow through the exit, into individual lane flows, so that

$$
q_{\mathrm{tolls}} = \sum_{i=1}^{m} q_{\mathrm{tolls}_i}, \qquad q_{\mathrm{out}} = \sum_{j=1}^{n} q_{\mathrm{exit}_j}.
$$

As observed in the Bottleneck Theorem, no lane can flow faster than $q_{\max}$. However, by conservation of traffic flow (1), total traffic flow through the system remains the same, even though streams of traffic may move at different rates.

We must also consider what happens when lanes merge towards the exit of the toll plaza. Whereas in the queueing theory model we needed to consider the flow at only two points, $q_{\mathrm{exit}}$ and $q_{\mathrm{toll}}$, we must now also consider the flow rate at every point where two or more lanes merge into one.

The flow rate at a merge point can never exceed $q_{\max}$, the maximum flow for a single lane. According to the results derived earlier in the basic model, the rate of flow through each lane equals the rate of the slowest bottleneck ahead. However, it is the combined rate of the two merging lanes, not each individual lane, that exceeds the lane bottleneck, so naively setting $q_{\mathrm{in}} = q_{\max}$ would incorrectly increase the rate of each premerged lane to match the bottleneck.

As a result, each lane can contribute at most a reduced quantity such that the sum of the two lanes equals the bottleneck capacity. A simple way to represent this behavior is to let each lane contribute a proportion of flow relative to their combined size:

$$
q_{1_{\mathrm{reduced}}} = q_{\max} \frac{q_1}{q_1 + q_2}, \qquad q_{2_{\mathrm{reduced}}} = q_{\max} \frac{q_2}{q_1 + q_2}.
$$

Thus, when both flows would normally overfill the lane into which they merge, each lane's contribution to that lane's flow, $q_{\mathrm{out}}$, is proportional to its share of the total flow present, without ever exceeding $q_{\max}$.

This observation lends itself to a simple recursive function for modeling toll plaza traffic as an aggregate of independent flows. Predictions of system behavior after the introduction of electronic toll collection lanes and other flow-monitored lanes are therefore significantly simplified: apply this model at every step of the merging process (i.e., first merge each pair of lanes, then merge the resulting pairs, and so on).

# Simulation

# Motivations for a Discrete Simulation

To validate the continuous model, we create a discrete simulation using cellular automata to generate traffic behavior. Whereas our earlier models approximate car flux as continuous, the cellular automata simulation treats individual vehicles as distinct entities that behave according to well-defined simple rules. Since the discrete model is based on an independent set of intuitions about the system and how it behaves, any agreement between the two models will suggest a high degree of accuracy in our modeling efforts.

# Overview

The simulation runs on a two-dimensional grid of points, each of which corresponds to a width and length slightly greater than the average car size. The simulation takes in parameters that determine the geometry of the toll plaza as well as a probability $p$ that a car enters the toll plaza in a given lane. When populating the grid, each cell can be one of four types and behaves according to its corresponding set of rules:

Free: a transient placeholder when a cell is unoccupied.

Barrier: a boundary point of the toll plaza; this cell never changes.

Toll block: a tollbooth.

Car: a cell occupied by a vehicle.
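The four cell types above can be represented concretely; the sketch below is our own minimal illustration (the type names and layout function are assumptions, since the paper does not give implementation details):

```python
from enum import Enum

class Cell(Enum):
    FREE = "."     # transient placeholder: the cell is unoccupied
    BARRIER = "#"  # boundary point of the toll plaza; never changes
    TOLL = "T"     # a tollbooth block
    CAR = "C"      # a cell occupied by a vehicle

def plaza_grid(length, lanes, toll_column):
    """Build an empty plaza: `lanes` rows of FREE cells bounded above
    and below by BARRIER rows, with a TOLL block in each lane at
    `toll_column` (a hypothetical geometry for illustration)."""
    grid = [[Cell.BARRIER] * length]
    for _ in range(lanes):
        row = [Cell.FREE] * length
        row[toll_column] = Cell.TOLL
        grid.append(row)
    grid.append([Cell.BARRIER] * length)
    return grid
```

Cars would then occupy FREE cells and evolve under the rules of the next section.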
# Rules of State Evolution

The rules that create the next generation of cells (the next state of the grid) from time $t$ to $t + 1$ are:

- All cars travel at a constant speed of 1 forward cell per time step.
- A car can change lanes and move forward on the next step if there is an open adjacent cell to its side and an open cell along the appropriate diagonal.
- A car can stay in the same lane if the cell ahead is free, if the car in front of it moves forward on that step, or if it is in front of the toll and its toll delay has expired.
- As the cells at the entry of the toll plaza are freed by the evolution of the grid, new cars arrive in them at each time step with probability $p$, the density of incoming cars.

# Stochastic State Evolution

We use random variables to make the system nondeterministic. This more closely represents real-world conditions, in which the exact path of a car is unpredictable, and it also attempts to account for the wide range of parameters relating to driver psychology, variations in vehicles, and variations in service time in paying the toll. To generate this effect, we implement the following rules:

1. Each toll processes a car at a random rate each time, with the distribution of service times centered at $\tau = 3$:

$$
p(x) = \left\{ \begin{array}{ll} 0.25, & x \in \{2, 4\}; \\ 0.5, & x = 3. \end{array} \right.
$$

2. Cars switch lanes at random with some assigned probability, but their decision is influenced by the desirability of the target lane.
3. The arrival of cars into a lane is a Poisson process with rate $\lambda$, a parameter of the simulation.

# Reporting

We run the simulation for 1,000 time steps. It returns total car throughput, total waiting time, average waiting time, total transit time, and the density of cars in each section of the system (i.e., before, within, and after the toll plaza).

# Sample simulation run

Cars are released at $t = 0$ and proceed toward the tollbooths.
Upon entering the diverge zone, cars change lanes and spread out to minimize their total wait time at the tolls.

Depending on the number $n$ of highway lanes and the number $m$ of tollbooths, traffic eventually reaches an equilibrium flow, or else flow is reduced by the bottlenecks and backlog begins to grow. In our figures, the color of a vehicle designates the amount of time it has spent waiting in the system, with bright green the least, dark green moderate, and bright red the most.

The model supports varying traffic density over time; we ran our simulations with arrival densities of $15\%$, $50\%$, and $85\%$.

# Simulation Sample Images

Simulation sample activity is illustrated in Figure 1.

![](images/a316c32b64b4ff7786fb2baf60c2138d0400f8c1f5633eb7491e4744b00e2c21.jpg)

![](images/a0d5ce243c5d2a9774143bada3b0c3d5c8c7883f643433824c5c289a5cc2d0d2.jpg)
Figure 1. Simulation at step 1000 for $n = 5$.

# Running the Simulation

We ran the simulation for $t = 1000$ timesteps, with $\tau = 3$ steps, for all possible tollway sizes from $n = 1$ to $n = 8$, and for $m = n$ to $4n$ (for small $n$) or to $3n$ (for large $n$). For each highway size, we ran the simulation at $85\%$, $50\%$, and $15\%$ density. We repeated this process five times for the sake of statistical significance. For a timestep of 1 s, a single run of the simulation corresponds to about 17 min of traffic.

For each set of conditions, we track the total throughput of cars. We then compare the throughput achieved using $m$ tollbooth lanes on a given $n$-lane tollway over the entire 1000-s period.

# Results and Analysis

# Model Predictions vs. Simulation Results

To compare further the accuracy of our theoretical model and our discrete simulation, we compare experimental data collected from the simulation with our model's predictions for the corresponding number of highway lanes $n$. To do this, we must make the following parameter assumptions.
- The average vehicle length plus its separation distance from the vehicle ahead of it is approximately 15 ft.
- At high density, vehicles in the merge zone travel on average at 15 mph.
- In correspondence with the simulation, the average processing time at the tolls is $\tau = 3$ s/car.

Computing the model's predictions based on these parameters and running the simulation for one to seven highway lanes yields the results in Table 1. The values for $m^*$ are the minimum values for which adding further lanes does not alter performance significantly. [EDITOR'S NOTE: The experimental graphs used to derive these values are omitted here.]

Table 1. Experimentally observed optimal number of lanes for various traffic densities vs. predicted optimal number $\lfloor m^{*}\rfloor$ of lanes. There is a rough one-to-four relationship between the number of highway lanes and the optimal number of tollbooths.

| Highway lanes | $\rho = .15$ | $\rho = .50$ | $\rho = .85$ | $\lfloor m^{*}\rfloor$ |
|:---:|:---:|:---:|:---:|:---:|
| 1 | 1 | 4 | 5 | 4 |
| 2 | 2 | 6 | 10 | 8 |
| 3 | 3 | 9 | 13 | 12 |
| 4 | 4 | 10 | 16 | 16 |
| 5 | 5 | 13 | 18 | 20 |
| 6 | 6 | 14 | 21 | 24 |
| 7 | 7 | 16 | 25 | 28 |

![](images/472c4862e10d495edff629920025cd2111c166e7bd09f8d48a491ed30bcc31e8.jpg)
Figure 3. Observed optimal number $m^{*}$ of toll lanes for given number $n$ of highway lanes.

Figure 3 shows that our flow model is validated as an accurate long-term predictor of traffic behavior in high-density scenarios ($p = .85$). We believe that the difference between the continuous model's prediction and the value observed in the simulation stems from uncertainty in the value of the parameter $q_{\max}$. Our prediction of $q_{\max} = 4$ was accurate only in the high-density case. In the low-density case, we almost never experience the conditions caused by $q_{\max}$ (clogging at the merge points), and so our model does not apply. The continuous flow model does not apply when traffic has significant variations in speed, so that density cannot be treated as a regular flow.

Our flow model accurately predicts the observed optimum to within 3 lanes for high-density traffic. From this, we see a high level of agreement between the two models, even though they operate on completely independent assumptions. These results suggest that our model predicts the optimal number of tollbooths for an arbitrary $n$-lane tollway fairly accurately.

# Total Throughput vs. Average Wait Time

For every simulation that we ran, whenever throughput is higher or lower for a given combination $(m, n, \rho)$, average transit time is correspondingly higher or lower. We conclude that total throughput and average wait time are highly correlated. As a result, optimizing either of them results in good performance relative to the other criterion.

# Accuracy of Simulation

# Sensitivity of Parameters

We examine the effect of changing parameters. Altering the processing time $\tau$, the length of the tollway, the length of the merging area, and the probability distribution for random behavior all affect the absolute throughputs achieved for different numbers of tollbooths, but they do not affect the optimal number of lanes.

The only parameter with a significant effect on the optimal number of lanes is the length of the merging area, when set to an extremely low value, so that cars cannot switch lanes in time to utilize all of the lanes. However, this condition contradicts an initial assumption of our model and is unlikely to occur in the real world.

Our simulation is therefore very robust with respect to parameter variation.

# Faithfulness to Real-World Behavior

We use the cellular automata simulation to validate the effectiveness of the flow model; however, this verification is only accurate to the extent that the cellular automata are a realistic description of real-world traffic flow through tollbooths.
To validate our simulation, we examine real-world flow rates of the Union toll plaza of the Garden State Parkway at several peak flow times, where $n = 5$ and $m = 13$ [New Jersey Institute of Technology 2001, 11]. Examining the seven hours of data, we arrive at a throughput of 2393 cars/hr; our model predicts 2530 cars/hr. Our simulation matches the empirical results surprisingly well for peak density.

According to our simulation, the Garden State Parkway's performance is fairly suboptimal. The best results, according to the simulation, are obtained for $m = 20$ tollbooth lanes, enabling an increase in average throughput by almost a factor of two.

# Extensions of Base Model

# Electronic Toll Collection

Under electronic fare payment systems such as FasTrak and E-ZPass, drivers attach an electronic device to their vehicles, which is scanned automatically as they pass through a special tollbooth lane with little or no reduction in speed.

Both our model and simulation can analyze the inclusion of special "fastlanes." In the streamlined flow model, fastlanes are simply lanes with a much higher rate of flow $q_{\mathrm{toll}_i}$ through the tollbooth. Since congestion still occurs later as a result of the narrow bottleneck caused by merges and $q_{\max}$, fast progress may still be impeded by slow merging. This possibility explains the common practice of running separate fastlane toll lanes alongside the outside of the toll plaza, so that merging happens far enough down the road.

Because merge rates are proportional to each lane's share of the combined merging flow, capped by the maximum possible rate, we have

$$
q_{\text{fastlane at merge point}} = q_{\max} \left(\frac{q_{\text{fastlane}}}{q_{\text{fastlane}} + q_{\text{other}}}\right).
$$

Since $q_{\text{fastlane}}$ is potentially much greater than $q_{\text{other}}$, cars in the fastlane flow at a rate close to the maximum.
As a result, users who choose a fastlane still move through the toll plaza faster than other cars, even when forced to merge with slower traffic. The greater the use of fastlanes, the higher the overall average throughput, and the recommended number of toll lanes for regular use can drop.

# Final Recommendations

- Our model predicts that the findings in Table 1 provide the best results for high-density situations. For traffic density at or above $85\%$ of the maximum bumper-to-bumper density, our model should be used. Lanes can be closed when density is lower.
- Our model provides a lower bound on the recommended number of tollbooth lanes. Running more tollbooth lanes than the optimal predicted value does not hinder throughput.
- The case $m = n$ suffices exactly when a road has consistently low-density traffic. For medium- and high-density traffic, this case causes suboptimal performance.

# Model Assessment

# Model Strengths

- The discrete and continuous models agree well at peak densities.
- We generate plausible traffic behavior through partially random behavior in our models. In particular, our models match well the effects we observe in the Union toll plaza of the Garden State Parkway.
- Our model can scale successfully to represent the impact of electronic toll-taking and variable tollbooth speeds.

# Model Weaknesses

- The primary shortcoming of the theoretical model is the assumption that car flux is continuous. Fractional values of car flux do not realistically represent low-density traffic.
- Our model is too sensitive to variations in the average amount of time to process a car at the tolls.
- The simulation accounts for many unknown factors with random choices. Validation of our model is accurate only insofar as this randomness accurately reflects driver behavior.
- We do not consider cost as a component of our solution.

# References

Garber, Nicholas J., and Lester A. Hoel. 1999. Traffic and Highway Engineering.
Pacific Grove, CA: Brooks/Cole.
Jiang, Rui, and Qing-Song Wu. 2003. Cellular automata for synchronized traffic flow. Journal of Physics A: Mathematical and General.
Kühne, R.D., and Panos Michalopoulos. 2002. Revised Monograph on Traffic Flow Theory: Flow Models. Turner-Fairbank Highway Research Center.
Mihaylova, Lyudmila, and René Boel. 2003. Hybrid stochastic framework for freeway traffic flow modeling. ACM Proceedings of the 1st International Symposium on Information and Communication Technologies.
New Jersey Institute of Technology. 2001. Ten Year Plan to Remove Toll Barriers on the Garden State Parkway.
Rastorfer, Robert L., Jr. 2004. Toll plaza concepts. ASCE Fall Conference, Houston, TX.

![](images/b689504b0a26f6d6c998a56d9d811200886067e9fb1112466a92ebf72d499e7a.jpg)

L. Craig Evans (advisor), Anand Kulkarni, Ephrat Bitton, and Mark Shlimovich.

# For Whom the Booth Tolls

Brian Camley

Bradley Klingenberg

Pascal Getreuer

University of Colorado

Boulder, CO

Advisor: Anne Dougherty

# Summary

We model traffic near a toll plaza with a combination of queueing theory and cellular automata in order to determine the optimum number of tollbooths. We assume that cars arrive at the toll plaza in a Poisson process and that the probability of leaving a tollbooth is memoryless. This allows us to describe completely and analytically the accumulation of cars waiting for open tollbooths as an $\mathrm{M}|\mathrm{M}|n$ queue. We then use a modified Nagel-Schreckenberg (NS) cellular automaton scheme to model both the cars waiting for tollbooths and the cars merging onto the highway. The models offer results that are strikingly consistent, which serves to validate the conclusions drawn from the simulation.

We use our NS model to measure the average wait time at the toll plaza. From this we demonstrate a general method for choosing the number of tollbooths to minimize the wait time.
For a 2-lane highway, the optimal number of booths is 4; for a 3-lane highway, it is 6. For larger numbers of lanes, the result depends on the arrival rate of the traffic. + +The consistency of our model with a variety of theory and experiment suggests that it is accurate and robust. There is a high degree of agreement between the queueing theory results and the corresponding NS results. Special cases of our NS results are confirmed by empirical data from the literature. In addition, changing the distribution of the tollbooth wait time and changing the probability of random braking does not significantly alter the recommendations. This presents a compelling validation of our models and general approach. + +# Introduction + +A toll plaza creates slowdowns in two ways: + +- If there are not enough tollbooths, queues form. +- If there are too many tollbooths, a traffic jam ensues when cars merge back onto the narrower highway. + +We use queueing theory to predict how long vehicles will have to wait before they can be served by a tollbooth. Using cellular automata to model individual cars, we confirm this prediction of wait time. This vehicle-level model is used to predict how traffic merges after leaving the toll plaza. + +# Initial Assumptions + +- The optimal system minimizes average wait time. We do not consider the cost of operating tollbooths. +- Cars arrive at the toll plaza uniformly in time (the interarrival distribution is exponential with rate $\lambda$ ). We can consider rush hour by varying the arrival rate $\lambda$ . +- Cars have a wait time at the tollbooth that is memoryless (exponential distribution with rate $\mu$ ). This assumption is confirmed by the study of tollbooths by Hare [1963]. +- Cars are indistinguishable. All cars have the same length and the same maximum speed. +- The toll plazas are not near on-ramps or exits. We do not consider the possibility of additional cars merging, only those that were already on the main road. 
- Two-way highways are equivalent to two independent highways. We consider only divided highways.

# Delays Due to Too Few Tollbooths

# Tollbooths As an $\mathbf{M}|\mathbf{M}|n$ Queue

As a vehicle approaches the toll plaza, it has a choice of $n$ tollbooths for service. Cars tend toward the shortest queue available. We simplify this behavior by supposing that all vehicles form a single queue and that the next car in line enters a tollbooth as soon as one of the $n$ booths becomes available. A real system would be less efficient, and therefore we expect longer times in a more detailed simulation.

We assume that vehicles arrive uniformly distributed in time. We additionally suppose that the length of service time is exponentially distributed, as in Hare [1963]. This class of model is called a memoryless-arrivals, memoryless-service-times, $n$-server or "M|M|n" queue (Figure 1).

![](images/6dcdef4b5526f152f8713b0460063dd60affb6382254a4f54e9f2c55ac58a226.jpg)
Figure 1. The $\mathrm{M}|\mathrm{M}|n$ queue. Vehicles arrive at rate $\lambda$ and are serviced at rate $\mu$.

We define $X(t)$ as the number of vehicles either in the queue or at a tollbooth at time $t$. We also define the stationary probabilities $p_k$ such that, in steady state, the probability that the queue has length $k$ is $p_k$. From the input-output relationship of the $\mathbf{M}|\mathbf{M}|n$ queue, the stationary probabilities must satisfy

$$
0 = -\lambda p_{0} + \mu p_{1};
$$

$$
0 = \lambda p_{k-1} - (\lambda + k\mu) p_{k} + (k+1)\mu p_{k+1}, \qquad k = 1, \ldots, n-1;
$$

$$
0 = \lambda p_{k-1} - (\lambda + n\mu) p_{k} + n\mu p_{k+1}, \qquad k = n, n+1, \ldots.
$$

The solution to this system is [Medhi 2003]:

$$
p_{0} = \left[ \sum_{j=0}^{n-1} \frac{\rho^{j}}{j!} + \frac{\rho^{n}}{n!
(1 - \frac{\rho}{n})} \right]^{-1}, \qquad p_{k} = \left\{ \begin{array}{ll} \frac{\rho^{k}}{k!}\, p_{0}, & k = 0, \ldots, n; \\ \left(\frac{\rho}{n}\right)^{k-n} p_{n}, & k = n+1, n+2, \ldots, \end{array} \right.
$$

where $\rho = \lambda/\mu$.

Let the random variable $W$ be the time that a vehicle spends in the system (time in the queue plus time in the tollbooth). From Medhi [2003], the density and expected value of $W$ are

$$
f_{W}(w) = \sum_{k=0}^{n-1} \frac{(\lambda/\mu)^{k}}{k!}\, p_{0}\, \mu \mathrm{e}^{-\mu w} + \frac{(\lambda/\mu)^{n}}{n!}\, p_{0}\, \frac{n\mu^{2}}{(n-1)\mu - \lambda} \left(\mathrm{e}^{-\mu w} - \mathrm{e}^{-(1 - \rho/n) n\mu w}\right),
$$

$$
E[W] = \frac{1}{\mu} + \frac{p_{n}}{n\mu \left(1 - \rho/n\right)^{2}}.
$$

This result describes the first part of the general problem: how the cars line up depending on the number $n$ of tollbooths.

To model the traffic merging after the tollbooths, it is important to describe how vehicles leave the $\mathbf{M}|\mathbf{M}|n$ queue. For an $\mathbf{M}|\mathbf{M}|n$ queue, the interdeparture times of the output of the queue are exponentially distributed with rate $\lambda$, and the output process has the same distribution as the input process. Because of the memoryless nature of the queue, interdeparture intervals are mutually independent (see Medhi [2003] or Bocharov et al. [2004] for proofs of these statements).

We define $D$ as the number of cars departing the tollbooths during an interval $\Delta t$. Then the probability that $d$ cars leave in that time is

$$
P(D = d) = \frac{\mathrm{e}^{-\lambda \Delta t} (\lambda \Delta t)^{d}}{d!},
$$

where $\lambda$ is the mean number of cars that arrive at the toll plaza per time step.

The $\mathrm{M}|\mathrm{M}|n$ queue provides a simple and well-developed model of the tollbooth plaza.
In particular, the average wait time and the output process are known, allowing us to verify simulation results.

# Limitations of the $\mathbf{M}|\mathbf{M}|n$ Queue

Though useful, the $\mathrm{M}|\mathrm{M}|n$ queue is incomplete and oversimplifies the problem. Even though it allows us to find the distribution of departures simply, its assumptions prevent it from being a complete solution. By using a single-queue model, we assume that any car can go to any open server. This is overly optimistic, especially when the density is high; we would expect our predictions to be more valid for low density. Perhaps most importantly, the $\mathrm{M}|\mathrm{M}|n$ queue simulates only half of the problem: the waiting times due to backups in front of the tollbooths.

# Modeling Traffic with Cellular Automata

# Overview

The complex system of traffic can be modeled by the simple rules of automata. We use cellular automata to model the traffic flow on a "microscopic" scale. In this scheme, we discretize space and time and introduce cars that each behave according to a small set of rules.

Cellular automata are well suited to simulating our specific problem, since there are a large number of individual vehicles in the toll plaza, all of which are interacting. Continuous or macroscopic models could not capture this interaction and its role in causing jams that spontaneously form both before and after the toll plaza.

We first create a one-lane highway model and then add a delay for the time to pay the toll. Since a one-lane simulation allows no passing, cars accumulate behind a stopped car, creating a queue. We then extend this model to a multiple-lane system, and then to a multiple-lane system in which the number of lanes is not constant, that is, where the road enters or leaves a toll plaza.
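Before turning to the automaton, the queueing quantities above are easy to evaluate numerically. The sketch below computes the expected time in an M|M|n system from the standard stationary probabilities, with $\rho = \lambda/\mu$ and utilization $\rho/n$ (the function name is our own; this is an illustration, not the authors' code). For $n = 1$ it reduces to the familiar M|M|1 value $1/(\mu - \lambda)$.

```python
from math import factorial

def mmn_expected_time(lam, mu, n):
    """Expected time in an M|M|n system (queue plus service):
    E[W] = 1/mu + p_n / (n * mu * (1 - rho/n)^2), with rho = lam/mu."""
    rho = lam / mu
    assert rho < n, "queue is unstable unless lambda < n * mu"
    # Stationary probability that the system is empty.
    p0 = 1.0 / (sum(rho**j / factorial(j) for j in range(n))
                + rho**n / (factorial(n) * (1 - rho / n)))
    # Probability that all n servers are busy with no one waiting.
    pn = rho**n / factorial(n) * p0
    return 1 / mu + pn / (n * mu * (1 - rho / n) ** 2)
```

For a fixed arrival rate, adding booths shortens the expected time, which is the quantity the simulation measures directly.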

# Single-lane Nagel-Schreckenberg Traffic

Most automata used to simulate traffic are generalizations of the Nagel-Schreckenberg (NS) cellular automaton model [Chowdhury et al. 2000]. The NS model is a standard tool used to simulate traffic flow and has been shown to correspond to empirical results [Brilon et al. 1991; Chowdhury et al. 2000; Gray and Griffeath 2001; Knospe et al. 2004; Rickert et al. 1996; Schreckenberg et al. 1995].

We use this automaton to create a numerical model to confirm the queueing theory predictions.

In the NS model, a car is represented by an integer position $x_{n}$ and an integer speed $v_{n}$. The vehicles are moved deterministically by their velocities, $x_{n} \rightarrow x_{n} + v_{n}$. The system evolves by applying the following procedure (Figure 2) simultaneously to all $(x_{n}, v_{n})$.

# NS Algorithm

![](images/a3cf6becca037a9036fae168cd0257b7c71403b25c9e433bf6bcb3b5a2ea4b9b.jpg)
Figure 2. Rules of the NS algorithm.

1. Acceleration. If the vehicle can speed up without exceeding the speed limit $v_{\mathrm{max}}$, it adds one to its speed, $v_{n} \rightarrow v_{n} + 1$. Otherwise, the vehicle keeps a constant speed, $v_{n} \rightarrow v_{n}$.
2. Collision prevention. If the distance $d_{n}$ between the vehicle and the car ahead of it is less than or equal to $v_{n}$, that is, if the $n$th vehicle would collide unless it slows down, then $v_{n} \rightarrow d_{n} - 1$.
3. Random slowing. Vehicles often slow for nontraffic reasons (cell phones, coffee mugs, even laptops), and drivers occasionally make irrational choices. With some probability $p_{\text{brake}}$, we have $v_n \rightarrow v_n - 1$, presuming $v_n > 0$.

We choose the cell size to be $7.5\mathrm{m}$ to match Nagel and many others [Brilon et al. 1991; Chowdhury et al. 2000]. Since a typical maximum speed for cars is $30 - 35\mathrm{m/s}$, choosing $v_{\mathrm{max}} = 5$ makes a single time step close to $1\mathrm{s}$.
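One synchronous update of rules 1-3, followed by the movement step, can be sketched as follows (a simplified single-lane implementation on a periodic ring, not the authors' code; distances are measured cell-to-cell, so a vehicle may advance at most $d_n - 1$ cells):

```python
import random

def ns_step(positions, speeds, road_len, v_max=5, p_brake=0.25,
            rng=random.random):
    """One parallel NS update on a periodic single-lane road.

    positions: sorted cell indices; speeds[i] belongs to positions[i].
    (For repeated calls, rotate/sort the two lists together after wrap.)"""
    m = len(positions)
    new_speeds = []
    for i in range(m):
        # distance d_n to the car ahead, wrapping around the ring
        d = (positions[(i + 1) % m] - positions[i]) % road_len
        if d == 0:
            d = road_len                   # only one car on the road
        v = min(speeds[i] + 1, v_max)      # 1. acceleration
        v = min(v, d - 1)                  # 2. collision prevention
        if v > 0 and rng() < p_brake:      # 3. random slowing
            v -= 1
        new_speeds.append(v)
    # deterministic movement, x_n -> x_n + v_n
    new_positions = [(x + v) % road_len for x, v in zip(positions, new_speeds)]
    return new_positions, new_speeds
```

With `p_brake = 0` the update is deterministic, which is what makes the cruise-flux comparison in the next section possible.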
We also use periodic boundary conditions for simplicity. (We later abandon these boundary conditions, since open boundary conditions—a Poisson generator and a sink—are consistent with the $\mathbf{M}|\mathbf{M}|n$ model.) In addition, research results indicate that the periodic boundary may oversimplify the distribution of vehicles [Yang et al. 2004].

We apply the above algorithm to a random initial state with a given density. This state is created by assigning each cell a probability of occupation $c$, which is the vehicle density parameter. This matches Gray and Griffeath's approach [2001].

The NS model produces results similar to those cited as typical by Chowdhury et al. [2000]. In Figure 3, the state of the system at time step $i$ is drawn in the $i$th column, with a white pixel where there is a vehicle and a black pixel for open space. Both images show generally smooth flow interrupted by congestion.

![](images/459cfe69853b6193bdb46ad713c142d1ef31696cca1cf33fa4c1ddf6d731e1b6.jpg)
a. Results from the NS model, $c = 0.2$, $p = 0.25$.
Figure 3. Typical results from two models.

![](images/f4fe0249053adbf12d5b3edbfe3c186e451ec03ad7e981e4714ec6b0401df70e.jpg)
b. Results from Chowdhury et al. [2000].

# Properties of and Support for the NS Model

The one-lane NS model is self-consistent, flexible, and matches known empirical data.

Some of the properties of the NS model can be predicted analytically [Nagel and Herrmann 1993]. We use this information as well as experimental results to test our model. In the limiting case where the random braking probability is zero, it is possible for vehicles to "cruise," moving at their maximum speed at all times, corresponding to a flux of $J = c v_{\mathrm{max}}$. This is possible only if there is sufficient space. Once the "hole density," the density $(1 - c)$ of remaining spaces, is smaller than this flux, the lack of free space limits the speed of the vehicles.
This relationship between flux and density is given by:

$$
J (c) = \min \left\{ c v_{\max}, 1 - c \right\}, \tag{1}
$$

where $J$ is the flux of cars, the number of cars passing a cell in unit time, and $c$ is the density of cars. We ran our NS automaton with $p_{\mathrm{brake}} = 0$ for 20 trials, with excellent agreement between our mean and the theory, as seen in Figure 4.

As pretty as this graph is, it indicates only that the model is self-consistent and can be approximated; it does not show that the model actually represents a real system. For that, we consult empirical data on vehicle flux (Figure 5b). Clearly, the NS model is an accurate approximation of the known data.

![](images/89118fe1471260c686dd42feb3ea1747808a2305678e56cc496b58c2e5e05a9d.jpg)
Figure 4. The flux equation (1) predicts the results of the NS model with very good accuracy.

It is also possible to use mean field theory to describe the NS model. Even if $p_{\mathrm{brake}} \neq 0$, the case of $v_{\mathrm{max}} = 1$ can be solved analytically with this technique [Schadschneider and Schreckenberg 1997]. For our system, with $v_{\mathrm{max}} = 5$, this becomes computationally difficult.

# Adding Delays

Delays prevent the use of periodic boundary conditions.

To simulate an encounter with a tollbooth, we must add a delay to the unobstructed system. Simon and Nagel [1998] model the NS automaton with a blockage, but only with a fixed delay probability. We instead assume that the service time is exponentially distributed, so that any one tollbooth completes service within $\Delta t$ with probability $1 - \exp(-\mu \Delta t)$, and we use this assumption to describe the delay in our NS model as well.

Introducing this delay creates an asymmetry in the problem: particles to the right of the barrier have to loop around to reach the "tollbooth," whereas particles on the left encounter it immediately.
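In discrete time, the exponential service assumption makes each occupied booth an independent Bernoulli trial per step, so the number of steps a car occupies the booth is geometric. A minimal sketch (illustrative names, not the authors' code):

```python
import math
import random

def service_steps(mu, dt=1.0, rng=random.random):
    """Number of whole time steps a car occupies the booth: geometric with
    success probability p = 1 - exp(-mu * dt), the discrete analogue of an
    Exp(mu) service time."""
    p_done = 1.0 - math.exp(-mu * dt)
    steps = 1
    while rng() >= p_done:      # booth fails to finish service this step
        steps += 1
    return steps
```

The mean number of steps is $1/p$; as $\Delta t \rightarrow 0$ the occupied time $1/p \cdot \Delta t$ approaches $1/\mu$, recovering the continuous mean service time.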
Because of this asymmetry, we measure the flux at both the one-quarter and three-quarter points of the lane (Figure 6).

The fundamental diagrams in Figure 6 confirm our intuition regarding the interaction of flux and a bottleneck (the tollbooth). The flux at the one-quarter point declines, due to congestion, at a relatively low vehicle density compared with the three-quarter point, which is beyond the bottleneck. The heavy incoming traffic therefore affects the accumulating queue faster than it affects the vehicles past the tollbooth.

![](images/c44901ff44bcf114d7d806df04951a52a76e9c19573dd112617f95dcf83caaa8.jpg)
a. NS model with theory.
Figure 5. Comparison of model with data.

![](images/2e0e9fca8f9af75f7256e740df5018da1c05835eba21f2843405f3e6fb4c8ff2.jpg)
b. Empirical data from a Canadian highway [Chowdhury et al. 2000].

These fundamental diagrams show that the periodic boundary conditions are inappropriate for this calculation; with periodic boundaries, the input rate to the queue is limited by the (smaller) flux of vehicles wrapping around from the right. This is not representative of a true traffic jam; without periodic boundary conditions, jams cannot affect the flux upstream from them.

![](images/a782d12188c45851af0fee33fe252d5c0cfbd4a9db2c11f328fd285d0086eadc.jpg)
a. Fundamental diagram at the one-quarter point, $p = 0$.
Figure 6. Fundamental diagrams.

![](images/4d47be56e3b337e09c37a54c6934b3af1ef671da1a93cbbfaca85f7838cbcfc3.jpg)
b. Fundamental diagram at the three-quarter point, $p = 0$.

# Simulating the Complete System

# Multiple Lanes

By adding a new rule to the one-lane automaton, we can model multilane highways. We use a single-lane model to ensure that our automaton is a proper representation of the real world, but the actual problem is a multiple-lane one. Two-lane studies are less common than single-lane studies, and models with more lanes are rarer still [Chowdhury et al. 2000; Nagel et al. 1998; Rickert et al. 1996].
We extend the rule set of the automaton to describe lane changes, using the one-lane NS rules with a single additional rule for lane changing (Figure 7).

![](images/bec8d8993b89dde22f9e6ff6c99342125707c0d1a0b43e4d68d238256fa69609.jpg)
Figure 7. The multilane automaton rule.

4. Merge to avoid obstacles. The vehicle attempts to merge if its forward path is obstructed $(d_{n} = 0)$. The vehicle randomly chooses an intended direction, right or left. If that intended direction is blocked, the car moves in the other direction, unless both directions are blocked (the car is surrounded). This is consistent with the boundary and tailgating rules proposed by Rickert et al. [1996].

# Changing the Highway Shape

With the multilane automaton, we can model a multilane highway that has realistic lane-changing behavior. This still does not, however, model a transition between a highway and a number of tollbooths.

To create the toll plaza, we introduce borders that force the automata to change lanes to avoid hitting the boundary. The borders outline the edge of a ramp that moves from the highway onto a wider toll plaza and back again (Figure 8). This is the only aspect of the model that is not general; by imposing different boundaries, we could easily model a different problem. To simulate the wait at the tollbooth, we also add a delay at the center, as in the one-lane case.

The previous models [Gray and Griffeath 2001; Nagel et al. 1998; Rickert et al. 1996] assume a roadway of constant width—the number of lanes does not change. By restricting the geometry of our roadway to represent the toll plaza (note the "diamond" shape in Figure 8) and introducing the behavior of merging away from obstacles, we have increased the flexibility of the model without making additional assumptions.

![](images/239f16647b38ba7bbe7bb7d9c8509b40ae5862d9cf058c4904e8638189b5f35e.jpg)
Figure 8. We introduce imaginary borders into the system to narrow and widen traffic.
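Rule 4 can be isolated as a small decision function (a sketch with illustrative names; here $-1$ means a move left, $+1$ a move right, and $0$ staying in lane):

```python
import random

def merge_direction(forward_blocked, left_open, right_open, rng=random):
    """Rule 4, merge to avoid obstacles: only an obstructed car tries to
    change lanes; it picks a side at random and falls back to the other."""
    if not forward_blocked:
        return 0                        # path clear: no lane change
    intended = rng.choice([-1, +1])     # randomly chosen intended direction
    open_side = {-1: left_open, +1: right_open}
    if open_side[intended]:
        return intended
    if open_side[-intended]:
        return -intended                # intended side blocked: try the other
    return 0                            # surrounded: stay put
```

Resolving all cars' intentions simultaneously (so two cars do not merge into the same cell) is the remaining bookkeeping handled by the full automaton.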

# Consistency of $\mathbf{M}|\mathbf{M}|n$ Queue and NS Model

The $\mathbf{M}|\mathbf{M}|n$ queue is an idealized system. It predicts a shorter wait time than a real toll plaza, because it fails to account for inefficiencies in the queue (Figure 9). The $\mathbf{M}|\mathbf{M}|n$ queue does, however, predict the correct distribution. In addition, the stability of the queue is very different from the stability of the NS model. An $\mathbf{M}|\mathbf{M}|n$ queue achieves a steady state if $\lambda/\mu < n$, with $\lambda$ the arrival rate, $\mu$ the service rate, and $n$ the number of servers [Gross and Harris 1974]. We observe in the NS simulation that traffic in front of the tollbooths can create a growing backlog even when the corresponding $\mathbf{M}|\mathbf{M}|n$ queue would be stable.

Despite these apparent inconsistencies, there is very strong agreement between the queueing theory predictions and the observed results. From the queueing theory analysis, we know the probability distribution of the number of cars leaving the tollbooths:

$$
P(D = d) = \frac{\mathrm{e}^{-\lambda \Delta t} (\lambda \Delta t)^{d}}{d!}.
$$

This equation provides a good deal of information about the queue, and this probability can easily be measured in the cellular automata model. We compare the simulated and theoretical probability distributions in Figure 10; the two distributions are very similar, and the difference in their means shrinks as the simulation runs, falling below $2\%$ after $10^{4}$ iterations of the NS model.

![](images/dac8e20256bdfddb9a2ab007ef275e8a7b38c3e6b9cd65c5e08951d01b622430.jpg)
Figure 9. The $\mathbf{M}|\mathbf{M}|n$ distribution and NS distribution are similar, but the $\mathbf{M}|\mathbf{M}|n$ has a smaller mean wait time due to its optimistic assumptions.

![](images/7d3a5b0daebdb145a9bea24710c7ad7fad75fa9b940f3312ed003ee72700367b.jpg)
Figure 10.
The simulated distribution of leaving vehicles is very close to that predicted by queueing theory.

# Time Predictions of the Automata Model

# The Optimal Number of Tollbooths

The optimal configuration minimizes the wait time; so to determine the correct number of tollbooths, we need to measure the average time for a vehicle to enter the toll plaza, wait in line, and then exit onto the main road again. We do this by tracking automata and averaging the time that passes between entering and leaving the system.

# Calculating Average Times

The average time required to pass through our system depends on the arrival rate (which controls congestion), the number of lanes, and the number of tollbooths. We take the mean service time to be fixed at 5 s; Hare [1963] uses 9 s. Though changing the service time does change the average transit time, it does not affect which value of $n$ is optimal.

We fix the number $l$ of incoming lanes and search over the number $n$ of tollbooths and the arrival rate $\lambda$. We calculate the average wait time for a range of $n$ and $\lambda$ by using our cellular automata model and averaging over a long period of time to eliminate transient effects.

What, though, should these ranges be? We presume that $n$ is not larger than three or four times the number of lanes. We placed this restriction after noticing that the wait time increases sharply when $n$ is much larger than $l$. The range of $\lambda$ is determined by commonsense restrictions. If $\lambda$ is the mean number of cars arriving in a time step, then $\lambda$ should be no more than the number of lanes of incoming traffic, as this is the physical capacity of the road.

# Optimal Results for 2 Lanes

We allow $n$ to range from 2 to 8 and $\lambda$ from 0 to 2. We plot the average time against $n$ and $\lambda$ in Figure 11.

The clear minimum in this graph lies along the line $n = 4$, even for different values of $\lambda$.
This indicates that even for different arrival rates, the optimal number of tollbooths is 4. This solution is very stable and does not require changes with traffic rates, at least for typical values.

Though 4 is the optimal number of booths, 2 is very near optimal and 3 is very bad. For $n = 3$, the additional lane adds more traffic jam than throughput. For small $n$, when there is one more booth than lanes of traffic, the wait time is a local maximum.

# Optimal Results for 3 Lanes

By varying $\lambda$ from 0.6 to 2.7 and $n$ from 3 to 9, we find that the minimum occurs, once again independent of $\lambda$, at $n = 6$ (Figure 12).

![](images/dbf01cf14659be00b817c27b51beb52e3e7dfce4f893eb02290d3ad227a58e29.jpg)
Figure 11. For a 2-lane system, the minimum time occurs for 4 tollbooths.

![](images/08a1e2e1a41a0eb5a930f76b81038d2a5817b394168e4fa1c953d18023c8cafd.jpg)
Figure 12. For a three-lane system, the minimum time occurs when there are six tollbooths.

# Higher Numbers of Lanes

The results so far would suggest the naive solution of always having twice as many tollbooths as incoming lanes. Unfortunately, the 4-lane case disproves this guess. For different values of the arrival rate $\lambda$, the optimal number of tollbooths varies, from a low of 6 for small $\lambda$. The tollbooth owners could measure traffic flow, estimate $\lambda$, and open or close tollbooths as needed.

# Generalizing These Results

To test the robustness of these results, we repeated the calculations with the probability of random braking set to 0.1 rather than 0. For each $\lambda$, even though the wait times change, the optimal numbers of tollbooths do not.

This process could be repeated for any required setup; we have illustrated a general technique for determining the optimal number of tollbooths.
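The search procedure itself is a plain grid search over $n$ and $\lambda$. A sketch follows; the `average_time` argument stands in for running the automaton and averaging transit times over many steps, and the toy function below is fabricated purely to illustrate the call (its minimum at $n = 4$ mimics the 2-lane result):

```python
def optimal_booths(average_time, n_range, lam_values):
    """For each arrival rate, return the tollbooth count n that minimizes
    the simulated mean time to enter, pay, and exit the plaza."""
    return {lam: min(n_range, key=lambda n: average_time(n, lam))
            for lam in lam_values}

# Toy stand-in shaped like the 2-lane results, for illustration only:
toy_time = lambda n, lam: (n - 4)**2 + 10 * lam

best = optimal_booths(toy_time, range(2, 9), [0.5, 1.0, 1.5])
```

Note that `min` with a key breaks ties in favor of the smaller $n$, which is the cheaper configuration when two booth counts perform equally well.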

# One Tollbooth per Lane

If cars were not allowed to change lanes, the case with one tollbooth per lane of traffic would reduce to $l$ independent one-lane models, equivalent to our single-lane highway. We know that this is not the case: cars move into the lane with the shortest queue.

In our results, the $n = l$ case is typically nonoptimal. The one exception is the two-lane highway; here, the time for $n = 2$ is only barely longer than for $n = 4$. However, this is not because $n = l$ is always "bad," but because there is usually a better case. As the number of lanes increases, the optimal number of tollbooths increases significantly. If we consider cost, the $n = l$ case could be very important, since for low $l$ ($l < 5$), the $n = l + 1$ case is a local maximum and $n = l$ is a local minimum.

# Re-examining Assumptions

Though our calculations assume an exponential service probability, using a Gaussian service distribution does not change the model's recommendations significantly.

# Conclusions

We build a model of traffic flow near a toll plaza by using cellular automata modified from the one-lane Nagel-Schreckenberg automaton.

- Our model's predictions match empirical data on vehicle distribution and are confirmed by our queueing theory analysis of the problem.
- Changing the service rate and service distribution of the tollbooths does not significantly alter the recommendations for the optimal number of booths.
- We establish a general technique for determining the optimal number of tollbooths to put on a given highway.
- Though in general the optimal tollbooth results are complex and depend on the arrival rate, there are simple cases: a 2-lane highway should have 4 tollbooths and a 3-lane highway should have 6, independent of the amount of traffic.
- In general, the case of as many tollbooths as lanes is suboptimal.

# Strengths and Weaknesses

# Strengths

- Consistency.
Our queueing theory model and the cellular automata model agree on the distribution of cars leaving the booths. Both models match theoretical results and past empirical results. In addition, under small changes, like adjusting the probability of braking, the recommendations of the model do not change significantly.
- Minimal assumptions required. By using the automata, we reduce the number of parameters and assumptions. For our queueing theory, we assume that the tollbooth service times are exponentially distributed, but altering this distribution does not affect the recommendations.
- Flexibility. Our model easily adapts to problems with different geometries, such as different numbers of lanes or even different boundaries.
- Ease of implementation. A complex problem is simulated using very simple rules.

# Weaknesses

- No closed-form solution. For the complete model, we must actually run the simulation.
- Calculation time. To get an accurate average time for vehicles, we need to average over a number of time steps on the order of 10,000. As the number of lanes increases, computation slows.

# References

Bocharov, P., C. D'Apice, A. Pechenkin, and S. Salerno. 2004. Modern Probability and Statistics: Queueing Theory. Utrecht, The Netherlands: Brill Academic Publishers.

Brilon, W., F. Huber, M. Schreckenberg, and H. Wallentowitz (eds.). 1991. Traffic and Mobility. Springer-Verlag.

Chowdhury, Debashish, Ludger Santen, and Andreas Schadschneider. 2000. Statistical physics of vehicular traffic and some related systems. Physics Reports 329: 199-329. http://arxiv.org/pdf/cond-mat/0007053.

Gray, Lawrence, and David Griffeath. 2001. The ergodic theory of traffic jams. Journal of Statistical Physics 105 (3/4).

Gross, D., and C. Harris. 1974. Fundamentals of Queueing Theory. New York: John Wiley and Sons.

Hare, Robert R., Jr. 1963. Contribution of mathematical concepts to management.
Industrial College of the Armed Forces. http://www.ndu.edu/library/ic4/L64-015.pdf.

Knospe, Wolfgang, Ludger Santen, Andreas Schadschneider, and Michael Schreckenberg. 2004. Empirical test for cellular automaton models of traffic flow. Physical Review E 70.

Medhi, J. 2003. Stochastic Models in Queueing Theory. 2nd ed. New York: Academic Press, Elsevier Science.

Nagel, Kai, and Hans J. Herrmann. 1993. Deterministic models for traffic jams. Physica A 199 (2).

Nagel, Kai, Dietrich Wolf, Peter Wagner, and Patrice Simon. 1998. Two-lane traffic rules for cellular automata: A systematic approach. Physical Review E 58 (2).

Rickert, M., K. Nagel, M. Schreckenberg, and A. Latour. 1996. Two lane traffic simulations using cellular automata. Physica A 231 (4): 534-550. citeseer.ist.psu.edu/rickert95two.html.

Schadschneider, Andreas, and Michael Schreckenberg. 1997. Car-oriented mean field theory for traffic flow models. Journal of Physics A: Mathematical and General 30.

Schreckenberg, M., A. Schadschneider, K. Nagel, and N. Ito. 1995. Discrete stochastic models for traffic flow. Physical Review E 51 (4): 2939. citeseer.ist.psu.edu/schreckenberg94discrete.html.

Simon, P.M., and Kai Nagel. 1998. A simplified cellular automaton model for city traffic. Physical Review E 58 (2).

Yang, X., Y. Ma, and Y. Zhao. 2004. Boundary effects on car accidents in a cellular automaton model. Journal of Physics A: Mathematical and General 37.

![](images/fb7c4d266c2c9f3e3b720d4dfefe6907503b9fd563e54e0d892be0642cffffa3.jpg)
Brian Camley, Pascal Getreuer, and Bradley Klingenberg.

# Judge's Commentary: The Outstanding Tollbooths Papers

Kelly Black
Dept. of Mathematics
Union College
Schenectady, NY 12308
blackk@union.edu

# Overview of the Problem

The teams were asked to examine the flow of traffic through a toll plaza.
One of the difficulties in this problem is that the traffic flow through a toll plaza is not actively managed; rather, the traffic through the system is passively managed by the careful design of the roadway and the toll-collection system. On most roadways, a toll plaza consists of a long transition to an increased number of lanes, the collection system (the tollbooths), and a long transition back to the original number of lanes. Interestingly, only a very small number of entries suggested adding an active management component to the toll plaza, such as ways to restrict the way that drivers can change lanes. + +In the last paragraph of the problem statement three tasks/questions were given: + +- "Make a model to help determine the optimal number of tollbooths to deploy in a barrier-toll plaza." +- "Explicitly consider the scenario where there is exactly one tollbooth per incoming lane." +- "Under what conditions is this more or less effective than the current practice?" + +At first glance, it would seem that the considerable power of queueing theory would be readily available for this problem. Unfortunately, this is only true to a certain extent. The nature of a toll plaza is that multiple lanes expand into even more lanes into which people can change if they feel it is advantageous. Once again, people get in the way of good mathematics. + +On second thought, however, the difficulty of the problem should not be a surprise. As far as we are aware, no state's Dept. of Transportation has been able to get this problem right. Moreover, traffic designers often only have one chance to get it right. The cost of making changes to an existing toll plaza can be prohibitive. + +In the following section, we provide an overview of the kinds of solutions that were submitted, including some comments on the judges' reactions. We also provide an overview of the judging process itself and the challenges that this particular problem represented to the judges. 
Finally, some tips and pointers are provided in reaction to some of the things that appeared in many of the team's submissions. + +# The Problem at Hand + +# Modeling a Toll Plaza + +The first of the three questions required teams to create a mathematical model of a barrier-toll plaza. The most common mathematical model treated the toll plaza as a queue. Unfortunately, the complex nature of the toll plaza is not easily described as a simple queue. For example, the lanes diverge into more lanes at the entrance, and drivers are able to change lanes. Also, the lanes must combine again after the service area, and crowded traffic after the tollbooth can impact the traffic in the entrance of the toll plaza. + +Of the entries that modeled a toll plaza using queueing theory, what set them apart was how they handled the various subtleties of a toll plaza. For example, teams had to make decisions on how cars move in the entrance and exit sections of a toll plaza. Teams also had to decide on how each car is handled by the tollbooth attendants. For example, some teams assumed that each car could be handled in the same amount of time. Other teams assumed that the time required for each car was a random variable with a prescribed probability distribution. To make this decision even more difficult, the teams had to decide how to handle the various payment methods available, such as cash, EZ-Pass, or other remote rapid-payment methods. + +Of those teams that assumed that the service time required for each car varied, some treated the entrance section as a queue and the actual tollbooth as a separate queue. In this situation, the resulting chain of queues could be coupled and described if the two probability distributions were similar. For example, the majority of such entries assumed that the distributions of the cars entering the system and tollbooth service times were both Poisson with different parameters. 
+ +While the more advanced entries were able to bring together the entrance and the tollbooths in the plaza, the exit represented a significant difficulty. The majority of entries briefly discussed the potential problems of the traffic after the service booths but did not include the effects of the exits in their models. + +In fact, the teams that did explicitly consider the exit areas usually only did so in the context of a simulation in a computational model. + +In addition to the queueing theory approach, a less common approach was to model the flow of traffic as a fluid. The resulting models were much more complex than those based on queueing theory. The process of matching the physical situation to a flow and then converting the results back to what is happening within a toll plaza represented a substantial difficulty for those taking this approach. + +Besides the construction of a mathematical model, the most popular approach to this problem made use of simulations based on a computational model. Such models were usually based on either a cellular automata model or a highly modified queue making use of a time-stepping scheme allowing for lane changes. The more advanced approaches also factored the exit areas into the simulation. + +A computational model required a much different approach to the analysis and discussion of the results. The results are a composite of many runs and take on a statistical nature. Furthermore, the large number of variables—the number of lanes before the plaza, the number of lanes in the plaza, the waiting time, the way cars enter the system, the length of the plaza, and a wide variety of other factors—make it difficult to reach concrete conclusions about the best design for a toll plaza. This is especially true given the short time available to develop the computational model, implement it, decide which situations to use, run it, and examine the results. 
+ +The judges took this into consideration and did not expect a complete examination. This approach did require a more complete description of the computational model, though, since the number of assumptions that can be incorporated was significantly greater. There was also a heightened expectation of doing more in the way of a sensitivity analysis with respect to some of the various parameters. + +While many different approaches were submitted, the entries that received the higher ratings examined at least two different models. The most common combination by far was a simple queue and a computational model. The most striking aspect of this was that few teams explicitly stated a comparison of the two results under identical circumstances. Those that did stood out, and the results of the comparison helped to establish a good benchmark of the computational model. + +# One Tollbooth Per Lane + +The second question in the problem statement required each team to use their models for a toll plaza that has one tollbooth for each lane of travel on the road. This established a sanity check that each team had to examine. Surprisingly, a number of teams did not examine this situation, which resulted in a penalty for ignoring one of the stated requirements. + +# What Is "Best" + +The final question required the teams to determine best practice in designing a toll plaza—to define a way to compare different configurations of a toll plaza. Each team had to balance the competing costs of each driver's time, the cost of operating the toll plaza, and the cost of construction. + +One of the most surprising aspects of this year's competition is that few teams explicitly defined what they thought would be the cost of operating a toll plaza. The vast majority of teams simply compared the average waiting time for the drivers under various circumstances with little comment or justification. 
Those teams that examined a nontrivial cost of operation, based on the lost productivity of the drivers as well as the cost of operating the tollbooths, certainly stood apart from the others.

The few teams that did examine this part of the problem reported some of the most interesting results. In fact, in some circumstances the option of not building a tollbooth is the most satisfactory option to almost everybody except maybe the tollbooth operators themselves!

# Overview of the Judging Process

We give an overview of the judging process, including some general observations about some of the entries submitted for the competition. In the subsections that follow, we try to provide some insight into what the judges discussed prior to the actual judging, first impressions of a paper, and some of the small details that help to make an entry stand out.

First, we provide a broad overview of the judging process itself. The papers are examined in a two-step process. The first round, or triage round, is the first pass. In this round, judges are able to examine the papers for only a relatively short time. When a judge begins reading a paper in this first stage, the question is, "Should the paper be read in closer detail?" If the answer is "yes" or even "maybe," then it is passed on to the second stage. Because we try to give each paper the benefit of the doubt, it is difficult to state what is necessary to move past this round.

At the most basic level, the quality of the writing and the consistency of the summary with the rest of the paper are vital in this first stage. It is a really bad idea to make a judge work too hard on a paper. The easier it is for a judge to determine how the students interpreted the problem, the approach used, and the results, the easier it is for the judges to determine the quality of the work.

Entries that remain through the second stage are given much closer, detailed readings.
For example, papers that remain on the final day of judging are read by as many as eight different judges for at least an hour per entry. During this time the judges sometimes confer with one another if they are not sure about an equation, result, or the wording in a section. For the most part, though, each judge tries to provide an independent review of the student's work. + +# Discussion Before Judging Began + +Before the judging began, the judges got together to discuss the problem. As usual, the problem was nontrivial, and we judges had to ensure that we understood what was being asked. The judges had to carefully parse the original question. For example, this year the problem included some very specific tasks that were given in the last paragraph of the problem, and whether or not an entry specifically addressed those questions was important. + +Additionally, each judge initially read through a set of randomly chosen papers. In the second stage of the process, this set of papers was adjusted to ensure that each judge viewed papers with a wide variety of initial scores. The purpose of this protocol is to make sure that we also took into consideration how the various teams interpreted the question and how they reacted to the problem. + +After these initial readings, the judges had to agree on what was important and how to provide a consistent mechanism for comparing different entries. Each year, the relative importances of the various aspects of a paper are tailored to the particular problem; but in general, the kinds of things that judges look for in a paper are relatively consistent. This year the judges decided that the following aspects were important: + +Summary This is the first thing that a judge sees. A summary should provide a brief overview of the problem, a brief review of the methodologies used, and an overview of the conclusions. 
It is a difficult challenge to include all these things on one page and do it well, but it does provide the first impression.

Assumptions and Justifications In constructing a mathematical representation of a physical system, some simplifications must be made and some subtleties must be left out. The parts of the problem deemed most important—as well as what is left out of the model—must be made explicit.

Model/Analysis One of the novel aspects of this year's problem was the close association between the mathematical model and the analysis of the problem. The stochastic nature of the problem, as well as the prevalence of entries making use of both queuing theory and simulation, made it difficult to separate these two aspects of the problem. In the end, the judges decided not to treat them separately, so that it would be easier to compare entries whose balance varied between the different approaches used.

Results/Validation It is not unusual to see many papers that make use of a variety of approaches and techniques, but this problem resulted in more entries than usual that employed at least two solution techniques. It was more important than ever for the teams to be able to compare the different results as well as interpret the results.

Sensitivity Between the stochastic nature of the problem and the wide use of simulation, the validation of the results had to include some way to test the robustness of the conclusions. One of the most important ways to do this is to examine what happens after changing the values of certain parameters or changing the assumptions about which probability distributions to use. For example, some teams that used queuing theory examined their conclusions under a variety of assumptions about the time required by an individual tollbooth attendant to complete one transaction.
If a small variation in the service time resulted in a large change in the average waiting times, that is an indication that the conclusions may be suspect.

Strengths & Weaknesses Any mathematical model requires many assumptions and simplifications and is only good for a restricted situation. It is vital that the modeler identify the conditions under which the mathematical model is appropriate. Each team was expected to demonstrate explicitly that they had done some critical analysis of the model itself and to identify what they felt was good and bad about the model.

Clarity/Communication One of the key aspects of any problem is to be able to share the results. The methods employed, the results that are delivered, and the analysis of both the methods and the results must be clearly described. This is the filter through which all mathematics is shared.

# Communicating Mathematics

As mathematicians, we are engaged in a social exercise that absolutely requires us to share our work with one another; however, sharing mathematical ideas can be extremely difficult. In fact, it is difficult enough that we often try very hard to avoid putting our students through the difficult learning process associated with writing and sharing mathematical ideas. We often have our students take part in writing proofs or problem solving, but putting it all together in a formal report or a paper for the first time can be an excruciating process.

There are many excellent books and other resources for students that offer a better introduction than can be expressed here. In fact, from the many excellent entries it is clear that those resources are being exploited. We focus on just a few general issues that stand out in this year's event.

- Some teams presented a narrative of the team's activities. An entry that lists how the team approached the problem and chronicles what the team did (or tried to do) puts the team at a severe disadvantage.
In contrast, an entry in the format of a self-contained report immediately stands out. In such a report, the problem is restated, including the results; the various methodologies that are used are clearly stated; the analyses are given; and the results are clearly stated, including a critical examination of the approach and the results.
- One aspect of writing that even advanced writers struggle with is graphing. When a plot is provided, it should be clearly introduced and described in the text, including a proper reference to the figure number. It should be clear to the reader from the text of the report what to look for in the plot before turning to look at the plot. This year's problem is an excellent example of the importance of describing plots and figures. Providing a graphical example of cars moving through a toll plaza over time is difficult, and the teams attempted to do this in many different ways. Furthermore, some teams provided sequences of figures to demonstrate a particular transition, and there is a huge burden on the writers to explain what the reader should be looking for and what the implications are.

In general, when a figure or plot is provided, the text should provide a detailed explanation of what is in the figure. Also, the caption and labels in the figure should succinctly describe what the figure is. Of course, the axes should be labeled and units should be provided. Ordinarily, discrete data should be displayed as discrete and not with lines drawn between points; this year, however, the judges gave teams more leeway in how discrete vs. continuous functions were displayed, because many figures represented the organization of the cars in a queue, which might be discrete according to the model even though the domain (time) could be continuous.

- Some of the entries included many tables. Almost everything said above for figures also applies to tables.
Tables should be clearly labeled and explained in detail in the text of the report. The easier you make it for a judge to determine what is in a table and why it is important, the more likely the judge will be happy. A happy judge is a higher-scoring judge!
- Finally, a small thing that can keep an otherwise good paper from advancing past the early rounds. Some teams have a difficult time integrating equations and citations within the text of the report. Equations and citations should be correctly integrated into each sentence using proper grammar. Some teams that do excellent work make it extremely difficult for themselves when the equations or citations are set apart from each sentence and not properly integrated into the flow of the text.

# The Little Things

The vast majority of teams do great work, and it is always exciting to see what the teams are able to accomplish in such a short amount of time. It is also important to be able to share and express the ideas developed by each team in a formal report. This final product is the vehicle used to communicate the team's ideas and techniques. It is not a narrative of what the team did but an opportunity to educate and persuade others to follow up on the team's excellent work.

There are a number of simple things that can be done to make an entry easier to read. Some of these may appear to be trivial, but they make the judges' task easier, which in turn makes it easier for the judges to concentrate on the ideas rather than the way the ideas are expressed:

Strengths & Weaknesses This aspect of an entry demonstrates whether or not a team has provided a critical inquiry into the methods and techniques developed. Including this aspect as a separate section of a report makes it much easier for the judges to identify this important aspect of the team's efforts.

Table of Contents Given the growing number of teams that use LaTeX, it is shocking how few entries provide a table of contents. This is a trivial step that can radically improve the readability of a report.

Citations A paper that makes ample use of citations, properly and consistently integrated into the text, is guaranteed to stand out. For example, many papers included citations when providing the definitions of functions describing the way cars entered the toll plaza but failed to provide a citation when stating some of the results that come directly from the relevant literature.

Equation Numbers Number all equations, even if they are not explicitly referred to in the text. This makes it easier for judges to confer when there is a question about a particular equation.

Units Units are important. Always make sure that the definition of a variable, parameter, or function includes its units. Also, always check a result to make sure that the units are correct. This is one of the first checks judges make when confronted with a result that is not obvious. (Always keep in mind the difference between a quantity and its rate of change!)

# Conclusions

Each year we are amazed at the high quality of the entries. The things that the teams can accomplish in a weekend are a testament to the quality of the teams' training and hard work. The teams receiving the higher honors should be proud to stand out in such an incredible field.

This year the teams that submitted entries for the Tollbooth Problem focused their efforts on the optimal design of a toll plaza. They had to consider the number of lanes, the lengths of lane transitions, the times required to collect the fares, and a wide variety of other factors. Most of the teams made use of either queuing theory, comparisons to fluid flow, or simulation via a computational model.
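The service-time sensitivity discussed above is easy to demonstrate with the simplest of these queuing models. As a minimal sketch (the rates below are invented for illustration and are not taken from any entry), treat a single booth as an M/M/1 queue with arrival rate lam and service rate mu; the mean queueing delay is Wq = lam / (mu * (mu - lam)), which grows without bound as the service time pushes utilization toward 1:

```python
# Minimal M/M/1 illustration with made-up rates (not from any contest entry):
# cars arrive at rate lam per minute; one booth serves at rate mu per minute.
# Mean time spent waiting in the queue is Wq = lam / (mu * (mu - lam)).

def mean_wait(lam: float, mu: float) -> float:
    """Mean queueing delay (minutes) for an M/M/1 queue; requires lam < mu."""
    if lam >= mu:
        raise ValueError("unstable queue: arrival rate >= service rate")
    return lam / (mu * (mu - lam))

lam = 1.8  # cars per minute (assumed)
for service_time in (0.45, 0.50, 0.55):  # minutes per transaction (assumed)
    mu = 1.0 / service_time  # service rate implied by the transaction time
    print(f"{service_time:.2f} min/transaction -> mean wait {mean_wait(lam, mu):5.1f} min")
```

With these hypothetical numbers, stretching the transaction time from 0.45 to 0.55 minutes multiplies the mean wait from about two minutes to nearly an hour: exactly the kind of fragile conclusion the judges expected teams to flag.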
The teams that received higher recognition from the judges derived more than one model and made comparisons among their models.

One aspect that set entries apart was the analysis and critical evaluation of their models and results. As usual, a sensitivity analysis of the models is important; but because of the nature of this year's problem, an even higher value was placed on this important aspect of the modeling process.

Finally, each team's entry consists of a written report designed to educate and persuade. This is a difficult task in itself, but the teams are asked to do this without the benefit of an outside editing process; they must somehow build editing into their efforts themselves. Adding to the difficulties, good writing is most effective when it does not get in the way of the ideas that the writer is trying to convey and is not noticed until after the reader looks back and realizes what has been shared.

# About the Author

![](images/1d037ca10ecf8a2d0488befe114c3163e18db172e596980303a5c1cc9210ed43.jpg)

Kelly Black is a faculty member in the Dept. of Mathematics at Union College. He received his undergraduate degree in Mathematics and Computer Science from Rose-Hulman Institute of Technology and his Master's and Ph.D. from the Applied Mathematics program at Brown University. His research is in scientific computing, and he has interests in computational fluid dynamics, laser simulations, and mathematical biology.

# Statement of Ownership, Management, and Circulation
1. Publication Title: UMAP JOURNAL
2. Publication Number: 0197-3622
3. Filing Date:
4. Issue Frequency: QUARTERLY
5. Number of Issues Published Annually: 4
6. Annual Subscription Price: $99.00
7. Complete Mailing Address of Known Office of Publication (Not printer): COMAP, Inc., 57 Bedford St., Suite 210, Lexington MA 02420. Contact Person: Kevin Darcy. Telephone: 781-862-7878 ex31

8. Complete Mailing Address of Headquarters or General Business Office of Publisher (Not printer): SAME

9. Full Names and Complete Mailing Addresses of Publisher, Editor, and Managing Editor (Do not leave blank)

Publisher (Name and complete mailing address): Solomon Garfunkel, 57 Bedford St., Suite 210, Lexington MA 02420

Editor (Name and complete mailing address): Paul Campbell, 700 College St., Beloit WI 53511

Managing Editor (Name and complete mailing address): Pauline Wright, 57 Bedford St., Suite 210, Lexington MA 02420

10. Owner (Do not leave blank. If the publication is owned by a corporation, give the name and address of the corporation immediately followed by the names and addresses of all stockholders owning or holding 1 percent or more of the total amount of stock. If not owned by a corporation, give the names and addresses of the individual owners. If owned by a partnership or other unincorporated firm, give its name and address as well as those of each individual owner. If the publication is published by a nonprofit organization, give its name and address.)
Full Name: CONSORTIUM FOR MATHEMATICS AND ITS APPLICATIONS INC. (COMAP, INC.)
Complete Mailing Address: 57 Bedford St., Suite 210, Lexington MA 02420

11. Known Bondholders, Mortgagees, and Other Security Holders Owning or Holding 1 Percent or More of Total Amount of Bonds, Mortgages, or Other Securities. If none, check box

12. Tax Status (For completion by nonprofit organizations authorized to mail at nonprofit rates) (Check one) The purpose, function, and nonprofit status of this organization and the exempt status for federal income tax purposes: $\square$ Has Not Changed During Preceding 12 Months $\square$ Has Changed During Preceding 12 Months (Publisher must submit explanation of change with this statement)
13. Publication Title: The UMAP Journal
14. Issue Date for Circulation Data Below: 7/1/05

15. Extent and Nature of Circulation:

| Extent and Nature of Circulation | Average No. Copies Each Issue During Preceding 12 Months | No. Copies of Single Issue Published Nearest to Filing Date |
|---|---|---|
| a. Total Number of Copies (Net press run) | 700 | 700 |
| b(1). Paid/Requested Outside-County Mail Subscriptions Stated on Form 3541 (include advertiser's proof and exchange copies) | 650 | 640 |
| b(2). Paid In-County Subscriptions (include advertiser's proof and exchange copies) | 0 | 0 |
| b(3). Sales Through Dealers and Carriers, Street Vendors, Counter Sales, and Other Non-USPS Paid Distribution | 30 | 20 |
| b(4). Other Classes Mailed Through the USPS | 0 | 0 |
| c. Total Paid and/or Requested Circulation (Sum of 15b(1)-(4)) | 680 | 660 |
| d(1). Free Distribution by Mail (samples, complimentary, and other free copies): Outside-County as Stated on Form 3541 | 0 | 0 |
| d(2). Free Distribution by Mail: In-County as Stated on Form 3541 | 0 | 0 |
| d(3). Free Distribution by Mail: Other Classes Mailed Through the USPS | 12 | 12 |
| e. Free Distribution Outside the Mail (Carriers or other means) | 0 | 0 |
| f. Total Free Distribution (Sum of 15d and 15e) | 12 | 12 |
| g. Total Distribution (Sum of 15c and 15f) | 692 | 672 |
| h. Copies not Distributed | 8 | 12 |
| i. Total (Sum of 15g and h) | 700 | 684 |
| j. Percent Paid and/or Requested Circulation (15c divided by 15g times 100) | 98 | 98 |

16. Publication of Statement of Ownership: □ Publication required. Will be printed in the 3rd (FALL) issue of this publication. □ Publication not required.

17. Signature and Title of Editor, Publisher, Business Manager, or Owner: [signature]. Date: 9/15/05

I certify that all information furnished on this form is true and complete. I understand that anyone who furnishes false or misleading information on this form or who omits material or information requested on the form may be subject to criminal sanctions (including fines and imprisonment) and/or civil sanctions (including civil penalties).

# Instructions to Publishers

1. Complete and file one copy of this form with your postmaster annually on or before October 1. Keep a copy of the completed form for your records.
2. In cases where the stockholder or security holder is a trustee, include in items 10 and 11 the name of the person or corporation for whom the trustee is acting. Also include the names and addresses of individuals who are stockholders who own or hold 1 percent or more of the total amount of bonds, mortgages, or other securities of the publishing corporation. In item 11, if none, check the box. Use blank sheets if more space is required.
3. Be sure to furnish all circulation information called for in item 15. Free circulation must be shown in items 15d, e, and f.
4. Item 15h, Copies not Distributed, must include (1) newsstand copies originally stated on Form 3541 and returned to the publisher, (2) estimated returns from news agents, and (3) copies for office use, leftovers, spoiled, and all other copies not distributed.
5. If the publication has Periodicals authorization as a general or requester publication, this Statement of Ownership, Management, and Circulation must be published; it must be printed in any issue in October or, if the publication is not published during October, the first issue printed after October.
6. In item 16, indicate the date of the issue in which this Statement of Ownership will be published.
7. Item 17 must be signed.

Failure to file or publish a statement of ownership may lead to suspension of Periodicals authorization.
# The UMAP Journal

Vol. 27, No. 2

Publisher

COMAP, Inc.

Executive Publisher

Solomon A. Garfunkel

ILAP Editor

Chris Arney

Associate Director,

Mathematics Division

Program Manager,

Cooperative Systems

Army Research Office

P.O. Box 12211

Research Triangle Park,

NC 27709-2211

David.Arney1@arl.army.mil

On Jargon Editor

Yves Nievergelt

Department of Mathematics

Eastern Washington University

Cheney, WA 99004

ynievergelt@ewu.edu

Reviews Editor

James M. Cargal

Mathematics Dept.

Troy University—Montgomery Campus

231 Montgomery St.

Montgomery, AL 36104

jmcargal@sprintmail.com

Chief Operating Officer

Laurie W. Aragon

Production Manager

George W. Ward

Production Editor

Timothy McLean

Distribution

Kevin Darcy

John Tomicek

Graphic Designer

Daiva Kiliulis

# Editor

Paul J. Campbell

Campus Box 194

Beloit College

700 College St.

Beloit, WI 53511-5595

campbell@beloit.edu

# Associate Editors

Don Adolphson

Chris Arney

Aaron Archer

Ron Barnes

Arthur Benjamin

Robert Bosch

James M. Cargal

Murray K. Clayton

Lisette De Pillis

James P. Fink

Solomon A. Garfunkel

William B. Gearhart

William C. Giauque

Richard Haberman

Jon Jacobsen

Walter Meyer

Yves Nievergelt

Michael O'Leary

Catherine A. Roberts

John S. Robertson

Philip D. Straffin

J.T.
Sutcliffe + +Brigham Young University + +Army Research Office + +AT&T Shannon Research Laboratory + +University of Houston-Downtown + +Harvey Mudd College + +Oberlin College + +Troy University—Montgomery Campus + +University of Wisconsin—Madison + +Harvey Mudd College + +Gettysburg College + +COMAP, Inc. + +California State University, Fullerton + +Brigham Young University + +Southern Methodist University + +Harvey Mudd College + +Adelphi University + +Eastern Washington University + +Towson University + +College of the Holy Cross + +Georgia Military College + +Beloit College + +St. Mark's School, Dallas + +# Membership Plus + +Individuals subscribe to The UMAP Journal through COMAP's Membership Plus. This subscription also includes a CD-ROM of our annual collection UMAP Modules: Tools for Teaching, our organizational newsletter Consortium, on-line membership that allows members to download and reproduce COMAP materials, and a $10\%$ discount on all COMAP purchases. + +(Domestic) #2620 $104 + +(Outside U.S.) #2621 $117 + +# Institutional Plus Membership + +Institutions can subscribe to the Journal through either Institutional Plus Membership, Regular Institutional Membership, or a Library Subscription. Institutional Plus Members receive two print copies of each of the quarterly issues of The UMAP Journal, our annual collection UMAP Modules: Tools for Teaching, our organizational newsletter Consortium, on-line membership that allows members to download and reproduce COMAP materials, and a $10\%$ discount on all COMAP purchases. + +(Domestic) #2670 $479 + +(Outside U.S.) #2671 $503 + +# Institutional Membership + +Regular Institutional members receive print copies of The UMAP Journal, our annual collection UMAP Modules: Tools for Teaching, our organizational newsletter Consortium, and a $10\%$ discount on all COMAP purchases. + +(Domestic) #2640 $208 + +(Outside U.S.) #2641 $231 + +# Web Membership + +Web membership does not provide print materials. 
Web members can download and reproduce COMAP materials, and receive a $10\%$ discount on all COMAP purchases. + +(Domestic) #2610 $41 + +(Outside U.S.) #2610 $41 + +To order, send a check or money order to COMAP, or call toll-free + +1-800-77-COMAP (1-800-772-6627). + +The UMAP Journal is published quarterly by the Consortium for Mathematics and Its Applications (COMAP), Inc., Suite 3B, 175 Middlesex Tpke., Bedford, MA, 01730, in cooperation with the American Mathematical Association of Two-Year Colleges (AMATYC), the Mathematical Association of America (MAA), the National Council of Teachers of Mathematics (NCTM), the American Statistical Association (ASA), the Society for Industrial and Applied Mathematics (SIAM), and The Institute for Operations Research and the Management Sciences (INFORMS). The Journal acquaints readers with a wide variety of professional applications of the mathematical sciences and provides a forum for the discussion of new directions in mathematical education (ISSN 0197-3622). + +Periodical rate postage paid at Boston, MA and at additional mailing offices. + +# Send address changes to: info@comap.com + +COMAP, Inc., Suite 3B, 175 Middlesex Tpke., Bedford, MA, 01730 + +© Copyright 2006 by COMAP, Inc. All rights reserved. + +# Vol. 27, No. 2 2006 + +# Table of Contents + +# Editorial + +HIV: The Math + +Paul J. Campbell 93 + +# Special Section on the ICM + +Results of the 2006 Interdisciplinary Contest in Modeling Chris Arney 95 + +The United Nations and the Quest for the Holy Grail (of AIDS) Andrew Mehta, Quianwei Li, and Aaron Wise 113 + +Managing the HIV/AIDS Pandemic: 2006-2055 Tyler Huffman, Barry Wright III, and Charles Staats III . . . . . 
129

AIDS: Modeling a Global Crisis (and Australia) Chris Cecka, Michael Martin, and Tristan Sharp 145

The Spreading HIV/AIDS Problem Adam Seybert, David Ryan, and Nicholas Ross 163

Judge's Commentary: The Outstanding HIV/AIDS Papers Kari Murad and Joseph Myers 175

Author's Commentary: The Outstanding HIV/AIDS Papers Heidi Williams 181

![](images/6de243d8ac552a7e20a7e2b722628aee8a565c0e28857164f4f9709e17ad5dc8.jpg)

# Editorial

# HIV: The Math

Paul J. Campbell

Mathematics and Computer Science

Beloit College

Beloit, WI 53511

campbell@beloit.edu

Roughly $1\%$ of the world's adult population is infected with HIV, which currently results in 2.8 million deaths per year (3% of all deaths and almost three times as many as malaria).

What can mathematics offer in the struggle against this disease? Mathematics itself offers no protection and no cures. However, like claims early on that "everyone" can get AIDS, and revelations of false assurances of safety of the blood supply in the U.S. and France, mathematics can definitely help add to the scare effect. Many years ago, I rejected for publication in this Journal a paper by a student team that projected that most American adults would be infected by HIV—or dead from it—by now. Both the modeling and the conclusions were unsound. The dynamic of HIV, however, has since taught us again an old lesson, that fear is weaker than desire. (This Journal did publish a UMAP Module on HIV recently [Isihara 2005], dealing mainly with immunological aspects.)

What mathematics can do is project immediate past and current trends, to reveal what the future could be without basic change. In the case of HIV, the results are encouraging for some countries and discouraging for others. The teams in this year's Interdisciplinary Contest in Modeling were asked to focus their modeling on "critical" countries, with the teams determining for themselves the meaning of "critical" and selecting the countries.
The Outstanding papers published here unsurprisingly focus on many of the same countries (ones with large HIV-positive populations or with a large proportion of their population HIV-positive) and project, in human and in economic terms, the results of different strategies of intervention.

Mathematics can certainly help quantify the varied consequences of differing policies, such as for preventing HIV. But many people mistrust the predictions of mathematics because they mistrust mathematics. And they mistrust mathematics because they never understood it nor learned to appreciate its relevance.
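The kind of trend projection described above can be sketched in miniature. Purely as an illustration (the population, infection count, and growth rate below are invented, and the contest entries fit far richer models to the UN data files), a one-line logistic recursion already shows how a projected epidemic saturates in a closed population:

```python
# Illustration only: hypothetical parameters, not fitted to any HIV data.
# Yearly logistic update: dI = beta * I * (1 - I/N), where I is the number
# infected, N the total population, and beta a per-year growth rate.

def project_infections(i0: float, beta: float, n: float, years: int) -> list[float]:
    """Project the infected count forward one year at a time."""
    series = [i0]
    for _ in range(years):
        i = series[-1]
        series.append(i + beta * i * (1 - i / n))
    return series

# Hypothetical country: 40 million people, 5 million infected, beta = 0.07/yr.
trajectory = project_infections(5e6, beta=0.07, n=40e6, years=44)  # 2006-2050
print(f"projected infections in 2050: {trajectory[-1]:,.0f}")
```

Task 1 of this year's problem asks for exactly this kind of projection, with country-specific data and explicit, defended assumptions taking the place of the invented numbers here.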
He has been at Beloit College since 1977, where he served as Director of Academic Computing from 1987 to 1990. He is Reviews Editor for Mathematics Magazine and has been editor of The UMAP Journal since 1984. He is a co-author of the COMAP-sponsored book of applications-oriented collegiate mathematics For All Practical Purposes (7th ed. W.H. Freeman 2006), already used by more than half a million students.

# Modeling Forum

# Results of the 2006 Interdisciplinary Contest in Modeling

Chris Arney, ICM Co-Director

Division Chief, Mathematical Sciences Division

Program Manager, Cooperative Systems

Army Research Office

PO Box 12211

Research Triangle Park, NC 27709-2211

David.Arney1@arl.army.mil

# Introduction

A total of 224 teams of undergraduates and high school students, from 122 departments in 80 institutions in 6 countries, spent a weekend in February working on an applied mathematics problem in the 8th Interdisciplinary Contest in Modeling (ICM).

This year's contest began on Thursday, Feb. 2, and ended on Monday, Feb. 6. During that time, the teams of up to three undergraduates or high school students researched, modeled, analyzed, solved, wrote, and submitted their solutions to an open-ended complex interdisciplinary modeling problem involving public-health policy decisions concerning the HIV/AIDS epidemic. After the weekend of challenging and productive work, the solution papers were sent to COMAP for judging. Four of the top papers, which were judged to be Outstanding by the panel of judges, appear in this issue of The UMAP Journal. Results and winning papers from the first seven contests were published in special issues in 1999 through 2005.

COMAP's Interdisciplinary Contest in Modeling and its sibling contest, the Mathematical Contest in Modeling, are unique among mathematics competitions in that they are the only international contests in which students work in teams to find a solution.
Centering its educational philosophy on mathematical modeling, COMAP supports the use of mathematical tools to explore real-world problems. The contests serve society by developing students as problem solvers in order to become better-informed—and better-prepared—citizens, consumers, workers, and leaders.

This year's public health problem was particularly challenging in its demand for teams to utilize many aspects of science and mathematics in their modeling and analysis. The problem required teams to understand the science of the HIV virus and to understand and model the financial and policy issues associated with controlling the pandemic, in order to advise the United Nations on how to manage the resources available for addressing HIV/AIDS. The teams' job was to model several scenarios of interest and use their models to recommend the allocation of financial resources. In addition, to accomplish their tasks, teams had to consider trends in HIV/AIDS morbidity and mortality, together with historical demographic and health data on fertility, population, age distribution, life expectancy, and disease burden. The problem required analysis of issues of many kinds (economic, demographic, political, environmental, social, psychological, plus future technology), along with several challenging requirements needing scientific and mathematical analysis. The problem also included the ever-present requirements of the ICM: thorough data analysis, creativity, approximation, precision, and effective communication.

The author of the problem was Heidi Williams, Ph.D. student in Economics at Harvard University, who served on the panel of final judges. The problem originated from her work with the Center for Global Development (a nonprofit think tank in Washington, DC) to contribute to public policy efforts aimed at speeding the development of (and increasing access to) vaccines for diseases that are concentrated in low-income countries.
Commentary on the HIV/AIDS problem from Ms. Williams appears in this issue of The UMAP Journal.

All members of the competing teams are to be congratulated for their excellent work and dedication to scientific modeling and problem solving. This year's judges remarked that the quality of the modeling, analysis, and presentation was extremely high and the interdisciplinary modeling very robust. The award levels for this year's contest reflect the increase in quality.

The 2006 ICM was managed by COMAP via its information system connected to the World Wide Web, where teams registered, obtained contest materials, downloaded the problem, and also downloaded considerable amounts of provided data through COMAP's ICM Website.

Next year, we will continue with the public health theme for the contest problem. Teams preparing for the 2007 contest should consider reviewing interdisciplinary topics in the area of public health modeling and analysis.

Teams should also be aware of the presentation style required for this kind of writing. This contest mirrors reality. A paper will have an impact only if it is read, and most readers make the decision whether or not to read a paper based on the summary and the first few paragraphs of the paper. Although triage judges do spend some time with each paper, they cannot read every paper completely. Therefore, we cannot overemphasize the importance of the summary to the success of the paper. It should be clear, concise, and well-organized. Your summary should demonstrate your understanding of the problem by stating the key factors and the key trade-offs. Conclusions should be clearly stated along with the assumptions you made and the sensitivity of your conclusions to your assumptions. For the HIV/AIDS problem, two key assumptions were the extent to which ARV treatment changes the risk of contagion, and when a viable vaccine might be available.

Another factor is the length of the paper.
Long papers are not necessarily good papers. Good writing is extremely important. If you cannot describe your models clearly and succinctly, then they probably are not good models. Complicated models are not necessarily good. In fact, a large part of the art of modeling is choosing the most important factors and using appropriate science and mathematics. + +The final reminder, a very important one, is that any material that comes from other sources must be carefully and completely documented. + +We look forward to both continued improvement in the quality of the contest reports and continued increase in interest and participation in the ICM. It has grown in participation every year of its existence. + +Start-up funding for the ICM was provided by a grant from the National Science Foundation (through Project INTERMATH) and COMAP. Additional support is provided by the Institute for Operations Research and the Management Sciences (INFORMS) and IBM. + +# The HIV/AIDS Problem + +As the HIV/AIDS pandemic enters its 25th year, both the number of infections and the number of deaths due to the disease continue to rise. Despite an enormous amount of effort, our global society remains uncertain on how to allocate resources to fight this epidemic most effectively. + +You are a team of analysts advising the United Nations (UN) on how to manage the available resources for addressing HIV/AIDS. Your job is to model several scenarios of interest and to use your models to recommend the allocation of financial resources. The narrative below provides some background information and outlines specific tasks. + +# Task 1 + +Build a model to approximate the expected rate of change in the number of HIV/AIDS infections for the most critical country on each of the continents (Africa, Asia, Europe, North America, and South America) from 2006-2050, in the absence of any additional interventions. Fully explain your model and the assumptions that underlie your model. 
In addition, explain how you selected the countries to model. + +Use as a list of countries for inclusion in your analysis the countries included in the attached spreadsheet, which include all member states of the World Health Organization (WHO) as of 2003. + +Data: list_WHO_member_states.xls + +Reliable data on HIV prevalence rates by country are generally difficult to obtain. The attached spreadsheet includes several worksheets of data which you may use in your analysis. + +Data: hiv_aids_data.xls + +1. globalhiv-aidscases,1999: These data come from UNAIDS (the Joint United Nations Programme on HIV/ AIDS) and report the estimated number of HIV positive 0 to 49 year olds by country at the end of 1999. +2. HIV-aidsinAfricaovertime: These data come from the US government and give some piecemeal time series data on measured HIV prevalence rates among women of childbearing age, in urban areas, over time for some African countries. +3. HIV-aidssubtypes: These data come from UNAIDS and give the geographic distribution of HIV-1 subtypes by country. + +Also attached for your use are some basic population and demographic data. + +Data: + +1. fertility_data.xls: These data come from the UN and give age-specific fertility rates by major area, region, and country, 1995-2050 (births per thousand women) + +(a) Estimates for 1995-2005 +(b) Projections (under the assumption of medium fertility levels) for 2005-2050 + +2. population_data.xls: These data come from the UN and give total population (both sexes combined) by major area, region, and country, annually for 1950-2050 (thousands). + +(a) Estimates for 1950-2005 +(b) Projections (under the assumption of medium fertility levels) for 2006-2050 + +3. 
age_data.xls: These data come from the UN and give population (for both sexes, and by gender) by five-year age groups, major area, region, and country, 1950-2050 (thousands) + +(a) Estimates, 1950-2005 + +(b) Projections (under the assumption of medium fertility levels) for 2010-2050 + +4. birth_rate_data.xls: These data come from the UN and give crude birth rates by major area, region, and country, 1950-2050 (births per thousand population) + +(a) Estimates, 1950-2005 +(b) Projections (under the assumption of medium fertility levels) for 2005-2050 + +5. life Expectancy_0_data.xls: These data come from the UN and give life expectancy at birth (by sex and both sexes combined) by major area, region, and country, 1950-2050 (years) + +(a) Estimates, 1950-2005 +(b) Projections (under the assumption of medium fertility levels) for 2005-2050 + +There are a number of interventions that HIV/AIDS funding could be directed toward—including prevention interventions (voluntary counseling and testing, condom social marketing, school-based AIDS education, medicines to prevent mother-to-child transmission, etc.) and care interventions (treating other untreated sexually transmitted diseases, treating opportunistic infections, etc.). You should focus on only two potential interventions: provision of antiretroviral (ARV) drug therapies, and provision of a hypothetical HIV/AIDS preventive vaccine. + +# Task 2 + +First, estimate the level of financial resources from foreign aid donors that you realistically expect to be available to address HIV/AIDS, by year, from 2006-2050 for the countries you selected in Task 1. + +Then use the model that you developed in Task 1 and the finance data to estimate the expected rate of change in the number of HIV/AIDS infections for your selected countries from 2006-2050 under realistic assumptions for the following three scenarios: + +1. Antiretroviral (ARV) drug therapy +2. A preventive HIV/AIDS vaccine +3. 
Both ARV provision and a preventive HIV/AIDS vaccine + +Assume in these scenarios that there is no risk of emergence of drug-resistant strains of HIV (you will examine this issue in Task 3). + +Be sure to carefully describe the assumptions that underlie your model. + +You can choose whether these scenarios should be implemented for all countries, or for certain subsets of countries based on income cut-offs, disease burden, etc. Available for use if you wish is a spreadsheet of country-level income data. + +Data: income_data.xls: These data are from the World Bank (2002) and give per-capita gross national product (GNP) data as well as broad income classifications that you are free to use in your analysis if you wish. + +ARV drug therapies can have tremendous benefits in terms of prolonging the lives of individuals infected with HIV/AIDS. ARVs are keeping a high proportion of HIV/AIDS-infected individuals in rich countries alive, and policy makers and international institutions are facing tremendous political pressure to increase access to ARVs for individuals in poor countries. Health budgets in low-income countries are very limited, and it seems unlikely that poor countries will be able to successfully expand these programs to the majority of their populations using their own resources. Appendix 1 presents country-specific data from UNAIDS on current access to ARVs for a number of countries. + +The efficacy of ARVs depends in large part on adherence to the treatment regimen and to proper monitoring. The most favorable conditions for ARVs are structured programs with extensive counseling and physician care, as well as regular testing to monitor for disease progression and the onset of opportunistic infections. Non-adherence or inadequate treatment carries with it two very serious consequences. First, the treatment may not be effective for the individual undergoing treatment. 
Second, partial or inadequate treatments are thought to directly lead to the emergence of drug-resistant strains of HIV.

The price of the drugs initially used to treat patients has come down to several hundred dollars a year per patient, but delivering them and providing the necessary accompanying medical care and further treatment is the key administrative and financial challenge. It is estimated that purchasing and delivering antiretrovirals using the clinically-recommended approach (DOTS, or "directly observed short-course treatments"), which is intended to minimize the emergence of drug-resistant strains, would cost less than $1,100 per person per year [Adams, Gregor, et al., 2001, Consensus statement on antiretroviral treatment for AIDS in poor countries, http://www.hsph.harvard.edu/bioethics/pdf/consensus_aims_therapy.pdf].

For a preventive HIV vaccine, make assumptions that you feel are reasonable about the following (in addition to other factors that you may choose to include in your model):

1. The year in which an HIV/AIDS preventive vaccine might be available.
2. How quickly vaccination rates might reach the following steady-state levels of vaccination:

(a) If you wish to immunize new cohorts (infants), assume that the steady-state level for new cohorts is the country-by-country immunization rate for the third dose of the diphtheria-pertussis-tetanus vaccine (DTP3), as reported by the WHO (2002):

Data: vaccination_rate_data.xls

(b) If you wish to immunize adults (any group over age 5), assume that the steady-state level for older cohorts is the second dose of the tetanus toxoid (TT2) rate, as reported by the WHO (2002):

Data: vaccination_rate_data.xls

3. The efficacy and duration of protection of the vaccine.
4. Whether there would be epidemiological externalities from vaccination.
5.
Assume the vaccine is a three-dose vaccine, and can be added to the standard package of vaccines delivered under the WHO's Expanded Programme on Immunization (EPI) at an incremental additional cost of $0.75.

# Task 3

Reformulate the three models developed in Task 2, taking into consideration the following assumptions about the development of ARV-resistant disease strains.

Current estimates suggest that patients falling below 90–95% adherence to ARV treatment are at a "substantial risk" of producing drug-resistant strains. Use as an assumption for your analysis that a person receiving ARV treatment with adherence below $90\%$ has a $5\%$ chance of producing a strain of HIV/AIDS which is resistant to standard first-line drug treatments.

Second- and third-line ARV drug therapies are available; but assume for your analysis that these drugs are prohibitively expensive to implement in countries outside of Europe, Japan, and the United States.

# Task 4

Write a white paper to the United Nations providing your team's recommendations on the following:

1. Your recommendations for allocation of resources available for HIV/AIDS among ARV provision and a preventive HIV vaccine.
2. Your argument for how to weigh the importance of HIV/AIDS as an international concern relative to other foreign policy priorities.
3. Your recommendations for how to coordinate donor involvement for HIV/AIDS.

For (1): Assume that between now and 2010 the available financial resources could be allocated so as to speed the development of a preventive HIV vaccine—through directly financing vaccine research and development (R&D), or through other mechanisms. Any gains from such spending would move the date of development you assumed in Task 2 to some earlier date.

(Note: None of the data files, including the data in the appendices, are included in this article.
They are available on the COMAP Website, at http://www.comap.com/undergraduate/contestms/mcm/contestss/2006/problems/.) + +# The Results + +Solution papers were coded at COMAP headquarters so that names and affiliations of the authors were unknown to the judges. Each paper was then read preliminarily by at least two "triage" judges at the U.S. Military Academy at West Point, NY. At the triage stage, the summary, the model description, and overall organization are the primary elements in judging a paper. If the judges' scores diverged for a paper, the judges conferred; if they still did not agree on a score, additional triage judges evaluated the paper. + +Final judging by a team of modelers, analysts, and subject-matter experts took place on February 24 and 25, again at West Point, NY. The judges classified the 224 submitted papers as follows: + +
| | Outstanding | Meritorious | Honorable | Successful | Total |
|---|---|---|---|---|---|
| HIV/AIDS | 4 | 48 | 114 | 58 | 224 |
The four papers that the judges designated as Outstanding appear in this special issue of The UMAP Journal, together with commentaries by the author and the final judges. We list those four Outstanding teams and the Meritorious teams (and advisors) below. The complete list of all participating schools, advisors, and results is provided in the Appendix.

# Outstanding Teams

Institution and Advisor

Team Members

"The United Nations and the Quest for the Holy Grail (of AIDS)"

Duke University

Durham, NC

David Kraines

Aaron Wise

Arnav Mehta

Qianwei Li

"Managing the HIV/AIDS Pandemic: 2006-2055"

Duke University

Durham, NC (INFORMS Prize winner)

David Kraines

Tyler Huffman

Barry Wright III

Charles Staats III

"AIDS: Modeling a Global Crisis (and Australia)"

Harvey Mudd College

Claremont, CA

Lisette dePillis

Cris Cecka

Michael Martin

Tristan Sharp

"The Spreading HIV/AIDS Problem"

United States Military Academy

West Point, NY

Randal Hickman

Adam Seybert

David Ryan

Nicholas Ross

# Meritorious Teams (48)

Beihang University, Beijing, China (Hongying Liu)

Beijing Jiao Tong University, China (3 teams) (Li Guiting) (Zhang Shangli) (Bingli Fan)

Beijing University of Chemical Technology (Cheng Yan)

Beijing University of Posts and Telecommunications, China (2 teams) (Yuan Jianhua) (He Zuguo)

Carroll College, Helena, MT (Holly Zullo)

Central University of Finance and Economics, Beijing, China (Li Donghong)

China University of Mining and Technology, Xuzhou, Jiangsu, China (Zhou Shengwu)

Chongqing University, China (2 teams) (Gong Qu) (Li Zhiliang)

Dalian University, China (Tan Xinxin)

Dalian University of Technology, Institute of University Students' Innovation, Dalian, Liaoning, China (Fu Donghai)

Duke University, Durham, NC (William Allard)

East China University of Science & Technology, Shanghai, China (2 teams) (Ni Zhongxin) (Lu Xiwen)

Hangzhou
Dianzi University, Hangzhou, Zhejiang, China (Shen Hao)

Harbin Engineering University, Harbin, Heilongjiang, China (2 teams) (Luo Yuesheng) (Fan Zhaobing)

Harbin Institute of Technology, Harbin, Heilongjiang, China (Jiao Guanghong)

Jinan University, Guangzhou, Guangdong, China (2 teams) (Hu Daiqiang) (Fan Suohai)

Maggie Walker Governor's School, Richmond, VA (2 teams) (John Barnes)

Nanjing University of Posts & Telecommunications, Nanjing, Jiangsu, China (2 teams) (Qiu Zhonghua) (He Ming)

National University of Defense Technology, Changsha, Hunan, China (2 teams) (Mao Ziyang) (Wu Mengda)

Nonlinear Science Center, Hefei, Anhui, China (Tao Zhou)

Peking University Health Science Center, Beijing, China (Zhang Xia)

Southeast University, Nanjing, Jiangsu, China (Dan He)

Shandong University, Jinan, Shandong, China (Huang Shuxiang—2 teams)

Shandong University at Weihai, Weihai, Shandong, China (Cao Zhulou)

Sun Yat-Sen University, Guangzhou, Guangdong, China (Chen Zepeng)

Tsinghua University, Beijing, China (2 teams) (Lu Mei) (Xie Jinxing)

University College Cork, Cork, Ireland (Ben McKay)

University of Colorado, Boulder, CO (Anne Dougherty)

University of Colorado at Colorado Springs, Colorado Springs, CO (Radu Cascaval)

Xidian University, Xi'an, Shaanxi, China (2 teams) (Qi Xiaogang) (Zhou Shuisheng)

Zhejiang University, China (3 teams) (Tan Zhiyi) (Fei Jiaer—2 teams)

Zhejiang University City College, Hangzhou, Zhejiang, China (Wang Gui)

Zhejiang University Ningbo Institute of Technology, Ningbo, Zhejiang, China (Wang Jufeng)

# Awards and Contributions

Each participating ICM advisor and team member received a certificate signed by the Contest Directors and by the Head Judge. Additional awards were presented to the Duke University team advised by David Kraines from the Institute for Operations Research and the Management Sciences (INFORMS).
# Judging

Contest Directors

Chris Arney, Mathematical Sciences Division, Army Research Office, Research Triangle Park, NC

Gary W. Krahn, Dept. of Mathematical Sciences, U.S. Military Academy, West Point, NY

Associate Directors

Richard Cassady, Dept. of Industrial Engineering, University of Arkansas, Fayetteville, AR

Joseph Myers, Dept. of Mathematical Sciences, U.S. Military Academy, West Point, NY

Judges

Kari Murad, Dept. of Natural Sciences, College of Saint Rose, Albany, NY

Edward Pohl, Dept. of Industrial Engineering, University of Arkansas, Fayetteville, AR

Luis Prieto-Portar, Dept. of Civil and Environmental Engineering, Florida International University, Miami, FL

Frank Wattenberg, Dept. of Mathematical Sciences, U.S. Military Academy, West Point, NY

Heidi Williams (Ph.D. student), Harvard University, Cambridge, MA

Triage Judges

Dept. of Mathematical Sciences, U.S. Military Academy, West Point, NY: Jong Chung, Amy Erickson, Keith Erickson, Andrew Glen, Alex Heidenberg, Michelle Isenhour, John Jackson, Sebastien Joly, Jerry Kobylski, Joseph Lindquist, Keith McClung, Barbara Melendez, Fernando Miguel, Joe Myers, Mike Phillips, Frederick Rickey, Heather Stevenson, Rodney Sturdivant, Frank Wattenberg, and Brian Winkel.

# Acknowledgments

We thank:

- the Institute for Operations Research and the Management Sciences (INFORMS) for its support in judging and providing prizes for the winning team;
- IBM for their support for the contest;
- all the ICM judges and ICM Board members for their valuable and unflagging efforts;
- the staff of the U.S. Military Academy, West Point, NY, for hosting the triage and final judgings.

# Cautions

To the reader of research journals:

Usually a published paper has been presented to an audience, shown to colleagues, rewritten, checked by referees, revised, and edited by a journal editor.
Each of the student papers here is the result of undergraduates working on a problem over a weekend; allowing substantial revision by the authors could give a false impression of accomplishment. So these papers are essentially au naturel. Light editing has taken place: minor errors have been corrected, wording has been altered for clarity or economy, style has been adjusted to that of The UMAP Journal, and the papers have been edited for length. Please peruse these student efforts in that context. + +To the potential ICM Advisor: + +It might be overpowering to encounter such output from a weekend of work by a small team of undergraduates, but these solution papers are highly atypical. A team that prepares and participates will have an enriching learning experience, independent of what any other team does. + +# Editor's Note + +As usual, the Outstanding papers were longer than we can accommodate in the Journal, so space considerations forced me to edit them for length. It was not possible to include all of the many tables and figures, nor the white papers. + +In editing, I endeavored to preserve the substance and style of the paper, especially the approach to the modeling. + +Paul J. Campbell, Editor + +# Appendix: Successful Participants + +KEY: + +$\mathrm{P} =$ Successful Participation + +H = Honorable Mention + +$\mathbf{M} =$ Meritorious + +$\mathrm{O} =$ Outstanding (published in this special issue) + +
| INSTITUTION | CITY | ADVISOR | C |
|---|---|---|---|
| **CALIFORNIA** | | | |
| California State U. at Monterey Bay | Seaside | Hongde Hu | P |
| Harvey Mudd College | Claremont | Lisette de Pillis | O,H |
| | | Darryl Yong | P |
| **COLORADO** | | | |
| University of Colorado | Boulder | Anne Dougherty | M |
| | Colorado Springs | Radu Cascaval | M |
| **KENTUCKY** | | | |
| Asbury College | Wilmore | Kenneth Rietz | H,P |
| **MARYLAND** | | | |
| Villa Julie College | Stevenson | Eileen McGraw | H |
| **MASSACHUSETTS** | | | |
| Franklin W. Olin College of Eng. | Needham | Burt Tilley | H |
| **MONTANA** | | | |
| Carroll College | Helena | Holly Zullo | M |
| | | Kelly Cline | H,P |
| **NEW YORK** | | | |
| United States Military Academy | West Point | Randal Hickman | O |
| **NORTH CAROLINA** | | | |
| Duke University | Durham | David Kraines | O,O |
| | | William Allard | M |
| **OHIO** | | | |
| Youngstown State University | Youngstown | George Yates | H,H |
| | | Jay Kerns | H |
| **VIRGINIA** | | | |
| James Madison University | Harrisonburg | Hasan Hamdan | H |
| | | David Walton | H |
| Longwood University | Farmville | M. Leigh Lunsford | P |
| Maggie L. Walker Governor's Schl | Richmond | John Barnes | M,M,H |
| **CHINA** | | | |
| *Anhui* | | | |
| Anhui University (Stat) | Hefei | Chen Huayou | P |
| | | Zhou Ligang | P |
| Hefei University of Technology | Hefei | Su Huaming | P |
| | | Du Xueqiao | H |
| | | Huang Youdu | H |
| Nonlinear Science Center (Phys) | Hefei | Tao Zhou | M |
| University of Science and Technology of China (Chm) (Foreign Language) (Eng) | Hefei | Li Fu | H,P |
| | | Zhang Manjun | H |
| | | Liu Yanjun | P |
| *Beijing* | | | |
| BeiHang Univ. (BHU) (Eng) (Sci) | Beijing | Sun HaiYan | P |
| | | Liu Hongying | M |
| Beijing Institute of Technology | Beijing | Li Bingzhao | H |
| | | Sun Huafei | H |
| | | Li Yuan | H |
| Beijing Jiao Tong University | Beijing | Li Guiting | M,H |
| | | Wang Xiaoxia | H |
| (CS) | | Fan Bingli | M |
| | | Shang Pengjian | P,P |
| (Eng) | | Wang Bingtuan | P,P |
| | | Yu Jiaxin | P |
| (Sci) | | Feng Guochen | P,P |
| | | Zhang Shangli | M |
| Beijing Language and Culture University (CS) (Finance) | Beijing | Liu Guilong | H,H |
| | | Song Rou | P |
| Beijing University of Chemical Technology (Info) | Beijing | Yuan Wenyan | P |
| | | Cheng Yan | M |
| (Sci) | | Jiang Xinhua | H |
| Beijing University of Posts and Telecomm. (Info) | Beijing | He Zuguo | H |
| | | Zhang Wenbo | H |
| | | He Zuguo | M |
| (Phys) | | Ding Jinkou | P |
| (Telecomm. Eng) | | Yuan Jianhua | M |
| Beijing University of Technology | Beijing | Guo En | P |
| Central University of Finance and Economics | Beijing | Fan Xiaoming | H |
| | | Yin Xianjun | H |
| | | Li Donghong | M |
| China University of Geoscience (Econ) (Info) | Beijing | Zheng Xunye | P |
| | | Chen Zhaodou | H,H |
| Peking University | Beijing | Deng Minghua | H |
| | | Liu Yulong | H |
| (Biomath.) | | Zhang Xia | M,P,P |
| (Financial Math) | | Wu Lan | H |
| (Phys) | | Mu Liangzhu | H |
| Tsinghua University | Beijing | Lu Mei | M,H |
| | | Xie Jinxing | M |
| (Econ) | | Bai ChongEn | H |
| *Chongqing* | | | |
| Chongqing University (Chm) | Chongqing | Li Zhiliang | M,H |
| (Info) | | Gong Qu | M |
| *Fujian* | | | |
| Xiamen University (Info) | | Liu Songyue | H |
| (Phys.) | Xiamen | Wu Chenxu | H |
| (Software) | | Wang Shengrui | H,H |
| *Guangdong* | | | |
| Jinan University | Guangzhou | Hu Daiqiang | M |
| | | Fan Suohai | M |
| (CS) | | Luo Shizhuang | H |
| South-China Normal University | Guangzhou | Liu Xiuxiang | P,P |
| South China University of Technology (Bus) | Guangzhou | Pan Hua Shao | P |
| (CS) | | Tao Zhi Sui | H |
| (Electric Power) | | Liu Xiao Lan | P |
| Sun Yat-sen University | Guangzhou | Fan ZhuJun | H |
| | | Wang Qi-Ru | P |
| (CS) | | Chen ZePeng | M |
| *Guizhou* | | | |
| Guizhou University for Nationalities | Guiyang | Tian Yingfu | P |
| *Hebei* | | | |
| Hebei University of Technology (Eng) | Tianjin | Xu Changqing | H |
| North China Electric Power University (Eng) | Baoding | Zhao Hongshan | H |
| *Heilongjiang* | | | |
| Harbin Engineering University | Harbin | Fan Zhaobing | M |
| (Sci) | | Luo Yuesheng | M |
| Harbin Institute of Technology | Harbin | Liu Kean | P |
| | | Shang Shouting | H,H |
| | | Ge Hong | H |
| | | Zhang Chiping | H |
| | | Jiao Guanghong | M |
| (Env'l Sci) | | Zheng Tong | H |
| Harbin University of Science and Technology | Harbin | Chen Dongyan | P |
| | | Li Dongmei | H |
| | | Wang Shuzhong | H |
| Northeast Agricultural University (CS) | Harbin | Li Fangge | H |
| (Food Sci.) | | Li Fangge | H |
| *Hubei* | | | |
| Huazhong U. of Sci. and Tech. (Eng) | Wuhan | Gao Liang | H |
| *Hunan* | | | |
| Hunan University | Changsha | Luo Han | P |
| | | Li Xiaopei | H |
| (Stat) | | Yan Huahui | H |
| National University of Defense Technology | Changsha | Duan Xiaojun | H |
| | | Wu Mengda | M |
| | | Mao Ziyang | M |
| *Inner Mongolia* | | | |
| Inner Mongolia University | Hohhot | Han Hai | P |
| *Jiangsu* | | | |
| China University of Mining and Technology | Xuzhou | Zhou Shengwu | M |
| (Educ. Admin.) | | Zhu Kaiyong | H |
| Nanjing Univ. of Sci. & Tech. | Nanjing | Xu Chungen | P |
| | | Huang Zhengyou | H |
| (Stat) | | Zhang Zhengjun | H |
| Nanjing University of Posts and Telecomm. | Nanjing | Xu LiWei | H |
| | | Qiu Zhonghua | M |
| | | He Ming | M |
| Southeast University | Nanjing | Dan He | M,H |
| | | Feng Wang | H,P |
| Xuzhou Institute of Technology | Xuzhou | Jiang Yingzi | H |
| *Jilin* | | | |
| Jilin University (Communication Eng.) | Changchun | Cao Chunling | H,H |
| | | Yao Xiuling | H |
| (CS) | Changchun | Lu Xianrui | H |
| *Liaoning* | | | |
| Dalian Maritime University | Dalian | Zhang Yunjie | H |
| | | Chen Guoyan | P |
| (CS) | | Yang Shuqin | H,P |
| (Econ) | | Zhang Yun | H |
| Dalian University (Info) | Dalian | Tan Xinxin | M |
| | | Dong Xiangyu | H |
| | | Shen Lianshan | H |
| | | Liu Zixin | P |
| Dalian University of Technology (Innovation College) | Dalian | Dai Wanji | H |
| | | Guo Qiang | P |
| | | Fu Jie | H,H |
| | | Liu Xiangdong | P |
| | | Zhang Hengbo | H |
| | | Ge Rendong | P |
| | | Li Yuxing | H,P |
| | | Pan Qiuhui | H,P |
| | | Fu Donghai | M |
| Shenyang Inst. of Aero. Engineering (CS) | Shenyang | Feng Shan | P |
| | | Zhu Limei | H |
| *Shaanxi* | | | |
| Northwestern Polytechnical University | Xi'an | Xiao Huayong | H,H |
| | | Lu Quanyi | P |
| Xi'an Jiaotong University | Xi'an | Fu Shibin | H,H |
| | | Zhou Yicang | P |
| Xidian University (CS) (Software) | Xi'an | Qi Xiaogang | M |
| | | Zhou Shuisheng | M |
| | | Mu Xuewen | P |
| *Shandong* | | | |
| Shandong University | Jinan | Liu Baodong | H,H |
| | | Fu Guohua | H |
| | | Huang Shuxiang | M,M |
| (Software) | | Rong Xiaoxia | H |
| | | Luan Junfeng | H |
| (CS) | | Feng Haodi | H |
| Shandong University at Weihai | Weihai | Cao Zhulou | M |
| *Shanghai* | | | |
| Donghua University (Econ) | Shanghai | Li Yong | H |
| | | Ying Mingyou | P |
| | | Ma Yufang | H |
| East China Univ. of Science and Tech. | Shanghai | Qian Xiyuan | P |
| | | Ni Zhongxin | M |
| | | Lu Xiwen | M,P |
| Fudan University | Shanghai | Cai Zhijie | M |
| | | Cao Yuan | P |
| Shanghai Jiaotong U., Minhang Branch | Shanghai | Xiao Liuqing | H,H |
| *Sichuan* | | | |
| Univ. of Electronic Sci. and Tech. of China | Chengdu | Du Hongfei | H,P |
| | | Zhang Yong | H |
| *Tianjin* | | | |
| Nankai University (Bus) | Tianjin | Hou Wenhua | H |
| Tianjin University (Chm) | Tianjin | Shi Guoliang | H |
| (CS) | | Huang Zhenghai | P |
| *Zhejiang* | | | |
| Hangzhou Dianzi University (CS) | Hangzhou | Li Chengjia | H |
| (Info) | | Shen Hao | M,H |
| Zhejiang Gongshang University | Hangzhou | Zhu Ling | H,H |
| (Info) | | Zhao Heng | P |
| Zhejiang Sci-Tech University | Hangzhou | Shi Guosheng | H |
| Zhejiang University | Hangzhou | Tan Zhiyi | M |
| (Sci) | | Fei Jiaer | M,M |
| (Ningbo Inst. of Tech.) | Ningbo | Li Zhening | H |
| | | Wang Jufeng | H |
| | | Tu Lihui | H |
| Zhejiang University City College (CS) | Hangzhou | Zhang Huizeng | H |
| (Info) | | Kang Xusheng | H |
| | | Wang Gui | M |
| Zhejiang University of Technology | Hangzhou | Wu Xuejun | H,H |
| (Jianxing College) | | Zhou Wenxin | H |
| **FINLAND** | | | |
| Helsingin Matematiikkalukio | Helsinki | Juho Pakarinen | P |
| **HONG KONG** | | | |
| Hong Kong Baptist University | Kowloon | M.L. Tang | P |
| **INDONESIA** | | | |
| Institut Teknologi Bandung | Bandung | Edy Soewono | H |
| | | Kuntjoro Sidarto | P |
| **IRELAND** | | | |
| University College Cork | Cork | Ben McKay | M |
# Editor's Note

For team advisors from China, I have endeavored to list family name first.

# Abbreviations for Organizational Unit Types (in parentheses in the listings)
| Abbreviation | Field | Organizational unit names |
|---|---|---|
| (none) | Mathematics | M; Pure M; Applied M; Computing M; M and Computer Science; M and Computational Science; M and Information Science; M and Statistics; M, Computer Science, and Statistics; M, Computer Science, and Physics; Mathematical Sciences; Applied Mathematical and Computational Sciences; Natural Science and M; M and Systems Science; Applied M and Physics |
| Bio | Biology | B; B Science and Biotechnology; Biomathematics; Life Sciences |
| Bus | Business | B; B Management; B and Management; B Administration |
| Chm | Chemistry | C; Applied C; C and Physics; C, Chemical Engineering, and Applied C |
| CS | Computer | C Science; C and Computing Science; C Science and Technology; C Science and (Software) Engineering; Software; Software Engineering; Artificial Intelligence; Automation; Computing Machinery; Science and Technology of Computers |
| Econ | Economics | E; E Mathematics; Financial Mathematics; E and Management; Financial Mathematics and Statistics; Management; Business Management; Management Science and Engineering |
| Eng | Engineering | Civil E; Electrical E; Electronic E; Electrical and Computer E; Electrical E and Information Science; Electrical E and Systems E; Communications E; Civil, Environmental, and Chemical E; Propulsion E; Machinery and E; Control Science and E; Mechanisms; Mechanical E; Electrical and Info E; Materials Science and E; Industrial and Manufacturing Systems E |
| Info | Information | I Science; I and Computation(al) Science; I and Calculation Science; I Science and Computation; I and Computer Science; I and Computing Science; I Engineering; I and Engineering; Computer and I Technology; Computer and I Engineering; I and Optoelectronic Science and Engineering |
| Phys | Physics | P; Applied P; Mathematical P; Modern P; P and Engineering P; P and Geology; Mechanics; Electronics |
| Sci | Science | S; Natural S; Applied S; Integrated S; School of S |
| Software | Software | |
| Stat | Statistics | S; S and Finance; Mathematical S; Probability and S |
# The United Nations and the Quest for the Holy Grail (of AIDS)

Aaron Wise

Arnav Mehta

Qianwei Li

Duke University

Durham, NC

Advisor: David Kraines

# Summary

In response to the HIV/AIDS pandemic, worldwide interest in HIV treatments has grown, but uncertainty remains about how to fund treatments. Nations must choose between combinations of sexual education, antiretroviral treatments (ART), and vaccine research. We aim to quantify the effects of each of these treatments in order to determine how best to confront AIDS.

We propose an iterative deterministic model for measuring the progression of HIV through 2050. The crux of our model is use of expressions that predict infection and death rates. Our model accounts for the three main factors in transmission: unprotected intercourse, non-sterile drug needles, and births of children to HIV-positive mothers. Furthermore, we analyze country-specific parameters, such as prevalence of HIV among subpopulations (e.g., homosexuals) as well as condom usage and risky-sex rates, and model the influence of treatments. Additionally, we investigate the impact of multiple-drug resistance. Using data extrapolated from South African prenatal clinics, we recreate historical trends to demonstrate our model's capacity for accurate prediction.

Our goal is to assess which methods minimize the number of HIV cases in both the short run and the long run, and to use these results to guide policy decisions. Condom usage, ARV therapy, and a vaccine all affect the course of HIV development. Current aid efforts, including sexual education, which reduce risky sex and promote condom use, are very valuable.

We predict that current downward trends will continue and that the HIV outbreak is beginning to recede. If ART does not decrease the transmission rate, its widespread use may increase the scope of an outbreak; if it does decrease the transmission rate, it can be an important factor in containing HIV.
We conclude that vaccines provide the greatest promise for long-term prevention.

We propose an economic model for distributing resources. Even with a vaccine, economic considerations promote ARV usage. We finally recommend universal sexual education, distribution of ARVs based on infection profiles, and adequate endowment for research toward a vaccine, the holy grail of HIV.

# Introduction

We focus on countries selected for diversity of the origin of their outbreak and ability to be extrapolated to other outbreaks:

- South Africa has a large HIV/AIDS population but little drug use.
- India has an enormous population and a small but growing AIDS presence.
- Russia has a large HIV-positive injected-drug community.
- The United States has a fairly small HIV population (clustered among its homosexual population) limited by the high safe-sex rate.

# Experimental

# Overview

An important factor in the continuation of an epidemic is $R_0$, a measure of the reproductive rate of the disease: on average, each HIV carrier infects $R_0$ other people [Velasco-Hernandez 2002]. An epidemic spreads only if $R_0$ is greater than 1. A treatment is preventive if it decreases the reproduction rate.

We create an iterative deterministic model. In each iteration, the number of new HIV/AIDS cases is a function of the previous state of the system, along with the expected rate of disease transmission. Hence,

$$
R_0 \propto \frac{d(\text{total AIDS population})}{d(\text{time})}.
$$

Because $R_0$ is useful as a measure of the change in total population, we discuss results in terms of trends and total HIV-positive populations.

The model determines the change in HIV/AIDS victims based on both new cases of HIV as well as deaths of previous HIV victims due to AIDS.

The three primary vectors for transmission of HIV are unprotected sexual intercourse, use of "dirty" (reused) drug needles, and childbirth by HIV-positive mothers.
We balance these factors against the death rate in order to determine the net number of new cases. We use a feedback-based model for death, where the number of AIDS deaths is based on the expected life span of a victim and the number of victims who contracted HIV at specific previous points in time.

# Basic Model

The general system of the previous section can be written as:

$$
\#\text{aids}(t) = \text{newInfections}(t) + \#\text{aids}(t-1) - \text{deaths}(t),
$$

where $\#\text{aids}(t)$ is the total number of living HIV/AIDS victims in year $t$. Our initial point, $t_0$, is the year 2000. Values at indices $t < 0$ are from historical data.

# New Infection Rate

The basic model assumes that HIV transmission occurs only during sex, that is, $\text{newInfections}(t) = \text{intercourseT}(t)$, the number of new HIV victims due to unprotected sexual intercourse. Later we model other methods of transmission.

We model HIV transmission due to sex as related to the number of instances of intercourse times a rate of transmission per sex act [Smith 2005]. The transmission rate male $\rightarrow$ female is twice as high as the rate female $\rightarrow$ male. We use $\rho$ to represent the total risk of transmission through sexual contact; the number of instances of intercourse is proportional to $\rho$. We assume an average intercourse rate of 3 times per week, or about 160 times per year [Leynaert et al. 1998]. Taking into account heterosexual anal intercourse causes an increase of the risk factor $\rho$ to 200 (or more).

Since we assume primarily heterosexual intercourse, we track separately the HIV/AIDS population of each sex. We use subscript M to denote a function (or variable) that includes only men, and F to denote a function including only women.
Hence we represent the intercourseT rates as

$$
\text{intercourseT}_{\mathrm{M}}(t) = (\text{percent unaffected men}) \times (\text{number of affected women}) \times (1 - (\text{condom use rate})) \times (\text{risk constants}).
$$

That is, for men:

$$
\text{intercourseT}_{\mathrm{M}}(t) = (1 - \%\text{aids}_{\mathrm{M}}) \times \#\text{aids}_{\mathrm{F}}(t-1) \times (1 - \text{condomRate}) \times (\rho \times \text{riskSexCons}_{\mathrm{M}});
$$

and for women:

$$
\text{intercourseT}_{\mathrm{F}}(t) = (1 - \%\text{aids}_{\mathrm{F}}) \times \#\text{aids}_{\mathrm{M}}(t-1) \times (1 - \text{condomRate}) \times (\rho \times \text{riskSexCons}_{\mathrm{F}}).
$$

# Assumptions

- Condom usage is static over time, a worst-case scenario, since it is unlikely that condom usage would decrease if sexual education programs follow the status quo or become increasingly well-funded and organized.
- Condoms are $100\%$ effective. This assumption is very close to reality ($>99\%$ effective) and is a useful simplification.
- All sexual acts have the same (very low) chance of infection. This assumption allows us to treat monogamous and promiscuous sexual behaviors as equally likely to spread HIV.

# Death Rate

Most carriers of HIV eventually die of AIDS; we assume that all do. On average, it takes 9 years for an HIV infection to become AIDS [Morgan 2002]. We assume that regular sexual activity stops when symptomatic AIDS occurs; hence, the number of years of activity after HIV infection is 9-10. We denote this parameter as averageDeath. We use a feedback loop for the death rate:

$$
\text{deaths}(t) = \text{newInfections}(t - \text{averageDeath}).
$$

# Assumptions

- All HIV carriers die of AIDS. This major simplifying assumption prevents having to track population ages. Overall it causes a (slight) increase in the average life span of the HIV carrier, and hence is a worst-case estimate. This assumption is also tempered by our treatment of ART (see below).
- While ART increases life span, the average age at HIV contraction plus the new life span cannot exceed the life expectancy. This assumption minimizes the impact of the assumption that all carriers die of AIDS.
- Each carrier dies after being infected for exactly averageDeath years.

# Additional Model Parameters

# Transmission Due to Reproduction

A major social impact of HIV/AIDS is creation of an orphan population whose parents have both died of AIDS, a common phenomenon where large percentages of adults are HIV-infected (such as in South Africa). The birth of infected babies, however, does not impact new cases, because they die before they participate in any form of transmission (intercourse, drug-needle use, and childbirth). For risk of transmission at childbirth, we use $35\%$ in undeveloped countries and $1\%-5\%$ in developed countries [UNAIDS and WHO 2005]. Transmission due to childbirth is calculated as

$$
\text{birthT}(t) = \text{birthRate} \times \#\text{aids}_{\mathrm{F}}(t-1) \times \text{riskBirth}.
$$

# Assumptions

- All infected children die before contributing to the spread of HIV/AIDS.
- Women with AIDS are as likely as other women to have a child; because of the previous assumption, this assumption has low impact on the model.

# Transmission Due to Drug Needles

Needle-sharing is an important factor in HIV transmission in many countries, including India and Russia.
To incorporate drug needles, we re-express the infection rate as

$$
\text{newInfections}(t) = \text{intercourseT}(t) + \text{needleT}(t).
$$

We calculate the needle transmission rate based on a drug risk factor $\rho_D$, the average number of drug injections per drug user per year. We also assume that the dirty-needle rate is $35\%$ [UNAIDS and WHO 2005]. We account for the sex difference in drug use (an $80/20$ male/female split). We take the risk of infection from a single drug use (riskDrugCons) from Leynaert et al. [1998]. Hence, the drug transmission rate is

$$
\text{needleT}(t) = (\text{number of HIV-negative drug users}) \times (\text{chance of sharing a drug needle with someone HIV-positive}) \times (\text{risk constants});
$$

that is,

$$
\text{needleT}(t) = \#\mathrm{HIV}^{-}\text{DrugUsers}_{\mathrm{M}} \times (0.35 \times \%\mathrm{HIV}^{+}\text{Drug}) \times (\rho_D \times \text{riskDrugCons}).
$$

# Assumptions

- Constant dirty-needle rate.
- For each drug use, the chance of HIV transmission is the same.

# Transmission Due to Homosexual Intercourse

In most countries, HIV/AIDS is not associated with the homosexual community; rather, the more common carriers are (heterosexual) sex workers. In the United States, however, a disproportionate portion of HIV victims are homosexual. In the model, we compute transmission due to homosexual intercourse very similarly to intercourseT(t). The major change is assuming that homosexuals have solely homosexual sex, with a different risk-per-sexual-act constant (also from Leynaert et al. [1998]).
$$
\text{intercourseT}_{H}(t) = (\text{percent unaffected gay men}) \times (\text{number of affected gay men}) \times (1 - \text{condom use}) \times (\text{risk constants}),
$$

or

$$
\text{intercourseT}_{H}(t) = (1 - \%\text{aids}_{H}) \times \#\text{aids}_{H}(t-1) \times (1 - \text{condomRate}) \times (\rho \times \text{riskSexCons}_{H}).
$$

# Antiretrovirals

The foremost effect of ARVs is not to prevent transmission but to extend life span. There is no scientific consensus on the effect of ART on transmission [Anderson 1992; Krieger 1991; Royce 1997]. Hence, we incorporate this effect as an input parameter, arvFactor; we let it vary between 0.25 and 1. Another input parameter is arvPortion, the percentage of HIV-positive individuals who receive ARV treatment. We implement ART by assuming that large-scale treatment begins around 2015, after deployment of infrastructure. We assume that ARV patients have a slightly decreased amount of risky sex, due to increased sexual education from repeated contact with health personnel.

We calculate the transmissions due to ART patients and other HIV victims separately:

$$
\text{newInfections}(t) = \text{intercourseT}(t) \times (1 - \text{arvPortion}) + \text{intercourseT}(t) \times \text{arvPortion} \times \text{arvFactor}.
$$

We assume $100\%$ adherence (except when talking about resistant-strain development; see below).

# Drug Resistance

A risk involved in antiretroviral therapy is the creation of treatment-resistant strains of HIV.
This can occur when an ART patient follows the treatment regimen incompletely; selection pressure on the virus eases, allowing HIV to replicate once again in greater numbers resistant to the drugs.

We model drug resistance using the parameters in the problem statement. ARV-resistant infections are tracked separately, so that, while ARV-resistant carriers may continue to take ART, they do not benefit. The number of new ARV-resistant strains that develop as a direct result of missing ARV treatments is modeled as

$$
\text{newInfections}_{\text{resistant}}(t) = (\%\text{ AIDS victims on ARV}) \times (1 - (\text{adherence rate})) \times (\text{chance of developing resistance}).
$$

We assume that $85-95\%$ of ART patients adhere to treatment [Rutenburg 2006].

# The Holy Grail, or, The AIDS Vaccine

We model a vaccine by assuming that as immunizations increase, a growing portion of the population is unable to contract HIV. This is in direct contrast to the way in which ARV affects HIV/AIDS rates. Originally we had

$$
\text{intercourseT}_{\mathrm{F}}(t) = (1 - \%\text{aids}_{\mathrm{F}}) \times \#\text{aids}_{\mathrm{M}}(t-1) \times (1 - \text{condomRate}) \times (\rho \times \text{riskSexCons}_{\mathrm{F}}).
$$

The factor $(1 - \%\text{aids}_{\mathrm{F}})$ determines what portion of the population can catch HIV, which in the case of a vaccine becomes $\left((1 - \%\text{aids}_{\mathrm{F}}) - \%\text{vaccinated}\right)$. We assume the same vaccination rate for both men and women, hence apply no subscript to that term.

To simulate $\%\text{vaccinated}$, we fit a logistic curve. We assume that a vaccine will be available by 2015 and a well-regulated vaccine program will achieve steady-state vaccination by 2030.
The curve starts at $0\%$ and plateaus at a steady-state level equal to the second-dose tetanus-typhoid vaccination rate for that country (as given in 2002 WHO data for the problem statement). We use a logistic curve because the rate of increase

- will be low at first, since awareness will be low and infrastructure is needed;
- will increase as awareness builds and as infrastructure becomes established; and
- will decline as people become vaccinated and fewer remain unvaccinated.

# Assumptions

- The vaccine is $100\%$ effective.
- The vaccine distribution program is well-organized.

# Country Choice and Country-Specific Parameters

To determine the countries most critical in terms of HIV/AIDS from 2006 to 2050, we used five indicators:

- trends in prevalence rates,
- demographics of infected population,
- level of HIV/AIDS education and awareness,
- routes of transmission, and
- integrity and availability of current and historical HIV/AIDS statistics.

Based on these indicators, we selected a country from each of the continents Africa, Asia, Europe, North America, and Australia, namely, South Africa, India, Russia, U.S.A., and Australia.

![](images/8efd7b7c00c1c1812c02133d567dd64f59878239e8b62f70972960f0e8b71aa6.jpg)
Figure 1. Comparison of model to South African historical data.

# Results

# Historical Fitting

We validate our model by examining historical HIV rates from prenatal clinics in South Africa between 1995 and 2005 (Figure 1). Our model fits the data well with three minor changes:

- a slight decrease in the life span of the average HIV patient (to 7 years from 9; this difference is probably due to lower sensitivity of HIV detection tests at earlier time points);
- an increase in the risky-sex rate constant (which is very unsurprising, since the data predate much sexual education effort); and
- an increasing condom use rate over time.
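The iterative recurrence, the heterosexual intercourseT terms, and the death feedback described above can be sketched in code. This is a minimal illustration, not the authors' implementation; all parameter values (populations, risk constants, condom rate) are invented placeholders.

```python
# Minimal sketch of the iterative model: #aids(t) = newInfections(t)
# + #aids(t-1) - deaths(t), with deaths(t) = newInfections(t - averageDeath).
# All numbers below are illustrative placeholders, not fitted values.

def simulate(years=30,
             pop_m=20e6, pop_f=20e6,          # adult population by sex
             aids_m=5e5, aids_f=5e5,          # initial HIV+ counts
             condom_rate=0.2,
             rho=200,                         # sex acts per year (incl. anal)
             risk_m=1e-3, risk_f=2e-3,        # riskSexCons; F about 2x M
             average_death=10):               # years from infection to death
    """Return the yearly totals of the HIV/AIDS population."""
    hist_m = [0.0] * average_death            # newInfections feedback queues
    hist_f = [0.0] * average_death
    totals = []
    for _ in range(years):
        # intercourseT for each sex (basic model)
        new_m = (1 - aids_m / pop_m) * aids_f * (1 - condom_rate) * (rho * risk_m)
        new_f = (1 - aids_f / pop_f) * aids_m * (1 - condom_rate) * (rho * risk_f)
        # deaths(t) = newInfections(t - averageDeath)
        aids_m += new_m - hist_m.pop(0)
        aids_f += new_f - hist_f.pop(0)
        hist_m.append(new_m)
        hist_f.append(new_f)
        totals.append(aids_m + aids_f)
    return totals

print(f"cases after 30 years: {simulate()[-1]:,.0f}")
```

Raising condom_rate in this sketch lowers the final case count, and full condom use freezes the epidemic, mirroring the trends the paper reports.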
# Results by Country

The model predicts the following trends in the HIV/AIDS population:

- An increase in condom usage leads to a decrease in cases.
- An increase in the average life span of a patient leads to an increase in cases.
- A decrease in the transmission rate leads to a decrease in cases.

![](images/b9bbbd2bed5d03ff493a384aa38bc20c4328c2ffefa6693b0ca1385b84b1b8ee.jpg)
Figure 2. Prediction of HIV/AIDS epidemic in South Africa.

- Distribution of ARV drugs reduces the number of cases due to sexual acts only if ARVs cause a decrease in the per-act transmission rate.
- Steady distribution of a vaccine steadily decreases cases to a baseline level.

# South Africa

The model (Figure 2) predicts a decrease in cases prior to 2015 for all scenarios. Introduction of ART shifts the steady-state level—to higher if there is no decrease in transmission, to drastically lower with a decrease in transmission. The sudden increase in 2020 results from the assumption that ART instantaneously increases the life span of all infected individuals, thereby suddenly lowering the death rate. Thus, the deviation is an artifact of our assumptions.

Implementation of a well-regulated vaccine program starting in 2015 leads to a gradual decrease in cases over the next 20 years, with a significant portion of the disease eradication occurring between 2020 and 2025.

# India

The model (Figure 3) indicates a steady increase in cases over the next 45 years. This differs from the observed trend in South Africa most notably because of significant drug usage in India.

![](images/b8b887ae155f3f03a61260b52f9cdba856d31de21bcfa1007c1ebc6052fa0465.jpg)
Figure 3. Prediction of HIV/AIDS epidemic in India.

The introduction of ART that increases only the life span increases cases every year; with an accompanying decrease in transmission, cases decline. Again there is a small sudden jump around 2020, a consequence of our assumptions.
With a vaccination program, possibly supplemented with ARV drugs, the model predicts that cases will plateau soon after 2030, again with the most significant decrease 5 to 10 years after implementation of the vaccination program, in 2020 to 2025.

# United States

In the United States, HIV/AIDS is predominantly spread through homosexual interaction. The model predicts the number of cases to exceed 6 million by 2050 if no advanced treatment is available (Figure 4).

In the U.S., unlike the other countries that we examine, ARV drugs that do not affect transmission rate have no effect on cases. ARV drugs that decrease transmission curb new cases; the model predicts a stable number of slightly over 3 million cases after 2030.

A well-regulated vaccination program largely eradicates the virus by 2035.

![](images/e3ca7d5a23330e3360f7e1c928601adaadbefaf4cdf6854b05ed85482e3737c1.jpg)
Figure 4. Prediction of HIV/AIDS epidemic in the United States.

# Russia

The predominance of transmission through injected-drug use in Russia is much greater than in India and plays a larger role in spreading the virus.

For ART that decreases transmission, cases increase initially but level off much more quickly than for India or the U.S.

At the normal vaccination rate for a Russian adult, an HIV/AIDS vaccine causes the number of cases to decline from 2 million cases in 2020 (5 years after the implementation of the vaccine program) to 1 million cases by 2050. A vaccine program in Russia is not as effective as for India or the U.S. because of an unusually low adoption rate for vaccines ($37\%$) in Russia. Figure 5 shows a much faster eradication of the virus with a higher vaccination rate. Thus, Russia can significantly curb cases by spending resources on increasing the general vaccination rate among adults.

# Analysis

All of the models predict that vaccination will be the most effective method of HIV/AIDS eradication.
If ARV drugs do not influence the transmission rate, their introduction could be catastrophic. Increasing the life span of HIV/AIDS patients provides more time for each individual to spread the disease. Our model predicts more cases over the next 45 years for this scenario than for any other, for all countries.

![](images/75246e48d7e148b7ab76d3d44f642e8e138b26c3315965c131277f609806a5db.jpg)
Figure 5. Prediction of HIV/AIDS epidemic in Russia.

The United States is an exception, where $63\%$ of cases are among homosexuals [UNAIDS and WHO 2005]; an ARV program with no effect on transmission rate does not increase the number of cases. A possible reason: we assume that ART patients are more informed and therefore less likely to perform risky sexual acts; in the other countries modeled, this effect is too small to matter, while in the United States the reduction in risky sexual acts is enough to counterbalance the increase in life span.

ARV drugs that decrease transmission effectively curb the spread of HIV in every case, causing cases to remain fairly constant after 2030.

One of the least expected results is the decrease in cases in South Africa: The incidence of HIV/AIDS has peaked and is now on a downward turn. This, however, is not completely unexpected, since HIV/AIDS has been present for the longest time in South Africa and general awareness about the disease has increased. Over time, an equilibrium point is reached; eventually the number of new cases equals the number of deaths due to AIDS, and the population of infected individuals remains fairly constant.

# Analysis of Sensitivity and Individual Parameters

We tested condomRate (percentage of sexual acts performed with condoms) and arvPortion (percentage of HIV/AIDS population with access to ARV drugs). We also tested the effect of differences in transmission-rate decreases due to ARV treatment and drug resistance.
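A sensitivity sweep of this kind can be sketched with a simplified one-population version of the recurrence. The rates and population sizes below are illustrative only, not the authors' calibrated values.

```python
# Sketch of a condomRate sensitivity sweep over a simplified
# one-population version of the model; all rates are illustrative.

def cases_after(years, condom_rate, aids=1e6, pop=40e6,
                per_year_risk=0.3, average_death=10):
    """Total cases after `years` iterations of the basic recurrence."""
    hist = [0.0] * average_death
    for _ in range(years):
        new = (1 - aids / pop) * aids * (1 - condom_rate) * per_year_risk
        aids += new - hist.pop(0)    # deaths(t) = newInfections(t - averageDeath)
        hist.append(new)
    return aids

for rate in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"condomRate={rate:.2f}: cases after 30 y = {cases_after(30, rate):,.0f}")
```

As in Figure 6, the outcome is strongly monotone in the condom rate, and a rate of 1.0 admits no new infections at all.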
![](images/00fb7d02bbec0dedfdcef9234c07357111c47eddbb3c2634d143a8d76e67db3c.jpg)
Figure 6. Effects of condom usage rate on model outcome.

Figure 6 shows the effects of different values of condomRate on the number of cases in South Africa. The model is quite sensitive to the value. A rate of $50\%$ results in gradual decrease in cases, and complete condom use results in disease eradication in 10 years (the time for pre-existing cases to die).

Figure 7 shows the effect of varying arvPortion. The assumption is that ART increases the life span of the patient but does not decrease the transmission rate; people infected now have a longer time to spread the disease. Fairly large changes in ART use lead to large changes in the number of cases, as expected.

Figure 8 assumes that ARV drugs reduce transmission rate by $25\%$.

Figure 9 shows the effects of ART adherence rates on the number of cases when ARV drugs lower the transmission rate. ART begins in 2020 and is swiftly followed by a decrease in the number of cases. When adherence is $100\%$, cases level off. As adherence decreases, multi-drug resistance is observed, resulting in a negation of the lower transmission rates of initial treatment with ARV drugs and a steady increase in cases in the following years. Moreover, the onset of multi-drug-resistant HIV/AIDS is earlier for lower adherence rates, which is intuitively correct. Our model is sensitive to fairly large changes in adherence rates, which is what we would expect.

# Economic Model for Administering ARVs

We discuss an economic model for administering ARVs abstractly, in the absence of data, due to the difficulty of its collection.

![](images/20678385ad48e78dc75bc01514baa6e1d3113f5d4b9701e70ed463c36925a3a9.jpg)
Figure 7.
Effects of ARV therapy on model outcome, assuming no impact on transmission rate.

Negative externalities can result from higher rates of infection as HIV/AIDS victims enjoy increased life expectancy and therefore have greater opportunity to spread the virus. Positive externalities can result from the cost savings that rich countries enjoy indirectly by reducing the infection rate in poor countries. For example, in Australia, more than half of HIV infections attributed to heterosexual intercourse in 2000-2004 were in people from a high-prevalence country or whose partners were [UNAIDS and WHO 2005]. Hence, reducing infection rates in high-prevalence poor countries might reduce the rate of infection in rich countries.

[EDITOR'S NOTE: We omit the details of the authors' optimization analysis.]

# Discussion and Conclusions

# Strengths and Weaknesses

# Strengths

- Ability to incorporate many data sources, such as condom usage rates, drug populations, and historical AIDS death rates.
- Scalable and easy to expand to account for new population factors. Easy to adapt to new locations.
- High accuracy in fitting historical data.

![](images/5c30c8368e2e7c2c790fe2d481294a1a78bbcf1b71f1ee8181fd26e2e8ffb87b.jpg)
Figure 8. Effects of ARV therapy on model outcome, with ARV patients $25\%$ less contagious.

- Comprehensive. Takes into consideration all the major factors concerned in HIV, including transmission factors, prevention techniques, and economic considerations.

# Weaknesses

- Large amount of prerequisite data required, some of which may be hard to acquire, such as historical HIV/AIDS death rates.
- Countries treated as isolated entities (does not account for migration).
- Fails to account for random differences between individuals, such as time to death after infection.

# References

Anderson, Deborah J., Thomas R. O'Brien, et al. 1992.
Effects of disease stage and Zidovudine therapy on the detection of human immunodeficiency virus Type 1 in semen. Journal of the American Medical Association 267: 2769-2774.

Krieger, John N., Robert W. Coombs, et al. 1991. Recovery of human immunodeficiency virus Type I from semen: Minimal impact of stage of infection and current antiviral chemotherapy. Journal of Infectious Diseases 163: 386-388.

![](images/7331ededfb829edd39cebf40d9cbeec789d1a6427fca8fc7ba192cd544e6d39c.jpg)
Figure 9. Effects of multiple drug resistance on transmission rate, assuming $100\%$ ARV usage, and that ARV decreases transmission rate.

Leynaert, Benedicte, Angela M. Downs, and Isabelle de Vincenzi, for the European Study Group on Heterosexual Transmission of HIV. 1998. Heterosexual transmission of human immunodeficiency virus: Variability of infectivity throughout the course of infection. American Journal of Epidemiology 148 (1): 88-96.

Morgan, Dilys, Cedric Mahe, et al. 2002. HIV-1 infection in rural Africa: Is there a difference in median time to AIDS and survival compared with that in industrialized countries? AIDS 16 (4): 597-603.

Royce, Rachel A., Arlene Seina, et al. 1997. Sexual transmission of HIV. New England Journal of Medicine 336: 1072-1078.

Smith, D.K., L.A. Grohskopf, et al. 2005. Antiretroviral postexposure prophylaxis after sexual, injection-drug use, or other nonoccupational exposure to HIV in the United States. Morbidity and Mortality Weekly Report 54 (RR02): 1-20. http://www.cdc.gov/mmwr/preview/mmwrhtml/rr5402a1.htm.

UNAIDS and WHO. 2005. UNAIDS/WHO AIDS epidemic update: December 2005.

Velasco-Hernandez, J.X., H.B. Gershengorn, et al. 2002. Could widespread use of combination antiretroviral therapy eradicate HIV epidemics? The Lancet Infectious Diseases 2: 487-493.
# Managing the HIV/AIDS Pandemic: 2006-2055

Tyler Huffman

Barry Wright III

Charles Staats III

Duke University

Durham, NC

Advisor: David Kraines

# Summary

We begin with a thorough consideration of which nations face the most critical situations with respect to HIV/AIDS. We model an adjusted life expectancy, using a short-term logistic differential equation model, and then mathematically define criticality. By continent, we conclude that the most critical nations are: Botswana, Thailand, Tonga, Ukraine, Bahamas, and Guyana.

We analyze the futures of these most critical nations with a versatile computer simulation that deals directly with people rather than homogeneous populations, as a differential equations model would.

Treatment analysis includes estimation of the amount of foreign aid available through 2055 and predicts the effects of antiretroviral treatment (ART) and the possibilities of a preventive HIV/AIDS vaccine. We consider the ramifications of drug-resistant strains.

We conclude with a series of recommendations for how best to allocate resources. We recommend intensive spending in the short term on research and development of a vaccine, followed by global coverage of ART with heavy emphasis on maintaining adherence.

# Defining Criticality

# Approach

What makes a country "critical"? The obvious answer is countries with the greatest number of cases, or the greatest proportion of cases; but this is not a complete analysis. A critical situation implies that progress can be made towards a solution. At this point, nothing beyond antiretroviral therapy can be done for an HIV/AIDS patient. Countries with high rates of treatment can do little more for their infected population, so such countries should not be deemed most critical. The term critical also implies that action is urgent, that HIV/AIDS will be very detrimental in the short term.
We believe that the best way to measure the effect of HIV/AIDS on a population is to determine the cumulative number of years of life lost due to infection.

# Assumptions and Terms

- ART patients have $100\%$ adherence—a patient either receives ART treatment or not; there is no middle ground.

- No further intervention occurs within the next five years.

- ART percentages remain constant.

- No other major causes of death affect the population. Since we are predicting over a relatively short interval of time, it is unlikely that major events such as natural disasters, wars, or other pandemics will significantly affect the population.

- People-year: A unit equivalent to one person times one year. The number of people-years of a population equals the sum of all the lifetimes of people in the population.

To measure the immediate effects of HIV/AIDS on a population receiving no further intervention, we define criticality over the next five years (2006-2010):

- Absolute Criticality: The total number of people-years lost by a population over the next five years due to HIV/AIDS.
- Relative Criticality: The average number of people-years lost by a person over the next five years; in other words, the change in life expectancy over the next five years.

# Development

We derive a mathematical expression for criticality in terms of various parameters.
Relative criticality $\zeta$ is given by

$$
\zeta(\alpha, \beta, \gamma, \delta, \epsilon) = \alpha(\gamma + \delta) + \beta\epsilon,
$$

where:

$\alpha$ is the average loss of life expectancy due to contracting HIV without receiving ART,
$\beta$ is the average loss of life expectancy due to contracting HIV and receiving ART,
$\gamma$ is the number of current untreated cases divided by current population,
$\delta$ is the number of untreated cases contracted over the next five years divided by present population, and
$\epsilon$ is the number of treated cases contracted over the next five years divided by present population.

Absolute criticality is given by

$$
\zeta_{\mathrm{abs}} = \zeta P,
$$

where $P$ is the country's population.

It may seem counterintuitive that a country should be considered "less critical" if its life expectancy is innately lower, as our model would conclude. What this really means is that spending money on HIV/AIDS there may be less relevant than spending money on other causes of death.

# Model A: Adjusting Life Expectancy

# Approach

To determine the effects of HIV/AIDS on a population, we determine the life expectancy as if HIV/AIDS did not exist. We then adjust for the fact that life expectancy is a function of a person's year of birth.

# Assumptions

- Life expectancy does not significantly change over five years, so we can assume that people of each five-year age group are the same age.
- Life expectancy data do not exist for birth years before 1950, so we assume that any person born earlier has the life expectancy of someone born in 1950.
- No immigration or emigration occurs.

# Method

Using 2005 population data, we multiply the population of each age group by the life expectancy of the corresponding birth years to arrive at the total number of people-years for the population. Dividing this by the population gives the age-adjusted life expectancy $\Gamma_0$.
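The computation of $\Gamma_0$ is a population-weighted average and can be sketched directly. The age groups and life-expectancy values below are invented for illustration, not real 2005 data.

```python
# Sketch of the age-adjusted life expectancy Gamma_0: a population-
# weighted average of life expectancy by birth year.  The age groups
# and life-expectancy values are illustrative, not real data.

age_groups = {            # age range -> (population, life expectancy for birth year)
    (0, 4):   (5_000_000, 68.0),
    (5, 9):   (4_800_000, 67.0),
    (20, 24): (4_000_000, 63.0),
    (40, 44): (3_000_000, 58.0),
    (60, 64): (1_500_000, 52.0),  # born before 1950 -> use the 1950 value
}

total_people_years = sum(pop * le for pop, le in age_groups.values())
total_population = sum(pop for pop, _ in age_groups.values())
gamma_0 = total_people_years / total_population
print(f"age-adjusted life expectancy: {gamma_0:.1f} years")
```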
We then determine a life expectancy value for infected people. Worldwide, we estimate that the average age for contracting HIV is 23. We assume that a person on ART has $100\%$ adherence and never stops treatment. In developed nations, a person contracting HIV but untreated typically lives 12 years, hence on average to age $23 + 12 = 35$. Few people treated with ARV have died; however, the program began only 10 years ago. We estimate that ART patients in developed countries will live 20 years after contraction, hence on average to age $23 + 20 = 43$. To determine the average life expectancy for a person contracting HIV/AIDS, we use the formula

$$
\Gamma_{\mathrm{HIV}} = \frac{\Gamma_{0}}{70}\left[35(1 - T) + 43T\right],
$$

where $T$ is the percentage of people currently receiving ARV treatment. The formula yields a weighted average of the life expectancies for untreated and treated patients, and we account for the difference in life expectancy due to causes other than HIV by multiplying by the age-adjusted life expectancy divided by 70, an assumed average for the life expectancy of a developed nation.

To derive an expression for the HIV-adjusted life expectancy, we take a more intuitive approach. We have data for the total number of people-years of a population, and given the number of HIV cases $P_{\mathrm{HIV}}$ and the average life expectancy for HIV patients, we know the total number of HIV people-years. If these people did not have HIV, the number of people-years that they contribute to the population would increase. Therefore, adding the number of people-years that the infected population loses due to premature death to the unadjusted number of people-years for a population yields the adjusted people-years, and consequently an HIV-adjusted life expectancy for that population.
The formula for HIV-adjusted life expectancy $\Gamma_A$ is

$$
\Gamma_{A} = \frac{P\Gamma_{0} + P_{\mathrm{HIV}}\left(\Gamma_{0} - \Gamma_{\mathrm{HIV}}\right)}{P}.
$$

# Expectations and Results

If our model is appropriate, a few things should certainly occur:

- The HIV-adjusted life expectancy should always be greater than the unadjusted life expectancy.
- The difference between the HIV-adjusted life expectancy and the unadjusted life expectancy should be proportional to the percentage of the population infected with HIV.

No country shows a decrease in life expectancy after HIV-adjustment; and by taking the difference between the HIV-adjusted life expectancy and the unadjusted life expectancy and dividing by the percentage of the population infected with HIV, we find strong evidence for proportionality.
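The two adjustment formulas can be combined in a short sketch. The inputs (population, case count, treatment rate, $\Gamma_0$) are invented numbers, not country data.

```python
# Sketch of the Gamma_HIV and Gamma_A formulas; the inputs are
# illustrative numbers, not real country data.

def gamma_hiv(gamma_0, T):
    """Life expectancy of an HIV+ person: weighted average of untreated
    (lives to 35) and treated (lives to 43), scaled by gamma_0 / 70."""
    return (gamma_0 / 70) * (35 * (1 - T) + 43 * T)

def gamma_adjusted(P, P_hiv, gamma_0, T):
    """HIV-adjusted life expectancy: add back the people-years the
    infected population loses to premature death, then divide by P."""
    g_hiv = gamma_hiv(gamma_0, T)
    return (P * gamma_0 + P_hiv * (gamma_0 - g_hiv)) / P

P, P_hiv, gamma_0, T = 2_000_000, 300_000, 45.0, 0.2
print(f"Gamma_HIV = {gamma_hiv(gamma_0, T):.1f} years")
print(f"Gamma_A   = {gamma_adjusted(P, P_hiv, gamma_0, T):.1f} years")
```

Note that `gamma_adjusted` always exceeds `gamma_0` whenever any cases are present, matching the first expectation above.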
The general form of the differential equation is

$$
\frac{dP}{dt} = \frac{rP(K - P)}{K} = rP\left(1 - \frac{P}{K}\right),
$$

where

$P$ is the total HIV/AIDS population size,

$r$ is the maximum population growth rate, and

$K$ is the maximum sustainable HIV/AIDS population.

As the population gets closer and closer to the maximum sustainable population, its growth rate becomes a smaller proportion of the maximum growth rate $r$. The general solution to the differential equation is

$$
P(t) = \frac{r}{ce^{-rt} + r/K},
$$

where $c$ is a constant determined by an initial condition. We must estimate $K$ and $r$, using data for cases over the past 5 to 20 years. We rearrange the differential equation to the form

$$
\frac{1}{P}\frac{dP}{dt} = a + bP,
$$

with $a = r$ and $b = -r/K$. We then plot successive values of

$$
\left(P(t_{i}), \frac{P'(t_{i})}{P(t_{i})}\right)
$$

and fit a least-squares line to the data, yielding an estimated slope $b$ and $y$-intercept $a$. We estimate $P'(t_i)$ from the slope of the secant connecting the point before and the point after the chosen point.

# Results

We use the above procedure to determine a function $P(t)$ for the size of the HIV/AIDS population at a given time up to 2005. For prediction, we extrapolate by evaluating the function at 2010.

# Putting It All Together

Given the HIV-adjusted life expectancy, we can determine the values of $\alpha$ and $\beta$ for each country:

$$
\alpha = \Gamma_{A} - 35\left(\frac{\Gamma_{0}}{70}\right), \qquad \beta = \Gamma_{A} - 43\left(\frac{\Gamma_{0}}{70}\right).
$$

Armed with a logistic model for the infected population of each region, we extrapolate to determine the number of cases that will arise over the next five years. We then make two further assumptions.
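The fitting procedure (secant estimates of $P'$, then a least-squares line on $(P, P'/P)$) can be sketched and checked against a synthetic logistic series with known $r$ and $K$. The data series below is generated, not real case counts.

```python
# Sketch of the logistic-fit procedure: estimate r and K by fitting a
# line to (P, P'/P), with P' taken from central secants.  The case
# series is synthetic (a known logistic), not real data.
import math

def fit_logistic(years, cases):
    xs, ys = [], []
    for i in range(1, len(cases) - 1):
        # secant through the neighboring points approximates P'(t_i)
        dP = (cases[i + 1] - cases[i - 1]) / (years[i + 1] - years[i - 1])
        xs.append(cases[i])
        ys.append(dP / cases[i])
    # least-squares line y = a + b x, so r = a and K = -a / b
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a, -a / b

# synthetic logistic data: r = 0.3, K = 1000, P(0) = 50
years = list(range(21))
cases = [1000 / (1 + 19 * math.exp(-0.3 * t)) for t in years]
r, K = fit_logistic(years, cases)
print(f"estimated r = {r:.3f}, K = {K:.0f}")
```

On this noiseless series the recovered parameters land within a few percent of the true $r = 0.3$ and $K = 1000$; the small error comes from the secant approximation of $P'$.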
# Additional Assumptions

- The proportion of cases treated by ARV will remain unchanged over the next five years.
- The proportion of HIV/AIDS cases of each country within its respective region, $H_{\text{relative}}$, will remain unchanged.

$$
\delta = \frac{(1 - T)\left[P_{2010} - P_{2005}\right] H_{\mathrm{relative}}}{P_{2005}}, \qquad \epsilon = \frac{T\left[P_{2010} - P_{2005}\right] H_{\mathrm{relative}}}{P_{2005}}.
$$

Finally, the number of current cases is given by our data, so

$$
\gamma = \frac{(1 - T) H}{P}.
$$

# Results

We determine absolute and relative criticality values for the 108 countries for which all the required data were available. We then use relative criticality to select the most critical countries, by continent: Botswana, Thailand, Tonga, Ukraine, Bahamas, and Guyana. Fourteen of the 15 most critical nations worldwide are in Africa.

Using absolute criticality would give precedence to large nations, despite relatively mild HIV/AIDS situations.

# Determining Growth Rates

# Model C: Simulation of a Country with HIV/AIDS

# Approach

We want a more detailed and elaborate model to forecast the long-term behavior of HIV/AIDS. We opted for a discrete computer simulation of the interactions of individuals. Such a model is much better able to cope with complicated demographic combinations, since the objects of the model are persons rather than homogeneous populations. A disadvantage is that directly simulating an entire country's population in this way is not feasible.

# Assumptions

- An entire country can be modeled by simulating the course of the disease over a small representative community (population on the order of 1,000).
- Allowing the simulation to run for 10 years before introducing HIV allows for a base of existing relationships to form.
- With the exception of contraction before or during birth, all transfers of HIV occur from consensual events (drug- or sex-related) between two people.
- A person's probability of dying of natural causes is directly proportional to age.
- The effect of HIV is to multiply by some factor what would otherwise be a person's probability of dying of all other causes. This effect depends solely on whether or not a person has the virus; other factors, such as time since the virus was contracted, need not be considered.
- The sexual behavior of persons regarding number of partners, frequency of sex, etc., is essentially the same, regardless of sex and sexual orientation. The only exception is that only females can be sex workers and only males can be clients of sex workers.
- The populations of female homosexuals and bisexuals can be neglected.
- People's characteristics do not change as they grow older, except for changing stages from infant to child at age 2 and from child to adult at age 16.
- Only adults have sexual relationships or share intravenous drugs.
- A needle-sharing or sexual encounter with an infected person automatically results in transfer of the virus.

# Development

# Relationships

The basic tools that we use to model the spread of the disease are relationships and events. A relationship between two people can be initiated by either but, to occur, must be accepted by the other. An event occurs within a relationship and may result in the transfer of a virus, or of multiple strains of a virus. Like a relationship, an event can be initiated by either person but must be accepted by the other. Different people have tendencies to engage in different sorts of relationships and events, and may thereby be classified into relevant demographic groups. The relationship types that we use include sexual relationships, mother-child relationships, and relationships for the social use of intravenous drugs.
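The initiate-then-accept mechanics of relationships and events described above can be sketched in a few lines of Python (a hypothetical illustration, not the authors' code; the `Person` class, the acceptance probabilities, and the tuple representation of a relationship are all invented for the sketch):

```python
import random

class Person:
    """Agent with fixed tendencies to accept relationships and events."""
    def __init__(self, accept_rel=0.5, accept_event=0.7):
        self.accept_rel = accept_rel      # probability of accepting an offered relationship
        self.accept_event = accept_event  # probability of accepting a proposed event
        self.relationships = []

    def accepts_relationship(self):
        return random.random() < self.accept_rel

    def accepts_event(self):
        return random.random() < self.accept_event

def initiate_relationship(initiator, other):
    """Either person may initiate, but the relationship forms only if accepted."""
    if other.accepts_relationship():
        rel = (initiator, other)
        initiator.relationships.append(rel)
        other.relationships.append(rel)
        return rel
    return None

def initiate_event(rel, initiator):
    """An event occurs within an existing relationship; the partner must accept."""
    partner = rel[1] if initiator is rel[0] else rel[0]
    return partner.accepts_event()
```

In the full simulation, an accepted event between an infected and an uninfected person would then trigger the (assumed automatic) transfer of the virus.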
# Availability Pools

Formation of relationships is based on availability pools. Depending on their characteristics and on existing relationships, persons are placed into availability pools for particular sorts of relationships. A person seeking a relationship chooses an appropriate availability pool and queries it for a match. The availability pool chooses a potential match using an algorithm that attempts to preserve efficiency of data structures while providing some measure of randomness, and the chosen person is given the option of accepting or refusing the offered relationship. Either person may choose to end a relationship.

# Events

A person who engages in relationships has a desired rate of events of that category. The chance of accepting or requesting an event in a given cycle is based on whether or not the person has reached their satiation point for the given event.

[EDITOR'S NOTE: The authors offer further details on the mechanisms for drug-use, mother-child, and sexual relationships, as well as on abstinence, monogamy, casual sex, and prostitution, which we must omit.]

# Birth and Death Rates

For every adult woman who is not already pregnant, a sexual encounter with a man has a fixed probability of resulting in pregnancy. (Menopause is not taken into consideration.) Every pregnancy results in the live birth of a baby nine months after conception, unless the mother dies earlier. The probability of death by natural causes is assumed to be directly proportional to age. Additionally, children (especially infants) without mothers have a constant term added to their probability of dying. When HIV is present, the death rate as if it were not present is multiplied by a fixed constant; in a sense, the virus reduces a person's "death resistance."
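Read literally, the birth-and-death rules above give a per-cycle death probability like the following (our sketch, not the authors' code; `k`, `hiv_factor`, and `orphan_term` are placeholder values, not the paper's calibrated parameters):

```python
def death_probability(age_years, k, hiv_positive=False, hiv_factor=5.0,
                      motherless_child=False, orphan_term=0.01):
    """Probability of dying in the current cycle.

    Follows the stated assumptions: proportional to age (constant k),
    multiplied by a fixed factor for HIV carriers (reduced "death
    resistance"), plus a constant term for children without mothers.
    """
    p = k * age_years
    if hiv_positive:
        p *= hiv_factor
    if motherless_child:
        p += orphan_term
    return min(p, 1.0)  # clamp: a probability cannot exceed 1
```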
# Data and Parameter Values

The risk of a child contracting HIV during pregnancy and birth ranges from $15\%$ to $30\%$, with the risk increased another $10\%$ to $15\%$ due to breastfeeding over the first two years of life [Orendi 1998]. We divide this range in half to determine the rate per year of breastfeeding contraction.

The demographic data come largely from the Central Intelligence Agency [2001].

We determine the percentage of the population using IV drugs by assuming that this value equals that of the surrounding region [United Nations Office on Drugs and Crime 2005]. To determine the acceptance, seeking, and breaking rates for drug relationships, we make reasonable assumptions based on reading about the typical social behavior of IV drug abusers. We use such an approach also in determining the maximum number of drug relationships and the rate of drug relationships per year [United Nations Office on Drugs and Crime 2005].

The HIV vulnerability parameter comes directly from the HIV-adjusted life expectancy model, and is simply an adjusted ratio of the HIV life expectancy and the unadjusted life expectancy.

We discern nearly all of the parameters for sexual relationships from Francoeur et al. [2004] and Mackay [2000].

In connection with our assumption that a person's probability of dying is directly proportional to the person's age, we need to ascertain the constant of proportionality $k$ based on the life expectancy. The statement about the death rate can be expressed as

$$
\frac{-dP/dt}{P} = k t,
$$

where $P$ is the probability of a person being alive at time $t$. (On another scale, $P$ is the number of people born in the same year who remain alive after time $t$.)
Solving, we find

$$
P(t) = P_{0} \exp\left(-\frac{1}{2} k t^{2}\right),
$$

which—surprisingly (or not, if you are already familiar with this model for human aging, which we weren't)—turns out to be proportional to the right half of a Gaussian distribution.

To combine this equation with life expectancy, let $r = -dP/dt$ represent the death rate and consider that for a differential time quantity $dt$, the expression $r\,dt$ represents a differential quantity of people who die at age $t$. Hence, the average age of death, or life expectancy, is

$$
\frac{1}{P_{0}} \int_{0}^{\infty} r t \, dt = \int_{0}^{\infty} k t^{2} \exp\left(-\frac{1}{2} k t^{2}\right) dt.
$$

We calculate this integral numerically as a function of $k$. [EDITOR'S NOTE: In fact, the exact value is $\sqrt{\pi / (2k)}$.] Setting the result equal to the life expectancy calculated from other data for the country of interest lets us determine the relevant value for $k$.

# Results and Discussion

After running the 50-year simulation a number of times, we noticed that there was almost always an initial explosion of HIV cases in the first few years, followed by much slower growth. This is likely the result of our assumption that every encounter results in the transmission of HIV; because of this, HIV spreads very quickly through relationships that were already in place at the beginning of the 50-year period.

Additionally, as time progressed, the HIV/AIDS population appeared to approach a steady state, or infected carrying capacity. Based on the structure of our model, the majority of the adult population ends up being infected with HIV, while only a small portion of children contract the virus; the steady-state value for the whole population is essentially a high percentage of the steady-state value for the adult population.
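Model C's calibration of $k$ from life expectancy can be checked numerically. The following sketch (ours, not the authors') integrates the expression for the average age of death and inverts the closed form $\sqrt{\pi/(2k)}$ noted by the editor:

```python
import math

def life_expectancy(k, dt=0.01, t_max=400.0):
    """Left-Riemann approximation of E = integral of k t^2 exp(-k t^2 / 2) dt."""
    total, t = 0.0, 0.0
    while t < t_max:
        total += k * t * t * math.exp(-0.5 * k * t * t) * dt
        t += dt
    return total

def k_from_life_expectancy(expectancy):
    """Invert the closed form E = sqrt(pi / (2k)) for k."""
    return math.pi / (2.0 * expectancy * expectancy)
```

For a life expectancy of 70 years this gives $k \approx 3.2 \times 10^{-4}$ per year squared, and the numerical integral recovers 70 years to within the discretization error.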
# Model D: Treating the Pandemic

# Approach

We determine the available funding to each of the critical countries and to the world as a whole for the years 2005-2055. Then we add additional parameters to the computer simulation model to determine the effects of increased antiretroviral therapy and preventive vaccination. Further simulation, devoting different proportions of the available funding to ARV and vaccinations, allows us to determine the best way to spend both national and worldwide HIV/AIDS funding.

# Assumptions

- Economic trends remain relatively stable over time.
- Inflation in the cost of HIV treatment is comparable to that of the rest of the world economy.
- ART patients have $100\%$ adherence.
- Vaccination, when developed, is $100\%$ effective in preventing contraction of HIV.

# Finding Aid

In 2004, \$6.1 billion was provided in foreign aid for HIV/AIDS worldwide [Agence France-Presse 2004]. To account for the growth of the world economy and the increasing awareness with respect to HIV funding, we model the available funding $A$ (in billions of dollars) exponentially, choosing the growth rate based on recent trends in funding:

$$
A_{\mathrm{world}}(t) = 6.1 \times (1.05)^{t}.
$$

We then analyze the funding available to each of the six critical countries. Of the funding for HIV/AIDS in developing/semideveloped nations, $85\%$ comes from foreign aid and $15\%$ from domestic spending [Martin 2003]. We assume that a government spends one-twentieth of one percent $(0.05\%)$ of its GDP on HIV/AIDS each year. Botswana, Tonga, Bahamas, and Guyana reasonably fit this $85/15$ rule; however, Thailand and Ukraine are too developed for this assumption to apply, and we impose a $25/75$ analog. From this, the equations for funding are as follows, where $\rho$ is the growth rate of the GDP for each nation:

$$
A_{\mathrm{developing}}(t) = 0.0005\,\mathrm{GDP} \left[ \rho^{t} + (1.05)^{t} \frac{85}{15} \right],
$$

$$
A_{\mathrm{developed}}(t) = 0.0005\,\mathrm{GDP} \left[ \rho^{t} + (1.05)^{t} \frac{25}{75} \right].
$$

The predicted cost of supplying ART is \$1,100 per person per year, and we assume that a person continues ARV treatment until death. We account for the potential inflation of costs, again using an exponential function, with growth rate of $2\%$. The maximum number of ARV patients that a nation can treat equals total funding divided by the current cost per person.

But what are the effects on the population of an increased number of patients treated with ARV? People strictly adhering to ARV treatment have extremely suppressed HIV virus figures [Porter 2003]. This means that it is nearly impossible for a correctly treated ARV patient to transfer the virus to an uninfected person. Therefore, in our modification of the computer simulation, we prevent any person treated with ARV from transferring HIV to other people. This change should lead to a significant decrease in the number of new HIV cases per year in comparison to the original model. In theory, if all HIV cases are treated with ARV, over time the virus should be removed from the population.

In determining when a preventive vaccine will be developed, we assume that research funding is from the worldwide aid pool and that changes in funding do not have a significant effect on when a vaccine will be found. Thus, the probability of finding a preventive vaccine should be a function of time. Multiple sources state that a vaccine will not be found within the next 10 years, so we define the probability of a vaccine being discovered by a given year as

$$
S(t) = 0.03\,(t - 10), \quad \text{for } 10 < t < 43.3,
$$

where time $t$ is measured in years after 2005. This probability function assumes that in 26.7 years, there will be a $50\%$ chance of a vaccine being discovered.
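The funding and vaccine-discovery formulas of Model D can be collected into a short sketch (illustrative only; the function names and default arguments are ours, GDP and aid amounts are in billions of dollars, and $t$ follows the conventions in the text):

```python
def aid_world(t):
    """Worldwide HIV/AIDS aid in billions of dollars, t years after 2004."""
    return 6.1 * 1.05 ** t

def aid_developing(t, gdp, rho):
    """Funding for a nation fitting the 85/15 foreign/domestic split;
    gdp in billions of dollars, rho the nation's GDP growth factor."""
    return 0.0005 * gdp * (rho ** t + (1.05 ** t) * 85 / 15)

def max_art_patients(funding_billions, t, cost0=1100.0, inflation=0.02):
    """Maximum treatable patients: funding divided by the inflated
    per-person annual cost of ART."""
    cost = cost0 * (1 + inflation) ** t
    return funding_billions * 1e9 / cost

def vaccine_probability(t):
    """Probability that a preventive vaccine has been found by year t
    (years after 2005); zero for the first decade."""
    if t <= 10:
        return 0.0
    return min(0.03 * (t - 10), 1.0)
```

As a sanity check, the piecewise-linear `vaccine_probability` reaches roughly $50\%$ at $t = 26.7$ and saturates at 1 for $t \geq 43.3$, matching the text.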
# Model E: Preventive Vaccine Distribution

# Approach

To model the rate of vaccine introduction, we use a logistic growth model.

# Assumptions and Terms

- The steady-state percentage of the population vaccinated will be equal to that of $\mathrm{DTP}_3$ and Tetanus for infants and adults, respectively, as reported by the WHO in 2002 in the datasheet accompanying the problem statement.
- The steady-state percentage value will remain constant over the next 50 years.

# Development

We let $V$ be the percentage vaccinated, $\lambda$ the initial growth rate, and $D$ the maximum percentage vaccinated. The logistic model leads to

$$
V(t) = \frac{D}{C \exp(-\lambda t) + 1}.
$$

We determine values for $\lambda$ and $C$ from initial conditions. We assume that in one year the vaccination rate would reach $10\%$ of its maximum value and after 10 years would reach $95\%$. These conditions lead to

$$
\lambda \approx 0.571, \qquad C \approx 15.9, \qquad V(t) = \frac{D}{15.9 e^{-0.571 t} + 1}.
$$

Table 1 shows the values of $D$ for the critical countries.

Table 1. Maximum percentage vaccinated $(D)$ for the critical countries.
| Country | Child | Adult |
|---|---|---|
| Botswana | 87 | 49 |
| Thailand | 97 | 90 |
| Tonga | 90 | 93 |
| Ukraine | 99 | 37 |
| Bahamas | 86 | 1 |
| Guyana | 91 | 1 |
# Model F: Resistant Strains and Mutations

# Approach

One of the most dangerous aspects of the HIV virus is its ability to mutate quickly. If a regimen of treatment does not destroy or incapacitate all of the viruses in a system, only the strong ones will remain to repopulate, over time forming a dangerous resistance that renders the drug useless.

The antiretroviral therapy associated with the HIV virus is demanding. Patients must take scores of pills, multiple times per day, for the rest of their lives; we cannot expect $100\%$ adherence to the regimen.

# Assumptions

- All people receiving ART intend to maintain $100\%$ adherence. No patients are opposed to being treated for psychological, ethical, or spiritual reasons.
- No patient is guaranteed to succeed in maintaining $100\%$ adherence.
- A person with cumulative adherence below $90\%$ has a $5\%$ chance of developing a resistant strain.
- The opportunity for a resistant strain to develop occurs every time a treatment occurs in which cumulative adherence is below $90\%$.
- ART for a resistant strain will not be available before the year 2055. This allows us to make the simplification that only one resistant strain will exist.
- Resistant strains can be vaccinated against, but a new vaccine will have to be developed.
- The only property of a resistant strain that distinguishes it from the original HIV is the resistance to ART. The effects on the body and on life expectancy remain the same.
- The resistant strain, if it exists, takes precedence over the original strain; that is, a person will not carry both.

# Development

We assume (with no data basis) that a person will adhere completely to a year of treatment $99\%$ of the time.

By creating a new parameter within the main simulation, we can simulate the adherence behavior of every ARV patient within the model.
A second parameter randomly decides whether a person with sufficiently low adherence (less than $90\%$) causes production of a resistant strain. We introduce a constraint on this behavior, not allowing resistant strains to occur within a person until after three years of treatment. This constraint minimizes the skewing effects that could occur if a person developed a resistant strain after missing the first treatment, which is biologically nonsensical, as the virus would have nothing to resist. The computer simulation runs as before, allowing for a resistant strain to occur. This strain would not be affected by ART or vaccination, and thus resistant-strain-infected people would behave like HIV-infected people who remain untreated.

Given that second- and third-line ARV drugs are so expensive, we assume that none of our critical countries will have access to them. We assess the probability of developing a vaccine against the resistant strain as

$$
S_{\text{resistant}}(t) = 0.03\,t, \quad 0 < t < 33.3,
$$

where $t$ is the number of years since finding the original vaccine to the nonresistant strain. We assume that the costs associated with the new vaccine are identical to those of the original vaccine.

# Discussion of Models D-F

Assuming no economic disasters over the next 50 years, the world economy is well prepared to handle the HIV/AIDS situation and should be able to provide billions of dollars to the cause consistently. The question is not about availability of money but where to spend it.

ART is a powerful weapon; it almost certainly prevents transfer to uninfected people. There is, however, the danger of production of resistant strains of HIV. It is vital that the implementation of ARV programs be done with great emphasis on maintaining adherence to the program.

A preventive vaccine would quickly stall new cases and bring the disease down to a manageable level.
We believe it probable that a vaccine will be discovered within 25 to 40 years. It is important to devote resources to its research and development. + +We suggest that funds be allocated largely to ART in the next few years to bring raging epidemics in the critical nations under control, followed by a phasing in of an intense vaccine development program beginning in approximately 10 years. + +# Conclusion + +The critical countries by continent—Botswana, Thailand, Tonga, Ukraine, Bahamas, and Guyana—are a springboard for a global control effort of the pandemic. + +Foreign aid should be focused on the most critical nations, not necessarily by continent, but worldwide. Treatment should begin with sweeping programs of antiretroviral therapy focused on maintaining $100\%$ adherence. Simultaneously, research should begin on developing a preventive vaccination, which could begin distribution immediately and reach stable levels within 10 years. + +# Strengths and Weaknesses + +Weaknesses of the model included assumptions made for simplicity that likely do not hold. For instance, in most runs of our model on any country, cases exploded rapidly to include most of the adult population within three years—a feature that does not correspond to the past behavior of HIV. This feature is likely a result of our assumption that every single sexual encounter or sharing of a dirty needle with an infected person results in disease transmission. + +However, a corresponding strength of our model is that it would be relatively easy to include a parameter for probability of transmission. + +Our model is particularly appropriate for simulation of evolving strains of resistant viruses, a problem that naturally lends itself to such discrete modeling. + +# References + +Agence France-Presse. 2004. UN report sounds grim new warning over AIDS. http://www.commondreams.org/headlines04/1123-07.htm. +Central Intelligence Agency. 2001. The World Factbook. 
http://www.cia.gov/cia/publications/factbook/index.html. Accessed 4 February 2006.
Francoeur, Robert T., Raymond J. Noonan, et al. (eds.). 2004. The Continuum Complete International Encyclopedia of Sexuality. New York: Continuum.
Global Fund. 2006. Global Fund Grants: Progress Report. http://www.theglobalfund.org/en/funds_raised/reports/. Accessed 4 February 2006.
Mackay, Judith. 2000. The Penguin Atlas of Human Sexual Behavior. New York: Penguin Reference.
Martin, Gayle. 2003. A Comparative Analysis of the Financing of HIV/AIDS Programmes. Chicago, IL: Independent Publishing Co.

Orendi, J.M., K. Boer, et al. 1998. Vertical HIV-1 transmission. I. Risk and prevention in pregnancy. Nederlands Tijdschrift voor Geneeskunde 142: 2720-2724. Abstract at http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&list_uids=10065235&dopt=Citation.
Porter, Kholoud. 2003. Determinants of survival following HIV-1 seroconversion after the introduction of HAART. Lancet 362 (18 October 2003): 1267-1274.
United Nations Office on Drugs and Crime. 2005. 2005 World Drug Report. Vol. 1: Analysis. http://www.unodc.org/pdf/WDR_2005/volume_1_web.pdf. Accessed 4 February 2006.
United Nations Population Division. 2001. AIDS/HIV effect on life expectancy. In World Population Prospects: The 2000 Revision Highlights (28 February 2001) 59. Reproduced at http://www.overpopulation.com/faq/health/infectious_diseases/aids/lifeExpectancy.html.

# About the Authors

![](images/ec289ab3e829f1d04b5d5876d8f2771fea76688a46c48cb8586a9c0f04cfde10.jpg)

Tyler Huffman '09 is a Mathematics and Physics major from Burlington, NC. At Duke, he is a math tutor and coordinator of the Duke Math Meet, an annual high school math competition hosted by the Duke University Math Union. After graduating, he plans to pursue a Ph.D. in applied mathematics or a related discipline.
![](images/342789a8b6c033acc8853c1ad1713b5ab61858649bc56ae1eed565efc6885d49.jpg)

Barry Wright III '09 is a Mathematics and Physics major from Bel Air, MD. He aspires to be a professor researching in string theory. At Duke, he is a physics tutor and the social chair for the Society of Physics Students.

![](images/edded7bec55fdae6135ca4ee97ea39a96fe3e5befaae614fd2783041b2d7a940.jpg)

Charles Staats III '08 is a Mathematics major and Computer Science minor from Charleston, SC. He aspires to be a professor researching in pure math or related areas. He participates in the Duke University Math Union and sings in the Duke Chapel Choir and in Something Borrowed, Something Blue, a Christian a cappella group.

# AIDS: Modeling a Global Crisis (and Australia)

Cris Cecka
Michael Martin
Tristan Sharp
Harvey Mudd College
Claremont, CA

Advisor: Lisette de Pillis

# Summary

We introduce the state of the epidemic in six countries: Australia, South Africa, Honduras, Mexico, Ukraine, and India. We describe some previous models and develop a model that uses historical population and HIV data, as well as historical and projected birth-rate data, from each country. The model isolates the population aged 15 to 49 for study. We use the model to predict the infection dynamics during the next half-century in the following situations:

- The disease is left unchecked to infiltrate the population.
- Antiretroviral treatment (ART) is provided for those diagnosed.
- A vaccine is introduced in the year 2005.
- ART efficacy is affected by resistant disease strains.

We present simulation results and interpret what factors led to the observed trends.

# Introduction

HIV infection is primarily spread through sexual exposure. At the global scale, in areas of highest HIV presence, heterosexual contact seems to be the primary mode of transmission, accounting for $70\%$ of the overall sexual transmission cases [Gayle 2000].
+ +We build a mathematical model to approximate the expected rate of infections from 2005 to 2050 for a number of countries chosen from around the world. Next, we consider the effect of antiretroviral (ARV) drug therapies (ART) vs. a preventive vaccine on the spread of HIV, using current and projected economic resources. We then consider the possibility of an ART-resistant virus strain emerging and consider the effects on our previous conclusions. Finally, we determine the important characteristics of our models and conclusions to formulate a white paper to the UN giving our recommendations for allocation of resources. + +We project the rate of HIV infection in Australia, Honduras, India, Mexico, South Africa, and Ukraine in the absence of treatment. ART for both mutating and non-mutating strains of HIV increases life expectancy and total population over time, as expected. While our model predicts that ART does little to prevent further spread of the disease, there is a strong humanitarian and economic argument for global ART. + +Vaccination is the best solution, because ART does little to stop the spread of the infection. We include the effects of a vaccine in our model for the spread of AIDS in our target countries. + +# HIV Epidemic Model + +# Disease Epidemic Models + +# The SIR Model + +One of the simplest models of infectious disease is the static SIR model, a nonlinear model that considers three classes of persons in a population: Susceptible, Infected, and Recovered. + +$$ +\frac {d S}{d t} = - \alpha S I, +$$ + +$$ +\frac {d I}{d t} = \alpha S I - \beta I, +$$ + +$$ +\frac {d R}{d t} = \beta I, +$$ + +where $\alpha$ is the rate of infective incidence (the probability of infection occurring upon contact times the number of contacts that occur in some time interval) and $\beta$ is the rate of recovery of an infected individual. + +Applied to HIV, this model assumes: + +- a fixed population size. 
This model does not account for birth rates, death rates, the possibility that infected individuals may die more frequently, etc.
- a perfectly homogeneous population, with no individuals treating their infection or modifying their behavior in response to illness.

However, this model is inappropriate for HIV:

- Current HIV patients have no chance of recovery, since there is presently no cure for the HIV virus.
- The model assumes no incubation period and a constant infection load, both false for HIV [Hyman et al. 2003].

# A Multistage Model

The staged-progression (SI) model is similar to SIR but takes into account some of these concerns. It accounts for temporal changes in the infectiousness of an individual by a staged Markov process of $n$ infected stages, progressing from initial infection by HIV to development of AIDS [Hyman et al. 2003]:

$$
\begin{array}{l} \frac {d S}{d t} = \mu (S ^ {0} - S) - \lambda S, \\ \frac {d I _ {1}}{d t} = \lambda S - (\mu + \gamma_ {1}) I _ {1}, \\ \frac {d I _ {i}}{d t} = \gamma_ {i - 1} I _ {i - 1} - (\mu + \gamma_ {i}) I _ {i}, \qquad i = 2, \dots , n, \\ \frac {d A}{d t} = \gamma_ {n} I _ {n} - \delta A, \\ \lambda (t) = \sum_ {i = 1} ^ {n} \lambda_ {i} (t), \quad \lambda_ {i} (t) = r \beta_ {i} \frac {I _ {i} (t)}{N (t)}, \\ \end{array}
$$

where

$S$ is the number of susceptible individuals,

$I_{i}$ is the number of infected individuals in stage $i$,

$A$ is the number of infected individuals no longer transmitting the disease,

$S^0$ is the constant steady-state population maintained by the inflow and outflow when no virus is present in the population,

$\lambda (t)$ is the infection rate per susceptible individual,

$r$ is the partner acquisition rate,

$\beta_{i}$ is the probability of transmission per partner from infected individuals in stage $i$ of the infection, and

$\gamma_{i}$ is the rate at which individuals move from stage $i$ of infection to stage $i + 1$.
+ +All individuals enter group $i = 1$ upon infection. + +Although this model incorporates a birth rate, it is constant. Most importantly, the model does not account for the effect that treatment may have on infectiousness of the treated group, though we may imagine the multiple infection rates $\beta_{i}$ being modified to account for both treated and untreated groups, as we will do later. + +# Characteristics of the Desired Model + +Dynamic algorithms have been implemented that explore sexual activity and the effects of social networks on the spread of HIV as well as the effect of changes in sexual behavior as a result of ART [Bauch 2002; Boily et al. 2004]. + +There is dispute over the net effect of ART. ART generally reduces the infectiousness of an individual [UNAIDS 2005b]. This reduction is normally thought to combine with the social impacts of an HIV diagnosis, that an individual should limit his/her sexual contacts, to greatly reduce the infectivity of a diagnosed HIV patient. However, further research suggests there are competing effects. Law et al. [2001] show that increases in sexual behavior and life expectancy could negate the beneficial impact of decreased infectiousness on total AIDS incidence. Furthermore, treated patients may increase the frequency of sexual activity due to the decreased severity of their symptoms—or maybe the opposite. For example, Ivory Coast individuals reported low sexual activity following HIV diagnosis and this was not increased by the offer of ART [Moatti et al. 2003]. We find this real-world result convincing. + +We use concepts from all of these models (as well as the undiscussed differential infectivity (DI) model) to create a model using nonlinear differential equations similar to the SIR model but differing from it in the following ways: + +- The time-scale of the epidemic necessitates that time-dependent birth and death rates be included in a realistic model. 
- Behavior plays a critical role in the transmission of the disease. Individuals who are unaware of their infection are (debatably) more likely to transmit the disease than individuals aware of their infection.
- Age plays a role in the disease dynamics. The susceptible and infected people that can affect the disease dynamics are overwhelmingly between the ages of 15 and 49 [UNAIDS 2005b].

The model below incorporates all of these considerations:

$$
\frac {d S}{d t} = b (t - t _ {0}) S (t - t _ {0}) - \mu S - \lambda S I _ {s} ^ {u} - \lambda S I _ {r} ^ {u},
$$

$$
\frac {d I _ {s} ^ {u}}{d t} = \lambda S I _ {s} ^ {u} - (\mu + v _ {s} ^ {u}) I _ {s} ^ {u} - \gamma_ {s} I _ {s} ^ {u},
$$

$$
\frac {d I _ {s} ^ {T}}{d t} = \gamma_ {s} I _ {s} ^ {u} - (\mu + v _ {s} ^ {T}) I _ {s} ^ {T} - \alpha I _ {s} ^ {T},
$$

$$
\frac {d I _ {r} ^ {u}}{d t} = - \gamma_ {r} I _ {r} ^ {u} - (\mu + v _ {r} ^ {u}) I _ {r} ^ {u} + \lambda S I _ {r} ^ {u},
$$

$$
\frac {d I _ {r} ^ {T}}{d t} = \alpha I _ {s} ^ {T} - (\mu + v _ {r} ^ {T}) I _ {r} ^ {T} + \gamma_ {r} I _ {r} ^ {u}.
$$

The model extends the SIR model with concepts from the SI model and others. We use five categories of people aged 15 to 49:

Table 1. Parameters and their symbols.
| Symbol | Meaning |
|---|---|
| $S$ | Population susceptible to infection |
| $b(t - t_0)$ | Birth rate $t_0$ years ago of the susceptible population: e.g., $t_0 = 15$ to model 15-year-olds entering the sexually active pool |
| $\mu$ | Death rate of susceptible population |
| $v_s^u$ | Increase in the death rate for the untreated population with the ARV-sensitive strain |
| $v_r^u$ | Increase in the death rate for the untreated population with the ARV-resistant strain |
| $v_s^T$ | Increase in the death rate for the population undergoing treatment with the ARV-sensitive strain |
| $v_r^T$ | Increase in the death rate for the population undergoing treatment with the ARV-resistant strain |
| $I_s^u$ | Population infected with the ARV-sensitive strain and untreated |
| $I_r^u$ | Population infected with the ARV-resistant strain and untreated |
| $I_s^T$ | Population infected with the ARV-sensitive strain seeking treatment |
| $I_r^T$ | Population infected with the ARV-resistant strain seeking treatment |
| $\gamma_s$ | Rate at which those with the ARV-sensitive strain seek testing and treatment |
| $\gamma_r$ | Rate at which those with the ARV-resistant strain seek testing and treatment |
| $\lambda$ | Transmission rate of either strain to the susceptible population |
| $\alpha$ | Rate at which treatment induces ARV-sensitive $\to$ ARV-resistant mutation |
- susceptible,
- infected with a sensitive strain and not undergoing treatment,
- infected with a sensitive strain and with treatment,
- infected with a resistant strain and without treatment, and
- infected with a resistant strain and with treatment.

Not only do individuals in treatment have a different death rate from individuals not in treatment, but they also behave differently: There is no transmission from this group.

# Assumptions

- Although the absolute assumption that treated individuals no longer transmit is markedly false [Baggaley et al. 2005], the change in sexual behavior of individuals who know they are infected appears to have had a significant impact on the recent spread of the disease [UNAIDS 2005b]; hence the assumption represents a best-case scenario for combination ART treatment and counseling.
- The projected birth rates given in the literature for the next century, assuming medium fertility, are valid. Our model normalizes the healthy birth rates to the ratio of healthy individuals in society.
- We approximate that the infected populations will not contribute to the birth rates, because infected offspring will not have a significant chance to play a role [UNAIDS 2005b]. This simplifying approximation ignores the fact that without treatment, pregnant mothers have only a $35\%$ chance of passing the disease to their children.
- Both strains of the virus, the ARV-sensitive and the ARV-resistant, have equal transmission rates.
- No significant mass migrations, natural disasters, or other demographic-altering events occur.

# Features of the Model

Some interesting effects that this model can address include:

- By setting $\gamma_{s} = \alpha = 0$ and $I_r^u(0) = I_r^T(0) = 0$, the model becomes equivalent to the unchecked dynamics of an SIR model with birth and death rates. We use this approach in analyzing Task #1.
- By setting $\alpha = 0$ and $I_r^u(0) = I_r^T(0) = 0$, the model captures treatment effects, including the extension of life due to treatment. Based on the magnitude of $I_s^T$ during each year and data on the cost of treatment per individual per year, the model could then describe how much funding would be required to provide treatment to that ratio of the population. We use this approach in analyzing Task #2.
- The model adapts to treatment-resistant strains. The same economic analysis is then possible by using the magnitude of $I_s^T + I_r^T$ against the rest of the population. We use this approach in analyzing Task #3.

# Critical Countries

Our choices of critical countries were influenced by the UNAIDS December 2005 update on the AIDS epidemic [UNAIDS 2005b]. Some criteria that we considered were:

- the percentage of the country's total population infected,
- the total number of AIDS cases,
- the current resources available to the government,
- the rate of growth of AIDS cases, and
- the effect of the specific country on the global AIDS epidemic.

We selected as critical countries in their respective continents South Africa, Ukraine, India, Honduras, Mexico, and Australia.

# Projected Unchecked Infections

We determine the expected rate of change in the number of infections for our critical countries from 2005 to 2050 with no treatment or vaccine.

# Model

We do not consider resistant strains or any kind of treatment. Thus, in the general model, we set $\gamma_{s} = \alpha = 0$ and $I_r^u(0) = I_r^T(0) = 0$. This allows a great simplification in the accessible states of the system as well as the independent variables.

The most important assumption is the change in behavior when a person becomes aware of their infection. For simplicity, the model presumes the best case: An individual will not knowingly risk infecting another.
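With $\gamma_s = \alpha = 0$ and the resistant compartments empty, the system collapses to a two-compartment susceptible/infected model. A minimal numerical sketch follows; every parameter value is an invented illustration rather than a fitted value, and the delayed recruitment term $b(t - t_0)S(t - t_0)$ is approximated by a constant inflow, since scipy offers no delay-differential solver.

```python
# Sketch of the unchecked special case (gamma_s = alpha = 0, no resistant strain).
# All parameter values are illustrative assumptions, not the paper's fitted values.
from scipy.integrate import solve_ivp

B = 0.9e6    # new susceptibles entering the 15-49 pool per year (assumed constant)
mu = 0.02    # background death rate (assumed)
lam = 6e-9   # transmission coefficient lambda (assumed)
v = 0.08     # AIDS-related acceleration of the death rate (assumed)

def unchecked(t, y):
    S, I = y
    dS = B - mu * S - lam * S * I      # recruitment, background deaths, new infections
    dI = lam * S * I - (mu + v) * I    # new infections, accelerated deaths
    return [dS, dI]

sol = solve_ivp(unchecked, (0.0, 45.0), [30e6, 0.5e6], max_step=0.5)
S_final, I_final = sol.y[:, -1]
```

Calibrating $B$, $\mu$, $\lambda$, and $v$ against a country's census and case data is what turns a sketch like this into country-specific projections.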
# Procedure

To obtain country-specific parameters of death rates and infection rates, we first set the HIV transmission to zero, input the birth-rate data for 1950-2005, then set the death rate to accurately reflect the population data for the country between 1990 and 2005. The death-rate acceleration term is chosen so that $\mu + v$ reflects the $1/e$ lifetime of a population with AIDS. We then adjust the transmission parameter $\lambda$ to match the AIDS cases in the same time period and choose $\gamma$ to reflect the average time to exhibit symptoms. Then we integrate the differential equations to extrapolate the total population, the total diagnosed population, and the total healthy population. The total infected and diagnosed population is considered to be equivalent to AIDS fatalities, since death occurs within a few years of the onset of symptoms in the absence of ARV treatment.

# Results

We show in Figure 1 the projections for the critical countries with no treatment or vaccine.

![](images/af219d3724d2b1e4472b17c1e6d637ae96428535ccd97d1c73d9a3fafd680a40.jpg)

![](images/3d2c332d36d399e044ae40b7e6f0a26feabc4b77a2b852afaef3b674f9109d74.jpg)

![](images/f5dcaf7bed00578e06b837d17d4ab8548b8d8ff2bb6cca2f43bc06344513069d.jpg)

![](images/23177a4c6d3fbd274cf8d5feaa1d22dd48d697d87229d3abb0a3f67265d7d812.jpg)

![](images/d859794dc4b17eda3fac6c33dcb1b7d0f53c3d326f1d78adf528d99b5fd2648d.jpg)

![](images/a90c8cdc6a682191429584022a3c494b3ae0dc6ac2238243f2506d577a7a7a65.jpg)
Figure 1. Model predictions for unchecked HIV infections. N: total population. S: susceptibles. I: infected.

# South Africa

The model predicts catastrophic consequences for South Africa. Without treatment, the population would stop growing and even decline in the next few decades, with the number of HIV infections doubling or more until infected individuals are nearly half the population.
# Ukraine

Ukraine also experienced a dramatic increase in cases in the last century, but the increase plays a lesser role in the model than the recent decline in population, which may have skewed the model parameters. Our model does not include migrations or other non-infection-related factors that affect population; therefore, the model likely exaggerates the problem in the Ukraine.

# India

India's forecasted rate of growth is considerably affected after about 2015, and the population goes into decline around 2030. The model places the inflection point of population growth in the past decade, which is unsubstantiated by empirical data.

# Honduras

Honduras too has a very large proportion of HIV infections, $1.8\%$ in 2004. The increase from very few infections in 1990 to a great number in 2004 again affects the infection-rate variable; if this rate of increase continues, the model shows catastrophic effects for the population as a whole.

# Australia

The graph for Australia differs in being on a logarithmic scale. The number of infections is actually declining, and the empirical data also show a decline in infections over the past decade. Our model reflects the apparent inability of the virus to sustain itself with such low infection rates. That is, infected people are dying faster than they are infecting others. If this trend continues, the virus will simply not have the staying power to remain in the population.

# Mexico

The data for Mexico reflect the rise and the decline in cases over the last decade, attributed to successful treatment and prevention programs [Ziga 1998], a force not represented in our simple model. The model shows a possibly unrealistically large growth in cases within the next decade; but the model is for infection rates without treatment or prevention measures.
# Financial Resources and Foreign Aid

UNAIDS [2005a] has outlined the financial need and resources available in the fight against HIV, including a three-year projection of funds needed to accomplish the following tasks:

- Develop a concerted international effort focusing on all aspects of prevention and treatment.
- Provide $75\%$ of the global group "in most urgent need" with ARV treatment by 2010 if current financial donor trends continue.
- Train medical staff in low-income countries.
- Create 2700 new health centers with funds available by 2010.

A total of $6.1 billion was available in 2004 [UNAIDS 2005a], and projections for 2005-2007 were $8.3 billion, $8.9 billion, and $10 billion.

If people in need are identified only one year before death and provided treatment for that year, $80\%$ coverage could be provided by 2010 with $9.3 billion, assuming a constant geometric growth factor in cases of $1\frac{1}{3}$ from 2008 to 2010, as the study implies [UNAIDS 2005a].

Continued geometric growth quickly becomes unreasonable beyond 2010, the goal date for treating and controlling the majority of the epidemic.

# Projections with ART and Vaccination

# Model

To adapt our model to include ARV therapy and/or a preventive vaccine, we alter the "aware" category to include those who seek ART upon diagnosis. Thus, those who are infected and seeking abatement (though they may not receive it) have an overall increase in life expectancy that we model by reducing the death acceleration term, $v^T$. To include the effects of vaccination, we decrease the "birth" rate (the rate of entry) into the susceptible group to reflect the vaccination rate. For example, for a $75\%$ vaccination rate, $b$ would be reduced to $25\%$ of its value in the absence of vaccination.

Because our model assumes a best-case scenario in which diagnosed individuals no longer transmit the infection, the addition of ARV treatment does not dramatically affect the population dynamics.
Access to ARV treatment does, however, delay the decline of the total population. The coefficient $\lambda$ describing the rate of infection remains the same; but due to the extended life-span of treated cases, infected individuals on average do not die as quickly; they live longer and hence constitute a greater percentage of the population.

Using reasonable values for $v_j^i$, the accelerated death terms, we obtain only mild influence on the unchecked trends from 1950 to 2050. Estimation of $\gamma$, the rate at which people are diagnosed with HIV and seek treatment, is founded on the predicted aid that the country could receive.

If diagnosed individuals continue to communicate the disease while living longer, their greater numbers mean greater growth of HIV. On the other hand, longer-living individuals would help offset the imminent danger to a hard-hit nation by adding to productivity and supporting the next generation of would-be orphans. Ethical mandates seem to require that ARV treatments be administered if at all possible.

# Vaccination

An HIV vaccine would be one of the greatest medical accomplishments of the 21st century. With $100\%$ vaccination, the existing AIDS population decays exponentially to zero. But $100\%$ vaccination is not a plausible scenario for most countries.

For a $75\%$ vaccination rate, the birth-rate term (rate at which people susceptible to HIV enter the general population), $b(t - t_0)$, would drop to $25\%$ of its value, assuming that the unvaccinated population is the least likely to have their children vaccinated. In this simplified scenario, in South Africa the total susceptible population would decrease starting in 2015, the year when our hypothetical vaccine is introduced (Figure 2).

![](images/8d2f02b39942193cc25277f8aff0331db18bcdc9d8eca993cb96b4411cc09392.jpg)
Figure 2.
In the presence of ART and emergent ART resistance, the susceptible population in South Africa decreases at the introduction of an HIV vaccine in 2015, and the total number of AIDS cases immediately declines as well.

The vaccine causes a decrease in the total HIV-positive population and thus in the total number of AIDS deaths. The HIV-negative vaccinated population is not considered in this model. The figure also incorporates the effects of ART resistance of HIV, discussed in the following section.

The population dynamics show that the presence of a vaccine not only reduces the susceptible population but also causes a downward trend in the total number of AIDS cases as soon as those vaccinated would normally enter the susceptible population.

We assume that vaccination provides perfect immunity and does not cause infections, and that the vaccinated population secures vaccinations for its children, so as to effectively isolate our model as a subset of the total population. Thus, only unvaccinated susceptible individuals contribute to the susceptible pool. This dramatic change to the dynamics has an effect only $t_0$ years after the vaccination is introduced.

# Effect of ART

We show the effect of ART in Figure 3.

# South Africa

If ART had been heavily supplied concurrently with the rise in cases in South Africa during the 1990s, the consequences would be visible even by 2006. The susceptible population is the same with or without ART. Infected individuals in South Africa would have had extended lives with ARV treatment, resisting the downturn of the total population.

# Ukraine

Our model's prediction for the Ukraine population again shows that ART would have a large effect. If supplied during the last decade, the treatment would have slowed the population decline during the next half century.
However, because our model unreasonably takes the Ukraine's recent population decline to be due to AIDS, the effect of the treatment almost certainly would be smaller than described.

# India

ART supplied during and after the 1990s would not offset India's population trajectory until well into the twenty-first century, due to the low incidence of HIV relative to India's size and recent exponential growth. Nevertheless, the population would peak at a significantly higher value many years later with ART than without.

# Honduras

Because Honduras experienced an especially large increase in HIV rate during the last decade, we fit the parameters of the model to a staggered trend. ART treatment delays the population decline for only a few years but has a tremendous effect on total population by 2050.

![](images/3e3e0ef97c9fc4022d4267190cc4801fdbe2bca30729533730eb7b94a91cdc63.jpg)

![](images/dea2c5ea7e345678672b1f887be5877ba0c3be8b1bf98520ec0e7faa8d2d1036.jpg)

![](images/199394ec0f1022cb82ff348b740cb43cb1d9f2dd3b9af68f6e512f843392438f.jpg)
India

![](images/cf7d75b07712fa1938197c238c3b7c03e44ca7250829d8a226cf2e07d0a2c8a3.jpg)
Honduras

![](images/859358065cc5031b984a4d7be93d821c6710c9117aa337dbd29def0a1043f7.jpg)
Mexico
Figure 3. Model predictions with and without ART but with no development of ART-resistance. (N: total population. S: susceptibles. I: infected.)

# Australia

Australia's HIV infection rate was already too small and not self-sustaining in the last model; we do not model it further here.

# Mexico

ART in Mexico has a significant effect only when the infected group reaches a substantial proportion of the population. The low HIV rate in Mexico (and in Australia) leads to the prediction that treatment becomes nationally critical only after the year 2000. There is no change in the early population trajectory, due to the very low incidence of AIDS.
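The vaccination adjustment described above (cutting the recruitment term to a fraction of its value once a vaccine is introduced) can be sketched on a simplified susceptible/infected reduction of the model. All numbers here are illustrative assumptions, not fitted values for any country, and the delayed recruitment term is again approximated by a constant inflow.

```python
# Sketch of the vaccination scenario: identical dynamics with and without the
# vaccine, except that recruitment into the susceptible pool drops to
# (1 - coverage) of its value once the vaccine is introduced at t_vax.
# Parameter values are illustrative assumptions only.
from scipy.integrate import solve_ivp

B, mu, lam, v = 0.9e6, 0.02, 6e-9, 0.08  # assumed demographic and epidemic rates
t_vax, coverage = 10.0, 0.75             # vaccine at t = 10 years, 75% uptake (assumed)

def rhs(t, y, vaccinated):
    S, I = y
    b_eff = B * (1 - coverage) if (vaccinated and t >= t_vax) else B
    return [b_eff - mu * S - lam * S * I,   # recruitment, deaths, new infections
            lam * S * I - (mu + v) * I]     # new infections, accelerated deaths

y0, span = [30e6, 0.5e6], (0.0, 30.0)
base = solve_ivp(rhs, span, y0, args=(False,), dense_output=True, max_step=0.5)
vax = solve_ivp(rhs, span, y0, args=(True,), dense_output=True, max_step=0.5)

# Ten years after introduction, both the susceptible pool and the infected
# population are smaller than in the no-vaccine baseline.
S_b, I_b = base.sol(20.0)
S_v, I_v = vax.sol(20.0)
```

The ART adjustment works the same way in this framework: instead of scaling the recruitment term, one lowers the death-acceleration term for the treated compartment.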
# The Economic Strain on Critical Countries

From the problem statement, the cost of administering treatment to an infected individual is $1,100 per person per year of aid, although we found the cost to vary with country and country status [Cohen et al. 2005]. We argue, along the lines of the "Consensus Statement on Antiretroviral Treatment for AIDS in Poor Countries" [Adams et al. 2001] included with the problem statement, that donations from foreign countries can cover a good part of the current demand for treatment.

But even generous ART has only minor effects on the model, since ART is not a cure but only an extension of life that may have competing effects in HIV prevention.

Unless foreign donations, UN funding, and other public-sector resources can afford to provide ARVs, it is unlikely that any but a small minority of patients would receive them.

Based on our models, in the worst-case scenario, where the only individuals treated are those who would live only 1-2 years more without treatment, public funds must cover $30\%$ of HIV infections around the world (the percentage at this advanced stage) [UNAIDS 2005b]. Summing the HIV infections in each of our critical countries, by 2030 the UN could have been responsible for 90 million cases, costing $90 billion, approximately the entire amount of funding available until then.

Vaccinating the world's population would cost $12 billion, at $0.75 per vaccination with a three-stage process.

# Therapy-Resistant Strains

We include the possibility of the development of ART-resistant strains of HIV. We use three countries, South Africa, India, and Mexico, as examples.

# Model

The primary difference between the new model and the earlier model for ART is that the infected population variable $I$ is split into two separate variables, $I_{s}$ and $I_{r}$, to distinguish those infected with the ARV-sensitive strain from those infected with the ARV-resistant strain.
# ARV-Resistant Strain Emergence and Vaccination

We assume that initially there is no population infected with the resistant strain and that the emergence rate is proportional to treatment. Specifically, we set the death-rate acceleration factor of those undergoing treatment with the resistant strain, $v_{r}^{T}$, to be equal to the death-rate acceleration term for those with the treatment-resistant strain but not seeking treatment, $v_{r}^{u}$. The effect is to blunt the effect of ART and bring the population predictions towards the values that we obtained earlier in the absence of ART. Figure 4 shows our model's predictions for total population and the resistant-strain emergence under ART for South Africa, India, and Mexico.

# Conclusion

We feel that our model is appropriate for modeling the spread of HIV in otherwise stable countries and could be used to target AIDS funding better.

By modeling the spread of AIDS with a system of differential equations, we make relatively short-term assumptions about the course of the epidemic. We observe huge increases in global AIDS cases and population downturns for several of the critical countries that we modeled.

Vaccination appears to be the only pharmaceutical way to stop the spread of HIV. However, ART allows a country to maintain a larger population and thus should be undertaken to the maximum possible extent, due both to humanitarian considerations and to the effect of global population atrophy on the world economy. Financial trends indicate increasing available funding for ART treatment globally. Should a vaccine ever become available, our financial analysis shows that it should be made available as quickly as possible.

Given the increasing availability of funds for the global fight against AIDS, all possible efforts should be made to distribute ARV medication to those populations most in need.
![](images/57d931a07e8853d9f089158b0413be1f5b1eadabef3f5668d6b57fb361b4879e.jpg)

![](images/800e9bb2d4e50ce13080a146ede7299dc3a3f52993f172806a8e5c0599abe327.jpg)

![](images/2b89535767eed5ca2bdf8bc0c229c02b9997336f61b289eafcdf37ab4f29b0b9.jpg)
Figure 4. Model predictions with developed resistance under ART. The net beneficial effect of ART is decreased. (N: total population. S: susceptibles. I: infected.)

# References

Adams, Gregor, et al. 2001. Consensus statement on antiretroviral treatment for AIDS in poor countries. http://www.hsph.harvard.edu/bioethics/pdf/consensus_aims_therapy.pdf.

Baggaley, Rebecca F., Neil M. Ferguson, and Geoff P. Garnett. 2005. The epidemiological impact of antiretroviral use predicted by mathematical models: A review. *Emerging Themes in Epidemiology* 2 (10 September 2005): 9ff. http://www.ete-online.com/content/2/1/9.

Bauch, C.T. 2002. A versatile ODE approximation to a network model for the spread of sexually transmitted diseases. *Journal of Mathematical Biology* 45 (5): 375-395.

Boily, M.C., et al. 2004. Changes in the transmission dynamics of the HIV epidemic after the wide scale use of antiretroviral therapy could explain increases in sexually transmitted infections: Results from mathematical models. *Sexually Transmitted Diseases* 31 (2): 100-113.

Cohen, Deborah A., Shin-Yi Wu, and Thomas A. Farley. 2005. Cost-effective allocation of government funds to prevent HIV infection. *Health Affairs* 24 (4): 915-926. http://content.healthaffairs.org/cgi/content/abstract/24/4/915.

Gayle, H. 2000. An overview of the global HIV/AIDS epidemic, with a focus on the United States. *AIDS* 14 (Sept. 2000) (Suppl 2): S8-S17.

Hyman, James M., Jia Li, and E. Ann Stanley. 2003. Modeling the impact of random screening and contact tracing in reducing the spread of HIV. *Mathematical Biosciences* 181: 17-54. http://math.lanl.gov/~mac/papers/bio/HLS03.pdf.

Law, M.G., et al. 2001.
Modeling the effect of combination antiretroviral treatments on HIV incidence. *AIDS* 15 (10): 1287-1294.

Moatti, J.P., et al. 2003. Access to antiretroviral treatment and sexual behaviors of HIV-infected patients aware of their serostatus in Cote d'Ivoire. *AIDS* 17 (Suppl 3): S69-S77.

UNAIDS. 2003a. UNAIDS/WHO Epidemiological Fact Sheet: South Africa. data.unaids.org/Publications/Fact-Sheets01/southafrica_EN.pdf.

UNAIDS. 2003b. UNAIDS/WHO Epidemiological Fact Sheet: Ukraine. data.unaids.org/Publications/Fact-Sheets01/ukraine_EN.pdf.

UNAIDS. 2005a. Resource needs for an expanded response to AIDS in low- and middle-income countries. Geneva: August 2005.

UNAIDS. 2005b. UNAIDS/WHO AIDS Epidemic Update: December 2005. http://www.unaids.org/epi/2005/doc/report_pdf.asp.

UNAIDS. 2005c. Update Report on Sub-Saharan Africa: December 2005. http://www.unaids.org/epi/2005/doc/EPIupdate2005_html_en/epi05_05_en.htm.

UNAIDS. 2005d. Update Report on Eastern Europe and Central Asia: December 2005. http://www.unaids.org/epi/2005/doc/EPIupdate2005_pdf_en/Epi05_07_en.pdf.

UNAIDS. 2005e. Update Report on Latin America: December 2005. http://www.unaids.org/epi/2005/doc/EPIupdate2005_html_en/epi05_09_en.htm.

United Nations General Assembly. 2001. Declaration of Commitment on HIV/AIDS. Geneva: June 2001.

Ziga, Patricia Uribe. 1998. AIDS in Mexico. *Journal of the International Association of Physicians in AIDS Care* (November 1998). http://www.thebody.com/iapac/mexico/mexico.html.

# About the Authors

![](images/34dc498e664b0f22fd95dad38fd39c299e4300dcd1ce72a01d4fb6522a553f6c.jpg)
Michael Martin, Cris Cecka, and Tristan Sharp.

Mike Martin graduated with high distinction from Harvey Mudd College in May 2006 with an honors degree in physics. His interests center on fundamental quantum theory and its applications to atomic, molecular, and optical physics.
His undergraduate experimental work was conducted through the Sandia National Labs clinic project at Harvey Mudd, where he and teammates worked to characterize soot aerosols optically. In addition to studying physics, Mike spent a semester in Paris studying literature and art. He will begin graduate study in physics at the University of Colorado at Boulder in the fall of 2006. + +Cris Cecka graduated from Harvey Mudd College in 2006 as a Physics and Math-Computer Science major. He will be attending graduate school at Stanford University in the Institute of Computational and Mathematical Engineering. + +Tristan Sharp graduated from Harvey Mudd College in 2006 with distinction. He also spent six months studying physics at the Technische Universität Dresden, in Germany. He is interested in computational fluid dynamics and will be pursuing a graduate degree at UCLA. He will also be working in industry on physics-based modeling and algorithm design. Previous experiences that helped prepare Tristan for the ICM include Harvey Mudd's Clinic Program, in which he studied light's interaction with soot particles, and his work on a student-initiated research project through NASA's Reduced Gravity Office, exploring transitions in two-phase flows. + +# The Spreading HIV/AIDS Problem + +Adam Seybert + +David Ryan + +Nicholas Ross + +United States Military Academy + +West Point, NY + +Advisor: Randal Hickman + +# Summary + +We propose four main focus areas in which the world can win its battle against AIDS: + +- Identifying individuals who are HIV positive, through blood testing (in batches, to save money). People who are HIV-positive will recognize that they are infected and take measures to ensure that they do not spread the virus to other people. We plan to test everybody in sub-Saharan Africa by spending $1.5 billion/year for 3 years. +- Educating the public on how HIV can be prevented, in order to keep the incidence rate down. 
By showing people that AIDS is incurable, along with the ABCs of prevention, we hope that they will adopt these practices and stop the epidemic. These practices include remaining abstinent ("A"); being faithful ("B") to one uninfected partner if one chooses to be sexually active; and the use, availability, and effectiveness of condoms ("C") during intercourse.
- Antiretroviral treatments (ART) for women who are or become pregnant before our plan is implemented. These treatments help reduce the risk of an infected mother passing the virus to her child during pregnancy, birth, and nursing. This area will be the most difficult to support financially, due to the cost of producing and distributing treatments. However, surplus funds will alleviate this problem after the testing phase is complete.
- A vaccine; one might be available by 2011. Though this date could be moved earlier with more funding, we suggest keeping vaccine research resources stable and focusing on the only $100\%$ guaranteed prevention, the ABCs. The vaccine, however, might not be completely effective and might accidentally allow drug-resistant HIV strains to evolve. By 2011, the testing phase will be complete, and governments should by then have established a basic national education program, freeing up additional assets that can be put into manufacturing and dispensing the vaccine.

# Problem Approach

- Task 1: To get the rate of change in HIV/AIDS for 2006 to 2050, we develop a model of the growth/decay of the population for each country. We use the prevalence rate, the number of new cases, and the population each year (as the carrying capacity) to develop a logistic growth model for the virus, assuming that there is no intervention in the spread of HIV.
- Task 2: To account for the introduction of treatments, vaccinations, and a combination of the two, we develop state diagrams to show the possible paths of HIV.
Using the diagrams and determining rates of change from one state to another, we formulate new plots of the total population and the spread of AIDS.
- Task 3: There is no way to predict the rate at which the virus would become drug-resistant. We would have to make too many assumptions, such as the number of people being treated, the amount of treatment they were receiving, the effectiveness of that treatment, the frequency of each treatment, the adherence to the treatment regimen, and the probability of the virus mutating. Hence, we offer a qualitative approach rather than a quantitative one. We look at what effect drug-resistant strains would have on our model, specifically the rates of change from one state to another.
- Task 4: In our white paper to the United Nations, we focus on allocating resources to treat the problem and not the symptoms. The surest way to keep from getting HIV/AIDS is practicing abstinence, being faithful, and using condoms during intercourse. Therefore, HIV testing and education are our primary objectives in combating the epidemic.

# Assumptions

- The population with HIV/AIDS is homogeneously distributed. This assumption is critical. If those infected with HIV and AIDS interact only with others with HIV and AIDS, then the infection will never spread to the uninfected populace; with homogeneity, the infection could spread to everyone.
- The number of people infected with HIV/AIDS includes those who have been diagnosed and those who have yet to be diagnosed. Without this assumption, the count of people infected each year, based only on diagnosed patients, would be too low.
- The probability that a person who has been vaccinated contracts HIV from someone who is undergoing ARV treatment is negligible.
- Each country to be modeled is a closed system. There is no immigration or emigration of infected individuals.
# Task 1: The Rate of Change from 2006 to 2050

We chose the following countries: Haiti, Guyana, South Africa, Ukraine, India, and Australia, for reasons noted below.

The following state diagram is the base for our models. We reserve the technical details of the formulation of a system of differential equations for the Appendix.

![](images/55a67a1613ac28e27fdc428251982e8607bd2189b757aae6bb36770bfc849c8c.jpg)
Figure 1. Simplistic HIV/AIDS state diagram.

People are born either infected with HIV or non-infected. A person who is not infected can become infected; but once a person becomes infected, there is no known way to rid oneself of the infection. Both states transition to deaths, but at different rates.

We chose to model Haiti from North America because of its relatively high prevalence rate of $2.6\%$, high compared to the $1.4\%$ prevalence across the border in the Dominican Republic or the $0.15\%$ and $0.29\%$ prevalences in more developed countries such as Canada and the U.S. Haiti is also one of the poorest countries on the continent, with a GNP per capita of only $510 per year. With treatment cost at well over $1,000 per person per year [Adams et al. 2001], the average Haitian cannot afford HIV medication.

Guyana represents South America in our model. Guyana has a prevalence rate of $2.0\%$. While Brazil has 36 times as many cases as Guyana, Guyana's small population makes the problem more widespread there. In addition, Brazil has already taken steps to institute a solution to its AIDS problem, spending $444 million to provide $100\%$ of the HIV-infected with treatment [Andrews 2004]. Guyana, however, has a far lower GNP than Brazil and fewer resources to dedicate to HIV.

Although Africa is by far the hardest-hit continent in the world in terms of HIV/AIDS, South Africa stands out even among African countries, leading the world with 4.2 million cases and a $9.33\%$ prevalence.
Additionally, $29.5\%$ of pregnant women tested in South Africa test positive, meaning that the disease is being propagated to children [AVERT.org 2006].

In 1999, Ukraine reported 240,000 cases, with a total population that is decreasing rather than increasing.

India has begun to show disturbing signs that it may be the next hotbed of HIV/AIDS. It has the second-largest population, 1.1 billion, as well as the second-largest HIV-positive population, 3.7 million. This leads to a deceptively low prevalence rate of $0.47\%$; but India is expected to surpass South Africa as having the world's largest HIV-positive population by 2010 [AVERT.org 2006].

From the continent of Australia, we had little choice and selected Australia itself. All of the countries in the region have low rates of HIV infection; Australia has only 14,000 cases, the most in the region.

# Assumptions

- For a simplified model, we assume that countries will not intervene to stem the spread of the virus through medical treatments.
- AIDS spreads through a population according to a logistic function.

This second assumption is realistic because many similar models, such as the spread of technology and the growth of a population, are based on logistic growth. Each infected person makes contacts with a fraction of the non-HIV population in a way that could transmit the virus, and only a fraction of those contacts actually transmit the virus. So the number of people whom an infected person infects in a year is the number of contacts per non-infected person, times the infections per contact, times the non-infected population:

$$
\left(\frac{\text{contacts}}{\text{non-HIV}}\right) \times \left(\frac{\text{transmissions}}{\text{contacts}}\right) \times (\text{non-HIV}). \tag{1}
$$

We call the product of the first two fractions the transmission rate $\beta_{N}$.
So the total number of transmissions per year is $\beta_{N}NI$: the transmission rate times the non-infected population $N$ times the number of infected people $I$. Since the total population is growing logistically, HIV also spreads logistically.

Expression (1) assumes that the transmission rate is constant. However, since transmission of HIV is much more likely with certain human behaviors, such as sexual encounters and intravenous drug use, the transmission rate could be different for each infected person. Additionally, certain types of contacts have a higher fraction of transmissions per contact. For example, anal intercourse has a higher fraction of transmissions per contact than vaginal intercourse; likewise, transmission of the virus from a male to a female is more likely than from a female to a male [World Bank Group 2006]. To simplify the model, we just consider an average fraction of transmissions per contact.

# Logistic Models for Critical Countries

We now consider a graphical model of the spread of HIV in Haiti (Figure 2).

![](images/aef217aad13ccf4dd2b3f36ab9e9b4332a3be07cdd7e6f71e478ecb766ac8b51.jpg)
Figure 2. HIV and total population projections for Haiti.

To develop this graphical model, we follow our assumption that HIV follows simple logistic growth:

$$
\frac{dI}{dt} = \alpha_{I} I - \frac{\alpha_{I} I^{2}}{K_{I}}, \tag{2}
$$

where $\alpha_{I}$ is the initial growth rate of the infected population and $K_{I}$ is its carrying capacity. We can find values for both by fitting a second-order polynomial to a graph of the infected population's rate of change vs. the infected population. To do so, we created a table of data for Haiti from 1985 to 2005: population, number of HIV infections, and prevalence rate [Central Intelligence Agency 2001; UNICEF 2005]. We graphed the change in the number of infections each year against the total number of infections that year.
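The fitting-and-projection procedure just described can be sketched in Python. The infection series below is synthetic, generated from an assumed logistic recurrence purely to show the mechanics; it is not the paper's Haiti data, and the parameter values are placeholders.

```python
import numpy as np

# Synthetic infected-population series, one value per year for 1985-2005,
# generated from an assumed logistic recurrence (illustrative only).
alpha_true, K_true = 0.12, 400_000.0
I = [20_000.0]
for _ in range(20):
    I.append(I[-1] + alpha_true * I[-1] * (1.0 - I[-1] / K_true))
I = np.array(I)

# Change in infections each year vs. infections that year.
dI = np.diff(I)

# dI/dt = alpha_I*I - (alpha_I/K_I)*I^2 is quadratic in I, so fit a
# second-order polynomial and read the parameters off the coefficients.
c2, c1, _c0 = np.polyfit(I[:-1], dI, 2)
alpha_fit = c1              # coefficient of I
K_fit = -alpha_fit / c2     # coefficient of I^2 is -alpha_I/K_I

# Project the infected population forward, one step per year, 2006-2050.
proj = [I[-1]]
for _ in range(45):
    proj.append(proj[-1] + alpha_fit * proj[-1] * (1.0 - proj[-1] / K_fit))
```

With exact logistic data the fit recovers the generating parameters almost exactly; with real, noisy data the recovered $\alpha_I$ and $K_I$ would only approximate the underlying rates.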
Based on the values obtained, we projected the infected population from 2006 to 2050.

Since the carrying capacity is based on the population, and the population increases, the graph does not follow an exactly logistic curve. However, it does show how quickly the AIDS epidemic is spreading. Since HIV/AIDS has been around for only 25 years, predictions 50 years into the future can only give an idea of the potential severity of the spread.

Following the same procedure, we created graphs for each of the other countries (Figure 3).

![](images/7e8f3888244a194a7992c555a51491c59f9155a1e87eee10423544721a26319c.jpg)

![](images/a63de255c529daea91240e0404e637bc577bbb6afd5b422f7ca936967050ea93.jpg)

![](images/309bdc58ad82f15efe11f500b000a1079791d8ad5bbe267aecfe5fe40d150287.jpg)

![](images/f5667ac55c660238be8a684fb6aa7bcb379c19393e41a2ca21c782d3a39d513c.jpg)

![](images/61742be5535fa7c2fc33527378e2399843d188678d1ba23a595564a7fabfa78f.jpg)

![](images/029f011f3611126cfd2c594dd41be305e6ec64c292464eaa0e2c6cbd625c9e6f.jpg)
Figure 3. Total population and HIV population projections for the critical countries.

# Guyana

The overall population is projected to decrease, which one would expect to lead to the AIDS level also decreasing. However, our models project the infected population growth rate and then calculate the number of cases based on this growth rate and the population. Therefore, as the infection growth rate increases and the population decreases, the number of AIDS cases can increase, decrease, or remain the same, depending entirely on the relationship between the growth rate and the rate of decrease of the population.

# Ukraine

Ukraine presents a scenario similar to Guyana's. Were the population to remain stable, the number of infected people would rise much more quickly. This graphical model still presents an interesting issue: the infected population will eventually reach the total population.
# South Africa

Based on the initial quick growth of HIV, the model predicts continued alarming increases in cases, barring intervention, to $69.3\%$ of the population in 2050.

# India

The large population creates a deceptively small prevalence rate. The graph makes it look as if the number of infected people is small and growing slowly, yet the numbers are large, increasing quickly, and poised to surpass South Africa's by 2010.

# Australia

HIV is growing steadily, yet more slowly than the population. This is a stark contrast to almost all of the other critical countries.

# Task 2: Drug Therapy and Vaccination

Therapy refers to antiretroviral treatment (ART), which is currently prolonging the lives of many people living with HIV. ARV treatments lower the concentration of HIV in the bloodstream, allowing the body to fight the effects longer and making the disease more difficult to transmit to others. The decreased transmission rate among infected individuals means that fewer people each year will contract HIV than in the same scenario without ART. Introducing ARV treatment changes the state diagram to Figure 4.

![](images/0858444af9748455e799048ad81868e233cbaa1945b040d826dd0e3f71fd75e8.jpg)
Figure 4. Treatment state diagram.

As treatments become available, infected people begin receiving them. Non-Infected people will not need treatment, which is why there is no transition from the Non-Infected state to the Treated state.

Although no vaccine exists yet, the state diagram in Figure 5 depicts the changes in the model for a vaccine.

A vaccine would be administered only to the Non-Infected population or those being born. Since the vaccine is not an elixir of life, there will still be some death rate.

ARV treatments are available now; should vaccines become available in the future, the two methods would be used together to attempt to eliminate AIDS.
This creates still another model from which we can make predictions (Figure 6).

![](images/5648310c57977ebf79518a503d687101cc18ff98b4e21dfa0dc30c59501ba424.jpg)
Figure 5. Vaccination state diagram.

![](images/c448cb9bb7f01d9837a434eadffe11e00bada50ed78b3e7d5394a690cc4b6e89.jpg)
Figure 6. Vaccination and treatment state diagram.

Without strong data on ART and an actual vaccine, it is difficult to make a quantitative extrapolation of their effects. Qualitatively, however, the factors can be graphed and trends predicted. Figure 7 shows a generic graph based on the differential equations developed at the end of the Appendix.

The graph suggests that ART and a vaccine will be fairly successful in battling AIDS but with some major drawbacks:

- ART is expensive and not widespread. It currently costs $800 per person per year to treat AIDS using name-brand medications; this cost could drop as low as $300 with generic drugs [Andrews 2004].
- If price were the only obstacle, the problem would be simpler. However, ART also makes it much more likely that a patient will develop resistance to a drug, or that the virus will mutate. Both of these are extremely serious considerations that must be weighed against the effectiveness of the treatments themselves. We consider drug resistance in Task 3.

![](images/2e29bc35616b008b6dfd25a2b5ed2a86be7de4bf79987b2fb7c0b2d4058341e3.jpg)
Figure 7. Effects of ARV treatment and vaccination.

- Cost and resistance also surface with a vaccine, as would a mutation that would render the vaccine useless. We discuss mutations in Task 3.
- Availability of financial resources has been a problem; a hopeful sign is that with the worldwide concern for AIDS growing quickly, the level of financial resources and relief is also on the rise.

Figure 8 shows a generic example of what can be expected to happen in South Africa with ART and prevention education.
When compared with Figure 2 (spread of AIDS left unchecked), the difference is immediately evident as well as inspiring.

Why, then, do people in the world still suffer from AIDS if it is so simple to make the line on the graph go down? Aside from the fact that no vaccine is yet available, the main factor is funding. South Africa, for instance, has a GNP per capita of only $3,020. While this is enough to purchase ART, an individual would have to make substantial sacrifices to do so. Additionally, wealth is concentrated in a small percentage of the population.

In deciding where to send aid and which areas represent the largest threat of infection expansion, GNP and a country's economic strength should be taken into consideration as major factors. Not all countries that are struggling against AIDS have even the mediocre GNP per capita of South Africa. Haiti, for example, is expecting a much sharper increase in the number of cases than Australia. Australia has a GNP per capita of $20,240, allowing individuals infected with HIV to afford ART. Haitians, with a per capita GNP of $510, cannot afford even a year's worth of ART with an entire year's income. This is one major reason why

![](images/33c2f5636e506f7d3c6d6ab3dc2e00a8725604f1707e3f0d89fb2a678d522089.jpg)
Figure 8. HIV growth with education and ARV treatments.

HIV prevalence is increasing so quickly in the Third World—the countries cannot stop it. This is why foreign aid is imperative and needs to be directed to the poorest countries.

# Task 3: Drug-Resistant Strains

When treatments are weak or are not taken correctly, viruses can develop resistance to the treatments. Both $\beta_{T}$ and $\beta_{V}$, the coefficients for transmission in the cases of treatment and of vaccination, would be changed by viral resistance. We can consider such a transmission rate as the reciprocal of the drug's or vaccine's effectiveness.
As resistance develops, effectiveness decreases; as effectiveness decreases, transmission (and the value of $\beta$) increases. In time, it is possible for this transmission rate to catch up with $\beta_{N}$, which would mean that vaccinating or treating people no longer makes them any less susceptible to the virus.

In addition to drug resistance, HIV could mutate as a result of treatment. Mutations could produce strains of the virus that are more active and destroy the body more quickly.

# Conclusion: Recommendations

Sub-Saharan Africa is the decisive point for stopping the spread of AIDS. Of the expected $7 billion to be spent on AIDS worldwide in 2006, we recommend that $4.5 billion be spent in sub-Saharan Africa on testing the population, educating the population about prevention, distributing preventive measures, limited dispersal of treatment, and providing baby formula to nursing mothers infected with HIV. This would be a three-year program, spending the same proportion of money each year.

Following the three years of intensive effort in Africa, we would conduct an analysis of gains to determine success. If unsuccessful, the plan would be altered; if successful, focus would shift to India and Southeast Asia, though Africa would continue to receive money for education, prevention, treatment, and new mothers. In India, the distribution of resources would be similar to that in Africa, focusing initially on testing, education, prevention, limited treatment, and new mothers. This phase is expected to take longer in India than in sub-Saharan Africa: five years instead of three.

Southeast Asia and sub-Saharan Africa pose the largest threat to world health due to AIDS. After these regions have been addressed, our program will hopefully be able to enter a phase of vaccination, pending development of a vaccine. It is important to focus on testing, education, and prevention up to this point, using ARV treatments sparingly so as to avoid resistance and mutated strains.
At this point, the plan will once again be assessed and adjusted to meet new threats.

# References

Adams, Gregor, et al. 2001. Consensus statement on antiretroviral treatment for AIDS in poor countries. http://www.hsph.harvard.edu/bioethics/pdf/consensus_aims_therapy.pdf.
Andrews, Jason. 2004. Politics of patents. http://www.yale.edu/aidsnetwork/AIDS%20and%20Access_A%20Summary.ppt. Accessed 4 February 2006.
AVERT.org. 2006 (6 February 2006). http://www.avert.org. Accessed 4 February 2006.
Central Intelligence Agency. 2001. The World Factbook. http://www.cia.gov/cia/publications/factbook/index.html. Accessed 3 February 2006.
UNICEF. 2005. Monitoring the situation of children and women. http://www.childinfo.org/index2.htm. Accessed 3 February 2006.
World Bank Group. 2006. Strategic lessons from the epidemiology of HIV. http://www.worldbank.org/aids-econ/confront/confrontfull/chapter2.html. Accessed 3 February 2006.

# Appendix

We model the total population $P = N + V + I + T$ as composed of non-infected non-vaccinateds $N$, vaccinateds $V$, infected but untreateds $I$, and infected and treateds $T$, with corresponding subscripts for parameters. We let $\alpha$ and $\Omega$ be rates of birth into and death out of each subpopulation; $\beta_{N}$, $\beta_{T}$, $\beta_{V}$ be infection rates for non-infecteds, infected but treateds, and vaccinateds; and $\lambda_{V}$ and $\lambda_{T}$ be rates of vaccination of non-infecteds and treatment of infecteds.
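The compartments and rates defined above can be projected numerically with a simple forward-Euler scheme. The sketch below follows the Figure 6 model with the $\beta_V = 0$ assumption; all parameter values are illustrative placeholders, not fitted to any country's data.

```python
# Forward-Euler sketch of the appendix's four-compartment model (Figure 6).
# All parameter values are illustrative placeholders, not fitted values.

def step(N, V, I, T, dt, p):
    """Advance (N, V, I, T) by one Euler step of length dt."""
    P = N + V + I + T
    dN = (p["aP"] * P * (1.0 - P / p["K"])
          - (p["aI"] - p["OI"]) - (p["aV"] - p["OV"]) + p["OT"]
          - p["bN"] * N * I - p["bT"] * N * T - p["lV"])
    dV = p["aV"] + p["lV"] - p["bV"] * V * I
    dI = p["aI"] + p["bN"] * N * I + p["bT"] * N * T - p["lT"] - p["OT"]
    dT = p["lT"] - p["OT"]
    return N + dt * dN, V + dt * dV, I + dt * dI, T + dt * dT

params = {
    "aP": 0.02, "K": 1.0e7,      # logistic birth rate, carrying capacity
    "aI": 1.0e3, "OI": 5.0e3,    # births into / deaths out of I
    "aV": 0.0,  "OV": 0.0,       # births into / deaths out of V
    "OT": 5.0e2,                 # deaths out of T
    "bN": 1.0e-8, "bT": 2.0e-9,  # infection of N by contact with I, with T
    "bV": 0.0,                   # beta_V = 0, as assumed
    "lV": 1.0e4, "lT": 1.0e3,    # vaccination and treatment rates
}

N, V, I, T = 5.0e6, 0.0, 2.0e5, 1.0e4
for _ in range(50):              # project 50 years in 1-year steps
    N, V, I, T = step(N, V, I, T, 1.0, params)
```

Setting `bV`, `lV`, or `lT` to zero recovers the simpler state diagrams of the earlier figures, as noted in the text.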
The resulting system of differential equations, corresponding to the state diagram of Figure 6 and the assumption that $\beta_{V} = 0$, is:

$$
\begin{aligned}
\frac{dN}{dt} &= \alpha_P (N + V + I + T)\left(1 - \frac{N + V + I + T}{K}\right) \\
&\quad - (\alpha_I - \Omega_I) - (\alpha_V - \Omega_V) + \Omega_T - \beta_N N I - \beta_T N T - \lambda_V,
\end{aligned}
$$

$$
\begin{aligned}
\frac{dV}{dt} &= \alpha_V + \lambda_V - \beta_V V I, \\
\frac{dI}{dt} &= \alpha_I + \beta_N N I + \beta_T N T - \lambda_T - \Omega_T, \\
\frac{dT}{dt} &= \lambda_T - \Omega_T.
\end{aligned}
$$

Systems of equations for the state diagrams of the other figures are obtained by setting other appropriate parameter values to zero.

# About the Authors

![](images/0c818d59f0c6e188ff5279a773d4531d382992a95b56801861a2be399ed04064.jpg)
David Ryan, Adam Seybert, and Nicholas Ross.

# Judges' Commentary: The Outstanding HIV/AIDS Papers

Kari L. Murad
Dept. of Natural Sciences
College of Saint Rose
Albany, NY 12203
muradk@strose.edu

Joseph Myers
Dept. of Mathematical Sciences
United States Military Academy
West Point, NY 10996
joseph.myers@usma.edu

# Introduction

The final judging for the 2006 Interdisciplinary Contest in Modeling took place at the United States Military Academy on Saturday, February 25, 2006. The seven judges spent a full and enjoyable day reading and comparing a fine set of creative papers.

# The Problem

This year's problem charged our teams with advising the United Nations on how best to allocate financial resources in the global fight against HIV/AIDS. Teams were provided a common set of historical and projected data in a variety of categories for various areas, regions, and countries around the world over time.
The data included the global incidence of AIDS, the geographic distribution of HIV/AIDS subtypes, populations, fertility rates, age data, birth rates, and life expectancies. Teams were then charged with several related interdisciplinary tasks:

- Build a model to estimate the change in the number of HIV/AIDS cases in a variety of selected countries over time.
- Estimate the level of financial resources realistically available from foreign donors. Estimate the expected rate of change in the number of HIV/AIDS cases in the selected countries under realistic assumptions if these resources were used to fund antiretroviral (ARV) drug therapy, if they were used to fund the development of a preventive HIV/AIDS vaccine, and if funding were split to fund both at a lower level of effort for each.
- Reformulate the above three models, but now taking into account the assumption that persons receiving ARV treatment (but at a less than $90\%$ adherence rate) have a $5\%$ chance of producing vaccine-drug-resistant strains.
- Write a white paper to the United Nations recommending how the projected available financial resources should be allocated between ARV treatments and a preventive vaccine, what level of emphasis to give HIV/AIDS relative to other foreign policy priorities, and recommendations for how to coordinate donor involvement for HIV/AIDS.

# Analysis of Papers

The judges chose to organize their thoughts in the following areas as they studied student responses. We summarized what we saw and gave feedback on our perspectives along these same lines.

# Executive Summary

Every team demonstrated that they knew it was essential to provide a good executive summary, both in content and in clarity. Moderately successful efforts summarized what the requirements were and made statements like "In our analysis we project the decrease in HIV/AIDS cases if ARV treatments and/or preventive vaccines are developed."
The more successful efforts recognized that this section should serve as a "bottom line up front" and actually summarized their most important conclusions here, making statements like "We find that the cost of ARV treatment is prohibitive in poorer countries and that wealthier countries have their own financial assets for the more costly ARV development, and therefore the UN effort should be exclusively toward development of a preventive vaccine that can then be distributed to all countries." The most effective papers summarized the conclusions, not just the problem.

# Science

Many papers demonstrated knowledge of the science of HIV/AIDS with separate sections devoted to the biology of the disease, the mechanisms of transmission, the epidemiological spread, the efficacy of current ARV therapy, and the potential value of a preventive vaccine. As an infectious disease, HIV/AIDS is certainly one of the most problematic diseases in our current history due to its global spread, its ease of transmission, its high mutation rate under noncompliant ARV therapy, and its failure to induce preventive immunity in the many vaccine trials to date.

Add to this equation the widely differing global issues of governance, public health infrastructure, education, culture, and socioeconomics, and you have our current global state of infection, with an estimated 40 million people living with HIV, according to the World Health Organization.

Although many papers addressed the basics of the HIV/AIDS epidemic, some papers certainly addressed this capacious issue better than others.

The least successful papers supported their science simply by dumping Internet data into this section of the paper, including more science than necessary to address the problem, not focusing on the correct areas of science for the problem, and/or failing to document their information—this type of paper was quickly eliminated from consideration for a high-level award.
The moderately successful papers clearly had the science section written by one member of the team and the subsequent modeling sections written by other teammates. Although each section was individually well written, the sections were very self-contained and hardly connected with each other.

The most effective papers incorporated most if not all of the science background that was presented as a basis for the models that followed. For example, if the team talked about how ARV treatment prolonged the lifespan of the infected person but did nothing to prevent the transmission of the virus, then the model reiterated this fact when it assumed that the level of spending on ARV had no effect on the rate of spread of HIV/AIDS. The most effective papers presented the science that was essential to the model but very little science that was gratuitous.

# Modeling

The most effective papers made their assumptions explicit as they built or presented their model. They were explicit in tying assumptions to the science when appropriate. The two predominant types of models presented were differential (based on differential equations and/or difference equations) and stochastic (probabilistic, based on transition probabilities). Neither was favored over the other by the judges; we saw both mediocre and excellent examples of both types of models and made no value judgments based on the type of model used.

In both cases, the features that typified the mediocre models were:

- developing more model details than were required,
- refusing to make simplifying assumptions and instead attempting to capture details in the model that were too fine for the questions being asked,
- presenting reams of output without highlighting a few of the more pertinent numbers and explaining what they meant, and
- making no attempt to argue why the model output seems reasonable and why it is reasonable to proceed to use the model in analysis.
In some papers, the model selected seemed like an awkward fit for the scenario, such as an awkward selection of partitions in a differential equation model, giving the feeling that the model had been selected mostly because it was the one that had been "pre-positioned" before the competition or was first off the shelf, rather than because it was the most appropriate model. As one judge reflected, some models appeared to be hammers looking for nails within the scenario.

The most effective models

- gave rationale for the hypotheses,
- made appropriate assumptions to make the model more tractable,
- showed just the output necessary for later analysis,
- explained what the output meant with a few specific cases, and
- made an argument why the model seemed to be working reasonably well and merited use in further analysis.

Teams demonstrated their ability to make appropriate model refinements in the third requirement, when the assumption of vaccine-drug-resistant strains was introduced; here, clarity in showing exactly what was being changed and in highlighting the change in output was most important.

In most contests, all of the models selected by teams tend to fall into a few fairly homogeneous groups, but occasionally a team adopts a truly novel technique of analysis. This year that occurred with a team that simulated the spread of HIV/AIDS with a cellular automaton. This team simulated the population with a representative group of 10,000 individuals with six characteristics (age, sex, level of connection in society, social status, health, and cliquishness). At each time step of the simulation, each cell either changes value (from not infected to infected) or not (remains uninfected or remains infected), depending on the values of the surrounding eight cells, according to a set of rules developed from the parameters of the problem.
Statistics are gathered from many runs of the simulation and then applied to answer the questions at hand. Several of the judges found this to be an intriguing approach that would have required more set-up and tuning to get running properly but that offers a unique level of customizability and analysis. While this report was not one of the Outstanding papers, it was definitely innovative and commendable interdisciplinary modeling.

# Analysis/Reflection

The more effective papers took time to reflect on and discuss the ways in which their model appeared to be strong and the ways in which it appeared to be weak in addressing the problem. They also took time to demonstrate the sensitivity and robustness of their model, either by actually changing a few parameter values and demonstrating the change in output, or at least through a short discussion (such as "a $10\%$ change in any coefficient in this system of linear ODEs is likely to have only a correspondingly small change in the output because the eigenvalues appearing in our solution are so far separated").

This problem asks a lot, and even a long weekend is not much time; even in the Outstanding papers, we found that the financial piece of the scenario received very thin treatment by way of analysis and reflection. That was disappointing for the judges, because it was intended to be a major part of the interdisciplinary modeling.

# Implementation

Moderately successful papers often looked as if this section had been written by a member of the team who had not been very involved in the modeling and analysis. Those recommendations for implementation relied more on insight and background research than on the results obtained from analyzing the model. Often the implementation recommendations made no distinctions between country, region, or class.
The most effective papers recommended "policy through analysis": The policies that they recommended were justified by the model analysis that they had just completed. It was not always necessary to quote numbers, but these papers made reference to their analysis when they presented their recommendations. These papers also were the ones that often made recommendations on resources and policy based on country wealth or demographics. They were also explicit about the assumptions that led to their recommendations; e.g., whether they recommended that certain countries be given priority access to ARV therapy because they have the highest incidence of disease and the objective is to minimize suffering, or that priority be given to a vaccine in different countries because the objective is to prevent HIV spread in other, less-infected areas of the world.

# Communication

All teams demonstrated that they understood that the clarity and style of writing were very important to an effective product. We were generally pleased with most papers; very few teams were not clear and effective in their prose, and therefore many papers earned congratulatory comments from the judges.

The most effective papers ensured that all of the sections were connected to one another—not just by adding a few sentences, but by articulating the logical connection between subsequent sections (in particular Modeling, Analysis, and Recommendation) and those preceding. They also understood that well-selected graphics could be very effective in making a point, but that gratuitous graphics that were easy to make but did not make an important point were distracting. Teams are reminded that proper documentation is always necessary to an effective paper. Judges saw some undocumented material for which they were sure they knew the source, and for which they felt it necessary to run checks.

As to length, short and succinct with adequate explanation is always preferred.
Long, rambling papers were eliminated because of the frustration of reading too much detail or repetition.

# Conclusion

The judges extend their thanks and congratulations to all of the teams. We truly enjoyed reading and studying your work and have come to have quite a bit of confidence in your abilities. We are interested and excited to see what problems you will attack as experienced interdisciplinary modelers when your studies are completed.

# About the Authors

Kari Murad is an Associate Professor at the College of Saint Rose in Albany, NY. Her teaching interests include immunology, microbiology, virology, and science education. She is actively involved in the National Science Foundation-supported SENCER project (Science Education for New Civic Engagements and Responsibilities) and in service-learning and problem-based science education initiatives on her campus. Additionally, she is currently the editor of the Science Teachers Bulletin, a magazine for K-12 science teachers in New York State, and is the director of the Albany city middle school science fair (Joseph Henry Science Fair).

Joe Myers has served for two decades in the Dept. of Mathematical Sciences at the United States Military Academy. He holds degrees in Applied Mathematics and other disciplines and is a licensed Professional Engineer. He currently serves as a Professor, having directed freshman calculus, sophomore multivariable calculus, the electives program, and the research program. He has been involved in several major initiatives to improve teaching and learning, including building interdisciplinary activities and programs under the NSF-sponsored Project Intermath; integrating technology and student laptop computers into the classroom; and weaving modeling, history, and writing threads into the mathematics curriculum.
He enjoys modeling and problem solving, has posed and guided the research of dozens of math majors, and has been involved in several research projects with the Army Research Laboratory.

# Author's Commentary: The Outstanding HIV/AIDS Papers

Heidi Williams
Economics Dept.
Harvard University
Littauer Center
Cambridge, MA 02138
heidi.l.williams.03@alum.dartmouth.org

# Introduction

According to estimates by the World Bank, more than one billion individuals worldwide live on less than $1 per day. Internationally, countries have prioritized improving the lives of the world's poor through mechanisms such as the United Nations Millennium Development Goals—which seek, by 2015, to halve extreme poverty, halt the spread of HIV/AIDS, provide universal primary education, and achieve a number of other goals. Within the United States, President George W. Bush has supported "a new compact for global development" through institutions such as the recently created Millennium Challenge Corporation.

Yet despite billions of dollars having been spent on attempts to improve the lives of the world's poor, we lack a consensus on how to allocate foreign aid most effectively. Such decisions inherently involve trade-offs: for any given level of financial resources, more funding devoted to building schools implies less funding devoted to programs aimed at reducing government corruption. In addition, foreign aid donors often must choose among diverse potential programs without any solid evidence on the relative effectiveness of such programs.

Such decisions are complex even within a narrower focus. In funding health programs, for example, decisions at the planning stage must often be made based on unreliable data and assumptions that are difficult, if not impossible, to verify. For example, is the development of a malaria vaccine even scientifically feasible?
What is the optimal pattern of introduction and use of second-line treatments for multi-drug-resistant tuberculosis? Prospectively weighing the expected costs and benefits of alternative programs necessarily involves a complex set of assumptions, calculations, and estimations. Even once programs have been implemented, we often fail to conduct rigorous evaluations—thus resulting in missed opportunities to learn which programs are most effective.

The UMAP Journal 27 (2) (2006) 181-183. ©Copyright 2006 by COMAP, Inc. All rights reserved. Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice. Abstracting with credit is permitted, but copyrights for components of this work owned by others than COMAP must be honored. To copy otherwise, to republish, to post on servers, or to redistribute to lists requires prior permission from COMAP.

# The Contest Question

The main goal of this year's interdisciplinary modeling problem was to encourage teams to grapple with some of these issues within the narrower context of addressing HIV/AIDS. As the HIV/AIDS epidemic enters its 25th year, both the number of infections and the number of deaths due to the disease continue to rise. Despite enormous efforts on a number of fronts, we remain uncertain about how best to allocate resources to fight this epidemic.

In this year's problem, teams were asked to advise the United Nations on how to manage the resources available for addressing HIV/AIDS. Their job was to model several scenarios of interest and then to use their models to recommend the optimal allocation of financial resources.

We first asked teams to consider what trends could be expected in HIV/AIDS morbidity and mortality in the absence of any additional interventions.
This is a complex problem that encouraged teams to analyze a variety of historical demographic and health data on fertility, population, age distribution, life expectancy, and disease burden. + +In practice, HIV/AIDS funding could be focused on a wide range of interventions. Prevention-focused interventions include voluntary counseling and testing programs, school-based AIDS education, and distributing medicines to prevent mother-to-child transmission of the virus. Care interventions can include treating the virus as well as treating other opportunistic infections. For this work, we asked teams to focus on modeling only two potential interventions: provision of antiretroviral (ARV) drug treatments, and provision of a hypothetical HIV/AIDS preventive vaccine. + +Evaluating the potential impacts of these two interventions required the teams to decide on realistic assumptions in order to generate estimates of the costs and benefits of each intervention. What year might a preventive HIV/AIDS vaccine become available? Should children or adults be targeted for vaccination, and what vaccine coverage rates could be expected for either group? What delivery costs should be assumed for the drug therapy and vaccine? Would there be epidemiological externalities from vaccination that should be taken into account? The teams were also asked to re-analyze their scenarios in light of the potential emergence of drug-resistant strains of HIV/AIDS. + +A major focus of this year's problem was for teams to analyze these issues in the context of realistic political and economic constraints. For example, teams + +were encouraged to base their models on the level of foreign aid resources that they realistically expected to be available. 
The teams were also asked to interpret the results of their models in light of such political and economic constraints—in part through drafting a white paper to the United Nations which provided their team's recommendations on the optimal allocation of resources as well as their recommendations for how best to coordinate donor involvement for HIV/AIDS. + +# References + +World Bank poverty estimates: http://www.worldbank.org/poverty/. + +United Nations Millennium Development Goals: http://www.un.org/millenniumgoals/. + +United States Millennium Challenge Corporation: http://www.mca.gov/. + +On improving the effectiveness of foreign aid: Evaluation Gap Working Group. 2006. When Will We Ever Learn? Improving Lives through Impact Evaluation. Center for Global Development. http://www.cgdev.org/content/publications/detail/7973. + +# About the Author + +![](images/430c867f391db5e4a5d08882975eb95cb83a8614a4d9e0f3e2c8c7d1ac26c177.jpg) + +Heidi Williams received her A.B. in mathematics from Dartmouth College, where her studies and research were focused on number theory and cryptography. She also received an M.Sc. in development economics from the University of Oxford, supported by a Rhodes scholarship. Heidi is currently a Ph.D. student in economics at Harvard University, where her research is focused in development, health, and public economics. + +For the past several years, Heidi has worked with the Center for Global Development (a nonprofit think tank in + +Washington, DC) and in collaboration with Harvard economist Professor Michael Kremer and other academics to contribute to public policy efforts aimed at speeding the development of (and increasing access to) vaccines for diseases (such as malaria) that are concentrated in low-income countries. For more information, see www.cgdev.org/vaccine. 
\ No newline at end of file
diff --git a/MCM/1995-2008/2006MCM/2006MCM.md b/MCM/1995-2008/2006MCM/2006MCM.md
new file mode 100644
index 0000000000000000000000000000000000000000..e5cd4d376c635cd5a973dd6bd3e7d8484d9db700
--- /dev/null
+++ b/MCM/1995-2008/2006MCM/2006MCM.md
@@ -0,0 +1,5664 @@

# The UMAP Journal

Vol. 27, No. 3

Publisher

COMAP, Inc.

Executive Publisher

Solomon A. Garfunkel

ILAP Editor

Chris Arney

Associate Director, Mathematics Division

Program Manager, Cooperative Systems

Army Research Office

P.O. Box 12211

Research Triangle Park, NC 27709-2211

David.Arney1@arl.army.mil

On Jargon Editor

Yves Nievergelt

Department of Mathematics

Eastern Washington University

Cheney, WA 99004

ynievergelt@ewu.edu

Reviews Editor

James M. Cargal

Mathematics Dept.

Troy University—Montgomery Campus

231 Montgomery St.

Montgomery, AL 36104

jmcargal@sprintmail.com

Chief Operating Officer

Laurie W. Aragon

Production Manager

George W. Ward

Production Editor

Timothy McLean

Distribution

Kevin Darcy

John Tomicek

Graphic Designer

Daiva Kiliulis

# Editor

Paul J. Campbell

Campus Box 194

Beloit College

700 College St.

Beloit, WI 53511-5595

campbell@beloit.edu

# Associate Editors

Don Adolphson

Chris Arney

Aaron Archer

Ron Barnes

Arthur Benjamin

Robert Bosch

James M. Cargal

Murray K. Clayton

Lisette De Pillis

James P. Fink

Solomon A. Garfunkel

William B. Gearhart

William C. Giauque

Richard Haberman

Jon Jacobsen

Walter Meyer

Yves Nievergelt

Michael O'Leary

Catherine A. Roberts

John S. Robertson

Philip D. Straffin

J.T.
Sutcliffe + +Brigham Young University + +Army Research Office + +AT&T Shannon Research Laboratory + +University of Houston-Downtown + +Harvey Mudd College + +Oberlin College + +Troy University—Montgomery Campus + +University of Wisconsin—Madison + +Harvey Mudd College + +Gettysburg College + +COMAP, Inc. + +California State University, Fullerton + +Brigham Young University + +Southern Methodist University + +Harvey Mudd College + +Adelphi University + +Eastern Washington University + +Towson University + +College of the Holy Cross + +Georgia Military College + +Beloit College + +St. Mark's School, Dallas + +# Membership Plus + +Individuals subscribe to The UMAP Journal through COMAP's Membership Plus. This subscription also includes a CD-ROM of our annual collection UMAP Modules: Tools for Teaching, our organizational newsletter Consortium, on-line membership that allows members to download and reproduce COMAP materials, and a $10\%$ discount on all COMAP purchases. + +(Domestic) #2620 $104 + +(Outside U.S.) #2621 $117 + +# Institutional Plus Membership + +Institutions can subscribe to the Journal through either Institutional Plus Membership, Regular Institutional Membership, or a Library Subscription. Institutional Plus Members receive two print copies of each of the quarterly issues of The UMAP Journal, our annual collection UMAP Modules: Tools for Teaching, our organizational newsletter Consortium, on-line membership that allows members to download and reproduce COMAP materials, and a $10\%$ discount on all COMAP purchases. + +(Domestic) #2670 $479 + +(Outside U.S.) #2671 $503 + +# Institutional Membership + +Regular Institutional members receive print copies of The UMAP Journal, our annual collection UMAP Modules: Tools for Teaching, our organizational newsletter Consortium, and a $10\%$ discount on all COMAP purchases. + +(Domestic) #2640 $208 + +(Outside U.S.) #2641 $231 + +# Web Membership + +Web membership does not provide print materials. 
Web members can download and reproduce COMAP materials, and receive a $10\%$ discount on all COMAP purchases. + +(Domestic) #2610 $41 + +(Outside U.S.) #2610 $41 + +To order, send a check or money order to COMAP, or call toll-free + +1-800-77-COMAP (1-800-772-6627). + +The UMAP Journal is published quarterly by the Consortium for Mathematics and Its Applications (COMAP), Inc., Suite 3B, 175 Middlesex Tpke., Bedford, MA, 01730, in cooperation with the American Mathematical Association of Two-Year Colleges (AMATYC), the Mathematical Association of America (MAA), the National Council of Teachers of Mathematics (NCTM), the American Statistical Association (ASA), the Society for Industrial and Applied Mathematics (SIAM), and The Institute for Operations Research and the Management Sciences (INFORMS). The Journal acquaints readers with a wide variety of professional applications of the mathematical sciences and provides a forum for the discussion of new directions in mathematical education (ISSN 0197-3622). + +Periodical rate postage paid at Boston, MA and at additional mailing offices. + +# Send address changes to: info@comap.com + +COMAP, Inc., Suite 3B, 175 Middlesex Tpke., Bedford, MA, 01730 + +© Copyright 2006 by COMAP, Inc. All rights reserved. + +# Vol. 27, No. 3 2006 + +# Table of Contents + +# Editorial + +Because Math Matters Solomon A. Garfunkel 185 + +About This Issue 187 + +# Special Section on the MCM + +Results of the 2006 Mathematical Contest in Modeling Frank Giordano 189 + +Abstracts of the Outstanding Papers and the Fusaro Papers 221 + +Sprinkler Systems for Dummies: Optimizing a Hand-Moved Sprinkler System +Ben Dunham, Steffan Francischetti, and Kyle Nixon 237 + +Fastidious Farmer Algorithms (FFA) +Matthew A. Fischer, Brandon W. Levin, and Nikifor C. Bliznashki 255 + +A Schedule for Lazy but Smart Ranchers Wang Cheng, Wen Ye, and Yu Yintao 269 + +Optimization of Irrigation Bryan J.W. Bell, Yaroslav Gelfand, and Simpson H. 
Wong 285

Sprinkle, Sprinkle, Little Yard Brian Camley, Bradley Klingenberg, and Pascal Getreuer 295

Developing Improved Algorithms for Irrigation Systems Ying Yujie, Jin Qiwei, and Zhou Kai 315

Judge's Commentary: The Outstanding Irrigation Papers Daniel Zwillinger 329

Profit-Maximizing Allocation of Wheelchairs in a Multi-Concourse Airport

Christopher Yetter, Neal Gupta, and Benjamin Conlee 333

Minimization of Cost for Transfer Escorts in an Airport Terminal

Elaine Angelino, Shaun Fitzgibbons, and Alexander Glasser 349

Application of Min-Cost Flow to Airline Accessibility Services

Dan Gulotta, Daniel Kane, and Andrew Spann 367

Cost Minimization of Providing a Wheelchair Escort Service

Matthew J. Pellicone, Michael R. Sasseville, and Igor Zhitnitsky 387

A Simulation-Driven Approach for a Cost-Efficient Airport Wheelchair Assistance Service

Samuel F. Feng, Tobin G. Isaac, and Nan Xiao 399

Judges' Commentary: The Fusaro Award Wheelchair Paper

Daniel Zwillinger 413

# Publisher's Editorial

# Because Math Matters

Solomon A. Garfunkel

Executive Director

COMAP, Inc.

175 Middlesex Turnpike, Suite 3B

Bedford, MA 01730-1459

s.garfunkel@mail.comap.com

The President has recently appointed a National Mathematics Advisory Panel. National newspapers carry lead editorials on math education. Why, and why now? For many years, there has been a debate on how best to teach mathematics in our nation's schools. There are a number of reasons why this discussion has gone on so long and become so heated.

The first is that there is a great deal at stake. From Sputnik on, we have worried about our ability to compete in science and industry—first with Russia, then with Japan, and now with India and China (not to mention Western Europe). And math matters. Mathematics is at the heart of technological innovation, advances in engineering, physics, medicine, biology, and on and on.
Mathematical models can forecast environmental change and monitor energy supply and demand. Without mathematics we wouldn't have MRI's or maps of the human genome. + +Second, we are not doing a very good job. U.S. students are falling behind students in most industrial countries as measured on any number of international tests. And again math matters. We know that the careers of the 21st century will require more and more quantitative reasoning. We know that in this global economy, companies can and will outsource jobs to countries with more mathematically skilled work forces. To quote CBS news great Fred Friendly, we don't want to become a country "in which we take in each other's laundry." + +The third reason the debate is so heated is that it has become very political. We hear terms like "back to basics" and "fuzzy math." But what's lost in all of this is the kids. Education debates need at their heart to be about education. We want our children to learn, to understand and be able to use mathematics as they go through school and work. Not all students will go on to be mathematicians, but they will all be called upon to use the mathematics they know. + +I can't emphasize this point strongly enough: The half-life of students in mathematics courses remains one year from 10th grade on. In other words, the number of students taking math in 11th grade is half those taking math in 10th, and so on for every year right up until the Ph.D. What happens to the other half? We simply cannot afford to throw away half of our students each year because they don't have serious prospects of becoming research mathematicians. + +We can continue to ask students problems of the form: Solve for $x$ in the equation $x^{2} - 3x + 1 = 0$ . Or we can ask at what proportion of performance enhancing drug use in the population is it cheaper to test two athletes by pooling their samples—a real-life question that leads to the same equation. 
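The drug-testing example can be made concrete. Under one natural pooling protocol (an assumption; the editorial does not spell the protocol out), the lab first tests the combined sample of the two athletes. If the pool is positive, it tests athlete A; only if A is also positive must it test athlete B, since a positive pool with a clean A implicates B. With prevalence $p$, the expected number of tests per pair is $1 + \left(1 - (1-p)^{2}\right) + p$, and setting that equal to the two tests needed for individual testing yields exactly $p^{2} - 3p + 1 = 0$. A short sketch checking both the algebra and the protocol:

```python
import random

# Break-even prevalence for pooled drug testing of two athletes.
# Assumed protocol (not stated in the editorial): test the pooled
# sample; if positive, test athlete A; test athlete B only if A is
# also positive (a positive pool with a clean A implicates B).

def expected_tests(p):
    """Expected tests per pair: 1 pooled test, plus a test of A when
    the pool is positive, plus a test of B when A is positive."""
    return 1 + (1 - (1 - p) ** 2) + p

def simulate(p, pairs=200_000, seed=1):
    """Monte Carlo estimate of tests per pair under the same protocol."""
    rng = random.Random(seed)
    tests = 0
    for _ in range(pairs):
        a, b = rng.random() < p, rng.random() < p
        tests += 1              # pooled test
        if a or b:
            tests += 1          # pool positive: test athlete A
            if a:
                tests += 1      # A positive, so B must be tested too
    return tests / pairs

# Break-even with individual testing (2 tests) solves
# p^2 - 3p + 1 = 0; the root in [0, 1] is (3 - sqrt(5))/2.
p_star = (3 - 5 ** 0.5) / 2
print(p_star)                  # ~0.382
print(expected_tests(p_star))  # ~2.0, the individual-testing cost
print(simulate(p_star))        # Monte Carlo estimate, close to 2.0
```

So under this reading, pooling pays only when fewer than about 38% of the population uses performance-enhancing drugs, which is precisely the kind of threshold question the editorial has in mind.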
We can teach mathematics through engaging contexts that students will see as real and important, or we can continue to insist on honing skills. Learning to hammer a nail before trying to build a house sounds right. But hammering nails for six years before even knowing that there's such a thing as a house just doesn't make sense. If mathematics is a life skill, then students need to see mathematical skills at work in their lives. + +Math matters. We cannot afford partisan politics. The National Science Foundation, staffed by independent scientists and mathematicians, has led the effort for innovation in mathematics education since the 1950s. Innovation is desperately needed. We must not go back to methods that have consistently failed us. After all, the reason that the current reform movement began in the first place was that we were unhappy with student performance. What we need are serious people who recognize the importance and difficulties in getting a quantitatively literate citizenry and who are willing to put aside any specific political agenda. + +Articles in the Wall Street Journal and the New York Times have declared that the Math Wars are over and that the Back to Basics movement has won. Well, I have news. The Math Wars are not over. It doesn't matter that these newspapers declare, "Mission Accomplished." The mission is about helping children learn, not about winning a political battle or finding common ground. We will continue to fight for + +- the introduction of new and relevant content, +- the appropriate use of new technologies, +- showing students important contemporary applications, and +- using innovative pedagogical approaches. + +And we will do so because math matters. + +# About the Author + +Solomon Garfunkel, previously of Cornell University and the University of Connecticut at Storrs, has dedicated the last 25 years to research and development efforts in mathematics education. 
He has served as project director for the Undergraduate Mathematics and Its Applications (UMAP) and the High School Mathematics and Its Applications (HiMAP) Projects funded by NSF, and directed three telecourse projects including Against All Odds: Inside Statistics, and In Simplest Terms: College Algebra, for the Annenberg/CPB Project. He has been the Executive Director of COMAP, Inc. since its inception in 1980. Dr. Garfunkel was the project director and host for the series, For All Practical Purposes: Introduction to Contemporary Mathematics. He was the Co-Principal Investigator on the ARISE Project, and is currently the Co-Principal Investigator of the CourseMap, ResourceMap, and WorkMap projects. In 2003, Dr. Garfunkel was Chair of the National Academy of Sciences and Mathematical Sciences Education Board Committee on the Preparation of High School Teachers. + +# About This Issue + +Paul J. Campbell +Editor + +This issue of The UMAP Journal continues the practice inaugurated in Vol. 26. + +This issue runs longer than 92 pp—in fact, it runs over 200 pp. Not all of the articles in this issue are printed in the paper copy. Some articles appear only in the Tools for Teaching 2006 CD-ROM (and at http://www.comap.com for COMAP members), which will reach members and subscribers at a later time and will also contain the entire 2006 year of Journal issues. + +However, all articles of this issue on the CD-ROM appear in the printed table of contents and are regarded as published in the Journal. In addition, the abstract of each Outstanding paper appears in the printed version. Paging of the issue runs continuously, including in sequence articles that do not appear in printed form. So, if you notice that, say, page 350 in the printed copy is followed by page 403, your copy is not necessarily defective! The articles corresponding to the intervening pages will be on the CD-ROM. + +We hope that you find this arrangement, if not entirely satisfying, at least satisfactory. 
It means that we do not have to procrusteanize the content of the Journal to fit a fixed number of allocated pages. For example, we might otherwise need to select only two or three Outstanding MCM papers to publish (a hard task indeed!). Instead, we continue to bring you the full content as in the past. + +# Modeling Forum + +# Results of the 2006 Mathematical Contest in Modeling + +Frank Giordano, MCM Director + +Naval Postgraduate School + +1 University Circle + +Monterey, CA 93943-5000 + +frgiorda@nps.navy.mil + +# Introduction + +A total of 747 teams of undergraduates, from 270 institutions and 403 departments in 12 countries, spent the first weekend in February working on applied mathematics problems in the 22nd Mathematical Contest in Modeling (MCM). + +The 2006 MCM began at 8:00 P.M. EST on Thursday, February 2 and ended at 8:00 P.M. EST on Monday, February 6. During that time, teams of up to three undergraduates were to research and submit an optimal solution for one of two open-ended modeling problems. Students registered, obtained contest materials, downloaded the problems at the appropriate time, and entered completion data through COMAP's MCM Website. After a weekend of hard work, solution papers were sent to COMAP on Monday. The top papers appear in this issue of The UMAP Journal. + +Results and winning papers from the first 21 contests were published in special issues of Mathematical Modeling (1985-1987) and The UMAP Journal (1985-2005). The 1994 volume of Tools for Teaching, commemorating the tenth anniversary of the contest, contains all of the 20 problems used in the first 10 years of the contest and a winning paper for each year. Limited quantities of that volume and of the special MCM issues of the Journal for the last few years are available from COMAP. That volume is available on COMAP's special Modeling Resource CD-ROM (http://www.comap.com/product/?idx=613). 
In addition, available from COMAP is a new volume, The MCM at 21, which contains all 20 problems from the second 10 years of the contest and a winning paper from each year.

This year's Problem A asked teams to develop a strategy for irrigating a field. Problem B asked teams to prepare a bid and a strategy for managing the provision of wheelchairs and escorts at airports. The 11 Outstanding solution papers are published in this issue of The UMAP Journal, along with commentary from problem authors, contest judges, and outside experts.

In addition to the MCM, COMAP also sponsors the Interdisciplinary Contest in Modeling (ICM) and the High School Mathematical Contest in Modeling (HiMCM). The ICM, which runs concurrently with the MCM, offers a modeling problem involving concepts in operations research, information science, and interdisciplinary issues in security and safety. Results of this year's ICM are on the COMAP Website at http://www.comap.com/undergraduate/contests; results and Outstanding papers appear in Vol. 27 (2006), No. 2. The HiMCM offers high school students a modeling opportunity similar to the MCM. Further details about the HiMCM are at http://www.comap.com/highschool/contests.

# Problem A: Positioning and Moving Sprinkler Systems for Irrigation

There are a wide variety of techniques available for irrigating a field. The technologies range from advanced drip systems to periodic flooding. One system used on smaller ranches is the "hand-move" irrigation system: lightweight aluminum pipes with sprinkler heads are put in place across fields and moved by hand at periodic intervals to ensure that the whole field receives an adequate amount of water. This type of irrigation system is cheaper and easier to maintain than other systems. It is also flexible, allowing for use on a wide variety of fields and crops.
The disadvantage is that it requires a great deal of time and effort to move and set up the equipment at regular intervals.

Given that this type of irrigation system is to be used, how can it be configured to minimize the amount of time required to irrigate a field that is 80 meters by 30 meters? For this task you are asked to find an algorithm to determine how to irrigate the rectangular field that minimizes the amount of time required by a rancher to maintain the irrigation system. One pipe set is used in the field. You should determine the number of sprinklers and the spacing between sprinklers, and you should find a schedule to move the pipes, including where to move them.

A pipe set consists of a number of pipes that can be connected together in a straight line. Each pipe has a 10-cm inner diameter, with rotating spray nozzles that have a 0.6-cm inner diameter. When put together, the resulting pipe is 20 meters long. The water source supplies a pressure of 420 kilopascals at a flow rate of 150 liters per minute. No part of the field should receive more than 0.75 cm of water per hour, and each part of the field should receive at least 2 centimeters of water every 4 days. The total amount of water should be applied as uniformly as possible.

# Problem B: Wheelchair Access at Airports

One of the frustrations with air travel is the need to fly through multiple airports, where each stop generally requires the traveler to change to a different airplane. This can be especially difficult for people who are not able to easily walk to a different flight's waiting area. One of the ways that an airline can make the transition easier is to provide a wheelchair and an escort to those people who ask for help. It is generally known well in advance which passengers require help, but it is not uncommon for the airline to receive notice only when a passenger first registers at the airport.
In rare instances, an airline may not receive notice from a passenger until just prior to landing.

Airlines are under constant pressure to keep their costs down. Wheelchairs wear out, are expensive, and require maintenance. There is also a cost for making the escorts available. Moreover, wheelchairs and their escorts must be constantly moved around the airport so that they are available to people when their flight lands. In some large airports, the time required to move across the airport is nontrivial. The wheelchairs must be stored somewhere, but space is expensive and severely limited in an airport terminal. Also, wheelchairs left in high-traffic areas represent a liability risk as people try to move around them. Finally, one of the biggest costs is the cost of holding a plane if someone must wait for an escort and becomes late for their flight. The latter cost is especially troubling because it can affect the airline's average flight delay, which can lead to fewer ticket sales as potential customers choose to avoid the airline.

Epsilon Airlines has decided to ask a third party to help them obtain a detailed analysis of the issues and costs of keeping and maintaining wheelchairs and escorts available for passengers. The airline needs to find a way to schedule the movement of wheelchairs throughout each day in a cost-effective way. They also need to find and define the costs for budget planning in both the short term and the long term.

Epsilon Airlines has asked your consultant group to put together a bid to help them solve their problem. Your bid should include an overview and analysis of the situation to help them decide whether you fully understand their problem. They require a detailed description of an algorithm that you would like to implement, one that can determine where the escorts and wheelchairs should be and how they should move throughout each day. The goal is to keep the total costs as low as possible.
Your bid is one of many that the airline will consider. You must make a strong case as to why your solution is the best and show that it will be able to handle a wide range of airports under a variety of circumstances. + +Your bid should also include examples of how the algorithm would work for a large (at least four concourses), a medium (at least two concourses), and a small airport (one concourse) under high and low traffic loads. You should determine all potential costs and balance their respective weights. Finally, as populations begin to include a higher percentage of older people who have more time to travel but may require more aid, your report should include projections of potential costs and needs in the future with recommendations to meet future needs. + +# The Results + +The solution papers were coded at COMAP headquarters so that names and affiliations of the authors would be unknown to the judges. Each paper was then read preliminarily by two "triage" judges at either Appalachian State University (Irrigation Problem) or at the National Security Agency (Wheelchair Problem). At the triage stage, the summary and overall organization are the basis for judging a paper. If the judges' scores diverged for a paper, the judges conferred; if they still did not agree on a score, a third judge evaluated the paper. + +This year, again an additional Regional Judging site was created at the U.S. Military Academy to support the growing number of contest submissions. + +Final judging took place at Harvey Mudd College, Claremont, California. The judges classified the papers as follows: + +
| | Outstanding | Meritorious | Honorable Mention | Successful Participation | Total |
|---|---:|---:|---:|---:|---:|
| Irrigation Problem | 6 | 85 | 132 | 293 | 516 |
| Wheelchair Problem | 5 | 37 | 56 | 133 | 231 |
| Total | 11 | 122 | 188 | 426 | 747 |

The 11 papers that the judges designated as Outstanding appear in this special issue of The UMAP Journal, together with commentaries. We list those teams and the Meritorious teams (and advisors) below; the list of all participating schools, advisors, and results is in the Appendix.

# Outstanding Teams

# Institution and Advisor

# Team Members

# Irrigation Papers

"Sprinkler Systems for Dummies: Optimizing a Hand-Moved Sprinkler System"

Carroll College

Helena, MT

Mark Parker

Ben Dunham

Steffan Francischetti

Kyle Nixon

"Fastidious Farmer Algorithms (FFA)"

Duke University

Durham, NC

William G. Mitchener

Matthew A. Fischer

Brandon W. Levin

Nikifor C. Bliznashki

"A Schedule for Lazy but Smart Ranchers"

Shanghai Jiaotong University

Shanghai, China

Song Baorui

Wang Cheng

Wen Ye

Yu Yintao

"Optimization of Irrigation"

University of California

Davis, CA

Sarah A. Williams

Bryan J.W. Bell

Yaroslav Gelfand

Simpson H. Wong

"Sprinkle, Sprinkle, Little Yard"

University of Colorado

Boulder, CO

Bengt Fornberg

Brian Camley

Bradley Klingenberg

Pascal Getreuer

"Developing Improved Algorithms for Irrigation Systems"

Zhejiang University of Technology

Hangzhou, China

Wang Shiming

Ying Yujie

Jin Qiwei

Zhou Kai

# Wheelchair Papers

"Profit-Maximizing Allocation of Wheelchairs in a Multi-Concourse Airport"

Harvard University

Cambridge, MA

Clifford H. Taubes

Christopher Yetter

Neal Gupta

Benjamin Conlee

"Minimization of Cost for Transfer Escorts in an Airport Terminal"

Harvard University

Cambridge, MA

Michael Brenner

Elaine Angelino

Shaun Fitzgibbons

Alexander Glasser

"Application of Min-Cost Flow to Airline Accessibility Services"

Massachusetts Institute of Technology

Cambridge, MA

Martin Z.
Bazant + +Dan Gulotta + +Daniel Kane + +Andrew Spann + +"Cost Minimization of Providing a Wheelchair Escort Service" + +Rensselaer Polytechnic Institute + +Troy, NY + +Peter Roland Kramer + +Matthew J. Pellicione + +Michael R. Sasseville + +Igor Zhitnitsky + +"A Simulation-Driven Approach for a Cost-Efficient Airport Wheelchair Assistance Service" + +Rice University + +Houston, TX + +Mark Embree + +Samuel F. Feng + +Tobin G. Isaac + +Nan Xiao + +# Meritorious Teams + +Irrigation Problem (85 teams) + +Asbury College, Wilmore, KY (David Couliette) + +Austin Peay State University, Clarksville, TN, (Nell Rayburn) + +Beijing Jiaotong University, Beijing, China (three teams) (Wang Xiaoxia) (Wang Zhouhong) (Zhang Shangli) + +Beijing University of Posts and Telecommunications, Beijing, China (He Zuguo) + +Bethel University, St. Paul, MN (William Kinney) + +Cal-Poly Pomona University, Pomona, CA (three teams) (Ioana Mihaila) + +(Hubertus von Bremen) (Kurt Vandervoort, Physics) + +California Polytechnic State University, San Luis Obispo, San Luis Obispo, CA + +(Lawrence Sze) + +California State University at Monterey Bay, Seaside, CA (Hongde Hu) + +Carroll College, Helena, MT (Holly Zullo) + +Central South University, Changsha, Hunan, China (Qin Xuanyun) + +China University of Mining and Technology, School of Computer Science and + +Technology, Xuzhou, Jiangsu, China (Jiang Shujuan) + +Chongqing University, Dept. of Statistics and Actuarial Science, Chongqing, China + +Zhengmin Duan) + +College of Mount St. Joseph, Cincinnati, OH (Scott Sportsman) + +Columbia University, New York, NY (David Keyes) + +Cornell University, Ithaca, NY (two teams) (Alexander Vladimirsky) + +(Shane Henderson, Operations Research and Industrial Engineering) + +Dalian Nationalities Innovation College, Dalian, Liaoning, China (Rixia Bai) + +Drury University, Springfield, MO (Keith Coates) + +Duke University, Dept. 
of Computer Science, Durham, NC (Owen Astrachan) + +Harvey Mudd College, Claremont, CA (two teams) (Jon Jacobsen) + +(Ran Libeskind-Hadas, Computer Science) + +Hefei University of Technology, Hefei, Anhui, China (Xueqiao Du) + +Helsingin Matematiikkalukio, Helsinki, Finland (Esa Lappi) + +Humboldt State University, Dept. of Environmental Resources Engineering, Arcata, CA + +(Brad Finney) + +Jilin University, Institute of Mechanical Science and Engineering, Changchun, Jilin, China + +(Fang Peichen) + +Johns Hopkins University, Baltimore, MD (Fred Torcaso) + +Korea Advanced Institute of Science and Technology (KAIST), Daejeon, South Korea + +(two teams) (Dongsup Kim, Biosystems) (Ki Hyoung Ko) + +Lawrence Technological University, Southfield, MI (Ruth Favro) + +Maggie Walker Governor's School, Richmond, VA (two teams) + +(Harold Houghton, Science) (John Barnes) + +Nanchang University, Nanchang, Jiangxi, China (Chen Tao) + +National University of Defense Technology, School of Humanity and Management, + +Changsha, Hunan, China (Wang Dan) + +National University of Ireland, Galway, Ireland (Niall Madden) + +Northeastern University, Dept. of Computer Science, Shenyang, Liaoning, China + +(Zhao Shuying Zhao) + +Northwestern Polytechnical University, Dept. of Applied Physics, Xi'an, Shaanxi, China + +(Zhe Liu) + +PLA University of Science and Technology, Institute of Command Automation, + +Nanjing, Jiangsu, China (Liu Shousheng) + +Rensselaer Polytechnic Institute, Troy, NY (Peter Kramer) + +Rowan University, Glassboro, NJ (Hieu Nguyen) (two teams) + +Shanghai Foreign Language School, Shanghai, China (Pan Liquin) + +Shanghai Jiaotong University, Shanghai, China (Song Baorui) + +Shanghai Jiaotong University, Minhang Branch, Shanghai, China (Huang Jianguo) + +Slippery Rock University, Slippery Rock, PA (Richard Marchand) + +South-China Normal University, Dept. 
of Probability and Statistics, Guangzhou, Guangdong, China (Zhang Shaohui) + +Southern Connecticut State University, New Haven, CT (Therese Bennett) + +Sun Yat-sen University, Dept. of Computer Science, Guangzhou, Guangdong, China (Liu Xiaoming) + +Truman State University, Kirksville, MO (Steve Smith) + +Tsinghua University, Beijing, China (Zhiming Hu) + +University of Alaska Fairbanks, Dept. of Computer Science, Fairbanks, AK (Orion Lawlor) + +University College Cork, Cork, Ireland (three teams) (Dmitrii Rachinskii) (Andrew Usher) (James Grannell, Applied Mathematics) + +University of California, Berkeley, CA (Nicolai Reshetikhin) + +University of Colorado, Boulder, CO (Anne Dougherty) + +University of Colorado at Denver, Denver, CO (Gary Olson) + +University of Delaware, Newark, DE (Louis Rossi) + +University of Helsinki, Helsinki, Finland (Petri Ola) + +University of Massachusetts Lowell, Lowell, MA (James Graham-Eagle) + +University of Oxford, Oxford, United Kingdom (Jeffrey Giansiracusa) + +University of San Diego, San Diego, CA (Diane Hoffoss) (two teams) + +University of Saskatchewan, Saskatoon, SK, Canada (James Brooke) + +University of Science and Technology of China, Hefei, Anhui, China (two teams) +(Meng Qiang) (Yang Zhouwang) + +University of Stellenbosch, Stellenbosch, Western Cape, South Africa (Jan van Vuuren) + +University of West Georgia, Carrollton, GA (Scott Gordon) + +Victoria University of Wellington, Wellington, New Zealand (Mark McGuinness) + +Wake Forest University, Winston Salem, NC (two teams) (Edward Allen) + +Western Washington University, Bellingham, WA (Tjalling Ypma) + +Xi'an Communication Institute, School of Information, Xi'an, Shaanxi, China (two teams) (Zhang Jianhang) (Kang Jinlong) + +Xi'an Communication Institute, School of Science, Xi'an, Shaanxi, China (two teams) (Yang Dongsheng) (Li Guo) + +Xi'an Jiaotong University, Xi'an, Shaanxi, China (Dai Yonghong) + +Xidian University, Xi'an, Shaanxi, China (two teams) (Feng
Hailin) (Li Wei) + +Youngstown State University, Youngstown, OH (Angela Spalsbury) + +Zhejiang University, Hangzhou, Zhejiang, China (Yang Qifan) + +Zhejiang University City College, Hangzhou, Zhejiang, China (Zhang Huizeng) + +# Wheelchair Problem (37 teams) + +Beijing University of Posts and Telecommunications, Dept. of Applied Physics, Beijing, China (Ding Jinkou) + +Central University of Finance and Economics, Beijing, China (Fan Xiaoming) + +Central Washington University, Ellensburg, WA (Stuart Boersma) + +Davidson College, Davidson, NC (three teams) (Timothy Chartier, two teams) (Mark Foley, Economics) + +Duke University, Durham, NC (two teams) (William Mitchener) + +(Owen Astrachan, Computer Science) + +Eastern Oregon University, La Grande, OR (David Allen) + +Harbin Institute of Technology, Harbin, Heilongjiang, China (Jiao Guanghong) + +Harvey Mudd College, Dept. of Computer Science, Claremont, CA (Ran Libeskind-Hadas) + +Humboldt State University, Dept. of Environmental Resources Engineering, Arcata, CA (Brad Finney) + +Lewis and Clark College, Portland, OR (Liz Stanhope) + +Maggie Walker Governor's School, Richmond, VA (John Barnes) + +Minhang Branch of Shanghai Jiaotong University, Shanghai, China (Huang Jianguo) + +Nanjing University of Posts and Telecommunications, Nanjing, Jiangsu, China (Kong Gaohua) + +Nanjing University of Science and Technology, Dept. of Statistics, Nanjing, Jiangsu, China (Liu Liwei) + +Northern Kentucky University, Highland Heights, KY (Gail Mackin) + +Päivölä College, Tarttila, Finland (Janne Puustelli) + +Regis University, Denver, CO (Jim Seibert) + +Shanghai Jiaotong University, Shanghai, China (Zhou Gang) + +Sichuan University, Dept. of Statistics, Chengdu, Sichuan, China (Zhou Jie) + +Simpson College, Indianola, IA (Murphy Waggoner) + +South China University of Technology, Guangzhou, Guangdong, China (Quan Liu) + +Southwest University of Finance and Economics, Dept.
of Economic Mathematics, Chengdu, Sichuan, China (Sun Jiangming) + +Tsinghua University, Beijing, China (Chi Chi Hung) + +University of Alaska Fairbanks, Dept. of Computer Science, Fairbanks, AK (Orion Lawlor) + +University of California, Davis, CA (Sarah Williams) + +University of Colorado at Boulder, Dept. of Physics, Boulder, CO (Michael Ritzwoller) + +University of Electronic Science and Technology of China, Dept. of Information and Computation Science, Chengdu, Sichuan, China (Xu Quanzi) + +University of Guangxi, Dept. of Information Science, Nanning, Guangxi, China (Wang Xing) + +University of South Florida, Dept. of Industrial and Management Systems Engineering, Tampa, FL (Nan Kong) + +University of Washington, Seattle, WA (two teams) (Anne Greenbaum) (James Morrow) + +Wake Forest University, Dept. of Economics, Winston Salem, NC (Claire Hammond) + +Xi'an Jiaotong University, Xi'an, Shaanxi, China (He Xiaoliang) + +Xidian University, Xi'an, Shaanxi, China (Zhu Qiang) + +# Awards and Contributions + +Each participating MCM advisor and team member received a certificate signed by the Contest Director and the appropriate Head Judge. + +INFORMS, the Institute for Operations Research and the Management Sciences, recognized the teams from Duke University (Irrigation Problem) and Massachusetts Institute of Technology (Wheelchair Problem) as INFORMS Outstanding teams and provided the following recognition: + +- a letter of congratulations from the current president of INFORMS to each team member and to the faculty advisor; +- a check in the amount of $300 to each team member; + +- a bronze plaque for display at the team's institution, commemorating their achievement; +- individual certificates for team members and faculty advisor as a personal commemoration of this achievement; +- a one-year student membership in INFORMS for each team member, which includes their choice of a professional journal plus the OR/MS Today periodical and the INFORMS society newsletter.
+ +The Society for Industrial and Applied Mathematics (SIAM) designated one Outstanding team from each problem as a SIAM Winner. The teams were from University of Colorado (Irrigation Problem) and Harvard University (team of Christopher Yetter, Neal Gupta and Benjamin Conlee; advisor Clifford H. Taubes) (Wheelchair Problem). Each of the team members was awarded a $300 cash prize and the teams received partial expenses to present their results in a special Minisymposium at the SIAM Annual Meeting in Boston, MA in July. Their schools were given a framed hand-lettered certificate in gold leaf. + +The Mathematical Association of America (MAA) designated one Outstanding team from each problem as an MAA Winner. The teams were from University of Colorado (Irrigation Problem) and Rice University (Wheelchair Problem). With partial travel support from the MAA, the Rice University team presented their solution at a special session of the MAA Mathfest in Knoxville, TN in August. Each team member was presented a certificate by Richard S. Neal of the MAA Committee on Undergraduate Student Activities and Chapters. + +# Ben Fusaro Award + +One Meritorious paper was selected for each problem for the Ben Fusaro Award, named for the Founding Director of the MCM and awarded for the second time this year. It recognizes an especially creative approach; details concerning the award, its judging, and Ben Fusaro are in The UMAP Journal 25 (3) (2004): 195-196. The Ben Fusaro Award winners were from Shanghai Jiaotong University (Shanghai, China) (Irrigation Problem) and the Maggie L. Walker Governor's School (Richmond, VA) (Wheelchair Problem). + +# Judging + +Director + +Frank R. Giordano, Naval Postgraduate School, Monterey, CA + +Associate Directors + +Robert L. Borrelli, Mathematics Dept., Harvey Mudd College, Claremont, CA + +Patrick J. Driscoll, Dept. of Systems Engineering, U.S. Military Academy, + +West Point, NY + +William P. 
Fox, Mathematics Dept., Francis Marion University, Florence, SC + +# Irrigation Problem + +Head Judge + +Marvin S. Keener, Executive Vice-President, Oklahoma State University, + +Stillwater, OK (MAA) + +Associate Judges + +William C. Bauldry, Chair, Dept. of Mathematical Sciences, + +Appalachian State University, Boone, NC (Triage) + +Kelly Black, Mathematics Dept., Union College, Schenectady, NY + +Steve Horton (MAA), Dept. of Mathematical Sciences, U.S. Military Academy, + +West Point, NY + +Mario Juncosa, RAND Corporation, Santa Monica, CA (retired) + +Michael Moody, Olin College of Engineering, Needham, MA + +David H. Olwell (INFORMS), Naval Postgraduate School, Monterey, CA + +John L. Scharf, Mathematics Dept., Carroll College, Helena, MT + +Richard Douglas West, Francis Marion University, Florence, SC + +(Ben Fusaro Award) + +Daniel Zwillinger, Newton, MA (SIAM) + +# Wheelchair Problem + +Head Judge + +Maynard Thompson, Mathematics Dept., Indiana University, + +Bloomington, IN + +Associate Judges + +Peter Anspach, National Security Agency, Ft. Meade, MD (Triage) + +Karen D. Bolinger, Mathematics Dept., Clarion University of Pennsylvania, + +Clarion, PA + +Jim Case (SIAM) + +Lisette de Pillis, Mathematics Dept., Harvey Mudd College, Claremont, CA + +J. Douglas Faires, Youngstown State University, Youngstown, OH + +Jerry Griggs, Mathematics Dept., University of South Carolina, Columbia, SC + +Veena Mendiratta, Lucent Technologies, Naperville, IL + +Kathleen M. Shannon, Dept. of Mathematics and Computer Science, Salisbury University, Salisbury, MD (MAA) + +Dan Solow, Mathematics Dept., Case Western Reserve University, Cleveland, OH (INFORMS) + +Michael Tortorella, Dept. of Industrial and Systems Engineering, Rutgers University, Piscataway, NJ + +Marie Vanisko, Dept. of Mathematics, California State University—Stanislaus, Turlock, CA (Ben Fusaro Award) + +# Regional Judging Session + +Head Judge + +Patrick J. Driscoll, Dept.
of Systems Engineering, United States Military Academy (USMA), West Point, NY + +Associate Judges + +Merrill Blackman, Dept. of Systems Engineering, USMA + +Darrall Henderson, Dept. of Mathematical Sciences, USMA + +Michael Jaye, Dept. of Mathematical Sciences, USMA + +John Kobza, Dept. of Industrial and Systems Engineering, Texas Tech University, Lubbock, TX + +Ed Pohl, Dept. of Industrial and Systems Engineering, University of Arkansas, Fayetteville, AR + +# Triage Sessions + +# Irrigation Problem + +Head Triage Judge + +William C. Bauldry, Chair, Dept. of Mathematical Sciences, Appalachian State University, Boone, NC + +Associate Judges + +Mark Ginn, Jeff Hirst, Rick Klima, and Vicky Klima + +—all from Dept. of Mathematical Sciences, Appalachian State University, Boone, NC + +# Wheelchair Problem + +Head Triage Judge + +Peter Anspach, National Security Agency (NSA), Ft. Meade, MD + +Associate Judges + +Stewart Saphier, Dept. of Defense, Washington, DC + +Dean McCullough, High Performance Technologies, Inc. + +Craig Orr, NSA + +and other members of NSA. + +# Sources of the Problems + +Both problems were contributed by Kelly Black (Mathematics Dept., University of New Hampshire, Durham, NH). + +# Acknowledgments + +Major funding for the MCM is provided by the National Security Agency and by COMAP. We thank Dr. Gene Berg of NSA for his coordinating efforts. Additional support is provided by the Institute for Operations Research and the Management Sciences (INFORMS), the Society for Industrial and Applied Mathematics (SIAM), and the Mathematical Association of America (MAA). We are indebted to these organizations for providing judges and prizes. + +We also thank the following for their involvement and support: + +- IBM Business Consulting Services, Center for Business Optimization; and +- Two Sigma Investments.
(This group of experienced, analytical, and technical financial professionals based in New York builds and operates sophisticated quantitative trading strategies for domestic and international markets. The firm is successfully managing several billion dollars using highly automated trading technologies. For more information about Two Sigma, please visit http://www.twosigma.com.) + +We thank the MCM judges and MCM Board members for their valuable and unflagging efforts. Harvey Mudd College, its Mathematics Dept. staff, and Prof. Borrelli were gracious hosts to the judges. + +# Cautions + +To the reader of research journals: + +Usually a published paper has been presented to an audience, shown to colleagues, rewritten, checked by referees, revised, and edited by a journal editor. Each of the student papers here is the result of undergraduates working on a problem over a weekend; allowing substantial revision by the authors could give a false impression of accomplishment. So these papers are essentially au naturel. Editing (and sometimes substantial cutting) has taken place: Minor errors have been corrected, wording has been altered for clarity or economy, and style has been adjusted to that of The UMAP Journal. Please peruse these student efforts in that context. + +To the potential MCM Advisor: + +It might be overpowering to encounter such output from a weekend of work by a small team of undergraduates, but these solution papers are highly atypical. A team that prepares and participates will have an enriching learning experience, independent of what any other team does. + +COMAP's Mathematical Contest in Modeling and Interdisciplinary Contest in Modeling are the only international modeling contests in which students work in teams. Centering its educational philosophy on mathematical modeling, COMAP uses mathematical tools to explore real-world problems. 
It serves the educational community as well as the world of work by preparing students to become better-informed and better-prepared citizens. + +# Appendix: Successful Participants + +KEY: + +P = Successful Participation + +H = Honorable Mention + +M = Meritorious + +O = Outstanding (published in this special issue)
INSTITUTION | CITY | ADVISOR | A | B
ALASKA
U. Alaska Fairbanks (CS)FairbanksOrion LawlorMM
ARIZONA
University of ArizonaTucsonBruce BaylyH
CALIFORNIA
Cal Poly PomonaPomonaIoana MihailaM
Hubertus von BremenM
Cal-Poly Pomona U. (Phys)PomonaKurt VandervoortM
Calif. Polytechnic State Univ.San Luis ObispoLawrence SzeM
Cal. State U. at Monterey BaySeasideHongde HuM
Cal. State Univ., BakersfieldBakersfieldMaureen RushP
Cal. State Univ., StanislausTurlockBrian JueP
Harvey Mudd College (CS)ClaremontJon JacobsenM,H
Ran Libeskind-HadasMM
Humboldt State U. (Env Eng)ArcataBrad FinneyMM
University of CaliforniaBerkeleyNicolai ReshetikhinM
DavisSarah WilliamsOM
University of San DiegoSan DiegoDiane HoffossM,M
COLORADO
Regis UniversityDenverJim SeibertM
University of ColoradoBoulderAnne DoughertyM
Bengt FornbergO,O
(Phys)Michael RitzwollerHM
Colorado SpringsRadu CascavalP
DenverGary OlsonM
CONNECTICUT
Sacred Heart UniversityFairfieldPeter LothH
Southern Conn. State U.New HavenTherese BennettM
DELAWARE
University of DelawareNewarkLouis RossiM
DISTRICT OF COLUMBIA
George Washington Univ.WashingtonDaniel UllmanH
FLORIDA
Embry-Riddle UniversityDaytona BeachGreg SpradlinH
Jacksonville UniversityJacksonvilleRobert HollisterH P
University of South Florida(Ind'l & Mgmnt Sys Eng)TampaBrian CurtinP
Nan KongM
GEORGIA
University of West GeorgiaCarrolltonScott GordonM,H
Wesleyan College(Chem & Phys)MaconJoseph IskraH,P
Charles BeneshP
ILLINOIS
Greenville CollegeGreenvilleGeorge PetersH,H
Wheaton CollegeWheatonPaul IsiharaHH
INDIANA
Franklin CollegeFranklinJohn BoardmanP
Goshen CollegeGoshenCharles CraneP
Indiana Univ. South BendSouth BendMorteza Shafii-MousaviP
Rose-Hulman Inst. of Tech.Terre HauteDavid RaderHP
Saint Mary's CollegeNotre DameJoanne SnowH,P
IOWA
Grand View CollegeDes MoinesSergio LochP
Grinnell CollegeGrinnellKaren ShumanH,H
Iowa State UniversityAmesStephen WillsonP
Luther CollegeDecorahReginald LaursenH,H
Steve HubbardP
Mt. Mercy CollegeCedar RapidsK. KnoppH
Simpson College(Bio)IndianolaMurphy WaggonerPM
Jeff ParmeleeH,P
KANSAS
Benedictine CollegeAtchisonLinda HerndonH
Kansas State UniversityManhattanDave AucklyP
KENTUCKY
Asbury CollegeWilmoreDavid CoulietteM,H
Northern Kentucky Univ.(Phys & Geology)Highland HeightsGail MackinHM
Sharmanthie FernandoP
MAINE
Colby CollegeWatervillePaul CohenHP
MARYLAND
Johns Hopkins UniversityBaltimoreFred TorcasoM,H
Loyola CollegeBaltimoreChristos XenophontosHP
Mount St. Mary's UniversityEmmitsburgFred PortierP
Salisbury UniversitySalisburyDivya DevadossP
Towson UniversityTowsonMichael O'LearyP
MASSACHUSETTS
Emmanuel CollegeBostonMatthew TomP
Harvard University (Eng)CambridgeClifford TaubesO
Michael BrennerO
Massachusetts Institute of Tech. (Phys)CambridgeMartin BazantPO
Leonid LevitovH
Simon's Rock College (Phys)Great BarringtonAllen AltmanPP
Michael BergmanH,P
Smith CollegeNorthamptonRuth HaasP
Univ. of Massachusetts LowellLowellJames Graham-EagleM
Western New England CollegeSpringfieldLorna HanesP
Worcester Polytechnic Institute (CS)WorcesterSuzanne WeekesHP
Stanley SelkowHP
MICHIGAN
Albion CollegeAlbionDarren MasonH
Lawrence Technological Univ.SouthfieldRuth FavroM,P
MISSOURI
Drury University (Phys)SpringfieldKeith CoatesMP
Bruce CallenH,P
Saint Louis University (Aero & Mech Eng)St. LouisDavid JacksonP
Sanjay JayaramH
Southeast Missouri State Univ.Cape GirardeauRobert SheetsH
Truman State UniversityKirksvilleSteve SmithMP
MINNESOTA
Bethel UniversitySt. PaulWilliam KinneyM
Coll. of St. Benedict / St. John's U.CollegevilleRobert HessePP
Macalester CollegeSt. PaulDaniel KaplanP
Minnesota State UniversityMoorheadEllen HillP,P
MONTANA
Carroll CollegeHelenaHolly ZulloM
Mark ParkerO
(Chem)Dawn BregelH
University of MontanaMissoulaGeorge McRaeH
NEBRASKA
Nebraska Wesleyan UniversityLincolnKristin PfabeP
NEW JERSEY
Rowan UniversityGlassboroHieu NguyenM,M
NEW MEXICO
NM Inst. of Mining and Tech.SocorroJohn StarrettH
New Mexico State UniversityLas CrucesMary BallykP
Tiziana GiorgiP
NEW YORK
Clarkson UniversityPotsdamKathleen FowlerP,P
Columbia UniversityNew YorkPeter BankH
(Appl Phys & Appl Math)David KeyesM,P
Concordia CollegeBronxvilleJohn LoaseP
Cornell UniversityIthacaAlexander VladimirskyMP
(Operations Res & Ind'l Eng)Shane HendersonM,P
Hobart and William Smith Colls. (Geoscience)GenevaScotty OrrH,P
Tara CurtinH
Iona CollegeNew RochelleSrilal KrishnanH
Ithaca College (CS)IthacaAli ErkanP
(Phys)Bruce ThompsonH
Nazareth CollegeRochesterDaniel BirmajerP
Rensselaer Polytechnic Institute (Chem & Bio Eng)TroyPeter KramerMO
Shekhar GardeH
Roberts Wesleyan CollegeRochesterGary RadunsP,P
Union CollegeSchenectadyPeter OttoH,H
United States Military Acad.West PointJoseph LindquistP
Kerry MooresH
Westchester Community Coll.ValhallaMarvin LittmanP,P
NORTH CAROLINA
Appalachian State UniversityBooneEric MarlandH,P
Brevard CollegeBrevardClarke WellbornP
Davidson College (Econ)DavidsonTimothy ChartierM,M
Mark FoleyM,P
Duke University (CS)DurhamWilliam MitchenerOM
Owen AstrachanMM
Meredith CollegeRaleighCammey ColeP,P
NC School of Sci. & MathDurhamDaniel TeagueHP
Wake Forest University (CS)Edward AllenM,M
David JohnP
(Econ)Claire HammondM
Western Carolina UniversityCullowheeErin McNelisH
OHIO
Bowling Green State UniversityBowling GreenJuan BesH,P
College of Mount St. JosephCincinnatiScott SportsmanM
Malone CollegeCantonDavid HahnH
Xavier UniversityCincinnatiMichael GoldweberP,P
Youngstown State UniversityYoungstownAngela SpalsburyM,H
(Civil Eng)Scott MartinH
OKLAHOMA
Oklahoma State UniversityStillwaterLisa MantiniHP
OREGON
Eastern Oregon UniversityLa GrandeDavid AllenM
Lewis & Clark CollegePortlandLiz StanhopeM
Linfield CollegeMcMinnvilleJennifer NordstromHP
(CS)Daniel FordHH
Pacific UniversityForest GroveJohn AugustH
Nancy NeudauerH
(Phys)James ButlerH
Southern Oregon UniversityAshlandKemble YatesH
Willamette UniversitySalemInga JohnsonP
PENNSYLVANIA
Bloomsburg UniversityBloomsburgKevin FerlandH,P
Chatham CollegePittsburghJapheth WoodP,P
Gannon UniversityErieMichael CaulfieldP,P
Gettysburg College (Phys)GettysburgSharon StephensonH
Juniata CollegeHuntingdonJohn BukowskiP
Lafayette CollegeEastonEthan BerkoveP
Slippery Rock UniversitySlippery RockRichard MarchandM
University of PittsburghPittsburghJonathan RubinHP
Westminster College (CS)New WilmingtonBarbara FairesP
SOUTH CAROLINA
College of CharlestonCharlestonAmy LangvillePP
Midlands Technical CollegeColumbiaJohn LongH,P
University of South CarolinaColumbiaLili JuH
SOUTH DAKOTA
South Dakota School of Mines & Tech.Rapid CityKyle RileyH
TENNESSEE
Austin Peay State UniversityClarksvilleNell RayburnM
Tennessee Technological UniversityCookevilleAndrew HetzelP
(CS)CookevilleMartha KosaH
TEXAS
Angelo State UniversitySan AngeloKarl HavlakPP
Rice UniversityHoustonMark EmbreeO
UTAH
University of UtahSalt Lake CityDon TuckerP
VIRGINIA
James Madison UniversityHarrisonburgCaroline SmithP
Longwood UniversityFarmvilleM. Leigh LunsfordH
Maggie Walker Governor's School (Sci)RichmondJohn BarnesMM
Harold HoughtonM
Radford UniversityRadfordLaura SpielmanP
Randolph-Macon CollegeAshlandBruce TorrenceP
University of RichmondRichmondKathy HokeP
Virginia Western Community Coll.RoanokeSteve HammerH
Ruth ShermanP
WASHINGTON
Central Washington UniversityEllensburgStuart BoersmaM
Heritage UniversityToppenishRichard SwearingenH
Pacific Lutheran UniversityTacomaJeffrey StuartHP
University of Puget SoundTacomaDeWayne DerryberryP
University of Washington (Appl Comp'1 Math'1 Sci)SeattleJames MorrowHM
Anne GreenbaumHM
Western Washington UniversityBellinghamTjalling YpmaM
WISCONSIN
Beloit CollegeBeloitPaul J. CampbellH
St. Norbert CollegeDe PereJohn FrohligerP
University of WisconsinRiver FallsKathy TomlinsonH
Edgewood CollegeMadisonSteven PostP
AUSTRALIA
University of New South WalesSydneyJames FranklinP
Univ. of Southern QueenslandToowoombaDmitry StruninAH
CANADA
Dalhousie UniversityHalifaxDorothea PronkP
University of Western OntarioLondonAllan MacIsaacH
York UniversityTorontoHongmei ZhuH
Huaiping ZhuP
Queen's UniversityKingstonDavid SteinsaltzP
McGill UniversityMontrealNilima NigamH,P
Nilima NigamP
University of SaskatchewanSaskatoonJames BrookeMP
CHINA
Anhui UniversityHefeiWang XuejunP
(Info)Wang JianP
Chen MingshengP
(Stat)Xiang JunpingP
Hefei University of TechnologyHefeiSu HuamingP
Du XueqiaoM
(Comp'1 Math)Zhou YongwuP
Huang YouduP
University of Science and Technology of ChinaHefeiHuang ChuanP
Yang ZhouwangM
(Automation)Meng QiangM
Beijing
BeiHang University (BHU) (Eng)BeijingSun HaiYanP
(Sci)Liu HongyingP
(CS)Wu SanxingP
Li ShangZhiP,P
Beijing Forestry UniversityBeijingGao MengningP
Li HongjunP,P
(Bio)Gao NingP
Beijing Institute of TechnologyBeijingWang HongzhouP
Ren QunP
Yan XiaoxiaH
Li XuewenP
Yan GuifengP
Beijing Jiaotong UniversityBeijingWang XiaoxiaM
Wang ZhouhongM,P
Feng GuochenP
Zhang ShangliM
Yu YongguangPP
(Comm. Eng)Liu XiaoP
(Info)Fan BingliP
Yu JiaxinP
Beijing Language and Culture Univ. (Accounting)BeijingXun EndongH
(Finance)Song RouH
(Info)Zhao XiaoxiaH,H
Beijing Normal UniversityBeijingCui HengjianH
Huang HaiyangP
Qing HeH,P
Liu LaifuH,P
(Phys)Peng FanglinH
(Psych)Lin DanhuaH
Beijing University of Chemical Tech. (Sci)BeijingLiu DaminP
Liu HuiP
(Chem. Eng.)Jiang GuangfengH
Huang JinyangP
Beijing University of Posts and Telecomm. (Info)BeijingYuan JianhuaP
Zhang WenboP
He ZuguoM,H
Hongxiang SunH,H
(CS)Wang XiaoxiaH
(Phys)Ding JinkouM
Beijing University of TechnologyBeijingXue YiH
Beijing Wuzi UniversityBeijingLi ZhenpingP,P
Beijing Wuzi Xueyuan (Jichubu)BeijingCheng XiaohongPP
Central Univ. of Finance and EconomicsBeijingLi DonghongP
Yin XianjunH
Fan XiaomingM
Yin ZhaoYinP
China Agricultural UniversityBeijingLiu JunfengP,P
China University of Geosciences (Info) (Eng)BeijingYan DengH,P
Huang GuangdongP,P
College of Info. Sci. and Tech.BeijingShen FuxingP
North China Electr. Power U. (Automation)BeijingWen TanP
Peking UniversityBeijingDeng MinghuaPP
Liu XufengPP
Liu YulongH
(CS)BeijingTang HuazhongP
(Financial Math)Wu LanP
(Sci Comp)Wang MingH,P
Renmin University of China (Info)BeijingHan LitaoP,P
Yang YunyanP
School of Computer and Information Tech. Tsinghua UniversityBeijingRen LiweiP
BeijingYe JunP,P
Hu ZhimingM,P
(Sftwr)Chi Chi HungM
Chongqing

Chongqing University (Info) (Appl. Sftwr) (Stat)ChongqingHe RenbinP
Yang DadiP
Li FuH
Rong TengzhongP
Duan ZhengminM
Fujian

Fujian Normal UnivFuzhouZhang ShengguiP
Xiamen University (Life Sci)XiamenZhong TanP,P
Long MinnanP
Gansu
Lanzhou Commercial College (Info)LanzhouLi BodeP
Guangdong
Guangzhou UniversityGuangzhouFu LinP
Shang DongP
Xiong JianP
Zhong BinP
Jinan University (CS)GuangzhouHu DaiqiangH
Zhang ChuanlinP
Luo ShizhuangP
(Electronics)Ye ShiqiP
South-China Normal Univ. (Comp Apps) (CS)GuangzhouLi HunanH
Wang HenggengH,H
(Stat)Zhang ShaohuiMH
South China University of Technology (Electr. Power)GuangzhouLiu QuanPM
GuangzhouQin Yong AnP
(CS)Tao Zhi SuiP
Sun Yat-sen University (CS)GuangzhouFeng GuocanP
Liu XiaomingM
(Phys)Bao YunH
(Geography)Yuan ZuoJianP
Guangxi
University of Guangxi (Op'ns Research) (Info)NanningWu RuP,P
Wang XingM,P
Guizhou
Guizhou University for NationalitiesGuiyangSuo HongminP
Hong ZhenshengP
Hebei
Hebei Polytechnic UniversityTangshanLiu BaoxiangP
Jin BianchuanP
Meng JunboP
North China Electric Power UniversityBaodingGu GendaiH
Shi HuifengP
Zhang PoP
Zhang YagangP
(Electr. Eng.)Wang ShenghuaH
Heilongjiang
Biomedical Research InstituteHarbinWang QiP
Daqing Petroleum InstituteDaqingZhang ChangP
Kong LingP,P
Yang YunP
Harbin Engineering UniversityHarbinZhu LeiP
Yu TaoP
(Comp'l Math)HarbinYu FeiP,P
(Phys)Shen JihongP,P
Zhang XiaoweiP,P
(Sci)HarbinLuo YueshengP
Gao ZhenbinP,P
Hong GeP
Harbin Institute of TechnologyHarbinLiu KeanP
Shang ShoutingH,P
Wang XilianP,P
Jiao GuanghongM
Zhang ChipingH
Harbin Medical University (Bioinformatics)HarbinLiu SaLiPP
Wang QiangHuPH
Harbin Normal University (Info)HarbinYao HuanminP
Harbin University of Science and TechnologyHarbinChen DongyanP
Tian GuangyueP,P
Jia-Musi UniversityJia-MusiWen BinP
Zhang HongP,P
(Dean's Ofc)Jia-MusiFan WeiPP
Northeast Agri. U. of China, Chengdong Inst. (Accounting)HarbinZhang YaZhuoP
Zhang YaZhuoP
Hubei
Huazhong Univ. of Sci. and Tech. (Ind'l Eng.)WuhanGao LiangP
Dong YanP
Wuhan UniversityWuhanZhong LiuyiH
Ming HuP
Hu MingyuanP
Chu LuoH
Hu XingqiH
(Remote Sensing)WuhanHu XinqiP
Luo ChuP
(Electr. Eng.)WuhanHu XinqiP
Hu YuanP
(Info)Zhong LiuyiP
(River Eng.)Hu YuanmingP,P
(Banking & Financial Math)Luo ZhuangchuPP
(Dynamics and Machinery)Zhong LiuP
(Mech. Eng.)Qi HuxinP
(Stat)Zhong Liuyi
Wuhan University of TechnologyWuhanChen JianyeP
Chu YangjieP
He LangP
Huang XiaoweiP
Huang ZhangcanP
Liu YangP
Zhu HuapingP,P
(Stat)Li YuguangP
Mao ShuhuaP
Hunan
Central South UniversityChangshaQin XuanyunMH
(Traffic & Transp Eng)Yi LaoshiP
(Civil Eng & Arch)Yi KunnanP
Railway Campus (Eng)ChangshaYi KunnanP
Changsha University of Sci. and TechnologyChangshaTong QingShanP
Hunan University (Info)ChangshaMa ChuanxiuP
(Stat)Wang LipingP,P
National University of Defense TechnologyChangshaM.D. WuP,P
(CS)Xiong YueshanP
(Mgmnt Sci & Eng)Wang DanM,P
(Sys Sci)Mao ZiyangH
Inner Mongolia
Inner Mongolia UniversityHohhotHan HaiP
Wang MeiP
Ma ZhuangP
Jiangsu
China University of Mining and TechnologyXuzhouZhang XingyongP
(Ed Admin)Wu ZongxiangP
Jiang Su UniversityZhenjiangYang HonglinPP
Gui GuilongP,P
Xu GangP
(Sys Eng)Li YiminP,P
Nanjing UniversityNanjingHuang WeihuaHP
Yao TianxingPP
(Chem)Duan ChunyingP
(Phys)Gao JianP
(CS)Li NingP
(Finance)Lin HuiH
Nanjing Univ. of Sci. & Tech.NanjingXu ChungenP
Wang PinlingP
(Stat)Liu LiweiPM
Nanjing University of Posts and Telecomm.NanjingKong GaohuaM
Qiu ZhonghuaP
Xu LiWeiP
PLA U. of Sci. and Tech. (Command Auto.) (Eng)NanjingLiu ShoushengM
Shen JinrenP
Teng JiajunP
(Meteorology)Qin ZhengP
School of Computer Science and TechnologyXuzhouJiang ShujuanM

Southeast University
NanjingChen EnshuiPH
Jia XingangH,H
Wang LiyanPP
Zhang ZhiqiangH,H
Xuzhou Institute of TechnologyXuzhouJiang YingziH,P
Jiangxi
Nanchang UniversityNanchangChen TaoM
Chen YujuP
Liao ChuanrongP
Ma XinshengP
Jilin
Jilin UniversityChangchunShi ShaoyunP
Zhao ShishunP
Zou YongkuiPH
(CS)Lu XianruiP
(Eng)ChangchunLi SongtaoP
Fang PeichenM
Northeast Dianli UniversityJilinChang ZhiwenP,P
Guo XinchenPP
Xu ZhonghaiP
Zhang JieP
Zhou ShuoP
Northeast Normal UniversityChangchunLi ZuofengP,P
Liaoning
Dalian Maritime UniversityDalianChen GuoyanP
Zhang YunjieP
Zhang YunP
Dalian Nationalities Innovation CollegeDalianGuo QiangP
Bai RixiaMP
Ma YumeiP
Ge RendongP
Liu XiangdongP
Chen XingwenH,H
Shen LianshanP
Dalian University Institute of Info. and Eng.DalianLiu GuangzhiH
Gang JiataiP
Liu ZixinP
Dong XiangyuP
Tan XinxinP
Dalian University of TechnologyDalianYu HongquanH,P
Zhao LizhongHP
He MingfengH,P
(Sftwr)Li FengqiH
Li ZheP,P
City InstituteLi GuanP,P
Li LianfuPH
Dongbei U. of Finance and Econ. (Econometrics)DalianZheng YongbingH
Institute of University Students' InnovationDalianFu DonghaiH
He MingfengH
Liu GuangzhiH
Northeastern University (CS)ShenyangZhou FucaiP
Zhao ShuyingM
Distance Education College (CS)Ping SunP
Institute of AI and RoboticsCui Jian-jiangP
Software College (Info Security)Liu HuilinP
Hao PeifengH
Software College (Sftwr Eng)Xu JianzhongH
He XuehongH
Shenyang Institute of Aeronautical Eng.ShenyangFeng ShanPP
Zhu LimeiPP
Shenyang Pharmaceutical UniversityShenyangXiang RongP
Shaanxi
Northwestern Polytechnical UniversityXi'anSun HaoH
Wang ZhenhaiP
(Phys)Xi'anGuo QiangP
Zhe LiuM
(Chem)Lei YoumingH
Xu YongP
Xi'an Communication InstituteXi'anDongsheng YangM
(Eng)Xi'anZhang JianhangM
(CS)Kang JinlongM
(Phys)Li GuoM
Xi'an Jiaotong UniversityXi'anHe XiaoliangPM
Dai YonghongMH
Xi'an University of TechnologyXi'anWang ShangpingH
(Automation and Info Eng)Mao CaoP,P
Xidian UniversityXi'anFeng HailinM
Zhu QiangM
(Eng)Li Wei
(CS)Song YueH
Shandong
Harbin Institute of Technology at WeihaiWeihaiLi BaojiaP
Qu RongningP
(Materials Sci & Eng)Cui LingjiangP
Shandong UniversityJinanHuang ShuxiangP,PP
Lu TongchaoH
(Appl Math Research Inst)Liu BaodongH
Huang ShuxiangH
(Sftwr)Meng XiangXuH
Shandong University at WeihaiWeihaiYang BingPP
(Info Sci and Eng)Zhao HuaxiangP
Cao ZhulouP
Shanghai
Donghua University (Econ)ShanghaiWang XiaofengP
(Sci)Ge YongH
You SurongP
Lu YunshengP
East China University of Sci. and Tech.ShanghaiQin YanP
Lu YuanhongP
(Phys)Liu ZhaohuiP,PP
Fudan UniversityShanghaiYuan CaoP
Cai ZhijieP
Jiading No. 1 Middle SchoolShanghaiXie Xilin and Fang YunpingH,PP
Shanghai Finance & Economics CollegeShanghaiZhang LizhuP
Sun YuH
Tao LiangH
Shanghai Jiaotong UniversityShanghaiSong BaoruiO,M
Gang ZhouPM
Minhang BranchZhou GuobiaoP,PP
Huang JianguoMM
Shanghai Maritime U. (Mech Eng)ShanghaiDing SongkangP
Shanghai Normal UniversityShanghaiLiu Rongguan and Shi YongbingP
Guo Shenghuan and Zhu DetongP
Shanghai U. of Finance and EconomicsShanghaiLi FangH
Zhang Li-zhuH
(Econ)Zhou JianP
Sun YanH
Feng SuweiP
Shanghai Youth Centre of Sci. & Tech. Ed.ShanghaiChen GanP,PP
Wang WeipingP
Xu FengH
Tongji UniversityShanghaiXiang JialiangH
Liang JinH
(Chem)Chen XiongdaP
(Env Sci & Eng)ShanghaiYin Hailong & P
University of Finance and Econ. in ShanghaiShanghaiLi TaoP
Yin ChengyuanP
Youth Centre of Science & Technology Education of Hongkou DistrictShanghaiJian TianP
Yucai Senior High SchoolShanghaiLi ZhengtaiP
Yang ZhenweiH
Sichuan
Chengdu University of Tech. (Info & Mgmnt)ChengduWei HuaP
Yuan YongHH
Sichuan UniversityChengduHai NiuP
Zhou ShuchaoP
Zhou JieH,P
(Stat)Zhou JieM
Southwest University of Finance and Econ.ChengduSun JiangmingM
Sun YunlongP
U. of Electronic Science and Tech. of China (Info & CS)ChengduGao QingH,P
Qin SiyiP
Xu QuanziM
Tianjin
Tianjin UniversityTianjinRong XinP,P
(Chem)Shi GuoliangP
(Sftwr Eng)Huang ZhenghaiP
Civil Aviation University of ChinaTianjinNie RuntuH
Ming TianH
Nankai UniversityTianjinChen DianfaH
(Info)Ruan JishouP
(Automation)Chen WanyiP
(Phys)Zhou XingweiP
Yunnan
Yunnan University Kunming (Telecomm)KunmingZong RongH
Pei YijianP
Tan ZhiyiP
Zhejiang
Hangzhou Dianzi University (CS)HangzhouQiu ZheyongPP
(Info and Math)Zhang ZhifengPH
Zhejiang Gongshang UniversityHangzhouDing ZhengzhongP,P
(Info & CS)Hua JiukunP,P
Zhejiang Sci-Tech UniversityHangzhouLuo HuaH
(Phys)Hu JueliangP
Han ShuguangH
Zhejiang UniversityHangzhouYang QifanMH
Jiang YiweiH
City College (Info & CS)HangzhouWang GuiH
Zhang HuizengM
Kang XushengH
Zhejiang University Ningbo Inst. of Tech.NingboWang JufengH
Tu LihuiP
Yu XuefanP
Li ZheningP
Zhejiang U. of Finance and Econ. (Info)HangzhouWang FulaiP
Ji LuoH
Zhejiang University of TechnologyHangzhouZhou MinghuaP,P
(Jianxing)Wang ShimingO,P
FINLAND
Päivölä CollegeTarttilaJanne PuustelliH,M
Helsingin MatematikkalukioHelsinkiEsa LappiM
Juho PakarinenP
University of HelsinkiHelsinkiPetri OlaM
HONG KONG
Hong Kong Baptist UniversityKowloonC.S. TongH
W.C. ShiuP
INDONESIA
Institut Teknologi BandungBandungRieske HadiantiH
Agus GunawanH
IRELAND
National University of IrelandGalwayNiall MaddenM,H
University College CorkCorkAndrew UsherM
Ben McKayH
James GrannellM
Dmitrii RachinskiiM
NEW ZEALAND
Victoria UniversityWellingtonMark McGuinnessM
SOUTH KOREA
Korea Adv. Inst. of Sci. and Tech. (KAIST)DaejeonKi Hyoung KoM
Yong KimH,H
(Biosystems)Dongsup KimM
SOUTH AFRICA
University of StellenboschStellenboschJan van VuurenM,H
UNITED KINGDOM
University of OxfordOxfordJeffrey GiansiracusaM,P
# Abbreviations for Organizational Unit Types (in parentheses in the listings)
- (none) = Mathematics: M; Pure M; Applied M; Computing M; M and Computer Science; M and Computational Science; M and Information Science; M and Statistics; M, Computer Science, and Statistics; M, Computer Science, and Physics; Mathematical Sciences; Applied Mathematical and Computational Sciences; Natural Science and M; M and Systems Science; Applied M and Physics
- Bio = Biology: B; B Science and Biotechnology; Biomathematics; Life Sciences
- Chm = Chemistry: C; Applied C; C and Physics; C, Chemical Engineering, and Applied C
- CS = Computer: C Science; C and Computing Science; C Science and Technology; C Science and (Software) Engineering; Software; Software Engineering; Artificial Intelligence; Automation; Computing Machinery; Science and Technology of Computers
- Econ = Economics: E; E Mathematics; Financial Mathematics; Financial Mathematics and Statistics; Management; Business Management; Management Science and Engineering
- Eng = Engineering: Civil E; Electrical E; Electronic E; Electrical and Computer E; Electrical E and Information Science; Electrical E and Systems E; Communications E; Civil, Environmental, and Chemical E; Propulsion E; Machinery and E; Control Science and E; Mechanisms; Operations Research and Industrial E; Automatic Control
- Info = Information: I Science; I and Computation(al) Science; I and Calculation Science; I Science and Computation; I and Computer Science; I and Probability; I and Computing Science; I Engineering; Computer and I Technology; Computer and I Engineering; I and Optoelectronic Science and Engineering
- Phys = Physics: P; Applied P; Mathematical P; Modern P; P and Engineering P; P and Geology; Mechanics; Electronics
- Sci = Science: S; Natural S; Applied S; Integrated S
- Stat = Statistics: S; S and Finance; Mathematical S; Probability and S
EDITOR'S NOTE: For team advisors from China, I have endeavored to list family name first, and I thank Jie Fu (Beloit College '10) for her help in this regard.

# Sprinkler Systems for Dummies: Optimizing a Hand-Moved Sprinkler System

Ben Dunham

Steffan Francischetti

Kyle Nixon

Carroll College

Helena, MT

Advisor: Mark Parker

# Summary

"Hand move" irrigation, a cheap but labor-intensive system used on small farms, consists of a movable pipe with sprinklers on top that can be attached to a stationary main. Our goal is a schedule that meets specific watering requirements and minimizes labor, given flow parameters and pipe specifications.

We apply Bernoulli's energy-conservation equation to the flow characteristics to determine sprinkler discharge speeds, ranges, and flow rates. Using symmetry and a model of sprinkler coverage, we find that three sprinklers, operating 57 min at 9 consecutive cycling stations during four 11-hour workdays, with the sprinklers $9\mathrm{m}$ apart on the $20\mathrm{m}$ mobile pipe and six mainline stations spaced $15\mathrm{m}$ apart, will water more than $99\%$ of the field. Our computer model uses a genetic algorithm to improve the coverage to $100\%$ by changing the sprinkler spacing to $10\mathrm{m}$ and adjusting the mainline station spacing accordingly.

The text of this paper appears on pp. 237-254.

# Fastidious Farmer Algorithms (FFA)

Matthew A. Fischer

Brandon W. Levin

Nikifor C. Bliznashki

Duke University

Durham, NC

Advisor: William Mitchener

# Summary

An effective irrigation plan is crucial to "hand move" irrigation systems, which consist of easily movable aluminum pipes and sprinklers and are typically used as a low-cost, small-scale watering system. Without an effective irrigation plan, the crops will either be watered improperly, resulting in a damaged harvest, or watered inefficiently, using too much water.
We determine an algorithm for "hand move" irrigation systems that irrigates as uniformly as possible in the least amount of time. We physically characterize the system, determine a method of evaluating various irrigation algorithms, and test these algorithms to determine the most effective strategy.

Using fluid mechanics, we find that we can have at most three nozzles on the 20-m pipe while maintaining appropriate water pressure. We model our sprinkler system after the Rain Bird 70H $1''$ impact sprinkler, which works at the desired pressure and has a nozzle diameter of approximately $0.6\mathrm{cm}$. Combining data and analysis, we confirm that the throw radius of the sprinkler is $19.5\mathrm{m}$. Researchers have proposed several models for the water distribution pattern about a sprinkler; we consider a triangular distribution and an exponential distribution.

We rule out schemes that fail to water all areas of the field at least $2\mathrm{cm}$ every 4 days, or that water any area at more than $0.75~\mathrm{cm/h}$. The largest cost in time and labor is in moving the pipe. Thus, we look for a small number of moves that still gives the desired coverage and stability. From these configurations, computer analysis determines which is most uniform.

For various situations, we propose an optimal solution. The sprinkler placement patterns are based on triangular and rectangular lattices. We craft three patterns to maximize application to the difficult edges and corners.

- For calm conditions and a level field, the field can be watered with just two moves (the "Lazy Farmer" configuration). However, this approach is unstable, and even a weak wind would leave parts of the field dry. With three moves, little stability is gained; so four positions is best.
- The "Creative Farmer" triangular lattice gives both stability and uniformity. The extra time is warranted because of its ability to adapt.
- We obtain even more stability using the "Conservative Farmer" model, but at the price of a decrease in uniformity from the "Creative Farmer" approach.

The text of this paper appears on pp. 255-268.

# A Schedule for Lazy but Smart Ranchers

Wang Cheng

Wen Ye

Yu Yintao

Shanghai Jiaotong University

Shanghai, China

Advisor: Song Baorui

# Summary

We determine the number of sprinklers to use by analyzing the energy and motion of water in the pipe and examining the engineering parameters of sprinklers available on the market.

We build a model to determine how to lay out the pipe each time the equipment is moved. This model leads to a computer simulation of catch-can tests of the irrigation system and an estimation of both the distribution uniformity (DU) and the application efficiency of different schemes for where to move the pipe. At this stage, DU is the most important factor. We find a schedule in which one sprinkler is positioned outside the field in some moves, yielding a higher DU ($92\%$) and saving water.

We determine two schedules to irrigate the field. In one schedule, the field receives water evenly during a cycle of irrigation (in our schedule, 4 days), while the other costs less labor and time. Our suggested solution, which is easy to implement, includes a detailed timetable and the arrangement of the pipes. It costs 12.5 irrigation hours and 6 equipment resets in every 4-day cycle to irrigate the field with DU as high as $92\%$.

The text of this paper appears on pp. 269-283.

# Optimization of Irrigation

Bryan J.W. Bell

Yaroslav Gelfand

Simpson H. Wong

University of California

Davis, CA

Advisor: Sarah A. Williams

# Summary

We determine a schedule for a hand-move irrigation system that minimizes the time to irrigate a $30\mathrm{m}\times 80\mathrm{m}$ field, using a single $20\mathrm{m}$ pipeset with a $10\mathrm{cm}$-diameter tube and $0.6\mathrm{cm}$-diameter rotating spray nozzles. The schedule should involve a minimal number of moves, and the resulting application of water should be as uniform as possible. No part of the field should receive water at a rate exceeding $0.75\mathrm{cm}$ per hour, nor receive less than $2\mathrm{cm}$ in a four-day irrigation cycle. The pump has a pressure of $420\mathrm{kPa}$ and a flow rate of $150\mathrm{L/min}$.

The sprinklers have a throw radius of $14.3\mathrm{m}$. With a riser height of 30 in, the field can be irrigated in $48\mathrm{h}$ over four days. Moreover, a single sprinkler is optimal. The pipes should be moved every $5\mathrm{h}$ and be at least $21\mathrm{m}$ apart. The resulting irrigation has a precipitation uniformity coefficient of 0.89 (where 1 would be maximum uniformity).

We deal with each constraint in turn. Using geometrical analysis, we convert the coverage problem to determining the least number of equal-sized circles that can cover the field. We perturb the solution to optimize uniformity by applying a Simultaneous Perturbation Stochastic Approximation (SPSA) optimization algorithm. We perturb this solution further to find the minimal number of pipe setups, by experimentally "fitting" the pipesets through the sprinklers. The rationale for perturbation is that some drop in uniformity can be tolerated in favor of minimizing the number of setups while still ensuring that we irrigate the entire field. We feed the optimal layout of pipe setups to another algorithm that generates an irrigation schedule for moving the pipes.

The text of this paper appears on pp. 285-294.
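Several of these abstracts score irrigation schemes by uniformity statistics computed from catch-can depths (the DU above, and Christiansen's coefficient in the papers that follow). As a minimal sketch, with made-up depth values rather than data from any of the papers, the two statistics can be computed as:

```python
# Sketch: two common uniformity statistics from catch-can depths.
# The sample depths below are illustrative, not data from the papers.

def du_low_quarter(depths):
    """Low-quarter distribution uniformity:
    mean of the driest quarter of cans / overall mean."""
    d = sorted(depths)
    low = d[: max(1, len(d) // 4)]
    return (sum(low) / len(low)) / (sum(d) / len(d))

def christiansen_cu(depths):
    """Christiansen's coefficient: CU = 100 (1 - mean|d - dbar| / dbar)."""
    mean = sum(depths) / len(depths)
    mad = sum(abs(x - mean) for x in depths) / len(depths)
    return 100.0 * (1.0 - mad / mean)

cans = [2.1, 2.0, 1.9, 2.2, 2.0, 1.8, 2.1, 2.0]  # cm caught per cycle
print(round(du_low_quarter(cans), 2))   # 0.92
print(round(christiansen_cu(cans), 1))  # 95.5
```

Low-quarter DU compares the driest quarter of the field to the average, while CU penalizes mean absolute deviation; both reward schemes that spread water evenly.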
# Sprinkle, Sprinkle, Little Yard

Brian Camley

Bradley Klingenberg

Pascal Getreuer

University of Colorado

Boulder, CO

Advisor: Bengt Forberg

# Summary

We determine an optimal algorithm for irrigating an $80\mathrm{m}\times 30\mathrm{m}$ field with a hand-move 20-m pipe set, using a combination of analytical arguments and simulated annealing. We minimize the number of times that the pipe is moved and maximize the Christiansen uniformity coefficient of the watering.

We model flow from a sprinkler as flow from a pipe combined with projectile motion with air resistance; doing so predicts a range and distribution consistent with data from the literature. We determine the position of sprinkler heads on a pipe to optimize uniformity of watering; our results are consistent with predictions from both simulated annealing and Nelder-Mead optimization.

Using an averaging technique inspired by radial basis functions, we prove that periodic spacing of pipes maximizes uniformity. Numerical simulation supports this result; we construct a sequence of irrigation steps and show that both the uniformity and the number of steps required are locally optimal.

To prevent overwatering, we cannot leave the pipe in a single location until the minimum watering requirement for that region is met; to water sufficiently, we must water in several passes. The number of passes is minimized as uniformity is maximized.

We propose watering the field with four repetitions of five steps, each step lasting roughly $30\mathrm{min}$. We place two sprinkler heads on the pipe, one at each end. The five steps are uniformly spaced along the long direction of the field, with the first step at the field boundary. The pipe locations are centered in the short direction. This strategy requires only 20 steps and has a Christiansen uniformity coefficient of 94, well above the commercial irrigation minimum of 80.
Simulated annealing to maximize uniformity of watering re-creates our solution from a random initialization.

The UMAP Journal 27 (3) (2006) 226-227. ©Copyright 2006 by COMAP, Inc. All rights reserved. Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice. Abstracting with credit is permitted, but copyrights for components of this work owned by others than COMAP must be honored. To copy otherwise, to republish, to post on servers, or to redistribute to lists requires prior permission from COMAP.

The consistency between solutions from numerical optimization and from analytical techniques suggests that our result is at least a local optimum. Moreover, the solution remains optimal upon varying the sprinkler profile, indicating that the results are not overly sensitive to our initial assumptions.

The text of this paper appears on pp. 295-314.

# Developing Improved Algorithms for Irrigation Systems

Ying Yujie

Jin Qiwei

Zhou Kai

Zhejiang University of Technology

Hangzhou, China

Advisor: Wang Shiming

# Summary

Our goal is an algorithm that minimizes the time to irrigate a relatively small field under given conditions.

We focus on minimization of time, uniformity of irrigation, and feasibility. Our effort is divided into five basic parts:

- We assess the wetted radius based on experimental results for several typical rotating spray sprinklers.
- We determine the number of sprinklers from an empirical formula for sprinkler flow.
- We simulate the water distribution pattern, using a $0.25\mathrm{m}\times 0.25\mathrm{m}$ grid.
- We evaluate the uniformity of water distribution by Christiansen's uniformity coefficient.
- We find an optimal irrigation schedule, including when and where to move the pipes: We devise a single-lateral-pipe scheme and a multiple-lateral-pipes scheme; the latter gives better results. To irrigate more uniformly, we adjust the spacing between sprinklers and the spacing from the edge. Using our grid, we move the sprinklers symmetrically on both sides, node by node, to find the optimal positions for an improved multiple-lateral-pipes scheme.

Simulations show that all three schemes perform acceptably in realistic conditions. The improved multiple-lateral-pipes scheme is superior, with the minimum time and the highest Christiansen's uniformity coefficient (CU). We conclude that four sprinklers are required, the minimum time is $732\mathrm{min}$, and the CU is $90\%$.

We do a sensitivity analysis of the variation of CU and of minimum time with wetted radius, which shows that our model is robust.

The text of this paper appears on pp. 315-328.

# Profit Maximizing Allocation of Wheelchairs in a Multi-Concourse Airport

Christopher Yetter

Neal Gupta

Benjamin Conlee

Harvard University

Cambridge, MA

Advisor: Clifford H. Taubes

# Summary

To minimize Epsilon Airlines' cost of providing wheelchair assistance to its passengers, we examine the trade-off between explicit costs (chairs and personnel) and implicit costs (losses in market share). Our Multi-Concourse Airport Model simulates the interactions between escorts, wheelchairs, and passengers. Our Airline Competition Model takes a game-theoretic perspective in representing the profit-seeking behavior of airline companies. To ground these models in reality, we incorporate extensive demographic data and run a case study on 2005 Southwest Airlines flight data from Midland, TX; Columbus, OH; and St. Louis, MO. We conclude that Epsilon Airlines should employ a "hub and spokes" strategy that uses "wheelchair depots" in each concourse to consolidate the movement of chairs.
Across different airport sizes and strategies, we find that two escorts per concourse and two wheelchairs per escort are optimal.

The text of this paper appears on pp. 329-344.

# Minimization of Cost for Transfer Escorts in an Airport Terminal

Elaine Angelino

Shaun Fitzgibbons

Alexander Glasser

Harvard University

Cambridge, MA

Advisor: Michael Brenner

# Summary

We minimize the cost for Epsilon Airlines to provide a wheelchair escort service for transfers in an airport terminal. We develop probabilistic models for the flow of flight traffic in and out of terminal gates, for the number of passengers on a flight who require service, and for transfer destinations within the terminal.

We develop an economic model to quantify both the short- and long-term costs of operating such a service, including the salaries of escorts, the maintenance and storage of wheelchairs, and the costs incurred when late escorted transfers delay a departing flight.

We develop a simulated annealing (SA) algorithm that uses our economic models to minimize cost by optimizing the number of escorts and their allocation to passengers. Having indexed the space of all possible escort allocations so that it is accessible to our SA, we selectively search that space for a global optimum. Although the space is too large to search exhaustively, our simulations suggest that the SA is effective at approximating the optimum.

Using current airport and airline data, we break our analysis down into short- and long-term costs, simulating escort-service operation under dynamic airport conditions, varying air traffic, airport size, and the fraction of the traveling population that requests wheelchair-aided transfer (simulating a greater future abundance of elderly travelers).

The text of this paper appears on pp. 345-361.
# Application of Min-Cost Flow to Airline Accessibility Services

Dan Gulotta

Daniel Kane

Andrew Spann

Massachusetts Institute of Technology

Cambridge, MA

Advisor: Martin Z. Bazant

# Summary

We formulate the problem as a network flow in which vertices are the locations of escorts and wheelchair passengers. Edges have costs that are functions of time and related to delays in servicing passengers. Escorts flow along the edges as they proceed through the day. The network is dynamically updated as arrivals of wheelchair passengers are announced.

We solve this min-cost flow problem using network flow techniques such as price functions and the repeated use of Dijkstra's algorithm. Our algorithm runs in polynomial time. We prove a theorem stating that to find a no-delay solution (if one exists), we require advance notice of passenger arrivals equal only to the time to travel between the two farthest-apart points in the airport.

We run our algorithm on three simulated airport terminals of different sizes: linear (small), Logan A (medium), and O'Hare 3 (large). In each, our algorithm performs much better than the greedy "send-closest-escort" algorithm and requires fewer escorts to ensure that all passengers are served.

The average customer wait time under our algorithm with 1-hour advance notice is virtually the same as in the full-knowledge optimal solution. Passengers giving only 5-min notice can be served with only minimal delays.

We define two levels of service, Adequate and Good. The number of escorts for each level scales linearly with the number of passengers.

One hour of advance notice is more than enough. Epsilon Airlines can make major improvements by using our algorithm instead of "send-closest-escort"; it should hire a number of escorts somewhere between the numbers for Adequate and Good service.

The text of this paper appears on pp. 363-381.
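The repeated shortest-path computations in the preceding summary can be illustrated with a standard Dijkstra routine. The toy terminal graph and walk times below are invented for the example, not taken from the paper:

```python
import heapq

def dijkstra(graph, start):
    """Shortest walk times from `start` to every reachable gate.
    `graph` maps node -> list of (neighbor, minutes)."""
    dist = {start: 0.0}
    heap = [(0.0, start)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Hypothetical linear terminal: A1 - A2 - A3 - A4, 2-minute walks.
terminal = {
    "A1": [("A2", 2.0)],
    "A2": [("A1", 2.0), ("A3", 2.0)],
    "A3": [("A2", 2.0), ("A4", 2.0)],
    "A4": [("A3", 2.0)],
}
print(dijkstra(terminal, "A1")["A4"])  # 6.0
```

In the paper's setting such distances feed the edge costs of the min-cost flow network; here the routine simply returns walk times from one gate to all others.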
# Cost Minimization of Providing a Wheelchair Escort Service

Matthew J. Pellicione

Michael R. Sasseville

Igor Zhitnitsky

Rensselaer Polytechnic Institute

Troy, NY

Advisor: Peter Roland Kramer

# Summary

Epsilon Airlines provides a wheelchair escort service to passengers who require aid. We use an optimized earliest-due-date-first (EDD) algorithm to minimize the overall cost. Our algorithm is broad enough to accommodate various airport concourses, flight schedules, and flight delays. In addition, it allows wheelchair escorts to perform other tasks beneficial to the airline, such as providing information at a kiosk, to help reduce the overall cost. Moreover, it creates schedules for each employee.

A naive strategy would be to employ the minimum number of escorts needed to guarantee that all passengers reach their gates on time. We show that this strategy is not optimal but can be improved by assigning different numbers of escorts to shifts based on expected traffic. For example, if Delta Airlines were to use the naive strategy at Atlanta International Airport, the cost would be over $5 million/yr, whereas our strategy reduces this cost to under $4 million/yr. A similar reduction in cost could be expected for Epsilon Airlines.

The text of this paper appears on pp. 383-394.

# A Simulation-Driven Approach for a Cost-Efficient Airport Wheelchair Assistance Service

Samuel F. Feng

Tobin G. Isaac

Nan Xiao

Rice University

Houston, TX

Advisor: Mark P. Embree

# Summary

Although roughly $0.6\%$ of the U.S. population is wheelchair-bound, the strain of travel is such that more than twice that proportion rely on wheelchairs in airports [Haseltine Systems Corp. 2006].

Two issues have the greatest impact on the cost and effectiveness of this service: the number of wheelchairs and how they should be deployed.
The proper number of escorts and wheelchairs depends not only on the airport but also on the volume of passengers, which can vary greatly. If escorts determine their own movements within the airport, lack of coordination could leave some areas unattended; however, fluctuation in requests could be so great that a territory-based plan could overwork some escorts and underwork others.

We present an algorithm for scheduling the movement of escorts that is both simple to implement and effective in maximizing the use of available time in each escort's schedule. Then, given the implementation of this algorithm, we simulate the scheduling of requests in a given airport to find the number of wheelchair/escort pairs that minimizes cost.

The text of this paper appears on pp. 395-407.

# The Median is the Message for Efficient Wheelchair Service

Connor Broadus

Andrew Kim

Xun Zhou

Maggie L. Walker Governor's School

Richmond, VA

Advisor: John Barnes

# Summary

Every year, more than 17 million disabled passengers travel on commercial airlines, some with special needs that the airlines must meet. Meeting these needs can be challenging, because such requests are relatively rare and occur unexpectedly. However, if an airline does not set aside adequate provisions to help the disabled, flights may become delayed as handicapped passengers struggle to reach their destinations.

We model the situation and devise a protocol for an airline to use minimal resources to respond efficiently to such requests, balancing the costs of additional personnel against delayed flights.
The model consists of three parts:

- an algorithm for finding the number of escorts that an airport should hire (based on balancing costs);
- establishing that the best ratio of wheelchairs to escorts is one-to-one (even if a wheelchair-bound individual does not request an escort, someone is needed to transport the chair); and
- showing that wheelchair service is most efficient when escorts work out of a central "hub" (a hub shortens the time for an escort to travel to the disabled passenger's gate). The ideal location for a hub is generally the median gate, the gate with an equal number of gates on either side; if there is no gate there, then a nearby lounge would suffice.

To test the efficiency of our model, we simulated wheelchair service in large, medium, and small airports. Our model was successful in reducing delay times. However, the model is not perfect. It assumes that all escorts are perfectly efficient in their occupation and that all passengers are completely cooperative; the human element is a significant complication that our model does not address. A strength of our model is its flexibility: The algorithm for the number of escorts can adjust to changes in the population and in the airline industry. Thus, as the nation ages and the airline industry grows, our model will still be applicable.

[EDITOR'S NOTE: This Meritorious paper won the Ben Fusaro Award for the Wheelchair Problem. The full text of the paper does not appear in this issue of the Journal, but a Judges' Commentary on the paper is on pp. 413-414.]

Pp. 237-254 can be found on the Tools for Teaching 2006 CD-ROM.
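The "median gate" hub in the summary above is a 1-median problem on a line: placing the hub at the weighted median of request locations minimizes total escort travel. A minimal sketch, with hypothetical per-gate request counts:

```python
def median_gate(request_counts):
    """Index of the weighted median gate along a concourse.
    A 1-median on a line minimizes total one-way escort travel."""
    total = sum(request_counts)
    running = 0
    for i, c in enumerate(request_counts):
        running += c
        if 2 * running >= total:  # at least half the weight reached
            return i

# Hypothetical daily requests at gates 0..6 of one concourse.
requests = [1, 0, 2, 5, 1, 1, 2]
print(median_gate(requests))  # 3
```

With uniform request counts this reduces to the middle gate, matching the summary's "equal number of gates on either side" description.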
# Sprinkler Systems for Dummies: Optimizing a Hand-Moved Sprinkler System

Ben Dunham

Steffan Francischetti

Kyle Nixon

Carroll College

Helena, MT

Advisor: Mark Parker

# Summary

"Hand move" irrigation, a cheap but labor-intensive system used on small farms, consists of a movable pipe with sprinklers on top that can be attached to a stationary main. Our goal is a schedule that meets specific watering requirements and minimizes labor, given flow parameters and pipe specifications.

We apply Bernoulli's energy-conservation equation to the flow characteristics to determine sprinkler discharge speeds, ranges, and flow rates. Using symmetry and a model of sprinkler coverage, we find that three sprinklers, operating 57 min at 9 consecutive cycling stations during four 11-hour workdays, with the sprinklers $9\mathrm{m}$ apart on the $20\mathrm{m}$ mobile pipe and six mainline stations spaced $15\mathrm{m}$ apart, will water more than $99\%$ of the field. Our computer model uses a genetic algorithm to improve the coverage to $100\%$ by changing the sprinkler spacing to $10\mathrm{m}$ and adjusting the mainline station spacing accordingly.

# Introduction

Our challenge is to design a movable sprinkler system to meet a set of watering criteria on an $80\mathrm{m}\times 30\mathrm{m}$ field with specified flow characteristics of the main pipe. We must

- decide how a stationary main water pipe should be positioned on the field;
- determine the number of sprinklers, the sprinkler spacing on the movable pipe, and the spacing of the attachments for the movable pipe; and
- schedule moving the pipe.

We make simplifications and formulate models. We determine the stationary pipe position and, ignoring friction, formulate first a simple one-sprinkler model, using the principles of conservation of flow and the Bernoulli energy equation. We use the results in the jet equation to find the trajectory and range of a single sprinkler.
We apply the same principles to multisprinkler systems.

The area watered can be modeled as either a uniform disk or a ring. We minimize the overlapping areas in the disk model to eliminate underwatered areas and maximize uniform water distribution, for different numbers of sprinklers. This optimization determines the sprinkler spacing, the movable-pipe spacing, and the number of moves needed to water the entire field.

Using calculated values for flow rates and depth accumulation over time, we determine a schedule that minimizes time and effort.

# Assumptions

- The field is flat, so there is no change in energy (or head) of the system due to changes in elevation.
- One 80-m fixed pipe runs the length of the field, to which a lateral arm can be attached perpendicularly.
- There is only one lateral pipe, $20\mathrm{m}$ long.
- The angle at which the water leaves the sprinkler is $30^{\circ}$.
- Sprinklers can be modeled as $360^{\circ}$ rotary jets.
- The entire flow does not need to leave the sprinklers; a return system is implied.
- There is no wind; the sprinklers always water evenly over a circular area.
- Rain is not modeled; the system is turned off during rain.
- The sprinkler system is inactive for at least 8 h every night.

# Models

# System Requirements and Statistics

The aluminum pipe system consists of a stationary main pipe with a diameter of $10\mathrm{cm}$. Perpendicular to this pipe, a single 20-m-long (10-cm-diameter) lateral or movable pipe can be attached. On this pipe are a number of sprinklers (0.6 cm diameter) that can either sit directly on the movable pipe or connect to the pipe with 0.6-cm-diameter vertical pipes. Water flow at the source is 150 L/min at a pressure of 420 kPa. Every area should receive no more than 0.75 cm depth of water in an hour and no less than 2 cm over four days.

# Field Layout

The field is $80\mathrm{m}$ by $30\mathrm{m}$.
The lateral or movable pipe is $20\mathrm{m}$ long when assembled, with a number of rotating sprinklers attached to it. We interpret the equipment specification to mean that we can use only one 20-m-long pipe as the lateral movable pipe (multiple 20-m sections cannot be attached to one another). The lateral pipe can be connected perpendicularly to a fixed main pipe with the same diameter $(10\mathrm{cm})$. We find it efficient and symmetric to run the main pipe the length of the $80\mathrm{m}$ field, inset $5\mathrm{m}$ onto the field. This ensures that if sprinklers are set on both ends of the lateral pipe, the field can still be watered symmetrically. We can optimize the number of connection points and their spacing on the main pipe based on the number of sprinklers on the lateral pipe. We assume that valves can shut off flow to the excess length of the main pipe in order to direct the entire flow into the lateral pipe. Our model also assumes that the entire flow does not exit the sprinklers; instead, only the pressure of the system forces water from the sprinklers, and the rest of the water is returned to the reservoir by either a flexible return pipe or an irrigation ditch.

# One-Sprinkler System

To understand the outflow of the sprinklers, we simplify the model to include only one sprinkler and study the exit speed of the water leaving the rotating sprinkler head. This analysis can be done in terms of conservation of flow or in terms of conservation of energy.

# Conservation of Flow

We assume that there is no speed loss due to friction, the bend in the pipe, or the condition and angle of the sprinkler head.

Flow $(Q = vA = 150\ \mathrm{L/min} = 0.0025\ \mathrm{m^3/s})$ through the pipe must be conserved, so the flow through the pipes satisfies

$$
v_2 A_2 = v_1 A_1,
$$

where $A_{i}$ is the cross-sectional area of pipe $i$ and $v_{i}$ is the speed in it [Walski et al. 2004, 3].
Taking the main pipe as pipe 1 and measuring in meters, we have + +$$ +v _ {2} = \frac {0 . 3 1 8 \mathrm {m / s} \times \pi (0 . 0 5) ^ {2}}{\pi (0 . 0 0 3) ^ {2}} = 8 8. 3 \mathrm {m / s}, +$$ + +which—at almost 200 mph—is faster than a sprinkler could probably handle. + +# Conservation of Energy + +To take energy into consideration, we must make a bold assumption: Not all of the water running through the main pipe ends up on the field. To conserve energy, we must use the Bernoulli equation and neglect friction head loss at both positions 1 (the main pipe) and 2 (the sprinkler head) [Walski et al. 2004, 6]: + +$$ +\frac {P _ {1}}{\gamma} + z _ {1} + \frac {v _ {1} ^ {2}}{2 g} = \frac {P _ {2}}{\gamma} + z _ {2} + \frac {v _ {2} ^ {2}}{2 g}, +$$ + +where + +$$ +P = \mathrm {p r e s s u r e} \left(\mathrm {N / m ^ {2}}\right), +$$ + +$$ +\gamma = \mathrm {s p e c i f i c w e i g h t o f f l u i d} \left(\mathrm {N} / \mathrm {m} ^ {3}\right), +$$ + +$$ +z = \text {e l e v a t i o n a b o v e a r e f e r e n c e p o i n t} (\mathrm {m}), +$$ + +$$ +v = \text {f l u i d s p e e d (m / s) , a n d} +$$ + +$$ +g = \mathrm {g r a v i t a t i o n a l} \mathrm {a c c e l e r a t i o n (m / s ^ {2})}. +$$ + +For our parameter values, we have + +$$ +\frac {4 2 0 \mathrm {k N / m ^ {2}}}{9 . 8 1 \mathrm {k N / m ^ {3}}} + 0 + \frac {(0 . 3 1 8 \mathrm {m / s ^ {2}})}{2 (9 . 8 1 \mathrm {m / s ^ {2}})} = 0 + z _ {2} \mathrm {m} + \frac {v _ {2} ^ {2} (\mathrm {m / s}) ^ {2}}{2 (9 . 8 1) \mathrm {m / s ^ {2}}}. +$$ + +The pressure at point 2 (the sprinkler) is zero because at this point the water is being expelled through the nozzle and is under no pressure from the pipes [Finnemore 2002, 511]. So the exit speed of water out of the sprinkler, as a function of the height $z_{2}$ of the sprinkler off the ground is, after some algebra: + +$$ +v _ {2} (z _ {2}) \approx 4. 4 3 \sqrt {4 2 . 8 3 - z _ {2}}. 
$$

The height of sprinklers is usually between 6 in and 4 ft, depending on the crop (assuming no braces to support the sprinklers) [National Resources Conservation Service 1997]; therefore, we test the sensitivity of the speed to height (for our purposes, using a range of $0\mathrm{m}$ to $1\mathrm{m}$):

$$
v _ {2} (0) = 29.0\,\mathrm{m/s}, \quad v _ {2} (1) = 28.7\,\mathrm{m/s}.
$$

So the speed out of the sprinkler is not sensitive to height. For the remainder of the study, we use $z_{2} = 0.5$ m.

# Trajectory

First, we make some assumptions:

- The speed found using energy conservation is the same initial speed that would occur through the nozzle at any discharge angle.
- The maximum speed is not affected by the means of dispersing the water, even if the spray is more similar to a fan than a jet.
- The nozzle is frictionless.

We use the single-sprinkler speed to find the range of the water discharged through the nozzle, via the jet equation [Finnemore 2002, 169]:

$$
z = \frac {v _ {z 0}}{v _ {x 0}} x - \frac {g}{2 v _ {x 0} ^ {2}} x ^ {2}.
$$

For each additional sprinkler, the speed is cut down proportionally, based on the conservation-of-flow equation $v_{2} = v_{1}A_{1} / A_{2}$: the effective outflow area $A_{2}$ increases in proportion to the number of sprinklers $n$, so the speed decreases to $v = v_{0} / n = 28.8 / n \, \mathrm{m/s}$.
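The two speed calculations above (continuity, then Bernoulli) are easy to check numerically. The sketch below is an illustration, not the authors' code, using the stated parameters (420-kPa supply, 150 L/min, 10-cm main pipe, 0.6-cm nozzle):

```python
from math import pi, sqrt

Q = 0.0025                      # 150 L/min in m^3/s
r_main, r_nozzle = 0.05, 0.003  # radii of the 10-cm main pipe and 0.6-cm nozzle (m)
P, gamma, g = 420e3, 9.81e3, 9.81  # pressure (N/m^2), specific weight (N/m^3), gravity

v1 = Q / (pi * r_main**2)               # ~0.318 m/s in the main pipe
v2_flow = v1 * r_main**2 / r_nozzle**2  # naive continuity estimate, ~88 m/s

def v2_energy(z2):
    """Bernoulli exit speed with P2 = 0, for nozzle height z2 (m)."""
    head = P / gamma + v1**2 / (2 * g)  # ~42.8 m of head
    return sqrt(2 * g * (head - z2))

print(round(v1, 3), round(v2_flow, 1))
print(round(v2_energy(0), 1), round(v2_energy(1), 1))  # both ~29 m/s
```

Slight differences in the last digit (e.g., 28.6 vs. the paper's 28.7 at $z_2 = 1$ m) come from the paper's use of the rounded coefficient 4.43.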
When rearranged, the jet equation gives the range for $n$ sprinklers:

$$
x = \frac {v \cos \theta \left(v \sin \theta + \sqrt {v ^ {2} \sin ^ {2} \theta + 2 g n ^ {2} z}\right)}{g n ^ {2}}, \tag {1}
$$

where

$x$ is the outer radius of the sprinkler coverage (range);

$\theta$ is the angle to the horizontal at which the sprinkler discharges;

$v$ is the speed of the water at the sprinkler head, found using the conservation-of-energy equation;

$z$ is the height of the jet above the ground;

$g$ is the acceleration due to gravity; and

$n$ is the number of sprinklers on the lateral pipe.

Since most rotary crop sprinklers discharge at $18^{\circ}$ to $28^{\circ}$ above the horizontal, and some at up to $35^{\circ}$, we assume that $\theta = 30^{\circ}$ [Fipps and Dainello 2001].

# Multiple-Sprinkler System

Given the design, a minimum $5\mathrm{m}$ radius is needed to reach the edges of the field. Based on our calculations from (1), we find that with four sprinkler heads the radius is $5.32\mathrm{m}$. Therefore, no more than four sprinklers can be put on the lateral pipe and still have sufficient range to reach the edge of the field.

We need to know the rate at which the water flows out of the sprinklers to determine the rate of watering. Since $Q = vA$, the flow $Q_{n}$ from each sprinkler when there are $n$ sprinklers is

$$
Q _ {n} = v _ {n} \pi (0.003) ^ {2}.
$$

![](images/fe3f281665d9e0341dd7f4c2c39c1fafa03a0cb6f4f49f7ba62b7423d02410af.jpg)
Figure 1. The two sprinkler-coverage model extremes.

Two options represent the extremes of sprinkler coverage (Figure 1); we assume for both cases that the sprinkler produces a well-distributed fan of water, meaning that the entire designated area for each sprinkler is watered uniformly.

- The sprinkler head discharges uniformly over a disk, with outer radius $x$ m and inner radius $0$ m.
+- The sprinkler discharges uniformly over a ring, with outer radius $x$ m and inner radius $x / 2$ m. We can justify this ratio because, realistically, a sprinkler discharging onto an area narrower than this would require too many additional sprinklers to hydrate the unwatered area around the center. + +We then find the rate (depth over time) $D$ (cm/h) at which the area covered by the sprinkler receives water: + +$$ +D = \frac {Q _ {n}}{A _ {c}}, +$$ + +where $A_{c}$ is the area covered by the sprinkler, with + +$$ +A _ {c} = \pi (x _ {o} ^ {2} - x _ {i} ^ {2}), +$$ + +where $x_{o}$ , $x_{i}$ are the outer and inner radii of the ring ( $x_{i} = 0$ m for disk). + +We calculate the depth over time $D$ distributed over the discharge area $A_{c}$ (Table 1) for both disk and ring models. + +As sprinklers are added to the lateral arm, the area covered by each sprinkler decreases; consequently, $D$ increases greatly. Recall our constraints: + +- No part of the field should receive more than $0.75 \, \text{cm/h}$ . + +Table 1. Disk Model: Effective radius, flow/sprinkler, and depth/time for $n$ sprinklers. + +
| Sprinklers (on lateral pipe) | Effective radius (m) | $Q_n$ (m³/h/sprinkler) | Depth over area: disk (cm/h) | Depth over area: ring (cm/h) |
|---|---|---|---|---|
| 1 | 74 | 2.9 | 0.02 | 0.02 |
| 2 | 19 | 1.5 | 0.13 | 0.17 |
| 3 | 9 | 1.0 | 0.39 | 0.52 |
| 4 | 5 | 0.7 | 0.82 | 1.10 |
+ +- Each part of the field should receive at least $2\mathrm{cm}$ every 4 days. + +One sprinkler is not time-efficient, since the lateral pipe would need to sit about four days to fulfill the minimum. Two sprinklers is also not efficient, since they cover an area far exceeding the field boundaries. Either one or two sprinklers would likely result in far too much pressure for a sprinkler to handle, for both the disk model and the ring model. We study only three- and four-sprinkler systems beyond this point. + +Four sprinklers causes a depth rate in excess of $0.75\mathrm{cm / h}$ . However, we can interpret this to mean the rate can exceed $0.75\mathrm{cm / h}$ as long as the pipe does not sit long enough for the cumulative depth to exceed $0.75\mathrm{cm}$ in one hour. In other words, the water can be shut off before the full hour is up. + +# Optimal Overlap Model (for Disks) + +We define a station to be one of the lateral positions, a cycle to be each station being watered once, and a watering to be the process of watering a single station. + +We try to optimize the system by arranging overlap of sprinklers so that no area gets hit more than twice in any cycle; doing this maximizes the time that the system can stay in one spot. + +We arrange the sprinklers to cover the edges of the field as nearly as possible—though possibly missing some small triangles along the edges of the field as we try to optimize the speed of watering. We assume enough soil permeability so that water seeps from surrounding watered areas. + +With four sprinklers, the outer two need to be at the ends of the pipe so that they can cover the edge as much as possible and unwatered area is minimized. With three sprinklers, the two outside sprinklers are $1.1\mathrm{m}$ from the end point of the lateral line and the third is in the center. 
It is ideal if the radius is large enough to cover the edge of the field and also hit the next sprinkler over on the mobile line; then the sprinklers can sprinkle a little less water over the edge, minimizing waste. + +To determine the move distance from station to station, we use the Pythagorean theorem: + +$$ +R ^ {2} = \left(\frac {1}{2} L _ {s}\right) ^ {2} + \left(\frac {1}{2} S _ {s}\right) ^ {2}, \qquad L _ {s} = 2 \sqrt {R ^ {2} - \left(\frac {1}{2} S _ {s}\right) ^ {2}}, +$$ + +where $L_{s}$ is the lateral spacing of the movable pipe, $S_{s}$ is the spacing between the sprinklers, and $R$ is the radius of the spray for each sprinkler (Figure 2). + +![](images/5eceafcfffa4bd00775e36656f2ff5e8d2626347807af37e7f7da9aee71bb8d9.jpg) +Figure 2. Sprinkler overlap. + +Lateral spacing is kept constant for as long as possible through the middle of the field. At the edge of the field, the two connections to the stationary line should be equidistant from the edge while still maintaining enough coverage. The lateral pipe spacing, sprinkler spacing, and number of moves required to cover the field can be seen in Table 2. + +Table 2. Pipe spacing, sprinkler spacing, and number of moves for 3 and 4 sprinklers. + +
| Sprinklers | Spray radius (m) | Sprinkler spacing (m) | Pipe spacing (m) | Moves to cover field |
|---|---|---|---|---|
| 3 | 8.9 | 8.9 | 15.4 | 6 |
| 4 | 5.3 | 6.7 | 8.3 | 10 |
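The spray radii in Tables 1 and 2 and the station spacings follow directly from equation (1) and the Pythagorean relation above. A short numerical sketch (illustrative, not the authors' code), using the paper's values $v_0 = 28.8$ m/s, $\theta = 30^{\circ}$, and $z = 0.5$ m:

```python
from math import sqrt, sin, cos, radians

g, z, theta = 9.81, 0.5, radians(30)  # gravity, nozzle height (m), discharge angle
v0 = 28.8                             # single-sprinkler exit speed (m/s)

def spray_radius(n):
    """Range from equation (1): exit speed v0/n, launched from height z."""
    s, c = v0 * sin(theta), v0 * cos(theta)
    return c * (s + sqrt(s**2 + 2 * g * n**2 * z)) / (g * n**2)

def lateral_spacing(R, Ss):
    """Station-to-station move distance from R^2 = (Ls/2)^2 + (Ss/2)^2."""
    return 2 * sqrt(R**2 - (Ss / 2)**2)

for n in (1, 2, 3, 4):
    print(n, round(spray_radius(n), 1))   # ~74, 19, 8.9, 5.3 m

# Station spacings for the 3- and 4-sprinkler layouts (cf. Table 2)
print(round(lateral_spacing(spray_radius(3), 8.9), 1))
print(round(lateral_spacing(spray_radius(4), 6.7), 1))
```

The computed spacings are about 15.5 m and 8.3 m; the small difference from Table 2's 15.4 m comes from rounding the radius to 8.9 m before the spacing calculation.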
To calculate the area inside the field that is double-watered, we use $L_{s}$ to find the length of the side opposite the apex of an isosceles triangle whose two equal sides are the radius of the disk. From the law of cosines, $L_{s}^{2} = 2R^{2} - 2R^{2}\cos \theta$, we can easily find $\theta$; for laterally overlapping wedges, we substitute $S_{s}$ for $L_{s}$. The area of the wedge is found using $A = \theta \pi R^2 /360^\circ$, and double the area of the triangle is subtracted from it.

The resulting double-watered, single-watered, missed, and outside-the-field areas are in Table 3. The area double-watered by three sprinklers during a cycle is greater than that for four sprinklers, but the area missed is much smaller.

A strength of these models is that there is no specific order in which the field must be watered, since no area is hit more than twice in a cycle. Moving the sprinkler progressively from station to station across the field both minimizes move distance and reduces move time.

The time in each position is calculated such that the lateral pipe is relocated before the overlapping sections receive more than $0.75\mathrm{~cm}$ in an hour. The combined flow rates of the sprinklers for both the three- and the four-sprinkler

Table 3. Sprinkling imperfections (areas in $\mathrm{m}^2$).
| Sprinklers | Double-watered | Single-watered | Missed | Watered outside field |
|---|---|---|---|---|
| 3 | 1131 | 1250 | 19 | 717 |
| 4 | 1057 | 1281 | 59 | 140 |
arrangements exceed the allowed application rate, especially in overlapping areas. The water must be shut off in time and left to seep in for the rest of the hour.

# Sprinkling Time and Schedule

We calculate the time to accumulate $0.75\mathrm{cm}$ in the areas hit by two sprinklers (the overlapping sprinkler areas). The $D$-values for the overlap areas are twice those in Table 1. Thus, the maximum time $t_n$ that the lateral pipe with $n$ sprinklers can water an area is

$$
t _ {3} = \frac {0.75}{2 D _ {3}} = 0.960\,\mathrm{h} = 58 \min , \quad t _ {4} = \frac {0.75}{2 D _ {4}} = 0.455\,\mathrm{h} = 27 \min .
$$

As seen in the Appendix, the number of lateral pipe stations needed to cover the entire field is 6 for three sprinklers and 10 for four sprinklers. [EDITOR'S NOTE: We omit the Appendix.] This result is important in determining how many total moves are required so that the entire area gets at least $2.0\mathrm{cm}$ in four days.

To find the number of cycles needed over four days, we consider the areas that are not overlapped. These areas receive $(0.75\mathrm{cm} / \mathrm{cycle}) / 2 = 0.375\mathrm{cm} / \mathrm{cycle}$; so receiving $2\mathrm{cm}$ over four days requires $2\mathrm{cm} / (0.375\mathrm{cm} / \mathrm{cycle}) = 5.3$ cycles. Therefore, six complete cycles must occur over four days to meet the minimum water requirement, for either three or four sprinklers.

For simplicity, we assume that the time to move the lateral pipe from one station to the next is uniformly $15\mathrm{min}$. We display several statistics about our models in Table 4.

Table 4. Sprinkler statistics for scheduling.
| Sprinklers | Watering time (min) | Stations | Cycles in 4 d | Total waterings per 4 d | Waterings per d |
|---|---|---|---|---|---|
| 3 | 58 | 6 | 6 | 36 | 9 |
| 4 | 27 | 10 | 6 | 60 | 15 |
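The scheduling quantities in Table 4 reduce to a few lines of arithmetic; a sketch (illustrative, with the depth rates $D_n$ taken as the disk values from Table 1):

```python
from math import ceil

# Depth rates D_n over a singly covered disk area, from Table 1 (cm/h)
D = {3: 0.39, 4: 0.82}
stations = {3: 6, 4: 10}

results = {}
for n in (3, 4):
    t_min = round(60 * 0.75 / (2 * D[n]))  # max minutes per station (overlap rate is 2*D_n)
    cycles = ceil(2.0 / (0.75 / 2))        # once-covered areas get 0.375 cm per cycle
    total = stations[n] * cycles           # waterings over the 4-day period
    results[n] = (t_min, cycles, total, total // 4)
print(results)  # {3: (58, 6, 36, 9), 4: (27, 6, 60, 15)}
```

The tuples (watering time in minutes, cycles, total waterings, waterings per day) match Table 4 for both systems.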
Finally, we can make watering schedules for three and for four sprinklers. [EDITOR'S NOTE: We omit the schedules.]

Each model requires six cycles over four days. Therefore, a cycle and a half should be completed each day to keep the workload balanced. For either number of sprinklers, the daily watering time is roughly the same (11 h), as is the total work time over four days (43 h). However, considerably more time is spent moving the pipes with four sprinklers (15 stations/d vs. 9). Since one goal is to minimize time moving equipment, we recommend the three-sprinkler system: watering takes roughly the same amount of time with much less effort, and it leaves only $19\mathrm{m}^2$ unwatered instead of $59\mathrm{m}^2$.

# Ring Method of Area Estimation

Some sprinklers water a ring-like area; we assume uniform water distribution over the area and that the outer spray radius is twice the inner radius. Considering ring areas complicates the model significantly:

- It is impossible to prevent the same area from being hit by the sprinklers more than twice in a cycle.
- Covering the entire field requires a significant amount of area to be watered three or four times as often as the areas watered once in a cycle (Figure 3).

![](images/c4b2cd4ea67c2eec34022e270afcbb478d96c8be84e823fc0897fa7fcb77453f.jpg)
Figure 3. The ring model shows a great imbalance in the area watered. KEY: $\bullet$ sprinkler; shading distinguishes areas sprayed 1×, 2×, 3×, and 4×.

- The lateral pipe must be moved more frequently and must water for a much shorter time to prevent the accumulation in heavily watered areas from exceeding $0.75 \mathrm{~cm} / \mathrm{h}$.
- Many more cycles must occur to ensure that lightly watered areas receive a cumulative depth of at least $2.0\mathrm{cm}$ every four days.
- Preventing overwatering while minimizing the underwatered area requires a staggered station progression, thereby increasing the distance that the lateral line must be moved.
- The watering of the field can no longer be considered uniform.

The ring model assumes equally spaced sprinklers arranged such that no area in the center of any radius is left unwatered. We do not explore sprinkler spacing but merely note that the spacing depends on the outer spray radius.

Once the sprinkler spacing is determined, the lateral pipe spacing can be determined as in the disk model (optimal overlap model), as well as the number of stations required for a given number of sprinklers.

For the station progression in a cycle, there are several options:

- Minimize the total distance that the lateral pipe must be moved in one cycle. This option

  - progresses similarly to the disk model (simply move to the next station, with down-time at the endpoints of the main watering line);
  - likely requires some wait time between waterings to meet the maximum watering requirement (no more than $0.75\mathrm{cm/h}$ on any area), since there is significant overlap of watered areas;
  - is not time-efficient.

- Minimize wait time between waterings. This option

  - requires the pipe to be moved to another station immediately after a watering is complete;
  - requires, to avoid overwatering certain areas, that the lateral pipe be moved beyond an adjacent station upon completing a watering;
  - requires a complex station-progression algorithm to ensure that the field is watered quickly and efficiently;
  - has the weakness that the total distance that the lateral pipe is moved between waterings (as well as the total distance moved during a complete cycle) is much greater than in the first option. In theory, the time required to move this extra distance could exceed the time that the farmer would need to wait using the first option.
+ +- Compromise by allowing some wait time between stations and allowing the lateral pipe to be moved beyond the adjacent station during the progression. + +Since one objective is to minimize the time and effort moving the pipes, we suggest the first option. + +# Head Loss + +A major weakness of our models is that they do not account for energy losses due to friction between the pipe and the water moving through it, also known as head loss. We apply the Hazen-Williams equation solved for meters of head loss per meter of pipe or friction slope [Walski et al. 2004, 17]: + +$$ +S _ {f} = \frac {1 0 . 7}{D ^ {4 . 8 7}} \left(\frac {Q}{C}\right) ^ {1. 8 5 2}, \tag {2} +$$ + +where + +$S_{f}$ is friction slope $(\mathrm{m} / \mathrm{m})$ + +$D$ is the diameter of the pipe (m), + +$Q$ is the flow rate $(\mathrm{m}^3 /\mathrm{s})$ , and + +$C$ is the Hazen-Williams friction coefficient (for aluminum, $C = 130$ ). + +With our given values of $D = 0.1 \, \text{m}$ and $Q = 0.0025 \, \text{m}^3 / \text{s}$ , we have $S_f = 0.00146 \, \text{m} / \text{m}$ , or a total head loss over the $100 \, \text{m}$ of pipe of $100S_f = 0.15 \, \text{m}$ , which is insignificant. + +For the farthest sprinkler position, there will be multiple tees with valves where the lateral line ties into the main line, and each has an associated loss of 0.3-0.4 meters of head [Walski et al. 2004]. These losses can add up and reduce the speed of the water exiting the sprinkler when the lateral arm is far from the water source. The problem is remedied by decreasing the distance between stations as the attachment points progress farther down the mainline. + +When we use (2) to check the head loss on the small sprinkler pipes, assuming that there is only one outlet for all of the flow, we obtain an astronomical friction slope, $S_{f} = 1305 \mathrm{~m / m}$ . 
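Equation (2) is easy to evaluate numerically; the sketch below reproduces the friction slopes quoted in this section (an illustration, using the 10-cm main pipe and the 0.6-cm nozzle diameter from the problem statement):

```python
from math import pi

def friction_slope(Q, D, C=130):
    """Hazen-Williams friction slope (m/m), equation (2)."""
    return 10.7 / D**4.87 * (Q / C)**1.852

# Main line: full 150 L/min (0.0025 m^3/s) through the 10-cm pipe
print(round(friction_slope(0.0025, 0.10), 5))  # 0.00146 m/m -> 0.15 m over 100 m

# A single 0.6-cm nozzle carrying the entire flow: an astronomical loss
print(round(friction_slope(0.0025, 0.006)))    # ~1305 m/m

# Per-nozzle flow with n sprinklers (exit speed v0/n through each nozzle)
for n, v in ((3, 9.6), (4, 7.2)):
    Qn = v * pi * 0.003**2                     # flow per nozzle (m^3/s)
    print(n, round(friction_slope(Qn, 0.006), 1))  # ~21.4 and ~12.5 m/m
```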
This calculation is another justification for our assumption that not all of the flow exits the sprinklers and a return line of some sort is necessary. When adjusted for the proper flow based on the number of sprinklers, we obtain a friction slope of $12.5 \mathrm{~m / m}$ for four sprinklers and $21.4 \mathrm{~m / m}$ for three sprinklers. When we use the Bernoulli equation with losses due to friction accounted for, the appropriate losses are added to each side of the equation and once again solved for $v_{2}$ . The new speeds are: + +Three sprinklers: $v = 8.9 \, \text{m/s}$ instead of $9.6 \, \text{m/s}$ (for no friction) + +Four sprinklers: $v = 6.7 \, \text{m/s}$ instead of $7.2 \, \text{m/s}$ (for no friction) + +Three sprinklers can cover the field, but with four sprinklers the speed is too low to cover the edges reliably. + +# Computer Modeling and Solution Approach + +We created a computer program to model the effectiveness of solutions and to find its own near-ideal solution via a genetic algorithm. + +# The Computer Model + +We divide the field into a $200 \times 75$ cell grid of cells $0.4\mathrm{m}$ on a side and time into 5-min discrete intervals; each cell records how much water it receives during each time interval over the course of four days. Inputs are: + +- The direction that the mainline runs (cross-wise or length-wise). (We later found that length-wise leads to insurmountable pressure-loss.) +- The inset of the mainline from the edge of the field. + +- The inset of the first sprinkler from the beginning of the lateral line. +- The inset of the last sprinkler from the end of the lateral line. +- The total number of sprinklers on the lateral line (spaced evenly between first and last). +- A list of steps in the watering schedule. Each step consists of: + +- The distance in cells down the mainline to where the lateral is attached. +- The number of time intervals that the lateral operates at that location. 
In addition, the model accounts for several other variables that are hard-coded into the program but could be changed:

- The time to move the lateral line (15 min).
- The "up-time" per day (16 h), that is, ensuring that the system stops at night.

The program assigns a radius range to the sprinklers and a rate of water accumulation for every cell in that range, based on earlier calculations on sprinkler pressure and speed. After the entire schedule has been simulated, each cell is queried and assigned one of three conditions based on water accumulation:

- Overwatered: If during any one-hour period the cell received more than $0.75 \, \text{cm}$ of water, it is considered overwatered, even if the total water over four days was less than $2 \, \text{cm}$.
- Underwatered: If the cell is not overwatered, and it received less than $2\mathrm{cm}$ over the course of four days, it is considered underwatered.
- Ideally-watered: If the cell is neither overwatered nor underwatered, it is considered within the ideal watering range.

# The Genetic Algorithm

The genetic algorithm attempts to find optimal solutions through evolution. It creates 100 random sets of input for the testing model and tests each set, or "genome"; the 10 best-ranked genomes (see the Appendix for the ranking system) are selected as "parents" for another collection of 100 input sets (another "generation") [EDITOR'S NOTE: We omit the Appendix]. Ninety of the new genomes are created from two parents (10 parents allow for 45 unique combinations—two of each are used); the input values are the averages of their parents' values, with a small percentage of random variation added in. The remaining 10 genomes are created directly from one parent, with a slight amount of variation added. The computer proceeds through many generations, constantly improving the solution sets—in theory.
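The generation scheme just described (100 genomes; the top 10 kept as parents; 90 children averaged from the 45 parent pairs, two per pair; 10 mutated single-parent copies) can be sketched as follows. This is a minimal illustration with a stand-in fitness function `score`; the authors' program instead ranks genomes by simulated watering outcomes:

```python
import random
from itertools import combinations

random.seed(1)
N_VARS = 5  # stand-in for the model's input variables

def score(genome):
    # Stand-in fitness: best when every variable is near 0.5.
    return -sum((x - 0.5)**2 for x in genome)

def next_generation(population):
    parents = sorted(population, key=score, reverse=True)[:10]
    children = []
    for a, b in combinations(parents, 2):  # 45 unique pairs, each used twice -> 90
        for _ in range(2):
            children.append([(x + y) / 2 + random.gauss(0, 0.02)
                             for x, y in zip(a, b)])
    for p in parents:                      # 10 mutated single-parent copies
        children.append([x + random.gauss(0, 0.02) for x in p])
    return children

population = [[random.random() for _ in range(N_VARS)] for _ in range(100)]
for _ in range(10):
    population = next_generation(population)
print(round(score(max(population, key=score)), 3))  # close to 0, the optimum
```

Averaging parents plus small Gaussian noise converges quickly on smooth fitness landscapes; the stagnation described next arises because the real model's landscape is far less forgiving.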
In practice, the large number of input variables and the relatively small number of effective solutions mean that it is extremely easy to worsen a solution but very difficult to improve it. After a few generations of progress, the model stagnates (Figure 4).

![](images/b87d8c39b94cd32f49df11e0e3ea4b829296acb96887b9b4155619bd48ddc3e6.jpg)
Figure 4. After three to four generations, progress ceases. (Number of Moves hugs the horizontal axis.)

With a much larger population size (tens of thousands), the chances of finding one or two better solutions in every generation would increase; with our computing power, we are limited to a smaller population and thus cannot use the full range of input variables.

The evolution simulation always favors certain values for particular variables: Choosing between three and four sprinkler heads, the simulation always picks three as optimal in the very first generation. In addition, the inset value of the mainline always tends towards centering the lateral in the field, and the inset values of the first and last sprinklers always approach zero within five to ten generations, suggesting that these sprinklers are best placed at the very beginning and end of the lateral line so as to leave less area unwatered. The simulation does not keep track of water that lands off of the field. By decreasing the areas that are watered twice and dumping a little extra water over the edge, unwatered area decreases to zero if lateral spacing is optimized.

# The Station-Based Model

Using noncomputer methods, we had determined that to water the entire field adequately with three sprinklers, six stations had to be visited six times each, with a watering time of less than $57\mathrm{min}$ for each visit (see Table 5). We tested the boundaries of this model by running it with a $60\mathrm{-min}$ and with a $55\mathrm{-min}$ operating time at each schedule step; the former should cause overwatering where sprinkler coverage overlaps.
Figure 5 shows the results.

![](images/4430aaf30d963bd23431efe23021e4ed38eabcdc7213491b6f76c5a62fa6dceb.jpg)
Figure 5. Station-based schedule with $60\mathrm{min}$ per station (left) and with $55\mathrm{min}$ per station (right). KEY: Red (leftmost region): cells underwatered or not yet watered. Blue (middle): cells being watered. Black (ovoids; none for $55\mathrm{min}$): cells overwatered. Green (right): cells in the ideal range.

![](images/718c9d4c18bcb219db15ebae774957944fcabde320c12c338274cafa465ead13.jpg)

For $60\mathrm{min}$, our prediction of overwatering in overlaps proves true; for $55\mathrm{min}$, the lack of overwatering shows that our model is accurate.

The next question is: Is our sequence of stations optimal? We start at one end of the field and move down by one station every 55 min until we reach the other end, where we stop for one cycle of down-time and then move back up the field. This continues until we have hit every station six times, requiring moving the line a total of $432\mathrm{m}$. Cycling back and forth seems an obvious and efficient schedule, but is it possible to find something faster?

# The Station-Based Genetic Algorithm

We restructured the genetic algorithm to find the optimal station schedule.

The simulation allows genetic change only in the station number to be visited at each schedule step; each visit must be $55\mathrm{min}$ long, and each station has to be visited 6 times, but the stations can be visited in any order. All other variables are fixed at the values that we had determined to be the most efficient. The scoring system is adjusted so that the only criterion for selection is moving the lateral line a smaller distance than the other genomes.

The genetic algorithm stabilizes at a single solution within five generations, as shown in Figure 6.
The mean distance required to move the lateral line over a four-day cycle drops from more than $900\mathrm{m}$ to a surprising $72\mathrm{m}$ , a major improvement over our "optimal" $432\mathrm{m}$ . + +The computer took an approach so simple as to be overlooked: Rather than moving on after turning off the water, the irrigation system is simply turned off for $15\mathrm{min}$ before resuming at the same location. The lateral line starts at one end of the field and operates for six on-off cycles before moving to the next station. + +Of course, this may not be an option for fields that cannot quickly absorb water, but it satisfies the conditions of the problem statement. Over any given hour, the total depth of water applied within the overlapping areas is less than $0.75\mathrm{cm}$ because of watering for only $55\mathrm{min}$ plus the 15-min move time. + +![](images/99008442064569fadeb39bbf766112eb40878b5c4434b10df8d302bb535c7d8f.jpg) +Figure 6. Mean total move distance. + +# Results and Conclusions + +We found spray ranges based on the number of sprinklers in the system. This model assumes no friction loss, and we found speed not to be sensitive to height. We investigated two models of coverage: disk and ring areas. Using the areas and flow rates from sprinklers to approximate depth accumulated in the areas, we eliminated one and two sprinklers based on the overshooting range, poor time efficiency, excessive waste, and large speeds. Calculating the range for five sprinklers showed that the spray would not reach the edge of the field. Therefore, we limited our models to three or four sprinklers on the lateral pipe. + +Using geometric methods, we found an optimized spacing of three and four sprinklers along the lateral pipe (using disks to model areas): + +- Three sprinklers: placed $1.1\mathrm{m}$ inset from the end of the 20-m pipe, spaced evenly at $8.9\mathrm{m}$ between sprinklers. 
- Four sprinklers: a sprinkler at each end point of the 20-m pipe, spaced evenly at $6.7\mathrm{m}$ between sprinklers.

Similarly, the lateral pipe spacing and the number of stations (moves) required to cover the entire field were calculated for the two cases:

- Three sprinklers: $15.4\mathrm{~m}$ between stations; six stations.
- Four sprinklers: $8.3\mathrm{m}$ between stations; ten stations.

Advantages of using three sprinklers are fewer moves per cycle and minimization of the area underwatered.

We found that to meet the minimum water requirement of at least $2\mathrm{cm}$ depth over four days, the areas watered once per cycle are the driving factor, resulting in six cycles required every four days for both three and four sprinklers. Therefore, to balance the workload evenly over four days, one-and-a-half cycles must be completed each day. In accordance with the maximum rate at which water can accumulate $(0.75\mathrm{cm / h})$, the time that a sprinkler could water an overlapping area was calculated to be:

- Three sprinklers: 58 min per station; 9 stations per day.
- Four sprinklers: 27 min per station; 15 stations per day.

We also found that the lateral pipe can be moved progressively from station to station during a cycle. However, at end stations the best option (while maintaining uniform coverage) involves leaving the pipe at a station for two consecutive turns while scheduling a wait time between the two waterings.

The total work day for these models is approximately the same. Using three sprinklers requires less move time, as well as fewer moves each day.

The ring model requires more frequent relocation of the lateral pipe, many more cycles to ensure the proper amount of moisture in all areas, and a staggered progression of the lateral pipe.

The biggest weakness of our initial models is assuming no friction head loss in the system.
After some calculation of head loss, and reapplication of the Bernoulli energy equation, we found the new sprinkler discharge speeds and compared them to speeds calculated with no head loss: + +- Three sprinklers: $v = 8.9 \mathrm{~m} / \mathrm{s}$ instead of $9.6 \mathrm{~m} / \mathrm{s}$ (no friction) +- Four sprinklers: $v = 6.7 \mathrm{~m} / \mathrm{s}$ instead of $7.2 \mathrm{~m} / \mathrm{s}$ (no friction) + +Though the discharge speeds decrease, the speed of the optimized three-sprinkler system is still large enough to reach the edge of the field when the sprinklers are positioned on the end of the movable pipe. + +Our computer simulation found an ideal watering over a four-day period using the parameters of our previous three-sprinkler model, eliminating unwatered area by placing sprinklers on the ends of the lateral pipe and changing the spacing of the lateral attachments accordingly. + +# References + +Finnemore, John E., and Joseph B. Franzini. 2002. Fluid Mechanics with Engineering Applications. New York: McGraw-Hill. + +Fipps, Guy, and Frank J. Dainello. 2001. Irrigation. Chapter 5 in Vegetable Growers' Handbook, Web Edition, edited by Frank Dainello and Sam Cotner, Texas Agricultural Extension Service. http://aggie-horticulture.tamu.edu/extension/veghandbook/chapter5/chapter5.html. Accessed 4 February 2006. + +National Resources Conservation Service. 1997. National Engineering Handbook. Part 652: Irrigation Guide. ftp://ftp.wcc.nrcs.usda.gov/downloads/irrigation/neh652/ch6.pdf. Accessed 2 February 2006. +Rowan, Mike, et al. 2004. On-site sprinkler irrigation of treated wastewater in Ohio. http://ohioline.osu.edu/b912/step_5.html Accessed 5 February 2006. +Walski, Thomas, et al. 2004. Computer Applications in Hydraulic Engineering. 6th ed. Watertown, CT: Bentley Institute Press. + +![](images/1df83c702742f0b8031fd91d50c69b8bf2fa6abd58b7cdce32a9adae7499b64d.jpg) +Team members Ben Dunham, Kyle Nixon, and Steffan Francischetti. 
![](images/2cf411aa9d28707a8459f99c1622d6ef0dd6ea9bef8d8b9fac2084ae57b028f8.jpg)
Team advisor Mark Parker.

# Fastidious Farmer Algorithms (FFA)

Matthew A. Fischer

Brandon W. Levin

Nikifor C. Bliznashki

Duke University

Durham, NC

Advisor: William Mitchener

# Summary

An effective irrigation plan is crucial to "hand move" irrigation systems, which consist of easily movable aluminum pipes and sprinklers, typically used as a low-cost, small-scale watering system. Without an effective irrigation plan, the crops will either be watered improperly, resulting in a damaged harvest, or watered inefficiently, using too much water.

We determine an algorithm for "hand move" irrigation systems that irrigates as uniformly as possible in the least amount of time. We physically characterize the system, determine a method of evaluating various irrigation algorithms, and test these algorithms to determine the most effective strategy.

Using fluid mechanics, we find that we can have at most three nozzles on the 20-m pipe while maintaining appropriate water pressure. We model our sprinkler system after the Rain Bird 70H 1" impact sprinkler, which works at the desired pressure and has a nozzle diameter of approximately $0.6\mathrm{cm}$. Combining data and analysis, we confirm that the spray radius of the sprinkler will be $19.5\mathrm{m}$. Researchers have proposed several models for the water distribution pattern about a sprinkler; we consider a triangular distribution and an exponential distribution.

We rule out schemes that do not water all areas of the field at least $2\mathrm{cm}$ every 4 days or that water some areas at more than $0.75~\mathrm{cm / h}$. The largest cost in time and labor is in moving the pipe. Thus, we look for a small number of moves that still gives the desired time and stability. From these configurations, computer analysis determines which is most uniform.

For various situations, we propose an optimal solution.
The bases of the sprinkler placement patterns are triangular and rectangular lattices. We craft three patterns to maximize application to the difficult edges and corners.

- For calm conditions and a level field, the field can be watered with just two moves (the "Lazy Farmer" configuration). However, this approach is unstable, and even a weak wind would leave parts of the field dry. With three moves, little stability is gained; so four positions is best.
- The "Creative Farmer" triangular lattice gives both stability and uniformity. The extra time is warranted because of its ability to adapt.
- We obtain even more stability using the "Conservative Farmer" model, but at the price of a decrease in uniformity from the "Creative Farmer" approach.

# Description of Problem

The goal is to irrigate a $30~\mathrm{m} \times 80~\mathrm{m}$ field as uniformly as possible while minimizing the labor/time required. We assume the following equipment and specifications:

- Pipes of $10$-cm diameter with rotating spray nozzles of $0.6$-cm diameter.
- Nozzles are raised about $1~\mathrm{m}$ above the pipe and can spray at angles ranging from $20^{\circ}$ to $30^{\circ}$.
- The total length of pipe is $20~\mathrm{m}$.
- A water source with a pressure of $420~\mathrm{kPa}$ and a flow rate of $150~\mathrm{L/min}$.

We consider the following guidelines and assumptions:

- No part of the field should receive more than $0.75~\mathrm{cm/h}$.
- Every part of the field should receive at least $2~\mathrm{cm}$ every four days.
- Overwatering should be avoided.
- Sprinklers are in working order and rotate $360^{\circ}$, spraying uniformly with respect to rotational symmetry.
- The soil is approximately uniform and the terrain is flat.
- Wind is considered only in terms of stability.
- We can place a water supply pipe, with multiple connection spots for the movable pipes, through the field along either its width or its length.
- For such a small field, any move requires approximately equal time, so we need only minimize the total number of moves $M_T$.
- In particularly arid areas, evaporation reduces the total water application, but by no more than $5\%$.
- We ignore rainfall, assuming that it is accounted for by delaying waterings.

# Definitions and Notation

Let $D$ be a distribution of sprinklers, including placement on the pipe and locations in the field. We consider accumulation over a region $R$. Let

$M_T(D) =$ total number of moves by the farmer required for a distribution,

$\mathrm{Aver}(D,R) =$ average application rate over region $R$,

$\operatorname{Var}(D,R) =$ variance of the rate of application over region $R$,

$\max(D,R) =$ maximum rate of application over region $R$, and

$\min(D,R) =$ minimum rate of application over region $R$.

# Pipe Capacity and Resulting Pressure/Radii

# Watch out for the Rain Bird

We derive the exit speed, flow, and water-drop drag coefficient for a sprinkler under our conditions and show that the results agree with those for the Rain Bird 70H 1" Brass Impact Sprinkler [Rain Bird Agricultural Products n.d.]. We assume laminar flow and use Bernoulli's equation:

$$
P_1 + \frac{1}{2}\rho v_1^2 + \rho g y_1 = P_2 + \frac{1}{2}\rho v_2^2 + \rho g y_2,
$$

where

$P_i$ is absolute pressure,

$\rho$ is the density of water,

$v_i$ is speed,

$g$ is the gravitational constant, and

$y_i$ is height.

Because our field is flat, we have $y_1 = y_2$, so the height of our source relative to our sprinklers does not affect the exit speed $v_2$:

$$
v_2 = \sqrt{\frac{2}{\rho} P + v_1^2},
$$

where $P$ is the pressure of the source relative to the atmosphere.
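As a numeric cross-check of this expression and of the flow computations that follow (our sketch, not part of the original paper; the $0.983$ attenuation factor is the Rain Bird fit derived in the text):

```python
import math

rho = 1000.0                       # density of water (kg/m^3)
P = 420e3                          # source pressure relative to atmosphere (Pa)
Q_src = 150 / 1000 / 60            # source flow: 150 L/min in m^3/s
r_pipe, r_noz = 0.05, 0.003        # pipe radius and nozzle radius (m)

v1 = Q_src / (math.pi * r_pipe**2)       # speed at the source (= 1/pi m/s)
v2 = math.sqrt(2 * P / rho + v1**2)      # Bernoulli exit speed, about 29 m/s

v_exit = 0.983 * v2                      # apply the fitted attenuation factor
Q_noz = v_exit * math.pi * r_noz**2 * 1000 * 60   # per-sprinkler flow (L/min)
n_max = int(150 // Q_noz)                # sprinklers that fit in 150 L/min

print(round(v1, 3), round(v2, 2), round(Q_noz, 2), n_max)
```

With the 0.6-cm nozzle, each head draws about $48~\mathrm{L/min}$, so at most three sprinklers stay within the $150~\mathrm{L/min}$ supply, consistent with the analysis below.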
We must first find the speed $v_1$ of water at our source:

$$
v_1 = \frac{150~\mathrm{L}}{\mathrm{min}} \times \frac{1~\mathrm{min}}{60~\mathrm{s}} \times \frac{1~\mathrm{m}^3}{1000~\mathrm{L}} \times \frac{1}{\pi (0.05)^2~\mathrm{m}^2} = \frac{1}{\pi}~\mathrm{m/s}.
$$

Plugging $v_1$ into the equation for $v_2$, we obtain

$$
v_2 = \sqrt{\frac{2}{1000} \times 420 \times 1000 + \frac{1}{\pi^2}} \approx \sqrt{840} \approx 28.9~\mathrm{m/s}.
$$

That's fast (about 60 mph)! It may be too fast. This exit speed does not take into account friction in the pipes, for which we propose an attenuation factor. The volume out of the sprinkler is the speed times the cross-sectional area of the sprinkler times the attenuation factor:

$$
Q = C_s A_c \sqrt{\frac{2}{\rho} P},
$$

where

$Q$ is the discharge $(\mathrm{m}^3/\mathrm{s})$,

$C_s$ is the attenuation factor, and

$A_c$ is the cross-sectional area $(\mathrm{m}^2)$.

Using pressure and discharge data from Rain Bird Agricultural Products [n.d.], we find the attenuation factor to be

$$
C_s = \frac{Q}{A_c \sqrt{\frac{2}{\rho} P}} = \frac{3.17 \times \frac{1}{3600}}{\pi (0.003175)^2 \sqrt{800}} \approx 0.983.
$$

This value shows very little loss due to friction. The escape speed with friction is

$$
v = 0.983 \times 28.9 \approx 28.5~\mathrm{m/s}.
$$

The flow out of each sprinkler is simply the speed multiplied by the area, converted to liters per minute:

$$
\frac{\text{Volume}}{\text{unit time}} = 28.5 \times \pi (0.003)^2 \times \frac{1000~\mathrm{L}}{\mathrm{m}^3} \times \frac{60~\mathrm{s}}{\mathrm{min}} = 48.35~\mathrm{L/min}.
$$

We can therefore use up to three sprinklers without exceeding $150~\mathrm{L/min}$; with more than three sprinklers, there will be a pressure drop. To find the new pressure, we use the continuity principle, which states that the volume of water flowing in equals the volume of water flowing out:

$$
A_s v_s = n A_N v_N,
$$

where

$A_s$ is the cross-sectional area of the source,

$v_s$ is the speed of water at our source,

$n$ is the number of sprinklers,

$A_N$ is the cross-sectional area of the sprinkler nozzle, and

$v_N$ is the speed out of the sprinkler nozzle.

Solving for $v_N$, we obtain

$$
v_N = \frac{A_s v_s}{n A_N} = \frac{r_s^2 v_s}{n r_N^2} = \frac{(5 \times 10^{-2})^2 \times \frac{1}{\pi}}{n (3 \times 10^{-3})^2} \approx \frac{88}{n}~\mathrm{m/s},
$$

where $n > 3$, $r_s$ is the radius of the pipe at the source, and $r_N$ is the radius of the sprinkler nozzle.

For four sprinklers, the exit speed would be $22~\mathrm{m/s}$ and the pressure would be $252~\mathrm{kPa}$. The pressure needs to be above $280~\mathrm{kPa}$ [Rain Bird Agricultural Products n.d.]. Since too low a pressure would result in a low degree of uniformity, we limit ourselves to at most three sprinklers.

# Kinematics Equations

Because water droplets are small and the escape speed is above the terminal speed, drag must be taken into account. We have the following differential equations for the speeds in the $x$- and $y$-directions:

$$
\frac{dv_x}{dt} = -k v_x, \quad \frac{dv_y}{dt} = -g - k v_y,
$$

whose solutions are

$$
y(t) = \frac{-g}{k} t + \left(\frac{v_0 k \sin\theta + g}{k^2}\right)\left(1 - e^{-kt}\right) + y_0,
$$

$$
x(t) = \frac{v_0 \cos\theta}{k}\left(1 - e^{-kt}\right) + x_0.
$$

We use the following initial conditions (with some from Rain Bird Agricultural Products [n.d.]) to determine the drag constant:

$$
y_0 = 1~\mathrm{m}, \qquad x_0 = 0, \qquad v_0 = 28.5~\mathrm{m/s}, \qquad \theta = 21^{\circ}.
$$

The published value of the radius for our system is approximately $19.5~\mathrm{m}$ [Rain Bird Agricultural Products n.d.]. Using this distance and the above initial conditions, we determine the drag constant to be $k = 1.203~\mathrm{s}^{-1}$.

We thus have an equation for how the radius of the water emitted by the sprinkler is determined by the height and angle of the sprinkler. Although we keep our sprinkler at factory settings, the farmer could modify the sprinkler to adjust the radius if needed.

# Distribution from Standard Sprinkler

While the sprinklers under consideration cover a disk of radius $19.5~\mathrm{m}$, the distribution need not be uniform over that area. Large droplets tend to travel farther, but the area near the perimeter is much larger than the area near the sprinkler head. We discuss various models for this behavior based on empirical data.

# Triangular Model

Smajstrla et al. [1997] propose that the water distribution can be modeled as a triangle. That is, the application rate falls linearly as a function of distance from the sprinkler head, vanishing beyond the radius. Figure 1 shows an example with radius 25 ft.

![](images/e60374fdb133e2a9a836252da82e27a3711e5eab196e79ae841729b2b254955a.jpg)
Figure 1. Triangular water distribution (redrawn from Smajstrla et al. [1997]).

In three dimensions, the distribution is cone-shaped, centered about the sprinkler nozzle. To analyze a grid pattern, we sum the water distribution over a cone and analyze the resulting surface.

# Experimental/Exponential Decay Model

Louie and Selker [2000] experimentally tested the performance of a Rain Bird 4.37-mm nozzle.
The distribution spikes within $2~\mathrm{m}$ of the sprinkler head, then maintains an approximately uniform rate before decaying near the edge of the radius (Figure 2). We use exponentials to fit a curve to this graph. We then scale the width and the height of the function to correspond to the radius and water flow of our larger sprinkler:

$$
f(r) = \left(3 \times 0.00267 \times e^{-0.7 r} + 0.00267\right) \times e^{-(r/19.5)^{20}}.
$$

To get the three-dimensional distribution, we rotate the function about the $z$-axis by replacing $r$ with $\sqrt{(x - a)^2 + (y - b)^2}$ for a sprinkler centered at $(a, b)$. In some ways, this distribution is a worst-case scenario because of the large peak about the sprinkler. The curve is based experimentally on where drops landed but does not take into account possible spread on landing.

![](images/f035c9f9460b89e5b7a66cfe0736ba1e438da0b09b62f57c3fe1997f12ac03e0.jpg)
Figure 2. Exponential water distribution (redrawn from Louie and Selker [2000]).

# Comparison of Models

The exponential decay model is the more realistic of the two. It forces careful consideration of how long a sprinkler can be left running. Any configuration acceptable for this model will most likely work under the triangular model too.

# Conclusions

With the exponential model, the application rate near the sprinkler head increases to $0.01~\mathrm{m/h} = 10~\mathrm{mm/h}$. We are constrained to a maximum rate of $7.5~\mathrm{mm/h}$ ($0.75~\mathrm{cm/h}$) to avoid damage to the soil and crops. Thus, using the exponential model results in configurations where sprinklers run for less than the full $60~\mathrm{min}$ each hour. We later discuss several methods to minimize the inconvenience that this constraint causes the farmer.

A similar difficulty arises in the triangular model with three sprinklers. The best that we can do for the sprinkler in the middle is to space the sprinkler heads evenly, with one at each end.
The distance of separation is then $10~\mathrm{m}$. Scaling the triangle for the values of the Rain Bird ($3.2~\mathrm{m}^3/\mathrm{h}$, $19.5$-m radius), we get a peak height of about $8~\mathrm{mm/h}$; so at $10~\mathrm{m}$, we get $4~\mathrm{mm/h}$. The middle sprinkler head would be receiving $16~\mathrm{mm/h}$, which is over twice the acceptable amount.

For either model, three sprinklers can be run for only a limited time every hour. Thus, our proposed solutions have exactly two sprinklers on the pipe.

# Analysis of Standard Grid Patterns

We analyze standard grid patterns. Symmetric designs that cover a rectangular field include squares, rectangles, and triangles. To counteract the effects of varying models of distribution (triangular or exponential), all patterns employ overlapping sprinkler patterns. In most cases, researchers recommend $40$–$60\%$ overlap of radii to obtain the most uniform distribution, which also tends to be the most stable under windy conditions [Eisenhauer et al. n.d.].

We use the triangular distribution to evaluate grid patterns, with the goal of finding the ideal side lengths as a ratio of the radius. Mathematical analysis shows that in terms of uniformity, the ideal rectangle is a square with side $1.1 \times (\text{radius})$; however, a triangular grid pattern obtains better uniformity, though with smaller spacing, $0.85 \times (\text{radius})$.

# Evaluation Methods

We have two primary concerns in evaluating a grid pattern. The minimum value on the surface determines the time required to water the field, so we must watch for too low a minimum value. We measure uniformity by calculating the variance of the distribution. In each case, we consider a unit of the grid, that is, one square or one triangle, and plot the distributions of all sprinklers that water that unit.
The average rate and variance are

$$
\mathrm{Aver}(D,R) = \frac{1}{\mathrm{Area}(R)} \int_R D(x,y)\,dA,
$$

$$
\operatorname{Var}(D,R) = \frac{1}{\mathrm{Area}(R)} \int_R \left(D(x,y) - \mathrm{Aver}(D,R)\right)^2 dA,
$$

where $D(x,y)$ is the distribution and $R$ is the unit region. A large $\operatorname{Var}(D,R)$ means that water will be applied nonuniformly, which could result in poor growth. To aid in assessing the extent of variation, we also calculate $\max(D,R)$. The difference between $\max(D,R)$ and $\min(D,R)$ gives a measure of how large the variation is.

# Rectangular Grids

For each of several vertical separations between sprinklers, from 0.8 to 1.2 times the radius, we consider a range of possible horizontal separations. Generally, the variance decreases and then increases as the horizontal separation increases, defining a clear minimum, which for all configurations occurs at a horizontal separation of approximately 1.1 times the radius. The best rectangular configuration turns out to be a square of side length $1.1 \times (\text{radius})$. In the tests, the difference between maximum and minimum correlates closely with the variance, so we use that as the basis for comparison.

# Triangular Grids

For the triangular lattice, we must also model surrounding triangles, because nearby sprinklers have a significant effect.

We do not consider distances less than $0.8 \times (\text{radius})$, because significant overwatering would take place. The most uniform configuration is for $0.85 \times (\text{radius})$. This separation distance results in higher uniformity than the rectangular configuration.

The exponential distribution gave similar results on the tests, indicating that the triangular distribution is a good enough approximation for comparing uniformity.

# Proposed Irrigation Methods

We proceed to design the pipe network and watering schemes.
We make the following assumptions:

- Water is distributed according to the exponential water distribution.
- The sprinklers are unmodified, and we place two of them at the ends of the pipe, separated by $20~\mathrm{m}$.
- The efficiency of the sprinkler irrigation system is at least $95\%$; that is, no more than $5\%$ is lost to evaporation and other factors.
- There exists infrastructure that can supply water along the center of the field.

Our goal is a system that provides at least $2~\mathrm{cm}$ of water at every point of the field every 4 days, and no more than $0.75~\mathrm{cm}$ during any single hour. In addition, we would like the pattern of watering to be periodic with a period of four days. We compare the different systems using the following criteria:

- required number of moves $M_T$ of the pipes;
- hours of operation of the system;
- stability with respect to factors like wind and equipment malfunctions; and
- uniformity of irrigation.

The water falling right next to the sprinkler is $1~\mathrm{cm/h}$, which means that we cannot have a sprinkler operational for more than $45~\mathrm{min}$ in an hour; so the farmer must come and stop the sprinklers $45~\mathrm{min}$ after they were turned on and turn them on again (or move them) $15~\mathrm{min}$ later. Since the pipes have valves that can easily be closed and opened, even under pressure, turning them on or off consumes only a few minutes. In addition, if one sprinkler is within the radius of another, the time that they can be operational is severely reduced; we avoid such a situation by positioning the sprinklers at the ends of the $20$-m pipe.

We divided the field into four $20~\mathrm{m} \times 30~\mathrm{m}$ rectangular pieces, each of which is further subdivided into triangles by the two diagonals (Figure 3).
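The 45-minute rule can be checked against the fitted exponential distribution (our sketch, not part of the original paper; rates in m/h):

```python
import math

def f(r):
    """Exponential application-rate profile from the model above (m/h)."""
    return (3 * 0.00267 * math.exp(-0.7 * r) + 0.00267) * math.exp(-(r / 19.5) ** 20)

peak = f(0.0)                        # ~0.0107 m/h, about 1 cm/h next to the head
minutes_per_hour = 60 * 0.75 / 1.0   # 0.75 cm/h cap over a ~1 cm/h peak -> 45 min
print(round(peak, 5), minutes_per_hour)
```

The peak rate right at the head exceeds the hourly cap by roughly a factor of $4/3$, which is exactly why each sprinkler may run only 45 minutes out of every hour.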
It is impossible to water the whole field using our pipe of length $20~\mathrm{m}$ and our sprinklers with their irrigation radius of $19.5~\mathrm{m}$, since we cannot water two points separated by more than $19.5~\mathrm{m} + 20~\mathrm{m} + 19.5~\mathrm{m} = 59~\mathrm{m}$. Therefore, we must move the pipe at least once; and since after 4 days the pipe should be in its initial position, we must move it twice per period. Therefore, our lower bound on $M_T$ is 2.

![](images/6ccb58364e5d3fc6e5e14872eca403d3fe71be9940e4ef89d8920d0a201c2e07.jpg)
Figure 3. The field, subdivided.

It would be nice to achieve this minimum. Insight into how to do so can be obtained by drawing circles of radius 19.5 at the points $A, C, E, A', C'$, and $E'$ in Figure 4. We must position the sprinklers so that in each circle there is at least one sprinkler. This leads to a scheme with two moves.

![](images/8d5354ce00fb984876eec6c9eec628d00ae1241d97ba133de1286f82f26647bf.jpg)
Figure 4. Covering the edges.

# The Case $M_{T} = 2$

Suppose that we position the pipe as shown in Figure 5.

![](images/dcc6c1ab9c3be1506053ab26bec3dddf0c64a442baf82a7e8efe2f446ddd2cb3.jpg)
Figure 5. The Lazy Farmer configuration.

The greatest distance from a sprinkler (denoted by a dot) is $\sqrt{10^2 + 15^2} = 18.03~\mathrm{m}$, attained precisely at the points $A, C, E, A', C'$, and $E'$ in Figure 5.

From our exponential water distribution graph (Figure 2), the amount of water falling at those points is $2.25~\mathrm{mm/h}$; but since we operate a sprinkler for at most $45~\mathrm{min}$ each hour, the actual value is $1.68~\mathrm{mm/h}$. Thus, if we operate the system for 13 hours at each location, we get a minimum of $2.18~\mathrm{cm}$ of water at every point, which is still more than $2~\mathrm{cm}$ everywhere after we subtract the loss due to evaporation.
Therefore, the total time the system would be operational is 26 hours, the pipes would have to be moved twice, and the amount of water used would be $(2)(26~\mathrm{h})(45~\mathrm{min/h})(48.35~\mathrm{L/min}) = 113 \times 10^3~\mathrm{L}$ per period. If the watering were optimal, the required amount of water would be $(30~\mathrm{m})(80~\mathrm{m})(2~\mathrm{cm}) = 48 \times 10^3~\mathrm{L}$. Therefore, the water efficiency is $42\%$. As for uniformity, we calculate that the variance is $1.7 \times 10^{-6}$, which corresponds to a high degree of uniformity. However, this configuration has a major disadvantage: Even a small change of $2~\mathrm{m}$ in the sprinkler radius (for instance, due to wind or to a decrease in pressure in the pipes) can result in distant points such as $A$ receiving no water. Therefore, this configuration, although very uniform and with minimal $M_T$, is not very stable.

# The Case $M_T = 3$

We want a smaller maximum distance $d_{\mathrm{max}}$ between a point in the field and the nearest sprinkler. By an argument similar to that in the previous case, the resulting configuration should look something like Figure 6.

![](images/8d81ee0802d2845c3a645d7fad11282fccc7450cd4dd6d7d9ee2c4659a6bb2b4.jpg)
Figure 6. Configuration for $M_T = 3$.

In such a configuration, $d_{\mathrm{max}} \geq 16.5~\mathrm{m}$, so the gain in stability is slight. In addition, there is a huge increase in operational time and water required, to $39~\mathrm{h}$ and $169 \times 10^{3}~\mathrm{L}$. Therefore, the case $M_T = 3$ results in bad configurations.

# The Case $M_{T} = 4$

With the increase in the number of times that we can move the pipes, the complexity of positioning them increases dramatically, making it nearly impossible to consider all configurations. However, since we want a stable configuration, we should have sprinklers close to the points $A$, $A'$, $E$, and $E'$.
In addition, for uniformity, we should preserve some symmetry. The earlier triangular and rectangular patterns can be successfully applied in this case. The best way to reduce peaks in watering is to use a triangular pattern, like the one in Figure 7.

![](images/3f75d3b3ac4058c5c0476d293a14ac8e75fafa3fe108ded9a8abdf84474b1902.jpg)
Figure 7. Creative Farmer layout.

The sprinklers are first set at the vertices of equilateral triangles of side $20~\mathrm{m}$. After that, to minimize instability, the leftmost pipe is translated $5~\mathrm{m}$ to the right and the rightmost $5~\mathrm{m}$ to the left. Then $d_{\max} \leq 14~\mathrm{m}$, which implies that this scheme would work well provided that the wind does not cause more than a $25\%$ deviation. This layout has a variance of $3.35 \times 10^{-6}$, a period of operation of $52~\mathrm{h}$, and a water consumption of $226 \times 10^{3}~\mathrm{L}$ per period.

Another possibility is sprinklers in a rectangular pattern (Figure 8).

![](images/08adfb9b5c7eb3f1a45ebe6c7f5775986b76c229d1cd99a0bf1a501232e5a356.jpg)
Figure 8. Conservative Farmer layout.

The distance from the sprinklers to the points on the sides is less than $12~\mathrm{m}$, and the area between two pipe positions is within the radius of four sprinklers and thus would be watered no matter what the direction of the wind. This irrigation would be good provided that the wind doesn't alter the area covered by more than $7~\mathrm{m}$. The layout's variance is $4.17 \times 10^{-6}$, and the hours of operation and water consumption are the same as in the previous case.

# The Case $M_T > 4$

Since moving the pipes takes a lot of time, and since we have observed how the variance increases even for the triangular configuration, we conclude that the case $M_T > 4$ would not lead to a good layout.
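The coverage claims above are easy to spot-check numerically. The sketch below (ours; the sprinkler coordinates for the Lazy Farmer layout are inferred from Figure 5 and are a modeling assumption) scans the field on a 1-m grid for the maximum distance to the nearest sprinkler:

```python
import math

# Lazy Farmer head positions inferred from Figure 5: two pipe settings on the
# centerline y = 15, with a sprinkler at each pipe end (a modeling assumption).
heads = [(10, 15), (30, 15), (50, 15), (70, 15)]

def nearest(x, y):
    """Distance from field point (x, y) to the closest sprinkler head."""
    return min(math.hypot(x - hx, y - hy) for hx, hy in heads)

# Scan the 80 m x 30 m field on a 1-m grid.
d_max = max(nearest(x, y) for x in range(81) for y in range(31))
print(round(d_max, 2))   # sqrt(10^2 + 15^2), comfortably within the 19.5-m radius
```

The same scan, rerun with each layout's head positions, gives the $d_{\max}$ values quoted for the other configurations.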
# Numerical Analysis of Proposed Strategies

With the same criteria used to evaluate the standard grid patterns, we evaluate the algorithms on our $30~\mathrm{m} \times 80~\mathrm{m}$ test field. We set up the field on a grid with corners $(0,0)$, $(80,0)$, $(0,30)$, and $(80,30)$. This setup allows us to evaluate the entire field and take edge effects into account. The following four strategies are the best performers of the several that we analyzed. The corresponding sprinkler placements are shown in Figure 9.

![](images/77fd918293eb40ab929d295f0d9f700d924171a44b729b840af8e486b58e19e4.jpg)
Creative Farmer

![](images/9f32d10615d33363633f0668f64ec76b8260daa85d83557902d88d7f1c6912c7.jpg)
Lazy Farmer

![](images/b627977b39075059e806946ea71be4ea8744f0e93d38899af2b8f0d9cf40048f.jpg)
Conservative Farmer

![](images/f57fcdc9805d0914bd1f407610a4ef6016a95f016bb1fa5f7e6dce2efdd4c1de.jpg)
Passive-Aggressive Farmer
Figure 9. Sprinkler placements for several strategies.

# The Lazy Farmer System

This strategy uses few moves, has a high degree of uniformity, and can irrigate the entire field; it is perfect for farmers who would rather be shooting soda cans off of a fence post than lugging around a heavy aluminum tube. It takes $26~\mathrm{h}$ and uses the least amount of water.

# The Passive-Aggressive Farmer System

This approach neither improves much upon the stability of the Lazy Farmer system nor saves time by using few pipe moves. Therefore, it would be perfect for an indecisive or passive-aggressive farmer.

# The Conservative Farmer System

This strategy is very stable, perfect for a farmer who is very careful and untrusting of the wind. It takes twice as much time as the Lazy Farmer approach and uses twice as much water.

# The Creative Farmer System

This is the second most uniform system. The setup is somewhat complicated, but some farmers may be up to the task.
It is perfect for a farmer who regularly plays Sudoku and stopped watching the TV show MacGyver (1985-1992) because the farmer felt MacGyver lacked ingenuity. It takes just as long as the Conservative Farmer system and uses just as much water.

# Conclusion

There are only two worthwhile strategies. The Conservative Farmer system should be used in windy conditions or if the level of the field is somewhat nonuniform. The Lazy Farmer system should be used otherwise, because it is the fastest, easiest, and most uniform.

We base our strategies and conclusions on data from a sprinkler manufacturer. We also examined specifications of sprinklers from other manufacturers and found little change in our results.

The methods that we used to evaluate proposed strategies are general. Our analysis could be repeated to obtain optimal strategies in other cases with different parameter values (a different pipe, field, or water pressure).

# References

Eisenhauer, Dean, Derrel Martin, and Glenn Hoffman. n.d. Irrigation Principles and Management. Class notes, Chapter 11. Lincoln, NE: University of Nebraska.

Louie, Michael J., and John S. Selker. 2000. Sprinkler head maintenance effects on water application uniformity. Journal of Irrigation and Drainage Engineering 126 (3) (May/June 2000): 142-148.

Rain Bird Agricultural Products. n.d. Performance data: 70H $1''$ (25 mm) full circle, brass impact sprinkler. http://www.rainbird.com/pdf/ag/imptmtrc70h.pdf.

Smajstrla, A.G., F.S. Zazueta, and D.Z. Haman. 1997. Lawn sprinkler selection and layout for uniform water application. BUL320, Agricultural and Biological Engineering Department, Florida Cooperative Extension Service, Institute of Food and Agricultural Sciences, University of Florida. http://edis.ifas.ufl.edu/AE084.

![](images/2d4a8d332667759225d75139ca8fc1351cd74097ce1fc3fe5e845cc811706a8b.jpg)
Team members Matthew Fischer, Brandon Levin, and Nikifor Bliznashki, in Duke apparel.
![](images/c7ff406182d4d95a67b30bfe39ee4c949f2d8e3afcaf5c0b5b37f1656129160c.jpg)
Team advisor William Mitchener (on right), with friend.

Pp. 269-328 can be found on the Tools for Teaching 2006 CD-ROM.

# A Schedule for Lazy but Smart Ranchers

Wang Cheng

Wen Ye

Yu Yintao

Shanghai Jiaotong University

Shanghai, China

Advisor: Song Baorui

# Summary

We determine the number of sprinklers to use by analyzing the energy and motion of water in the pipe and examining the engineering parameters of sprinklers available on the market.

We build a model to determine how to lay out the pipe each time the equipment is moved. This model leads to a computer simulation of catch-can tests of the irrigation system and an estimation of both the distribution uniformity (DU) and the application efficiency of different schemes for where to move the pipe. At this stage, DU is the most important factor. We find a schedule in which one sprinkler is positioned outside the field in some moves, but which results in a higher DU ($92\%$) and a saving of water.

We determine two schedules to irrigate the field. In one schedule, the field receives water evenly during a cycle of irrigation (in our schedule, 4 days), while the other schedule costs less labor and time. Our suggested solution, which is easy to implement, includes a detailed timetable and the arrangement of the pipes. It costs 12.5 irrigation hours and 6 equipment resets in every 4-day cycle to irrigate the field with a DU as high as $92\%$.

# Assumptions and Definitions

- The weather is "fine" and the influence of wind can be neglected.
- The whole system is "ideal," in that evaporation, leaking, and other water losses can be neglected.

- The water source can be put at any position in the field. In practice, a tube can be used to transport water from the pump to the pipe set.
- No mainline exists, so that all pipes join together and can be put at any position in the field.
+- The time for a rancher to uncouple, move, and reinstall the pipe set is half an hour. +- The discharge of any sprinkler is the same. +- The design pressure of sprinklers is about $400\mathrm{kPa}$ and the sprinkler is an impact-driven rotating sprinkler. +- The diameter of the riser is the same as that of the pipe. +- The water pressures in pipes are assumed to be the same. In practice, there is a slight difference. + +Table 1. +Variables and constants. + +
| Variable | Definition | Units |
|----------|------------|-------|
| $p_{in}$ | Water pressure in the pipe before a junction | kPa |
| $p_{out}$ | Water pressure in the pipe after a junction | kPa |
| $p_{up}$ | Water pressure at the sprinkler at a junction | kPa |
| $v_{in}$ | Water speed in the pipe before a junction | m/s |
| $v_{out}$ | Water speed in the pipe after a junction | m/s |
| $v_{up}$ | Water speed at the sprinkler at a junction | m/s |
| $A_{in}$ | Section area of the pipe before a junction | cm$^2$ |
| $A_{out}$ | Section area of the pipe after a junction | cm$^2$ |
| $A_{up}$ | Section area of the sprinkler at a junction | cm$^2$ |
| $h$ | Height of the sprinkler above the pipe | m |
| $\Delta t$ | Change in time | s |
| $v_{source}$ | Speed of the water source | m/s |
| $n$ | Number of sprinklers | - |
| $distr(r)$ | Distribution function of precipitation profile | - |
| $p$ | Precipitation rate | - |
| $R$ | Sprinkling range | m |
| $r$ | Distance from a sprinkler | m |
| $r_i$ | Distance from the $i$th sprinkler | m |
| $\alpha$ | Obliquity of the precipitation profile | rad |
| $pr(r)$ | Precipitation function of one sprinkler | cm/min |
| DU | Distribution uniformity of an irrigation system | - |

| Constant | Definition | Value |
|----------|------------|-------|
| $\rho$ | Density of water | 1.0 kg/L |
| $g$ | Acceleration of gravity | 9.8 m/s$^2$ |
+ +# Problem Analysis + +Our goal is to determine the number of sprinklers and the spacing between them and find a schedule to move the pipes, including where to move them. Our approach can be divided into three stages: + +- Determine the number of sprinklers. We figure out the pressure and speed of water from each sprinkler and then determine possible sprinkler numbers from engineering data. +- Determine where to put the pipes. We consider major factors, such as sprinkling time, moving time and distribution uniformity (DU). Since the pipe positions depend on the number of sprinklers and the precipitation profile, we just work out some problem-specific cases. However, our method can be used to solve any practical case. +- Determine the schedule to move the pipes. Referring to the water need of the field, we make a schedule that minimizes the time cost, which, obviously, is closely related to the number of moves of the pipes. + +# Model Development + +# Stage 1: Water Pressure and Speed + +![](images/c3c46b853ec15dceedb1caa4ade426873e1a3d84428520692954ef4e2a109b13.jpg) +Figure 1. Overall sketch for four sprinklers and four junctions. The pressure throughout the shaded area is the same, due to our assumption. + +We apply the law of conservation of energy. The work done by the forces is + +$$ +F _ {\mathrm {i n}} s _ {\mathrm {i n}} - F _ {\mathrm {u p}} s _ {\mathrm {u p}} - F _ {\mathrm {o u t}} s _ {\mathrm {o u t}} = p _ {\mathrm {i n}} A _ {\mathrm {i n}} v _ {\mathrm {i n}} \Delta t - p _ {\mathrm {u p}} A _ {\mathrm {u p}} v _ {\mathrm {u p}} \Delta t - p _ {\mathrm {o u t}} A _ {\mathrm {o u t}} v _ {\mathrm {o u t}} \Delta t. +$$ + +The decrease in potential energy is + +$$ +- m g h = - \rho g A _ {\mathrm {u p}} v _ {\mathrm {u p}} \Delta t h. +$$ + +![](images/0b216f9c68e7553931198fd2b64dfa15e8b77b9bcc0c16fc20d21bc7bc6a02e3.jpg) +Figure 2. Sketch of one junction. 
The increase in kinetic energy is

$$
\frac{1}{2} m v_{\mathrm{up}}^2 + \frac{1}{2} m v_{\mathrm{out}}^2 - \frac{1}{2} m v_{\mathrm{in}}^2 = \frac{1}{2} \rho A_{\mathrm{up}} v_{\mathrm{up}} \Delta t\, v_{\mathrm{up}}^2 + \frac{1}{2} \rho A_{\mathrm{out}} v_{\mathrm{out}} \Delta t\, v_{\mathrm{out}}^2 - \frac{1}{2} \rho A_{\mathrm{in}} v_{\mathrm{in}} \Delta t\, v_{\mathrm{in}}^2.
$$

Putting these together, because of the law of conservation of energy, yields

$$
\begin{array}{l} p_{\mathrm{in}} A_{\mathrm{in}} v_{\mathrm{in}} \Delta t - p_{\mathrm{up}} A_{\mathrm{up}} v_{\mathrm{up}} \Delta t - p_{\mathrm{out}} A_{\mathrm{out}} v_{\mathrm{out}} \Delta t - \rho g A_{\mathrm{up}} v_{\mathrm{up}} \Delta t h = \\ \frac{1}{2} \rho A_{\mathrm{up}} v_{\mathrm{up}} \Delta t\, v_{\mathrm{up}}^2 + \frac{1}{2} \rho A_{\mathrm{out}} v_{\mathrm{out}} \Delta t\, v_{\mathrm{out}}^2 - \frac{1}{2} \rho A_{\mathrm{in}} v_{\mathrm{in}} \Delta t\, v_{\mathrm{in}}^2. \tag{1} \\ \end{array}
$$

Since the fluid is incompressible, we have

$$
A_{\mathrm{in}} v_{\mathrm{in}} = A_{\mathrm{up}} v_{\mathrm{up}} + A_{\mathrm{out}} v_{\mathrm{out}}. \tag{2}
$$

The diameters are all the same:

$$
A_{\mathrm{in}} = A_{\mathrm{up}} = A_{\mathrm{out}} = \pi \left(\frac{10~\mathrm{cm}}{2}\right)^2. \tag{3}
$$

According to the assumptions, at every junction we have

$$
p_{\mathrm{in}} = p_{\mathrm{out}} = 420~\mathrm{kPa}, \tag{4}
$$

$$
v_{\mathrm{up}} = \frac{v_{\mathrm{source}}}{n}, \tag{5}
$$

where

$$
v_{\mathrm{source}} = \frac{150~\mathrm{L/min}}{\pi \left(\frac{10~\mathrm{cm}}{2}\right)^2} = 0.318~\mathrm{m/s}.
$$

Therefore, from (2), (3), and (5), we have for the $i$th junction

$$
v_{\mathrm{in}} = v_{\mathrm{source}} \left(1 - \frac{i - 1}{n}\right), \qquad v_{\mathrm{out}} = v_{\mathrm{source}} \left(1 - \frac{i}{n}\right).
$$

Putting (1)-(5) together, we can obtain $p_{\mathrm{up}}$ at every junction. In fact, at the last (i.e., the $n$th) junction, we have

$$
v_{\mathrm{in}} = v_{\mathrm{up}} = \frac{v_{\mathrm{source}}}{n}, \qquad v_{\mathrm{out}} = 0.
$$

Putting these into (1), we get

$$
p_{\mathrm{up}} = p_{\mathrm{in}} - \rho g h,
$$

which means that the pressure at the last sprinkler is independent of $n$.

Commonly, $h$ is about $0.5\mathrm{m}$ to $1.5\mathrm{m}$; even if we assume that $h = 1.5\mathrm{m}$, the pressure $p_{\mathrm{up}}$ at the last junction is $405\mathrm{kPa}$, not far from $420\mathrm{kPa}$. (If $h = 0.5\mathrm{m}$, the last $p_{\mathrm{up}}$ is $415\mathrm{kPa}$.)

From these equations, we know that $p_{\mathrm{up}}$ at the last junction differs the most from $420\mathrm{kPa}$, that at the first junction is the closest to $420\mathrm{kPa}$ (and below it), and the values decrease slowly from junction 1 to junction $n$. We conclude that $p_{\mathrm{up}}$ at every junction is below but very close to $420\mathrm{kPa}$, no matter how many sprinklers there are. This justifies our assumption that the design pressure of the sprinklers is about $400\mathrm{kPa}$.

# Information and Analysis of Sprinklers

The impact-driven sprinkler is the most widely used rotating sprinkler and the one that we assume is used. Some rotating sprinklers have a sector mechanism that can wet either a full circle or a circular sector. There are three main structural parameters of a sprinkler: intake line diameter, nozzle diameter, and nozzle elevation angle.
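The Stage 1 computation above is easy to check numerically. The sketch below (Python; $\rho = 1000\,\mathrm{kg/m^3}$, $g = 9.8\,\mathrm{m/s^2}$, and $h = 1.5\,\mathrm{m}$ are the values assumed in the text, and $n = 4$ is one of the candidate sprinkler counts) solves the energy balance (1) for $p_{\mathrm{up}}$ at each junction:

```python
import math

def junction_pressures(n=4, p=420e3, h=1.5, rho=1000.0, g=9.8,
                       flow=150e-3 / 60, diameter=0.10):
    """Solve energy balance (1) for p_up at junctions 1..n.

    Dividing (1) by A*dt (all areas are equal) with p_in = p_out = p gives
      p*(v_in - v_out) - rho*g*h*v_up - 0.5*rho*(v_up^3 + v_out^3 - v_in^3)
        = p_up * v_up.
    """
    area = math.pi * (diameter / 2) ** 2
    v_src = flow / area                 # about 0.318 m/s
    v_up = v_src / n                    # equal split among sprinklers, eq. (5)
    pressures = []
    for i in range(1, n + 1):
        v_in = v_src * (1 - (i - 1) / n)
        v_out = v_src * (1 - i / n)
        ke = 0.5 * rho * (v_up**3 + v_out**3 - v_in**3)
        pressures.append((p * (v_in - v_out) - rho * g * h * v_up - ke) / v_up)
    return pressures

ps = junction_pressures()
# At the last junction the kinetic terms cancel, so p_up = p - rho*g*h (~405 kPa).
```

Because the flow speeds are small, the kinetic terms contribute only a few hundred pascals; the elevation term $\rho g h$ dominates, which is why every junction pressure sits just below 420 kPa.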
An empirical formula gives the spraying range of an impact-driven sprinkler: + +$$ +R = 1. 7 0 d ^ {0. 4 8 7} h _ {p} ^ {0. 4 5}, +$$ + +where $d$ is the nozzle diameter and $h_p$ is the operational pressure head. + +Table 2 shows data on impact sprinklers. Since for our problem the design pressure of the sprinklers is $400\mathrm{kPa}$ , we have medium-pressure sprinklers; in fact, they have the best application uniformity. + +Table 2. Data on sprinklers [Zhu et al. 1989]. + +
| Type | Design pressure (kPa) | Range (m) | Discharge (m³/h) |
| --- | --- | --- | --- |
| Low pressure | <200 | <15.5 | <2.5 |
| Medium pressure | 200–500 | 15.5–42 | 2.5–32 |
| High pressure | >500 | >42 | >32 |
+ +Table 3 shows data on medium-pressure impact-driven sprinklers with $6\mathrm{~mm}$ nozzle diameter. For sprinklers working at $400\mathrm{kPa}$ (as assumed), the discharge is $2.5 - 3.5\mathrm{~m}^3/\mathrm{h}$ and spraying range is 18.5, 19, or $19.5\mathrm{~m}$ ; we use $19\mathrm{~m}$ as the range. The discharge of the source is $150\mathrm{~L/min} = 9\mathrm{~m}^3/\mathrm{h}$ ; thus, to fit every sprinkler's actual discharge to the design discharge, the number of sprinklers should be 3 or 4, because $9/3 = 3$ or $9/4 = 2.25$ , which are within the range $2.5 - 3.5\mathrm{~m}^3/\mathrm{h}$ (or close to it). + +Table 3. Data on nozzles with diameter $6\mathrm{mm}$ [Zhu et al. 1989]. + +
| Model | Nozzle diameter (mm) | Design pressure (kPa) | Discharge (m³/h) | Range (m) |
| --- | --- | --- | --- | --- |
| PY-115 | 6 | 200 | 1.23 | 15.0 |
| | | 300 | 1.51 | 16.5 |
| PY-120 | 6 | 300 | 2.17 | 18.0 |
| | | 400 | 2.50 | 19.5 |
| PY-1S20A (four nozzles) | 6 (×4) | 300 | 2.99 | 17.5 |
| | | 400 | 3.41 | 19.0 |
| PY1S20 | 6 | 300 | 2.22 | 18.0 |
| | | 400 | 2.53 | 19.5 |
| 15PY2 (22.5°) | 6 | 350 | 2.40 | 17.0 |
| | | 400 | 2.56 | 17.5 |
| 15PY2 (30°) | 6 | 350 | 2.40 | 18.0 |
| | | 400 | 2.56 | 18.5 |
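The sprinkler-count argument can be checked with a few lines (a sketch; the 0.3 m³/h slack admitted for discharges merely "close to" the 2.5–3.5 m³/h design window at 400 kPa is our assumption):

```python
# Source supplies 150 L/min = 9 m^3/h; n sprinklers each discharge 9/n m^3/h.
source = 150 * 60 / 1000           # m^3/h
lo, hi, slack = 2.5, 3.5, 0.3      # design-discharge window, plus slack
feasible = [n for n in range(1, 10)
            if lo - slack <= source / n <= hi + slack]
# feasible -> [3, 4]: 9/3 = 3.0 lies in the window; 9/4 = 2.25 is close to it.
```

Two sprinklers would each have to pass 4.5 m³/h, well above the window, and five would each pass only 1.8 m³/h, so 3 or 4 are the only workable counts.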
Sprinklers with higher design pressure tend to have larger wetted diameters. However, deviations from the manufacturer's recommended pressure may have the opposite effect (an increase in pressure causing a decrease in diameter), and uniformity will probably be compromised. Figure 3 shows the typical precipitation distribution of one sprinkler at low, correct, and high pressure.

In practice, people use "catch-can" data to generate a precipitation profile of a "hand-move" irrigation system. That is, they place cans evenly in the field to catch the water; the precipitation profile of the irrigation is given by the amounts of water in the catch-cans.

One measure of how uniformly the water is applied to the field is Distribution Uniformity (DU) [Merkley and Allen 2004]:

$$
\mathrm{DU} = \frac{\text{average precipitation of low quarter}}{\text{average precipitation}}\times 100\% . \tag{6}
$$

Usually, DUs of less than $70\%$ are considered poor, DUs of $70-90\%$ are good, and DUs greater than $90\%$ are excellent. A bad DU means that either too much water is applied, costing unnecessary expense, or too little water is applied, causing stress to crops. There must be good DU before there can be good irrigation efficiency [Rain Bird Agricultural Products n.d.]. To simplify our calculation, we approximate the precipitation profile of a single sprinkler (Figure 3b) by a function $\mathrm{distr}(r)$, the relative precipitation rate at distance $r$ from the sprinkler (Figure 5).

# Stage 2: Scheduling the Irrigation

A schedule to move the pipes includes both where to move them and how long to leave them. We imagine a fixed irrigation system consisting of several $20\mathrm{m}$ pipes.
If the system can meet the needs of the crops nicely—that is, with high Distribution Uniformity (DU)—then we just move the pipe from one position to another. So, we determine where to move the pipe by laying out a system of several $20\mathrm{m}$ pipes and then decide how long we should water the field before making the next move. First, we use a simulation of catch-can analysis to choose a layout with a high DU.

![](images/4e9c741db097f6881f0c713e8ea376bab222d88e8f9766b9109ba7b4cf4405d6.jpg)

![](images/59e68c44bebf78937dc8f4ea172f9f753a00fca3a50f13551f92909bf5697dba.jpg)

![](images/355e8b500b541f8e86bce9bf014af6d60e1d201cdc30b7ca506e07bb4bc66022.jpg)
Figure 3. Relation between pressure and precipitation distribution (redrawn from Zhu et al. [1989]). a) Pressure is too low. b) Pressure is OK. c) Pressure is too high.

![](images/625069fbee510066c5dc7c1f7a4c70eac3f8b165e033d0bcec43a75ed8828606.jpg)
Figure 5. Precipitation rate vs. distance to the sprinkler.

# Catch-can Analysis

Since the water sprayed by a sprinkler has a known distribution $\mathrm{distr}(r)$, we use the following method to simulate the catch-can test.

For rectangular spacing (Figure 6a), we consider the rectangular region between four adjacent sprinklers. We pick 900 positions evenly distributed in the region. For each position, we calculate its relative precipitation rate $p$:

$$
p = \sum_{i} \operatorname{distr}\left(r_{i}\right),
$$

where $r_i$ is the distance from the position to the $i$th sprinkler. Using (6), we calculate the DU of this irrigation system.

![](images/fe3e8e2fdaa66cdc01ab027bfb9a7bf10a72b192b55dd9c6f215d865f44bea51.jpg)
Figure 6a. Rectangular spacing.

![](images/9dbe21b2815c1bb06a5bc08930f5827783c3928f24f274871e89972277eef2c2.jpg)
Figure 6b. Triangular spacing.
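The catch-can simulation can be sketched as follows (Python). The cone profile $\mathrm{distr}(r) = \max(0, 1 - r/R)$ stands in for the measured profile of Figure 5, and the 30 × 30 grid gives the 900 positions mentioned above; both are assumptions for illustration.

```python
import math

R = 19.0  # spraying range (m)

def distr(r):
    """Assumed cone-shaped relative precipitation profile (cf. Figure 5)."""
    return max(0.0, 1.0 - r / R)

def du(sprinklers, width, height, steps=30):
    """Simulate a catch-can test on a width x height region; return DU (%) per (6)."""
    rates = []
    for i in range(steps):
        for j in range(steps):
            x, y = (i + 0.5) * width / steps, (j + 0.5) * height / steps
            rates.append(sum(distr(math.hypot(x - sx, y - sy))
                             for sx, sy in sprinklers))
    rates.sort()
    low = rates[: len(rates) // 4]
    return 100.0 * (sum(low) / len(low)) / (sum(rates) / len(rates))

# Rectangular spacing: sprinkler distance 10 m, lateral distance 20 m.
score = du([(0, 0), (10, 0), (0, 20), (10, 20)], 10, 20)
```

With this assumed profile the layout scores a high DU, consistent with the trend reported in Figure 7; swapping in a measured $\mathrm{distr}(r)$ reproduces the paper's actual numbers.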
As we've already deduced, the number of sprinklers should be 3 or 4, so the sprinkler distance will be either $10\mathrm{m}$ $(= 20\mathrm{m} / (3 - 1))$ or $6.67\mathrm{m}$ $(= 20\mathrm{m} / (4 - 1))$. The DU is then a function of the lateral distance. The same analysis applies to triangular spacing (Figure 6b). From the simulation, we get the results in Figure 7.

![](images/973d461dcab8cd31d5408756c4ecee7f49bd8bff6f1cc17f0eb63192e0886bf0.jpg)
Figure 7. DU vs. lateral distance, in 4 different situations.

The simulation shows that when the lateral distance is $\leq 20\mathrm{m}$, the DU is acceptable ($\geq 90\%$), regardless of the spacing and whether the sprinkler distance is $6.67\mathrm{m}$ or $10\mathrm{m}$. But since a larger lateral distance means less time to irrigate the field (fewer moves to make), we pick $20\mathrm{m}$ as the lateral distance. Figures 8 and 9 show the precipitation profiles for the irrigation systems with sprinkler distance $10\mathrm{m}$ and lateral distance $20\mathrm{m}$.

![](images/6f760bbb0864559cc2ae1748b807f4a9c8a57f126cf0141d191561099dae6c63.jpg)
Figure 8. $\mathrm{DU} = 98.1\%$. Left: Precipitation profile for rectangular spacing with sprinkler distance $10\mathrm{m}$ and lateral distance $20\mathrm{m}$. Right: The 3D form of the precipitation profile.

![](images/fc8705e014f447470bcf9b0411995d8fc6833b39a09864be0c79028a43df0a91.jpg)

![](images/266fdca691e67acd57a78a751f3351395919d82e3528b831e5229bdb038c3fb1.jpg)
Figure 9. $\mathrm{DU} = 98.7\%$. Left: Precipitation profile for triangular spacing with sprinkler distance $10\mathrm{m}$ and lateral distance $20\mathrm{m}$. Right: The 3D form of the precipitation profile.
![](images/1c1d9d52e78296ed1b4146c2f61fbdf87c16b3855a201b6e4851fd3fec7e833d.jpg)

Considering that the field $(30\mathrm{m} \times 80\mathrm{m})$ is not large enough to implement triangular spacing when the pipe is $20\mathrm{m}$ long, we use rectangular spacing, whose DU is only negligibly lower ($98.1\%$ vs. $98.7\%$). Before we lay out the pipe set, we should first determine the maximum distance from the edge of the field to the sprinklers such that the DU is acceptable. We simulate a catch-can test on the rectangular region at the edge of the field (Figure 10).

![](images/1bc9dc7cb387b4613a8111e1c50bab7053a57eba4c9b642994f7c77ec2a10742.jpg)
Figure 10. Region to simulate a catch-can test.

The result is in Figure 11. The maximum distance between the edge of the field and the sprinklers is $5\mathrm{m}$, with an acceptable DU of $83\%$.

![](images/9f990a05cd233f50a48d8c2d01e678f263369ee6203f8274a52ae96f0f098a03.jpg)
Figure 11. DU vs. distance from the edge to the sprinkler in two different situations.

# Layout of the Pipe Set

Having three or four sprinklers along the pipe makes only a slight difference. Considering that the spraying radius ($19\mathrm{m}$) is too large compared with the sprinkler distance of $6.67\mathrm{m}$ for four sprinklers, we choose three. Thus, there are only two feasible layouts (Figures 12 and 13). Layout 1 requires five moves and setups of the pipes, while Layout 2 requires six.

![](images/2399e625763693537149e82c95a76f517dd3b46f3f5c33d0a7e98916906a86d4.jpg)
Figure 12. Layout 1.

![](images/c00e6fc1faa2fac10c22069c5a90120a7cc428ac70989b7d81fb9ceb97636e76.jpg)
Figure 13. Layout 2.

![](images/e7a6eab04a491e9103a64731cb00dac5c076bb6385256b6cf9944d7624b21f3b.jpg)
Figure 14. Catch-can test simulation result of Layout 1.

![](images/82e4648f57d6c2c01699361518bf6e4373ca4537abd533f44ecb85d3f106c22c.jpg)
Figure 15. Catch-can test simulation result of Layout 2.

We abandon Layout 1 because it has a very poor DU (Figure 14).
After slightly changing the lateral distance in Layout 2 (Figure 15), we finally achieve a best DU of $89.5\%$ in Layout 3 (Figure 16).

Then, if we are brave enough to move some sprinklers outside of the field, we achieve an even higher DU with Layout 4 (Figure 17).

![](images/41a036d1caf1aee17b254cda0e472693d1010121d930531559b4427555710d89.jpg)
Figure 16. Upper: Layout 3. Lower: Catch-can test simulation result of Layout 3.

![](images/3bd17a3f411d36b4c9e0d4d91fb994bb974a15ccd264aac8f3140ef222c830ab.jpg)
Figure 17. Upper: Layout 4. Lower: Catch-can test simulation result of Layout 4.

# Calculation of the Precipitation

To meet the problem's constraints that in any part of the field,

- Constraint A: the total precipitation is $\geq 2\mathrm{cm}$ per 4 days, and
- Constraint B: the precipitation rate is $\leq 0.75\mathrm{cm/h}$,

we should calculate the precipitation rate of the system in Layout 4 before scheduling the intervals for irrigating the field and moving the pipes. The precipitation area of a sprinkler is a circle of radius $R$, as Figure 18 shows.

![](images/2abf10b47fa81aa518901ebb83a7cdd2905b92cdb2d5ad1537ffa3e0c85a5a08.jpg)
Figure 18. Precipitation area of a single sprinkler.

The profile of the precipitation rate distribution is in Figure 5. To figure out the precipitation rate at a point at distance $r$ from the sprinkler, we first calculate the angle $\alpha$ in Figure 5. Normalizing the distribution, we get

$$
\int_{0}^{R} [(R - r) \tan \alpha] \, 2 \pi r \, dr = 1,
$$

so

$$
\tan \alpha = \frac{3}{\pi R^{3}}.
$$

Then the precipitation rate is

$$
\operatorname{pr}(r) = v (R - r) \tan \alpha = \frac{3 (R - r)}{\pi R^{3}} v,
$$

where $R = 19\,\text{m}$ and $v = 50\,\text{L}/\text{min}$ (each of the three sprinklers discharges one-third of the source's $150\,\text{L}/\text{min}$).
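The normalization can be verified numerically (a sketch in pure Python): integrating $\operatorname{pr}(r)$ over the wetted disk of radius $R$ should recover the discharge $v$.

```python
import math

R = 19.0          # spraying range (m)
v = 50.0          # discharge of one sprinkler (L/min)
tan_alpha = 3.0 / (math.pi * R**3)

def pr(r):
    """Precipitation rate at distance r, per unit area: v*(R - r)*tan(alpha)."""
    return v * (R - r) * tan_alpha

# Midpoint-rule integral of pr over the disk: sum of pr(r) * 2*pi*r * dr.
n = 10_000
dr = R / n
total = sum(pr((k + 0.5) * dr) * 2 * math.pi * ((k + 0.5) * dr) * dr
            for k in range(n))
# total comes out equal to v, confirming tan(alpha) = 3 / (pi R^3).
```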
With Matlab, we calculated the precipitation rate at each point in the $80\,\text{m} \times 30\,\text{m}$ field, with the irrigation system working only once (Figure 19 Right) and after a complete cycle of moving the equipment and irrigating (Figure 19 Left).

Figure 19. Precipitation rate of water in the field with Layout 4.
![](images/c6a170a4a54c8b393dafe0aa6c48db7dbab52813afbd457b5683c73a616758c5.jpg)
Left: The effect of six pipes together. Right: The effect of a $20\mathrm{m}$ pipe working alone.

![](images/1eae386ac00e6f901c6f9acdff890d31087b0cb63914fde4b6cd4775187748ec.jpg)

# Scheduling the Irrigation Time

Figure 19 Right shows that when the sprinklers (only one $20\mathrm{-m}$ pipe at a time) are working, the maximum precipitation rate is $0.02561~\mathrm{cm/min}$. To satisfy Constraint B, the period of irrigation should be less than

$$
\frac{0.75\,\mathrm{cm/h}}{0.02561\,\mathrm{cm/min}} \approx 29\,\mathrm{min/h}.
$$

Because the shorter the interval of irrigation, the more frequently the farmer must move the pipe, we choose a large but easy-to-implement interval: $25\,\mathrm{min/h}$. Figure 19 Left shows that after a complete cycle of irrigation of the whole field, the minimum precipitation rate is $0.0173~\mathrm{cm/min}$. To satisfy Constraint A, the total irrigation time should be longer than

$$
\frac{2\,\mathrm{cm}/4\,\mathrm{d}}{0.0173\,\mathrm{cm/min}} \approx 116\,\mathrm{min}/4\,\mathrm{d}.
$$

To meet this requirement, we irrigate the same place five times, each time for 25 min, or 125 min in all. Using the same method, we calculate the same parameters for Layout 3.

Layout 4 not only has a higher DU than Layout 3 but also saves $17\%$ of the water, because Layout 3 has a smaller minimum precipitation rate ($0.0160$ vs. $0.0173\,\mathrm{cm/min}$), which leads to more irrigation time (150 min vs. 125 min) to satisfy Constraint A. So we choose Layout 4 as our solution.
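The scheduling arithmetic above condenses into a few lines (the two rates are read off Figure 19; the 25 min/h duty cycle is the choice made in the text):

```python
import math

max_rate = 0.02561   # cm/min with one pipe running (Figure 19 Right)
min_rate = 0.0173    # cm/min over a full cycle (Figure 19 Left)

# Constraint B (<= 0.75 cm/h anywhere) caps the minutes of watering per hour.
cap = math.floor(0.75 / max_rate)        # 29 min/h; rounded down to 25 for ease
duty = 25

# Constraint A (>= 2 cm per 4 days) sets the total watering minutes needed.
needed = math.ceil(2.0 / min_rate)       # 116 min per 4-day cycle
passes = math.ceil(needed / duty)        # 5 passes of 25 min each
total = passes * duty                    # 125 min per 4-day cycle
```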
# Stage 3: Schedule Design

Our schedules can achieve a DU as high as $92.1\%$. We give two concrete timetables; one requires considerably less moving time (3 h vs. 15 h per 4-day cycle) but waters less evenly on average. Both schedules have a 12.5-h irrigation time in one cycle, with a DU of $92\%$. Using a sprinkler with a sector mechanism, we can control the range of the sprinklers at the edges of the field, to reduce water waste. [EDITOR'S NOTE: We omit the tables.]

# Strengths and Weaknesses

# Strengths

- We use real data on sprinklers to determine the number of sprinklers.
- We establish a model based on the engineering knowledge of sprinklers, find the overall precipitation distribution, and then find an optimal schedule.
- Our model for the layout of the irrigation system is sprinkler-independent. If a sprinkler's precipitation profile is known, we can determine the precipitation profile across the whole field.
- The placement and schedule are clear and easy to implement.

# Weaknesses

- Water pressure in the pipe may vary, and so the discharges of the sprinklers may not be exactly the same.

# References

Bernoulli's equation. Wikipedia. http://en.wikipedia.org/wiki/Bernoulli%27s_equation.

Merkley, Gary P., and Richard G. Allen. 2004. Sprinkle and Trickle Irrigation Lecture Notes. Utah State University, Fall Semester 2004. http://www.irri-net.org/documents/sprinkle%20and%20trickle%20irrigation.pdf; pp. 25-27, 29, 40.

Rain Bird Agricultural Products. n.d. Distribution uniformity for sprinkler irrigation. http://www.rainbird.com/ag/du.htm.

Zhu, Y.Z., J.L. Shi, Y.S. Dou, et al. 1989. Handbook of Sprinkling Irrigation Engineering. Beijing: Water Resources and Electric Power Press (in Chinese); pp. 32-36, 40-42, 44.

![](images/8afa48113324d6ce97b06f4c5ca1ecf43862a8db57b83d87fcb1f68d5a227bda.jpg)
Team members Wang Cheng, Wen Ye, and Yu Yintao.
![](images/2b85cded0b1f361f17791a4f994ce4a8b0238cbadb22fc62424977b5f28200d0.jpg)
Team advisor Song Baorui.

# Optimization of Irrigation

Bryan J.W. Bell
Yaroslav Gelfand
Simpson H. Wong
University of California
Davis, CA

Advisor: Sarah A. Williams

# Summary

We determine a schedule for a hand-move irrigation system to minimize the time to irrigate a $30\mathrm{m}\times 80\mathrm{m}$ field, using a single $20\mathrm{-m}$ pipeset with a 10-cm-diameter tube and 0.6-cm-diameter rotating spray nozzles. The schedule should involve a minimal number of moves, and the resulting application of water should be as uniform as possible. No part of the field should receive water at a rate exceeding $0.75\mathrm{cm}$ per hour, nor receive less than $2\mathrm{cm}$ in a four-day irrigation cycle. The pump has a pressure of $420\mathrm{kPa}$ and a flow rate of $150\mathrm{L/min}$.

The sprinklers have a throw radius of $13.4\mathrm{m}$. With a riser height of 30 in, the field can be irrigated in $48\mathrm{h}$ over four days. Moreover, a single sprinkler per pipeset is optimal. The pipes should be moved every $5\mathrm{h}$ and be at least $21\mathrm{m}$ apart. The resulting irrigation has a precipitation uniformity coefficient of 0.89 (where 1 would be maximum uniformity).

We deal with each constraint in turn. Using geometrical analysis, we convert the coverage problem to determining the least number of equal-sized circles that can cover the field. We perturb the solution to optimize uniformity, by applying a Simultaneous Perturbation Stochastic Approximation (SPSA) optimization algorithm. We perturb this solution further to find the minimal number of pipe setups, by experimentally "fitting" the pipesets through the sprinklers. The rationale for the perturbation is that some drop in uniformity can be tolerated in favor of minimizing the number of setups while still ensuring that we irrigate the entire field.
We feed the optimal layout of pipe setups to another algorithm that generates an irrigation schedule for moving the pipes.

The UMAP Journal 27 (3) (2006) 285-294. ©Copyright 2006 by COMAP, Inc. All rights reserved. Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice. Abstracting with credit is permitted, but copyrights for components of this work owned by others than COMAP must be honored. To copy otherwise, to republish, to post on servers, or to redistribute to lists requires prior permission from COMAP.

# Assumptions

# Main Assumptions

- The sprinkler has a throw radius of $13.4 \mathrm{~m}$.
- Zero wind conditions. While wind affects precipitation uniformity, we do not explore this factor.
- The field is reasonably flat, which allows us to assume equal water pressure at the sprinkler nozzles.
- The rancher operates in 12-h workdays.
- All sprinklers operate on a 30-in riser. This is the most common riser configuration that we found.
- The rancher does not use the sprinklers when it rains.
- The sprinkler application rate profile is semi-uniform for the rotating spray nozzle sprinkler.

# Other Assumptions

- There is only one accessible water source.
- The boundary of the field is lined with pipes that connect to the water source.
- Each pipe placement must be perpendicular to and touching one of the field boundaries.
- Setup time for the pipeset does not take more than an hour.
- The flow rate and water pressure from the source remain constant.
- As nozzle size increases, so does the flow rate loss.
- All sprinklers operate identically and do not malfunction.
- The throw radius of the sprinklers may exceed the bounds of the field.
- No sprinkler is placed outside the boundaries of the field.
- At the end of the workday, the sprinklers are shut off; the pipeset need not be disassembled.

![](images/b3b0d3ab18e845c70fbcc4cac75ab43576f45116ba2756f36d2ef4278455ab70.jpg)
Figure 1. Sprinkler profile [Zoldoske and Solomon 1988].

# Possible Sprinkler Profiles

We did not find any sprinkler application rate profiles for a $0.6\mathrm{cm}$ nozzle, but we did find a profile for the Nelson Wind Fighter WF16 with a #16 Red nozzle $(1/8'' \approx 0.3\mathrm{cm})$; Figure 1 shows its application rate profile at 60 psi.

We assume that the profile for a $1/4''$ ($\approx 0.6\,\text{cm}$) nozzle would be similar but with a higher application rate. The flow rate for the WF16 with a $1/8''$ nozzle is $3.42\,\text{gal/min}$; for our nozzle diameter of $0.6\,\text{cm} = 0.236''$, we have a flow rate of $12.57\,\text{gal/min}$, so we estimate that the application rate is 3.3 times as great, taking into account an increased loss due to the sprinkler (see later for the formulas used).

# Model

# Overall Approach

First, we tackle the requirement of sprinkling the entire field. Using geometrical analysis, we reduce this problem to a covering problem, which translates to finding the least number of equal-sized circles that can cover a given area.

However, this solution results in placing some sprinklers outside the field boundaries, so we perturb the solution to readjust the placement while maintaining complete coverage of the field.

We then use this new solution as a blueprint for finding the minimal number of pipe setups by experimentally "fitting" the pipesets through the sprinklers (if possible). We use an algorithm that iteratively perturbs the sprinkler layout and finds the minimum number of pipe setups. After a specified number of iterations, the algorithm outputs the minimum found.
The rationale for the perturbation is that we are willing to sacrifice some uniformity in order to find the least number of setups, while simultaneously ensuring that we still irrigate the entire field.

We feed the layout of pipe setups to another algorithm to generate an irrigation schedule.

# Simulating Sprinkler Irrigation

Given the sprinkler positions, the sprinkler precipitation profile, and the length of time that they are on, we simulate in Matlab the sprinkler irrigation of the field. Figure 2 shows the output for the sprinkler profile of Figure 1 with the sprinkler running for $1\mathrm{h}$.

![](images/86ba49312344bfe28e26cb096c8a4f477d1c16737854c444562f7ff284a6d2db.jpg)
Figure 2. Matlab simulation of sprinkler irrigation over $1\mathrm{h}$.

To represent the field, we use a matrix of cells. For the simulation, we have a list of sprinkler positions, and for each sprinkler we specify how long it runs.

We iterate through the list of sprinkler positions; for each, we simulate the precipitation due to the sprinkler.

To simulate precipitation from a sprinkler, we use a simple nested for loop to iterate through the cells within the wetted radius of the sprinkler. For each cell, we compute the distance to the sprinkler and then use the given sprinkler precipitation rate profile and the length of time the sprinkler runs to calculate the additional precipitation received by that cell.

# Complete Sprinkler Coverage of Field

No area of the field may receive less than $2\mathrm{cm}$ of water every four days. To accomplish this, we think of a sprinkler's wetted area as a circular disk located in a rectangle representing the field. The problem then reduces to covering this rectangle with disks. However, because the distribution profile for the sprinkler is nearly uniform (see Figure 2), allowing for radial overlaps will disturb overall uniformity and increase the number of sprinklers needed to cover the field.
Hence, it is best to minimize overlap by minimizing the number of sprinklers while ensuring that every part of the field is completely covered. This can be restated as a covering problem, in which we find the smallest number of equal-sized circular disks that can cover a given rectangle. Figure 3 displays this solution; no other configuration of circular disks can cover this rectangle more efficiently [Kershner 1939].

![](images/57c8831d3971a3ff3a87d14961c9ea6924a9da94cde03af9e9285dbf6a962a99.jpg)
Figure 3. Hexagonal covering.

# Adjustment of the Covering Problem Solution

This solution, however, is not completely useful, since it would result in some sprinklers being positioned outside the field. Moreover, adjusting sprinkler positions can also increase uniformity. In Figure 4, we show a possible placement of the field with the solution of the covering problem.

We need to adjust this solution, to ensure that the sprinklers are inside the field while covering it as uniformly as possible. From the hexagonal covering pattern (Figure 3), we know that the center of circle 1 is supposed to be located at a distance of $X = R\sqrt{3}/2$ to the right of the field's left boundary. From spline interpolation of the data from the sprinkler coverage profile, we estimate $R \approx 13.4 \, \text{m}$. But at $13.4 \, \text{m}$, the precipitation rate is zero; we not only want the rate greater than zero but also want the total application to reach $2 \, \text{cm}$ in $5 \, \text{h}$. Placing this constraint on the precipitation rate, we find an effective radius of only $11.5 \, \text{m}$; anything beyond this will not receive the required coverage. Thus, to achieve the most complete coverage, we calculate the $x$-coordinate of circle 1 using $R = 11.5 \, \text{m}$ and then place all the other circles in the left row with the regular spacing of $13.4 \, \text{m}$.
This will result in a shift of the left row $13.4\mathrm{m} - 11.5\mathrm{m} = 1.9\mathrm{m}$ to the left, thus not only keeping the sprinklers within bounds but also increasing precipitation uniformity.

![](images/b22725fa5494b5a110f58a62e3607a19cb740acd8abecbc3b0745eccd498b602.jpg)
Figure 4. Hexagonal covering of the field.

# Maximizing Precipitation Uniformity

Applying water to the field as uniformly as possible is a main concern, since doing so leads to efficient crop yields [Lamm 1998]. We tackle this optimization problem using a Simultaneous Perturbation Stochastic Approximation (SPSA) optimization algorithm [Spall 2006], with which we minimize the standard deviation from the average precipitation in the field. Figure 5 shows the result.

![](images/a83ad6c088f5d041e0628b03a7598cf37e913059e3ce2b08fe3625cbe0794a33.jpg)
Random initial placement.

![](images/7036a2f4481924563574f3b9f59893437f68aeecd3dd497f3f2c01537fe42851.jpg)
After 500 iterations.

![](images/18f3d62dad3c7436aa98b8baa1c2bc8d92f848ec8d74bffb0954ba74bb6d91d5.jpg)
After 5,000 iterations.
Figure 5. Sprinkler placement from the SPSA algorithm after a specified number of iterations.

After 5,000 iterations, the solution seems to mimic the shifted covering-problem solution—that is, our shifting method achieves a uniformity coefficient that adequately approximates the one from the SPSA, and hence yields a sprinkler layout that approximately maximizes uniformity.

# Algorithm: Minimization of Pipe Setups

This algorithm takes as input the coordinates of the sprinkler positions as determined by our approximation to the SPSA solution. From this layout, it minimizes the number of pipe setups by first selecting the sprinkler closest to the upper corner of the field, calculating the distances of all other sprinklers from it, and selecting the sprinkler with the shortest lateral distance (we cannot place a pipe diagonally). If this distance is less than or equal to the pipe length
(20 m), the algorithm calculates the precipitation rates at points located within the overlapping radii; if a rate exceeds 0.75 cm/h, the algorithm goes on to the next-closest sprinkler.

# Irrigation System Calculations

The problem statement specifies that the mainline pipe used in the hand-move system is aluminum with diameter $10\mathrm{cm}$, the sprinkler nozzle size is $0.6\mathrm{cm}$, and the water source has a pressure of $420\mathrm{kPa}$ and a possible flow rate of $150\mathrm{L/min}$. For our calculations, we use the following formulas from Rain Bird Agri-Products Co. [2001].

# Hazen-Williams

$$
\mathrm{Pressure\ Loss\ (psi)} = 4.55 \frac{\left(\frac{Q}{C}\right)^{1.852}}{(ID)^{4.87}} L,
$$

where

$Q$ = pipe flow (gal/min),

$C$ = roughness coefficient (aluminum with couplers = 120),

$ID$ = pipe inside diameter (in), and

$L$ = pipe length (ft).

# Nozzle Discharge

$$
\mathrm{Discharge\ (gpm)} = 29.82 \sqrt{P} D^{2} C_{d},
$$

where

$P$ = nozzle pressure (psi),

$D$ = nozzle orifice diameter (in), and

$C_{d}$ = nozzle discharge coefficient (tapered $\approx 0.96$ or $0.98$).

Since $1\mathrm{kPa} = 0.145$ psi, the system pressure is at most $60.9~\mathrm{psi}$. The nozzle size is $(0.6\mathrm{cm}) / (2.54\mathrm{cm/in}) = 0.236$ in; assuming a nozzle discharge coefficient of 0.97, we obtain a flow rate per sprinkler of $12.6\mathrm{gal/min} = 47.58\mathrm{L/min}$. The pressure loss due to the mainline pipe, assuming four sprinklers, is only $0.012\mathrm{psi}$, which can be neglected. We assume that each sprinkler is on a 30-in riser and that the riser is a 1-in-diameter steel pipe.
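Both formulas wrap naturally into a short sketch (Python; the discharge coefficient 0.97 and the 20-m ($\approx$ 65.6 ft) pipe length are the values assumed in the text):

```python
import math

def nozzle_discharge_gpm(p_psi, d_in, cd=0.97):
    """Rain Bird nozzle-discharge formula: 29.82 * sqrt(P) * D^2 * Cd."""
    return 29.82 * math.sqrt(p_psi) * d_in**2 * cd

def hazen_williams_loss_psi(q_gpm, length_ft, c=120, id_in=10 / 2.54):
    """Hazen-Williams pressure loss; C = 120 for aluminum with couplers."""
    return 4.55 * (q_gpm / c)**1.852 / id_in**4.87 * length_ft

p_psi = 420 * 0.145                             # 420 kPa -> 60.9 psi
q = nozzle_discharge_gpm(p_psi, 0.6 / 2.54)     # about 12.6 gal/min per sprinkler
loss = hazen_williams_loss_psi(q, 20 / 0.3048)  # tiny loss over one 20-m pipe
```

The computed mainline loss for a single nozzle's flow is a few thousandths of a psi, which supports the decision to neglect pipe friction in the rest of the model.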
The pressure loss assuming a flow of $12.6\mathrm{gal/min}$ is $0.058\mathrm{psi}$; thus, we also can neglect pressure loss due to the riser.

# Results

Our model generates the optimal pipeset configuration shown in Figure 6. This configuration consists of 8 pipe movements, each at intervals of $5\mathrm{h}$ (with an assumed $1\mathrm{h}$ for moving and setting up the equipment), and results in a total irrigation time of $48\mathrm{h}$ every 4 days, or approximately $12\mathrm{h/d}$. Each pipeset contains only one sprinkler; if the sprinklers were closer than $21\mathrm{m}$ apart, the overlap of their wetted areas would yield precipitation greater than $0.75\mathrm{cm/h}$. Less than $1\%$ of the field receives an insufficient amount ($< 2\mathrm{cm}$) of water, in areas at the edges, where the crop could easily be damaged by other factors.

Table 1 shows the generated irrigation schedule for the repositioning of the sprinklers, given a 12-h workday for a rancher. Each pipe is set in place for $5\mathrm{h}$. [EDITOR'S NOTE: We omit the table giving the irrigation schedule and coordinates for the sprinklers.]

Based on our assumptions and the design of our algorithm, there is no faster way to irrigate this field while maintaining such a high measure of uniformity.

As expected, this result is consistent with the earlier analysis, given our assumed sprinkler distribution profile. Our solution yields a uniformity coefficient of 0.89, unsurprisingly close to the optimal value of 0.90 generated by the SPSA after 5,000 iterations.

![](images/12e9bda8b6d8e72c397e05d680e3b2505a438ef5baa0c47384831ddeee3f8089.jpg)
Figure 6. Best sprinkler placement, with pipe sets shown as black lines.

# Weaknesses

- The model does not account for change in the sprinklers' profile due to wind, which could lead to a completely different model for sprinkler placement.
- SPSA ideally could have produced the best possible solution, but we did not have time to run it for enough iterations. Using FORTRAN would increase the speed of calculations by a factor of about 10,000.
- The rancher has to work $12 \mathrm{~h} / \mathrm{d}$ , not $8 \mathrm{~h} / \mathrm{d}$ .
- The rotating spray nozzle profile in our model is half the size of the one prescribed in the problem statement. Even though we scale the precipitation rate, there is no guarantee that the sprinkler profile would not change with flow rate.
- The Coefficient of Uniformity (CU) measures only the average deviation from mean coverage. If the area is overwatered due to the overlap of sprinklers placed at different times, or overwatered in one spot and underwatered in another, the CU may come out the same [Zoldoske and Solomon 1988]. Thus, CU is not the most accurate way to measure uniformity of water application, especially for our purposes, since we care less about minor overwatering than about underwatering.
- We could not validate our model in real-life conditions.

# References

Kershner, R. 1939. The number of circles covering a set. *American Journal of Mathematics* 61: 665-671.

Lamm, Freddie R. 1998. Uniformity of in-canopy center pivot sprinkler irrigation. http://www.oznet.ksu.edu/irrigate/UICCP98.html.

Nelson Irrigation Corporation. 2001. Nelson R2000WF Rotator performance. http://www.nelsonirrigation.com/data/products/ACF25.pdf.

Rain Bird Agri-Products Co. 2001. Sprinkler and micro irrigation engineering formulas—U.S. units. http://www.rainbird.com/pdf/ag/FormulasUS.pdf.

Spall, James C. 2006. Simultaneous perturbation stochastic approximation: SPSA, a method for system optimization. http://www.jhuapl.edu/SPSA/.

Zoldoske, David F., and Kenneth H. Solomon. 1988. Coefficient of uniformity—What it tells us. *Irrigation Notes* (Center for Irrigation Technology, California State University, Fresno, CA). http://cati.csufresno.edu/cit/rese/88/880106/.
+ +![](images/4563f596670fa65f81b6d4c6b15aa54e9a0e809b828454c4a80c1f241de72cdd.jpg) +Team members Bryan Bell, Simpson Wong, and Yaroslav Gelfand. + +![](images/2948ad27c84e0f184a62dd076ea688c7433a3b873a9ad9aa7f411b57cada715f.jpg) +Team advisor Sarah A. Williams. + +# Sprinkle, Sprinkle, Little Yard + +Brian Camley + +Bradley Klingenberg + +Pascal Getreuer + +University of Colorado + +Boulder, CO + +Advisor: Bengt Fornberg + +# Summary + +We determine an optimal algorithm for irrigating an $80\mathrm{m}\times 30\mathrm{m}$ field using a hand-move 20-m pipe set, using a combination of analytical arguments and simulated annealing. We minimize the number of times that the pipe is moved and maximize the Christiansen uniformity coefficient of the watering. + +We model flow from a sprinkler as flow from a pipe combined with projectile motion with air resistance; doing so predicts a range and distribution consistent with data from the literature. We determine the position of sprinkler heads on a pipe to optimize uniformity of watering; our results are consistent with predictions from both simulated annealing and Nelder-Mead optimization. + +Using an averaging technique inspired by radial basis functions, we prove that periodic spacing of pipe locations maximizes uniformity. Numerical simulation supports this result; we construct a sequence of irrigation steps and show that both the uniformity and number of steps required are locally optimal. + +To prevent overwatering, we cannot leave the pipe in a single location until the minimum watering requirement for that region is met; to water sufficiently, we must water in several passes. The number of passes is minimized as uniformity is maximized. + +We propose watering the field with four repetitions of five steps, each step lasting roughly $30\mathrm{min}$ . We place two sprinkler heads on the pipe, one at each end. 
The five steps are uniformly spaced along the long direction of the field, with the first step at the field boundary. The pipe locations are centered in the short direction. This strategy requires only 20 steps and has a Christiansen uniformity coefficient of 94, well above the commercial irrigation minimum of 80. Simulated annealing to maximize uniformity of watering re-creates our solution from a random initialization.

The consistency between solutions from numerical optimization and from analytical techniques suggests that our result is at least a local optimum. Moreover, the solution remains optimal upon varying the sprinkler profile, indicating that the results are not overly sensitive to our initial assumptions.

# Introduction

Maximizing the uniformity of irrigation reduces the amount of water needed [Ascough and Kiker 2002]. We use a $420\mathrm{-kPa}$ , $150\mathrm{-L / min}$ water source and a $20\mathrm{-m}$ hand-move pipe set for an $80\mathrm{m} \times 30\mathrm{m}$ field. We determine the number and placement of sprinkler heads on the pipe, together with a schedule of watering locations and times that maximizes uniformity and minimizes the time required.

# Initial Assumptions

- We live in a boring place. We have a flat, windless, weatherless field. Though wind is often an influential factor, uniformity can be corrected to compensate for wind [Tarjuelo Martín-Benito et al. 1992].
- Time required = number of moves. We attempt to minimize the number of moves; we do not consider any other kind of effort, such as minimizing the total distance that the pipe must be moved.
- Our sprinkler heads are ideal. The distribution of water from the sprinkler heads is radially symmetric and the same for every head.
- Average, not instantaneous overwatering. We can water an area for half an hour at a rate of $1.5 \mathrm{~cm} / \mathrm{h}$ , then leave it for half an hour, and this would not constitute overwatering.
Without this assumption, it is impossible to meet the constraints on watering.
- The pipe must stay within the field. Our pipe locations remain completely on the field, though we allow water to fall off the field.

# Slow-Watering and Fast-Watering

We break watering techniques into two categories:

- Slow-watering. Keep the pipe in one section of the field until it has been watered sufficiently; that is, we water the field in one pass.
- Fast-watering. Make multiple short passes, waiting for the field to absorb the water between runs.

Slow-watering minimizes effort (the number of times we move the pipe), while fast-watering requires extra moves. Fast-watering also carries the physical risk of washing away the topsoil, but we ignore this.

With the given constraints, we cannot create a slow-watering solution. To irrigate the field in one pass, we must keep the pipe in one position until the minimum is met. We should water at a rate no greater than $0.75\mathrm{cm / h}$ . But the rate of water flow, $150\mathrm{L / min} = 9\times 10^{6}\mathrm{cm}^{3} / \mathrm{h}$ , amounts to $0.375\mathrm{cm / h}$ over the entire field, or half the field at the maximum rate. However, our sprinkler cannot cover so great an area, hence cannot help overwatering the area that it reaches if we water for an hour or more.

We are forced to choose a fast-watering technique, which involves several passes over the field.

# Judging the Quality of Solutions for Fast Watering

What solution is best? We want to minimize the number of times that we move the pipe, which is the number of passes required times the number of pipe locations in each pass.

How many passes? The number of passes is determined by the minimum watering criterion.
If the minimum application rate is $S_{\mathrm{min}}$ , then to make sure that every location receives the minimum 2 cm of water, we need

$$
S _ {\min } t \times (\text{number of passes}) = 2 \mathrm {~cm},
$$

where $t$ is the watering time.

How long to water? We choose $t$ so that we don't overwater. With, in one pass, a maximum application rate of $S_{\mathrm{max}} \, \mathrm{cm/h}$ , we can water only long enough to apply $0.75 \, \mathrm{cm}$ , the maximum possible in an hour:

$$
S _ {\max } t = 0.75 \mathrm {~cm}.
$$

Combining the two equations, we find

$$
\text{number of passes} = \left\lceil \frac {8}{3} \frac {S _ {\max}}{S _ {\min}} \right\rceil . \tag {1}
$$

The ratio $S_{\mathrm{max}} / S_{\mathrm{min}}$ decreases with increasing uniformity. In other words, an increase in uniformity decreases the number of moves required.

# Christiansen Coefficient of Uniformity

The ratio $S_{\mathrm{min}} / S_{\mathrm{max}}$ is not a typical measurement of uniformity. We also use the Christiansen coefficient of uniformity, the most broadly used and well-recognized criterion for uniformity of watering [Ascough and Kiker 2002; Tarjuelo Martin-Benito et al. 1992]:

$$
C U = 100 \left(1 - \frac {\sigma_ {S}}{\langle S \rangle}\right),
$$

where $\sigma_S$ is the standard deviation of the application rate during a pass and $\langle S\rangle$ is the mean.

# Summing over Sprinklers

We determine $S(\vec{x})$ by superimposing the water flows from the sprinkler heads, writing $\varphi(|\vec{x} - \vec{x_1}|)$ for the application rate (cm/h) at position $\vec{x}$ due to the sprinkler head at $\vec{x_1}$ :

$$
S (\vec {x}) = \sum_ {k} t _ {k} \varphi \left(\left| \vec {x} - \vec {x _ {k}} \right|\right),
$$

where $t_k$ is the time spent at sprinkler head $k$ .
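As a quick numerical illustration of equation (1) and the CU formula, here is a short Python sketch; the application rates in it are made-up values for illustration, not measurements from this paper.

```python
import math

def number_of_passes(s_max, s_min):
    """Equation (1): ceil((8/3) * S_max / S_min) passes, so that every
    point gets 2 cm while no pass applies more than 0.75 cm in an hour."""
    return math.ceil((8 / 3) * s_max / s_min)

def christiansen_cu(rates):
    """CU = 100 * (1 - sigma_S / <S>) over a sample of application rates."""
    mean = sum(rates) / len(rates)
    sigma = math.sqrt(sum((r - mean) ** 2 for r in rates) / len(rates))
    return 100 * (1 - sigma / mean)

# Hypothetical application rates (cm/h) sampled across the field.
rates = [0.55, 0.60, 0.65, 0.60, 0.50]
print(number_of_passes(max(rates), min(rates)))   # → 4
print(round(christiansen_cu(rates), 1))           # → 91.2
```

Shrinking the spread of the rates raises CU, lowers $S_{\mathrm{max}}/S_{\mathrm{min}}$ , and hence lowers the pass count.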
# Determining the Sprinkler Profile $\varphi(r)$

To optimize the layout, we must know how the sprinkler applies water as a function of distance, $\varphi (r)$ . This is a complicated function that depends on the sprinkler type and the pressure in the line; its form is not well known, and it is often simulated numerically [Carrion et al. 2001].

# The Linear Model

A first guess at the sprinkler function would be a simple decreasing linear function. In fact, this is reasonably consistent with measured data (Figure 1).

![](images/565617fe06a81177551db4f184b2749ccbb25a27a8948f8fca25fb269e070c64.jpg)
Figure 1. The sprinkler function is approximately linear (redrawn from Carrión et al. [2001]).

The linear approximation allows for simple solutions; in one dimension, it is possible to combine linear functions to produce a uniform water distribution [Smajstrela et al. 1997]. Several other empirical models have been used for $\varphi(r)$ ; Mateos [1998] uses, among others, $\varphi(r) \sim (1 - r^2 / R^2)$ .

# Model of Water Distribution from a Sprinkler

# Output Speed of Sprinklers

We model a sprinkler head, ignoring rotational effects and angle, as a hole in the pipe. Bernoulli's equation states that along a streamline,

$$
P + \frac {1}{2} \rho v ^ {2} + \rho g h = \text{constant},
$$

where

$P$ is the pressure,

$\rho$ is the density of water,

$v$ is the fluid velocity,

$g$ is the acceleration of gravity at the Earth's surface, and

$h$ is the height of the fluid.

We assume that the variation in height is negligible and consider a point within the pipe and a point at the hole. Then, with $P_w$ the water pressure and $P_a$ the atmospheric pressure, $P_w + \frac{1}{2} \rho v_w^2 = P_a + \frac{1}{2} \rho v_o^2$ implies that

$$
v _ {o} ^ {2} = v _ {w} ^ {2} + \frac {2 (P _ {w} - P _ {a})}{\rho}
$$

is the speed of outgoing water for one sprinkler.
Typically, this result is stated as $v_{o} = \sqrt{2gH}$ , where $H = (P_{w} - P_{a}) / (\rho g)$ is the pressure head and the speed of the water in the pipe is considered negligible. In our case, however, $v_{w}$ is significant.

With $n$ sprinklers, the continuity property requires that the flow speed in any one section of the tube is $J / (A_{i}n)$ , where $J$ is the total flux (150 L/min) and $A_{i}$ is the cross-sectional pipe area. Therefore, the output speed is

$$
v _ {n} = \sqrt {\left(\frac {J}{A _ {i} n}\right) ^ {2} + \frac {2 (P _ {w} - P _ {a})}{\rho}},
$$

which ranges from 25 to $40\mathrm{m / s}$ , depending on the number of sprinkler heads.

# Sprinkler Range

The spray remains coherent for a while before breaking up into particles [Carrion et al. 2001; Kranz et al. 2005]. We treat the motion of an outgoing water drop as a projectile problem, first without air resistance, then with air resistance proportional to the square of the speed.

Without air resistance, direct integration of the equations of motion gives the range and flight time for initial speed $v_{o}$ at angle $\theta$ to the horizontal:

$$
\text{range} = \frac {v _ {o} ^ {2}}{g} \sin 2 \theta , \quad \text{time of flight} = \frac {2 v _ {o} \sin \theta}{g}. \tag {2}
$$

Air resistance is often represented using a damping force quadratic in speed; the resulting equations cannot be solved analytically in general [Marion and Thornton 1988; Tan and Wu 1981]. However, in our system, the droplets have very large horizontal speeds and only a small vertical distance to fall. In the limit, we can ignore the vertical drag force, writing

$$
\frac {d v _ {x}}{d t} = - k v _ {x} ^ {2}, \qquad v _ {x} (0) = v _ {o}; \qquad \frac {d v _ {y}}{d t} = - g, \qquad v _ {y} (0) = 0.
$$

The equation in $x$ has solution $v_{x}(t) = 1 / (kt + v_{o}^{-1})$ , which yields

$$
x (t) = \frac {1}{k} \ln \left(k t v _ {o} + 1\right).
$$

This gives us one solution, but we need to consider variations. Different drop sizes have different drag forces, according to the Prandtl expression [Carrion et al. 2001; Marion and Thornton 1988]:

$$
k = \frac {C _ {d} \rho_ {a} A}{2 m},
$$

where

$C_d$ is the dimensionless drag coefficient (on the order of 1),

$\rho_{a}$ is the density of air,

$A$ is the cross-sectional area of the drop, and

$m$ is the mass of the drop.

We model the drop as a sphere of water, so $m = \rho_w(4/3)\pi R^3$ , where $R$ is the radius of the drop. Using $A = \pi R^2$ , we get

$$
k = \frac {3}{8} C _ {d} \frac {\rho_ {\text{air}}}{\rho_ {\text{water}}} \frac {1}{R}. \tag {3}
$$

This means that the distribution of $1 / k$ is exactly the distribution of $R$ , the droplet-size distribution. The distance $x(t)$ is, to first order, proportional to $1 / k$ , so the size distribution of the drops directly controls the distribution of their distances!

The probability that a drop flies a distance $x$ , in terms of the radius, is approximately

$$
P (X = x) \approx \frac {8}{3} \left(\frac {\rho_ {\mathrm {water}}}{\rho_ {\mathrm {air}}}\right) \frac {R}{C _ {d}} \ln \left(k t v _ {o} + 1\right) P (\mathcal {R} = R).
$$

Unfortunately, the distribution $P(\mathcal{R} = R)$ is not known. Raindrops follow the empirical distribution $\lambda \exp(-\lambda R)$ [Marshall and Palmer 1948], but there is no a priori reason to assume that sprinkler droplets do. The droplets are roughly spherical because of their surface tension, so we could also assume a Maxwell-Boltzmann distribution based on surface-tension energy, $P(\mathcal{R} = R) = (1 / Z) \exp(-J\pi R^2 / kT)$ , yielding a normal distribution. The drop-size distribution from fire sprinklers is described as log-normal [Sheppard 2002].

Since we are not certain about the exact distribution, we combine the physical intuition gained from this model with the simplicity of the linear model.
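To see how equation (3) couples droplet size to range, the sketch below evaluates $x(t) = \frac{1}{k}\ln(ktv_o + 1)$ for two droplet radii. The drag coefficient, radii, launch speed, and flight time are all assumed values chosen only to show the trend: larger drops (smaller $k$) travel farther.

```python
import math

RHO_AIR, RHO_WATER = 1.2, 1000.0   # kg/m^3 (assumed standard values)

def drag_k(radius_m, c_d=0.2):
    """Equation (3): k = (3/8) * C_d * (rho_air/rho_water) / R."""
    return (3 / 8) * c_d * (RHO_AIR / RHO_WATER) / radius_m

def horizontal_range(v0, t, k):
    """x(t) = (1/k) ln(k t v0 + 1), the drag-limited horizontal distance."""
    return math.log(k * t * v0 + 1) / k

# Two assumed droplet radii (metres); assumed speed 30 m/s, time 0.3 s.
for radius in (0.0005, 0.001):
    print(radius, round(horizontal_range(30.0, 0.3, drag_k(radius)), 1))
```

The output confirms the monotone relationship: doubling the radius halves $k$ and lengthens the flight.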
# Making the Linear Model Physically Consistent

We use the simple linear distribution but choose its properties to conserve water volume. We choose a linear shape such that:

- the width of the shape depends on the speed, and
- the total leaving the sprinklers is the total water supplied.

These two conditions determine the slope and intercept of the linear approximation. To do this in a realistic way, we must estimate the drag force using (3). We guess that $C_d$ for a water drop in air is 0.2 and assume that the largest radius of a water droplet is 0.05 cm; these are reasonable values [Tan and Wu 1981] that lead to $k = 0.15 \, \mathrm{m}^{-1}$ .

We now develop the $x$ -intercept and $y$ -intercept of the linear profile in terms of the physics. A water drop that flies for $t$ seconds travels

$$
x _ {n} (t) = \frac {1}{k} \ln \left(k t v _ {n} + 1\right)
$$

meters horizontally. This fixes the width of our linear distribution. Now, we ask: What is $t$ ? Normally, we would just calculate the amount of time for a drop to fall. For the $10\mathrm{cm}$ from the top of the pipe to the bottom of the pipe, this would be $0.14\mathrm{s}$ ; however, no sprinkler throws out drops horizontally. When we calculated the distance traveled, we assumed horizontal initial speed; we now correct for this by using the no-air-resistance theory. We choose $t$ as the no-resistance flight time from (2) with the sprinkler at a $45^{\circ}$ angle. This is physically reasonable because air resistance makes only a small correction to flight time [Marion and Thornton 1988].

Using this approximation, we find $18$ to $21\mathrm{m}$ as typical values for $x_{n}$ , the sprinkler "throw," depending on the number of heads on the pipe. These results are consistent with typical sprinklers at pressures around $400\mathrm{kPa}$ [Carrion et al. 2001; Tarjuelo Martin-Benito et al. 1992].
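As a rough check on the quoted throw, the sketch below combines the Bernoulli/continuity output speed with the no-drag $45^{\circ}$ flight time and the drag-limited range. It assumes one possible reading of the text: $A_i$ is the 10-cm mainline cross-section (under which the $v_w$ term is nearly negligible, so this checks only the overall magnitude of $x_n$), with 420 kPa gauge pressure, $J = 150$ L/min, and $k = 0.15\,\mathrm{m}^{-1}$.

```python
import math

# Assumed parameter readings (see lead-in): pipe area, gauge pressure,
# total flux, drag constant, 45-degree launch.
J = 150e-3 / 60            # total flux, m^3/s
A = math.pi * 0.05 ** 2    # pipe cross-section (10 cm diameter), m^2
K = 0.15                   # drag constant, 1/m
G = 9.81                   # m/s^2

def throw(n_heads, gauge_p=4.2e5, rho=1000.0):
    """Sketch of the throw x_n: Bernoulli/continuity output speed,
    no-drag 45-degree flight time, then x(t) = (1/k) ln(k t v + 1)."""
    v = math.sqrt((J / (A * n_heads)) ** 2 + 2 * gauge_p / rho)
    t = 2 * v * math.sin(math.pi / 4) / G
    return math.log(K * t * v + 1) / K

print(round(throw(2), 1))  # → 19.7, inside the 18-21 m band above
```

Under these assumptions the computed throw lands inside the 18-21 m range reported in the text.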
# Conserving Volume

Now that we know $x_{n}$ , we can determine the $y$ -intercept for the linear function. We know that in any unit of time, the amount of water coming into the pipe $(J)$ is the amount of water sprayed by the sprinklers. For a sprinkler profile $\varphi(r)$ , we can write this assumption as

$$
n \int_ {r = 0} ^ {\infty} \int_ {\theta = 0} ^ {2 \pi} \varphi (r) \, r \, d r \, d \theta = 2 \pi n \int_ {r = 0} ^ {\infty} \varphi (r) \, r \, d r = J,
$$

where $J$ is the incoming flux of water, $150\mathrm{L / min}$ . For our linear $\varphi (r)$ , with $\varphi (0) = h_n$ and $\varphi (x_{n}) = 0$ , this is equivalent to

$n$ (volume of cone with height $h_n$ and radius $x_n$ ) = $n\left(\frac{\pi}{3} h_n x_n^2\right) = J$ .

![](images/cd21aa11ee37df855325b80bad64703fa1201cd8ef6ab2b99d78bda8b3949935.jpg)
Figure 2. The modeled water distribution $\varphi (r)$ changes with the number of heads on the pipe (left) and is consistent with the experimental results of Figure 1 (right).

![](images/be7f319b8f10785682ac00206f093c57d30a971c8ab61a19ccac82776746c517.jpg)

This equation lets us fix the height and completely determine the water distribution from a sprinkler:

$$
h _ {n} = \frac {3 J}{\pi n x _ {n} ^ {2}}.
$$

With $x_n = \frac{1}{k} \ln(k t v_n + 1)$ , the sprinkler profile $\varphi (r) = h_n (1 - r / x_n)_+$ is

$$
\varphi (r) = \max \left(\frac {3 J k ^ {2}}{\pi n \left[ \ln \left(k t v _ {n} + 1\right) \right] ^ {2}} \left(1 - \frac {k r}{\ln \left(k t v _ {n} + 1\right)}\right), 0\right). \tag {4}
$$

We illustrate it for a few values of $n$ in Figure 2.

# Radial Approximation

For sprinkler heads along a pipe, we get a roughly elliptical distribution of water application rates. If we approximate this as a radial distribution, we get a one-dimensional profile function. We can then find a set of pipe locations by superimposing these functions and maximizing uniformity.

# Determining the Radial Profile

Let $L$ be the length of the pipe.
We require $\varphi(r)$ to be monotonically decreasing, as is the case for our approximate linear sprinkler profile and for some other distributions, such as the exponential and the normal centered at zero (Figure 3).

Let $P$ be the number of times that we move the pipe (which has $n$ sprinkler heads on it). Let also

$\vec{p}_i$ and $\theta_{i}$ be the position and angle of the pipe at the $i$ th step,

$t_{i}$ be the length of time that the pipe remains at the $i$ th position, and

$s_j$ be the position of the $j$ th head along the pipe, where $-\frac{L}{2} \leq s_j \leq \frac{L}{2}$ .

![](images/5731380d0197b7974b52cc21a8a1b5bae8682a3dad280c222a6b6409260af78f.jpg)
Figure 3. Sprinkler heads deliver water with a radially-symmetric distribution $\varphi(r)$ .

![](images/c828a5e97ec41ccf0eabb78284ed90b5bc6867c3e2d4fbb879b74f24b3d16c37.jpg)

We write a position on a pipe as the position vector of the pipe's center plus a part along the pipe's axis:

$$
\vec {l} (\tau) = \vec {p} + \tau \hat {u} _ {\theta},
$$

where $\hat{u}_{\theta}$ is a unit vector at angle $\theta$ from the positive $x$ -axis,

$$
\hat {u} _ {\theta} = \hat {e} _ {x} \cos \theta + \hat {e} _ {y} \sin \theta .
$$

In this notation, the position of the $j$ th sprinkler on the $i$ th pipe location is $\vec{l}_i(s_j)$ , and the sum of water over the field is

$$
S (\vec {x}) = \sum_ {i = 1} ^ {P} t _ {i} \sum_ {j = 1} ^ {n} \varphi \left(\left| \vec {x} - \vec {l} _ {i} (s _ {j}) \right|\right).
$$

This $S(\vec{x})$ can be interpreted as a radial basis function interpolant [Powell 1987]. We write it as a sum over pipe locations rather than over sprinkler heads:

$$
S (\vec {x}) = \sum_ {i = 1} ^ {P} t _ {i} G _ {i} (\vec {x}),
$$

where $G_{i}(\vec{x})$ is the water distribution from the $i$ th pipe position,

$$
G _ {i} (\vec {x}) = \sum_ {j = 1} ^ {n} \varphi \left(\left| \vec {x} - \vec {l} _ {i} (s _ {j}) \right|\right).
+$$ + +We define an approximation to $G_{i}(\vec{x})$ by breaking the sprinkler heads into infinitesimal pieces: + +$$ +\begin{array}{l} \widetilde {G} _ {i} (\vec {x}) = n \lim _ {k \to \infty} \sum_ {j = 0} ^ {k - 1} \frac {1}{k} \varphi \left(| \vec {x} - \vec {l} _ {i} (s _ {j}) |\right) \\ = n \int_ {- L / 2} ^ {L / 2} \varphi (| \vec {x} - \vec {l} _ {i} (\tau) |) d \tau . \tag {5} \\ \end{array} +$$ + +The second equality follows from assuming that the heads are sufficiently uniform to approximate an integral. The quantity $\widetilde{G}_i$ is approximately $G_{i}$ ; it is the limit of the water distribution as the number of heads becomes infinite while keeping constant the total volume of water that the pipe delivers. + +If $\vec{x}$ is a point along the pipe, $\vec{x} = \vec{p} + t\vec{u}_{\theta}$ , then (5) reduces to + +$$ +\widetilde {G} _ {i} (\vec {x}) = n \int_ {- L / 2} ^ {L / 2} \varphi (| t - \tau |) \mathrm {d} \tau . +$$ + +If $\varphi$ is zero (or approximately zero) outside a radius $r$ , and if $|t| < L - r$ , then the integral is constant (or approximately constant) with respect to $t$ . Hence the water distribution is dominantly characterized by the orthogonal distance from the pipe. Define $\mu(x)$ , the pipe radial water distribution function, as + +$$ +\mu (x) = \widetilde {G} (\hat {e} _ {x} x) = n \int_ {- L / 2} ^ {L / 2} \varphi (\sqrt {x ^ {2} + \tau^ {2}}) d \tau . +$$ + +The function $\mu(r)$ is the water distribution for the pipe, in analogy with the distribution $\varphi(r)$ for a sprinkler head. We use this function to approximate $G_{i}$ , again using the analogy with radial basis functions: + +$G_{i}(\vec{x})\approx \mu (|\vec{x} -\vec{z}|),$ where $\vec{z}$ is the closest point to $\vec{x}$ on the pipe. 
+ +We then use $\mu$ to approximate the total water sum, + +$\widetilde{S} (\vec{x}) = t\sum_{i = 1}^{P}\mu (|\vec{x} -\vec{z}_i|),\qquad \text{where $\vec{z}_i$ is the closest point to $\vec{x}$ on the ith pipe.}$ + +The function $\mu$ is proportional to a smoothed version of $\varphi$ . With $\varphi(x) = \max(\lambda - |x|, 0)$ , we obtain the profile + +$$ +\begin{array}{l} \mu (x) = n \int_ {- L / 2} ^ {L / 2} (\lambda - \sqrt {x ^ {2} + \tau^ {2}}) _ {+} \mathrm {d} \tau \\ = \left\{ \begin{array}{l l} n \left[ \lambda \sqrt {\lambda^ {2} - x ^ {2}} + \frac {x ^ {2}}{2} \ln \frac {\lambda - \sqrt {\lambda^ {2} - x ^ {2}}}{\lambda + \sqrt {\lambda^ {2} - x ^ {2}}} \right], & | x | \leq \lambda ; \\ 0, & \mathrm {o t h e r w i s e .} \end{array} \right. \\ \end{array} +$$ + +The result is a unimodal function symmetric over the domain $[- \lambda, \lambda]$ . + +To indicate the flexibility of the method, we note that if $\varphi$ follows a normal distribution, so does $\mu$ ; or if $\varphi$ is monotonically decreasing, then so is $\mu$ . + +# Optimality of Periodic Solutions + +Consider an irrigation sequence where the pipe is oriented at the same angle $\theta$ and for the same duration $t$ at each step for $P$ steps. At each step, the pipe is + +![](images/489603fac038edd7ae8c0a96f404af39e97ab6ba9ed1918f25cf77196f506726.jpg) +Figure 4. $S(\hat{e}_x x)$ . + +moved laterally $w$ meters, so $\vec{p}_i = \vec{p}_0 + iw\vec{u}_{\theta + \pi / 2}$ . If, in the direction parallel to the pipe, $\vec{x}$ is not beyond the endpoints, then + +$$ +S (\vec {x}) = t \sum_ {i = 1} ^ {P} \mu \left(\left| \vec {u} _ {\theta + \pi / 2} \cdot \left(\vec {p} _ {0} - \vec {x}\right) + i w \right|\right). \tag {6} +$$ + +Consider watering a $W \times L$ rectangular area. Define the boundary margin $b$ as the distance between the boundary and the first pipe and the step widths $w = (W - 2b) / (P - 1)$ (Figure 4). 
Let $\vec{p}_0 = b\hat{e}_x$ , $\theta = \frac{\pi}{2}$ , and $\vec{p}_i = \vec{p}_0 + iw\hat{e}_x$ , such that the sequence irrigates the region $R = [0, W] \times [-\frac{L}{2}, \frac{L}{2}]$ (Figure 5).

![](images/f6693a55baf30d6aa08538a75e2d0a8e5f1464a197cad42429b54f4135aca68c.jpg)
Figure 5. A five-step irrigation sequence with parallel pipe orientations.

We apply the "number of passes" uniformity criterion to $S(\vec{x})$ . We maximize this criterion by maximizing its approximation in terms of $\mu(r)$ :

$$
\mathcal {C} _ {u} = \frac {\min _ {\vec {x} \in R} S (\vec {x})}{\max _ {\vec {x} \in R} S (\vec {x})}.
$$

A higher criterion value indicates more uniform irrigation of $R$ .

Since $S(\vec{x})$ is nearly constant in the direction along the pipe, $\mathcal{C}_u$ is closely approximated by

$$
\widetilde {\mathcal {C}} _ {u} = \frac {\min _ {x} S (\hat {e} _ {x} x)}{\max _ {x} S (\hat {e} _ {x} x)}, \tag {7}
$$

that is, $\mathcal{C}_u$ restricted to the east-west line through the middle of $R$ . Suppose that $w$ is wide enough relative to the decay of $\mu$ that the overlap between nonadjacent pipe terms in (6) is negligible. Then for $\left\lfloor \frac{x - b}{w} \right\rfloor = k$ , $k = 0, 1, \ldots, P - 2$ , we have

$$
\widetilde {S} \left(\hat {e} _ {x} x\right) = \mu (x - b - k w) + \mu (b + (k + 1) w - x). \tag {8}
$$

Since

$$
\frac {\partial}{\partial x} \widetilde {S} \left(\hat {e} _ {x} \left[ b + \left(k + \frac {1}{2}\right) w \right]\right) = \mu^ {\prime} \left(\frac {w}{2}\right) - \mu^ {\prime} \left(\frac {w}{2}\right) = 0,
$$

the midpoints $x = b + \left(k + \frac{1}{2}\right)w$ between pipe positions are local extrema of the water sum $\widetilde{S}$ . Furthermore, by (8), each extremum attains the same value

$$
\widetilde {S} \left(\hat {e} _ {x} \left[ b + \left(k + \frac {1}{2}\right) w \right]\right) = 2 \mu \left(\frac {w}{2}\right).
+$$ + +Since $\mu$ is monotonically decreasing, another set of extrema are the pipe positions $x = b + kw$ , each attaining the value $\mu(0)$ . At the ends of the field, + +$$ +\widetilde {S} (\hat {e} _ {x} x) = \left\{ \begin{array}{l l} \mu (b - x), & 0 \leq x \leq b; \\ \mu (x - b - (P - 1) w), & b + (P - 1) w \leq x \leq W. \end{array} \right. +$$ + +For the physically-derived sprinkler distribution (4), $\mu$ is simple enough that overlaps do not produce any other extrema. Therefore, $\widetilde{\mathcal{C}}_u$ is maximized by the choice of $b$ and $w$ such that all minima are equal and all maxima are equal. + +For wider $w$ , the pipe positions are maxima, the midpoints are minima, and the boundary margin $b$ is selected such that $\widetilde{S}(0\hat{e}_x) = \widetilde{S}(W\hat{e}_x) = 2\mu(\frac{w}{2})$ (as in Figure 4). For narrower $w$ , there is more overlap and the midpoints become maxima and $b$ is set to zero. + +Thus, the periodic solution maximizes the approximate criterion $\widetilde{C}_u$ : the periodic watering is locally optimal in uniformity of water delivery. + +# Complete Solution + +We restrict ourselves to periodic solutions; by symmetry, we place the pipe locations in the center of the field. We must now optimize over the number of pipe locations in one sweep of the field, the number of sprinkler heads, and the distribution of sprinkler heads along the pipe. + +# Analytical Prediction of Sprinkler Head Distribution + +We analytically determine the location of sprinklers on a pipe to maximize uniformity. Let $\varphi(r)$ be the radial water distribution + +$$ +\varphi (r) = h _ {n} \left(1 - \frac {r}{x _ {n}}\right) _ {+} = \max \bigl (h _ {n} (1 - \frac {r}{x _ {n}}), 0 \bigr). +$$ + +The sprinkler delivers a maximum of $h_n \, \mathrm{cm/h}$ at its center and delivery decays linearly to zero at radius $x_n \, \mathrm{m}$ . For $n = 2$ , we have $h_n = 1.20 \, \mathrm{cm/h}$ and $x_n = 19.0 \, \mathrm{m}$ . 
The variables $h_n$ and $x_n$ decrease as the number of sprinkler heads $n$ increases. Asymptotically, $x_n \to 17.9 \, \mathrm{m}$ (we need this lower bound later). The water distribution of the pipe is, as before,

$$
G _ {i} (\vec {x}) = \sum_ {j = 1} ^ {n} \varphi \left(| \vec {x} - \vec {l} _ {i} (s _ {j}) |\right).
$$

We select the sprinkler head locations $s_j$ that minimize the ratio $S_{\mathrm{max}} / S_{\mathrm{min}}$ of (1), that is, maximize the uniformity criterion

$$
\mathcal {C} _ {u} = \frac {\min _ {| x | \leq 15} G (x \hat {e} _ {x})}{\max _ {| x | \leq 15} G (x \hat {e} _ {x})}.
$$

For $n = 2$ , the pipe water distribution is

$$
G (x \hat {e} _ {x}) = \varphi (| x - s _ {1} |) + \varphi (| x - s _ {2} |).
$$

The symmetry of the optimization problem implies that the heads are best placed symmetrically on the pipe. Let $s = s_1 = -s_2$ , $s \geq 0$ . Since the heads must be on the pipe, $s \leq 10\mathrm{m} < x_n$ . Evaluating $G(x\hat{e}_x)$ reduces to three cases:

Case 1: $s \leq x_{n} - 15$

![](images/5ee9ae5494807de05cb9db9b9be6320eadbfbea42fed731dad45b92e0f28c41a.jpg)

Case 2: $x_{n} - 15 < s\leq \frac{x_{n}}{2}$

![](images/810f9fdecea14a1f0aa7e0b1e6d87ef3f6a7c8586ab7bfb8201459948c1a6a87.jpg)

Case 3: $\frac{x_n}{2} \leq s \leq 10$

![](images/13f5f9d8175a0a3364661622cb3886cca34750adae936a25c9677919d9e36891.jpg)

In the first two cases, $\mathcal{C}_u$ improves as $s$ increases. In the third case, $G$ increases at the endpoints and the value in the middle decreases as $s$ increases. Therefore, the uniformity criterion is optimized when

$$
\varphi (15 - s) = 2 \varphi (s),
$$

$$
h _ {n} \left(1 - \frac {1}{x _ {n}} (15 - s)\right) = 2 h _ {n} \left(1 - \frac {1}{x _ {n}} s\right),
$$

$$
s = 5 + \frac {1}{3} x _ {n};
$$

but this $s$ places the heads beyond the endpoints of the pipe. Since $x_{n} > 17.9\mathrm{m}$ , $s$ is greater than $10.95\mathrm{m}$ .
The optimal choice is $s = 10\mathrm{m}$ , placing the heads at the ends of the pipe.

For $n$ even, $n > 2$ , the solution is the same. Since $17.9\mathrm{m} < x_{n} < 21.0\mathrm{m}$ for all $n$ , the same restrictions apply and the optimal choice is $s_{j} = (-1)^{j}10\mathrm{m}$ . For odd $n$ , the symmetry requirement places the last sprinkler head in the center.

# Optimization of Sprinkler Head Distribution

Sprinkler heads should be positioned as close as possible to the ends of the pipe. For a fixed number of heads, we determine the distribution of the sprinkler heads that minimizes the number of passes required, that is, minimizes $S_{\mathrm{max}} / S_{\mathrm{min}}$ and thus maximizes uniformity. Simulated annealing and the downhill simplex (Nelder-Mead) method [Hiller and Lieberman 2005; Press et al. 1992] produce essentially the same results as we predicted above from analytical considerations!

# Pipe Number, Initial Position, Number of Heads

We vary three parameters:

- Pipe number $P$ , which controls the density of pipe positions in the field;
- Initial position $b$ , the offset of the first pipe from the boundary, which controls how the solution interacts with the boundaries; and
- Number of sprinkler heads $n$ .

The allowable ranges for these parameters are narrowly restricted. For instance, with fewer than three pipe locations per pass, we always have a region that never gets watered. Also, solutions requiring more than ten pipe locations per pass are suboptimal because of the limitations on fast-watering.

The small range of parameters allows us to brute-force the optimization, calculating all possible cases (quantizing the variable $b$ , which controls the distance of the first pipe from the boundary).

# Results of Brute-Force Variation

We propose using five steps in a pass, with two sprinkler heads per pipe and periodic spacing of pipe locations, with the first pipe location on the boundary $(b = 0)$ .
This requires four passes, or 20 moves, and has a uniformity coefficient of 94 (Figures 6-8); this solution is at least locally optimal. + +We determine the watering time from the constraints. We make four passes, at each pass staying $32\mathrm{min}$ at each of the locations in Figure 7; the total amount of water applied in $96\mathrm{h}$ is given in Figure 8. The total watering time required is around $11\mathrm{h}$ , though the farmer does not need to be present for all of this time. The steps could also easily be split up over a four-day period. + +Simulated annealing methods reproduce these values quickly, indicating that the solution space is reasonable. + +We also calculated the Christiansen uniformity coefficient for these states (Figure 9), which shows that our best solution maximizes uniformity as well as minimizing the number of moves. + +![](images/39215f5f31df629890f700139cefd928373f732a55bdc0d240fdcac864f04115.jpg) + +![](images/7dc408317cb06b5dd89fe888a5801d2d7686ce8c84f623e9a6f2005de6257702.jpg) + +![](images/1fcb1b9c601d1cb4a8f9277042b4001a676761731e8071e3fa6a5e1ad1f2978e.jpg) +Figure 6. The number of moves required is minimal for $P = 5, b = 0$ , and $n = 2$ . + +![](images/f74b38b508fbd4c40afe09e7bb5ccbb206fd925f892a57f0077566407ac43403.jpg) +Figure 7. The layout of moves for our solution $P = 5$ , $b = 0$ , and $n = 2$ . + +![](images/a2557dfe8cbbdc01b8043477327fdf1613cbd768beeca3e0d9838b707ea03526.jpg) +Figure 8. Our best solution has high uniformity $(\mathrm{CU} = 94)$ , and meets the minimum watering criterion. + +![](images/38ee899c45c708624ad310506034c372fb71c6444d2ca80a364c854d84a893fd.jpg) + +![](images/0554c78cdccd731d690d12fc2a8864810400da7678a931f32d04f67997744238.jpg) + +![](images/b552cd68ba8bdd0cbc0b533804b47faedee2f37261b1c47f067fb35f2f6eb9cc.jpg) +Figure 9. The uniformity of watering is at a maximum when the number of steps is at a minimum. 
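A toy version of the brute-force variation can be sketched as follows. This is a one-dimensional sketch with assumed values (an 80 m field, an 18 m throw, a linear profile, and $b$ quantized in 0.5 m steps); the actual search is two-dimensional and also enforces the watering constraints:

```python
import numpy as np
from itertools import product

FIELD, THROW = 80.0, 18.0          # assumed field length and throw (m)
y = np.linspace(0.0, FIELD, 1601)  # sample points along the field

def coverage(P, b):
    # Water applied by P periodically spaced pipe positions,
    # offset b from the boundary, each with a triangular profile.
    pos = b + np.arange(P) * (FIELD - 2 * b) / (P - 1)
    G = np.zeros_like(y)
    for p in pos:
        r = np.abs(y - p)
        G += np.where(r < THROW, 1.0 - r / THROW, 0.0)
    return G

def cu(G):
    # Christiansen uniformity coefficient, in percent.
    return (1.0 - np.abs(G - G.mean()).sum() / (G.size * G.mean())) * 100.0

# Brute force over pipe number P and quantized offset b.
P_best, b_best = max(product(range(3, 11), np.arange(0.0, 10.5, 0.5)),
                     key=lambda pb: cu(coverage(*pb)))
```

The same exhaustive loop, with the full two-dimensional water-distribution model in place of `coverage`, yields the comparison plotted in Figures 6-9.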
We are at a minimum in all three orthogonal directions in parameter space, since $b \geq 0$ by our constraint that all pipe locations must be within the field. This is the halting condition for Powell's optimization method, so we have, once again, a locally optimal point [Press et al. 1992].

# General Simulated Annealing Results

We start with a completely random distribution of pipe locations and use simulated annealing to optimize for maximum uniformity. In many cases, this reproduces our geometrically determined distribution (Figure 10).

# Stability of Model under Alterations

What if we change $\varphi(r)$? For $\varphi(r) = A_n \exp(-r / x_n)$, where $x_n$ is the "throw" of the sprinkler and $A_n$ is determined so as to conserve volume, simulated annealing to maximize uniformity gives results very similar to those for our linear $\varphi$ (Figure 11). The same occurs with $\varphi$ a normal distribution.

![](images/173d304842ba03f7fc58d49076c7058ac6f0d2969d0d8597251809d2f98b4d86.jpg)

![](images/991abd84ab5032b94e8e5ef81442cca6ea60b5f033fb0364ec16084e8982905d.jpg)

![](images/8d113e914bde5eb8f9e9f477f5b919f9827a00afb661cd421f16066bdc5e2b33.jpg)
Figure 10. Simulated annealing converging to our geometrically-proposed solution.

![](images/0d4614bb42ef3c5783c99c69294fc21cca493c01c3341056f012503dbe0bc5bc.jpg)

![](images/b9abc7d20a9d0ec8e98b030af81329661313f8de854600e71c9847edf1e89d3b.jpg)
Figure 11. $\varphi(r) = A_n \exp(-r / x_n)$, for $P = 3, 4, 5$. Even when $\varphi(r)$ is changed, the general form of the optimum solution remains.

![](images/269fa7134e1894a696251d9d67c4bc0f0d6651ccaabf9234d67b6f5ef338c75d.jpg)

![](images/b91fc2b6d55b78091a88d5944737ed9722f419d265191740219fe8ab9661bced.jpg)

# A Slow-Watering Solution

If we relax the restrictions, we can create a slow-watering solution. We take our optimum solution and treat it in the case where the flow rate is cut in half, $J = 75 \mathrm{~L} / \mathrm{min}$.
This leads to a watering solution with CU of 95 that requires moving the pipe only five times, with watering shifts of around $2 \mathrm{~h}$ instead of four shifts of $30 \mathrm{~min}$ each. + +# Conclusion + +- We develop a physical model of sprinklers on a "hand move" pipe. +- Our model of the sprinklers predicts values for the range and distribution that are consistent with experiment. +- Using an averaging argument reduces the problem to one dimension and predicts a periodic solution. + +- Through a combination of optimization and geometric arguments, we develop a fast-watering solution that requires only 20 moves with uniformity coefficient of 94, far better than the "market-worthy threshold" of 80. +- We have shown that a periodic solution is a local maximum in uniformity of watering. +- The optimum distribution of sprinkler heads has the heads near the end of the pipe. +- Simulated annealing recreates our best solution. +- Reducing the water flow would allow a slow-watering solution that required only five total moves, with $\mathrm{CU} = 95$ . + +# Strengths and Weaknesses + +# Strengths + +- Solution quality. We have created a solution that works, with a relatively small number of moves and with a high uniformity. +- Consistency. Our analytical predictions are consistent with results from numerical optimization. Both approaches show that our solution is at least locally optimal. +- Stability. Searching the parameter space with simulated annealing reproduces our solution. +- Feasibility and simplicity. Our solution can be easily implemented by a rancher. +- Physical consistency. Our physical arguments produce sprinkler profiles very close to those measured experimentally. +- Flexibility. Our optimization techniques do not depend strongly on the sprinkler profile $\varphi(r)$ . + +# Weaknesses + +- Constant attention. Our system cannot be left for long periods of time—the water must be shut off, or the pipe moved, every $30\mathrm{min}$ . 
- Lack of geometric flexibility. Though our general simulated annealing approach can adapt to different boundary conditions, our best solution depends strongly on the symmetry of the problem.
- No global optimum. Despite our extensive simulations, we cannot guarantee a global optimum.

# References

Ascough, G.W., and G.A. Kiker. 2002. The effect of irrigation uniformity on irrigation water requirements. Water South Africa 28 (2): 235-242. http://www.wrc.org.za/archives/watersa%20archive/2002/April/1490.pdf#search=%22%22The%20effect%20of%20irrigation%20uniformity%20on%20irrigation%20water%20requirements%22%.
Carrion, P., J.M. Tarjuelo, and J. Montero. 2001. SIRIAS: A simulation model for sprinkler irrigation. I. Description of model. *Irrigation Science* 20: 73-84.
Hillier, F.S., and G.J. Lieberman. 2005. Introduction to Operations Research. New York: McGraw-Hill.
Kranz, Bill, Dean Yonts, and Derrel Martin. 2005. Operating characteristics of center pivot sprinklers. University of Nebraska-Lincoln, Institute of Agriculture and Natural Resources (G1532). http://www.ianrpubs.unl.edu/epublic/live/g1532/build/g1532.pdf.
Marion, J.B., and S.T. Thornton. 1988. Classical Dynamics of Particles & Systems. 3rd ed. New York: Harcourt Brace Jovanovich.
Marshall, J.S., and W.M. Palmer. 1948. The distribution of raindrops with size. Journal of Meteorology 5: 165-166.
Mateos, L. 1998. Assessing whole-field uniformity of stationary sprinkler irrigation systems. *Irrigation Science* 18: 73-81.
Powell, M.J.D. 1987. Radial basis functions for multivariable interpolation: A review. In Algorithms for Approximation, edited by J.C. Mason and M.G. Cox, 143-168. New York: Clarendon Press.
Press, William H., Saul A. Teukolsky, William T. Vetterling, and Brian P. Flannery. 1992. Numerical Recipes. New York: Cambridge University Press.
Sheppard, David Thomas. 2002. Spray characteristics of fire sprinklers. NIST GCR 02-838.
Springfield, VA: National Institute of Standards and Technology. http://www.fire.nist.gov/bfrlpubs/fire02/PDF/f02021.pdf. Summary in Research and Practice: Bridging the Gap, Proceedings of the Fire Suppression and Detection Research Application Symposium, 123-149, http://www.fire.nist.gov/bfrlpubs/fire03/PDF/f03056.pdf. Orlando, FL: Fire Protection Research Foundation, 2003.
Smajstrla, A.G., F.S. Zazueta, and D.Z. Haman. 1997. Lawn sprinkler selection and layout for uniform water application. http://edis.ifas.ufl.edu/AE084. BUL320. Florida Cooperative Extension Service, Institute of Food and Agricultural Sciences, University of Florida.
Tan, A., and S.T. Wu. 1981. The motion of fountain droplets. Chinese Journal of Physics 19 (2,3): 48-52. http://psroc.phys.ntu.edu.tw/cjp/v19/48.pdf#search=%22%22The%20motion%20of%20fountain%20droplets%22%22.
Tarjuelo Martin-Benito, José María, Manuel Valiente Gómez, and Juan Lozoya Pardo. 1992. Working conditions of sprinkler to optimize application of water. Journal of Irrigation and Drainage Engineering 118 (1): 895-913.

![](images/27a85dde12fcbaf8f07f653c7a3e7211ca332643a92707a773c01d21a23ab00e.jpg)
Brian Camley, Pascal Getreuer, and Bradley Klingenberg.

# Developing Improved Algorithms for Irrigation Systems

Ying Yujie

Jin Qiwei

Zhou Kai

Zhejiang University of Technology

Hangzhou, China

Advisor: Wang Shiming

# Summary

Our goal is an algorithm that minimizes the time to irrigate a relatively small field under given conditions.

We focus on minimization of time, uniformity of irrigation, and feasibility. Our effort is divided into five basic parts:

- We assess the wetted radius based on experimental results for several typical rotating spray sprinklers.
- We determine the number of sprinklers from an empirical formula for sprinkler flow.
- We simulate the water distribution pattern, using a $0.25\,\mathrm{m}\times 0.25\,\mathrm{m}$ grid.
- We evaluate the uniformity of water distribution by Christiansen's uniformity coefficient.
- We find an optimal irrigation schedule including when and where to move the pipes: We devise a single-lateral-pipe scheme and a multiple-lateral-pipes scheme; the latter gives better results. To irrigate more uniformly, we adjust the spacing between sprinklers and the spacing from the edge. Using our grid, we move the sprinklers symmetrically on both sides, node by node, to find the optimal positions for an improved multiple-lateral-pipes scheme.

Simulations show that all three schemes perform acceptably in realistic conditions. The improved multiple-lateral-pipes scheme is superior, with minimum time and the highest Christiansen's uniformity coefficient (CU). We conclude that four sprinklers are required, the minimum time is 732 min, and the CU is $90\%$.

We do a sensitivity analysis of the variation of CU and of minimum time with wetted radius, which shows that our model is robust.

# Introduction

# Structure of a Hand-Move Irrigation System

A hand-move irrigation system has two kinds of pipes: a portable or buried mainline pipe, and a portable aluminum (sometimes plastic) lateral pipe with quick couplers and spray nozzles.

# Definitions and Key Terms

Pipeset: Pipes that can be connected together in a straight line.

Working pressure: Pressure at the water source (kPa).

Hydraulic pressure: Equivalent to working pressure but expressed in meters of head (m).

Flow rate: Volume of water discharged per unit of time at the water source $(\mathrm{m}^{3} / \mathrm{h})$.

Sprinkler flow: Volume of water discharged per unit of time by a sprinkler $(\mathrm{m}^{3} / \mathrm{h})$.

Rotating spray nozzle: Water distribution device equipped with a rotating deflection pad to distribute water.
+ +Wetted radius: Farthest distance measured while the spray nozzle is rotating normally, from the spray nozzle centerline to the point where water is deposited (m). + +Precipitation: How much water reaches the ground, equivalent to natural rainfall (mm/h). + +Distribution pattern: Pattern showing precipitation by location in the field. + +Uniformity of distribution: Evenness of water throughout a field. + +Symbols are listed in Table 1. + +Table 1. +Symbols. + +
| Symbol | Description | Units |
|---|---|---|
| $Q$ | Flow rate | m³/h |
| $Q'$ | Sprinkler flow | m³/h |
| $A$ | Cross-sectional area of nozzle | m² |
| $n$ | Number of sprinklers | |
| $\mu$ | Discharge coefficient | |
| $H_p$ | Hydraulic pressure | m |
| $d$ | Diameter of nozzle | mm |
| $\alpha$ | Trajectory angle of nozzle | ° |
| $g$ | Acceleration due to gravity | m/s² |
| $P$ | Precipitation | mm/h |
| $R$ | Wetted radius | m |
| $r$ | Distance from the sprinkler | m |
| $\rho$ | Average precipitation over the area covered by one sprinkler | mm/h |
| CU | Christiansen's uniformity coefficient | |
| $(x_i, y_j)$ | Coordinate of grid node in a network | |
| $N$ | Total number of grid nodes | |
| $T$ | Irrigation time | h |
| $h_{i,j}$ | Precipitation at $(x_i, y_j)$ | mm |
# General Assumptions

- There is no infiltration, evaporation, or wind.
- Time spent on moving the pipes is negligible.
- Sprinklers used in a pipe set are of the same type.
- The diameter of the sprinkler is small compared to the dimensions of the watered area.
- We ignore the height of sprinklers [Carrion et al. 2001].
- Pressure at each sprinkler equals working pressure.
- Sprinkler flow rate remains stable.

# Model Design

# Wetted Radius

The wetted radius depends mainly on the working pressure and on nozzle characteristics such as size, trajectory angle, and rotation speed. Increasing the rotation speed reduces the wetted radius. The wetted radius depends on the hydraulic pressure, the diameter, and (not significantly) the trajectory angle of the nozzle via the relationship

$$
R = f(\alpha, h_{p}, d).
$$

When the trajectory angle $\alpha$ is fixed, an empirical formula commonly used by manufacturers is

$$
R = \xi h_{p}^{m} d^{n}, \tag{1}
$$

where $\xi, m,$ and $n$ are parameters evaluated by the manufacturer's testing at various water pressures.

Applying least squares to the experimental wetted radii of three typical rotating spray sprinklers, we get parameter values for four different trajectory angles (Table 2). Substituting these parameter values, we can easily obtain the wetted radius.

Table 2.
Values of parameters.
| Trajectory angle (°) | $\xi$ | $m$ | $n$ |
|---|---|---|---|
| 7 | 11.46 | 0.369 | 0.319 |
| 15 | 5.61 | 0.225 | 0.734 |
| 22.5 | 8.63 | 0.140 | 0.476 |
| 30 | 4.52 | 0.128 | 0.844 |
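A minimal sketch of how (1) is applied with the Table 2 parameters follows. The unit conventions of the manufacturer's fit are not stated, so the choice of $h_p$ in meters of head and $d$ in mm is an assumption:

```python
# Parameters (xi, m, n) from Table 2, keyed by trajectory angle in degrees.
TABLE2 = {
    7:    (11.46, 0.369, 0.319),
    15:   (5.61,  0.225, 0.734),
    22.5: (8.63,  0.140, 0.476),
    30:   (4.52,  0.128, 0.844),
}

def wetted_radius(angle, h_p, d):
    # Empirical manufacturer fit R = xi * h_p**m * d**n, equation (1).
    # Assumed units: h_p in meters of head, d in mm.
    xi, m, n = TABLE2[angle]
    return xi * h_p ** m * d ** n

h_p = 420e3 / (1000 * 9.81)   # a 420 kPa source expressed as meters of head
R = wetted_radius(30, h_p, 6.0)
```

As the small exponents $m$ suggest, the wetted radius grows only slowly with pressure; the nozzle diameter matters more at high trajectory angles.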
# Number of Sprinklers

Sprinkler flow depends mainly on two factors: working pressure and diameter of the nozzle. Flow rate from a nozzle increases with working pressure and can normally be fitted to the equation [Zhao 1999]:

$$
Q^{\prime} = 3600 \mu A \sqrt{2 g H_{p}}, \tag{2}
$$

where

$Q^{\prime}$ is sprinkler flow $(\mathrm{m}^{3} / \mathrm{h})$;

$\mu$ is the discharge coefficient, usually between 0.75 and 0.98; and

$A$ is the cross-sectional area of the nozzle $(\mathrm{m}^2)$.

The number of sprinklers required is $n \approx Q / Q'$.

# Water Distribution

# Distribution Pattern of a Single Sprinkler

Water distribution patterns are usually obtained under controlled no-wind conditions. Figure 1 is a common pattern for simulating precipitation from a single sprinkler [Mateos 1998].

![](images/3c9e9db866fb5126e47dcdacbce329ddc0b30e4d35d30e500f66ef0ef8a709a5.jpg)
Figure 1. Distribution pattern of an individual sprinkler.

The mathematical function is

$$
P = \frac{2 Q^{\prime} T}{\pi R^{2}} \left(1 - \frac{r^{2}}{R^{2}}\right),
$$

where

$P$ is the precipitation $(\mathrm{mm / h})$,

$r$ is distance from the sprinkler $(\mathrm{m})$,

$Q^{\prime}$ is sprinkler flow $(\mathrm{m}^{3} / \mathrm{h})$,

$R$ is wetted radius (m), and

$T$ is irrigation time (h).

# Water Distribution over the Whole Field

We divide the field uniformly into sufficiently small grid squares $(0.25\,\mathrm{m} \times 0.25\,\mathrm{m})$ and overlap the precipitation from each sprinkler.

# Average Precipitation

To adjust the amount of water received on each part of the field per hour or per day, we introduce the concept of average precipitation (mm/h):

$$
\rho = \frac{Q^{\prime}}{\pi R^{2}}, \tag{3}
$$

where $Q^{\prime}$ is sprinkler flow $(\mathrm{m}^{3} / \mathrm{h})$ and $R$ is wetted radius (m).
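Equation (2) and the count $n \approx Q/Q'$ can be checked numerically against the ranch specifications used later (420 kPa source, 150 L/min flow, 6-mm nozzles); the discharge coefficient $\mu = 0.9$ is an assumed value within the stated 0.75-0.98 range:

```python
import math

g = 9.81                       # m/s^2
H_p = 420e3 / (1000 * g)       # 420 kPa as meters of head, about 42.8 m
A = math.pi * (0.006 / 2)**2   # cross-section of a 6 mm nozzle, m^2
mu = 0.9                       # assumed discharge coefficient

Q_sprinkler = 3600 * mu * A * math.sqrt(2 * g * H_p)  # equation (2), m^3/h
Q = 150 * 60 / 1000                                   # 150 L/min = 9 m^3/h
n = math.ceil(Q / Q_sprinkler)
print(round(Q_sprinkler, 2), n)   # 2.66 4
```

Each sprinkler passes roughly 2.7 m³/h, so the 9 m³/h source supports four sprinklers, matching the count used below.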
# Uniformity of Water Distribution

Irrigation uniformity is a major factor in maintaining proper crop growth. We calculate a uniformity coefficient for the field [Wilcox and Swailes 1947], using that of Christiansen [1941]:

$$
\mathrm{CU} = \left(1 - \frac{\sum_{i}\sum_{j}\left|h_{i,j} - \bar{h}\right|}{N\times\bar{h}}\right)\times 100\% , \tag{4}
$$

where

$h_{i,j}$ is precipitation at $(x_i,y_j)$ (mm/h),

$\bar{h}$ is the average value of all $h_{i,j}$, and

$N$ is the total number of grid nodes.

# Model Validation on a Small Ranch

# The Specifications

The field is $80\,\mathrm{m}\times 30\,\mathrm{m}$. Each pipe has a 10-cm inner diameter with rotating spray nozzles of $6\,\mathrm{mm}$ inner diameter, and the pipes connected together are $20\,\mathrm{m}$ long. The pressure of the water source is $420\,\mathrm{kPa}$, with a flow rate of $150\,\mathrm{L / min}$. No part of the field should receive more than $0.75\,\mathrm{cm / h}$ of water, and each part should receive at least $2\,\mathrm{cm}$ of water every 4 days.

# Number of Sprinklers

Given the flow rate and the diameter of the nozzle, we calculate the sprinkler flow using (2) and find that the number of sprinklers should be four.

# The Conditions on the Ranch

Our simulation has $121 \times 321$ total grid nodes, with grid size $0.25\,\mathrm{m} \times 0.25\,\mathrm{m}$. Equation (4) becomes

$$
\mathrm{CU} = \left(1 - \frac{\sum_{i = 0}^{120}\sum_{j = 0}^{320}\left|h_{i,j} - \bar{h}\right|}{121\times 321\,\bar{h}}\right)\times 100\% .
$$

# Schemes of Positioning and Moving

We examine several typical workable schemes and compare them to find the optimal configuration.

# Single Lateral Pipe

If all pipes are connected into one lateral pipe, then by (3) the approximate average precipitation rate must satisfy

$$
\frac{4 \times \frac{1}{4} Q}{\pi R^{2}} < 7.5\,\mathrm{mm/h}.
\tag{5}
$$

If we use a rotating spray sprinkler with a trajectory angle of $30^{\circ}$, then by (1) and Table 2, the wetted radius is $R \approx 20\,\mathrm{m}$.

# Description

The mainline pipe is located across the field as shown in Figure 2. The lateral pipe is moved across the field at right angles to the row direction. This lateral pipe has four sprinklers $6.67\,\mathrm{m}$ apart.

# Results

We design a schedule for four days. The minimum time is $1228\,\mathrm{min}$, with four cycles, and $\mathrm{CU} = 78\%$. The distribution pattern is shown in Figure 3.

# Multiple Lateral Pipes

We try to improve the uniformity of the precipitation by changing the position of the lateral pipe and the spacing between sprinklers, but we cannot get a satisfactory result, since CU cannot be improved. In addition, wind normally has a significant impact on sprinklers with a higher trajectory angle. We conclude that more than one lateral pipe should be used.

We find that two lateral pipes with two sprinklers on each are appropriate. In light of (3), the approximate average precipitation rate must satisfy

$$
\frac{2 \times \frac{1}{4} Q}{\pi R^{2}} < 7.5\,\mathrm{mm/h}. \tag{6}
$$

![](images/d1585e99346c21ddb8f4951d226c621ecb5fd568bc0ca39ea05d601edb719172.jpg)
Figure 2. Single-lateral-pipe scheme, with measurements in meters.

![](images/9f191290e082f1da13b79cce3c55694bc329290b2ca325964431bd0e447fa870.jpg)
Figure 3. Distribution pattern for the single-lateral-pipe scheme.

If we use a rotating spray sprinkler with a trajectory angle of $15^{\circ}$, then by (1) and Table 2, the wetted radius is $R \approx 16\,\mathrm{m}$.

# Description

The mainline pipe goes along the edge of the field, connected to two lateral pipes. Each lateral has two sprinklers $5\,\mathrm{m}$ apart. The two lateral pipes are moved crossways.
The irrigation order and positions of sprinklers are presented in Figure 4.

![](images/cbfd4c3515ae142e90a656ca4972fe5688d227ac5d950d1cd0a8a4b44260de20.jpg)
Figure 4. Multiple-lateral-pipes scheme, with measurements in meters.

# Results

We design a schedule for four days. The minimum time is $920\,\mathrm{min}$, with four cycles, and $\mathrm{CU} = 83\%$. The distribution pattern is shown in Figure 5.

# Improved Multiple-Lateral-Pipes Scheme

# Description

With multiple lateral pipes, precipitation is relatively excessive in the middle of the field, due to overlap. We move the sprinklers closer to the edge to make the precipitation more uniform. Using a $0.25\,\mathrm{m} \times 0.25\,\mathrm{m}$ network, we move sprinklers on both sides, node by node symmetrically, to determine the optimal position. Two sprinklers $5\,\mathrm{m}$ apart on the same lateral, at $3\,\mathrm{m}$ and $8\,\mathrm{m}$ from the edge of the field, are optimal (Figure 6).

# Results

We design a schedule for four days. The minimum time is $732\,\mathrm{min}$, with four cycles, and $\mathrm{CU} = 90\%$. The distribution pattern is shown in Figure 7.

![](images/d523fc85994807de05cb9db9b9be6320eadbfbea42fed731dad45b92e0f28c41a.jpg)
Figure 5. Distribution pattern for the multiple-lateral-pipes scheme.

![](images/e59d2aef0a8bb6cceba69c528538645c388ed8bead8449f0ba79beb294e46ed8.jpg)
Figure 6. Improved multiple-lateral-pipes scheme, with measurements in meters.

![](images/e7e7503bc47467ef64f835b9161a1a491a10d9cd524eecd96f62fcd61f0c613b.jpg)
Figure 7. Distribution pattern for the improved multiple-lateral-pipes scheme.

# Conclusions

# Irrigation Schedule

[EDITOR'S NOTE: We omit the schedule.]

# Comparison of Schemes

The multiple-lateral-pipes scheme and the improved multiple-lateral-pipes scheme take less time (920 min and 732 min) than the single-lateral-pipe scheme (1228 min).
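The scheme comparison rests on the grid simulation described earlier. A minimal sketch of that computation for one irrigation setting (the sprinkler positions here are hypothetical, and the per-sprinkler flow is $Q/4 = 2.25\ \mathrm{m^3/h}$):

```python
import numpy as np

FIELD_X, FIELD_Y, STEP = 80.0, 30.0, 0.25   # field dimensions and grid (m)
R, QP, T = 16.0, 2.25, 1.0                  # wetted radius (m), sprinkler flow (m^3/h), time (h)

x, y = np.meshgrid(np.arange(0, FIELD_X + STEP, STEP),
                   np.arange(0, FIELD_Y + STEP, STEP))

def pattern(sx, sy):
    # Single-sprinkler precipitation, P = (2 Q' T / (pi R^2)) (1 - r^2/R^2).
    r2 = (x - sx) ** 2 + (y - sy) ** 2
    return np.where(r2 < R * R,
                    2 * QP * T / (np.pi * R * R) * (1 - r2 / (R * R)), 0.0)

def cu(h):
    # Christiansen's uniformity coefficient, equation (4), in percent.
    return (1 - np.abs(h - h.mean()).sum() / (h.size * h.mean())) * 100

# Hypothetical positions: two laterals, two sprinklers each, 5 m apart.
sprinklers = [(20, 12.5), (20, 17.5), (60, 12.5), (60, 17.5)]
h = sum(pattern(sx, sy) for sx, sy in sprinklers)
cu_value = cu(h)
```

Summing `h` over every setting in a four-day schedule, then applying `cu`, yields the CU values quoted for the three schemes.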
The improved multiple-lateral-pipes scheme is superior, with both minimum time and the highest Christiansen's uniformity coefficient.

# Sensitivity Analysis

We do a sensitivity analysis of the variation in CU and in minimum time with the value of the wetted radius (Figures 8-9). Both figures show that our model is robust.

To obtain the optimal scheme, we use an algorithm to move sprinklers on both sides node by node symmetrically. Figure 10 shows the sensitivity of CU to the distance of the sprinkler from the main line. In our model, the difference in CU between two grid nodes is no more than $1.3\%$.

The sprinklers in the middle of the field can be shut off selectively and don't need to work continuously during one cycle; the minimum time is determined by the sprinklers near the edge. That way, we can not only carry out the irrigation more uniformly but also save much water.

![](images/587a6a22694ab00c140c6c859ac4f59cec6ae0e996a01533ad5775e802cdd7bb.jpg)
Figure 8. The variation in CU caused by wetted radius (for minimum time). Our improved multiple-lateral-pipes scheme uses a wetted radius of $16\,\mathrm{m}$.

![](images/02df6e5a0835f8e8772bf5bc3eac364a3fe29469f5820a0af6c828646551b1ef.jpg)
Figure 9. The variation of the minimum time caused by wetted radius. Our improved multiple-lateral-pipes scheme uses a wetted radius of $16\,\mathrm{m}$.

# Strengths and Weaknesses

# Strengths

- We find the water distribution pattern from the sprinklers, using simulation.
- We investigate different numbers of lateral pipes and different values of the wetted radius.
- We provide a good result and find an optimal scheme.

![](images/ef3be013a3cf56f9db309df7bc5391e7d59b23cad86e72cddf2fd9df92d60d96.jpg)
Figure 10. Sensitivity of uniformity to distance of sprinkler from the main line. In our improved multiple-lateral-pipes scheme, the distance is $3\,\mathrm{m}$.
- We examine various approaches and modifications to find the best design for the irrigation system.

# Weaknesses

- We did not incorporate into our model some factors that might have an effect in real life, such as infiltration, evaporation, and wind.

# References

Carrion, P., J.M. Tarjuelo, and J. Montero. 2001. SIRIAS: A simulation model for sprinkler irrigation. I. Description of model. *Irrigation Science* 20: 73-84.
Christiansen, J.E. 1941. The uniformity of application of water by sprinkler systems. Agricultural Engineering 22: 89-92.
Mateos, L. 1998. Assessing whole-field uniformity of stationary sprinkler irrigation systems. *Irrigation Science* 18: 73-81.
Wilcox, J.C., and G.E. Swailes. 1947. Uniformity of water distribution by some under-tree orchard sprinklers. Scientific Agriculture 27 (11): 565-583.
Zhao, Jingcheng. 1999. Technology of irrigation. Agricultural Science and Information 17: 59-61.

![](images/6e09fc3e013153e2f3a48e46fd38e6ca234c218308e0f12daa18352ec1fbc0e3.jpg)

![](images/0fa3c59b6921ac92ba24fa94fffa81ac9f1d18bcd0b997ebe0981b5d54412562.jpg)

![](images/b78ab2fdaee637c05fc575f7d305cf66ab0900b8d6fc1368f8c423d5d2f4d6f6.jpg)

Team members Ying Yujie, Jin Qiwei, and Zhou Kai.

![](images/c77d97ff8e2c46d2d65912802def560ce27f50768f3ffc25b608bf5be64b1305.jpg)

Team advisor Wang Shiming.

# Judge's Commentary:

# The Outstanding Irrigation Problem Papers

Daniel Zwillinger

Raytheon Company

528 Boston Post Road

Sudbury, MA 01776

# Introduction

Irrigation planning is a real-life activity with many complexities; good system design can demonstrate profound water savings. For the contest problem, an entire region must be minimally watered but not overwatered, and trade-offs between fixed and periodically moved equipment must be made.
+ +As in any real-life modeling activity, the approaches, metrics, and results of others can be obtained with little effort—when applicable, this earlier work should be used, or improved upon. For example, the most widely used measure of irrigation uniformity in the turf industry is Christiansen's uniformity coefficient. Also, manufacturers' specifications of sprinkler characteristics are easily obtained. + +The components appearing in a solution must be identified. For this problem, the judges were looking for the following components: + +- Defined constraints on the problem, such as the needed water flow rate. +- Subjective constraints on the problem, such as what "optimal" is. +- Created constraints on the problem, such as the water distribution pattern from a single sprinkler. +- One or more metrics by which a solution can be evaluated. +- A procedure for obtaining an optimal solution. +- A description of the optimal solution. + +# Problem Specifics + +One of the first considerations is the meaning of "optimal" for this problem. It could be the number of times that the pipes must be moved, or it could be related to the distance that the pipes must be moved. Either of these, or other similar metrics, are reasonable. Minimizing the actual time of watering—selected as a metric by some teams—does not seem to be as useful; it does not obviously correlate to cost. + +The contest problem was stated without much detail. In fact, the problem didn't state exactly where the water outlet was in the field. For this reason, a high-level model, appropriate for simple problem descriptions, is warranted. The judging focused on resolving the difficulties—for this problem, the difficulty was in determining the pipe layout. Excessive detail in, say, the distribution of water from a sprinkler head, is not warranted. Use of either a realistic model (easily available from manufacturers) or a simple model is appropriate. 
+ +In a real situation, the actual sprinkler head water distribution, wind, and other secondary considerations could be important. A description of how they affect the high-level model, and its solution, is warranted—even for a high-level model. However, only if the high-level model solution is complete is it appropriate to incorporate their effects. + +# Problem Areas + +Most of the teams approached the problem well and identified most of the components noted in the Introduction. There were two areas, however, that confused several teams. + +- The maximal soaking rate of $0.75 \, \text{cm/h}$ was intended to be (in the words of the Colorado team) an "average, not instantaneous overwatering" constraint. While mathematically equivalent to $0.0125 \, \text{cm/min}$ , it was not the problem's intent to prohibit a solution that watered at the rate of $0.025 \, \text{cm/min}$ , if this watering occurred for less than $30 \, \text{min}$ in an hour. +- Care is needed to determine the flow rate and pressure from the sprinkler heads when there is more than one. The Duke team had a very clean derivation of this result (although atmospheric pressure of approximately $100\mathrm{kPa}$ is missing in their computations). In summary: When the flow is pressure-limited (i.e., few sprinkler heads), then energy balance (Bernoulli's equation) can be used to determine the output speed. When the flow is volume-limited (i.e., several sprinkler heads), then mass conservation can be used to determine the output speed. + +# What Made Them Outstanding + +The Outstanding papers obtained solutions that could be shown to a customer. These papers obtained schedules by using analytical thinking and by numerical optimizations (using, for example, simulated annealing, genetic algorithms, or methodical searches); some did both. The length of the Outstanding papers, as submitted, varied from 17 to 60 pages, with an average of 29 pages. 
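The judge's distinction between the two flow regimes can be illustrated with a back-of-the-envelope computation, using values assumed from the problem statement (420 kPa source, roughly 100 kPa atmospheric pressure, 150 L/min, four 6-mm nozzles):

```python
import math

rho = 1000.0                    # water density, kg/m^3
P_src, P_atm = 420e3, 100e3     # source and atmospheric pressure, Pa
Q = 150 / 1000 / 60             # 150 L/min in m^3/s
A = math.pi * (0.006 / 2) ** 2  # area of one 6-mm nozzle, m^2
n_heads = 4

# Pressure-limited (few heads): Bernoulli, P_src - P_atm = rho v^2 / 2.
v_pressure = math.sqrt(2 * (P_src - P_atm) / rho)

# Volume-limited (several heads): mass conservation, Q = n A v.
v_volume = Q / (n_heads * A)

print(round(v_pressure, 1), round(v_volume, 1))   # 25.3 22.1 (m/s)
```

With four heads the two estimates nearly coincide, which is roughly where the system crosses from pressure-limited to volume-limited behavior.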
# Further Comments and Advice

Some overall comments on the submissions and the judging process:

- The summary should include:
  - problem synopsis,
  - description of analysis, and
  - results.

  This section is worth writing, and then rewriting; it gets much attention.

- An ideal paper concluded with an explicit watering/movement schedule and a statement about the effectiveness of the schedule. While a result may be exact for a given model, the model is only an approximation to reality. As such, it is unrealistic to report many decimal places in the results. In practice, pipe placement may be accurate to a foot or so; a placement schedule should not require centimeter accuracy.
- Many submissions created very detailed models. It is always an advantage to start with a simple, perhaps idealized, version of the problem. Even an approximate solution to this idealized problem, perhaps obtained by hand, can be used as a bound when comparing the results from more detailed models. Such back-of-the-envelope checks can be vital in checking the reasonableness of a solution.
- When there are different ways to attack a problem, try using several techniques. If they lead to the same answer, then the answer is probably close to correct. And when computer models are used, sensitivity analysis is especially important (and it should be relatively easy to carry out). For example, what happens to the field watering if the pipes are not placed in the exact right positions?
- "Dead ends" are typically useful only if they lead to an insight or constrain a model in some way. Details on such "dead ends" rarely contribute to a paper's overall ranking; at some point, more is less.
- Graphics indicating the pipe layout and the resulting water distribution were created by most teams. These graphics reveal much information about a pipe layout and its solution.
- As is usually the case, the judges wanted to see justifications for the assumptions made.
Reusing standard results (say, those obtained from a book or from the Web) is appropriate; but justifying their applicability is an important aspect of reuse. Note that re-deriving standard results adds little value. Also, the only assumptions that should appear are those used in the problem analysis.
- Finally, details must appear somewhere; if they appear in two places, then error-checking can occur. For example, when a mathematical statement appears suspect, the judges will often locate the computation in the code to see exactly what was implemented.

# About the Author

Daniel Zwillinger attended MIT and Caltech, where he obtained a Ph.D. in applied mathematics. He taught at Rensselaer Polytechnic Institute, worked in industry (Sandia Labs, Jet Propulsion Lab, Exxon, IDA, Mitre, BBN, Raytheon), and has been managing a consulting group for the last dozen years. He has worked in many areas of applied mathematics (signal processing, image processing, communications, and statistics) and is the author of several reference books.

Pp. 333-366 can be found on the Tools for Teaching 2006 CD-ROM.

# Profit Maximizing Allocation of Wheelchairs in a Multi-Concourse Airport

Christopher Yetter

Neal Gupta

Benjamin Conlee

Harvard University

Cambridge, MA

Advisor: Clifford H. Taubes

# Summary

To minimize Epsilon Airlines' cost of providing wheelchair assistance to its passengers, we examine the trade-off between explicit costs (chairs and personnel) and implicit costs (losses in market share). Our Multi-Concourse Airport Model simulates the interactions between escorts, wheelchairs, and passengers. Our Airline Competition Model takes a game-theoretic perspective in representing the profit-seeking behavior of airline companies. To ground these models in reality, we incorporate extensive demographic data and run a case study on 2005 Southwest Airlines flight data from Midland, TX; Columbus, OH; and St. Louis, MO.
We conclude that Epsilon Airlines should employ a "hub and spokes" strategy that uses "wheelchair depots" in each concourse to consolidate the movement of chairs. Across different airport sizes and strategies, we find that two escorts per concourse and two wheelchairs per escort are optimal. + +# Introduction + +We study the procedures used by airlines to shuttle passengers from arriving flight to connecting flight. According to the U.S. Department of Transportation, "The delivering carrier shall be responsible for assistance in making flight connections and transportation between gates [for passengers needing assistance]" [2003]. With an aging population, more passengers need help. + +We develop two models. The Multi-Concourse Airport Model simulates the interactions of passengers, wheelchairs, and airport staff within an airport, following each passenger and tracking delays. Passengers needing wheelchair assistance are shuttled through the airport using one of three algorithms. + +The Airline Competition Model simulates a competitive marketplace over a 40-year period. Profit depends on both costs and market share; the latter fluctuates based on customer satisfaction. The Airline Competition Model provides a way to judge accurately and objectively the merit of wheelchair allocation strategies. + +# Key Terminology + +- Gate: A location where air passengers board flights. A given gate can be represented as an ordered pair $(i,j)$ , where $i$ indexes the concourse and $j$ indexes the gates in a concourse. +- Concourse: A collection of gates. Concourse $i$ contains $k_{i}$ gates and is represented as a vector $c_{i} = \langle (i,1),(i,2),\dots,(i,k_{i})\rangle$ . +- Airport: A collection of concourses, which we consider as a graph. +- Passenger: A traveler in an airport, associated with an arriving flight and with a connecting flight. We distinguish WPs (wheelchair passengers) from non-wheelchair passengers. +- Traffic: The mass of passengers in an airport. 
The level of traffic affects the number of WPs needing transport between gates.
- Wheelchair depot: A location where wheelchairs are stored while not in use. In the hub and spokes strategy, there is a depot in each concourse.
- Escort: An airline employee responsible for picking up WPs from arrival gates and transporting them to connecting gates.
- Missed flight: When a passenger arrives more than $15\mathrm{min}$ after the connecting flight's departure time, the flight leaves without them.
- Strategy: An algorithm for the flow of escorts and wheelchairs throughout the airport.

# Basic Assumptions

# Airport Layouts

An airport consists of 1 to 10 concourses, each of which consists of 2 to 50 gates. Gates in the same concourse are generally located close to one another, while travel between concourses can take quite a while. Hence, we assume that inter-concourse travel is much lengthier than intra-concourse travel.

Our model represents concourses and gates as nodes in a graph.

Table 1. Variables and their meanings.
| Variable | Definition |
|----------|------------|
| $A$ | Airport, a graph of concourses |
| $c_i$ | Concourse $i$, a node of $A$ |
| $C$ | The number of concourses in $A$ |
| $(i,j)$ | Gate $j$ within concourse $i$ |
| $k_i$ | Number of gates in $c_i$ |
| Day | Type of day modeled by the simulation |
| Year | The year modeled by the simulation |
| $N_t^{(i)}$ | Market share of airline $i$ at time $t$ |
| $f_t^{(i)}$ | Fraction of customers defecting from airline $i$ at time $t$ |
| $P_t$ | Year-$t$ population of the airport market |
| $\chi$ | A strategy |
| $W^*$ | Ratio of wheelchairs to wheelchair-needing customers |
| $E^*$ | Ratio of escorts to wheelchair-needing customers |
| $W_t$ | Number of wheelchairs in year $t$ |
| $E_t$ | Number of escorts in year $t$ |
| $U_m$ | Utility loss of missing a flight ($>0$) |
Table 2. Constants and their values.
| Constant | Definition | Value |
|----------|------------|-------|
| $p_E$ | Escort annual salary (including benefits) | \$40,000/yr |
| $p_W$ | Purchase price of a wheelchair | \$135 |
| $x^*$ | Maximum time that a plane will wait for a passenger | 15 min |
| $\omega$ | Proportion of passengers who are WPs | 1.6% |
| $p_{\mathrm{inf}}$ | Proportion of WPs informing of arrival | 95% |
| $v_F$ | Velocity of an escort walking alone | 250 ft/min |
| $v_S$ | Velocity of an escort pushing a wheelchair | 180 ft/min |
| $U_R$ | Utility loss for being delayed one minute on the runway | $0.0005\,U_m$ |
| $U_G$ | Utility loss for a chair idling one minute by a gate | $0.0001\,U_m$ |
| $p_S$ | Storage cost of a wheelchair | \$50/yr |
| $\pi$ | Airline profit per passenger | \$4.51 |
# Wheelchair Passenger Needs

WPs comprise a proportion $\omega$ of the total passenger pool: 1.5% in 1996 [Conway 2001] and 1.6% in 2006, the starting year of our model.

A proportion $p_{\mathrm{inf}}$ of WPs inform our airline of their need before arrival; we assume $p_{\mathrm{inf}} = 95\%$. With $\omega = .015$ and $p_{\mathrm{inf}} = .95$, we use a binomial distribution to find a probability mass function for the arrival of unexpected WPs on a flight (Table 3).

There are two ways that a WP can miss a connecting flight:

- Late incoming flight: Roughly one-third of flights arrive late and about $5\%$

Table 3. Unexpected wheelchair passengers in a flight of 120 people.
| Unexpected Passengers | Probability (%) |
|-----------------------|-----------------|
| 0 | 91.39 |
| 1 | 8.23 |
| 2 | 0.37 |
| 3 | 0.01 |
are at least an hour late [U.S. Department of Transportation n.d.].

- Slow arrival of escorts: We try to minimize this risk.

# Intra-Airport Transportation

- The average fast walking speed is $250\mathrm{ft/min}$ (3 mph), but the average speed when the arms are immobilized (as when pushing a wheelchair) is only $180\mathrm{ft/min}$ (2 mph) [Gross and Shi 2001]. We assume that an escort walks at these speeds.
- An escort can operate only one wheelchair at a time. U.S. Dept. of Transportation guidelines discourage leaving WPs unattended. Hence, the escort takes a WP to the connecting flight and remains until the flight leaves.
- Airport customer service employees (escorts) earn on average \$11.80/h; the annualized cost with benefits per escort is $p_E = \$40{,}000$ [Bureau of Labor Statistics 2004]. Transport wheelchairs bought in large batches cost \$135/chair [Transport Wheelchairs 2005].
- Passengers who arrive more than $15\mathrm{min}$ late to their connecting flight are left behind; airlines wait just this long for delayed arriving flights.
- Escorts are in contact with one another via radio.

# Market Share and Delays

Wheelchair service is not just a legal responsibility; it is also a good idea from a customer-relations standpoint and enhances competitiveness for market share. A passenger who misses a connecting flight (due to poor wheelchair allocation or a delayed arriving flight) must wait (perhaps several hours or overnight) for the next outbound flight. Additionally, that wait could make that passenger and others miss a subsequent connecting flight.

# Multi-Concourse Airport Model

# Formal Definition

Let $A$ be an airport with $C$ concourses $c_{1},\ldots ,c_{C}$ and gates $\langle (1,1),\dots ,(C,k_C)\rangle$, where $k_{i}$ is the number of gates in concourse $c_{i}$. Further, let $E^{*}$ be the ratio of escorts to total WPs and $W^{*}$ be the ratio of wheelchairs to total WPs. Escorts and chairs are assigned by some strategy $\chi$.
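As a minimal sketch of this formal definition, an airport can be coded as a list of gate pairs $(i,j)$ with a two-level distance metric (cheap within a concourse, expensive between concourses). The distance constants below are illustrative placeholders, not values from the paper:

```python
def build_airport(gates_per_concourse):
    """Return the gate list for an airport given the gate count per concourse.

    Gate (i, j) is gate j of concourse i, as in the formal definition.
    """
    return [(i, j)
            for i, k in enumerate(gates_per_concourse, start=1)
            for j in range(1, k + 1)]

def distance(g1, g2, intra=500.0, inter=3000.0):
    """Illustrative metric: travel within a concourse scales with the
    gate-index gap; travel between concourses adds a large fixed cost.
    The 500 ft and 3000 ft figures are placeholders, not from the paper."""
    (i1, j1), (i2, j2) = g1, g2
    if i1 == i2:
        return intra * abs(j1 - j2)
    # walk to the concourse mouth, cross, then walk in
    return inter + intra * (j1 + j2)

airport = build_airport([3, 2])     # C = 2 concourses with k = (3, 2)
print(len(airport))                 # 5 gates
print(distance((1, 1), (1, 3)))     # 1000.0 (intra-concourse)
print(distance((1, 1), (2, 2)))     # 4500.0 (inter-concourse)
```

Any strategy $\chi$ can then be phrased as rules over this gate list and metric.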
Several factors in the airport system are beyond our control:

- $\omega$, the proportion of passengers who are WPs;
- Day, i.e., high- or low-traffic days;
- Year (passengers in years past 2006 have different demographics); and
- costs of wheelchairs ($p_W$) and of wages ($p_E$) for escorts.

We take these factors as exogenous to our model, so the only control variables are $\chi$, $W^{*}$, and $E^{*}$.

We seek to minimize cost by reducing both explicit costs (escort wages, wheelchair purchases) and implicit costs (lost business due to frequent delays).

The Multi-Concourse Airport Model (MCAM) relates the three control variables to explicit costs and total delays. Delays include both missed flights and late departures caused by missing passengers.

Let $C$ be daily cost and $D$ be the disutility of delays. We want a function $f$ such that

$$
f : (\chi , W^{*}, E^{*}) \longmapsto (C, D).
$$

The MCAM runs a Monte Carlo simulation for several different days. This process gives the expected daily delays and costs, so the results of the MCAM serve as a suitable proxy for $f$.

# Aggregating Delays and Disutility

The airline has a set policy that a plane will wait up to $x^{*}$ min for a passenger heading toward the gate. Increasing $x^{*}$ decreases delay due to missed flights but increases delay for passengers waiting aboard planes; similarly, lowering $x^{*}$ favors boarded passengers at the expense of late passengers.

The airline seeks an optimal value of $x^{*}$ that balances the discomfort of waiting passengers against the probability that the late passenger will arrive in time.

Let disutility for small unexpected delays be linear in time, so that if 120 passengers on a plane wait $15\mathrm{min}$, the total utility loss is proportional to the $120\times 15 = 1{,}800$ min of delay. Also, let the utility loss from missing a flight be $-U_{m} < 0$ and set time $t = 0$ to be the flight's planned departure time.
If a passenger is not at the gate at $t = 0$ but we know that they are on their way, then we wait up to $x^{*}$ min for them.

Let $L$ be a random variable for the lateness of our passenger; the probability that a late passenger arrives within the grace period is $P(L \leq x^{*} \mid L \geq 0)$. This means that the late passenger benefits by $U_{m} P(L \leq x^{*} \mid L \geq 0)$ in expectation, while the others expect to wait $E[T \mid 0 \leq T \leq x^{*}]$, since they will leave in at most $15 \, \text{min}$.

With $x^{*}$ chosen optimally, the lost utility from waiting equals the benefit to the late passenger. So, when $N$ passengers are waiting, optimality is achieved when

$$
N E[T \mid 0 \leq T \leq x^{*}] = U_{m} P(L \leq x^{*} \mid L \geq 0).
$$

Given our past experience, we assume that $x^{*} = 15$, so that

$$
U_{m} = \frac{N \times E[T \mid 0 \leq T \leq 15]}{P(L \leq 15 \mid L \geq 0)}.
$$

We determine the average $P(L \leq x^{*} \mid L \geq 0)$ and $E[T \mid 0 \leq T \leq x^{*}]$ from simulation results for an airport with a large enough supply of escorts and wheelchairs that every WP is immediately taken to their connecting flight (Table 4).

Table 4. Benefit of waiting 15 min for late passengers.
| Airport | Day | $P(L \leq 15 \mid L \geq 0)$ (%) | $E[T \mid 0 \leq T \leq 15]$ (min) |
|---------|-----|----------------------------------|------------------------------------|
| Midland | Low-Delay | 31.5 | 5.0 |
| Columbus | Low-Delay | 23.2 | 6.9 |
| St. Louis | Low-Delay | 19.8 | 8.5 |
| Midland | High-Delay | 25.9 | 2.5 |
| Columbus | High-Delay | 26.9 | 6.6 |
| St. Louis | High-Delay | 26.3 | 7.7 |
| Average | | 25.6 | 6.2 |
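As a quick arithmetic check, the column averages in Table 4 and the resulting calibration of $U_m$ (from the formula for $U_m$ above, using capacity 120 and load factor 0.695) can be recomputed:

```python
# Values copied from Table 4: (P(L<=15 | L>=0) in %, E[T | 0<=T<=15] in min).
rows = {
    ("Midland", "low"): (31.5, 5.0), ("Columbus", "low"): (23.2, 6.9),
    ("St. Louis", "low"): (19.8, 8.5), ("Midland", "high"): (25.9, 2.5),
    ("Columbus", "high"): (26.9, 6.6), ("St. Louis", "high"): (26.3, 7.7),
}
p_avg = sum(p for p, _ in rows.values()) / len(rows)   # 25.6 %
t_avg = sum(t for _, t in rows.values()) / len(rows)   # 6.2 min

# Effective number of waiting passengers: capacity 120 at load factor 0.695.
N = 0.695 * 120
U_m = N * t_avg / (p_avg / 100)
print(round(p_avg, 1), round(t_avg, 1), round(U_m))    # 25.6 6.2 2020
```

The result, roughly 2,000, matches the paper's calibration of $U_m$.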
An average plane has capacity 120 and load factor .695 (69.5% of seats occupied), so the effective $N$ is $120 \times .695 \approx 83$. We then have $U_{m} = (.695)(120)(6.2)/(.256) \approx 2{,}000$, which means that missing a flight is about 2,000 times as bad as one person waiting 1 min, a reasonable result.

# A Third Source of Disutility

Idle wheelchairs near gates are an inconvenience to passengers and a liability risk to airlines. Every minute that a wheelchair sits at a gate, it contributes disutility equal to $20\%$ of the disutility of a single individual being delayed 1 min.

# Aggregate Disutility

To combine the three disutilities, note that $1 \mathrm{~min}$ of flight delay affects the $N$ people waiting at the equivalent of $\frac{N}{2000} U_{m} = 0.0005 N U_{m}$, where $U_{m}$ is the disutility of missing a flight. Also, $1 \mathrm{~min}$ of a wheelchair idling by a gate provides disutility $20\%$ as large, or $0.0001 U_{m}$.

# The Strategy

The strategy $\chi$ governs the rules that escorts follow in making their decisions. These include:

- How do escorts choose which WP to pick up?
- How do escorts find a chair to use?
- What do escorts do after they have dropped off their WP?
- Where do escorts leave a wheelchair when they are done using it?

The MCAM tests three strategies, one random and two "intelligent."

# Random Strategy

When a WP arrives at the airport, a free escort is randomly chosen to shuttle them to the connecting gate. The escort stays with the wheelchair at the gate until the next assigned WP.

# Intelligent Strategies

The two intelligent strategies borrow their names from airline industry terminology: direct transfer and hub and spokes.

The airport knows in advance about most WPs. Each escort and each gate agent (the airline representative at a gate) has an ordered list of expected WPs.
The heuristic for ordering WPs waiting to be taken to connecting gates is

$$
H = (\text{time until flight leaves}) - (\text{time to reach gate via wheelchair}).
$$

If a WP is expected, an escort anticipates their arrival by waiting at the gate. (In our implementation, expected WPs are inserted into the waiting queue $20\mathrm{min}$ before their flight lands.) When an unexpected WP arrives, the gate agent at that gate radios over an open channel so that everyone can update their lists.

An escort who becomes free reports to the group. The WP at the top of the list is assigned to the closest free escort, who radios to find available wheelchairs, preferring one close to the WP or on the way to the WP.

Our intelligent strategies differ in what escorts do after shuttling a WP.

- Direct Transfer: This gate-based strategy runs all operations out of the gates of the airport. After an escort drops a WP off at gate $(i,j)$, the escort and chair remain at that gate until assigned another WP. For the next assignment, say at gate $(i',j')$, the escort radios to find a chair closer to $(i',j')$. The strategy spreads the wheelchairs out among the gates so that any gate likely has a wheelchair nearby. A disadvantage is unattended chairs near gates.

- Hub and Spokes: Each concourse has a wheelchair depot, where wheelchairs are stored. After an escort drops off a WP at gate $(i,j)$, the escort returns the wheelchair to the depot in concourse $i$. When escorts are idle and there are no WPs to be shuttled, the chairs wait in the depots (instead of at the gates). Wheelchair depots keep chairs out of high-traffic areas near the gates themselves, and escorts know that all available chairs are at depots.

# Long-Term Concerns

The MCAM simulates a single day of airport activities, but to choose a wheelchair/escort strategy we should consider long-term factors. Ideally, our model will take the data from the MCAM and use it to simulate several years of airline operation.
The problem is that several factors that are constant in the MCAM could change over 10 or 40 years.

# Aging Population

In 2000, one-sixth of the U.S. population was over 60 years old [U.S. Census Bureau 2005] and $72\%$ of wheelchair users were in this age group [Conway 2001]; so a person over 60 is about 13 times as likely to need a wheelchair as a person under 60.

The over-65 age group will grow by $40\%$ in the next 20 years [U.S. Census Bureau 2005]. We assume that in 2006 $1.6\%$ of passengers are WPs; using the Census Bureau's demographic data, we calculate the fraction of future air travelers who will be WPs.

It is not clear whether this growing older group will take proportionately more or fewer plane flights in the future (a question of saved income and free time vs. health). We assume a middle ground: the proportions of wheelchair users in the country and on flights are directly proportional. In our models, we use the appropriate $\omega$ for each year.

# Lower Profit Margins

In 2004 and 2005, airline earnings were reduced by high jet-fuel prices. If high prices persist, airlines will be forced to raise ticket prices (so sales will fall) or cut profit margins. We simulate this effect by using a smaller marginal profit per passenger in the long-run model than in the short-run model.

# Airline Competition Model

The MCAM simulates the flows of escorts, chairs, and WPs. But we are searching for a cost-minimizing strategy. To do this, we need to model the changing market share of our airline (based on customer satisfaction) and derive a profit function for its operations. Maximizing profit is equivalent to minimizing costs if we view lost future business as a cost.

The Airline Competition Model (ACM) uses the output of the MCAM to determine market share and profitability for a group of competing airlines.
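The "13 times as likely" figure in the aging-population discussion above follows directly from the two quoted shares (one-sixth of the population over 60, and 72% of wheelchair users over 60); a quick check of the arithmetic:

```python
# Relative wheelchair need of the over-60 group, from the shares quoted
# in the text.
pop_over60 = 1 / 6        # share of the population over 60 (in 2000)
users_over60 = 0.72       # share of wheelchair users who are over 60

rate_over60 = users_over60 / pop_over60              # need per capita, over 60
rate_under60 = (1 - users_over60) / (1 - pop_over60) # need per capita, under 60
relative_risk = rate_over60 / rate_under60
print(round(relative_risk, 1))   # 12.9, i.e., about 13
```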
# Market Share

A principal factor in an airline's long-run profitability is market share, the proportion of the market that the airline holds at an airport. After experiencing flight delays or missed flights due to limited availability of wheelchairs, WPs and non-WPs may defect to other airlines. We model defection as a function of the total disutility to an airline's passenger base.

Let there be $M$ airlines and let $N_{t}^{(i)}$ be airline $i$'s market share at time $t$, with $f_{t}^{(i)}$ the fraction of its customers who defect at the end of period $t$. Assuming that defecting passengers choose one of the $M - 1$ other airlines with equal frequency, the stochastic process for market share is

$$
\Delta N_{t}^{(i)} = N_{t + 1}^{(i)} - N_{t}^{(i)} = \left(\sum_{j \neq i} \frac{1}{M - 1} N_{t}^{(j)} f_{t}^{(j)}\right) - N_{t}^{(i)} f_{t}^{(i)}. \tag{1}
$$

Alternatively, we could suppose that defecting passengers distribute themselves among the $M - 1$ other airlines in proportion to airline market share:

$$
\Delta N_{t}^{(i)} = \left(\sum_{j \neq i} \frac{N_{t}^{(i)}}{1 - N_{t}^{(j)}} N_{t}^{(j)} f_{t}^{(j)}\right) - N_{t}^{(i)} f_{t}^{(i)}.
$$

Without data on how consumers choose airlines, we assume that ticket price is the overwhelming factor. Ticket pricing is complex but does not vary with market share: the giant Southwest and the tiny Vanguard Airlines offer comparable rates [Airline pricing data 2005]. Because of the importance of prices, we believe that (1) is more accurate.

If the total market grows by a proportion $r$, new passengers choose carriers so that market shares remain unchanged. We can express this assumption nicely using matrix notation.
Let $\mathbf{N}_{t}$ denote the vector of market shares and define the elements of the $M \times M$ matrix $\mathbf{A}$ by

$$
a_{ij} = \left\{ \begin{array}{ll} \frac{1}{M - 1}, & \text{if } i \neq j; \\ -1, & \text{if } i = j. \end{array} \right.
$$

This simplifies (1) to

$$
\mathbf{N}_{t + 1} = (\mathbf{I}_{M} + \mathbf{A}\mathbf{F}_{t}) \mathbf{N}_{t},
$$

where $\mathbf{F}_{t}$ is the matrix whose diagonal entries are the $f_{t}^{(j)}$ and whose off-diagonal entries are zero, and $\mathbf{I}_{M}$ is the $M\times M$ identity matrix. This formula iterates nicely to give the closed form

$$
\mathbf{N}_{T} = \left[ \prod_{t = 1}^{T} (\mathbf{I}_{M} + \mathbf{A}\mathbf{F}_{t}) \right] \mathbf{N}_{0}.
$$

The distribution of the multivariate random variable $\mathbf{F}_{t}$ is fixed in the short run, while the proportion of handicapped individuals remains constant. In the short run, therefore, $\mathbf{N}_{T}$ is (up to a constant) the product of $T$ identically distributed random matrices, each distributed as $(\mathbf{I}_{M} + \mathbf{A}\mathbf{F})$ (we drop the subscript on $\mathbf{F}$ because its distribution is independent of time in the short run).

# Passenger Defection

The rationale behind a profit-based model is to quantify the trade-off between the cost of accommodating WPs and the loss in market share associated with customer dissatisfaction. In the short term, an airline may be tempted to provide fewer accommodations, because the resulting effect on market share is not seen until the next period. In the long term, however, the market share of an airline providing poor service will suffer and its profit will fall. Moreover, short-run costs include the fixed cost of purchasing wheelchairs, while long-run costs are the smaller costs of chair maintenance and replacement.
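The market-share recursion $\mathbf{N}_{t+1} = (\mathbf{I}_M + \mathbf{A}\mathbf{F}_t)\mathbf{N}_t$ of (1) can be checked with a short simulation. The even-split update below is algebraically the same as multiplying by $(\mathbf{I}_M + \mathbf{A}\mathbf{F}_t)$; the 10% defection rate is an illustrative value, not the paper's data:

```python
def step(shares, defect):
    """One period of N_{t+1} = (I + A F_t) N_t: airline i loses a fraction
    defect[i] of its share, and its defectors split evenly among the
    M - 1 rival airlines (the 1/(M-1) off-diagonal entries of A)."""
    M = len(shares)
    lost = [n * f for n, f in zip(shares, defect)]
    return [n - lost[i] + sum(lost[j] for j in range(M) if j != i) / (M - 1)
            for i, n in enumerate(shares)]

# Three airlines starting from equal shares; airline 0 loses 10% of its
# customers per period (illustrative defection rates).
N = [1 / 3, 1 / 3, 1 / 3]
for _ in range(5):
    N = step(N, [0.10, 0.0, 0.0])
print([round(x, 3) for x in N], round(sum(N), 6))
# [0.197, 0.402, 0.402] 1.0
```

Note that shares always sum to 1: the columns of $\mathbf{A}$ sum to zero, so defectors are only redistributed, never created or destroyed.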
Our profit function measures only the profit gained (or lost) from the wheelchair allocation strategy.

Let there be $J$ different types of days, each with its own distribution of delays. For example, days before holidays and weekends probably have longer delays. A day of type $j$ experiences a defection of $f_{t}^{(i,j)}$ from total market share, based solely on the total disutility of the passengers of airline $i$ traveling on that day. We also assume that traffic is constant over days of each type but varies across types; these traffic differences affect the underlying value of $f_{t}^{(i,j)}$. Let there be $V_{j}$ days of type $j$ per year.

The MCAM simulates the total disutility of passengers given the parameters $(\chi, W, E)$, and we obtain $f_{t}^{(i,j)}$, the daily defection rate. This gives a distribution for the random variable $g_{t}^{(i,j)} = \log \left(1 - f_{t}^{(i,j)}\right)$ over the various days $t$, with mean and variance

$$
g_{t}^{(i, j)} \sim \left(\mu^{(i, j)}, (\sigma^{(i, j)})^{2}\right).
$$

The retention rate for a day is $1 - f_{t}^{(i,j)}$, so total retention for a year is given by

$$
1 - F_{t}^{(i)} = \prod_{j = 1}^{J} \prod_{v = 1}^{V_{j}} \left(1 - f_{v}^{(i, j)}\right).
$$

Taking logarithms yields

$$
\log \left(1 - F_{t}^{(i)}\right) = \sum_{j = 1}^{J} \sum_{v = 1}^{V_{j}} g_{v}^{(i, j)}.
$$

By the Central Limit Theorem, if the values of $V_{j}$ are sufficiently large (they are 35 and 330 for our program), we have the approximate distribution

$$
\sum_{v = 1}^{V_{j}} g_{v}^{(i, j)} \sim \mathrm{N} \left[ V_{j} \mu^{(i, j)}, V_{j} \left(\sigma^{(i, j)}\right)^{2} \right],
$$

where $\mathrm{N}[\mu, \sigma^2]$ denotes the normal distribution. In our implementation, we use random draws for the realizations of $g_v^{(i,j)}$.
This implies that $F_t^{(i)}$ is approximately distributed as

$$
F_{t}^{(i)} \sim 1 - \exp \left(\sum_{j = 1}^{J} \mathrm{N} \left[ V_{j} \mu^{(i, j)}, V_{j} \left(\sigma^{(i, j)}\right)^{2} \right]\right).
$$

Our profit model is constructed to hold all factors constant except the wheelchair strategy. Because of this feature, when a WP misses a flight, it is always the result of poor wheelchair allocation and not of another factor. We assume that missing a flight causes a WP to defect from the airline with probability $p_d = 1/4$. This high probability is reasonable, since to the WP it appears as if the airline has neglected them by not shuttling them to their connecting gate.

On day $t$, airline $i$ has $P_{t}N_{t}^{(i)}$ passengers in its market share, of whom $n_{t}^{(i)}$ are traveling on day $t$ with airline $i$.

The probability of not defecting after missing a flight is $1 - p_d$. We assume that the probability of not defecting is multiplicative in the number of missed flights; that is, after missing $m$ flights, there is a $1 - (1 - p_d)^m$ probability of defection.

We also assume that the disutility $D_{t}$ is uniformly distributed across all passengers, so each passenger has disutility $u_{t} = D_{t} / n_{t}^{(i)}$, measured as a multiple of $U_{m}$, the disutility of missing a flight. An individual therefore defects with probability $1 - (1 - p_d)^{u_t}$.

By the Law of Large Numbers, the total number of individuals defecting on a particular day approximately equals the expected value of this random variable. We therefore have

$$
f_{t}^{(i, j)} = n_{t}^{(i)} \left[ 1 - \left(1 - p_{d}\right)^{u_{t}} \right].
$$

# Implementation of Delay Distributions

In our implementation, we use actual 2005 daily data on average delays for Southwest Airlines. At each of the three airports, about $10\%$ of days have particularly high delays [Southwest Airlines 2006].
So we categorize days as high-delay (the top $10\%$, 35 days) or low-delay (the bottom $90\%$, 330 days).

Southwest Airlines reports an average load factor of $69.5\%$ (the percentage of occupied seats on a flight). Since high-delay days are often during peak travel times, we assume that flights operate at full capacity on such days; a weighted-average calculation then gives load factors of $100\%$ and $66\%$ on high- and low-delay days, respectively.

# Present Value of Profits

We assume that costs and airline profits per passenger grow at a constant annual rate $r_C$, and we let $r_D$ be the nominal interest rate.

Let $\Pi_t^{(i)}$ denote the real (adjusted for inflation) profit in year $t$ for airline $i$. If costs and profits per unit of good grow at a constant inflation rate, then $\Pi_t^{(i)}$ can be determined assuming zero inflation. Inflation is instead included in the discount factor for the long-term calculation of firm profit.

So, airline $i$ maximizes the expected present value of profits given the discount rate $\delta$:

$$
\Pi^{(i)} = \sum_{t = 0}^{T} \delta^{t} \Pi_{t}^{(i)}.
$$

We let $T = 40$ and make projections out to 2046. The current inflation rate is approximately $2\%$, so we estimate $r_C$ as $2\%$; $r_D$ is the forward risk-free rate, which we assume to be constant and equal to $4.5\%$. Thus

$$
\delta = \frac{1}{1 - r_{C} + r_{D}} = \frac{1}{1.025} \approx 0.975.
$$

# The Profit Function

Profit related to wheelchair policy can be split into the following contributing factors:

- total profit from passengers, which is proportional to market share;
- wheelchair purchase and replacement costs;
- wheelchair storage costs; and
- escort salary and benefits payments.
We derive from these contributing factors a formula for profit:

$$
\Pi_{t}^{(i)} = \left\{ \begin{array}{ll} P_{t} N_{t}^{(i)} \pi - p_{W} R_{t} - p_{S} W_{t} - p_{E} E_{t}, & \text{for } t > 0; \\ P_{t} N_{t}^{(i)} \pi - (p_{W} + p_{S}) W_{t} - p_{E} E_{t}, & \text{for } t = 0, \end{array} \right.
$$

where:

$P_{t} =$ total number of airline passengers in the market,

$N_{t}^{(i)} =$ market share of airline $i$,

$\pi =$ profit per passenger,

$R_{t} =$ number of wheelchairs replaced in year $t$,

$p_W =$ price of a wheelchair,

$p_{S} =$ annual storage cost of a wheelchair (\$50), and

$p_{E} =$ annual cost of an escort.

Earlier, $W^{*}$ and $E^{*}$ were the proportions of chairs and escorts to the population, but here we need $W_{t}$ and $E_{t}$, the actual numbers of chairs and escorts used.

The cost in the first period differs because there is an initial purchase cost for wheelchairs. In later periods, wheelchairs need replacement at $20\%$ per year, so at the end of year $t$ we have $0.8W_{t}$ usable wheelchairs. Due to changes in market share, we may desire more or fewer wheelchairs in year $t + 1$; the target number of chairs is

$$
W_{\text{target}, t + 1} = W^{*} P_{t} N_{t}^{(i)},
$$

and similarly, $E_{\mathrm{target},t + 1} = E^{*}P_{t}N_{t}^{(i)}$. We won't throw away good chairs, so the number of chairs we use in year $t + 1$ is

$$
W_{t + 1} = \max \{0.8 W_{t}, W_{\text{target}, t + 1} \}.
$$

This gives the necessary number of chair replacements:

$$
R_{t + 1} = \max \{0, W_{\mathrm{target}, t + 1} - 0.8 W_{t} \}.
$$

# Implementation

The computer simulation of the ACM relies heavily on the results produced by the MCAM. We take $M$ airlines, each with a different $(\chi, W^{*}, E^{*})$, and for a given year we estimate their total costs and total disutilities.
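The fleet-update rules for $W_{t+1}$ and $R_{t+1}$ above can be sketched directly; the demand targets in the example run are made-up values for illustration:

```python
def update_fleet(W_t, W_target):
    """Apply the paper's fleet rules: 20% of chairs wear out each year,
    good chairs are never discarded, and purchases only top up to the
    target. Returns (W_{t+1}, R_{t+1})."""
    usable = 0.8 * W_t                    # chairs surviving the year
    W_next = max(usable, W_target)        # never discard good chairs
    R_next = max(0.0, W_target - usable)  # replacements purchased
    return W_next, R_next

# Illustrative run: demand holds, grows, then shrinks (targets are made up).
W = 100.0
for target in (100, 110, 70):
    W, R = update_fleet(W, target)
    print(round(W, 1), round(R, 1))
# 100.0 20.0
# 110.0 30.0
# 88.0 0.0
```

The last step shows the asymmetry: when the target falls below the surviving stock, no chairs are bought, and the surplus is simply kept.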
Starting with the first year, the ACM simulates the operation of airline $i$ by using its strategy (which remains fixed through the end year, $T$) as the input to the MCAM. Since high-delay and low-delay days affect airport operations differently, we simulate 35 high-delay days and 330 low-delay days in each year.

This gives an output of total disutility, which determines the gain or loss in market share via (1). (In the first year, we start every airline with an equal market share.) We can then calculate each company's profit for year $t$ and use the updated market shares in the calculation for year $t + 1$.

This simulation runs for 40 years, and the result is a profit vector (across time) for each airline. We discount future periods at the rate $\delta$ and compare the present values that the various strategies $(\chi, W^{*}, E^{*})$ produce. Recall that our profit function does not determine the airline's actual profit (which involves buying planes, paying pilots and flight attendants, etc.) but only the profit related to wheelchair use in airports. The relative value of discounted profit is how we gauge which strategy is most attractive.

# Case Study

Southwest Airlines reported a 2005 profit of \$313 million from 70.9 million passengers, or \$4.41/passenger. However, this number is already reduced by costs included in our profit function, namely wheelchair and escort costs, which we estimate at \$0.10/passenger; adding these back gives the $\pi = \$4.51$ of Table 2.

The data for our case study are quite extensive, including flight times, airport layouts, load factors, and average delays per airport, for airports in Midland, TX; Columbus, OH; and St. Louis, MO.

# Results and Observations

# Multi-Concourse Airport Model

We applied the random, direct-transfer, and hub-and-spokes strategies to the case-study data, using 5 escorts with 5, 10, or 20 wheelchairs, and 10 escorts with 20 chairs.
There are several notable results:

- Low-delay days and high-delay days give about the same disutility across all strategies when only 5 escorts are used. When 10 escorts are used, high-delay days give an average of 40 more disutility equivalents than low-delay days. A possible explanation is that 5 escorts are kept busy on both low- and high-delay days, but on a low-delay day 10 escorts are sufficient to handle all of the wheelchair traffic. On a high-delay day, some very late passengers will miss their flights even with 10 escorts available.
- The random strategy performs nearly identically to direct transfer under all chair/escort combinations and both delay types. This could be because the direct-transfer strategy distributes wheelchairs so sparsely (at all the gates) that the assignment is essentially random.
- The hub-and-spokes strategy is less effective than the other two strategies when 5 escorts are used but more effective with 10 escorts. The hub-and-spokes algorithm streamlines wheelchair movement, so without adequate personnel it may not be efficient. In essence, the hub-and-spokes strategy requires a base number of escorts to be effective.

# Airline Competition Model

For each of the three strategies and four combinations of chairs and escorts, we calculate the market share (reported as $N_{\text{year}}$) as well as the final profit.

The hub-and-spokes strategy with 10 escorts and 20 chairs earns the top market share in 2046 (11.1%), while the same strategy with 5 escorts and 20 chairs does the worst (7.2%). We believe that this is because the hub-and-spokes system is labor-intensive, since escorts must walk farther in taking chairs back to their depots.

In terms of profit, the strategy (Hubs, 20, 10) wins again, by a lot (106 vs. 101.9 for second-best). Interestingly, the strategy (Random, 20, 10), which had the 3rd-best market share, also had the 2nd-worst profit.
The winner is the hub-and-spokes configuration with 20 chairs and 10 escorts.

# Analysis of the Models

# Strengths of Model

The model uses actual data, including flight-delay distributions for all 365 days in 2005, to rank wheelchair policies in terms of market share and long-term profits. Over the long term, it also accommodates projections of changes in the proportion of WPs in the airline passenger market.

The model realistically captures uncertainty in the prior information about the need for wheelchairs.

The model converts total passenger disutility into a defection rate, which captures the effect of quality of service in a competitive market. The dynamics show both the long-term and short-term effects of the trade-offs between budgeting and loss/gain in market share.

# Weaknesses of Model

Growth in passengers depends on many factors, including changing demographics, which may depart from predicted values.

# Conclusion

We recommend the hub-and-spokes configuration with two escorts per concourse and two wheelchairs per escort, to maximize both gain in market share and long-term profit. These suggestions are a fast and effective formula for higher profits. After all, at the end of the day, all we want is to make Epsilon greater than Delta.

# References

Airline pricing data. 2005. www.expedia.com.
Bureau of Labor Statistics. 2004. Occupational employment statistics, November 2004. http://www.bls.gov/oes/current/oes396032.htm.
Conway, Bernie. 2001. Development of a VR system for assessing wheelchair access. http://www.fp.rdg.ac.uk/equal/Launch_Posters/equalsslidesconway1/sld001.htm.
Federal Aviation Administration. 2003. Airport layouts for the top 100 airports. http://www.faa.gov/ats/asc/publications/03_ACE/APP_D.PDF.
Gross, Ralph, and Jianbo Shi. 2001. The CMU Motion of Body (MoBo) Database. Pittsburgh, PA: Robotics Institute, Carnegie Mellon University.
Southwest Airlines. 2006. 2005 Flight Data: MAF, CMH, STL.
http://www.southwest.com.
Transport Wheelchairs. 2005. 1-800-Wheelchair.com.
U.S. Census Bureau. 2005. 2005 Population projections. http://www.census.gov/.
U.S. Department of Transportation. 2003. Nondiscrimination on the basis of disability in air travel. http://airconsumer.ost.dot.gov/rules/rules.htm.
______ . n.d. Airline on-time statistics and delay causes. http://www.transtats.bts.gov/OT_Delay/OT_DelayCause1.asp.

![](images/427c43904554c97203aeee22a8457be9d534170946f70e22c5d09140bd34da44.jpg)
Team members Benjamin Conlee, Neal Gupta, and Christopher Yetter.

# Minimization of Cost for Transfer Escorts in an Airport Terminal

Elaine Angelino
Shaun Fitzgibbons
Alexander Glasser
Harvard University
Cambridge, MA

Advisor: Michael Brenner

# Summary

We minimize the cost for Epsilon Airlines to provide a wheelchair escort service for transfers in an airport terminal. We develop probabilistic models for the flow of flight traffic in and out of terminal gates, for the number of passengers on a flight who require service, and for transfer destinations within the terminal.

We develop an economic model to quantify both the short- and long-term costs of operating such a service, including the salaries of escorts, the maintenance and storage of wheelchairs, and the costs incurred when late escorted transfers delay a departing flight.

We develop a simulated annealing (SA) algorithm that uses our economic models to minimize cost by optimizing the number and allocation of escorts to passengers. Having indexed the space of all possible escort allocations so that it is accessible to our SA, we selectively search that space for a global optimum. Although the space is too large to search exhaustively, our simulations suggest that the SA approximates the global optimum effectively.
+ +Using current airport and airline data, we break our analysis down into short- and long-term costs, simulating escort service operation under dynamic airport conditions, varying air traffic, airport size, and the fraction of traveling population that requests wheelchair-aided transfer (simulating a greater future abundance of elderly travelers). + +The UMAP Journal 27 (3) (2006) 349-365. ©Copyright 2006 by COMAP, Inc. All rights reserved. Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice. Abstracting with credit is permitted, but copyrights for components of this work owned by others than COMAP must be honored. To copy otherwise, to republish, to post on servers, or to redistribute to lists requires prior permission from COMAP. + +# Introduction + +We describe, quantify, and optimize a cost-effective approach to wheelchair-aided escorted transfers at Epsilon Airlines. + +We investigate the airport layout, particulars of flight travel and passengers, and the economics of the escort service. We use a simulated annealing (SA) algorithm to find a cost-optimized solution, so that Epsilon's many satisfied customers will continue to say: "Epsilon will always be in my neighborhood!" + +# Definitions and Key Terms + +- An airport terminal is an ordered pair $(\tau, d_{\text{terminal}})$ , where $\tau$ is an ordered collection of concourses each of which is an ordered collection of gates, and $d_{\text{terminal}}$ is a metric defining the distance between any two gates of the terminal (Figure 1). +- A concourse is an ordered collection of gates. +- A gate is an element of a concourse at which arrivals and departures take place. 
A gate's position in a terminal is determined by an ordered pair (concourse#, gate#).
- A wheelchair passenger (WP) is a passenger who arrives at a gate, has a connecting flight at another gate, and who can move between gates only by means of a wheelchair pushed by an airline escort. A WP normally notifies the airline in advance of the need for an escort.
- An escort is an airline employee who wheels WPs to connecting flights, with salary per shift of $P_{E}$.
- A shift is an 8-hour period during peak operating hours of the terminal.
- A transfer job is an ordered 4-tuple

$$
\left(\text{concourse} \times \text{gate}_{\text{arr}}, \ \text{time}_{\text{arr}}, \ \text{concourse} \times \text{gate}_{\text{dep}}, \ \text{time}_{\text{dep}}\right)
$$

describing the time and place of arrival and of departure of a WP.

- The delay time of a departure is the time difference between the actual departure time and the scheduled departure time of a given flight, given that the actual departure time is later. The actual departure time is when the last WP booked on the flight arrives at the departure gate.
- A shift's transfer schedule is the set of escort-required transfers that are scheduled throughout the shift. New WP announcements and unexpected non-escort-related delays cause the transfer schedule to be updated throughout the day.

- An escort job list is the ordered set of transfer jobs performed by a particular escort over the length of the shift.
- An escort allocation algorithm is a general method of determining the number of escorts and specifying the escorts' behavior throughout a shift.

# Model Assumptions and Key Concepts

# Geometry of an Airport Terminal

We demonstrate our optimization over one airport terminal geometry (with a varying number of concourses).
We represent a terminal by the union of a central hub with the Cartesian product of $n$ concourses and $m$ gates per concourse: $\{0\} \cup \{1, \dots, n\} \times \{1, \dots, m\}$ (Figure 1). We denote the distance from the start of a concourse to its first gate, as well as the distance between any two adjacent gates of a concourse, by $r_g$, and the distance from the central hub to the start of each concourse by $r_h$. The distance between any two non-hub points $(n_1, m_1)$ and $(n_2, m_2)$ of a terminal is given by the metric

$$
d_{\mathrm{terminal}} = \delta(n_1, n_2) \left| r_g (m_1 - m_2) \right| + \left(1 - \delta(n_1, n_2)\right) \left| r_g (m_1 + m_2) + 2 r_h \right|,
$$

where $\delta(n_1, n_2)$ is the Kronecker delta.

![](images/10c74b1b95483171b06049717d038964ff6ae3060986383cfafca2cd8d1af6df.jpg)
Figure 1. Terminal map of Miami International Airport [n.d.] and a schematic 2-D diagram of our essentially 1-D representation of an airport terminal.

![](images/fcbfc27f2cf20140c7fb201c154bb54a27b357fb5c491d14c8fdd6d30a5780b5.jpg)

# Flight Specifications

# Flow of Air Traffic at Terminal

- We take as departure time the latest time that a passenger can board a flight. We define a parameter $t_{\mathrm{sit}}$ to be the time that a plane sits at a gate after boarding and before takeoff.

- Every arrival flight becomes a departure flight at the same gate some fixed time later, and we call this fixed interval of time the turnover time $t_{\mathrm{to}}$. We also call the departure flight incident to the arrival flight.
- We define a parameter $O_G$, a terminal's gate occupancy, as the fraction of terminal gates at which planes are docked at any given time.
- We describe the flow of flight traffic through a terminal via five parameters:

- the number $n$ of concourses per terminal,
- the number $m$ of gates per concourse,
- the sit time $t_{\mathrm{sit}}$,
- the turnover time $t_{\mathrm{to}}$, and
- the terminal gate occupancy $O_G$.
The time between terminal arrivals is

$$
t_{\mathrm{gap}} = \left\lceil \frac{t_{\mathrm{to}} + t_{\mathrm{sit}}}{m n O_G} \right\rceil.
$$

- We generate a terminal's daily flight schedule as follows: We begin with no airplanes docked at the terminal, and let them arrive one at a time, each a time $t_{\mathrm{gap}}$ apart, randomly assigning them to free gates as they come in. We generate a schedule of departures by having each arrival flight depart a time $t_{\mathrm{to}}$ after arrival. We run this simulation for 12 simulated hours and select the middle 8 to form our terminal shift schedule.

# Delays

- Arrival delays occur according to a normal distribution with mean and standard deviation $(\bar{x}_{\mathrm{delay}}, \sigma_{\mathrm{delay}})$, though we consider all delays under 15 min to be negligible, following the practice of most available flight delay statistics [U.S. Department of Transportation n.d.].
- A departure flight incident to a delayed arrival flight is delayed by the same amount. However, we assume that there are no other causes of departure delay (such other delays are tangential to the problem of transfer escorts).

# Transfer Specifications

# WPs per Flight

- We fix a parameter $P_F$ as the number of passengers per flight.
- We let a parameter $S_{\text{transfer}}$ be the fraction of passengers who transfer from a flight upon arrival, and a parameter $S_{\text{wheelchair}}$ be the fraction who request wheelchair assistance. Therefore, the probability that a given passenger requires wheelchair service (the probability of a WP) is $P(\mathrm{WP}) = S_{\mathrm{transfer}} \times S_{\mathrm{wheelchair}}$. (We neglect passengers who require wheelchair service but do not transfer.) Given this probability, we determine the probability of $r$ WPs on a flight, assuming a binomial distribution:

$$
P(r) = \frac{P_F!}{r! \, (P_F - r)!} \, P(\mathrm{WP})^r \, [1 - P(\mathrm{WP})]^{P_F - r}. \tag{1}
$$

We cap $r$ at 3, since the probability of more is negligible.

# Allowable Transfers

- Given a schedule of $n$ transfers, and a general escort allocation algorithm, we define the $i$th component of a delay-time $n$-vector $\vec{dt}$ to be the delay time of the $i$th transfer.
- We fix a buffer time $t_{\text{buffer}}$ such that no airline will book a transfer time less than $t_{\text{buffer}}$.
- We fix a greatest allowable transfer time $t_{\max}$, to capture the fact that the transfer time between flights is not usually very long.
- The transfer time is uniformly distributed between $t_{\text{buffer}}$ and $t_{\max}$.

# Method for WP Assignment

For every flight, we choose the number of WPs according to (1). To each WP, we assign a randomly chosen occupied terminal gate as the destination for the transfer. If the departure time of the flight at that gate falls within the allowable transfer interval, we schedule the transfer; otherwise, we search at random for a new terminal gate.

# At the Gate

- For simplicity, we assume immediate deplaning of all passengers upon arrival.
- An escort may drop off a WP at a gate any time before departure. That is, WPs can board without the escort and the escort's wheelchair.
- The time that it takes a WP to get in and out of a wheelchair is negligible.

# Traversing the Terminal

- An escort who completes a transfer job moves directly toward the arrival gate of the next job, even if required to wait there for the flight's arrival. An escort always takes the shortest route and has complete informational contact with the manager assigning jobs.

- Escorts have two walking speeds: $v_{\text{walk}}$ (with an empty wheelchair) and $v_{\text{wheel}} = \frac{3}{4} v_{\text{walk}}$ (with an occupied wheelchair).
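The geometry and walking-speed assumptions above can be collected into a short sketch (the function and constant names are ours, with values taken from Table 1; this is illustrative, not the team's code):

```python
# Sketch of the terminal metric and escort travel time (our naming).
# Gates are (concourse, gate) pairs with concourse >= 1 and gate >= 1;
# R_G is the gate spacing and R_H the hub-to-concourse distance.

R_G = 30.0                 # m, distance between adjacent gates (Table 1)
R_H = 30.0                 # m, hub to start of a concourse
V_WALK = 1.3               # m/s, escort with an empty wheelchair
V_WHEEL = 0.75 * V_WALK    # m/s, escort pushing an occupied wheelchair

def d_terminal(a, b):
    """Distance between two gates a = (n1, m1) and b = (n2, m2)."""
    (n1, m1), (n2, m2) = a, b
    if n1 == n2:                        # same concourse: walk along it
        return abs(R_G * (m1 - m2))
    # different concourses: out to the hub and back in
    return abs(R_G * (m1 + m2) + 2 * R_H)

def job_time(arr_gate, dep_gate):
    """Seconds to wheel a WP from its arrival gate to its departure gate."""
    return d_terminal(arr_gate, dep_gate) / V_WHEEL
```

For example, two gates on the same concourse separated by two gate positions are $2 r_g = 60$ m apart, while gate 2 of concourse 1 and gate 1 of concourse 2 are $r_g(2+1) + 2r_h = 150$ m apart.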
# Wheelchairs and Wheelchair Storage

A wheelchair in the hands of an escort presents no liability risk to other terminal traffic. Therefore, we stipulate that an escort walks around with a wheelchair at all times.

# Economic Model of an Escort Service

# Associated Costs

- Flight delays: If there are not enough escorts, our service will cause flight delays.
- Salary of escorts.
- Maintenance of wheelchairs.
- Storage of wheelchairs: Wheelchairs cannot be left lying haphazardly about a terminal before or after a shift—they pose a liability risk. So we stipulate that outside of the escort shift, wheelchairs are stored in a storage facility.

# Quantification of Escort and Wheelchair Costs

- Let $K$ be the number of escorts and $P_E$ be an escort's pay per shift.
- Let $W = K$ be the number of wheelchairs and let $P_W$ be their maintenance cost per shift.
- Let $A_W$ be the area that a wheelchair takes up in storage, and let $R_A$ be the daily rent of airport storage space per square foot.

# The Short-Term Dollar Cost of Delay Time

- In the short term, delay time has no effect on ticket sales.
- Escorts are paid on an annual basis, independent of extra hours logged due to flight delay. Therefore, delay times incur no overtime costs.
- The only short-term cost of flight delay, therefore, is the high cost of operating the plane during the delay.
- Therefore, given the operating cost $C_{\mathrm{pl}}$ of a plane (in dollars per minute) and the delay time $dt$ of a flight (in minutes), the short-term cost of a flight delay of duration $dt$ is $C_{\mathrm{pl}} dt$.

# The Long-Term Dollar Cost of Delay Time

- We attribute the long-term cost of delay time to two factors: the added operating cost of planes, and a long-term reduction in ticket sales.
- We quantify the reduction in ticket sales.
Consider the standard Microeconomics 101 two-good problem: A consumer is faced with purchasing $t_{\epsilon}$ flight tickets on Epsilon Airlines or $t_{A}$ flight tickets on Airline A. Assume that Epsilon Airlines and Airline A are indistinguishable except for ticket prices $(P_{\epsilon}, P_{A})$ and total average delay times per shift $(D_{\epsilon}, D_{A})$. The dollar-utility function of a consumer is

$$
U(t_{\epsilon}, t_{A}) = E_{\mathrm{pa}} (t_{\epsilon} + t_{A}) - P_{\epsilon} t_{\epsilon} - P_{A} t_{A} - C_{\mathrm{pa}} (D_{\epsilon} t_{\epsilon} + D_{A} t_{A}),
$$

where

- $t_i$ is the number of airline $i$ tickets purchased per shift;
- $P_{i}$ is the price of an airline $i$ ticket;
- $E_{\mathrm{pa}}$ is the dollar utility that a consumer receives from a flight (assumed to be the same for all flights, for simplicity);
- $C_{\mathrm{pa}}$ is the dollar value of passenger time, in dollars per unit time; and
- $D_{i}$ is the average WP-related delay time per shift of airline $i$.

Optimizing this dollar utility with respect to the $t_i$s, we find:

$$
\frac{\partial U}{\partial t_{\epsilon}} = 0 \Longrightarrow P_{\epsilon} = E_{\mathrm{pa}} - C_{\mathrm{pa}} D_{\epsilon}, \qquad \frac{\partial U}{\partial t_{A}} = 0 \Longrightarrow P_{A} = E_{\mathrm{pa}} - C_{\mathrm{pa}} D_{A}.
$$

Note that

$$
D_{\epsilon} = \frac{\sum_{i} dt_{i}}{F_{\mathrm{shift}}},
$$

where the index $i$ runs over all the delay times of all transfers in a shift, and $F_{\mathrm{shift}}$ is the number of flights per shift. Now, recalling that $P_F$ is the number of passengers per flight, we find that

$$
P_F F_{\mathrm{shift}} P_{\epsilon} = E_{\mathrm{pa}} P_F F_{\mathrm{shift}} - C_{\mathrm{pa}} P_F \sum_{i} dt_{i}.
$$

Since the product on the left side of the equation is total revenue per shift, we can associate the summands on the right with all contributing subrevenues and subcosts. (These quantities denote the contribution of single shifts to long-term revenue, after ticket prices have responded to changing market demand and market demand has responded to airline conditions—delay times, etc.) The effect of transfer delay times $\{dt_{i}\}_{i}$ per shift on long-term revenue is thus $-C_{\mathrm{pa}} P_F \sum_i dt_i$.

# Summary: The Economic Model

By the above arguments, we may finally write the short- and long-term expense functions of our escort service per shift:

$$
E_{\mathrm{st}} = K P_E + K P_W + K A_W R_A + C_{\mathrm{pl}} \sum_{i} dt_{i},
$$

$$
E_{\mathrm{lt}} = K P_E + K P_W + K A_W R_A + C_{\mathrm{pl}} \sum_{i} dt_{i} + C_{\mathrm{pa}} P_F \sum_{i} dt_{i}.
$$

# Expectations of Our Model

We expect our model to exhibit the following behaviors:

- It should optimize allocation of escorts by minimizing costs.
- As passenger traffic increases, the number of escorts should increase.
- The number of escorts should increase with the number of concourses.
- Since long-term costs affect ticket sales but short-term costs do not, the number of escorts that optimizes long-term budget concerns should exceed the number that optimizes short-term budget concerns.
- The number of escorts should increase with life expectancy.

# Annealing-Based Optimization

We consider the complete-information case where the transfer schedule is fully specified at the beginning of a shift.

# Reduction to an Ordering Problem

We use the assumptions about escorts to reduce the space of escort deployments to a simple permutation group.
In this formulation, with just a single escort, the problem of finding an optimal allocation is equivalent to finding an optimal hamiltonian path (a path in an undirected graph which visits each node exactly once) through a fully connected graph (one in which each pair of nodes is connected by an edge).

We define a job node as a transfer job indexed in a way convenient for our algorithm. It is characterized by arrival time $(t_1)$, arrival gate $(x_1)$, departure time $(t_2)$, and departure gate $(x_2)$. We index all this information by two numbers: node type (1 for a job node—we will see later that other node types are required to solve the multi-escort problem) and its position in the transfer schedule. We

Table 1.
Variables and parameters.

| Variable | Meaning | Estimate |
| --- | --- | --- |
| $n$ | Number of concourses per terminal | 2 |
| $m$ | Number of gates per concourse | 20 |
| $O_G$ | Average terminal gate occupancy | 50% |
| $K$ | Number of escorts hired | 2 |
| $W = K$ | Number of wheelchairs | 2 |
| $S_{\text{wheelchair}}$ | Fraction of passengers who require a wheelchair | 0.4% |

| Parameter | Definition | Value | Units | Source |
| --- | --- | --- | --- | --- |
| $P_E$ | Escort salary per shift | 80 | $/shift | (1) |
| $r_g$ | Distance between adjacent gates | 30 | m | (2) |
| $r_h$ | Distance between hub and concourse | 30 | m | (3) |
| $t_{\text{sit}}$ | Time a plane remains at gate after boarding | 10 | min | (4) |
| $t_{\text{to}}$ | Time between arrival and incident departure | 60 | min | (5) |
| $\bar{x}_{\text{delay}}$ | Mean flight delay time | 35.6 | min | (6) |
| $\sigma_{\text{delay}}$ | Standard deviation of flight delay time | 24.4 | min | (7) |
| $P_F$ | Passengers per flight | 90 | | (8) |
| $S_{\text{transfer}}$ | Fraction of passengers who transfer | 0.5 | | (9) |
| $t_{\text{buffer}}$ | Minimum allowable transfer time | 30 | min | (10) |
| $t_{\max}$ | Maximum allowable transfer time | 120 | min | (11) |
| $v_{\text{walk}}$ | Average human walking speed | 1.3 | m/s | (12) |
| $v_{\text{wheel}}$ | Speed at which escort wheels full wheelchair | 1 | m/s | (13) |
| $P_W$ | Maintenance cost of wheelchair per shift | 1 | $ | (14) |
| $A_W$ | Floor area that a wheelchair takes up | 9 | sq ft | (15) |
| $R_A$ | Daily rent of airport commercial real estate | 2 | $/sq ft | (16) |
| $C_{\text{pl}}$ | Operating cost of a plane | 1495 | $/h | (17) |
| $C_{\text{pa}}$ | Value of passenger time | 44 | $/h | (18) |

# Sources for Parameter Values
- (1) http://www.avjobs.com/table/airsalry.asp
- (2) Half the Boeing-747 wingspan (since gates alternate sides of the concourse): http://airportbusiness.cygnus.proteus.com/article/article.jsp?id=1291&siteSection=1
- (3) Estimated to be the same as (2)
- (4) Approximate time for flight-attendant preparation, from personal experience
- (5) Approximate time to fuel a Boeing-747: http://www.uk-trucking.net/?page=news\mode/view&id=113
- (6) http://news.bbc.co.uk/1/hi/business/1833213.stm
- (7) http://news.bbc.co.uk/1/hi/business/1833213.stm
- (8) www.faa.gov/library/reports/delay_analysis/media/DCOS1995.doc
- (9) Gatersleben, Michel R., and Simon W. van der Weij. 1999. Analysis and simulation of passenger flows in an airport terminal. In Simulation-A Bridge to the Future: Proceedings of the 31st Conference on Winter Simulation, vol. 2, 1226-1231.
- (10) Common airport practice
- (11) Common airport practice
- (12) http://phyun5.ucr.edu/~wudka/Physics7/Notes_www/node18.html
- (13) 3/4 of the walking speed in (12)
- (14) http://www.1800wheelchair.com.asp/view-category-products.asp?category_id=498, assuming roughly annual replacement
- (15) Measured at home
- (16) http://rebuildnewyork.nreimag.com/ar/real_estate_cw_report_new/, assuming that concourse rent is comparable to the highest NYC commercial rents
- (17) www.faa.gov/library/reports/delay_analysis/media/DCOS1995.doc
- (18) www.faa.gov/library/reports/delay_analysis/media/DCOS1995.doc
+ +refer to such a node as job node $i$ , and refer to the information that it contains with double indices (i.e., $x_{i1}$ ). + +In general, we have multiple escorts. For $k$ escorts, we try to find an optimal set of $\leq k$ distinct paths through the graph defined by the set of job nodes such that every job node is included in exactly one path. In this formulation, the problem is formidable. However, a reformulation transforms the problem back into a straightforward hamiltonian-path optimizing problem [Bellmore and Hong 1974]. An escort node is characterized (and indexed) by two numbers: its node type (0 for an escort node) and its placement within the ordered list of escorts. Adding $k - 1$ escort nodes to our graph of job nodes partitions the transfer schedule into individual escort job lists for every hamiltonian path. + +We now define a tool for labelling hamiltonian paths through this extended graph. A configuration is an ordered sequence containing exactly one each of the job and escort nodes of the extended graph. + +Traveling from a job node to an escort node ends the escort's job list and opens up the job list for the escort indexed by the escort node that is landed on. The first escort in the list is always assumed to be selected at the beginning of the node sequence. Traveling to a job node from any node type, by contrast, adds that job to the escort job list of the current escort. This implies that any time that an escort node occurs at the beginning/end of a configuration, or two escort nodes occur consecutively within a configuration, an escort's job list is terminated with no jobs—equivalent to that escort never being hired for the shift. This formulation allows us to optimize over the number of escorts hired (up to a certain maximum number of escorts) simultaneously with optimizing over escort job assignments. 
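The configuration-to-job-list bookkeeping just described can be sketched in a few lines. This is our own illustrative code, not the team's; for readability we encode nodes as `('job', i)` or `('escort', e)` tuples rather than the paper's numeric (type, index) pairs:

```python
# Decode a configuration -- an ordered sequence of ('job', i) and
# ('escort', e) nodes -- into individual escort job lists, following the
# rules above: traveling to an escort node closes the current job list
# and switches to that escort; escort 0 is active at the start. An
# escort whose list ends up empty is simply never hired for the shift.

def job_lists(configuration, n_escorts):
    lists = {e: [] for e in range(n_escorts)}
    current = 0                   # the first escort is selected initially
    for kind, idx in configuration:
        if kind == 'escort':
            current = idx         # close current list, open the next one
        else:                     # a job node extends the active list
            lists[current].append(idx)
    return lists
```

For example, the configuration `[('job', 0), ('job', 2), ('escort', 1), ('job', 1), ('escort', 2)]` with three escorts yields `{0: [0, 2], 1: [1], 2: []}`: escort 2's list is terminated with no jobs, so escort 2 is never hired.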
Having thus formulated our problem as a choice of an optimal hamiltonian path through a graph of $n + k - 1$ nodes, we move on to solution via simulated annealing.

![](images/f0efaddbbf5ea812e5beccfdc788faa2d30236f1302d4d517fc81badac2d10a2.jpg)
Figure 2. Various ways to represent an escort deployment configuration: (i) configuration vector, (ii) $k \leq 3$ paths through the graph of job nodes such that every job node is contained in exactly one path, (iii) hamiltonian path through the graph of job and escort nodes, and (iv) specification of each of the $(k = 3)$ individual escort job lists.

# Simulated Annealing Preliminaries

The inspiration for the simulated annealing (SA) approach is found in nature, in particular in the phenomenon of observable crystallization that occurs in certain systems in solid-state thermodynamics when such a system is slowly cooled. The underlying idea is that if a system is cooled slowly enough from a disordered configuration at high temperature, it will tend strongly toward organizing itself into a highly optimized low-energy state. The subsequent discussion of annealing methods is adapted from Bentner et al. [2001].

The fundamental assumption is that, in equilibrium at temperature $T$, each microstate of the system occurs with a relative weight given by the Boltzmann factor. Normalizing this weighting to obtain a probability distribution gives, for microstate $\sigma$,

$$
\pi(\sigma) = \frac{1}{Z} \exp\left(-\frac{H(\sigma)}{k_B T}\right),
$$

where

$$
Z = \sum_{\tau} \exp\left(-\frac{H(\tau)}{k_B T}\right)
$$

and $H(\sigma)$ is the energy of the system when in microstate $\sigma$. The function $H$ is the hamiltonian function for the system. If we cool the system slowly enough to $T = 0$ so that equilibrium holds at all times (adiabatic cooling), then at $T = 0$ the system will be "frozen" into a lowest-energy microstate.
For our problem, the microstates are configuration vectors, and our hamiltonian is the expense function ($E_{\mathrm{st}}$, short-term, or $E_{\mathrm{lt}}$, long-term). If we can simulate an adiabatic cooling, starting from an initial random microstate and moving through a trajectory of subsequent microstates, then we should obtain a deployment with lowest cost.

# The Metropolis Criterion

Simulated annealing essentially runs on a trial-and-error (Monte Carlo) scheme, aided by a clever choice of "allowed moves" from each microstate. At each iteration, an allowed move is randomly selected and a decision is made whether or not to make that move. We apply the Metropolis criterion:

$$
p(\sigma \rightarrow \tau) = \begin{cases} \exp\left(-\dfrac{\Delta H}{k_B T}\right), & \text{if } \Delta H > 0; \\ 1, & \text{otherwise.} \end{cases}
$$

Since our problem is nonphysical, we replace $k_B T$ in the formula with a control parameter $T$ in units of money.

To lower the temperature, a logarithmic process is generally used: We update the temperature after each attempted configuration step with the rule $T_{\mathrm{new}} = \alpha T_{\mathrm{old}}$, where $\alpha$ is usually taken to be in the range (0.8, 0.999) [Bentner et al. 2001].

# Choosing a Starting Temperature

# Case 1

The idea is to start at a temperature sufficiently high that initially the trajectory is free to move across the configuration space. In this case, if $\alpha$ is close enough to 1 and the set of allowable transitions is chosen well, then the algorithm should slowly settle very near to the global minimum of the hamiltonian. The initial temperature can be chosen by performing a random walk across the configuration space with the defined set of configuration transitions (i.e., iterating the annealing algorithm from the initial condition with $T$ fixed at $\infty$).
Multiply the observed maximum of $\Delta H$ by 10 to obtain an initial temperature $T_s$ very likely to be sufficiently high to meet the above description.

# Case 2

If a preprocessing step is added (i.e., using a naive algorithm to presort jobs among escorts), then it may no longer be desirable to start with so high a temperature. If there is reason to be confident that the start is in the general vicinity of an optimum, then a good starting temperature may be the dollar value of the initial condition, or even half that.

# Our Hamiltonian

We define the hamiltonian in terms of an algorithm that takes a given configuration as an input. This algorithm can be viewed in the flowchart in Figure 3. We define some of the objects found in the chart.

We define two time functions that act on job nodes.

- The first acts on a single job node:

$$
t_{\mathrm{dep}}\big((1, i)\big) = \frac{d(x_{i1}, x_{i2})}{v_{\mathrm{wheel}}}.
$$

This is the time that it takes for an escort to perform a job from pickup to drop-off.

- The second acts on a pair of job nodes:

$$
t\big((1, i), (1, j)\big) = \frac{d(x_{i1}, x_{i2})}{v_{\mathrm{wheel}}} + \frac{d(x_{i2}, x_{j1})}{v_{\mathrm{walk}}}.
$$

This measures the time to perform the job on the first node from pickup to drop-off and then walk on to the arrival gate for the second job.

The algorithm starts at the $k = 1$ box. Where multiple arrows leave a box, all of those actions are performed, right to left (or up to down for horizontal arrows).

![](images/211015bd92f5f5e08ecb5b0ece1eb36b537a17509b8ae4659120061f82c0fafc.jpg)
Figure 3. Flowchart outlining the algorithm for determining the hamiltonian value (cost) for a given input configuration.

The job node branches require a little more explanation.
On the rightmost branch, when the job node was preceded by an escort node, we add the cost of an escort to the total cost incurred by the configuration. The cost of an escort is not just salary $P_E$ but $(P_E + P_W + R_A A_W)$, since an escort requires a wheelchair, with its maintenance and storage costs.

On the middle branch ("let max ..."), there are two cases. To explain this, we note that the initial conditions for our escorts are defined with random trivial job nodes (i.e., job nodes with the same gate as both beginning and end). Now, say that the previous node was an escort node. Then $t$ is $t\big((1,\mathrm{ic}),(1,q)\big)$, where $(1,\mathrm{ic})$ represents the trivial job node initial condition; that is, we take into account the time for the escort to get to the first job from the initial location. When the previous node was not an escort node, then $t$ is simply $t\big((1,q - 1),(1,q)\big)$, i.e., the time from when the escort began transporting the WP on the previous job until arrival at the gate for the next job. Taking the maximum of this updated elapsed time and the next job's arrival time accounts for the fact that the escort cannot leave on the next job until both events have occurred: the escort has reached the arrival gate, and the flight has arrived.

Finally, on the leftmost branch, where $t_{\mathrm{dep}}$ is added to the elapsed time to compute the drop-off time $t_d$, the function $t_{\mathrm{dep}}$ takes the node $(1,q)$ as its argument. This step does not update the elapsed time. Continuing along that branch, the cost for a single day is defined differently for long-term vs. short-term analysis.

# Determining Neighboring Configurations

We seek allowed moves that provide for fast traversal of the configuration space but with relatively modest energy (cost) differences among neighbors.
Our criteria are:

- It's best to keep things simple (i.e., move only a few nodes around in any transition), to avoid propagation effects.
- It's best to leave escort nodes in place and move job nodes around them.
- Only one escort ought to be added or deleted at a time, for salary considerations; this feature is enforced by our move types.

To avoid large changes in time ordering and propagation effects, only job nodes with departure times within a threshold of each other (we used 40 min) may be swapped. Likewise, a job node may be transferred only to a position where the job node ahead of it has a departure time within 40 min of its own.

# Model Testing: Simulations and Discussion

We present a naive test algorithm that simplistically assigns escorts, as a way to demonstrate our model's relative superiority.

Given an ordered list of escorts and a list of temporally-ordered transfer jobs, the naive algorithm treats the jobs as a queue. It takes a job off the queue and begins checking at the least-ordered escort. If the escort is free (has dropped off the last transfer or has had no job yet), the algorithm assigns the job to that escort. Otherwise, it checks whether the next escort in order is free. If no escort is free for a given job, it assigns the job to an escort at random.

We ran simulations in a large airport environment and compared performance of our simulated annealing solution to this naive algorithm. We ran 100 random shifts over a range of numbers of escorts (using the starting condition provided by the naive algorithm), while modifying the hamiltonian so that the cost of an escort becomes zero. Essentially, we are simulating to determine how many employees should be hired so that (on an average basis) costs are minimized. We then reconstruct the full costs before plotting them and determining a minimum.
From Figure 4, our simulated annealing solution (corner points on lower line) significantly outperforms the naive algorithm (upper line) at all numbers of escorts.

![](images/070ce93b1cd94f2d70dd80fb9324e85672e908fe3215a2ea6939dfcd8198efb1.jpg)
Figure 4. Trials for four concourses of 25 gates each, half the gates occupied by planes, $0.4\%$ wheelchair passengers, and short-term cost analysis. For any number of escorts, the simulated annealing solution (corner points on lower line) yields a lower cost than the naive algorithm (upper line) because it is more efficient in allocating jobs. The apparent mean costs of 0 are an artifact of the large scale (1 vertical unit = $10,000); the optimal number of escorts in the simulated annealing solution is 4, with average cost about $475.

To check our model's response to dynamic changes, we ran our algorithm for

- increasing concourses per terminal (airport size);
- increasing average gate occupancy (air traffic);
- increasing $S_{\text{wheelchair}}$, simulating a future increase in life expectancy and thus an increase in abundance of elderly among air travelers;
- short-term vs. long-term cost analysis.

We find that

- the optimal number of escorts increases from 1 in the small single-concourse terminal to 2 for the two-concourse terminal and to 4 for the large four-concourse terminal;
- as terminal traffic increases, the required number of escorts increases;
- increasing the percentage of passengers requiring wheelchair assistance increases the number of escorts needed;
- when there are enough escorts so that there is negligible delay time, the short-term and long-term costs are essentially the same—but when there is delay, its cost is exaggerated in the long-term model. The optimal solution tends to hire too many escorts for the average day.

Finally, we address the stability of our algorithm.
We take the same job schedule and run our full algorithm on it 50 times, starting each run from a random initial configuration, in two different scenarios: stressed and unstressed. Ideally, this would create very tight distributions in both cases, but the distributions in the stressed case are wider. This observation suggests that from a random initial configuration we need to use more time steps to ensure repeatability than from a preprocessed initial condition.

# Weaknesses and Strengths of the Model

# Weaknesses

- Every time a new piece of information is acquired (delayed flight, notification of a WP), we must perform a new optimization.
- Another weakness involves the Hamiltonian function. There is a range of configurations in which delay time is zero (and the number of escorts is the same). When the Hamiltonian is constant across a large set, the annealing algorithm is designed to take any proposed move that keeps it in the set, essentially a random walk. In many cases, we have the ability to specify a preference between two configurations with a delay time of zero; we prefer as many escorts as possible to be as early as possible on their deliveries.
- We do not have an optimal method to step through configuration space.

# Strengths

- Our algorithm finds good allocations even for few escorts, and reduces cost more effectively than a reasonable alternative algorithm.
- Our model is robust. It easily accommodates any airport, and any number of wheelchair passengers per flight, and does so with speed. As a result, our model can easily be applied to any one of Epsilon Airlines' fine terminals.
- Another feature of our model is that the coarseness of the optimization can be tuned with the cooling parameter $\alpha$ (i.e., by increasing the number of steps taken, $\alpha$ is made successively closer to 1, making our simulated annealing more and more adiabatic).
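The tunable coarseness mentioned in the last bullet can be illustrated with a generic Metropolis acceptance loop. This is a sketch of standard simulated annealing, not the authors' code; the Hamiltonian `H` and move generator `propose` are placeholder arguments.

```python
import math
import random

def anneal(config, H, propose, steps=10000, T0=1.0, alpha=0.999):
    """Generic simulated annealing with geometric cooling T -> alpha * T.

    `H` maps a configuration to its cost (the Hamiltonian); `propose`
    returns a neighboring configuration. More steps with `alpha` closer
    to 1 make the schedule more nearly adiabatic.
    """
    T = T0
    cost = H(config)
    best, best_cost = config, cost
    for _ in range(steps):
        candidate = propose(config)
        delta = H(candidate) - cost
        # Metropolis criterion: always accept improvements; accept uphill
        # moves with probability exp(-delta / T).
        if delta <= 0 or random.random() < math.exp(-delta / T):
            config, cost = candidate, cost + delta
            if cost < best_cost:
                best, best_cost = config, cost
        T *= alpha
    return best, best_cost
```

Note that when `H` is constant across a set of configurations, every proposed move inside the set has `delta == 0` and is accepted, which is the random-walk behavior described in the weaknesses above.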
# Conclusion

We have found a solution to the problem of wheelchair transfer escort allocation by employing the techniques of simulated annealing. We present a robust method to determine the optimal number of escorts, under a variety of traffic and population conditions. We have also mapped out short- and long-term budget effects. Our algorithm assigns escorts to jobs in real time and on a cost-effective basis.

# References

Bellmore, Mandell, and Saman Hong. 1974. Transformation of multisalesmen problem to the standard traveling salesman problem. Journal of the Association for Computing Machinery 21 (3): 500-504.
Bentner, Johannes, Günter Bauer, Gustav M. Obermair, Ingo Morgenstern, and Johannes Schneider. 2001. Optimization of the time-dependent traveling salesman problem with Monte Carlo methods. Physical Review E 64 (3): 036701-036708.
Miami International Airport. n.d. Airline ticket counters. http://www.miami-airport.com/html/airlineTickets_counts.html. Accessed 3 February 2006.
U.S. Department of Transportation. n.d. Airline on-time statistics and delay causes. http://www.transtats.bts.gov/OT_Delay/OT_DelayCause1.asp. Accessed 4 February 2006.

# Application of Min-Cost Flow to Airline Accessibility Services

Dan Gulotta

Daniel Kane

Andrew Spann

Massachusetts Institute of Technology

Cambridge, MA

Advisor: Martin Z. Bazant

# Summary

We formulate the problem as a network flow in which vertices are the locations of escorts and wheelchair passengers. Edges have costs that are functions of time and related to delays in servicing passengers. Escorts flow along the edges as they proceed through the day. The network is dynamically updated as arrivals of wheelchair passengers are announced.

We solve this min-cost flow problem using network flow techniques such as price functions and the repeated use of Dijkstra's algorithm. Our algorithm runs in polynomial time and is efficient in practice.
We prove a theorem stating that to find a no-delay solution (if one exists), we require advance notice of passenger arrivals equal only to the time to travel between the two farthest-apart points in the airport.

We run our algorithm on three simulated airport terminals of different sizes: linear (small), Logan A (medium), and O'Hare 3 (large). In each, our algorithm performs much better than the greedy "send-closest-escort" algorithm and requires fewer escorts to ensure that all passengers are served.

The average customer wait time under our algorithm with a 1-hour advance notice is virtually the same as in the full-knowledge optimal solution. Passengers giving only 5 min of notice can still be served with minimal delays.

We define two levels of service, Adequate and Good. The number of escorts for each level scales linearly with the number of passengers.

One hour of advance notice is more than enough. Epsilon Airlines can make major improvements by using our algorithm instead of "send-closest-escort"; it should hire a number of escorts somewhere between the numbers for Adequate and Good service.

# Introduction

Annually, 2.2 million disabled people over the age of 65 travel [Sweeney 2004]. Major airlines provide wheelchairs and escorts (attendants) for disabled passengers but struggle to manage costs. We consider how few escorts are needed and how best to route them through the day. The algorithm should be flexible enough to deal with incomplete information about the arrival times of passengers or with an unexpected arrival. We recast the problem as a min-cost flow problem, with the daily schedule of the escorts being the network flow.

# Terminology and Conventions

- Job. The process of an escort picking up a wheelchair passenger (WP) at an arrival gate and taking them to their connecting departure gate.
- Job starting time. The time when a WP arrives at the airport. If an escort begins a job after the starting time, the WP has had to wait before being helped.
- Job ending time. The time by which a WP needs to be at the departure gate to board in a timely manner. If an escort cannot finish a job by the ending time, the passenger misses their plane entirely!
- Vertex and Edge. Our airport is set up as a graph that is used for distance calculations. Our algorithm is set up as a min-cost flow problem on a network that uses a completely unrelated graph.
- Price, Cost, and Effective Cost. We use the standard graph-theory definitions for these words, not business definitions. Price and cost are not the same thing: Cost is the penalty for performing jobs (which is negative, since we want people to perform jobs), while price is an artificial construct that we need to run Dijkstra's algorithm.

# Assumptions and Assumption Justifications

# About Airports and Flights

- In our airport graphs, each edge has a uniform density of gates. This is not always the case in real life, but the mathematics of our model would not change if this assumption were removed, only the maps used in its implementation.
- The passengers' flight connections go between random gates. Since passengers may be connecting through a wide variety of places, the airline cannot optimize for all the distances between departure and arrival gates.
- Arrival times and gates do not change less than $1\mathrm{h}$ before arrival; departure times and gates do not change less than $3\mathrm{h}$ before departure. We permit planes to be delayed or have gates changed but not while an escort is helping a WP or moving to pick up a WP. This assumption is not fully realistic, but it should not affect our end results too much and it makes the model easier to implement. We can still have the unexpected arrival of WPs.
- All passengers are making connections, not arriving at their final destinations.
This assumption is unrealistic, but our model could easily be changed to handle final destinations as well, by adding a gate to represent the baggage claim and having passengers depart from this gate. +- Effective layover times range from $45\mathrm{min}$ to $132\mathrm{min}$ , linearly biased towards shorter layover times. Effective layover time means that we take into consideration that WPs are usually the last to deboard and the first to board. These times are arbitrary, not based on empirical data. +- WP arrivals are random and uncorrelated. There is no tendency for such passengers to travel in groups. Although this assumption might be false around the time of the Special Olympics or other specific events, for a typical day it is reasonable. + +# About Escorts + +- Escorts receive orders via computer. Communication of orders to escorts is fast and robust, and a computer runs the algorithm. +- Escorts all walk at the same pace. A typical modeling assumption. +- Escorts travel half as fast when a person occupies the wheelchair as with an empty wheelchair. The factor of two is arbitrary, in the absence of data. +- The level of traffic in the airport does not affect travel times within the airport. We make this assumption so that when we analyze our results we avoid confounding the effects of increasing the number of WPs in the airport with the effects of changing the airport layout. +- Escorts do not abandon WPs. Regulations prohibit airlines from leaving a passenger in a wheelchair unattended for more than $30\mathrm{min}$ [U.S. Department of Transportation n.d.]. Such a passenger may still require transportation to restrooms. The escort remains on hand until the passenger boards, when the flight crew takes over. So each escort pushes a single wheelchair that never leaves their sight (hence less concern about loss of equipment). 
# Cost Structure

We tabulate statistics on the delays in picking up and dropping off WPs for a given traffic load and number of escorts. We graph the delay statistics vs. number of escorts and find the "sweet spot" at which the marginal utility of adding another escort declines.

The term cost in this paper is a network-flow definition associated with being late in picking up a passenger, not a direct monetary expenditure. We assign a cost of 1 unit for each minute that a passenger must wait to be picked up upon arriving at the airport. If the passenger cannot be taken to their departure gate at least $15\mathrm{min}$ before flight time, they miss preboarding and a 30-unit penalty is charged for delaying takeoff of the plane. If the passenger misses a flight, a penalty of 100,000 units is charged.

# Simulated Airport Structure

We model the physical structure of the airport as a graph. An edge represents a corridor and a vertex is where corridors meet. Each edge is assigned the amount of time that it takes to traverse it (in minutes, rounded up). We do not distinguish between the left and right sides of a terminal. Some edges are designated as filled with gates from which planes arrive and depart. Escorts can walk only part way down an edge, taking the appropriate fraction of the traversal time to do so.

We randomly generate the arrival and departure locations of WPs at arbitrary points on the edges, corresponding to our assumption that gates are spread uniformly along the concourses.

We simulate the following airports: a single straight line, Boston Logan Terminal A (two concourses connected by an underground walkway), and Chicago O'Hare Terminal 3 (Concourses G, H, K, and L).

We precompute the shortest path lengths between each pair of vertices, using the Floyd-Warshall algorithm [Floyd 1962].
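This all-pairs precomputation can be sketched directly from the Floyd-Warshall recurrence. The code below is a minimal illustration; the input matrix in the usage test is a toy example, not one of the terminal maps.

```python
INF = float("inf")

def floyd_warshall(w):
    """All-pairs shortest paths; w[i][j] is the length of edge i -> j (INF if absent).

    Returns d with d[i][j] the minimum path length, obtained by allowing
    intermediate vertices 0, 1, ..., k one at a time.
    """
    n = len(w)
    d = [row[:] for row in w]
    for i in range(n):
        d[i][i] = 0
    for k in range(n):
        for i in range(n):
            for j in range(n):
                # d_k(i, j) = min(d_{k-1}(i, j), d_{k-1}(i, k) + d_{k-1}(k, j))
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d
```

Because the same matrix `d` is updated in place over increasing `k`, the triple loop realizes the recurrence in $O(n^3)$ time and $O(n^2)$ space.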
This algorithm repeatedly computes $d_{k}(i,j)$ , the minimum path length from $i$ to $j$ passing through only intermediate vertices in the set $\{0,1,\ldots ,k\}$ , for all pairs of vertices $i,j$ . The key insight is that

$$
d _ {k} (i, j) = \min \left(d _ {k - 1} (i, j), d _ {k - 1} (i, k) + d _ {k - 1} (k, j)\right).
$$

The runtime of this algorithm on a graph with $n$ vertices is $O(n^{3})$ , independent of the number of edges.

We use this algorithm for ease of implementation. Later we perform shortest-distance computations on a different network using Dijkstra's algorithm.

# Network Flow Definitions and Principles

We give a brief summary of flows [Blum 2005; Demaine and Karger 2003].

Definition. A network is a directed graph in which each edge is given a positive real-valued capacity and in which two vertices are labeled source and sink.

Definition. A flow is an assignment of nonnegative real numbers to edges such that the number for an edge is at most its capacity and, for all vertices other than the source or sink, the sum of values on edges from the vertex equals the sum of values on edges to the vertex. The size of a flow is the sum of values on edges from the source minus the sum of values on edges to the source.

Definition. A max-flow is a flow of maximum possible size for a given network.

We can also assign real-valued costs per unit flow to the edges, so that we can talk about the cost of a flow.

Definition. The cost of a flow is the sum over all edges of the cost of the edge times the value that the flow assigns to that edge.

Definition. A min-cost-max-flow is a max-flow with the smallest possible cost of any max-flow.

If all the edge capacities are integers, there is always a solution that assigns an integer flow to each edge. We will create a data structure to dynamically find such solutions to min-cost-max-flow to decide which escorts should handle which job.

Definition.
For a network and a flow on it, the residual network is another network on the same vertex set whose edges have capacities equal to the amount of extra flow that could be sent through that edge. Since the flow across some edges can now be decreased, the residual graph has some edges that are the reverse of edges in the original graph. More precisely, the capacity of an edge $(i,j)$ in the residual network is the capacity of $(i,j)$ in the original network plus the flow from $j$ to $i$ minus the flow from $i$ to $j$ . In the residual graph, the cost of a reverse edge $(i,j)$ is the negative of the cost of $(j,i)$ .

The residual graph is useful because solving min-cost-max-flow on the original graph is equivalent to solving it on the residual graph.

Definition. A circulation is a flow in which the total flow into each vertex (including source and sink) equals the total flow out of it.

Definition. A min-cost circulation in a network is a circulation of minimum cost.

If $f$ is already a max-flow, the residual graph can accept no more net flow from source to sink, so any improvement to $f$ takes the form of a circulation in the residual graph. Hence the problem we must solve is to find a min-cost circulation.

Definition. A price function is a function that associates a price to each vertex.

Definition. The effective cost of an edge $(i,j)$ is $c_{i,j} + p_i - p_j$ , where $c_{i,j}$ is the cost of $(i,j)$ , and $p_i$ and $p_j$ are the prices associated with $i$ and $j$ respectively.

Lemma 1. The cost of a circulation does not change if we replace costs by effective costs.

Let $f_{i,j}$ be the flow through edge $(i,j)$ for circulation $f$ . Let $c_{i,j}$ be the cost of $(i,j)$ . Let $p_i$ be the price at $i$ .
Then the cost of $f$ using effective costs is

$$
\begin{array}{l} \sum_ {i, j} f _ {i, j} \left(c _ {i, j} + p _ {i} - p _ {j}\right) = \sum_ {i, j} f _ {i, j} c _ {i, j} + \sum_ {i} p _ {i} \left(\sum_ {j} f _ {i, j} - \sum_ {j} f _ {j, i}\right) \\ = \sum_ {i, j} f _ {i, j} c _ {i, j} + \sum_ {i} p _ {i} \cdot 0 = \sum_ {i, j} f _ {i, j} c _ {i, j}, \\ \end{array}
$$

which is the cost of the circulation $f$ .

Hence solving a min-cost circulation problem is equivalent to solving the same problem using effective costs. If all costs are nonnegative, the empty circulation is the min-cost circulation.

Suppose that all of the graph's vertices are reachable from some particular vertex $v$ and all edges have nonnegative cost. We compute shortest paths from $v$ to all other vertices using costs as edge weights, for the following reason: Let $w$ and $w'$ be arbitrary other vertices. If we then introduce prices $p_w = d(v, w)$ , meaning that the price at $w$ is the distance between $v$ and $w$ , then:

- Since $d(v, w) \leq d(v, w') + c_{w', w}$ , the effective cost from $w'$ to $w$ is $c_{w', w} + d(v, w') - d(v, w)$ , which is nonnegative.
- Since along a shortest path from $v$ to $w$ this inequality holds with equality edge by edge, there is a path from $v$ to $w$ of effective cost 0 for all $w$ .

# Dijkstra's Algorithm

Our algorithm uses Dijkstra's algorithm every time a new WP arrives or is scheduled to arrive. Dijkstra's algorithm [1959] computes single-source shortest paths in a graph with nonnegative edge weights (i.e., for some vertex $v$ , it computes $d(v, w)$ for all $w$ ). The idea is to determine the distances from $v$ in order of increasing distance (a weighted generalization of breadth-first search). We use Dijkstra instead of Floyd-Warshall because Dijkstra is more efficient for single-source shortest paths.
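A minimal heap-based sketch of such a single-source computation follows; this is an illustration on a toy adjacency list, not the authors' implementation.

```python
import heapq

def dijkstra(adj, source):
    """Single-source shortest paths with nonnegative edge weights.

    `adj` maps each vertex to a list of (neighbor, weight) pairs.
    Returns a dict of distances d(source, w); unreachable vertices are absent.
    """
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, v = heapq.heappop(heap)
        if d > dist.get(v, float("inf")):
            continue  # stale entry: v was already settled at a shorter distance
        for w, weight in adj[v]:
            nd = d + weight
            if nd < dist.get(w, float("inf")):
                # vertices settle in order of increasing distance, so the
                # smallest tentative distance popped from the heap is final
                dist[w] = nd
                heapq.heappush(heap, (nd, w))
    return dist
```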
We approximate $d(v, w)$ by $\min_u \big(d(v, u) + l_{u, w}\big)$ , where $l_{u, w}$ is the length of the edge from $u$ to $w$ and the minimum is over neighbors $u$ of $w$ with known distance from $v$ . The key insight is that the shortest approximate distance is actually correct.

# Algorithm Overview

We associate a cost to picking up a job after its starting time, equal to the number of minutes late the job starts, plus a 30-unit penalty for not transporting a WP to a gate at least 15 min early for preboarding. We associate a cost of effectively infinite magnitude to completely failing to perform a job.

We want at any given time to be able to schedule escorts to complete the currently known jobs at minimal cost. We add jobs to the network as the arrival or scheduled arrival of WPs becomes known.

![](images/d1c3f65056dc4f38f8d79ab62a38e2ded27dd30fa54f50e14beca5f8230ff45d.jpg)
Figure 1. The graph for which we want to solve min-cost flow. Vertices represent jobs, and flow along the edges represents the movement of escorts. Delays are represented as costs, and the edges are labeled with these costs.

We restate the problem as a min-cost-max-flow problem (Figure 1). The escorts' schedule is a network flow of escorts, not of WPs. The source is escorts arriving at the start of the day; the sink is escorts leaving at the end of the day. The other vertices are unbusy escorts and the beginnings and ends of known jobs.

In a min-cost-max-flow problem, we set costs for traversing edges. However, an escort neglecting to pick up a WP involves not traversing an edge; we cannot charge for this directly. We equivalently charge a large negative cost of $-100,000$ for taking the job. At the end of the day, we use this large negative cost to check that all WPs were transported to their flights.

If we knew exactly when each passenger would arrive, we could simply solve min-cost-max-flow on the graph to find the optimal schedule.
Instead, we add jobs to our graph as we learn of impending arrivals. We also must delete edges for jobs already performed or that can no longer be performed. We maintain our data structure under the following updates.

# Updates in the Algorithm

Time passing: As time passes, an escort may no longer be able to make it to a job in time; we update the lateness costs (edge costs) for this escort. An edge in the current flow corresponds to a job that an escort is trying to get to on time, so its cost does not decrease. For a job that an escort is walking to, the passage of 1 min of time makes the edge cost 1 min more expensive but moves the escort 1 min closer, so these effects cancel. Hence, this update does not decrease any effective cost in our residual graph and maintains our invariants (Figure 2).

![](images/481e4b6bfb67a3931f49349ce203fda3daa1de84bdf6884661a736b31aa680da.jpg)
Figure 2. When time passes, we update edge costs.

Taking up a job: When an escort starts work on a job, we delete the vertices corresponding to the escort and to the start of the job (along with adjacent edges) and connect the source to the vertex corresponding to the end of the job. Doing this makes a corresponding change in the residual graph but does not produce negative effective costs (Figure 3).

Finishing a job: When an escort finishes a job, the vertex corresponding to the end of that job is relabeled to correspond to the current location of the escort (Figure 4). If a new job is assigned, the escort walks toward the job; an escort with no job scheduled does not move.

A new job is announced: A new job is announced when the arrival or predicted arrival time of a WP becomes known. We create the vertices $u$ and $v$ corresponding to the beginning and end of the job.
Next, we create the edges pointing to $u$ from every job end and from every escort who could make it to this job in time, and edges from $v$ to the sink and to the starts of jobs that could be reached after completing the job given by edge $uv$ . Lastly, we create the edge $E$ from $u$ to $v$ . All edges are given the appropriate costs based on the current time. We assign a high enough price to $v$ and a low enough price to $u$ so that the effective costs of all edges into $u$ or out of $v$ are positive. Then in the residual graph minus $E$ , all effective costs are nonnegative. Our entire graph is accessible from $v$ , because from $v$ we can reach the sink and trace back each of the escorts' schedules to the source. Each job that someone is scheduled to complete can be reached by tracing back that escort's schedule from the sink. All job start vertices that could be accomplished, but are not scheduled to be, can be reached by getting to the vertex corresponding to an escort who can reach the job and following the edge from that escort to the job's beginning.

![](images/6ec8877a3347e1f3a3b3df15c12f75a872af69dc551bcbec1597da7552995809.jpg)
Figure 3. When an escort starts a new job, we make sure other escorts don't take the same job.

![](images/fa36be32c3ec6025d23e7fbd07a50323d327e150174f264030798d969ff2b989.jpg)
Figure 4. When a job ends, we relabel the job end vertex as an escort vertex.

Applying the technique above, and adding the size of the smallest-cost path from $v$ to $w$ to the price at $w$ , we get nonnegative values for all of these effective costs and a path $P$ from $v$ to $u$ in the residual graph consisting of edges with zero effective cost. If the effective cost of $E$ is nonnegative, we are done; otherwise, we augment our flow by the cycle $PE$ . This negates the effective cost of all these edges in the residual graph and changes their direction, and hence makes all the effective costs nonnegative (Figure 5).
# Algorithm Performance

This data structure is fairly efficient. Suppose that there are $A$ escorts and $J$ jobs. The runtime of time passing is $O(AJ)$ ; that of taking up or finishing a job is $O(1)$ ; and learning about a new job depends on the runtime of Dijkstra's algorithm and is bounded by $O\big((A + J)^2\log (A + J)\big)$ [Dijkstra 1959].

![](images/4faa7c9dd1864b98ef41bc2d4599fcc26349f2bdc792b099fb4a35a4d4bd086a.jpg)
Figure 5. We add new jobs as they are announced.

# Our Algorithm Finds Optimal Solutions

We present a theorem that states what classes of algorithms find a solution that transports all passengers to their flights without delays, provided such a solution exists.

We formulate a more general version of the problem as follows. We have some number of escorts; they arrive in the airport at predetermined times of day in predetermined places, but particular escorts are not guaranteed to leave at a particular time. We are required to complete jobs, each requiring that an escort be in a particular location at a given time and remain occupied until released at another known location and time; the end location can be reached from the start location in the time allotted.

All jobs are announced enough ahead of time that an escort could reach the starting location from anywhere in the airport between the announcement and the job's start. This time is bounded by the time to travel between the two farthest-apart points in an airport (e.g., 18 min in Chicago O'Hare Terminal 3 with an empty wheelchair; for any airport, advance notice of $30 - 60\mathrm{min}$ for arrival of a WP is more than enough).

We stipulate:

- Escorts are interchangeable.
- An escort finishing a job is equivalent to a new escort appearing at a given time and place.
- An escort taking up a job or leaving work is equivalent to an escort disappearing at a given time and place.
Algorithm: When a new job is announced:

- Considering all escorts in the airport and all escorts who will arrive in the airport (or finish jobs), and considering all known jobs that have not started yet, determine which escorts can do which jobs.
- Find (if one exists) a pairing that associates an escort to each job.

- If none exists, the jobs cannot all be completed;
- if such a pairing does exist, tell the escorts to start toward their assigned jobs (or to do so once they appear).

Theorem 2. This algorithm works if the problem admits a solution.

[EDITOR'S NOTE: We omit the authors' proof, which uses the marriage lemma [Halmos and Vaughan 1950].]

Our algorithm has an additional property:

If it is possible not to be late for any job nor miss any job, our algorithm will produce such a solution.

# Comparison against a Greedy Algorithm

We also implemented a "send-closest-escort" algorithm, which greedily assigns escorts to the closest job according to four rules:

1. When a job becomes known, assign the closest available escort.
2. When a new escort becomes available, assign that escort to the closest available job.
3. Never unassign an escort from a job.
4. Escorts who have nothing to do stay put where they are.

For rules 1 and 2, jobs are not assigned until a fixed time before the arrival time of the passenger (the time to travel between the two farthest-apart points).

# Experimental Setup

We use three maps as our test airports: a single straight line, Boston Logan Terminal A (two concourses connected by an underground walkway), and Chicago O'Hare Terminal 3 (Concourses G, H, K, and L).

For each simulation, we ran an 8-h shift with time increments of 1 min. For each terminal and varied numbers of escorts, we ran the simulation 10 times with the same passenger arrival times and gates.

We suppose that $95\%$ of WPs give 1 h notice and $5\%$ show up with only 5 min before penalties accrue.
We ran each airport terminal at two loads of traffic, heavy and light. Heavy traffic is 84 wheelchair passengers in the 8-hour period at the single-line terminal, 155 at Logan Terminal A, and 550 at O'Hare Terminal 3; light traffic is half as many. We arrive at the heavy-traffic number for O'Hare from an estimate of Thanksgiving traffic [Chicago Department of Aviation 2001], scaling, and assuming that $1\%$ of passengers need wheelchairs; the numbers for Logan and the straight-line terminal are scaled from the number of gates.

# Results

In Figures 6-7, the numbers of escorts are plotted as data points; each data point is the average of 10 simulations. The average wait times are the total wait time divided by the number of passengers served. Missed passengers do not affect the average wait time, since their wait time is effectively all day. Results in the lower left-hand corner are more desirable. Any deviation from zero on the $x$ -axis is bad because some passenger was outright ignored!

The results corresponding to the perfect-knowledge solution are almost exactly covered by the corresponding results for our algorithm.

Although the "send-closest-escort" algorithm is frequently competitive for average wait times, it is significantly worse in terms of jobs missed. It trades off delay time against the number of jobs missed roughly linearly; it does not perform well without a large number of escorts.

We define two service levels, with corresponding numbers of escorts:

- Adequate: The average number of missed passengers in each scenario is lower than 1.
- Good: Everyone is served, with an average wait time under 15 min.

In Table 1, we summarize the minimum number of escorts needed to reach each service level.
+ +For each airport, the difference between the Good and Adequate service levels is roughly a factor of two, with slightly increasing returns to scale; with larger scales, the staff are spread more uniformly, so it is less likely that a job will crop up with nobody close enough to take it. + +![](images/3b1ddd72fa4b2de904ea5fd3ead43ba0f035af05c3fc12dc65cb75789f7690fe.jpg) + +![](images/77b07b14038b719c3f0939044f4e1da9ef1d101a758fb56b9196cf05411e3c59.jpg) + +![](images/976cd73828c24ead8d9d1b43de41b8efa89d9cc07002f6f48760658c948b3cbb.jpg) +Figure 6. Light traffic: Passengers missed and average delay for each number of escorts, for each airport. + +![](images/3cdf607b4dd0c02fca90f576e605b25098d12b074722111efa23d5971073936f.jpg) + +![](images/3ebeee1c8e36d04bc08f46340f4cacc7411fc98add7a7c3260bcb3ad6b489a07.jpg) + +![](images/332fcf57543cf8f9725480e60c7585977275976d621e0d3148223a727e8261de.jpg) +Figure 7. Heavy traffic: Passengers missed and delays for each number of escorts, for each airport. + +Table 1. Numbers of escorts needed to achieve service levels. + +
| Airport | Traffic | Passengers served | Escorts for Adequate service | Escorts for Good service |
|---------|---------|-------------------|------------------------------|--------------------------|
| Line    | light   | 42                | 6                            | 9                        |
| Line    | heavy   | 84                | 10                           | 18                       |
| Logan   | light   | 78                | 9                            | 17                       |
| Logan   | heavy   | 155               | 15                           | 31                       |
| O'Hare  | light   | 275               | 27                           | 56                       |
| O'Hare  | heavy   | 550               | 47                           | 106                      |
# Sensitivity to Parameters

# Short-Notice Passengers

We varied the percentage of short-notice passengers from $0\%$ to $100\%$ using the Logan airport map and heavy traffic. The algorithm is very insensitive to this percentage. Why? Because 5 min is enough to cover a lot of ground.

# Airport Geometries

For either 155 WPs or 515 (5% of whom give short notice), our algorithm does roughly equally well in all airports. Geometry mainly affects time wasted traveling from one job to another, which should not be significant, since the maximum diameters of our airports are 11, 12, and $18\mathrm{min}$ .

# Load Scaling

Sensitivity to load scaling is important because the proportion of WPs is expected to increase. We tested the Logan airport map using passenger loads from $50\%$ to $150\%$ of the heavy-traffic load. The numbers of escorts for Adequate and for Good service scale close to linearly (Figure 8).

# Strengths and Weaknesses

# Strengths

- Our algorithm is optimal with perfect knowledge.
- It performs well with only modest advance notice. Advance notice of $1 \mathrm{~h}$ is more than enough. Even if many passengers show up with no advance warning, our algorithm performs well.

![](images/ab6c2f3a9e96375ac9a0c06c932673df29c562d9c260fb3ca99474d853584fab.jpg)
Figure 8. Scaling of escorts as WPs increase.

- We proved an interesting theorem. Given sufficient advance notice, our algorithm handles all passengers with no delay if doing so is possible. In practice, a company might stop hiring escorts when delays are small rather than hire until delay is nonexistent.
- The algorithm's runtime is efficient. Our algorithm runs in real time with modest computational resources.

# Weaknesses

- It requires a computer.
- It requires a job end time to be specified. For the algorithm to compute when an escort will no longer be occupied and assign orders into the future, we must set a fixed job end time before starting the job.
If an end time is unspecified, the problem becomes NP-complete, because it could then be used to solve the Hamiltonian path problem.
- The algorithm uses only factual knowledge of today, with no statistical projections. Our algorithm plans based on current knowledge of scheduled arrivals but makes no attempt to guess where a WP may appear.
- The algorithm is hard to explain to nonmathematicians.

# Conclusion

Our model envisions the problem in terms of the flow of escorts through the work day. We present an algorithm with a compelling theoretical basis for producing good results.

We make the following recommendations:

- WPs do not need to provide much advance notice. Requesting assistance at first check-in is more than enough notice for all connecting flights.
- The "send-closest-escort" algorithm is not good.
- The number of escorts for a given level of service scales linearly with the number of passengers.
- Airlines should hire a number of escorts somewhere between the numbers required for Adequate and Good service levels. Fewer escorts than for Adequate service results in severe problems; more escorts than needed for Good service produces diminishing returns.

# Appendix A: Terminal Maps

These are the graphs for our small, medium, and large simulated airports. A large boldface number on an edge is the length of time to traverse the edge; these numbers should be doubled for an occupied wheelchair. The Logan and O'Hare graphs are superimposed on digitally edited versions of the official terminal maps [Massachusetts Port Authority n.d.; City of Chicago n.d.]. The gate positions in the single-line terminal are those used in our simulation; those on the other two maps are the gate positions in real life. In our simulation, gates occur every minute of walking distance along the edges except for edges that are transportation between concourses.
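As an illustration of these conventions, the single straight-line terminal could be generated programmatically. This is our sketch; the function name and the default corridor length are assumptions for illustration, not values read off the maps.

```python
def line_terminal(length_min=12):
    """Build a straight-corridor terminal: vertices, edges, and gate positions.

    The single edge takes `length_min` minutes to traverse with an empty
    wheelchair (doubled when occupied), and gates sit at every whole minute
    of walking distance along it, per the conventions in Appendix A.
    """
    vertices = ["end_a", "end_b"]
    edges = [("end_a", "end_b", length_min)]
    # a gate at every minute of walking distance along the corridor
    gates = [("end_a", "end_b", m) for m in range(1, length_min)]
    return vertices, edges, gates
```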
![](images/6c04395cf3843ad21873d4a64c7027e59b687ddabb2418d560c51c26ab6878b2.jpg)

![](images/9661028023d6b67537153658788183404a487ca75b0803f598e4026112e81ac2.jpg)

![](images/7535289ae75f849a15fd1f0f55ee2b448cc7cbd3ffced382c8d8150811ce5820.jpg)

# References

Blum, A. 2005. 15-451 Algorithms Course Notes. 25 October 2005. http://www.cs.cmu.edu/afs/cs/academic/class/15451-f05/www/lectures/lect1025.txt.
Chicago Department of Aviation. 2001. The Chicago Airport System prepares passengers for Thanksgiving holiday travel. (16 November 2001). http://www.flychicago.com/dao/avi_news/doa_avi_news_pr_73.shtm.
City of Chicago. n.d. O'Hare Terminal 3 Map. http://www.chicagoairports.com/ohare/terminals/D3.pdf.
Demaine, E., and D. Karger. 2003. 6.854 Advanced Algorithms Course Notes. 24 September 2003. http://theory.lcs.mit.edu/classes/6.854/05/scribe/max-flow-dff.ps.

Dijkstra, E. 1959. A note on two problems in connexion with graphs. Numerische Mathematik 1: 269-271.
Floyd, R. 1962. Algorithm 97 (SHORTEST PATH). Communications of the ACM 5 (6): 345.
Halmos, P., and H. Vaughan. 1950. The marriage problem. American Journal of Mathematics 72: 214-215.
Massachusetts Port Authority. n.d. Logan Terminal A Map. http://www.massport.com/planning/pdf/terma_map_detail.pdf.
Sweeney, M. 2004. Travel patterns of older Americans with disabilities. U.S. Bureau of Transportation Statistics. http://www.bts.gov/programs/bts_working_papers/2004/paper_01/pdf/entire.pdf.
U.S. Department of Transportation. n.d. New horizons: Information for the air traveler with a disability. http://airconsumer.osr.dot.gov/publications/HorizonsPrintable.doc.

![](images/8baa45bd792845f67257ee948d67e09b0ee037586677f1473b067279820ca8f6.jpg)

Team advisor Martin Z. Bazant and team members Dan Kane, Dan Gulotta, and Andrew Spann.

Pp. 387-412 can be found on the Tools for Teaching 2006 CD-ROM.

# Cost Minimization of Providing a Wheelchair Escort Service

Matthew J. Pellicione

Michael R. 
Sasseville

Igor Zhitnitsky

Rensselaer Polytechnic Institute

Troy, NY

Advisor: Peter Roland Kramer

# Summary

Epsilon Airlines provides a wheelchair escort service to passengers who require aid. We use an optimized earliest-due-date-first (EDD) algorithm to minimize the overall cost. Our algorithm is broad enough to accommodate various airport concourses, flight schedules, and flight delays. In addition, it allows wheelchair escorts to perform other tasks beneficial to the airline, such as providing information at a kiosk, to help reduce the overall cost. Moreover, it creates schedules for each employee.

A naive strategy would be to employ the minimum number of escorts to guarantee that all passengers reach their gates on time. We show that this strategy is not optimal but can be improved by assigning different numbers of escorts to shifts based on expected traffic. For example, if Delta Airlines were to utilize the naive strategy at Atlanta International Airport, the cost would be over $5 million/yr, whereas our strategy reduces this cost to under $4 million/yr. A similar reduction in cost could be expected for Epsilon Airlines.

# Assumptions

- The original problem can be adequately modeled with a numerical simulation that uses a discrete time step $\Delta t$ and discrete length step $\Delta d$ , provided that $\Delta t, \Delta d$ are small compared to actual dimensions.
- The layout of the airport concourse(s) is known, along with positions of gates. The concourse(s) can be on one or two levels.

- There is a kiosk or information desk in the concourse that can communicate with escorts, for example, by using walkie-talkies.
- The movement of escorts is constrained to a rectangular lattice.
- The escort transports a wheelchair passenger (WP) from an incoming flight gate to a connecting flight gate.
- The number of WPs on a flight is small compared to the total number of passengers on the flight [Backman et al. 2004]. 
This assumption allows for tracking individual passengers rather than flights. +- Most WPs are known to the airline in advance, but some arrive unexpectedly. +- Both incoming flights and connecting flights can be delayed. +- When escorting a WP, the wheelchair and the escort stay together. +- An unoccupied escort performs another job (such as providing information at the information desk) [Backman et al. 2004]. +- The goal is to minimize the cost of escorting WPs. + +# Approach + +Our objective is to provide cost-minimizing staffing and inventory recommendations, as well as an algorithm to generate optimal schedules for wheelchair escorts. + +We model the geometry of the airport and simulate arriving and departing WPs according to a fixed schedule. We add unscheduled WP arrivals and allow for random unscheduled flight delays. We set out rules to govern the behavior of an escort at each time step throughout their shift. These rules are comparisons between choices that the escort can make; the algorithm predicts which choice will result in the least cost to the airline, based on an objective cost function. + +# Formulating the Optimization Criteria + +The costs of operating an escort service are associated with: + +- flight delays due to WPs arriving late to connecting flights; +- WPs having to wait for service (i.e., long-term decrease in ticket sales as a result of damage to airline reputation); +- employing escorts; + +- depreciation, maintenance, and storage of wheelchairs; and +- reassigning idle employees to alternative tasks (a "negative cost"). 
We adopt the following notation, where the length of one time step $\Delta t$ in the simulation is $1\mathrm{min}$ :

$$
\begin{aligned}
w &= \text{number of escorts on the shift},\\
T &= \text{length of a single shift (min)},\\
K &= \text{wage of a worker (\$/min)},\\
W_0 &= \text{costs associated with wheelchairs (\$)},\\
\omega(t) &= \text{number of escorts employed at an alternative job at time } t,\\
\psi(t_{\mathrm{fd}}) &= \text{number of flights delayed } t_{\mathrm{fd}} \text{ min},\\
c_{\psi}(t_{\mathrm{fd}}) &= \text{cost associated with delaying a flight by } t_{\mathrm{fd}} \text{ min (\$)},\\
\phi(t_{\mathrm{pw}}) &= \text{number of WPs waiting } t_{\mathrm{pw}} \text{ min},\\
c_{\phi}(t_{\mathrm{pw}}) &= \text{cost associated with having a passenger wait } t_{\mathrm{pw}} \text{ min (\$)}.
\end{aligned}
$$

Quantities are measured at discrete points in time (at each minute). We can now mathematically express the costs listed above:

$$
\text{Flight Delay Cost} = \sum_{t_{\mathrm{fd}} = 1}^{\infty} \psi(t_{\mathrm{fd}})\, c_{\psi}(t_{\mathrm{fd}}), \tag{1}
$$

$$
\text{Passenger Waiting Cost} = \sum_{t_{\mathrm{pw}} = 1}^{\infty} \phi(t_{\mathrm{pw}})\, c_{\phi}(t_{\mathrm{pw}}), \tag{2}
$$

$$
\text{Employment Cost} = wKT, \tag{3}
$$

$$
\text{Wheelchair Cost} = W_0. \tag{4}
$$

Note that $W_0$ depends only on the airport and the number and type of wheelchairs, and so is constant with respect to the number of escorts on a particular shift, which can be varied. 
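The four cost terms in (1)-(4) can be sketched numerically. In this minimal illustration, the tallies $\psi$ and $\phi$ and all example values are hypothetical, and the cost-rate functions are passed in as parameters:

```python
# Sketch of the shift-cost terms in (1)-(4); psi, phi, and the rates
# below are hypothetical example inputs, not the paper's data.

def shift_cost(psi, phi, c_psi, c_phi, w, K, T, W0):
    """Total of the four cost terms for one shift.

    psi[t_fd] = number of flights delayed t_fd minutes,
    phi[t_pw] = number of WPs who waited t_pw minutes,
    c_psi, c_phi = per-event cost functions ($),
    w escorts at wage K ($/min) over a T-minute shift,
    W0 = fixed wheelchair costs ($).
    """
    flight_delay = sum(n * c_psi(t) for t, n in psi.items())      # (1)
    passenger_wait = sum(n * c_phi(t) for t, n in phi.items())    # (2)
    employment = w * K * T                                        # (3)
    return flight_delay + passenger_wait + employment + W0        # (4)

# Example with linear cost rates of $44/min per delayed flight and
# $0.25/min per waiting passenger (illustrative choices here).
cost = shift_cost(
    psi={10: 2},              # two flights delayed 10 min each
    phi={5: 4},               # four WPs waited 5 min each
    c_psi=lambda t: 44 * t,
    c_phi=lambda t: 0.25 * t,
    w=6, K=3.50 / 60, T=480, W0=100.0,
)
```

Because the cost rates enter only through `c_psi` and `c_phi`, other (e.g., nonlinear) waiting-cost assumptions can be swapped in without changing the rest of the computation.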
# Multitasking Employees

A key feature of our model is that we allow employees to take on a secondary task. This task can be any other job the escort can perform when not actively assisting a WP, such as providing information at a kiosk. This cuts down on inefficiency associated with low-volume points during a shift.

Guided by microeconomic theory, we assume that the extra benefit to the airline from adding a worker to perform a task is inversely proportional to the number of workers already contributing to that task [Wikipedia 2006a]. Mathematically, this implies that the benefit from $n$ employees performing a certain task is approximately proportional to $\ln (n + 1)$ . Since the airline would have to hire another worker to perform this secondary task if no escorts were available, the benefit to the airline from a single employee working a secondary job should equal the employee's wage. This implies that the net negative cost for a single shift is

$$
\frac{-K}{\ln 2} \sum_{t = 0}^{T} \ln(\omega(t) + 1). \tag{5}
$$

# Assumptions About Waiting Costs

We assume that the average total cost associated with delaying a flight is $44/min, and the average total cost associated with delaying a single passenger is approximately $0.25/min [Federal Aviation Administration 2000]. Since the marginal cost of waiting an additional minute is independent of the total delay, it follows that

$$
c_{\phi}(t_{\mathrm{pw}}) = 0.25\, t_{\mathrm{pw}}, \qquad c_{\psi}(t_{\mathrm{fd}}) = 44\, t_{\mathrm{fd}}.
$$

Since the expression $\psi(t_{\mathrm{fd}})\, t_{\mathrm{fd}}$ is the sum of the delay times of all planes delayed exactly $t_{\mathrm{fd}}$ minutes, the aggregate flight delay time is

$$
T_{\mathrm{fd}} = \sum_{t_{\mathrm{fd}} = 1}^{\infty} \psi(t_{\mathrm{fd}})\, t_{\mathrm{fd}}.
$$

Similarly, the aggregate passenger wait time is

$$
T_{\mathrm{pw}} = \sum_{t_{\mathrm{pw}} = 1}^{\infty} \phi(t_{\mathrm{pw}})\, t_{\mathrm{pw}}.
$$

These give alternative expressions for the costs defined in (1) and (2):

Flight Delay $\mathrm{Cost} = 44T_{\mathrm{fd}}$ , Passenger Waiting $\mathrm{Cost} = 0.25T_{\mathrm{pw}}$ .

# The Cost Function for a Shift

Combining the results in (1)-(5), the total cost is

$$
C = 44 T_{\mathrm{fd}} + 0.25 T_{\mathrm{pw}} + wKT + W_0 - \frac{K}{\ln 2} \sum_{t = 0}^{T} \ln(\omega(t) + 1).
$$

# The Cost Function for the Year

The objective is to minimize cost not only on a day-to-day basis, but also over the entire year. We assume that each of the 1,095 8-hour shifts during a year falls into one of the three following categories of air traffic:

- Light: These are the shifts from 4 P.M. to 12 A.M. and from 12 A.M. to 8 A.M. on days of the year in the bottom $90 \%$ (329 days) of air traffic days. These shifts comprise $60 \%$ (657) of all 1,095 shifts during the year. We estimate that mean traffic on these days is approximately one-half of total mean traffic.
- Heavy: We define these shifts as the top $10\%$ of 8-hour shifts by air traffic (top 110 shifts). We estimate that these shifts have a mean traffic 2.5 times total mean traffic.
- Medium: Those shifts not falling under the above two definitions (328 remaining shifts). We estimate that these shifts have a mean traffic 1.5 times total mean traffic.

The cost function for the year is a weighted sum of the cost functions for each type of day, which we denote as $C_l$ , $C_m$ , $C_h$ for light, medium, and heavy traffic days:

$$
C_{\text{annual}} = 657 C_l + 328 C_m + 110 C_h. \tag{6}
$$

# Algorithm Implementation

We implement an algorithm that dynamically assigns escorts. 
The overall behavior is depicted in the flow chart on page 392. The algorithm is divided into six main parts.

# Input

The algorithm requires the layout of the concourse(s) and the flight schedule of known WPs. The layout of the concourse(s) includes the positions of the information kiosk, the arrival and departure gates, and an elevator if the concourse has two floors. The flight schedule includes the incoming and connecting flight gates and times.

# Main Simulation Loop

In each time step the algorithm queues WPs, assigns escorts, and moves escorts to their optimum destinations.

# Queueing WPs

The first task of the algorithm is to consider the possibility of the arrival of an unexpected WP. The average number of unexpected arrivals is taken to be a fixed percentage of the number of expected WPs, which can be specified in the algorithm; we take it to be $5\%$ .

![](images/22233044d7465350b8135a076e67a5b316faf51eae6381e46053b123bb49bb5f.jpg)
Figure 1. Flow chart outlining the algorithm used in the numerical simulations. The output of the simulation gives the aggregate flight delay time $T_{\mathrm{fd}}$ , the aggregate passenger waiting time $T_{\mathrm{pw}}$ , and $\omega(t)$ , the number of escorts employed in an alternative job at time $t$ .

Each WP (previously known or new) is assigned a score, the amount of time before their connecting flight departs. The algorithm then reorders the queue, inserting newly known WPs who have not yet been assigned an escort. WPs with a lower score are served first. This score function reduces the aggregate flight delay time $T_{\mathrm{fd}}$ as much as possible, since this is the most costly delay for the airline.

# Escort Assignment

The nearest available escort is assigned to the WP at the head of the queue using the "city block" distance $|\Delta x| + |\Delta y|$ instead of the Euclidean distance $\sqrt{(\Delta x)^2 + (\Delta y)^2}$ . 
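The nearest-available-escort rule under the city-block metric can be sketched directly; the escort identifiers and lattice coordinates below are hypothetical:

```python
def city_block(p, q):
    """Manhattan distance |dx| + |dy| between lattice points p and q."""
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def nearest_escort(escorts, wp):
    """Pick the available escort minimizing city-block distance to the WP.

    escorts: dict mapping escort id -> (x, y) lattice position.
    Returns None if no escort is available.
    """
    if not escorts:
        return None
    return min(escorts, key=lambda e: city_block(escorts[e], wp))

# Hypothetical positions on the concourse lattice: escort "B" is the
# closest to a WP arriving at (2, 2) under the city-block metric.
choice = nearest_escort({"A": (0, 0), "B": (3, 1), "C": (1, 5)}, wp=(2, 2))
```

On a rectangular lattice the city-block distance equals the actual number of lattice steps an escort must walk, which is why it is preferred here over the Euclidean distance.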
An escort assigned to a WP may be able to return to (or remain at) the information kiosk and perform an alternative job for some time before picking up the assigned WP.

# Motion and Status of Escorts

After the assignment of escorts, escorts who are not at their destination are moved toward it. In each time step $\Delta t$ , an escort can move one lattice distance $\Delta d$ . This defines a natural speed, $\Delta d / \Delta t$ . We take $\Delta t = 1$ min and the speed of escorts to be 2.5 feet/second (reasonable, considering that a concourse may have obstacles including other passengers). It follows that $\Delta d = 150$ ft.

When an escort reaches the incoming or connecting gate for a WP, the escort's status is updated accordingly.

However, when an escort reaches the incoming flight gate of a WP, the WP may not have arrived yet. We denote the probability of a flight being delayed by $p_{\mathrm{delay}}$ ; in fact, $p_{\mathrm{delay}} \approx 0.29$ [Mueller and Chatterji 2002]. For delayed flights, we take the length of the delay to be distributed exponentially [Wikipedia 2006b].

# Output

The main simulation loop continues until all WPs on the input flight schedule have been escorted to connecting flights. Once this condition is met, the algorithm outputs the aggregate WP waiting time $T_{\mathrm{pw}}$ ; the aggregate flight delay time $T_{\mathrm{fd}}$ , caused by WPs not arriving for their connecting flights on time; and $\omega(t)$ , the number of escorts employed in an alternative task at time $t$ .

# Case Studies

We did case studies of Delta Airlines concourses in three airports: O'Hare International Airport (Chicago, IL), John F. Kennedy International Airport (New York, NY), and Atlanta International Airport (Atlanta, GA).

We take the wage of an escort to be K = $3.50/h, an average of the $7/h paid by some airlines and the $0 paid to volunteers. 
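The delay model described above (a flight is delayed with probability $p_{\mathrm{delay}} \approx 0.29$ , with an exponentially distributed delay length) can be sampled as in this sketch; the 15-min mean delay is an assumed illustrative value, not a figure from the paper:

```python
import random

def sample_delay(p_delay=0.29, mean_delay=15.0, rng=random):
    """Return a flight delay in minutes: 0 with probability 1 - p_delay,
    otherwise an exponentially distributed length with the given mean
    (the mean here is an assumed illustration value)."""
    if rng.random() >= p_delay:
        return 0.0
    return rng.expovariate(1.0 / mean_delay)

# Sample many flights and check the fraction delayed is near p_delay.
rng = random.Random(42)
delays = [sample_delay(rng=rng) for _ in range(10000)]
frac_delayed = sum(d > 0 for d in delays) / len(delays)
```

Passing an explicit `random.Random` instance keeps simulation runs reproducible, which matters when comparing staffing levels on the same synthetic day.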
In each case study, we generated a simulated schedule of passengers over an 8-h shift that approximates the actual frequency of incoming and connecting Delta Airlines flights in each airport [City of Chicago 2005; Schumacher 1999]. Using the mean time interval between arrivals and departures, the airline's number of terminals, and the number of planned passengers from airline flight and concourse data, we modeled the number of expected WP arrivals per interval as a Poisson process. Unexpected WPs are not included in this schedule, but are accounted for in the simulation. Also, passengers are assumed to connect to Delta Airlines flights, which allows for analysis of the Delta Airlines concourse in isolation.

For each airport, we obtained satellite images of the concourse(s) from Google Earth in order to find distances between gates and kiosks. We found gate information at Delta Airlines' Website [2006]. We translated this information into a grid layout of the concourse, to serve as input to the simulation. Gates much less than the lattice spacing ( $\Delta d = 150$ ft) apart are assigned a common point on this grid. For the JFK concourse, a point was also assigned for an elevator; this concourse has arrivals and departures on different floors, so every route from an arrival gate to a departure gate includes a trip on the elevator (assumed to take one time step $\Delta t = 1$ min).

# JFK International Airport

For a medium-size two-concourse airport terminal, we consider the Delta Airlines concourses at JFK International Airport, with 20 gates. The low, medium, and high traffic flows are 10, 30, and 50 incoming flights per 8-h period [Delta Airlines 2006]. Even though the JFK concourses are larger than those at O'Hare, many fewer Delta Airlines planes fly into JFK than into O'Hare. The predicted costs for various numbers of escorts on these 8-h shifts are shown in Figure 2. 
The costs for small numbers of escorts are astronomically high because of the cost of delayed planes and missed flights, since there are too few escorts for the demand.

![](images/5d53c31ffb4800f86ea6552dda7bb313ecd34973a53a1338de1df1bb4b8bfcf8.jpg)
Figure 2. Simulated cost curves under low, medium, and high traffic flow in 2005 for the Delta Airlines JFK International Airport concourses.

The least-cost numbers of escorts for each traffic flow rate are 1, 3, and 6, at costs of $40, $115, and $190 per shift. From (6), the minimum total annual cost of escorts is $121,000.

# O'Hare International Airport

The Delta Airlines concourse at O'Hare International Airport has only five main gate areas. The low, medium, and high traffic flows are 43, 129, and 215 incoming flights per 8-h period [Delta Airlines 2006]. The predicted costs for various numbers of escorts on these 8-h shifts are shown in Figure 3.

![](images/6d2121fd0b1543ef5c9d998bb019247eef56050f3c600739eb82795c394b544a.jpg)
Figure 3. Simulated cost curves under low, medium, and high traffic flow in 2005 for the Delta Airlines Chicago O'Hare concourse.

The least-cost numbers of escorts for each traffic flow rate are 3, 6, and 9, at costs of $90, $180, and $300 per shift. From (6), the minimum total annual cost of escorts is $152,000.

# Atlanta International Airport

Delta Airlines has its headquarters in Atlanta, with a large four-concourse terminal with 20 main gate areas. Even though JFK has the same number of gates, the gates in Atlanta are spread much farther apart and handle much more traffic. The low, medium, and high traffic flows are 107, 321, and 535 incoming flights per 8-h period [Delta Airlines 2006]. The predicted costs for various numbers of escorts on these 8-h shifts are shown in Figure 4.

The least-cost numbers of escorts are 32, 40, and 70, at costs of $1,180, $3,880, and $6,430 per shift. From (6), the minimum total annual cost is $3,977,000. 
This figure is much larger than for O'Hare or JFK because the concourses at Atlanta are roughly four times as large and handle between three and ten times the traffic of the other airports. The main component of the cost for Atlanta is the cost of flight delays. To compensate for the larger concourses, we set the average time between connecting flights in Atlanta to 75 min (in agreement with actual flight schedules), as opposed to 45 min for the other airports.

![](images/50deba41a4b7a6c72fe34a1eb22de47fc7427f86600e3c826d2189d754d0291c.jpg)
Figure 4. Simulated cost curves under low, medium, and high traffic flow in 2005 for the Delta Airlines Atlanta International Airport concourses.

# Predictions of Future Escort Needs

Those who are most likely to require wheelchair assistance at airports are senior citizens. The senior population will increase $7\%$ over the next 5 years, $87\%$ over the next 25 years, and $112\%$ over the next 45 years [Department of Health and Human Services 2006]. Assuming that flying patterns for this age group remain unchanged, and that the layout of airports does not significantly change, our algorithm can predict the change in cost to the airline. The number of flight arrivals to the JFK Delta concourse under medium traffic will increase from 50 to about 100 over the next 30 years. Using our simulation, we find that the optimal number of escorts will increase from 6 to 9, with a corresponding increase of $600 per 8-h shift. These results are depicted in Figure 5, along with cost predictions for low traffic and high traffic.

# Model Strengths and Weaknesses

- The simplicity of the model makes it very versatile. Parameters such as the layout, the speeds of WPs and escorts, and the costs can be specified with minimal modifications to the algorithm.
- Simulation times are quite small for even the busiest airport. 
- The lattice and the use of a "city block" distance are more natural and realistic for the interior of buildings.
- The algorithm is simple and intuitive, making it easy to communicate and justify.

![](images/31e4f4b60082a43c87028a56b53017ba1f175c502f32d07b8b1241ab427a5ccb.jpg)
Figure 5. Simulated cost curves under predicted future low, medium, and high traffic flows in 2036 for the Delta Airlines JFK International Airport concourses.

- The algorithm is also practical, outputting a schedule for each escort.

# Conclusion

We have presented a systematic study of how to administer a wheelchair escort service at the least cost to an airline. Our algorithm can predict the costs under different concourse layouts, flight schedules, arrivals of unexpected WPs, flight delays, and reallocation of escorts to a secondary task to reduce cost. We applied our approach and presented results for case studies of Delta Airlines terminals in Chicago, New York, and Atlanta.

# References

Backman, C., E. Di Mauro, L. Scheuring, and S. Wyss. 2004. Managing service quality: Taking care of handicapped passengers at Arlanda Airport. Uppsala, Sweden: Uppsala University.
City of Chicago. 2005. Monthly operations, passengers, cargo summary by class: City of Chicago Airport activity statistics. December 2005. http://www.flychicago.com/doa/stats/1205SUMMARY.pdf.
Delta Airlines. 2006. Flight status and updates. http://www.delta.com/traveling_checkin/flight_status_updates/index.jsp.
Department of Health and Human Services, Administration on Aging. 2006. AoA statistics—Aging into the 21st century. http://www.aoa.dhhs.gov/prof/Statistics/future_growth/aging21/demography.asp.

Federal Aviation Administration, Bureau of Transportation Statistics. 2000. Air carrier flight delays and cancellations. http://www.oig.dot.gov/StreamFile?file=/data/pdfdocs/cr2000112.pdf.
Mueller, Eric R., and Gano B. Chatterji. 2002. Analysis of aircraft arrival and departure delay characteristics. 
AIAA 2002-5866. American Institute of Aeronautics and Astronautics: AIAA's Aircraft Technology, Integration, and Operations (ATIO) 2002 Technical Forum, 1-3 October 2002, Los Angeles, California.
Schumacher, B. 1999. Proactive flight schedule evaluation at Delta Air Lines. Winter Simulation Conference, INFORMS Simulation Society.
Wikipedia, the Free Encyclopedia. 2006a. Diminishing returns. http://en.wikipedia.org/wiki/Diminishing_returns. Accessed 5 Feb 2006.
_____. 2006b. Exponential distribution. http://en.wikipedia.org/wiki/Exponential_distribution. Accessed 5 Feb 2006.

![](images/16727434ef134eb020838aa0823a26cb9add4a1233623da002d520624abb218b.jpg)
Team members Michael Sasseville, Matthew Pelliccione, and Igor Zhitnitsky.

![](images/0ff0ac5e74929b847103dae8669bb97ff983fed9d8613ed81c175c8adb688a3a.jpg)
Team advisor Peter Kramer.

# A Simulation-Driven Approach for a Cost-Efficient Airport Wheelchair Assistance Service

Samuel F. Feng

Tobin G. Isaac

Nan Xiao

Rice University

Houston, TX

Advisor: Mark P. Embree

# Summary

Although roughly $0.6\%$ of the U.S. population is wheelchair-bound, the strain of travel is such that more than twice that fraction rely on wheelchairs in airports [Haseltine Systems Corp. 2006].

Two issues have the greatest impact on the cost and effectiveness of this service: the number of wheelchairs and how they should be deployed. The proper number of escorts and wheelchairs depends not only on the airport but also on the volume of passengers, which can vary greatly. If escorts determine their own movements within the airport, lack of coordination could result in areas being unattended; however, fluctuation in requests could be so great that a territory-based plan could overwork some escorts and underwork others.

We present an algorithm for scheduling the movement of escorts that is both simple in implementation and effective in maximizing the use of available time in each escort's schedule. 
Then, given the implementation of this algorithm, we simulate the scheduling of requests in a given airport to find the number of wheelchair/escort pairs that minimizes cost.

# Methods and Assumptions

We propose a stochastic simulation-driven optimization procedure. We partition the problem into three categories: pre-simulation processing, simulation rules and dynamics, and optimization. The pre-simulation phase generates the necessary inputs for the simulation phase, such as the airport layout and a master passenger request list containing the wheelchair assistance requests for a single day. The simulation phase consists of a continuous event-based model of passenger arrival/departure and wheelchair/escort movement. Finally, we minimize costs over the number of escorts.

# Pre-Simulation

# Airport Layout

The simulation is customized for the layout of each airport. We represent an airport as a bidirectional graph, in which nodes indicate gates, entrance/exit points, or other places of similar interest. Edges between nodes indicate travel paths, usually through the main hallway of a concourse. We constructed the graphs with the aid of satellite images [Google.com n.d.].

Passengers must travel through long corridors to reach departure gates. A typical concourse has gates on either side of the main corridor. We assume that the time required to travel from any gate to any gate is substantial; that is, even if two gates are adjacent or on opposite sides of the main corridor, we assess the travel time between the two gates.

The graphical representation of the airport is encoded in an $n \times n$ adjacency matrix $A$ , with entry $A_{i,j}$ denoting the travel time between location $i$ and location $j$ . We determine travel times by dividing the actual distance by a walking speed of 3 mph. 
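The adjacency-matrix representation above can be sketched as follows, with `None` marking locations that are not directly joined and the all-pairs shortest travel times computed by running Dijkstra's algorithm from each source; the toy four-node corridor is hypothetical:

```python
import heapq

def dijkstra(A, src):
    """Shortest travel times from src to every node, where A[i][j] is the
    direct travel time between locations i and j (None if no direct path)."""
    n = len(A)
    dist = [float("inf")] * n
    dist[src] = 0.0
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue  # stale heap entry
        for v in range(n):
            w = A[u][v]
            if w is not None and d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist

def all_pairs(A):
    """Matrix D of shortest travel times: one Dijkstra run per source."""
    return [dijkstra(A, s) for s in range(len(A))]

# Toy concourse: gates 0 -- 1 -- 2 -- 3 along one corridor (minutes).
A = [
    [None, 2, None, None],
    [2, None, 3, None],
    [None, 3, None, 4],
    [None, None, 4, None],
]
D = all_pairs(A)
```

Because the graph is bidirectional, `A` is symmetric and so is the resulting `D`; the matrix only needs to be recomputed when the airport layout changes, not on every simulated day.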
The shortest possible travel times (calculated using Dijkstra's algorithm [Wang 2006]) from every location to every other location are stored in matrix $D$ , with entry $D_{ij}$ denoting the shortest travel time between nodes $i$ and $j$ .

We assume that the escorts know the shortest path between any two gates, because they are familiar with the airport environment. In our simulations we do not consider the distance between the gate and the airplane.

# Wheelchairs and Escorts

We treat a wheelchair and its escort as a single traveling entity, the "escort." The airline may have additional wheelchairs on hand in the event of a malfunction, and we incorporate the cost of the additional wheelchairs into the maintenance and storage costs of wheelchairs in operation. The escort's job is to travel to the arrival gate of the passenger and transfer the passenger to the departure gate.

An important assumption is that the number of escorts remains constant throughout the simulation period. In reality, escorts rotate in shifts; but with a simulation period of one day, we assume that escorts starting a shift immediately replace escorts ending one. Similarly, during the simulation we do not allow hiring or firing of workers, nor buying or breaking of wheelchairs; instead, we represent the costs associated with these actions with a sunk cost term in the total cost function.

# Passenger Request List

Given a terminal, we create a flight schedule for one day. We look at the total number of passengers who pass through the airport in one day. We estimate the average load of a plane to be 125 passengers, which we use to estimate the number of flights arriving or departing at the terminal in one day. Observation of departing flight information at a busy airport [City of Atlanta 2006] confirms the information in another source [Neufville 1976]: There is regular activity between 6 A.M. and 10 P.M. 
and relatively few flights at night. We therefore space our departures evenly between these times, and then perturb these values by a random shift of less than an hour, so that we are certain not to test our algorithm against just one schedule. Subsequently, these flights are assigned to specific gates.

Next, we create the requests for the day. We generate the number of requests based on the passenger volume that we are trying to mimic and the percentage of the population that requires wheelchair assistance. For different runs, this value was either $0.6\%$ or $1.2\%$ [Haseltine Systems Corp. 2006]. Each request is assigned an arriving flight and a departing flight, with the assumption that no layover of less than half an hour should be attempted. We assume that a certain percentage of wheelchair passengers have phoned the airline ahead of time; their request time is set to 0. For the remainder, we generate random request times, varying from more than five hours to a half hour before arrival of the wheelchair passenger. We sort this list by request time, so that when the algorithm descends the list, it mimics a dispatcher's receiving the requests at varying times throughout the day, including the wheelchair needs that occur with little notification.

Different daily scenarios can be modeled by altering the generation of the request list. The request list models the passenger traffic load throughout the simulation, since it contains a greater concentration of requests during peak hours of operation. Furthermore, request frequency throughout the day can be increased to reflect operation during holiday-travel seasons at hub airports, or yearly peak travel at airports in popular vacation destinations.

# Scheduling Plan

We assume that escorts communicate with a dispatcher via a walkie-talkie, and that the dispatcher has a schedule for each escort.

An escort who has completed a task calls the dispatcher to find out the next. 
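The request-list generation described above can be sketched as follows; the gate labels, the phoned-ahead fraction, and the exact request-time window are illustrative assumptions:

```python
import random

def make_request_list(arrivals, phoned_frac=0.5, rng=random):
    """Sketch of request-list generation.

    arrivals: list of (t_arrive, gate_in, t_depart, gate_out), times in
    minutes after midnight.  A phoned-ahead passenger's request time is 0;
    otherwise the request arrives between 30 min and 5 h before the
    passenger does.  The 50% phoned-ahead fraction is an assumed value.
    """
    requests = []
    for (t_a, a, t_d, d) in arrivals:
        if rng.random() < phoned_frac:
            t_req = 0.0
        else:
            t_req = max(0.0, t_a - rng.uniform(30.0, 300.0))
        requests.append((t_req, t_a, a, t_d, d))
    requests.sort(key=lambda r: r[0])  # dispatcher sees requests in order
    return requests

# Two hypothetical connecting passengers.
rng = random.Random(1)
reqs = make_request_list(
    [(600, "A3", 660, "B7"), (480, "C1", 540, "A5")], rng=rng)
```

Sorting by request time is what lets the later scheduling pass mimic a dispatcher: descending the list replays the day's requests in the order they would actually come in.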
We assume that the dispatcher knows how long it takes an escort to get between two points $x$ and $y$ , which we call $\delta_{xy}$ .

A dispatcher receives requests at varying times throughout the day. Each request contains four pieces of information: the time and location of the passenger's arrival, $t_a$ and $a$ , and the time and location of departure, $t_d$ and $d$ . For the passenger to reach the destination in time, the escort must arrive at a location by some final time,

$$
t_f = t_d - \delta_{ad}.
$$

The algorithm's version of "first come, first served" is that one task cannot replace another on a schedule if its final time is later. This keeps every schedule compact: If a switch is made, the only result is the new task starting sooner after the previous one (switching will be discussed shortly).

To determine which escort should be assigned a passenger, the dispatcher first finds out if anyone can complete the task. From those who can, he selects one who can complete it in an optimal way, as follows.

The worst way in which a request can be fulfilled is for the escort to bring the passenger to his destination late yet within the window $\delta_w$ in which the flight is delayed. To determine if escort $e$ can do this, the dispatcher looks at what location $l_e$ the escort will be at completion of the last job before the final time of the request, and when that job will be completed, $t_e$ . We need

$$
t_e + \delta_{ea} < t_f + \delta_w.
$$

An escort who meets this requirement is in group $O_1$ .

A better way in which a request can be fulfilled is if an escort can bring the passenger to the destination on time but by removing another passenger from his schedule later. The condition for this one is the same as above, but without the delay term:

$$
t_e + \delta_{ea} < t_f.
$$

An escort who meets this requirement is in group $O_2$ . 
Because we want to do as little reshuffling of schedules as possible, an even better situation is for an escort to be able to take on a request by pushing back the completion times of later tasks without forcing any late departures. An escort who meets this requirement is in group $O_{3}$.

Finally, the ideal situation is when the dispatcher can assign a request to an escort without rescheduling that escort's later tasks at all. If the next task is to start at time $t_s$ at location $s$, then escort $e$ must be able to complete the request with enough time to go from $d$ to $s$ by $t_s$:

$$
t _ {e} + \delta_ {d s} < t _ {s}.
$$

An escort who meets this requirement is in group $O_4$.

The dispatcher determines the best group that is not empty and chooses from it the escort whose previous task ends closest to the arrival location of the passenger in the new request. If a new request bumps out one or more queued requests, they must be rescheduled before the dispatcher advances to the next request on the list.

If a request cannot be scheduled, every escort is either busy with another request or too far away to arrive in time. In such a case, the passenger must be scheduled for a later flight and/or compensated for missing the flight, and the situation falls outside the scope of the algorithm.

# Simulation Algorithm

We envision the problem as a temporal "packing" problem: the dispatcher must fit as many tasks onto the escorts' schedules as possible.
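The four group conditions can be collected into a single membership test. The sketch below is ours, not the team's code: the names are invented, the group-3 test (which requires scanning the escort's whole schedule) is abstracted into a precomputed flag, and the group-4 test is written with the finish time of the new task, which the text denotes by reusing $t_e$.

```python
def optimality_row(t_e, d_ea, t_f, d_w, t_done, d_ds, t_s, can_shift):
    """Membership row (O1, O2, O3, O4) for one escort and one request.

    t_e       completion time of the escort's last job before the final time
    d_ea      travel time from that job's location to the arrival gate a
    t_f       final time t_d - delta_ad by which the escort must reach a
    d_w       window by which the departing flight can be held (delay)
    t_done    time at which the escort would finish the new task at gate d
    d_ds      travel time from d to the start location s of the next task
    t_s       scheduled start time of the escort's next task
    can_shift True if later tasks can be pushed back without any lateness
    """
    o1 = t_e + d_ea < t_f + d_w        # late, but within the delay window
    o2 = t_e + d_ea < t_f              # on time, possibly bumping a task
    o3 = o2 and can_shift              # on time by shifting, not removing
    o4 = o2 and t_done + d_ds < t_s    # fits without touching other tasks
    return (int(o1), int(o2), int(o3), int(o4))
```

Rows produced this way stack into the $N_E \times 4$ optimality matrix $O$ used by the simulation.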
Algorithm: MAINSIMULATION $(D, R, N_E)$
create escort task array $E$
$I = \mathrm{FINDINDEXUNFULFILLEDREQUEST}(R)$
while $I > 0$ do
$O = \mathrm{MAKEOPTIMALITYMAT}(D, R_I, E)$
comment: Now execute request
$(R, E) = \mathrm{EXECUTEREQUEST}(D, R, E, I, O)$
$I = \mathrm{FINDINDEXUNFULFILLEDREQUEST}(R)$
totalmissed $= \mathrm{SUM}(R_{\mathrm{missed~flights}})$
totaldelay $= \mathrm{SUM}(\mathrm{delaytime})$
return (totalmissed, totaldelay)

MAINSIMULATION() handles the entire simulation. We input the shortest-travel-time matrix $D$, the list of requests $R$, and the number of escorts $N_E$. Entry $E_j$ of the task array is escort $j$'s task schedule. The two main routines within the simulation are MAKEOPTIMALITYMAT() and EXECUTEREQUEST(). Together, they determine which escort is most suited to be assigned the request at hand, and how that escort's current schedule will be changed to accommodate the new request. MAKEOPTIMALITYMAT() builds an $N_E \times 4$ matrix, with row $j$ representing the inclusion in or exclusion from the optimality groups of escort $e_j$.

Each row of the optimality matrix is generated by OPTIMALITYCHECK(), shown below.

For each request, OPTIMALITYCHECK() assigns escorts to optimality groups, creating a row of the optimality matrix $O$ with entry $O_{i,j}$ denoting whether escort $i$ is in group $j$.

Finally, EXECUTEREQUEST(), also shown below, assigns an escort, if possible.

As the groups descend in optimality, the dispatcher undertakes more and more actions in attempting to assign the request at hand.
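The MAINSIMULATION pseudocode translates almost line for line into Python. In this sketch (ours; the two subroutines are passed in as functions so the loop can be exercised with stubs), each request record carries its own fulfilled/missed/delay state:

```python
def main_simulation(D, requests, n_escorts, make_optimality_row, execute_request):
    """Dispatcher loop over a time-sorted request list.

    requests -- list of dicts with keys 'fulfilled', 'missed', 'delay'.
    Python is 0-indexed, so -1 (rather than the pseudocode's 0) signals
    that no unfulfilled request remains.
    """
    escorts = [[] for _ in range(n_escorts)]   # E: one task list per escort

    def next_unfulfilled():
        for i, r in enumerate(requests):
            if not r['fulfilled']:
                return i
        return -1

    i = next_unfulfilled()
    while i >= 0:
        # Row j classifies escort j against the current request.
        O = [make_optimality_row(D, requests[i], e) for e in escorts]
        # EXECUTEREQUEST must mark the request fulfilled (or missed),
        # or this loop would never terminate.
        execute_request(D, requests, escorts, i, O)
        i = next_unfulfilled()

    total_missed = sum(r['missed'] for r in requests)
    total_delay = sum(r['delay'] for r in requests)
    return total_missed, total_delay
```

Passing the subroutines in as arguments keeps the loop testable independently of any particular scheduling policy.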
Algorithm: OPTIMALITYCHECK $(D, R_I, e_j)$
initialize $O_{j} = (0,0,0,0)$
if Escort $j$ is Group 1 Optimal (can fulfill $R_I$ with delay) then $O_{j,1} = 1$
if Escort $j$ is Group 2 Optimal (can fulfill $R_I$ w/o delay) then $O_{j,2} = 1$
if Escort $j$ is Group 3 Optimal (can fulfill $R_I$ w/o removing tasks) then $O_{j,3} = 1$
if Escort $j$ is Group 4 Optimal (can fulfill $R_I$ w/o shifting tasks) then $O_{j,4} = 1$
return $(O_j)$

# Deployment Plan

We measure costs in United States dollars (USD). There are two cost categories:

- costs dependent on the number of wheelchair/escort pairs, specifically, wheelchair maintenance/storage costs and escorts' wages; and
- costs dependent on the number of delayed flights and of passengers who miss their flights.

All wheelchair and escort costs are considered fixed throughout the (one-day) duration of a simulation.

In our cost function, $N_{E}$ is the number of escorts, $R$ is the daily request list, $D$ is the airport layout, $X$ is the number of missed flights, and $Y$ is the total amount of time that flights must be held at the gate due to late passengers. The objective function is thus:

$$
[ X, Y ] = \mathrm {MAINSIMULATION} (D, R, N _ {E}),
$$

$$
C (X, Y) = \mathrm{Cost}_{\mathrm{fixed}} + \mathrm{Cost}_{\mathrm{miss}}\, X + \mathrm{Cost}_{\mathrm{delayed}}\, Y.
$$

The average cost per hour of a flight held at the gate is assumed to be $1,018, the average cost in 1986 [Kleinman et al. 1998] adjusted for inflation [Friedman 2006]. The cost of missing a flight is $500, the price of ticket reimbursement and/or lodging if necessary. Our fixed costs include the maintenance costs of the wheelchairs and the wages of escorts. We assume that wheelchairs cost $130/yr/chair [Alexander 2006] to maintain, store, and replace as needed, and that escort wages are $10/h [Avjobs.com 2006].
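These cost parameters combine into a small cost calculator; the sketch below is ours (names invented), with the delay total taken in hours to match the per-hour figure, and one wheelchair assumed per escort:

```python
# Cost parameters from the text.
WAGE_PER_HOUR = 10.0                  # escort wage [Avjobs.com 2006]
CHAIR_PER_HOUR = 130.0 / (365 * 24)   # $130/yr/chair [Alexander 2006]
COST_PER_MISS = 500.0                 # ticket reimbursement and/or lodging
COST_PER_DELAY_HOUR = 1018.0          # 1986 figure, inflation-adjusted

def total_cost(n_escorts, hours, missed, delay_hours):
    """C(X, Y) = Cost_fixed + Cost_miss * X + Cost_delayed * Y."""
    fixed = (WAGE_PER_HOUR + CHAIR_PER_HOUR) * hours * n_escorts
    return fixed + COST_PER_MISS * missed + COST_PER_DELAY_HOUR * delay_hours
```

For example, a day with no escorts on duty, two missed flights, and one hour of gate delay would cost 500 × 2 + 1,018 = $2,018 under this model.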
Algorithm: EXECUTEREQUEST $(D,R,E,I,O)$
if column 4 of $O$ contains a 1
then $\left\{ \begin{array}{l}\text{Find escort with closest previous task } e^{*}\\ \text{Insert task into schedule of } e^{*}\\ \text{Mark request } R_{I}\text{ as fulfilled} \end{array} \right.$
else if column 3 of $O$ contains a 1
then $\left\{ \begin{array}{l}\text{Find escort with closest previous task } e^{*}\\ \text{Determine insertion spot } s\text{ for task}\\ \text{Move jobs after } s\text{ back farthest possible}\\ \text{Insert task into schedule of } e^{*}\\ \text{Mark request } R_{I}\text{ as fulfilled}\\ \text{Move jobs after } s\text{ forward farthest possible} \end{array} \right.$
else if column 2 of $O$ contains a 1
then $\left\{ \begin{array}{l}\text{Find escort with closest previous task } e^{*}\\ \text{Determine insertion spot } s\text{ for task}\\ \text{Move jobs after } s\text{ back farthest possible}\\ \text{Insert task into schedule of } e^{*}\\ \text{Mark request } R_{I}\text{ as fulfilled}\\ \text{Remove overlapping tasks in schedule}\\ \text{Mark corresponding requests as unfulfilled}\\ \text{Move remaining jobs after } s\text{ forward farthest possible} \end{array} \right.$
else if column 1 of $O$ contains a 1
then $\left\{ \begin{array}{l}\text{Find escort with closest previous task } e^{*}\\ \text{Determine insertion spot } s\text{ with minimum delay}\\ \text{Move jobs after } s\text{ back farthest possible}\\ \text{Insert task into schedule of } e^{*}\\ \text{Mark request } R_{I}\text{ as fulfilled}\\ \text{Remove overlapping tasks in schedule}\\ \text{Mark corresponding requests as unfulfilled}\\ \text{Move remaining jobs after } s\text{ forward farthest possible} \\ \text{Log delay of } R_{I} \end{array} \right.$
else $\left\{ \begin{array}{l}\text{Mark task as fulfilled}\\ \text{Log } R_{I}\text{ as missed flight} \end{array} \right.$
return (E, R)

The $\mathrm{Cost}_{\mathrm{fixed}}$ term is thus given by

$$
\mathrm{Cost}_{\mathrm{fixed}} = \left(1 0 + \frac {1 3 0}{3 6 5 \times 2 4}\right) H N _ {E},
$$

where $H$ is the simulation period in hours.

# Results and Analysis

# Test Protocol

Before delving into the results of the test simulations, we clarify certain protocols used.

- Each data point on every plot represents 10 trials.
- Error bars are one standard deviation.
- Half of all requests are known ahead of passenger arrival (typical).

# Optimizing Schedules

We demonstrate the effectiveness of adaptive scheduling, that is, of allowing escorts to be placed in optimality groups 1, 2, and 3. We simulated a small one-concourse setting with moderate traffic (15,000 passengers/day) and successively disabled placing escorts in optimality groups 1, 2, and 3, in that order. In other words, the simplest algorithm is for a request to be missed if no escort is in $O_4$; the next simplest is for a request to be missed if no escort is in $O_4$ or $O_3$; etc. These simulations were carried out assuming that $1.2\%$ of the passengers require wheelchair assistance.

There are large gains from shifting around existing tasks in an escort's schedule, that is, from including group $O_{3}$. The effect is substantial: we avoid hiring five or six escorts, resulting in savings of about $\$1,000/\mathrm{d}$ (Figure 1). Savings would increase with passenger traffic and airport size. With several airports, Epsilon Airlines may see total savings on the order of $\$10,000/\mathrm{d}$, merely by adopting our adaptive scheduling.

# Performance/Sensitivity across Concourses

We use Chicago O'Hare terminals as test locations for examining performance in different-sized settings. We ran simulations for one-, two-, and four-concourse settings, corresponding to Figures 2–4. Each set of simulations spans three levels of terminal traffic, assuming that $0.6\%$ of passengers require wheelchair assistance. Comparing costs vs.
number of escorts, we observe an increase in the optimal number of escorts roughly proportional to the increase in the number of concourses. Furthermore, as airport traffic increases, we generally see a rise in the number of escorts needed to maintain optimal cost. Figure 2 shows that in a single-concourse setting, increased traffic density barely increases the optimal number of escorts, whereas Figures 3 and 4 show substantial increases in escort demand within the two- and four-concourse settings, due to longer travel times between gates.

![](images/b41495e57c7b5be13e9c9bcbd17c0fc96989ef166eca7f41ebc3ff3701ad51b5.jpg)
Figure 1. Comparison of scheduling methods.

# Performance/Sensitivity across Airports

Our algorithm has the flexibility to address any airport configuration, since we use satellite images of an airport to convert it to its node-edge representation.

We perform a comparison analysis on three airports. We examine two-concourse sections from New York's LaGuardia (LGA), Chicago O'Hare (ORD), and Dallas/Fort Worth (DFW) airports, at two separate traffic levels, assuming that $1.2\%$ of passengers require wheelchair assistance. From the satellite images, we hypothesized that DFW would fare the worst, due to passengers having to walk from one side of the circular terminals to the other (almost a mile), and that Chicago O'Hare would perform best, since it has a more compact layout, with passengers able to cross through a triangular intersection among concourses.

However, Figure 5 shows that algorithm performance is statistically equivalent, despite different airport geometries, over the wide range of numbers of passengers per day, and over 10 averaged trials for each data point. Our algorithm performs equally well regardless of airport configuration, though for higher traffic levels we might see differences among airport geometries.

![](images/34d9f811490507a38a40886552b1b476088a82772c10bcde60bd739d996de043.jpg)

![](images/e9a825f2da9e0c8815771e3404e0daf14b9a232f7d4bd8c95d23e98712401282.jpg)
Figure 2. Chicago O'Hare Terminal 3 West Concourse only.

![](images/4d78e929b6963995a906d204352dd2537e06b3cc596610a2641c34f94dda7b17.jpg)

![](images/9b98252544247eda1a1df63add45707f53e3359201feb9945747e44a5c0eb6ed.jpg)

![](images/558ede80a5f346939670a895ec865b3a88872b76ca6aebcdeaa178e23088d9fa.jpg)
Figure 3. Chicago O'Hare Terminal 2.

![](images/56cebcc2346474f7e8bcdbe50234ab029de2fbe9d8009fbd28cb0ce3ee3bd176.jpg)

![](images/a28ef6824c8a27673528b2faa54a446588eae3e4b1afa6ea1b90af5f3a7d46ae.jpg)

![](images/a6ce2bcfba73356a987967a3752f99ad09408a29f4b3d21baac0f90559c59ee9.jpg)
Figure 4. Chicago O'Hare Terminal 3.

![](images/7aeddfd5add12bf4be301fcd50934f46821c53dbaf3e5535d99f34a99c947ae2.jpg)

![](images/997525da73f89364378ebebb14c8bb745aaf7d729f4fec4fa008a07d3e231d21.jpg)
Figure 5. Comparison of algorithm performance at different airports.

![](images/8517559271fc6a0aba8c9c0a548579f9da668d10046c4d651aa1328a6895c3e9.jpg)

# Predicting for an Aging Population

The demand for wheelchair assistance will increase in the future. The question is, What does this entail? Should an increase of $10\%$ in the requests per capita be treated the same as a $10\%$ increase in total volume of passengers? The answer from our simulations appears to be that it should be treated as more. A $10\%$ increase in population would increase the number of scheduled flights per day; a $10\%$ increase in requests per capita would not. In other words, as the average passenger ages, the result is not just more requests but more requests per plane.

We ran two series of simulations for a two-concourse airport.

- We increased the number of passengers (and thus correspondingly the number of flights) from 33,000 to 42,000.
- We held the number of passengers constant at 30,000 and increased the request percentage from $1.32\%$ to $1.68\%$.
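A quick arithmetic check shows why the two series are comparable. Assuming the first series keeps the baseline 1.2% request rate (our inference; the text gives only the passenger counts), both series sweep over the same request counts:

```python
def requests(passengers, pct):
    """Expected number of wheelchair requests for the day."""
    return round(passengers * pct / 100)

# Series 1: traffic grows, request rate fixed at the 1.2% baseline.
s1 = (requests(33_000, 1.2), requests(42_000, 1.2))
# Series 2: traffic fixed at 30,000, request rate grows instead.
s2 = (requests(30_000, 1.32), requests(30_000, 1.68))
# Both series run from 396 up to 504 requests per day.
```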
In each series, the number of requests is the same, but the average minimum cost is greater when the percentage increases (Figure 6).

![](images/618a81f205201d6c7df60f7c352f5d536e2ff19855b204aa78b943ba2b5e1489.jpg)
Figure 6. Comparison of increase in requests.

# Conclusions

A weakness of our approach is that we do not specifically optimize toward minimizing damages. For example, an escort may have five short tasks (in terms of time from arrival gate to departure gate) queued when a lengthy request is received. The dispatcher may end up striking three or four tasks from the escort's schedule in order to fulfill the one new task, and the removed tasks' passengers may miss their departure times. We sacrifice many passengers for the sake of one. However, this situation is rare, since we try to minimize the chance of any passenger missing his/her flight. If this type of situation is encountered frequently, then we have probably not optimized the number of escorts.

Our method also assumes no arrival delays or departure delays due to other causes.

The dispatcher has the difficult task of managing escorts' schedules, but this is feasible with a computer.

These weaknesses are outweighed by the demonstrated robustness and improvements of our approach. We have shown in a variety of settings that not only does our algorithmic approach work but it outperforms simpler algorithms by substantial margins.

# References

Alexander, Richard. 2006. Lifecare planning for the BK amputee: Future medical costs. http://consumerlawpage.com/article/amputee.shtml.
Avjobs.com. 2006. Aviation career salary ranges. http://www.avjobs.com/table/airsalry.asp.
City of Atlanta. 2006. Atlanta international airport flight information. http://www.atlanta-airport.com.
Friedman, S. Morgan. 2006. Inflation calculator. http://www.westegg.com/inflation/.
Google.com. n.d. Google maps. http://maps.google.com.
Haseltine Systems Corporation. 2006. Haseltine Systems Corporation.
http://www.haseltine.com/data.html.
Kleinman, Nathan L., Stacy D. Hill, and Victor A. Ilenda. 1998. Simulation optimization of air traffic delay cost. In Proceedings of the 1998 Winter Simulation Conference, edited by D.J. Medeiros, E.F. Watson, J.S. Carson, and M.S. Manivannan, 1177–1181.
Neufville, Richard de. 1976. Airport Systems Planning. Cambridge, MA: MIT Press.
Wang, Yi. 2006. Dijkstra algorithm consistent with cyclic paths. http://www.mathworks.com/matlabcentral/fileexchange/loadFile.do?objectId=7869&objectType=file.

![](images/00f83bd0d6ebb03663c0524e896d83b4b6a4e839b42cfa402327207495f78b5d.jpg)
Team advisor Mark Embree and team members Toby Isaac, Nan Xiao, and Sam Feng.

# Judges' Commentary:

# The Fusaro Award Wheelchair Paper

Peter Anspach

National Security Agency

Ft. Meade, MD

anspach@aol.com

Marie Vanisko

Dept. of Mathematics

California State University Stanislaus

Turlock, CA

mvanisko@csustan.edu

The Ben Fusaro Award for the 2006 Problem B went to the team from Maggie L. Walker Governor's School in Richmond, VA. Their paper fell just short of the Outstanding designation, due to a slightly less sophisticated level of mathematics than could have been used. However, the paper exemplified some outstanding characteristics:

- It presented a high-quality application of the complete modeling process;
- it demonstrated noteworthy originality and creativity in the modeling effort; and
- it was well-written, in a clear expository style, making it a pleasure to read.

Addressing real-world problems involves formulating a mathematical description of the problem, solving the mathematical model, interpreting the mathematical solution, and critically evaluating the model. Before a team could formulate a mathematical description of the problem, it was necessary to do research to estimate reasonable values for the parameters to be used.

The Maggie L.
Walker team began by getting current statistics on the number of wheelchair passengers and how airlines and airports serve their needs. In addition, they looked at the Department of Transportation Congressional Report on disability-related airline complaints. From their assumptions, it was clear that the team considered many issues. Certain assumptions—for example, wheelchairs are always functional, an important issue—treated issues that + +might seem superfluous but are otherwise not tractable in a model. The team justified their assumption that the pattern of calls for wheelchairs follows a Poisson process. + +The team considered how many escorts and wheelchairs to place at each airport, and the most efficient way for escorts and wheelchairs to move around. The team considered small and large airports and linear, pier, satellite, and curvilinear-shaped concourses. + +Their model consisted of three parts: + +- an algorithm for finding the number of escorts that an airport should hire, based on the need to balance costs and recognizing that the primary costs are the salaries of escorts; +- establishing that the best wheelchair-to-escort ratio is one-to-one; +- showing that wheelchair service is most efficient when escorts have a central hub, whose location depends on the concourse type. + +To test the efficiency of their model, the team used a spreadsheet to simulate wheelchair service in small, medium, and large airports. They recognized that some of their assumptions—for example, that all escorts are perfectly efficient and all passengers are completely cooperative—weaken their model by ignoring the human element. However, they demonstrated the flexibility in their model, allowing for changes as the airline industry grows and the traveling population ages. + +This paper is a fine example of the fact that mathematical modeling can be done at many levels. 
The team is to be congratulated on their thoroughness, their clarity, and their utilization of the mathematics that they knew to create their own model and solve the problem at hand. The judges felt that the model itself was both reasonable and well thought out.

# About the Authors

Marie Vanisko is in her fifth year of teaching at Cal State Stanislaus. Prior to that, she taught for 31 years at Carroll College in Montana and was a visiting professor at the U.S. Military Academy at West Point. She chairs a College Board committee for the SAT Subject Tests in Mathematics and serves on a national joint committee for the NCTM and MAA. For each of the past two years, Marie has co-directed an MAA Tensor Foundation grant project for high school girls, entitled Preparing Women for Mathematical Modeling, with the hope of encouraging more young women to select careers that involve mathematics. She serves as a judge for the COMAP MCM and HiMCM and has also been active in the MAA PMET (Preparing Mathematicians to Educate Teachers) project.

# Announcing the launch of a new journal dedicated to the history of mathematics education!

![](images/7f65a6c270354aa68f36471488319e54d24e771c39c2ef7f5b1fc56e7ada1845.jpg)

The International Journal of the History of Mathematics Education will begin taking subscriptions in calendar year 2007. The journal will have two (2) issues each year and will be available both in print and online.

Our single issue for 2006 is now available for free download. For 2007, as a special introductory offer, all subscriptions are available at half price. Join with us as we explore this exciting and growing new field.

# Visit www.comap.com/historyjournal.html for FREE download and subscription information.
+ +Chief Editor: Gert Schubring (Bielefeld University, Germany) + +Managing Editor: Alexander Karp (Teachers College, Columbia University, USA/Russia) + +The major aim of this journal is to provide mathematics teaching and mathematics education with its memory, in order to reveal the insights achieved in earlier periods (ranging from Ancient time to the late $20^{\text{th}}$ century) and to unravel the fallacies of past events (e.g., reform euphoria). + +# Statement of Ownership, Management, and Circulation (All Periodicals Publications Except Requester Publications) + +
1. Publication Title: The UMAP Journal
2. Publication Number: 0197-3622
3. Filing Date: 9/20/2006
4. Issue Frequency: Quarterly
5. Number of Issues Published Annually: 4
6. Annual Subscription Price: $104.00
7. Complete Mailing Address of Known Office of Publication (Not printer) (Street, city, county, state, and ZIP+4®): COMAP, Inc., 175 Middlesex Tpk, Suite 3B, Bedford, MA 01730
   Contact Person: Kevin Darcy
   Telephone (Include area code): 781/862-7878 x131
+ +8. Complete Mailing Address of Headquarters or General Business Office of Publisher (Not printer) + +# Same + +9. Full Names and Complete Mailing Addresses of Publisher, Editor, and Managing Editor (Do not leave blank) + +Publisher (Name and complete mailing address) + +Solomon Garfunkel + +175 Middlesex Tpk, Suite 3B, Bedford, MA 01730 + +Editor (Name and complete mailing address) + +Paul Campbell + +700 College Street, Beloit, WI 53511 + +Managing Editor (Name and complete mailing address) + +Tim McClean + +175 Middlesex Tpk, Suite 3B, Bedford, MA 01730 + +10. Owner (Do not leave blank. If the publication is owned by a corporation, give the name and address of the corporation immediately followed by the names and addresses of all stockholders owning or holding 1 percent or more of the total amount of stock. If not owned by a corporation, give the names and addresses of the individual owners. If owned by a partnership or other unincorporated firm, give its name and address as well as those of each individual owner. If the publication is published by a nonprofit organization, give its name and address.) + +
| Full Name | Complete Mailing Address |
|---|---|
| Consortium for Mathematics and Its Applications, Inc. (COMAP, Inc.) | 175 Middlesex Tpk., Suite 3B, Bedford, MA 01730 |
+ +11. Known Bondholders, Mortgagees, and Other Security Holders Owning or Holding 1 Percent or More of Total Amount of Bonds, Mortgages, or Other Securities. If none, check box + +
| Full Name | Complete Mailing Address |
|---|---|
+ +12. Tax Status (For completion by nonprofit organizations authorized to mail at nonprofit rates) (Check one) + +The purpose, function, and nonprofit status of this organization and the exempt status for federal income tax purposes: + +Has Not Changed During Preceding 12 Months +□ Has Changed During Preceding 12 Months (Publisher must submit explanation of change with this statement) + +
13. Publication Title: The UMAP Journal
14. Issue Date for Circulation Data: 11/15/2006

| 15. Extent and Nature of Circulation | Average No. Copies Each Issue During Preceding 12 Months | No. Copies of Single Issue Published Nearest to Filing Date |
|---|---|---|
| a. Total Number of Copies (Net press run) | 725 | 740 |
| b(1). Mailed Outside-County Paid Subscriptions Stated on PS Form 3541 (Include paid distribution above nominal rate, advertiser's proof copies, and exchange copies) | 650 | 675 |
| b(2). Mailed In-County Paid Subscriptions Stated on PS Form 3541 (Include paid distribution above nominal rate, advertiser's proof copies, and exchange copies) | 0 | 0 |
| b(3). Paid Distribution Outside the Mails Including Sales Through Dealers and Carriers, Street Vendors, Counter Sales, and Other Paid Distribution Outside USPS® | 30 | 35 |
| b(4). Paid Distribution by Other Classes of Mail Through the USPS (e.g., First-Class Mail®) | 0 | 0 |
| c. Total Paid Distribution (Sum of 15b(1), (2), (3), and (4)) | 680 | 710 |
| d(1). Free or Nominal Rate Outside-County Copies Included on PS Form 3541 | 0 | 0 |
| d(2). Free or Nominal Rate In-County Copies Included on PS Form 3541 | 12 | 18 |
| d(3). Free or Nominal Rate Copies Mailed at Other Classes Through the USPS (e.g., First-Class Mail) | 0 | 0 |
| d(4). Free or Nominal Rate Distribution Outside the Mail (Carriers or other means) | 0 | 0 |
| e. Total Free or Nominal Rate Distribution (Sum of 15d(1), (2), (3), and (4)) | 12 | 18 |
| f. Total Distribution (Sum of 15c and 15e) | 692 | 728 |
| g. Copies not Distributed (See Instructions to Publishers #4) | 8 | 14 |
| h. Total (Sum of 15f and 15g) | 700 | 742 |
| i. Percent Paid (15c divided by 15f, times 100) | 98 | 98 |
+ +16. Publication of Statement of Ownership +□ If the publication is a general publication, publication of this statement is required. Will be printed in the Third issue of this publication. + +□ Publication not required. + +17. Signature and Title of Editor, Publisher, Business Manager, or Owner + +![](images/bc8f93fb4aee03e70e0fa50dc414c0592d0f9e8346767c1e050233c769c34e74.jpg) + +Date + +11/15/2006 + +I certify that all information furnished on this form is true and complete. I understand that anyone who furnishes false or misleading information on this form or who omits material or information requested on the form may be subject to criminal sanctions (including fines and imprisonment) and/or civil sanctions (including civil penalties). \ No newline at end of file diff --git a/MCM/1995-2008/2007ICM/2007ICM.md b/MCM/1995-2008/2007ICM/2007ICM.md new file mode 100644 index 0000000000000000000000000000000000000000..c7063f7ee3b89580d57edd3c2c9b1ae8d104cac1 --- /dev/null +++ b/MCM/1995-2008/2007ICM/2007ICM.md @@ -0,0 +1,1943 @@ +# The U + +# M + +Publisher + +COMAP, Inc. + +Executive Publisher + +Solomon A. Garfunkel + +ILAP Editor + +Chris Arney + +Associate Director, + +Mathematics Division + +Program Manager, + +Cooperative Systems + +Army Research Office + +P.O.Box 12211 + +Research Triangle Park, + +NC 27709-2211 + +David.Arney1@arl.army.mil + +On Jargon Editor + +Yves Nievergelt + +Department of Mathematics + +Eastern Washington University + +Cheney, WA 99004 + +ynievergelt@ewu.edu + +Reviews Editor + +James M. Cargal + +Mathematics Dept. + +Troy University—Montgomery Campus + +231 Montgomery St. + +Montgomery, AL 36104 + +jmcargal@sprintmail.com + +Chief Operating Officer + +Laurie W. Aragon + +Production Manager + +George W. Ward + +Production Editor + +Timothy McLean + +Distribution + +John Tomicek + +Graphic Designer + +Daiva Kiliulis + +# AP Journal + +Vol. 28, No. 2 + +# Editor + +Paul J. Campbell + +Campus Box 194 + +Beloit College + +700 College St. 
+ +Beloit, WI 53511-5595 + +campbell@beloit.edu + +# Associate Editors + +Don Adolphson + +Chris Arney + +Aaron Archer + +Ron Barnes + +Arthur Benjamin + +Robert Bosch + +James M. Cargal + +Murray K. Clayton + +Lisette De Pillis + +James P. Fink + +Solomon A. Garfunkel + +William B. Gearhart + +William C. Giauque + +Richard Haberman + +Jon Jacobsen + +Walter Meyer + +Yves Nievergelt + +Michael O'Leary + +Catherine A. Roberts + +John S. Robertson + +Philip D. Straffin + +J.T. Sutcliffe + +Brigham Young University + +Army Research Office + +AT&T Shannon Research Laboratory + +University of Houston-Downtown + +Harvey Mudd College + +Oberlin College + +Troy University—Montgomery Campus + +University of Wisconsin—Madison + +Harvey Mudd College + +Gettysburg College + +COMAP, Inc. + +California State University, Fullerton + +Brigham Young University + +Southern Methodist University + +Harvey Mudd College + +Adelphi University + +Eastern Washington University + +Towson University + +College of the Holy Cross + +Georgia Military College + +Beloit College + +St. Mark's School, Dallas + +# Membership Plus + +Individuals subscribe to The UMAP Journal through COMAP's Membership Plus. This subscription also includes a CD-ROM of our annual collection UMAP Modules: Tools for Teaching, our organizational newsletter Consortium, on-line membership that allows members to download and reproduce COMAP materials, and a $10\%$ discount on all COMAP purchases. + +(Domestic) #2720 $104 + +(Outside U.S.) #2721 $117 + +# Institutional Plus Membership + +Institutions can subscribe to the Journal through either Institutional Plus Membership, Regular Institutional Membership, or a Library Subscription. 
Institutional Plus Members receive two print copies of each of the quarterly issues of The UMAP Journal, our annual collection UMAP Modules: Tools for Teaching, our organizational newsletter Consortium, on-line membership that allows members to download and reproduce COMAP materials, and a $10\%$ discount on all COMAP purchases. + +(Domestic) #2770 $479 + +(Outside U.S.) #2771 $503 + +# Institutional Membership + +Regular Institutional members receive print copies of The UMAP Journal, our annual collection UMAP Modules: Tools for Teaching, our organizational newsletter Consortium, and a $10\%$ discount on all COMAP purchases. + +(Domestic) #2740 $208 + +(Outside U.S.) #2741 $231 + +# Web Membership + +Web membership does not provide print materials. Web members can download and reproduce COMAP materials, and receive a $10\%$ discount on all COMAP purchases. + +(Domestic) #2710 $41 + +(Outside U.S.) #2710 $41 + +To order, send a check or money order to COMAP, or call toll-free 1-800-77-COMAP (1-800-772-6627). + +The UMAP Journal is published quarterly by the Consortium for Mathematics and Its Applications (COMAP), Inc., Suite 3B, 175 Middlesex Tpke., Bedford, MA, 01730, in cooperation with the American Mathematical Association of Two-Year Colleges (AMATYC), the Mathematical Association of America (MAA), the National Council of Teachers of Mathematics (NCTM), the American Statistical Association (ASA), the Society for Industrial and Applied Mathematics (SIAM), and The Institute for Operations Research and the Management Sciences (INFORMS). The Journal acquaints readers with a wide variety of professional applications of the mathematical sciences and provides a forum for the discussion of new directions in mathematical education (ISSN 0197-3622). + +Periodical rate postage paid at Boston, MA and at additional mailing offices. + +# Send address changes to: info@comap.com + +COMAP, Inc., Suite 3B, 175 Middlesex Tpke., Bedford, MA, 01730 © Copyright 2007 by COMAP, Inc. 
All rights reserved. + +# Vol. 28, No. 2 2007 + +# Table of Contents + +# Editorial + +Write Your Own Contest Entry Paul J. Campbell 93 + +# Special Section on the ICM + +Results of the 2007 Interdisciplinary Contest in Modeling Chris Arney 99 + +Optimizing the Effectiveness of Organ Allocation Matthew Rognlie, Peng Shi, and Amy Wen 117 + +Analysis of Kidney Transplant System Using Markov Process Models Jeffrey Y. Tang, Yue Yang, and Jingyuan Wu 139 + +Judges' Commentary: The Outstanding Kidney Exchange Papers Chris Arney and Kathleen Crowley 159 + +Practitioner's Commentary: The Outstanding Kidney Exchange Papers Sommer Gentry 167 + +Author's Commentary: The Outstanding Kidney Exchange Papers Paul J. Campbell 173 + +![](images/6dbf1ae03289bb102df8133f3bc10c11f6017950408ecabb5b84c0315f5522e6.jpg) + +# Editorial + +# Write Your Own Contest Entry + +Paul J. Campbell + +Dept. of Mathematics and Computer Science + +Carl Mendelson + +Dept. of Geology + +Yaffa Grossman + +Dept. of Biology + +Beloit College + +700 College St. + +Beloit, WI 53511-5595 + +campbell@beloit.edu + +# Introduction + +This year's ICM was marred by the disqualification of two teams. Their papers would have been judged Outstanding—except that the papers weren't their work. The papers included numerous entire paragraphs from other sources without any acknowledgment. The teams presented as their own the work of others: They utterly failed to distinguish where the sources' words ended and the team's began. + +# The Rule + +The very first contest rule for the ICM (and the MCM) is + +1. Teams may use any inanimate source of data or materials—computers, software, references, web sites, books, etc., however all sources used must be credited. Failure to credit a source will result in a team being disqualified from the competition. + +# What to Do + +We reproduce below advice and examples that we give to our classes, modified slightly for ICM/MCM teams. 
Further examples and advice are offered by Hacker [2007].

When you research a topic for a paper, project, or presentation, you collect information from books, journal and magazine articles, lecture notes, Web pages, and other sources.

You then construct your own statements about a topic, and occasionally you may want to use direct quotations from a text. Either way, there must be

- a citation: the source of the information must be acknowledged in the text of your work, and
- a reference: bibliographic information must be provided at the end of the document or in a footnote, depending on the style book used in the discipline.

# Some Guidelines

As you work on a topic, ideas initially provided by an author or colleague become very familiar. Often they become so familiar that you may begin to think of them as your own! To avoid omissions or errors in acknowledging the sources of ideas, take thorough notes as you research. These notes should include bibliographic information and page numbers (or bookmarked URLs) for finding the information again.

Here are some guidelines:

- Widely known facts do not need attribution ("The U.S. population has increased substantially over the last 50 years.")
- Your references do not need to include sources that are not cited in your paper.
- Lines of reasoning (such as the outline of a mathematical model) taken or adapted from others must be credited, even if you do not use any wording from the source.
- All data sets, equations, figures, photos, graphs, and tables must be credited with their sources.
- It is OK to quote a source directly. To do that correctly,
  - either use quotation marks or (particularly for a long quotation) indent and set off the quotation from the rest of the text,
  - cite the source (including page number),
  - make sure you include among your references the full bibliographic data for the reference, and
  - get the quotation right.
- It's OK to summarize an author's thoughts.
You still must cite the source, but in addition you must avoid close paraphrases. It is not acceptable to adopt an author's phrases, sentences, or sentence structure, even if you substitute synonyms or modify a few words. Use your own words and your own syntax. The best way to do that is not to have the source text in front of you when you write. + +# Instructive Examples + +Here is an excerpt from Benton [1989, 4]: + +The fin-back Dimetrodon was able to keep warm by orienting its "sail" perpendicular to the direction of sunlight. + +The following examples should help you understand the nature of plagiarism. All examples use the citation style of this Journal, which is the publication outlet for Outstanding ICM/MCM papers. Further style details of the Journal are at Campbell [2007]. (Of course, if you are preparing a non-contest paper for a different journal, you should acquaint yourself with its style policy and follow that.) + +1. The fin-back Dimetrodon was able to keep warm by orienting its "sail" perpendicular to the direction of sunlight [Benton 1989]. + +Why is this plagiarism? Since the statement is a direct quotation, attribution (to Benton) is not enough; you also need quotation marks (and many style manuals require you to cite the page number). Here's the proper way to write this example: + +"The fin-back Dimetrodon was able to keep warm by orienting its 'sail' perpendicular to the direction of sunlight" [Benton 1989, 4]. + +2. Benton [1989] claimed that the fin-back Dimetrodon was able to keep warm by orienting its "sail" perpendicular to the direction of sunlight. + +This is properly attributed to Benton, but it still is plagiarism. Tell why. + +3. The sail-back Dimetrodon could keep warm by orienting her "sail" perpendicular to the sunlight. + +There is no attribution. This is plagiarism, despite alteration of a few words. + +4. The sail-back Dimetrodon could keep warm by orienting her "sail" perpendicular to the sunlight [Benton, 1989]. 
This is almost exactly the same as 3). This time, there is attribution of the source, though it is not clear that the twice-used "apt term" of "sail" is Benton's and not the writer's. Also, although a few words have been changed from the original, the sentence structure is identical. Of course, the writer could continue to change the words until the result was almost completely different from the original. Hacker [2007] clearly indicates that this example is plagiarism.

# How to Avoid Plagiarism

Develop your own approach. Don't rely on another author to dictate the organization of your report. If you adopt that organization, you might be tempted to adopt the sentence structure as well, and maybe even the wording—and this can lead to plagiarism. How do you develop your own approach?

- read about your subject,
- develop an outline,
- establish a set of topics to discuss,
- take notes from sources, and
- write those notes on index cards, one per topic.

You'll end up with a collection of note cards, each of which has ideas from five or six authors. You will then see connections among the ideas of a number of different authors; these new connections will allow you to express your ideas (and those of the authors) from a unique viewpoint, a viewpoint that requires a sentence structure and a vocabulary that differ from those of the original author(s). One way to do this is to read one of your note cards, then turn it over and write your own thoughts on the matter.

For example, suppose that your paper is to have a section on the thermal physiology of mammals and mammal-like reptiles:

Mammals are truly warm-blooded: they have specific physiologic mechanisms that maintain their body within a narrow range of temperatures. Mammal-like reptiles probably exploited a variety of strategies; for example, Dimetrodon may have used its "sail" as a thermoregulatory device [Benton 1989, 4].
+ +Here the statement about warm-bloodedness in mammals does not need attribution—warm-bloodedness in mammals is common knowledge. In addition, the language has been changed substantially, so quotation marks are not necessary. However, the specific point about Dimetrodon needs attribution, particularly since the word "sail" is an "apt term" that you did not invent but is Benton's. + +# Application to the ICM/MCM + +The contest rule is simple and explicit. Given the unpleasant circumstances this year, however, the Contest Director will review it with a revision in mind to urge team members to review guidelines such as those given above. An important new policy in the ICM/MCM will be: + +At the discretion of the judges, papers will be checked for originality of content and proper attribution of sources used. Any paper with an unattributed direct quotation or paraphrase, uncredited line of reasoning from another source, or uncredited figure or table will be disqualified. The team advisor will be informed of the disqualification, and the plagiarism policy of the institution may be invoked. + +In particular, all citations should include specific page numbers or URLs in the reference. Any references obtained electronically should specifically state so and include the exact URL or other source reference (not just say, for example, the generic page http://stats.bls.gov). For example, if your source is a journal article but you acquired it electronically (e.g., finding it on the Web or through JSTOR or another periodical database), you should give both the journal reference and the page-specific URL; doing so is an aid to the reader in locating a copy, which is the purpose of full bibliographic information in references. + +# References + +Benton, M.J. 1989. [This reference is fictitious!] + +Campbell, Paul J. 2007. Guide for authors. The UMAP Journal 28 (1): 91-92. + +Hacker, Diana. 2007. A Writer's Reference Sixth Edition with Writing in the Disciplines. 
New York: Bedford/St. Martin's.

# About the Authors

![](images/f07231cdf6689082e3d7492ff33200f615f6b2e786bb2afb2092bb86bab29b55.jpg)

Paul Campbell is Professor of Mathematics and Computer Science at Beloit College, where he was Director of Academic Computing from 1987 to 1990. He has been the editor of The UMAP Journal of Undergraduate Mathematics and Its Applications since 1984.

Carl Mendelson is Professor of Geology at Beloit College. He teaches Earth history, paleontology, and planetary geology. He is interested in early life on Earth and other planets.

![](images/4f2e3cca453a5ae7724ae5f24e70915eacb41e3c326aaf6c380983c5ebe4b7da.jpg)

Yaffa Grossman is Associate Professor of Biology at Beloit College and has chaired the Environmental Studies program since 2003. She received the James R. Underkofler Award for Excellence in Undergraduate Teaching in 2005. She is an author of PEACH, a computer simulation model of peach tree growth and development. (Photo by Greg Anderson.)

# Modeling Forum

# Results of the 2007 Interdisciplinary Contest in Modeling

Chris Arney, ICM Co-Director

Division Chief, Mathematical Sciences Division

Program Manager, Cooperative Systems

Army Research Office

PO Box 12211

Research Triangle Park, NC 27709-2211

David.Arney1@arl.army.mil

# Introduction

A record total of 273 teams from five countries spent a weekend in February working on an applied modeling problem involving managing and promoting organ transplantation in the 9th Interdisciplinary Contest in Modeling (ICM). This year's contest began on Thursday, Feb. 8 and ended on Monday, Feb. 12, 2007. During that time, teams of up to three undergraduate or high school students researched, modeled, analyzed, solved, wrote, and submitted their solutions to an open-ended interdisciplinary modeling problem involving public health policy concerning organ transplants.
After the weekend of challenging and productive work, the solution papers were sent to COMAP for judging. Two top papers, judged to be Outstanding by the expert panel of judges, appear in this issue of The UMAP Journal.

COMAP's Interdisciplinary Contest in Modeling and its sibling contest, the Mathematical Contest in Modeling, involve students working in teams to find and report a solution to an open problem. Centering its educational philosophy on mathematical modeling, COMAP supports the use of mathematical tools to explore real-world problems. It serves society by developing students as problem solvers, preparing them to become better informed citizens, contributors, consumers, workers, and community leaders.

This year's public-health problem was challenging in its demand that teams utilize many aspects of science and mathematics in their modeling and analysis. The problem required teams to understand the basic science of organ transplantation and to model the complex policy issues associated with the health and psychological aspects of organ failure and treatment, in order to advise the Congress and the Department of Health Services on how to manage the nation's transplant network. To accomplish their tasks, the students had to consider many difficult and complex issues. Political, social, psychological, and technological issues had to be considered and analyzed, along with several challenging requirements needing scientific and mathematical analysis. The problem also included the ever-present requirements of the ICM for thorough data analysis, research, creativity, approximation, precision, and effective communication. The author of the problem was Paul J. Campbell, Professor of Mathematics and Computer Science at Beloit College. The problem came from a seminar of his on the mathematics behind the popular TV series Numb3rs. A commentary from Dr. Campbell appears in this issue of The UMAP Journal.
All members of the 273 competing teams are to be congratulated for their excellent work and dedication to scientific modeling and problem solving. The judges remarked that this year's problem was challenging, with many interdisciplinary tasks.

Next year, we will continue with the public health theme for the contest problem. Teams preparing for the 2008 contest should consider reviewing interdisciplinary topics in the area of public health modeling and analysis.

Start-up funding for the ICM was provided by a grant from the National Science Foundation (through Project INTERMATH) and COMAP. Additional support is provided by the Institute for Operations Research and the Management Sciences (INFORMS) and IBM.

# The Kidney Exchange Problem

# Transplant Network

Despite the continuing and dramatic advances in medicine and health technology, the demand for organs for transplantation drastically exceeds the number of donors. To help address this situation, the U.S. Congress passed the National Organ Transplant Act in 1984, establishing the Organ Procurement and Transplantation Network (OPTN) to match organ donors to patients with organ needs. Even with all this organizational technology and service in place, there are nearly 94,000 transplant candidates in the U.S. waiting for an organ transplant, and this number is predicted to exceed 100,000 very soon. The average wait time exceeds three years—double that in some areas, such as large cities. Organs for transplant are obtained either from a cadaver queue or from living donors. The keys for the effective use of the cadaver queue are cooperation and good communication throughout the network. The good news is that the system is functioning, and more and more donors (alive and deceased) are identified and used each year, with record numbers of transplants taking place every month. The bad news is that the candidate list grows longer and longer.
Some people think that the current system with both regional and national aspects is headed for collapse with consequential failures for some of the neediest patients. Moreover, fundamental questions remain: Can this network be improved and how do we improve the effectiveness of a complex network like OPTN? Different countries have different processes and policies; which of these work best? What is the future status of the current system? + +# Task 1 + +For a beginning reference, read the OPTN Website (http://www.optn.org) with its policy descriptions and data banks (http://www.optn.org/data and http://www.optn.org/latestData/viewDataReports.asp). Build a mathematical model for the generic U.S. transplant network(s). This model must be able to give insight into the following: Where are the potential bottlenecks for efficient organ matching? If more resources were available for improving the efficiency of the donor-matching process, where and how could they be used? Would this network function better if it was divided into smaller networks (for instance at the state level)? And finally, can you make the system more effective by saving and prolonging more lives? If so, suggest policy changes and modify your model to reflect these improvements. + +# Task 2 + +Investigate the transplantation policies used in a country other than the U.S. By modifying your model from Task 1, determine if the U.S. policy would be improved by implementing the procedures used in the other country. As members of an expert analysis team (knowledge of public health issues and network science) hired by Congress to perform a study of these questions, write a one-page report to Congress addressing the questions and issues of Task 1 and the information and possible improvements you have discovered from your research of the different country's policies. Be sure to reference how you used your models from Task 1 to help address the issues. 
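As a flavor of the kind of model Task 1 asks for, the waiting list can be treated as a simple stock-and-flow balance. The sketch below is purely illustrative: every rate (`added_per_year`, `transplants_per_year`, `removed_per_year`) is an invented placeholder, not OPTN data.

```python
# Toy stock-and-flow model of a transplant waiting list, in the spirit of
# Task 1. All rates here are invented placeholders, not OPTN data; the
# point is only that the list grows whenever additions outpace removals.

def project_waiting_list(start, years, added_per_year=30_000,
                         transplants_per_year=16_000,
                         removed_per_year=7_000):
    """Yearly projection; removals cover deaths and delisting."""
    sizes = [start]
    for _ in range(years):
        sizes.append(sizes[-1] + added_per_year
                     - transplants_per_year - removed_per_year)
    return sizes

# Starting near the 94,000 candidates cited above, the list passes
# 100,000 within a year under these assumed rates.
print(project_waiting_list(94_000, 2))  # -> [94000, 101000, 108000]
```

A real Task 1 model would disaggregate these flows by region and organ type to expose the bottlenecks the problem asks about.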
# Focusing on Kidney Exchange

Kidneys filter blood, remove waste, make hormones, and produce urine. Kidney failure can be caused by many different diseases and conditions. People with end-stage kidney disease face death, dialysis (at over $60,000/yr), or the hope for a kidney transplant. A transplant can come from the cadaver of an individual who agreed to donate organs after death, or from a live donor. In the U.S., about 68,000 patients are waiting for a kidney from a deceased donor, while each year only 10,000 are transplanted from cadavers and 6,000 from living individuals (usually relatives of the patients). Hence the median wait for a matching kidney is three years—unfortunately, some needy patients do not survive long enough to receive a kidney.

There are many issues involved in kidney transplantation—the overall physical and mental health of the recipient, the financial situation of the recipient (insurance for transplant and post-operation medication), and donor availability (is there a living donor willing to provide a kidney?). The transplanted kidney must be of a compatible ABO blood type. The 5-year survival of the transplant is enhanced by minimizing the number of mismatches on six HLA markers in the blood. At least 2,000 would-be-donor/recipient pairs are thwarted each year because of blood-type incompatibility or poor HLA match. Other sources indicate that over 6,000 people on the current waiting list have a willing but incompatible donor. This is a significant loss to the donor population and worthy of consideration when making new policies and procedures.

An idea that originated in Korea is that of a kidney exchange system, which can take place either with a living donor or with the cadaver queue.
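The compatibility constraints above (ABO blood type plus the count of HLA mismatches) can be sketched in a few lines. The ABO table is the standard blood-type donation rule; the marker encoding is an invented simplification, not clinical code.

```python
# Minimal sketch of the compatibility constraints described above.
# ABO_OK is the standard blood-type rule; the HLA encoding (one character
# per marker) is an invented simplification for illustration only.

ABO_OK = {  # donor type -> set of recipient types the donor can give to
    "O": {"O", "A", "B", "AB"},
    "A": {"A", "AB"},
    "B": {"B", "AB"},
    "AB": {"AB"},
}

def abo_compatible(donor, patient):
    """True when the donor's blood type can go to the patient."""
    return patient in ABO_OK[donor]

def hla_mismatches(donor_markers, patient_markers):
    """Count of the six HLA markers on which donor and patient differ."""
    return sum(d != p for d, p in zip(donor_markers, patient_markers))
```

Under this encoding, a type-O donor suits any patient, while fewer mismatches on the six markers predicts better 5-year graft survival.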
One exchange is paired-kidney donation, where each of two patients has a willing donor who is incompatible, but each donor is compatible with the other patient; each donor donates to the other patient, usually in the same hospital on the same day. Another idea is list paired donation, in which a willing donor, on behalf of a particular patient, donates to another person waiting for a cadaver kidney; in return, the patient of the donor-patient pair receives higher priority for a compatible kidney from the cadaver queue. Yet a third idea is to expand the paired kidney donation to 3-way, 4-way, or a circle ( $n$ -paired) in which each donor gives to the next patient around the circle. On November 20, 2006, 12 surgeons performed the first-ever 5-way kidney swap at Johns Hopkins Medical Facility. None of the intended donor-recipient transplants were possible because of incompatibilities between the donor and the originally intended recipient. At any given time, there are many patient-donor pairs (perhaps as many as 6,000) with varying blood types and HLA markers. Meanwhile, the cadaver queue receives kidneys daily and is emptied daily as the assignments are made and the transplants performed. + +# Task 3 + +Devise a procedure to maximize the number and quality of exchanges, taking into account the medical and psychological dynamics of the situation. Justify in what way your procedure achieves a maximum. Estimate how many more annual transplants your procedure will generate, and the resulting effect on the waiting list. + +# Strategies + +Patients can face agonizing choices. For example, suppose a barely compatible—in terms of HLA mismatches—kidney becomes available from the cadaver queue. Should they take it or wait for a better match from the cadaver queue or from an exchange? In particular, a cadaver kidney has a shorter half-life than a live donor kidney. 
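The 2-way, 3-way, and $n$-paired exchanges described above amount to finding disjoint cycles in a directed compatibility graph over donor-patient pairs. Below is a minimal sketch for the 2-way case only, using the standard ABO table (repeated so the sketch stands alone) and invented example pairs; a real procedure of the kind Task 3 asks for would also weigh HLA match quality and search longer cycles.

```python
# Illustrative sketch: each incompatible donor-patient pair is a node; a
# paired (2-way) exchange is a 2-cycle in the compatibility graph. A greedy
# pass picks disjoint 2-cycles. Example data is hypothetical; only ABO
# compatibility is checked.

ABO_OK = {  # donor blood type -> recipient types it can donate to
    "O": {"O", "A", "B", "AB"},
    "A": {"A", "AB"},
    "B": {"B", "AB"},
    "AB": {"AB"},
}

def two_way_exchanges(pairs):
    """pairs: list of (donor_type, patient_type), one per incompatible
    donor-patient pair. Returns index pairs that can swap kidneys."""
    matched, result = set(), []
    for i in range(len(pairs)):
        if i in matched:
            continue
        for j in range(i + 1, len(pairs)):
            if j in matched:
                continue
            di, pi = pairs[i]
            dj, pj = pairs[j]
            # donor i must suit patient j, and donor j must suit patient i
            if pj in ABO_OK[di] and pi in ABO_OK[dj]:
                matched.update((i, j))
                result.append((i, j))
                break
    return result

# Pair 0 (A donor, B patient) and pair 1 (B donor, A patient) can swap.
print(two_way_exchanges([("A", "B"), ("B", "A"), ("AB", "O")]))  # -> [(0, 1)]
```

Greedy matching is not optimal in general; maximizing the number of 2-way exchanges is a maximum-matching problem, and allowing longer cycles enlarges the solution space further, as the 5-way Johns Hopkins swap illustrates.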
# Task 4

Devise a strategy for a patient to decide whether to take an offered kidney or to even participate in a kidney exchange. Consider the risks, alternatives, and probabilities in your analysis.

# Ethical Concerns

Transplantation is a controversial issue with both technical and political issues that involve balancing what is best for society with what is best for the individual. Criteria have been developed very carefully to try to ensure that people on the waiting list are treated fairly, and several of the policies try to address the ethical concerns of who should go on the list or who should come off. Criteria for getting on or coming off the list can include diagnosis of a malignant disease, HIV infection or AIDS, severe cardiovascular disease, a history of non-compliance with prior treatment, or poorly controlled psychosis. Criteria used in determining placement priority include time on the waiting list, the quality of the match between donor and recipient, and the physical distance between the donor and the recipient. As a result of recent changes in policy, children under 18 years of age receive priority on the waiting list and often receive a transplant within weeks or months of being placed on the list. The United Network for Organ Sharing Website recently (Oct. 27, 2006) showed the ages of waiting patients as:

Under 18: 748

18 to 34: 8,033

35 to 49: 20,553

50 to 64: 28,530

65 and over: 10,628

One ethical issue of continual concern is how much emphasis and priority to place on age in order to increase the overall living time saved through donations. From a statistical standpoint, since age appears to be the most important factor in predicting length of survival, some believe kidneys are being squandered on older recipients.
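The placement criteria just listed (waiting time, match quality, distance, pediatric priority) are often combined into a single point score. The sketch below uses invented weights purely to illustrate the mechanism; actual OPTN allocation points differ.

```python
# Illustrative linear scoring rule over the placement criteria listed above.
# Every weight here is invented for the sketch; real allocation point
# systems are set by policy and differ from this.

def priority_score(years_waiting, hla_mismatches, distance_miles, age):
    score = 10.0 * years_waiting          # longer wait -> higher priority
    score += 5.0 * (6 - hla_mismatches)   # 0-6 mismatches; fewer is better
    score -= 0.01 * distance_miles        # nearby organs preferred
    if age < 18:
        score += 50.0                     # pediatric priority, per the policy
    return score

# With these weights, a child waiting half a year outranks an adult
# waiting three years with the same match and distance.
child = priority_score(0.5, 2, 100, age=12)
adult = priority_score(3.0, 2, 100, age=45)
```

The ethical debate in the surrounding text is, in effect, a debate about what these weights should be, and whether age (or expected years of benefit) belongs in the score at all.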
# Political Issues

Regionalization of the transplant system has produced political ramifications (e.g., someone may desperately need a kidney and be quite high on the queue, but his or her deceased neighbor's kidney can still go to an alcoholic drug dealer 500 miles away in a big city). Doctors living in small communities, who want to do a good job in transplants, need continuing experience by doing a minimum number of transplants per year. However, the kidneys from these small communities frequently go to the hospitals in the big city, and therefore the local doctors cannot maintain their proficiency. This raises the question: Perhaps transplants should be performed only in a few large centers, by a few expert and experienced surgeons? But would that be a fair system, and would it add to or detract from system efficiency?

Many other ethical and political issues are being debated. Some of the current policies can be found at http://www.unos.org/policiesandbylaws/policies.asp?resources=true. For example, recent laws have been passed in the U.S. that forbid selling organs or mandating their donation, yet there are many agencies advocating for donors to receive financial compensation for their organs. The state of Illinois has a new policy that assumes everyone desires to be an organ donor (presumed consent), and people must opt out if they do not. The Department of Health and Human Services Advisory Committee on Organ Transplantation is expected to recommend that all states adopt policies of presumed consent for organ donation. The final decision on new national policies rests with the Health Resources and Services Administration within the U.S. Department of Health and Human Services.

# Task 5

Based on your analysis, do you recommend any changes to these criteria and policies? Discuss the ethical dimensions of your recommended exchange procedure and your recommended patient strategy (Tasks 3 and 4).
Rank order the criteria you would use for priority and placement, as above, with rationale as to why you placed each where you did. Would you consider allowing people to sell organs for transplantation? Write a one-page paper to the Director of the U.S. Health Resources and Services Administration with your recommendations. + +# Task 6 + +From the potential donor's perspective, the risks in volunteering involve assessing the probability of success for the recipient, the probability of survival of the donor, the probability of future health problems for the donor, the probability of future health risks (such as failure of the one remaining kidney), and the post-operative pain and recovery. How do these risks and others affect the decision of the donor? How do perceived risks and personal issues (phobias, irrational fears, misinformation, previous experiences with surgery, level of + +altruism, and level of trust) influence the decision to donate? If entering a list paired network rather than a direct transplant to the relative or friend, does the size $n$ of the $n$ -paired network have any effect on the decision of the potential donor? Can your models be modified to reflect and analyze any of these issues? Finally, suggest ways to develop and recruit more altruistic donors. + +(Note: None of the data files, including the data in the appendices, are included in this article. They are available on the COMAP Website, at http://www.comap.com/undergraduate/contestms/mcm/contestss/2007/problems/.) + +# The Results + +The 273 solution papers were coded at COMAP headquarters so that names and affiliations of the authors were unknown to the judges. Each paper was then read preliminarily by "triage" judges at the U.S. Military Academy at West Point, NY. At the triage stage, the summary, the model description, and overall organization are the primary elements in judging a paper. Final judging by a team of modelers, analysts, and subject-matter experts took place in March. 
The judges classified the 273 submitted papers as follows: + +
| | Outstanding | Meritorious | Honorable | Successful | Total |
| --- | --- | --- | --- | --- | --- |
| Kidney Exchange | 2 | 42 | 169 | 58 | 273 |
+ +The two papers that the judges designated as Outstanding appear in this special issue of The UMAP Journal, together with commentaries by the author and the final judges. We list those two Outstanding teams and the Meritorious teams (and advisors) below. The complete list of all participating schools, advisors, and results is provided in the Appendix. + +# Outstanding Teams + +Institution and Advisor + +Team Members + +"Optimizing the Effectiveness of Organ Allocation" + +Duke University + +Durham, NC + +David Kraines + +Matthew Rognlie + +Peng Shi + +Amy Wen + +"Analysis of Kidney Transplant System Using Markov Process Models" + +Princeton University + +Princeton, NJ + +Ramin Takloo + +Jeff Tang + +Yue Yang + +Jingyuan Wu + +# Meritorious Teams (38) + +Anhui University, China (Mingsheng Chen) + +Asbury College, Wilmore, KY (Duk Lee) + +Beijing Jiao Tong University, China (4 teams) (Yuchuan Zhang) (Zhouhong Wang) + +(Hong Zhang--2 teams) + +Beijing Language and Culture University, China (Rou Song) + +Beijing University of Posts and Telecommunications, China (Hongxiang Sun) + +Berkshire Community College, Pittsfield, MA (Andrew Miller) + +Cambridge University, England (Thomas Duke) + +Chengdu University of Technology, China (Yuan Yong) + +Duke University, Durham, NC (2 teams) (Anita Layton) (Fernando Schwartz) + +East China University of Science & Technology, China (Lu Xiwen) + +Fudan University, China (Yuan Cao) + +Harbin Institute of Technology, China (2 teams) (Shouting Shang) (Jiqyun Shao) + +Hunan University, China (Chuanxiu Ma) + +Nanjing University, China (Xu LiWei) + +Olin College, Needham, MA (Burt Tilley) + +Peking University Health Science Center, China (Zhang Xia) + +PLA University of Science and Technology, China (2 teams) (Yao Kui) (Zheng Qin) + +Rice University, Houston, TX (Robert Hardt) + +South China University of Technology, China (Shen-Quan) + +Southeast University, China (Zhiqiang Zhang) + +Shandong University, China (3 teams) (Baodong Liu) 
(Jianliang Chen) (Huang Shuxiang) + +Shandong University at Weihai, China (Huaxiang Zhao) + +United States Military Academy, West Point, NY (Joseph Lindquist) + +University of Electronics Science, China (3 teams) (Du Hongfei) (Lihui Wang) (Li Mingqi) + +University of Geosciences, China (2 teams) (Guangdong Huang - 2 teams) + +Xidian University, China (2 teams) (Xuewen Mu) (Xiaogang Qi) + +Zhejiang Gongshang, University, China (Ding Zhengzhong) + +Zhejiang University, China (Yong Wu) + +Zhejiang University City College, China (Waibin Huang) + +Zuhai University, China (Zhiwei Wang) + +# Awards and Contributions + +Each participating ICM advisor and team member received a certificate signed by the Contest Directors and the Head Judge. Additional awards were presented to the Princeton University team from the Institute for Operations Research and the Management Sciences (INFORMS). + +# Judging + +Contest Directors + +Chris Arney, Division Chief, Mathematical Sciences Division, Army Research Office, Research Triangle Park, NC + +Joseph Myers, Dept. of Mathematical Sciences, U.S. Military Academy, West Point, NY + +Associate Director + +Richard Cassady, Dept. of Industrial Engineering, University of Arkansas, Fayetteville, AR + +Judges + +Kathleen Crowley, Dept. of Psychology, College of Saint Rose, Albany, NY + +Michael Matthews, Dept. of Behavioral Sciences, U.S. Military Academy, West Point, NY + +Shawnie McMurran, Dept. of Mathematics, California State University, San Bernardino, CA + +Bret McMurran, Dept. of Economics, Chaffey Community College, Rancho Cucamonga, CA + +Frederick Rickey, Dept. of Mathematical Sciences, U.S. Military Academy, West Point, NY + +Frank Wattenberg, Dept. of Mathematical Sciences, U.S. Military Academy, West Point, NY + +Triage Judges + +Dept. of Mathematical Sciences, U.S. 
Military Academy, West Point, NY: + +Eric Brown, Gabriel Costa, Jong Chung, Rachelle DeCoste, Amy Erickson, Keith Erickson, Edward Fuselier, Greg Graves, Michael Harding, Alex Heidenberg, Josh Helms, Anthony Johnson, Thomas Kastner, Jerry Kobylski, Ian McCulloh, Barbara Melendez, Thomas Messervey, Fernando Miguel, Jason Miseli, Joseph Myers, Mike Phillips, Todd Retchless, Wiley Rittenhouse, Mick Smith, Heather Stevenson, Rodney Sturdivant, Krista Watts, Robbie Williams, Erica Slate Young + +Dept. of Behavioral Sciences, U.S. Military Academy, West Point, NY: + +Mike Matthews + +# Acknowledgments + +We thank: + +- the Institute for Operations Research and the Management Sciences (INFORMS) for its support in judging and providing prizes for the winning team; +- IBM for their support for the contest; +- all the ICM judges and ICM Board members for their valuable and unflagging efforts; +- the staff of the U.S. Military Academy, West Point, NY, for hosting the triage and final judgings. + +# Cautions + +To the reader of research journals: + +Usually a published paper has been presented to an audience, shown to colleagues, rewritten, checked by referees, revised, and edited by a journal editor. Each of the student papers here is the result of undergraduates working on a problem over a weekend; allowing substantial revision by the authors could give a false impression of accomplishment. So these papers are essentially au naturel. Light editing has taken place: minor errors have been corrected, wording has been altered for clarity or economy, style has been adjusted to that of The UMAP Journal, and the papers have been edited for length. Please peruse these student efforts in that context. + +To the potential ICM Advisor: + +It might be overpowering to encounter such output from a weekend of work by a small team of undergraduates, but these solution papers are highly atypical. 
A team that prepares and participates will have an enriching learning experience, independent of what any other team does.

# Editor's Note

As usual, the Outstanding papers were longer than we can accommodate in the Journal, so space considerations forced me to edit them for length. It was not possible to include all of the many tables and figures.

In editing, I endeavored to preserve the substance and style of the papers, especially the approach to the modeling.

—Paul J. Campbell, Editor

# Appendix: Successful Participants

KEY:

P = Successful Participation

H = Honorable Mention

M = Meritorious

O = Outstanding (published in this special issue)
INSTITUTIONCITYADVISORC
CALIFORNIA
Harvey Mudd CollegeClaremontDarryl YongH,P
Susan MartonosiH
COLORADO
University of ColoradoColorado SpringsRadu C. CascavalP
DenverGary A. OlsonH
ILLINOIS
Monmouth College (Phys)MonmouthChristopher G. FasanoP
IOWA
Simpson College (Bio & Env Sci)IndianolaRyan RehmeierH
(Chem & Phys)Werner S. KollnH
(CS)Paul CravenH
KENTUCKY
Asbury CollegeWilmoreDuk LeeM,H
MASSACHUSETTS
Berkshire Comm. CollegePittsfieldAndrew S. MillerM
Franklin W. Olin College of Eng.NeedhamBurt S. TilleyM,P
Massachusetts Institute of Tech.CambridgeEric LaugaH
MINNESOTA
Bemidji State Univ.BemidjiRichard SpindlerP
MONTANA
Carroll CollegeHelenaHolly S. ZulloH
Kelly S. ClineH
Mark ParkerP
NEW JERSEY
Princeton UniversityPrincetonRamin Takloo-BighashO
NEW YORK
United States Military AcademyWest PointJoseph LindquistM
Randal HickmanH
NORTH CAROLINA
Duke UniversityDurhamAnita LaytonM
David KrainesO
Fernando SchwartzM
OHIO
Youngstown State UniversityYoungstownG. Jay KernsH
Paddy TaylorP
Angie SpalsburyH
TEXAS
Rice UniversityHoustonRobert M. HardtM
VIRGINIA
Godwin HS Sci., Math., and Tech. CtrRichmondAnn W. SebrellH
James Madison UniversityHarrisonburgAnthony TongenH
Hasan N. HamdanH
Maggie Walker Governor's SchoolRichmondHarold HoughtonH
WASHINGTON
Pacific Lutheran UniversityTacomaBryan C. DornerP
Mei ZhuH
CHINA
Anhui
Anhui University (Stat)HefeiChen MingshengM
Zhou LigangH
Hefei University of TechnologyHefeiSu HuamingH
Huang YouduH
Univ. of Sci. and Tech. of China (Foreign Lang) (Stat)HefeiZhang ManjunH
Cui WenquanP
Beijing
Beihang University (Eng)BeijingLi ShangzhiH
Beijing Institute of TechnologyBeijingXu HoubaoH
Li XueweniH
Yan Xiao-XiaH
Beijing JiaoTong University (Comp Math) (CS)Wang BingtuanP
Yu YongguangP
Zhang HongM,M
(Econ)Li HuiH,P
(Electronics)You ChengchaoH
(Info)Zhang YuchuanM
(Bus)Zhang ShangliH
(Phys)Huang XiaomingH
(Stat)Li WeidongH
(Stat)Wang ZhouhongM
(Traffic Eng)Wang JunH
Beijing Language and Culture Univ. (Info)BeijingZhao XiaoxiaH
(Bus)Xun EndongH
Song RouM
Beijing U. of Aero. and Astro. (Eng)BeijingFeng WeiH
Beijing University of Chemical Tech.BeijingLiu DaminH,H
(Chem)Li WeiH
Beijing University of Posts and Telecomm.BeijingSun HongxiangM
(CS)He ZuguoH
(Info)Zhang WenboP
(Phys)Ding JinkouH
Zhou QingP
(Telecomm Eng)Yuan JianhuaP
Central University of Finance and EconomicsBeijingYin XianjunH
Fan XiaomingH,P
China University of Geosci. Technology (Info)BeijingXing YongliH
Huang GuangdongM,M
North China Electric Power UniversityBeijingQiu QiRongP
Peking University (Biomath)BeijingLiu YulongH,H
An JinbingP
Zhang XiaM
(Econ)Dong ZhiyongH
(Electr Eng)Zhu LidaH,H
(Eng)Huang KefuH
Dong ZijingH,H
(Finance)Zhang ShengpingH
(Med Sci)Cao BoH
Tang ZhiyuH
Renmin University of China (Stat)BeijingXiao YuguP
Tsinghua UniversityBeijingXie JinxingH
Lu MeiH,H
Chongqing Chongqing University (Info) (Stat)ChongqingWen LuoshengP
Xiao JianH
Duan ZhengminH
Rong TengzhongH
Guangdong Jinan University (CS)GuangzhouHu DaiqiangH
Fan SuohaiP
Shi Zhuang LuoH
South China Normal Univ. (Finance) (Info)GuangzhouLi HuNanH
Yu JianHuaP
South China Univ. of Tech.GuangzhouLiu Shen-QuanM
Qin Yong-AnH
(CS)Liang Man-FaH
Sun Yat-senGuangzhouFan ZhuJunH
Feng GuoCanH
(CS)Chen ZePengH
Zhuhai College of Jinan UniversityZhuhaiZhang YuanbiaoH
(Packaging Eng)Wang ZhiweiM
Hebei
Hebei University of TechnologyTianjinYu XinkaiH
North China Electric Power U.BaodingShi HuifengH
Li JingganguH
Heilongjiang
Harbin Engineering Univ.HarbinZhu LeiH,H
Luo YueshengH
(Info)Yu FeiP
Zhang XiaoweiH
Harbin Institute of TechnologyHarbinShang ShoutingM
Meng XianyuH
Shao JiqyunM,H
(Env Science & Eng)Tong ZhengH
Harbin University of Sci. and Tech.HarbinLi DongmeiP
Chen DongyanP
Tian GuangyueH
Wang ShuzhongP
Heilongjiang Inst. of Sci. and Tech.HarbinZhang HongyanP
Yuan YanhuaP
Hubei
Huazhong Univ. of Sci. and Tech.WuhanHe NanzhongH
(CS)Ke ShiH
(Electr & Info Eng)Yan DongH
Wuhan Univ. (Remote Sensing)WuhanLuo ZhuangchuH
Hunan
Central South UniversityChangshaZheng ZhoushunH
Hu Zhaoming and Yi KunnanH
Hunan UniversityChangshaYan HuahuiP
Ma BolinP
(Info)Ma ChuanxiuM
(Op Res)Yang YufeiH
National U. of Defense Tech.ChangshaMao ZiyangH
(Phys)Yong LuoH
Inner Mongolia
Inner Mongolia University (Automation)HuHeHaoTeMa ZhuangH
Wang MeiH
Jiangsu
China University of Mining and Tech. (CS)XuzhouZhang XingyongH,P
Jiang ShujuanH
Nanjing UniversityNanjingMin KongH
Nanjing Univ. of Posts & Telecom.NanjingXu LiWeiM
Kong GaohuaH
Qiu ZhonghuaH
Nanjing University of Science & Tech.NanjingWei XiaoH
Huang ZhenyouH
PLA University of Sci. and Tech. (Command Automation)NanjingTian ZuoweiH
(Communication Eng) (Meteorology)Yao KuiM
Zheng QinM
Southeast UniversityNanjingZhang ZhiqiangM,H
Jia XingangH,H
Sun ZhizhongH
Jilin
Jilin UniversityChangchunLu XianruiH
(Comm Eng)Cao ChunlinoH
Liaoning
Dalian Maritime UniversityDalianYang ShuqinH
Zhang YunjieH,P
Dalian Nationalities University (CS & Eng)DalianLiu Xiang DongP
Wang Li MingP,P
(Dean's Ofc)Zhang Heng BoH
Ge Ren DongH
Guo QiangH
Ma Yu MeiP
Li Xiao NiuP
Liu Jian GuoP
(Innovation Ed)Bai Ri XiaH,P
Liu YanP
Dalian Navy AcademyDalianHuang Li weiP,P
Dalian UniversityDalianGang JiataiP
Liu ZixinH
Tan XinxinH
(Info)Liu GuangzhiP
Dalian University of TechnologyDalianCai YuP
Dai WanjiH
He MingfengH
(Innovation)Feng LinH
He MingfengP
Xi WanwuH
(Software)Zhe LiH
Jiang HeH
Xu ShengjunP
Shenyang Inst. of Aero Eng. (Sci)ShenyangZhu LimeiH
Liu WeifangH
(Info)Li WangH
Shaanxi
Northwest A&F UniversityYanglingWang JingminH,P
Northwestern Polytechnical U. (Phys)Xi'anLu QuanyiH
Xidian UniversityXi'anZhou ShuishengH
Mu XuewenM
(Comp Math)Qi XiaogangM
Xi'an Jiaotong UniversityXi'anDai YonghongH
Mei ChanglinH
Li JichengH
Shandong
Shandong UniversityJinanHuang ShuxiangH,P
Liu SiyuanH
Liu BaodongM,H
Huang ShuxiangM
Cui YuquanH
Liu BaodongH
Chen JianliangM
(Materials Sci)Ma GuolongP
(Info)WeihaiYang BingH,H
Zhao HuaxiangM,H
Shanghai
Donghua UniversityShanghaiYou SurongH
Chen JingchaoP
(Info)Li DeminH
East China Univ. of Sci. and Tech.ShanghaiLu XiwenM,H
Lu YuanhongP
Su ChunjieH
(Phys)Qian XiyuanH
Fudan UniversityShanghaiTan YongjiH
Cao YuanM
Cai ZhijieH
(Mech & Eng)Cui ShengH
Shanghai Finance UniversityShanghaiLiang YumeiH
(Insurance)Wang KeyanH
Shanghai Jiaotong UniversityShanghaiLiu XiaojunH
Shanghai Nanyang Model High School, +Class 10 Senior 2ShanghaiYang BeijingH
Shanghai Univ. of Finance & Econ. (Stat)ShanghaiWang XiaomingH
Sichuan
Chengdu University of Tech. (Info)ChengduYong YuanM
Univ. of Elec. Sci. and Tech. of China +(Biotech)ChengduDu HongfeiM,H
Li MingqiM,H
(Electr Tech)Wang LihuiM
(Info)Zhang YongH
Tianjin
Nankai University (Mgmnt Sci & Eng)TianjinHou WenhuaH
Zhejiang
Hangzhou Dianzi University (Info) +(Phys)HangzhouQiu ZheyongH
Zhang ZhifengH
Cheng ZongmaoH
Ningbo Inst. of Tech. of Zhejiang U.NingboTu LihuiP
Liu WeiH,H
Zhejiang Gongshang UniversityHangzhouDing ZhengzhongM,H
Zhou MinghuaH
(Info)HangzhouZhao HengP
Zhejiang Normal Univ. (CS)JinhuaHe GuolongH
Qu YoutianH
Sheng ZuxiangP
Zhejiang Sci-Tech UniversityHangzhouHan ShuguangP
Zhejiang U. City College (CS)HangzhouZhang HuizengH
Huang WaibinM
(Info)Kang XushengH
Zhejiang UniversityHangzhouYong WuM
Yang QifanP
Tan Zhiyi
Zhejiang Univ. of Tech. +(Jianxing College)HangzhouWu XuejunH
Zhuo WenxinH
FINLAND
Päivölä College of MathematicsTarttilaJanne Juho Eemil PuustelliP
INDONESIA
Institut Teknologi BandungBandungEdy SoewonoH
Agus Yodi GunawanH
UNITED KINGDOM
Cambridge University (Phys)CambridgeThomas A. DukeM
+ +# Editor's Note + +For team advisors from China, I have endeavored to list family name first. + +# Abbreviations for Organizational Unit Types (in parentheses in the listings) + +
(none)MathematicsM; Pure M; Applied M; Computing M; M and Computer Science; M and Computational Science; M and Information Science; M and Statistics; M, Computer Science, and Statistics; M, Computer Science, and Physics; Mathematical Sciences; Applied Mathematical and Computational Sciences; Natural Science and M; M and Systems Science; Applied M and Physics
BioBiologyB; B Science and Biotechnology; Biomathematics; Life Sciences
BusBusinessB; B Management; B and Management; B Administration
ChmChemistryC; Applied C; C and Physics; C, Chemical Engineering, and Applied C
CSComputerC Science; C and Computing Science; C Science and Technology; C Science and (Software) Engineering; Software; Software Engineering; Artificial Intelligence; Automation; Computing Machinery; Science and Technology of Computers
EconEconomicsE; E Mathematics; Financial Mathematics; E and Management; Financial Mathematics and Statistics; Management; Business Management; Management Science and Engineering
EngEngineeringCivil E; Electrical Eng; Electronic E; Electrical and Computer E; Electrical E and Information Science; Electrical E and Systems E; Communications E; Civil, Environmental, and Chemical E; Propulsion E; Machinery and E; Control Science and E; Mechanisms; Mechanical E; Electrical and Info E; Materials Science and E; Industrial and Manufacturing Systems E
InfoInformationI Science; I and Computation(al) Science; I and Calculation Science; I Science and Computation; I and Computer Science; I and Computing Science; I Engineering; I and Engineering; Computer and I Technology; Computer and I Engineering; I and Optoelectronic Science and Engineering
PhysPhysicsP; Applied P; Mathematical P; Modern P; P and Engineering P; P and Geology; Mechanics; Electronics
SciScienceS; Natural S; Applied S; Integrated S; School of S
SoftwareSoftware
StatStatisticsS; S and Finance; Mathematical S; Probability and S; S and Actuarial
# Optimizing the Effectiveness of Organ Allocation

Matthew Rognlie

Peng Shi

Amy Wen

Duke University

Durham, NC

Advisor: David Kraines

# Introduction

The first successful organ transplant occurred in 1954, a kidney transplant between twin brothers in Boston [Woodford 2004]. Since then, although the number of transplants per year has steadily risen, the number of organ donors has not kept up with demand [Childress and Liverman 2006] (Figure 1).

![](images/a91c81e76a9d3f07bf40fe00493e8ae5a5d24bf25bc215cc32ee5fd641b1a63f.jpg)
Figure 1. Number of transplants and cadaveric organ donors. Source: OPTN Annual Report 2005 [U.S. Organ Procurement ... 2005].

To ensure equitable distribution of available organs, Congress passed the National Organ Transplant Act in 1984. The act established the Organ Procurement and Transplantation Network (OPTN), a regionalized network for organ distribution [Conover and Zeitler 2006]. In 2000, the U.S. Department of Health and Human Services (HHS) implemented an additional policy called the Final Rule, which ensured that states could not interfere with OPTN policies that require organ sharing across state lines [Organ Procurement ... 1999].

The organ matching process involves many factors, whose relative importance depends on the type of organ involved. These include compatibility, region, age, urgency of patient, and waitlist time [Organ Procurement ... 2006]. Although most countries use the same basic matching processes, systems vary in their emphasis on particular parameters [Transplantation Society ... 2002; UK Transplant 2007; Doxiadis et al. 2004; De Meester et al. 1998].

In 2006, kidneys comprised $59\%$ of all organs transplanted [Organ Procurement ... 2007]. In determining compatibility in kidney transplants, doctors look at:

- ABO blood type: The ABO blood type indicates which of two antigens, A and B, are present in the patient's body. Antigens are foreign molecules or substances that trigger an immune response. People with blood type A have antigen A in their body, people with blood type B have antigen B, people with blood type AB have both, and people with blood type O have neither. A person with blood type AB can receive an organ from anyone, a person with blood type A or B can receive an organ from a person of blood type O or the same blood type, and a person with blood type O can receive an organ only from someone with type O blood.
- Human Leukocyte Antigens (HLA): HLA indicates a person's tissue type, whose most important components are the A, B, and DR antigens. Each antigen consists of two alleles, and matching all six components results in a significantly increased success rate for kidney transplants. Patients with mismatched components, however, can still survive for many years [U.S. Organ Procurement ... 2005].
- Panel Reactive Antibody (PRA): PRA is a blood test measuring the percentage of the U.S. population with which a patient's blood is likely to react. It tests for the presence of antibodies, proteins that bind to foreign molecules [University of Maryland ... 2004a]. Blood can become sensitized by previous transplants, blood transfusions, or pregnancies [Duquesnoy 2005].

Kidney transplants are common partly because kidneys, unlike most other organs, can be safely obtained from live donors. In fact, live-donor kidneys are more effective than cadaveric kidneys, with longer half-lives and lower rejection rates [Gentry et al. 2005]. Over $75\%$ of living donors in 2004 were related (parents, siblings, spouses) to the transplant recipients [Childress and Liverman 2006]. However, some people willing to donate to an intended recipient cannot because of blood type or HLA incompatibility, leaving over $30\%$ of patients without a suitable kidney transplant [Segev et al. 2005].
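The ABO rules above amount to a small subset check: a donor is compatible when every antigen the donor carries is one the recipient already has. A minimal sketch (the function and table names are ours, not from the paper):

```python
# ABO compatibility as described above: AB receives from anyone, A and B
# receive from type O or their own type, and O receives only from O.
ANTIGENS = {"O": set(), "A": {"A"}, "B": {"B"}, "AB": {"A", "B"}}

def abo_compatible(donor: str, recipient: str) -> bool:
    """True if a kidney from `donor` blood type can go to `recipient`."""
    return ANTIGENS[donor] <= ANTIGENS[recipient]
```

By this rule, type-O kidneys can go to every recipient, while type-O patients can receive only type-O kidneys, exactly as stated above.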
One solution is kidney paired donation (KPD), which matches two incompatible donor-recipient pairs where the donor of each pair is compatible with the recipient of the other, satisfying both parties [Ross et al.]. Another is list paired donation (LPD), where a recipient receives higher priority on the waitlist if an associated donor gives to another compatible recipient on the waitlist [Gentry et al. 2005].

We incorporate all these factors in modeling the various aspects of transplantation. First, we focus on the U.S. network and produce a generic model of the processes that impact the number of people on the waitlist, the number of transplants, and the length of wait time. To illustrate our model, we use data specific to kidney transplants, and also examine the policy of Eurotransplant for ideas on improving the current U.S. system. We then construct a model of list paired donation to determine how to maximize the number of exchanges while maintaining compatibility. Finally, we analyze the implications of our model for patient and donor decisions, taking note of important ethical and political issues.

# Generic U.S. Transplant Network

# Overview

We model the generic U.S. transplant network as a rooted tree (growing downward). The root represents the entire network, and its immediate children represent the regions. Each node represents some kind of organization, whether an Organ Procurement Center, a state organization, or an interstate region. At each node there is a patient wait list, the concatenation of the wait lists of the node's children.

We approximate the network's functioning as a discrete-time process, in which each time step is one day with four phases:

- In phase I, patients are added to the leaf nodes. We approximate the rate of wait-list addition by a Poisson process; doing so is valid because we can reasonably assume that the arrivals are independent, identically distributed, and approximately constant from year to year.
Suppose that the cumulative number of candidates added to the wait list by time $t$ is $\mathrm{addition}_t$; then we model

$$
\Pr (\mathrm{addition}_{t+1} - \mathrm{addition}_{t} = k) = \frac{e^{-\lambda} \lambda^{k}}{k!}.
$$

For the rate constant $\lambda$, we use the average daily number of new applicants: $\lambda \approx$ (number of new applicants per year)/365.25.

- In phase II, we add cadaver organs to the leaf nodes. As with patients, we model cadaver arrivals as a Poisson process, with rate constant derived from the average number of cadaver organs added in a given year.
- In phase III, we allocate organs based on bottom-up priority rules. A bottom-up priority rule is a recursive allocation process propagated up from the bottom of the tree, which requires any organ-patient match to meet some minimum priority standard. For example, for kidney allocation, the first priority rule is to allocate kidneys to patients who match the blood type and HLA profile exactly. Within this restriction, OPTN dictates that kidneys be allocated locally first, then regionally, then nationally. In our model, this corresponds to moving from the leaves up the tree. Matched organ-patient pairs undergo transplantation, which has a success rate dependent on the quality of the match. (In later sections, we also explore the success rate as a function of the experience of the doctors at the center and the quality of the kidney.)
- In phase IV, we simulate the death of patients on the waiting list. We treat the death rate $k$ of a patient as a linear function $aT + b$ of the person's wait time $T$. Hence, calculating from time 0, a person's chance of survival to time $T$ is $e^{-kT} = e^{-(aT + b)T}$.

Under this mathematical model, our problem becomes finding a good tree structure and an appropriate set of bottom-up priority rules.

# Simulation

To study this model, we average results over many simulations of the kidney transplant network.
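The stochastic pieces of phases I and IV can be sketched as follows; the sampler uses Knuth's product-of-uniforms method, and the example rate of 27,000 applicants per year is purely illustrative, not a figure from the paper:

```python
import math
import random

def poisson_sample(lam: float, rng: random.Random) -> int:
    """One day's arrival count, Poisson(lam), via Knuth's method:
    count uniforms until their running product drops below e^(-lam)."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def survival_probability(T: float, a: float, b: float) -> float:
    """Chance of surviving to wait time T under the linear death rate
    k = a*T + b, i.e. e^{-kT} = e^{-(aT + b)T} as in phase IV."""
    return math.exp(-(a * T + b) * T)

rng = random.Random(1)
lam = 27000 / 365.25  # illustrative only: ~27,000 new applicants per year
new_patients_today = poisson_sample(lam, rng)
```

Knuth's method is exact but runs in time proportional to the sampled value, which is adequate at daily rates like these.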
Our simulation works as follows: At every time step, in phase I, we draw the number of new candidates from the Poisson distribution. For each new patient added, we randomly generate the person's race and age according to data on race and age distributions. Using the person's race, we generate the person's blood type and HLA makeup, according to known distributions, and the patient's PRA, based on probabilities published by the OPTN.

Similarly, in phase II, we generate a list of donor organs according to known distributions of blood type and HLA makeup. Moreover, we record where the organ was generated, so we can study the effect of having to move the organ before transplantation, since transport time lowers its quality.

In phase III, we implement recursive routines that traverse the tree from the bottom up, following the OPTN system for kidneys. To model the success rate of an operation, we use the statistics published by the OPTN; our main method of determining whether an operation is successful is the number of HLA mismatches. Success is affected by the sensitivity of the person, measured by the person's PRA, which we model by adding a linear term to the success rate. Moreover, we reduce the success rate by 5 percentage points if the organ is not procured from the same center as the patient; $5\%$ is the average effect on the success rate of increasing the delay by 10-20 hrs, according to OPTN data.

In phase IV, we estimate the coefficients $a$ and $b$ of the previous section by regression, and use the resulting formula to calculate the probability of death.

To adjust the parameters for this model, we use the OPTN national data for the national active wait list for cadaver kidneys from 1995 to 2004 and feed into our model the number of donations for each year.

# Results of the Basic Model

To quantify the quality of a network, we use a set of objective functions, which represent various ideas about the desirability of policy outcomes.
For these functions, let

$x$ be the number of "healthy" patients to receive a successful transplant each year,

$y$ be the corresponding number of "sick" patients (those with some terminal illness or serious medical condition) to receive a successful transplant,

$a$ be the average age of the transplant recipients, and

$m$ be the maximum wait time in the queue.

We examine the following objective functions:

$x + y$ : This is simply the number of successful transplants per year.

$(100 - a) \times (x + y)$ : This considers the premise that transplants are more valuable when given to young recipients.

$x + 0.5y$ : This is a stylized adoption of the idea that transplants given to terminally ill recipients are less valuable.

$(x + y) / \max (9,m)$ : This incorporates queue wait time.

We also include a proposed tradeoff between big and small centers:

- In a big center, the doctors are more experienced. We simulate this by decreasing the success rate of operations at centers that do not perform a threshold number of operations per year.
- With small centers, kidneys are allocated on a more local basis, which minimizes deterioration of organs in transportation. We simulate this by applying a penalty when kidneys are moved to larger regional centers, and also when kidneys are moved between centers.

![](images/4267cc14215d67114de6ab71bd99ded6f24ec216690599410ac4a6aaf1e1f43c.jpg)
Figure 2. Flowchart for simulation.

# Summary of Assumptions

- Arrivals in the waiting queue, both of cadaver donors and needy patients, are independent and identically distributed.
- The generic U.S. transplant network can be simulated as a rooted tree.
- Death rate can be approximated as a linear function of time on the waiting list.

# Other Countries' Transplantation Policies

We researched the policies of other countries, such as China, Australia, and the United Kingdom; they differ little from the U.S. policy.
China uses organs from executed prisoners, which we do not believe to be ethical. We decided that the policies of Eurotransplant have the best groundwork: People analyze their policy each year, tweaking the waiting-time point system.

The Eurotransplant policy does not emphasize regions as much, with the maximum number of points for distance being 300. In contrast, the number of points received for zero HLA mismatch is 400. The Eurotransplant policy also has greater emphasis on providing young children with a kidney match, giving children younger than 6 years an additional 1095 waiting-time points.

We implemented the Eurotransplant policy in our model to see if that policy could also benefit the U.S., but we found little difference.

# Utilizing Kidney Exchanges

A promising approach for kidney paired exchange is to run the maximal matching algorithm over the graph defined by the set of possible exchanges. However, this approach takes away from the autonomy of patients, because it requires them to wait for enough possible pairs to show up before performing the matching, and sometimes it may require them to accept a less-than-perfect match.

We sought to improve this supposedly "optimal solution" by implementing list paired donation in our model.

According to each patient's phenotypes, we calculate the expected blood types of the person's parents and siblings, and make that the person's contribution to the "donor pool." In other words, the person brings to the transplant network an expected number $r$ of potential donors. We then make the patient perform list paired donation with the topmost person in the current queue who is compatible in blood type with the donor accompanying the new patient. According to our research, kidneys from live donors are about $21\%$ better than cadaver kidneys in terms of success rate. Thus, it is in the cadaver-list person's best interest to undergo this exchange.
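In the kidney-paired-donation setting just described, the vertices of the exchange graph are incompatible donor-recipient pairs, with an edge whenever two pairs can swap donors. The maximal-matching idea can be illustrated with a simple greedy pass (a true maximum matching would need, e.g., Edmonds's blossom algorithm; the pair data here are invented):

```python
def greedy_maximal_matching(swap_edges):
    """Greedy maximal (not necessarily maximum) matching: take each swap
    whose two donor-recipient pairs are both still unmatched."""
    matched, matching = set(), []
    for i, j in swap_edges:
        if i not in matched and j not in matched:
            matching.append((i, j))
            matched.update((i, j))
    return matching

# Hypothetical exchange graph on four incompatible donor-recipient pairs:
edges = [(0, 1), (1, 2), (2, 3)]
print(greedy_maximal_matching(edges))  # [(0, 1), (2, 3)]
```

A greedy pass is order-dependent, which echoes the autonomy concern above: waiting for more pairs to accumulate can enable a larger matching than matching each pair as soon as it appears.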
We find that for any value of $r$ from 0.2 to 2, list paired donation drastically decreases the length of the waitlist, by factors as large as 3, and makes the queue size stabilize (Figure 3).

# Patient Choices

What should a patient do when presented with the opportunity for a kidney? The decision is not clear-cut; for instance, if the patient is offered a poorly matched kidney now, but a well-matched kidney is likely to arrive in a reasonable time, the patient should perhaps wait. We examine this tradeoff.

We assume that a patient who has already received a kidney transplant may not receive another in the future. While this is not always true, it suffices for the purposes of our model, since we posit a choice between accepting a "lesser" kidney today and a better kidney later. (When a patient receives a second kidney transplant after the first organ's failure, there is no reason to expect a better organ, since the patient cannot immediately return to the top of the cadaver kidney queue, and live donors are likely to be more reluctant after a previous failure.)

![](images/b5515a86343ef1310ce7e3716bb12380cc12efe092929014f7ed10ce9fcd9610.jpg)
Figure 3. Wait time (in days) for various values of donation rate $r$, with list paired donation, applied over time to the current waitlist.

We assume that patients want to maximize expected years of life.

Let there be a current transplant available to the patient; we call this the immediate alternative and denote it by $\mathcal{A}_0$. The patient and doctor have some estimate of how this transplant will affect survival; we assume that they have a survival function $s_0(0,t)$ that describes the chance of being alive at time $t$ after the transplant. We further assume that this survival function is continuous and has limit zero at infinity: In other words, the patient is neither strangely prone to die in some infinitesimal instant nor capable of living forever.
The patient also has a set of possible future transplants, which we call future alternatives and write as $(\mathcal{A}_1,\mathcal{A}_2,\ldots ,\mathcal{A}_n)$. Each future alternative $\mathcal{A}_i$ also has a corresponding survival function $s_i(t_0,t)$, where $t_0$ is the starting time of transplant and $t$ is the current time. We assume that there is a constant probability $p_i$ that alternative $\mathcal{A}_i$ will become available at any time. While this is not completely true, we include it to make the problem manageable: More complicated derivations would incorporate outside factors whose complexity would overwhelm our current framework. Finally, if the patient opts for a future alternative and delays transplant, survival is governed by a default survival function $s_d$.

# Summary of Assumptions

- The patient can choose either a transplant now (the immediate alternative $\mathcal{A}_0$ ), or from a finite set of transplants $(\mathcal{A}_1, \mathcal{A}_2, \ldots, \mathcal{A}_n)$ in the hypothetical future (the future alternatives).
- Each alternative has a corresponding survival function $s_i(t_0, t)$ , which describes the chance of survival until time $t$ as a function of $t$ and transplant starting time $t_0$ . Survival functions have value 1 at time 0, are continuous, and have limit zero at infinity.
- Each future alternative $\mathcal{A}_i$ has a corresponding constant probability $p_i$ of becoming available at any given time. Hence, the probability at time $t$ of the alternative not yet having become available is $e^{-p_i t}$ .
- A default survival function $s_d(t)$ defines the chance of survival when there has not yet been a transplant. To maintain continuity, $s_d(t_0) = s_i(t_0, t_0)$ .
- The patient can have only one transplant.
- The patient attempts to maximize expected lifespan; in case of a tie in expected values, the patient chooses an option that provides a kidney more quickly.
The survival functions must behave consistently; they cannot become wildly better or worse-performing relative to each other. We propose a formal definition to capture this concept.

A separable survival function $s_i(t_0, t)$ is one that can be expressed as the product of two functions, one a function of only $t_0$ and the other a function of only $t - t_0$ :

$$
s _ {i} (t _ {0}, t) = a _ {i} (t _ {0}) b _ {i} (t - t _ {0}).
$$

We stipulate that $b(0) = 1$ . In a separable set of survival functions, all functions are individually separable with the same function $a(t_0)$ .

Is it reasonable to assume that for any patient, the set of survival functions is separable? It is not an entirely natural condition, and indeed there are cases where it does not seem quite right: for instance, when some $t_0$ is high, so that higher values of $t$ approach extreme old age, where survival decreases rapidly and the patient is less likely to survive than the product of $a$ and $b$ predicts. But in this case, the absolute error is small anyway: $a(t_0)$ accounts for the probability of survival that stems from waiting for a kidney until time $t_0$ , and thus if $t_0$ is large, $a(t_0)b(t - t_0)$ is likely to be quite tiny as well.

Moreover, separability is intuitively reasonable for modeling the effects of a delayed kidney donation. The function $a(t_0)$ measures the decrease in survival rate that results from waiting for an organ transplant. This should be consistent across all survival functions for a given patient; we express this notion in the concept of a separable set. Meanwhile, the factor $b(t - t_0)$ accounts for the decrease in survival during the time $(t - t_0)$ spent with the new kidney.

Consequently, we assume that our survival functions are separable. This will lead us to an explicit heuristic for lifespan-maximizing decisions, which is the goal of this section.
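A concrete separable instance is exponential decay: $a(t_0) = e^{-\mu t_0}$ for deterioration while waiting and $b(t - t_0) = e^{-\nu (t - t_0)}$ for post-transplant survival. The rates below are invented for illustration; the check confirms that $s(t_0, t)/s(t_0, t_0)$ depends only on the elapsed time $t - t_0$, which is exactly what separability with $b(0) = 1$ requires:

```python
import math

MU = 0.08   # hypothetical survival decay rate while waiting (per year)
NU = 0.05   # hypothetical survival decay rate after transplant (per year)

def s(t0: float, t: float) -> float:
    """Separable survival function s(t0, t) = a(t0) * b(t - t0)."""
    return math.exp(-MU * t0) * math.exp(-NU * (t - t0))

# s(t0, t) / s(t0, t0) equals b(t - t0): the same for any waiting time t0.
ratio_after_2 = s(2.0, 7.0) / s(2.0, 2.0)   # 5 years post-transplant
ratio_after_9 = s(9.0, 14.0) / s(9.0, 9.0)  # 5 years post-transplant
assert abs(ratio_after_2 - ratio_after_9) < 1e-12
```

With different rates $\nu_i$ for different organs but a shared $\mu$, these functions form a separable set in the sense defined above.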
For $\mathcal{A}_i$ and $\mathcal{A}_j$ two future alternatives in a separable set, we assign an order according to:

$$
\int_ {0} ^ {\infty} b _ {i} (t) d t \leq \int_ {0} ^ {\infty} b _ {j} (t) d t \longleftrightarrow \mathcal {A} _ {i} \leq \mathcal {A} _ {j}
$$

We turn to the derivation of a lifespan-maximizing strategy. Such a strategy, when presented with alternative $\mathcal{A}_i$ at time $t_0$ , will either accept or wait for other alternatives. In fact:

Theorem. If a patient's alternatives form a separable set, then the optimal strategy is either to accept an alternative $\mathcal{A}_i$ at all times $t_0$ or to decline it at all times $t_0$ . If the patient declines $\mathcal{A}_i$ , then the patient must decline all alternatives less than or equal to $\mathcal{A}_i$ in the order relation defined above. Similarly, if the patient accepts $\mathcal{A}_j$ , then the patient must accept all alternatives greater than or equal to $\mathcal{A}_j$ .

Proof: The patient will accept the alternative or probabilistic bundle of alternatives that the patient's survival functions indicate gives the greatest lifespan. For alternative $\mathcal{A}_i$ , the expected lifespan beyond time $t_0$ is

$$
\int_ {0} ^ {\infty} s _ {i} (t _ {0}, t) d t.
$$

Suppose that a patient at time 0 declines this alternative in favor of some optimal set of future alternatives. Furthermore, suppose that this set includes some alternative $\mathcal{A}_k$ such that $\mathcal{A}_k \leq \mathcal{A}_i$ . Then the expected lifespan from this set is

$$
\left(\sum_ {j} p _ {j} + p _ {k}\right) \int_ {0} ^ {\infty} \exp \left[ - \left(\sum_ {j} p _ {j} + p _ {k}\right) t _ {0} \right] a (t _ {0}) \int_ {0} ^ {\infty} \frac {\sum_ {j} p _ {j} b _ {j} (t) + p _ {k} b _ {k} (t)}{\sum_ {j} p _ {j} + p _ {k}} d t \, d t _ {0},
$$

where $j$ ranges over all alternatives $\mathcal{A}_j$ in the optimal set except $\mathcal{A}_k$ .
This double integral does not mix integration variables and is therefore equal to a product of two integrals:

$$
\left(\sum_ {j} p _ {j} + p _ {k}\right) \int_ {0} ^ {\infty} \exp \left[ - \left(\sum_ {j} p _ {j} + p _ {k}\right) t _ {0} \right] a (t _ {0}) d t _ {0} \int_ {0} ^ {\infty} \frac {\sum_ {j} p _ {j} b _ {j} (t) + p _ {k} b _ {k} (t)}{\sum_ {j} p _ {j} + p _ {k}} d t.
$$

Since $\mathcal{A}_k$ is less than or equal to $\mathcal{A}_i$ , and $\mathcal{A}_i$ was declined in favor of the set of alternatives that we are examining, the presence of the $k$ term in the weighted average under the right integrand lowers the value of the average. The previous expression is thus less than

$$
\left(\sum_ {j} p _ {j} + p _ {k}\right) \int_ {0} ^ {\infty} \exp \left[ - \left(\sum_ {j} p _ {j} + p _ {k}\right) t _ {0} \right] a (t _ {0}) d t _ {0} \int_ {0} ^ {\infty} \frac {\sum_ {j} p _ {j} b _ {j} (t) + p _ {k} b _ {k} (t)}{\sum_ {j} p _ {j}} d t.
$$

Using integration by parts on the left, we finally get:

$$
\begin{array}{l} \left[ 1 + \int_ {0} ^ {\infty} \exp \left[ - \left(\sum_ {j} p _ {j} + p _ {k}\right) t _ {0} \right] a (t _ {0}) d t _ {0} \right] \left[ \int_ {0} ^ {\infty} \frac {\sum_ {j} p _ {j} b _ {j} (t) + p _ {k} b _ {k} (t)}{\sum_ {j} p _ {j}} d t \right] \\ < \left[ 1 + \int_ {0} ^ {\infty} \exp \left[ \left(- \sum_ {j} p _ {j}\right) t _ {0} \right] a (t _ {0}) d t _ {0} \right] \left[ \int_ {0} ^ {\infty} \frac {\sum_ {j} p _ {j} b _ {j} (t)}{\sum_ {j} p _ {j}} d t \right]. \\ \end{array}
$$

The expression on the right is strictly larger than our starting expression, but it is also equal (as inverse integration by parts shows) to the expected lifespan for the same optimal set of alternatives except without $\mathcal{A}_k$ . This is a contradiction: By removing $\mathcal{A}_k$ from our "optimal" set, we have found a bundle with longer expected lifespan, indicating that the original set was not truly optimal.
Our assumption that $\mathcal{A}_k$ is part of the optimal set is therefore false; in general, this means that when alternative $\mathcal{A}_i$ is declined for an optimized set of future alternatives, no alternative less than or equal to $\mathcal{A}_i$ can be in that set.

An analogous argument proves the opposite result: When alternative $\mathcal{A}_i$ is taken, all alternatives greater than or equal to $\mathcal{A}_i$ must also be taken when possible.

That the choice to accept or decline a given alternative is independent of the time of decision now follows immediately. With separable survival functions, the only difference between the expected lifetimes of alternatives and optimal sets over a time interval from $t_1$ to $t_2$ is the constant ratio $a(t_2) / a(t_1)$, which does not alter the direction of the inequality sign.

This theorem immediately implies a heuristic for an optimal strategy:

# Heuristic for Finding an Optimal Strategy over Separable Survival Functions:

1. Determine the set of possible alternatives and the separable survival functions accompanying each.
2. Use the order relation given earlier to put the alternatives in order from $\mathcal{A}_1$ to $\mathcal{A}_n$, with $\mathcal{A}_1$ lowest and $\mathcal{A}_n$ highest.
3. Start with alternative $\mathcal{A}_n$.
4. Label the current alternative $\mathcal{A}_k$. Determine whether the expected value for a set including all alternatives $\mathcal{A}_{k-1}$ and greater is higher than the expected value for the set of alternatives at and above $\mathcal{A}_k$.
5. If yes, move down to $\mathcal{A}_{k-1}$ and repeat the previous step, unless you are already at $\mathcal{A}_1$. In that case, it is optimal to take all alternatives available, in particular, the immediate alternative.
6. If no, the optimal strategy is to take all alternatives from $\mathcal{A}_k$ to $\mathcal{A}_n$, but none smaller.
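The heuristic above can be sketched numerically. In the Python sketch below, each alternative is a pair $(p_i, B_i)$, where $p_i$ is its (assumed Poisson) arrival rate and $B_i = \int_0^\infty b_i(t)\,dt$ is its benefit integral; the waiting-survival factor $a(t)$ defaults to a made-up exponential. All rates, the default $a(t)$, and the function names are our own illustrative assumptions, not values or code from the paper.

```python
import math

def bundle_value(alts, a=lambda t: math.exp(-0.1 * t), dt=0.01, t_max=200.0):
    """Expected lifespan gain from waiting for the first arrival in a bundle.

    alts: list of (p_i, B_i) pairs; p_i is alternative i's arrival rate and
    B_i is its benefit integral of b_i(t).  By the separability assumption,
    the value factors into an a(t)-weighted waiting term times the
    p-weighted average benefit.
    """
    P = sum(p for p, _ in alts)
    if P == 0.0:
        return 0.0
    mean_benefit = sum(p * B for p, B in alts) / P
    # Numerically approximate the integral of P * exp(-P t) * a(t) dt.
    wait_factor = sum(P * math.exp(-P * t) * a(t) * dt
                      for t in (k * dt for k in range(int(t_max / dt))))
    return wait_factor * mean_benefit

def optimal_bundle(alts):
    """Steps 2-6 of the heuristic: rank alternatives by B_i, then extend the
    accepted set downward while doing so raises the bundle's expected value."""
    ranked = sorted(alts, key=lambda pb: pb[1], reverse=True)
    accepted = [ranked[0]]
    for alt in ranked[1:]:
        if bundle_value(accepted + [alt]) >= bundle_value(accepted):
            accepted.append(alt)
        else:
            break
    return accepted
```

For example, an alternative with a small benefit integral is declined alongside one with a large one (diluting the average benefit is not worth the faster expected arrival), while two nearly equal alternatives are both accepted.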
# Ethical and Political Ramifications

It is reasonable to question kidney assignment to a patient who is less likely, for whatever reason, to benefit fully from the transplant's impact on lifespan.

We incorporate these situations into our model by altering the objective function for a particular class of patients. We simply alter the "returns" that determine how we measure success in the first place. This is a clean and efficient way to incorporate both practical (diseased people are not likely to benefit much from organ transplants) and moral ("save the kids!") judgments into our model.

![](images/3371b785d842b295048336ddb10ec5e675533359ff8bb87c57c61198da4eae2c.jpg)
Figure 4. Results of policy of subtracting one point from the objective function of those above 60.

That transplantation is an enormously complex medical procedure, demanding dedicated facilities and experienced doctors, raises questions about the location of the surgery. Should we always ship kidneys to large, well-established medical centers, which may be more consistent in performing the operation? Or should we make transplants mostly local, so that all facilities become more experienced and maintain proficiency?

To reflect these concerns, we add two new and important wrinkles to our model.

- First, we introduce a "doctor experience function," which maps greater experience in transplant surgery to greater success and consistency in performing the procedure. Although it is impossible to pinpoint a precise analytical relationship between the number of transplants performed and success in performing them, we use regression on an OPTN data set to identify a rough linear function, and examine the effect of several other functions as well.
- Second, we model the supply of cadaveric organs: Currently in the United States, fewer than half of cadavers with the potential to provide kidneys are used as donors.
This is due to a system that relies heavily on family wishes: In most situations, doctors will defer to family members on the decision not to donate organs, even when there is preexisting affirmation of desire to donate from the deceased individual. + +One dramatic improvement, already implemented in some countries and since early this year in the State of Illinois, is presumed consent. Under presumed consent, every individual (possibly with limited restrictions) is assumed to have given consent for postmortem organ donation. An individual opposed to the prospect must explicitly "opt out." In many other countries, such a system has dramatically increased the pool of cadaveric kidneys available for transplant; in Austria, for instance, availability is nearly equal to demand. The result in the U.S. would not be quite so favorable; the Austrian system "benefits" from a high rate of traumatic road deaths. + +In our model of living kidney donation, donors are limited to individuals with a substantial relationship to the patient: spouse, siblings, parents, children, and close friends. Excluding the black market in commercial kidneys, this setup accurately reflects the situation in the U.S. today, and in an ideal world it would be enough to provide high-quality living organs to those in need. But while list paired donation dramatically improves the usefulness of this system, its dependence on a limited pool of kidneys prevents full distribution. Moreover, some related donors decide against donation, further narrowing the supply for matching schemes. + +Multiple studies identify financial disincentives as some of the main barriers to donation. Surprisingly, donors are sometimes liable for a portion of the medical costs of their procedure (and its consequences). Meanwhile, they often lose income, as they generally cannot work for some period of time during and after the transplant. 
And although exact figures on the number of potential donors dissuaded by these costs are difficult to obtain, sources suggest possible percentages as high as $30\%$.

What are the solutions? First, an authority (most likely the government) could provide for the full medical expenses of the operation, along with insurance for any future health consequences.

Some observers have proposed an even more radical reform: legalizing trade in human organs and creating "kidney markets" to ensure supply. A worldwide black market already exists in live-donor kidneys, offering some insights into how a legal system might work. The verdict is unclear; researchers have been both surprised by the quality of black-market transplants and appalled by them. Arguably, however, concerns about quality and safety in a legal organ-trading system are misplaced, since there is little reason to believe that a regulated market (with transplants conducted by well-established centers) would be any worse than the general kidney transplant system.

More controversial is the ethical propriety of compensation for organs. By banning all "valuable considerations" in exchange for organs, the National Organ Transplant Act of 1984 expressed a widespread sense that any trade in organs is ethically appalling. Many commentators assert that it would inevitably lead to exploitation and coercion of the poor. At the same time, others claim that markets in organs are morally obligatory: If these markets are the only way to save lives, they must be implemented. We do not take a firm position on the ethics of this question but recommend further study of it.

# Donor Decision

When considering donating a kidney, a potential donor takes into account many factors: the risk to self, the risk of future health issues, personal issues, and the chance of transplantation success.
# Immediate Risk to Donor

Especially when the recipient of the kidney has no relation to the donor, the risk to the donor is of greatest importance. After all, donors put their lives at risk when they are not otherwise in any danger of dying. Of course, steps are taken to ensure that the donor is healthy enough to undergo a successful operation. At many institutions, the criteria for exclusion of potential living kidney donors include kidney abnormalities, a history of urinary tract infection or malignancy, extremely young or old age, and obesity. In addition, the mortality rate around the time of the operation is only about $0.03\%$ [Jones et al. 1993].

# Future Health Concerns

There have been suggestions that the early changes that result from the removal of a kidney (increases in glomerular filtration rate and renal blood flow) may lead to insufficiency of kidney function later on. However, a study of 232 kidney transplant patients, with a mean follow-up time of 23 years, demonstrated that if the remaining kidney was normal, survival was identical to that in the overall population [Jones et al. 1993]. In fact, another study suggests that kidney donors live longer than the age-matched general population, most likely due to the bias that occurs in the selection process [Ramcharan and Matas 2002]. Therefore, kidney donors do not face a higher long-run risk of developing kidney failure.

# Psychological Issues

There may, however, be future psychological issues stemming from depression. According to donor reports from a follow-up conducted by the University of Minnesota, $4\%$ were dissatisfied with their donation experience; non-first-degree relatives and donors whose recipient died within a year of transplant were more likely to say that they regretted their decision and wished that they had not donated [Johnson et al. 1999].
To reduce this percentage, a more careful selection based on a rigorous psychosocial evaluation should be conducted.

# Personal Issues

Some potential donors believe that they would incur the costs of the operation, discouraging them from following through with donating [United Network ... 2007]. This is simply not true. Also, a survey of 99 health insurance organizations found that kidney donation would not affect an insured person's coverage and that a healthy donor would be offered health insurance, so insurance is not an obstacle [Spital and Kokmen 1996]. However, income is lost during time away from work, and time away from home is another significant factor in a potential donor's decision. Another personal issue that might deter potential donors is their attitude toward surgery in general, which may be affected by irrational fears or previous experiences. Such potential donors would be unable to cope psychologically with surgery, in spite of knowing the high probability of success.

# Relationship

The relationship between the potential donor and the intended recipient is also a key factor. Close family members and spouses of recipients are three times as willing to participate in paired donation as other potential donors [Waterman et al. 2006]. Although it is hard to quantify, there also appears to be a significant number of altruistic donors. Most potential donors would not want to participate in nondirected donation, since it would not benefit their intended recipients, but $12\%$ of potential donors were extremely willing to donate to someone they did not know [Waterman et al. 2006]. Since the risk/benefit ratio is much lower for living anonymous donors (LADs), there are concerns that such donors are psychologically unstable. However, studies conducted by the British Columbia Transplant Society indicate otherwise.
About half of the potential LADs who contacted their center and completed extensive assessments that looked at psychopathology and personality disorder met rigorous criteria to be donors [Henderson et al. 2003].

Across the different donor-exchange programs, there is an association between the willingness of potential donors to participate and how likely they thought their intended recipient would be to receive a kidney. With kidney paired donation, willingness to participate is $64\%$ [Waterman et al. 2006]. For a compatible donor-recipient pair, the willingness for a direct transplant would arguably be closer to $100\%$. So we hypothesize that as the size $n$ of a kidney exchange increases, potential donors become less willing to participate, because of the increasing chance for error in one of the swaps and the increasing difficulty of coordination. Potential donors would not wish to go through so much trouble without certainty of acquiring a kidney for their recipient, especially since they are giving up one of their own kidneys. For list paired donation, the data are inconclusive; in our model, we hypothesized that a random percentage of a predefined donor base would become actual organ donors.

There are essentially four basic models of systems that deal with incentives for live donors:

- Market compensation model, which is based on a free-market system in which the laws of supply and demand regulate the monetary price for donating a kidney.
- Fixed compensation model, where all donors are paid a fixed amount, regardless of market value, for any trouble caused by the donation.
- Expense reimbursement model, which covers only the expenses incurred by the donor, such as travel and childcare costs, that are related to the transplant process.
- No-compensation model, the current system in the U.S., which forces an altruistic donor to cover his or her own expenses [Israni et al. 2005].
+ +The market compensation model guarantees that the demand for kidneys will be met, as seen from Iran's organ market [President's Council on Bioethics 2006]. But it discourages altruistic donors, since they gain only monetary and no altruistic benefit. Also, the large demand for kidneys would likely drive the price up, causing ethical concerns about kidneys becoming a commodity available only to the rich. To encourage more altruistic donors, we argue that the expense reimbursement model is the best approach. This model allows altruistic donors to volunteer for the transplantation procedure without worrying about financial costs, and, unlike the fixed compensation model, prevents donors from making a profit from their donation. When there is an opportunity for profit, there is a risk of developing a market for organs, which many would argue is unethical. Still, we believe that given the enormous potential for increase in the kidney donor pool, it is important to investigate the possible results from compensation and incentive-based systems. + +# Conclusion + +We believe in the absolute necessity of implementing a list paired donation system. Its dramatic positive effect on outcomes for kidney patients in need of replacement organs is remarkable and cannot be ignored. + +We have also found several other less striking results. + +- The basic importance of geography. High transportation time leads to deterioration of the organs being transferred; in poorly designed systems, this occurs even when there are plausible and equally valuable local routes for transmission. +- We recognize the importance of age and disease stage in allocating kidneys. When these factors are incorporated into our objective function, altering the point system for allocation decisions becomes important. +- It is critical to reflect the objectives of the transplant system. 
Without a well-verified and established relationship between what we include in our allocation decisions and our moral and ethical bases for judgment, we will always be dissatisfied.

# Evaluation of Solutions

# Strengths

Our main model's strength is its enormous flexibility. For instance, the distribution network can acquire many different structures, from a single nationally-run queue to a heavily localized and hierarchical system. Individuals, represented as objects in $\mathrm{C}++$ code, are made to possess a full range of important attributes, including blood type, HLA type, PRA level, age, and disease. Incorporating all these factors into a single, robust framework, our model enables realistic simulation of kidney allocation but remains receptive to almost any modification.

This strength allows us to make substantive conclusions about policy issues, even without extensive data sets. By varying parameters, allocation rules, and our program's objective function—all quite feasible within the structure—we can examine the guts of policymaking: the ethical principles underlying a policy, the implementation rules designed to fulfill them, and the sometimes nebulous numbers that govern the results.

# Weaknesses

Although we list the model's comprehensive, discrete simulation as a strength, it is (paradoxically) also its most notable weakness. Our results lack clear illustrative power; data manipulated through a computer program cannot achieve the same "aha" effect as an elegant theorem. Indeed, there is a fundamental tradeoff here between realism and elegance, and our model arguably veers toward overrealism.

Second, our model demands greater attention to numbers. While its general structure and methodology are valid, the specific figures embedded in its code are not airtight.
For instance, the existing literature lacks consensus on the importance of HLA matching, possibly because developments in immunosuppressive drugs are changing the playing field too rapidly. Our use of parameters derived from OPTN data cannot guarantee numerical accuracy.

Third, and perhaps most fundamentally, the bulk of our simulation-based analysis hinges on the "objective functions" that we use to evaluate the results. This raises a basic question: What "good" should the kidney allocation system maximize? We attempt to remedy this problem by including multiple objective functions.

# Appendices

# Appendix I: Letter to Congress

We have undertaken an extensive examination of organ procurement and distribution networks, evaluating the effects of differences in network structure on the overall success of the transplant system. In particular, we took a close look at the impact of two representative schemes for kidney allocation. First, we evaluated a simple model consisting of one nationally-based queue, where kidney allocation decisions are made without regard for individual region. Second, we looked at a more diffuse system with twenty regionally-based queues. We simulated the trade-off in effectiveness between the two systems by including a "distance penalty," which cut success rates for organs that were transported between different regions. This was implemented to recognize the role that cold ischemia, which is necessary for long-distance transportation of organs, plays in hurting transplant outcomes.

Higher levels of this parameter hurt the single-queue system, which transports its kidneys a noticeably greater average distance. Indeed, we found that for essentially all values of the distance penalty parameter, the multiple-queue system was superior.
This is because the regionalized system in our simulation, modeled directly on the American system, uses geography to allocate organs when there is a "tie": when the organ has many similarly optimal potential destinations. This approach has no apparent downside, while minimizing inefficiency tied to unnecessary organ transportation. We recommend that you preserve the current regionally-based allocation system. Additionally, if you desire to allocate additional funds to improve the organ distribution system, we suggest that you support the streamlining of organ transportation.

We also compared the American OPTN organ allocation system to the analogous Eurotransplant system. Both use rubrics that assign "points" for various characteristics important to their matching goals. They differ, however, in their scheme of point assignment; while the contrasts are largely technical, they have real impacts on the overall welfare of transplant patients. Although the systems were relatively close in effectiveness overall, our simulations identified the American point system as slightly superior. Therefore, we recommend that you preserve the main points underlying the OPTN point allocation scheme.

# Appendix II: Letter to the Director of the U.S. HRSA

We write having undertaken an extensive simulation-based review of the various political and ethical questions underlying decisions about organ allocation, and of policies for increasing live and cadaveric organ donation. First, we implemented a portion of code that represents the value of transplant center and doctor experience in improving donation outcomes. When we input a sizable experience effect, which does not appear to exist from the empirical data at this point, we found that a centrally based allocation and treatment system became substantially more effective than a more diffuse, multiple-queue model.
Given the lack of evidence that there is actually such an "experience" effect—the main available data, which come from the Organ Procurement and Transplantation Network, do not appear to indicate one—we advise caution before changing the system to reflect this theoretical result.

We also examined the usefulness of including heavy weights for age and terminal illness in the kidney allocation system. In general, we concluded that such measures are both justified and effective. Although blood type, HLA match, and PRA are still the most fundamental factors to consider, we believe that the extreme difference in effectiveness of helping people of different ages and health status justifies substantial inclusion of those factors in the allocation process.

Finally, we examined literature and existing research as they relate to the possibility of changing our system of consent and compensation for donation. We recommend implementing a "presumed consent" system for cadaveric organs, which will automatically tally deceased individuals as donors unless they have specifically opted out of the system. We also recommend exploring, but not necessarily implementing, the possibility of some sort of compensation for live kidney donation. While we are mindful of widespread ethical concerns about the practice, we believe that the extreme demand for kidneys should prompt us to consider all alternatives. Specifically, we suggest a pilot program of light to moderate compensation for live kidney donors, and a thorough review of the outcomes and changes in incentives for those involved.

# References

Childress, J.F., and C.T. Liverman. 2006. Organ Donation: Opportunities for Action. Washington, DC: National Academies Press.

Conover, C.J., and E.P. Zeitler. 2006. National Organ Transplant Act. http://www.hpolicy.duke.edu/cyberexchange/Regulate/CHSR/HTMLs/F8-National%20organ%20Transplant%20Act.htm. Accessed 12 February 2007.
De Meester, J., G.G. Persijn, T. Wujciak, G.
Opelz, and Y. Vanrenterghem. 1998. The new Eurotransplant Kidney Allocation System: Report one year after implementation. Transplantation 66 (9): 1154-1159.
Doxiadis, I.I.N., J.M.A. Smits, G.G. Persijn, U. Frei, and F.H.J. Claas. 2004. It takes six to boogie: Allocating cadaver kidneys in Eurotransplant. Transplantation 77 (4): 615-617.
Duquesnoy, R.J. 2005. Histocompatibility testing in organ transplantation. http://tpis.upmc.edu/tpis/immuno/wwwHLAtyping.htm. Accessed 12 February 2007.
Gaston, R.S., G.M. Danovitch, R.A. Epstein, J.P. Kahn, A.J. Matas, and M.A. Schnitzler. 2006. Limiting financial disincentives in live organ donation: A rational solution to the kidney shortage. American Journal of Transplantation 6 (11): 2548-2555.
Gentry, S.E., D.L. Segev, and R.A. Montgomery. 2005. A comparison of populations served by kidney paired donation and list paired donation. American Journal of Transplantation 5 (8): 1914-1921.
Henderson, A.J.Z., M.A. Landolt, M.F. McDonald, W.M. Barrassle, J.G. Soos, W. Gourlay, et al. 2003. The living anonymous kidney donor: Lunatic or saint? American Journal of Transplantation 3 (2): 203-213.
Israni, A.K., S.D. Halpern, S. Zink, S.A. Sidhwani, and A. Caplan. 2005. Incentive models to increase living kidney donation: Encouraging without coercing. American Journal of Transplantation 5 (1): 15-20.
Johnson, E.M., J.K. Anderson, C. Jacobs, G. Suh, A. Humar, B.D. Suhr, et al. 1999. Long-term follow-up of living kidney donors: Quality of life after donation. Transplantation 67 (5): 717-721.
Jones, J., W.D. Payne, and A.J. Matas. 1993. The living donor: Risks, benefits, and related concerns. Transplant Reviews 7 (3): 115-128.
Organ Procurement and Transplantation Network. 1999. Final rule. http://www.optn.org/policiesAndBylaws/final_rule.asp. Accessed 12 February 2007.
_____. 2006. Policies. http://www.optn.org/policiesAndBylaws/policies.asp. Accessed 12 February 2007.
_____. 2007. Data.
http://www.optn.org/data. Accessed 12 February 2007.
President's Council on Bioethics. 2006. Organ transplant policies and policy reforms. http://www.bioethics.gov/background/crowepaper.htm. Accessed 12 February 2007.

Ramcharan, T., and A.J. Matas. 2002. Long-term (20-37 years) follow-up of living kidney donors. American Journal of Transplantation 2 (10): 959-964.
Ross, L.F., D.T. Rubin, M. Siegler, M.A. Josephson, J.R. Thistlethwaite Jr., and E.S. Woodle. 1997. Ethics of a kidney paired exchange program. New England Journal of Medicine 336 (24) (12 June 1997): 1752-1755.
Segev, D.L., S.E. Gentry, D.S. Warren, B. Reeb, and R.A. Montgomery. 2005. Kidney paired donation and optimizing the use of live donor organs. Journal of the American Medical Association 293 (15) (20 April 2005): 1883-1890.
Spital, A., and T. Kokmen. 1996. Health insurance for kidney donors: How easy is it to obtain? Transplantation 62 (9): 1356-1358.
Transplantation Society of Australia and New Zealand, Inc. 2002. Organ allocation protocols. http://www.racp.edu.au/tsanz/oapmain.htm. Accessed 12 February 2007.
UK Transplant. 2007. Organ allocation. http://www.uktransplant.org.uk/ukt/about_transplants/organ_allocation/organ_allocation.jsp. Accessed 12 February 2007.
United Network for Organ Sharing. 2007. Media information. http://www.unos.org/news/myths.asp. Accessed 12 February 2007.
U.S. Organ Procurement and Transplantation Network and the Scientific Registry of Transplant Recipients. 2005. OPTN/SRTR Annual Report. http://www.optn.org/AR2005/default.htm. Accessed 12 February 2007.
University of Maryland Medical Center. 2004a. Overview of the High PRA Rescue Protocol. http://www.umm.edu/transplant/kidney/highpra.html. Accessed 12 February 2007.
_______. 2004b. Living kidney donor frequently asked questions. http://www.umm.edu/transplant/kidney/qanda.html. Accessed 12 February 2007.
Waterman, A.D., E.A. Schenk, A.C.
Barrett, B.M. Waterman, J.R. Rodrigue, E.S. Woodle, et al. 2006. Incompatible kidney donor candidates' willingness to participate in donor-exchange and non-directed donation. American Journal of Transplantation 6 (7): 1631-1638.
Woodford, P. 2004. Transplant timeline. National Review of Medicine 1 (20). http://www.nationalreviewofmedicine.com/images/issue/2004/issue20_oct30/Transplanttimeline.pdf. Accessed 12 February 2007.

![](images/41d736eee6a3f8db3cec2ec6fee1ba54de0f7f7da60863e011d5ac7a11014fad.jpg)

Duke ICM 2007 team members Matt Rognlie, Amy Wen, and Peng Shi, wearing T-shirts with "nice numbers," and advisor David Kraines.

# Analysis of Kidney Transplant System Using Markov Process Models

Jeffrey Y. Tang

Yue Yang

Jingyuan Wu

Princeton University

Princeton, NJ

Advisor: Ramin Takloo-Bighash

# Summary

Abstract: We use Markov processes to develop a mathematical model of the U.S. kidney transplant system. We use both mathematical models and computer simulations to analyze the effect of certain parameters on transplant waitlist size and to investigate the effects of policy changes on the model's behavior.

Our results show that the waitlist keeps growing because the influx of new candidates outpaces the supply of deceased-donor and living-donor transplants. Possible policy changes to improve the situation include presumed consent, tightening qualifications for joining the waitlist, and relaxing the requirements for accepting deceased donors.

We also evaluate alternative models from other countries that would reduce the waitlist, and examine the benefits and costs of these models compared with the current U.S. model. We analyze kidney paired exchange along with generic $n$-cycle kidney exchange, and use our original U.S. model to evaluate the benefits of incorporating kidney exchange.
We develop a model explaining the decisions that potential recipients face concerning organ transplant, then expand this consumer decision theory model to explain the decisions that potential organ donors face when deciding whether to donate a kidney.

We finally consider an extreme policy change—the marketing of kidneys for kidney transplants—as a method of increasing the live-donor pool to reduce waitlist size.

# Introduction

The American organ transplant system is in trouble: Waitlist size is increasing; as of February 2007, 94,000 candidates were waiting for a transplant, among them 68,000 waiting for kidneys. We create a mathematical model using a Markov process to examine the effects of parameters on waitlist size and to investigate the effects of policy changes. Possible policy changes to improve the situation include assuming that all people are organ donors unless they have specifically opted out (presumed consent), tightening qualifications for joining the waitlist, and relaxing the requirements for accepting deceased donors.

We evaluate alternative models from other countries that could reduce the waitlist, and examine the benefits and costs of these models compared with the current U.S. model. We analyze the Korean kidney paired exchange along with the generic $n$-cycle kidney exchange, and use our original U.S. model to evaluate the benefits of incorporating the kidney exchange. The Korean model increases the incoming rate of live donors, which is preferable because live-donor transplants lead to higher life expectancy. However, this policy alone cannot reverse the trend in waitlist size.

We also develop a model explaining the decisions that potential recipients face concerning organ transplant. We expand this consumer decision theory model to explain the decisions that potential organ donors face when deciding whether or not to donate a kidney.
Finally, we consider an extreme policy change-the marketing of kidneys for kidney transplants as a method of increasing the live donor pool to reduce waitlist size. We consider two economic models: one in which the government buys organs from willing donors and offsets the price via a tax, and one in which private firms are allowed to buy organs from donors and offer transplants to consumers at the market-equilibrium price. + +# Task 1: The U.S. Kidney Transplant System + +# Background: Kidney Transplants + +- Blood Type: Recipient and donor must have compatible blood types (Table 1). +- HLA: Recipient and donor must have few mismatches in the HLA antigen locus. Because of diverse allelic variation, perfect matches are rare. Mismatches can cause rejection of the organ. +- PRA: PRA is a blood test that measures rejection to human antibodies in the body. The value is between 0 and 99, and its numerical value indicates the percent of the U.S. population that the blood's antibodies reacts with. High PRA patients have lower success rates among potential donors[U so it is more difficult to locate donate matches for them (Table 2). + +Table 1. Compatible blood types [American National Red Cross 2006]. + +
Recipient blood typeDonor red blood cells must be:
AB+O-O+A-A+B-B+AB-AB+
AB-O-A-B-AB-
A+O-O+A-A+
A-O-A-
B+O-O+B-B+
B-O-B-
O+O-O+
O-O-
In U.S. population:7%38%6%34%2%9%1%3%
+ +Table 2. Relationship between PRA and transplant waiting time [University of Maryland . . . 2007]. + +
| Peak PRA | Proportion of waiting list | Median waiting time to transplant (days) |
|---|---|---|
| 0–19 | 60% | 490 |
| 20–79 | 21% | 1,042 |
| 80+ | 19% | 2,322 |
# Explanation of Model

The Organ Procurement and Transplantation Network's (OPTN) priority system for assigning and allocating kidneys is used as the core model for the current U.S. transplantation system [Organ Procurement . . . 2006]. The OPTN kidney network is divided into three levels: the local level, the regional level, and the national level. There are 270 individual transplant centers distributed throughout the U.S. [Dept. of Health and Human Services 2007], organized into 11 geographic regions.

The priority system for allocation of deceased-donor kidneys to candidates on the waitlist takes into account proximity of recipient to donor, recipient wait time, and match to donor, with location carrying greater weight, according to a point system [Organ Procurement . . . 2006]:

- Wait time points: A candidate receives one point for each year on the waiting list. A candidate also gets an additional fraction of a point based on rank on the list: With $n$ candidates on the list, the $r$th-longest-waiting candidate gets $1 - (r - 1)/n$ points. So, for example, the longest-waiting candidate $(r = 1)$ gets one additional point, while the newest arrival on the list $(r = n)$ gets $1/n$ additional point.
- Age points: The young receive preferential treatment because their expected lifetime with the transplant is greater. Children below 11 years of age get 4 additional points, and those between 11 and 18 get 3 additional points.
- HLA mismatch points: Because there are two chromosomes, the possible number $m$ of mismatches in the HLA-DR locus is 0, 1, or 2. A candidate-donor pair gets $2 - m$ points.

# Model Setup

We model the entry and exit of candidates from the waitlist with a continuous-time Markov birth/death process [Ross 2002]. It accommodates both additions to the waitlist and reductions in its size (arrivals of living and deceased donors, and deaths and recoveries of waitlist candidates).
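The point system described above can be sketched in code. The function and argument names are our own illustration of the stated rules, not OPTN's implementation:

```python
def allocation_points(years_waiting: float, rank: int, n_candidates: int,
                      age: float, dr_mismatches: int) -> float:
    """Points for one candidate under the system described above.

    rank: 1 = longest-waiting candidate among n_candidates.
    dr_mismatches: HLA DR-locus mismatches with the donor (0, 1, or 2).
    """
    pts = years_waiting                       # one point per year waited
    pts += 1 - (rank - 1) / n_candidates      # fractional seniority bonus
    if age < 11:                              # pediatric preference
        pts += 4
    elif age <= 18:
        pts += 3
    pts += 2 - dr_mismatches                  # HLA DR match bonus
    return pts

# Longest-waiting adult, 3 years on the list, perfect DR match:
print(allocation_points(3, 1, 100, 40, 0))    # 3 + 1 + 0 + 2 = 6.0
```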
- In 2006, 29,824 patients were added to the kidney transplant waitlist, while 5,914 transplants had living donors, so $5914/(29824 + 5914) \approx 17\%$ of incoming patient cases have a willing compatible living donor.
- The procedure for allocating deceased-donor kidneys is [Organ Procurement ... 2006, 3.5, 3-16ff]:
  - First, match the donor blood type with compatible recipient blood types. The only exceptions are:
    * Type O donors must be donated to type O recipients first, and
    * Type B donors must be donated to type B recipients first.
  - Perfect matches (same blood type and no HLA mismatch) receive first priority.
  - If a kidney with blood type O or B has no perfect-matching candidates in the above procedure, then the pool is reopened for all candidates.
  - In the $17\%$ of cases with no perfect match for any recipient [Wikipedia 2007], sort by PRA value (higher priority to high PRA; high PRA means low compatibility, which likely means being on the waitlist for a long time), then by regional location of the kidney, then by points in the point allocation system.

# Summary of Markov Process

Let $N_{t}$ be a random variable indicating the number of people in the waitlist at time $t$. The properties of $N_{t}$ are summarized in Figure 1, where

- Each arrow represents a possible event at the current state $(N)$.
- The time until each event occurs is exponentially distributed.

![](images/14503c11196b5601684b87f720e2e63862dc71786e359b880a323c1ed5092843.jpg)
Figure 1. Markov process model of waitlist.

- After an event occurs, by memorylessness of the exponential distribution, the clock is reset to zero, as if nothing had happened.
- Wait time is assumed zero for compatible live-donor transplants.
- Because there are so many local centers (270), we simplify our model by taking the region (of which there are 11) as the lowest level of waitlist candidates.
- Candidates who become medically unfit for surgery are removed from the waitlist and in our model are classified as deaths.
- Candidates whose conditions improve enough are removed from the waitlist. Both these people and those recovering from surgery have exponential remaining lifetime with mean 15 years.
- We use the parameter values in Table 3, which come from the OPTN database using values from 2006.

Table 3. Means of exponential distributions.
| Symbol | Description | Value |
|---|---|---|
| $\lambda_1$ | new waitlist arrivals | 81.7/d |
| $\lambda_2$ | incoming patients with living donors available | 16.2/d |
| $\lambda_3 = \lambda_1 + \lambda_2$ | total incoming patients (independent RVs) | 81.7 + 16.2 = 97.9/d |
| $\mu_1$ | arrivals of deceased-donor transplants | 26.9/d [Norman 2005] |
| $\mu_2$ | waitlist deaths | 27.0/d |
| $\mu_3$ | waitlist condition improves | 2.4/d |
| $\mu_4 = \mu_1 + \mu_2 + \mu_3$ | waitlist departures (independent RVs) | 26.9 + 27.0 + 2.4 = 56.3/d |
| TAB | time of life after surgery [European Medical Tourist 2007] | 0 if candidate dies; exponential with mean 15 y with transplant |
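The birth/death process with the Table 3 rates can be simulated directly. A minimal Gillespie-style sketch; the initial waitlist size (68,000) and one-year horizon are our own assumptions for illustration:

```python
import random

# Rates in events per day, from Table 3. Patients arriving with a
# compatible living donor are assumed transplanted immediately, so
# only the lambda_1 stream joins the waitlist.
LAMBDA_NEW = 81.7              # new waitlist arrivals
MU_OUT = 26.9 + 27.0 + 2.4     # transplants + deaths + recoveries = 56.3

def simulate_waitlist(days: float, n0: int = 68_000, seed: int = 1) -> int:
    """Simulate the waitlist size N_t for the given number of days."""
    random.seed(seed)
    t, n = 0.0, n0
    while t < days:
        rate = LAMBDA_NEW + (MU_OUT if n > 0 else 0.0)
        t += random.expovariate(rate)            # time to next event
        if random.random() < LAMBDA_NEW / rate:
            n += 1                               # arrival
        else:
            n -= 1                               # departure
    return n

# Net drift is 81.7 - 56.3 = +25.4 candidates/day, roughly +9,300/year.
print(simulate_waitlist(365))
```

With these rates the drift dominates the noise, so the simulation grows almost deterministically, which matches the paper's point that no single policy tweak reverses the trend.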
# Analysis of Model

Our two criteria for the strength of a strategy are the number of people on the waitlist (equivalently, the number of people who get transplants) and the quality of matches, as measured by expected lifetime after receiving a transplant.

# Efficient Allocation of Kidney Transplants

We build a new model to take into account the effects of both distance and optimal match. A kidney arriving at a center can be given to the best matching candidate at that center, the best in the region, or the best in the country.

Of 10,000 candidate recipients, on average 37 are from the center, 873 are from the region outside the center, and 9,090 are from the nation outside the region. Using a uniform distribution on $(0,1)$, we randomly assign scores to each of the 10,000, rank them by score, and take the highest rank at each level. We iterate this process 10,000 times and find the average rank of the top candidate in each area (Table 4).

Table 4. Average quality of top candidate in each area.
| Area | Probability that top candidate is in this group | Average rank (from bottom) of top candidate among 10,000 |
|---|---|---|
| Center | 1/270 = 0.37% | 9739.7 |
| Region outside center | 1/11 - 1/270 = 8.72% | 9989.7 |
| Nation outside region | 10/11 = 90.90% | 9999.9 |
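Table 4 can be reproduced by simulation. A sketch, using fewer iterations than the paper's 10,000 to keep it quick:

```python
import random

def average_top_ranks(trials: int = 200, seed: int = 0):
    """Average rank (1 = worst of 10,000) of the best candidate in the
    center (37 people), rest of region (872), and rest of nation (9,091)."""
    random.seed(seed)
    sums = [0, 0, 0]
    for _ in range(trials):
        scores = [random.random() for _ in range(10_000)]
        order = sorted(range(10_000), key=scores.__getitem__)
        rank = {idx: r + 1 for r, idx in enumerate(order)}   # 1 = lowest score
        groups = (range(0, 37), range(37, 909), range(909, 10_000))
        for g, idxs in enumerate(groups):
            sums[g] += max(rank[i] for i in idxs)
    return [s / trials for s in sums]

center, region, nation = average_top_ranks()
# Expected values are close to Table 4: about 9740, 9990, and 9999.9.
```

The closed form behind these numbers is that the best of $m$ candidates out of $n$ has expected rank $m(n+1)/(m+1)$, which for $m = 37$, $872$, and $9091$ gives roughly 9738, 9990, and 9999.9.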
Transportation of the kidney can lead to damage, because of the time delay in transplanting. Thus, we posit a damage function $f$ that depends on the location of the recipient: lowest in the center, slightly higher in the region outside the center, and higher still in the country outside the region, i.e.,

$$
f(\text{local}) < f(\text{regional}) < f(\text{national}).
$$

Let us assume that when a kidney arrives in a center, it goes to the center, to the rest of the region, or outside the region with probabilities $a_1$, $a_2$, and $a_3$. Let $G$ be the weighted score for the kidney, with

$$
G = a_1 \cdot (1 - f(\text{local})) \cdot \text{score}_{\text{local}} + a_2 \cdot (1 - f(\text{regional})) \cdot \text{score}_{\text{regional}} + a_3 \cdot (1 - f(\text{national})) \cdot \text{score}_{\text{national}} \tag{1}
$$

and expected value

$$
E(G) = a_1 \cdot (1 - f(\text{local})) \cdot 9739.7 + a_2 \cdot (1 - f(\text{regional})) \cdot 9989.7 + a_3 \cdot (1 - f(\text{national})) \cdot 9999.9. \tag{2}
$$

Optimizing $E(G)$ as a function of the $a_i$ is a linear programming problem, but we cannot solve it without assessing the damage function for different regions.

# Minimizing the Waitlist

There are some who argue that the wait time assignment is too lax and leads to unfair waitlists. In the current system, urgency is specifically stated as having no effect on the points used to determine who receives a transplant [Organ Procurement ... 2006]. A patient is permitted to join the waitlist (in more than one region, even) when kidney filtration rate falls below a particular value or when dialysis begins. Getting on the waitlist as early as possible helps "pad" the points for waiting time.
A patient not yet on dialysis can afford to wait longer yet may receive a kidney sooner than others joining later who have more urgent need. Urgency has no effect on a patient's rank for receiving a kidney. A possible solution is to tighten the conditions for joining the waitlist, so that a patient's wait time begins at dialysis. This policy would slow the rate of growth of the waitlist, at the expense of more waitlisted patients dying.

A strategy to increase the rate of deceased-donor arrival, already policy in Illinois, is to presume that everyone desires to be an organ donor unless they specifically opt out.

Figure 2 shows the space of rate combinations for these two policies.

![](images/c6c4afc2c0f92ed626884d0786dd7bb07aaf8f0e1f0c54916a9932dd157eeafb.jpg)
Figure 2. Net waitlist arrival rate per year.

Using both strategies could make the net waitlist arrival rate negative, for example, if waitlist arrivals can be decreased by $25\%$ and the donor pool increased by $17\%$.

# Model Strengths

- The Markov process, with exponentially distributed entry/exit times, makes calculations simple.
- Minimizing the waitlist depends on only two variables.
- The model incorporates HLA values, PRA distributions, no-mismatch probabilities, region distribution, and blood-type distribution and compatibility requirements.
- The model is compatible with alternative strategies, such as a paired exchange system.

# Model Weaknesses

- Remaining lifetime after surgery should be adjusted, since an exponential distribution for remaining lifetime is appropriate only until a certain age.
- The model cannot account for patients' preferences. We assume that all patients offered a kidney take it if the HLA value is reasonable, which may not be the case.
- The model does not make distinctions for race and socioeconomic status. Different races have differing wait times [Norman 2005, 457].
- We assume independence of random variables, so that increasing or decreasing parameters will not affect other parameters.
- Our emphasis on waitlist size neglects waitlist time; another approach would be to try to minimize waitlist wait time.

# Tasks 2 and 3: Kidney Paired Exchange

# Background

As noted at the University of Chicago Hospitals, "In 10 to 20 percent of cases at the Hospitals, patients who need a kidney transplant have family or friends who agree to donate, but the willing donor is found to be biologically unsuited for that specific recipient" [Physicians propose ... 1997].

In the simple kidney paired exchange system (Figure 4), there are two pairs of patient/donor candidates. Each donor is incompatible with the intended patient but compatible with the other patient. Surgery is performed simultaneously in the same hospital on four people, with two kidney removals and two kidney transplants.

However, not every patient-donor pair will have a mutually compatible partner pair. In such a case, it is possible for the cycle to expand to $n$ patient-donor pairs, with each donor giving to a compatible stranger patient (Figure 5). Since such an exchange requires at least $2n$ surgeons at the same hospital, higher-order exchanges are less desirable on logistic grounds.

A kidney paired exchange program does not affect the intrinsic model outlined for Task 1. The only change when live incompatible pairs get swapped is

![](images/9687e58a12c697d70d0468ffa085ab8f86171fb616afc26349797df63fa94511.jpg)
Figure 4. Kidney paired exchange system.

![](images/834c0bb2d534fde44b6d9f40909999a5f56e5425ba568db7df7996efe9e1abbf.jpg)
Figure 5. $n$-way kidney exchange.

that the rate of candidates entering the waitlist is reduced. However, for those in the waitlist, the same procedure is still being used.

- Kidney paired exchanges have higher priority than larger cycles, for logistical reasons.
- Recipients must receive a transplant in the region in which they are on the waitlist (this reduces travel time).
- Exchanges with no mismatch are prioritized over exchanges with mismatch.

Waiting time for an exchange is assumed to be 0, as it was for live-donor matches in Task 1. After an exchange, all individuals involved are removed from the pools of donors and recipients.

We use Region 9 as a sample region to test our model. Region 9 has a waitlist (6,058) similar to the average waitlist per region, and 909 candidates $(15\%)$ have willing but incompatible donors. We ran our simulation 100 times and computed averages. Table 5 shows extrapolation of the results nationwide.

# Analysis

In 2006, there were 26,689 kidney transplants nationwide, including only a few kidney paired exchanges. The approximately 9,656 additional transplants yearly indicated in Table 5 would have been a $36\%$ increase and would have reduced the waitlist correspondingly by $14\%$, from 69,983 to 60,327.

Table 5. Averaged results of repeated simulations of multiple-pair transplant exchange nationwide (extrapolated from Region 9 data).
| Kind of match | Transplants | Percentage |
|---|---|---|
| 2-way, no mismatch | 2 | |
| 2-way, non-perfect | 9,646 | |
| 3-way, no mismatch | 0 | |
| 3-way, non-perfect | 8 | |
| Total transplants | 9,656 | 92% |
| Candidates with willing but incompatible donor | 10,497 | |
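The 2-way matching step can be sketched as a greedy search over incompatible pairs. This toy version matches on blood type only, ignoring HLA and PRA, so its counts are illustrative rather than a reproduction of the paper's simulation:

```python
import random

TYPES = ["O-", "O+", "A-", "A+", "B-", "B+", "AB-", "AB+"]
FREQ  = [0.07, 0.38, 0.06, 0.34, 0.02, 0.09, 0.01, 0.03]   # Table 1
OK = {"AB+": set(TYPES),
      "AB-": {"O-", "A-", "B-", "AB-"},
      "A+":  {"O-", "O+", "A-", "A+"}, "A-": {"O-", "A-"},
      "B+":  {"O-", "O+", "B-", "B+"}, "B-": {"O-", "B-"},
      "O+":  {"O-", "O+"},             "O-": {"O-"}}

def two_way_exchanges(n_pairs: int = 909, seed: int = 0):
    """Greedy 2-way exchange among blood-type-incompatible pairs."""
    random.seed(seed)
    draw = lambda: random.choices(TYPES, FREQ)[0]
    pairs = [(draw(), draw()) for _ in range(n_pairs)]     # (recipient, donor)
    pool = [(r, d) for r, d in pairs if d not in OK[r]]    # incompatible only
    used = [False] * len(pool)
    transplants = 0
    for i, (r1, d1) in enumerate(pool):
        if used[i]:
            continue
        for j in range(i + 1, len(pool)):
            r2, d2 = pool[j]
            if not used[j] and d1 in OK[r2] and d2 in OK[r1]:
                used[i] = used[j] = True    # swap: d1 -> r2, d2 -> r1
                transplants += 2
                break
    return transplants, len(pool)

matched, incompatible = two_way_exchanges()
print(matched, "transplants among", incompatible, "incompatible pairs")
```

Even this crude matcher converts a large share of incompatible pairs into transplants, which is the mechanism behind the Table 5 figures.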
Another option is to consider multiple exchanges for all donor-recipient pairs in a particular center. This minimizes the travel time required for the patients, while reducing the computational demands of the search algorithm. A center has on average 259 candidates, of whom 39 have willing but incompatible donors available. For this sample size, we get on average 25 transplants $(65\%)$, compared to $92\%$ under exchange at the regional level. Furthermore, the proportion of high-quality transplants is also smaller. The benefits of a center-only exchange system are personal and psychological: Patients live close to the surgery location, which means better support from both family and familiar physicians.

# Task 4: Patient Choice Theory

Suppose a patient is offered a barely compatible kidney from the cadaver queue. There are two options:

- take the bad-match kidney immediately, or
- wait for a better match, from the cadaver queue or from a paired exchange.

We consider two cases: without paired exchange and with it.

# Model 1: Decision Scenario without Paired Exchange

Of transplants with poorly matched kidneys, $50\%$ fail after 5-7 years. So we assume that the lifetime after a poorly matched kidney transplant is exponentially distributed with mean 6 years [Norman 2005, 458].

We translate the data of Table 6 on survival probabilities to exponential variables with rate $\lambda$ by solving $P(\text{survive } t \text{ years}) = e^{-\lambda t}$.

Table 6. Rates for patient survival [National Institute of Diabetes and Digestive and Kidney Diseases 2007].
| Time (y) | Dialysis $p$ | Dialysis $\lambda$ | Live-donor $p$ | Live-donor $\lambda$ | Cadaver $p$ | Cadaver $\lambda$ |
|---|---|---|---|---|---|---|
| 1 | .774 | 0.256 | .977 | 0.0233 | .943 | 0.0587 |
| 2 | .632 | 0.229 | .959 | 0.0209 | .907 | 0.0488 |
| 5 | .315 | 0.231 | .896 | 0.0220 | .819 | 0.0399 |
| 10 | .101 | 0.229 | .753 | 0.0284 | .591 | 0.0526 |
| Avg. | | 0.236 | | 0.0237 | | 0.0500 |
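The conversion from a survival probability to an exponential rate is a one-liner; for instance, the first row of Table 6:

```python
import math

def exp_rate(p_survive: float, t_years: float) -> float:
    """Solve P(survive t years) = exp(-lambda * t) for lambda."""
    return -math.log(p_survive) / t_years

print(round(exp_rate(0.774, 1), 3))    # dialysis, year 1   -> 0.256
print(round(exp_rate(0.977, 1), 4))    # live donor, year 1 -> 0.0233
print(round(exp_rate(0.943, 1), 4))    # cadaver, year 1    -> 0.0587
```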
We first diagram the wait strategy for the scenario of waiting on dialysis for a deceased-donor kidney, with no kidney paired exchange (Figure 6).

![](images/4adf2b553a20dcbc4b90f326a9eff9b5e9d539e1d1ea038e4bd221546833ba44.jpg)
Figure 6. Wait strategy with no paired exchange.

We then calculate expected remaining lifetime with this strategy. We use

$$
P(\text{deceased-donor transplant}) = \frac{\text{deceased-donor transplants}}{\text{waitlist}} = \frac{10659}{75711} = .140,
$$

using 2006 data [Organ Procurement ... 2007]. We have

$$
E(\text{lifetime}) = \frac{0.236}{0.236 + 0.140} \left( \frac{1}{0.236 + 0.140} \right) + \frac{0.140}{0.236 + 0.140} \left( \frac{1}{0.236 + 0.140} + \frac{1}{0.050} \right) \approx 10 \text{ years}.
$$

If instead the patient chooses to undergo immediate transplant with a bad-match deceased-donor kidney, then remaining lifetime is exponentially distributed with rate 0.167, so

$$
E(\text{lifetime}) \approx 6.0 \text{ years}.
$$

![](images/d969dfeaba95fe8c55fcdd71ca4928ce7e4b30b3fd04ca1400bf7a618aa03f39.jpg)
Figure 7. Transplant strategy with no paired exchange.

The expected remaining lifetime for the wait strategy is 4 years greater, so we recommend that strategy. This assumes that the patient is risk-neutral. Being on dialysis alone leads to an expected remaining lifetime of 4.2 years, which is less than the expected remaining lifetime with a bad-match kidney. The decision hinges on how much risk the patient is willing to take.

# Model 2: Decision Scenario with Paired Exchange

This modified scenario leads to Figure 8. Since between $10\%$ and $20\%$ of patients have willing but incompatible donors [Physicians propose . . . 1997], and lacking any better data, we use .15 as the probability of a kidney paired exchange being possible.
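The expected-lifetime calculations in this section follow a competing-exponential-risks pattern: on the waitlist, "death" and "transplant" race as independent exponential clocks, so the first event arrives after an Exp(sum of rates) time and is a transplant with probability proportional to its rate. A sketch with the Model 1 rates:

```python
MU_DEATH = 0.236        # death rate on dialysis (per year), Table 6 average
MU_TRANSPLANT = 0.140   # deceased-donor transplant rate (per year)
POST_RATE = 0.050       # post-transplant lifetime rate, cadaver kidney

def wait_strategy_lifetime() -> float:
    """E[lifetime] for waiting on dialysis with no paired exchange."""
    total = MU_DEATH + MU_TRANSPLANT
    p_tx = MU_TRANSPLANT / total          # chance transplant beats death
    # Time to first event, plus post-transplant life when transplanted.
    return (1 - p_tx) / total + p_tx * (1 / total + 1 / POST_RATE)

print(round(wait_strategy_lifetime(), 1))   # ~10.1 years (wait strategy)
print(round(1 / 0.167, 1))                  # ~6.0 years (bad-match kidney now)
print(round(1 / MU_DEATH, 1))               # ~4.2 years (dialysis only)
```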
Using similar calculations as before, we find the expected remaining lifetime for the wait strategy:

$$
E(\text{lifetime}) \approx 19.5 \text{ years}.
$$

![](images/338c7187a5f88b34ad1e72c482a7db9c76ab41ad05b6f3baad8f1263bbcd02df.jpg)
Figure 8. Wait strategy with paired exchange.

# Model Strengths

- The model compares strategies numerically.
- Values for survival rate, rate of death, and rate of donor arrival are available from data, and these parameters can be easily adjusted for different scenarios.
- The model can be modified to accommodate other strategies or new categorizations of transplants (perhaps dividing transplants into grades A, AA, AAA for quality).

# Task 5: New Organ Market

Another method to increase the rate of incoming live donors is to implement a market allowing people to sell organs for transplantation. Currently, it is illegal in the U.S. to "transfer an organ for valuable consideration" [National Organ Transplant Act 1984]. There are two possible ways a market can work: government-managed (Figure 9) or a public market (Figure 10).

![](images/46b0b1830b1ed0356d15028d9499a54e4f8157099b6200269d0a670faec91c51.jpg)
Figure 9. Government-managed organ-buying system.

Originally, $Q_0$ units of organs were available (via cadaver and living-donor transplants). In the government-managed model, people sell their kidneys to the government at the market-equilibrium price. The government pays $D + E$ for the kidney transplants, so the economy suffers a tax of $D + E$. However, the customers gain the consumer surplus of $C$ and the suppliers gain the supplier surplus of $D$, so the benefit of having more kidneys available to society is $C + D - (D + E) = C - E$. Because of the inelasticity of demand for kidney transplants, $C > E$, so $C - E > 0$. Therefore, a government-managed system would eliminate the waitlist, because $Q_1$ would likely be greater than the total number of people on the waitlist.
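The surplus comparison can be made concrete with assumed linear demand and supply curves. All numbers here are illustrative assumptions of ours, not data from the paper, and the paper's areas $C$ and $D$ correspond to the changes in these surpluses as supply expands from $Q_0$ to $Q_1$:

```python
# Hypothetical linear market: demand P = A - B*Q, supply P = S*Q.
A, B, S = 120.0, 0.002, 0.001   # illustrative parameters only

def equilibrium():
    q = A / (B + S)                       # quantity where demand meets supply
    p = S * q                             # equilibrium price
    consumer_surplus = 0.5 * q * (A - p)  # triangle under demand, above p
    producer_surplus = 0.5 * q * p        # triangle above supply, below p
    return q, p, consumer_surplus, producer_surplus

q, p, cs, ps = equilibrium()
print(q, p)   # equilibrium quantity ~40000, price ~40
```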
However, government-managed systems are known to be slow and inefficient [Krauss 2006], so people in this market would have long wait times for a transplant. The increase from $Q_{0}$ to $Q_{1}$ would be drastic, leading to a strain on hospitals and on the health care system. + +A possible solution to the long wait times intrinsic to government-managed surgeries is to privatize the market and allow private companies to buy kidneys and sell surgeries. The $Q_{0}$ donors who originally donated for free would still be in the model (but we assume that they would still be uncompensated), so + +![](images/92179d7d3320f7fcd34912383f89905afd71c93a28e84372f83ad452fc8a8f3e.jpg) +Figure 10. Free market for organs. + +the companies buy organs only from the remaining supply curve. The market equilibrium is still the same coordinate, but this time the consumer surplus is increased by $C$ and the supplier surplus by $D$ , so the benefit to society is $C + D$ instead of $C - E$ , and the free-market system is more efficient to society by $D + E$ . Because the company's demand curve is less inelastic than the customer's demand curve, it is likely that companies will buy fewer kidneys, at a lower price, than the government would. However, companies may take advantage of the inelastic demand curves and try to make long-run profits. + +A free-market system has benefits over the government-managed system; it is more efficient, and it would lead to more transplants if the government-managed system had longer wait times and inefficient allocations due to the bureaucratic difficulties of managing a nationwide kidney industry. + +However, the free-market system also has disadvantages. Matches involving different races are less likely to lead to good transplants, because tissue-type gene sequences have different distributions by race and hence lower likelihood of compatibility across races. Race-associated differences in genetics could lead to race-specific markets (and prices) for kidneys. 
Another possible objection to this market structure involves ethical dilemmas regarding the selling of organs. An organ is a part of one's body, but questions arise whether one "owns" one's organs. One religious view would be that because the body is sacred, it would be wrong to sell one's body for monetary gain. Furthermore, introduction of a market for kidneys could lead some companies to try to start stem-cell or nonliving organ farms, a step that many in bioethics regard as dangerous.

While a kidney market would increase efficiency and lead to more transplants, both the government-sponsored and the free-market versions could pose major ethical problems.

# Task 6: Potential Donor Decision Theory

We calculate the probability that donors will donate in various situations. We build a model similar to the patient's decision scenario, only now we consider the states of the world from a potential donor's viewpoint. We consider three cases: kidney donation to a loved one, to a random person, and to a random person in a kidney exchange system so that a loved one can receive a kidney transplant as well.

We first evaluate the case in which the kidney could be donated to a loved one.

# Donor Decision: Donate to a Loved One?

# Strategy 1: Donate

Let $C$ be a score assigned to the strategy of donating a kidney to a loved one. Let $C$ be a function of three random variables $X$, $Y$, and $Z$, where

- $X$ is the remaining lifetime of the recipient given a live-donor transplant,
- $Y$ is the remaining lifetime of the donor given a live-donor transplant, and
- $Z$ is the pain and depression value of the donor after donating a kidney.

The remaining lifetime of a recipient of a live-donor transplant is exponentially distributed with rate 0.0237. We also know that the perioperative mortality rate is 3 deaths per 10,000 donors (0.03%), and that $2\%$ of donors encounter major complications [Najarian et al. 1992].
(Some donors experience depression or conflict with family members, but these problems are unrelated to the success of the transplantation [Liounis et al. 1988].) Thus, $X$, $Y$, and $Z$ are as follows:

- $X$ is exponentially distributed with rate $\lambda = 0.0237$;
- $Y = \left\{ \begin{array}{ll} 0, & \text{with probability } 0.03\%, \\ T_{\mathrm{N}}, & \text{with probability } 97.97\%, \\ T_{\mathrm{MC}}, & \text{with probability } 2\%, \end{array} \right.$

where $T_{\mathrm{N}}$ is the random variable for the remaining lifetime of a normal person, and $T_{\mathrm{MC}}$ is the random variable for the remaining lifetime of a donor who suffers major complications from the kidney removal;

- $Z$ is the numerical value for the amount of depression, conflict, and anger that results from donating a kidney.

Hence $C$ in this example is given by

$$
C = a_1 X + a_2 Y - a_3 Z,
$$

where $a_1$, $a_2$, and $a_3$ are weights for how important each variable is. These weights reflect the emphasis on each variable by the given donor. Then

$$
E(C) = a_1 E(X) + a_2 E(Y) - a_3 E(Z) = a_1 \cdot 42.19 + a_2 \cdot \left( 0.0003 \cdot 0 + 0.9797 \cdot \overline{T_{\mathrm{N}}} + 0.02 \cdot \overline{T_{\mathrm{MC}}} \right) - a_3 \cdot \overline{Z}.
$$

# Strategy 2: Don't Donate

In this case, we notice several changes to the variables $X$, $Y$, and $Z$:

- There are two scenarios: Your loved one does not get a transplant and dies, or your loved one receives a transplant. In the first case, the time until your loved one dies is exponentially distributed with rate 0.236; in the second case, the time until your loved one receives a transplant is exponentially distributed with rate 0.140 (the same as in Task 4).
Thus, the minimum of the two, which is the time until the first event occurs, is exponentially distributed with rate $0.140 + 0.236 = 0.376$. Thus, we have

$$
X = \min\{T_{\text{die}}, T_{\text{transplant}}\} \cdot P(\text{die}) + \left( \min\{T_{\text{die}}, T_{\text{transplant}}\} + T_{\text{RLADS}} \right) \cdot P(\text{transplant}),
$$

where $T_{\mathrm{RLADS}}$ is the remaining life after transplant from a deceased donor, which is exponentially distributed with rate 0.050. We find that $E(X) \approx 10.11$ years. Also,

- $Y = T_{\mathrm{N}}$,
- $Z = 0$.

It follows again from $C = a_1 X + a_2 Y - a_3 Z$ that

$$
E(C) = a_1 E(X) + a_2 E(Y) - a_3 E(Z) = a_1 \cdot 10.11 + a_2 \cdot \overline{T_{\mathrm{N}}} - a_3 \cdot 0.
$$

The expected value of the benefit of the transplant strategy over the no-transplant strategy is

$$
E[C(\text{transplant})] - E[C(\text{no transplant})] = a_1 \cdot 32.08 + a_2 \cdot \left( -0.0203 \cdot \overline{T_{\mathrm{N}}} + 0.02 \cdot \overline{T_{\mathrm{MC}}} \right) - a_3 \cdot \overline{Z}.
$$

The first component is positive, while the second and third are negative, since $\overline{T_{\mathrm{MC}}} < \overline{T_{\mathrm{N}}}$. The result can be either negative or positive, depending on the weights $a_i$.

# Donor Decision: Donate to an Unknown Person?

In this case, the model stays exactly the same, but the values of the $a_i$ will be different.

# Donor Decision: Donate via Paired Exchange?

We consider an $N$-pair exchange.
We continue using the system provided in the previous example but must include more parameters:

- $X_1$ is the remaining lifetime of the loved-one recipient given a live-donor transplant, and is exponentially distributed with rate $\lambda = 0.0237$;
- $X_2$ is the total remaining lifetime of the $N - 1$ stranger recipients who each receive a live-donor transplant, so $E(X_2) = (N - 1) E(X_1)$;
- $Y$ is the remaining lifetime of the donor given a live-donor transplant, the same as in the non-paired-exchange scenario; and
- $Z$ is the pain and depression value of the donor after donating a kidney.

We then have

$$
E(C) = a_1 E(X_1) + a_2 E(X_2) + a_3 E(Y) - a_4 E(Z) = a_1 \cdot 42.19 + a_2 \cdot (N - 1) \cdot 42.19 + a_3 \cdot \left( 0.0003 \cdot 0 + 0.9797 \cdot \overline{T_{\mathrm{N}}} + 0.02 \cdot \overline{T_{\mathrm{MC}}} \right) - a_4 \cdot \overline{Z}.
$$

For the no-transplant strategy, instead of one recipient being forced to wait for a donor, all $N$ recipients must now wait, since the remaining pairs cannot complete the cycle. Thus, we have:

- $X_1$ is the same as the original $X$ for the no-transplant, no-exchange strategy, except that now transplants arrive faster, because new live transplants are available.
So

$$
\begin{array}{l} X_1 = \min\{T_{\text{die}}, T_{\text{DEAD transplant}}, T_{\text{LIVE transplant}}\} \cdot P(\text{die}) \\ \quad + \left( \min\{T_{\text{die}}, T_{\text{DEAD transplant}}, T_{\text{LIVE transplant}}\} + T_{\text{RLADS}} \right) \cdot P(\text{DEAD transplant}) \\ \quad + \left( \min\{T_{\text{die}}, T_{\text{DEAD transplant}}, T_{\text{LIVE transplant}}\} + T_{\text{RLALS}} \right) \cdot P(\text{LIVE transplant}), \\ \end{array}
$$

where $T_{\mathrm{RLADS}}$ and $T_{\mathrm{RLALS}}$ are the remaining lifetimes after transplant from a deceased and from a living donor; the former is exponentially distributed with rate 0.050, and $E(X_1) = 19.26$ years. This value is different from the no-exchange system because people on the waitlist will have a higher chance of receiving a transplant when the policy changes to permit and encourage exchanges.

- $X_2$ is the total remaining lifetime of the $N - 1$ stranger recipients, each of whom must now also wait, and

$$
E(X_2) = (N - 1) E(X_1) = (N - 1) \cdot 19.26 \text{ years}.
$$

- $Y = T_{\mathrm{N}}$
- $Z = 0$

Hence we obtain

$$
E(C) = a_1 E(X_1) + a_2 E(X_2) + a_3 E(Y) - a_4 E(Z) = a_1 \cdot 19.26 + a_2 \cdot (N - 1) \cdot 19.26 + a_3 \cdot \overline{T_{\mathrm{N}}} - a_4 \cdot 0.
$$

The expected value of the benefit of the transplant strategy over the no-transplant strategy is

$$
E[C(\text{transplant})] - E[C(\text{no transplant})] = a_1 \cdot 22.93 + a_2 \cdot (N - 1) \cdot 22.93 + a_3 \cdot \left( -0.0203 \cdot \overline{T_{\mathrm{N}}} + 0.02 \cdot \overline{T_{\mathrm{MC}}} \right) - a_4 \cdot \overline{Z}.
$$

We compare this value with $a_1 \cdot 32.08 + a_2 \cdot (-0.0203 \cdot \overline{T_{\mathrm{N}}} + 0.02 \cdot \overline{T_{\mathrm{MC}}}) - a_3 \cdot \overline{Z}$ for a no-exchange system.

The expected lifetime of the related patient has lower impact in the exchange model, because the related patient may receive an exchange transplant in the future even if you do not donate your organ through an exchange. While the effect of $a_1$ decreases, the new term $a_2 \cdot (N - 1) \cdot 22.93$ increases the probability that a donor decides to donate a kidney, because the donor feels responsible for increasing the lifetime of all $N$ recipients in the size-$N$ transplant exchange; without that donor, none of the transplants would be possible. However, because the donor feels less attached to random recipients, we have $a_1 \gg a_2$.

# Model Strengths

The model

- provides a numerical value useful in gauging the probability that a donor decides to donate;
- is adjustable to any new system created; and
- incorporates personal and psychological factors.

# Model Weaknesses

Some variables and parameters are not independent, but our model assumes that the rates are independent.

# Conclusions

After developing a model to understand the effects of components of the kidney transplant system, we propose a list of solutions to the waitlist dilemma:

- Tighten Waitlist Entry Requirement Currently, patients join the waitlist when kidney filtration rate falls below a particular value or when dialysis begins. We recommend that only those whose conditions are at dialysis or worse be allowed to join the waitlist. This change would lead to reduced inflow of waitlist candidates, dramatically improving the system.
- Presumed Consent Currently, those who wish to donate kidneys after death must have explicit documentation on hand when their bodies are retrieved. A new policy would assume that all deceased people consent to organ donation, unless they explicitly expressed otherwise. This change would dramatically increase the inflow of deceased donors.
- Kidney Paired Exchange System Many waitlist candidates have potential donors who cannot donate due to incompatible blood types or HLA. A kidney paired exchange system would match these people in a broad regional pool, identifying pairs whose donors can donate to each other's recipients. This reduces the flow of incoming waitlist candidates.
- Market Kidneys We investigated government-sanctioned kidney purchases and a free market for kidneys. In both cases, the size of the waitlist diminishes with the number of live kidneys sold. However, a government bureaucracy could not handle the number of kidney transplants, so waiting time would increase for some, at least at first. In a free market, biological factors of kidney transplants could lead to discriminatory prices. Thus, marketing of kidneys is discouraged on the basis of parity.

# Acknowledgments

We would like to thank our faculty adviser, Prof. Ramin Takloo-Bighash, for his encouragement, guidance, and support, and the Princeton University Applied Mathematics Department and a Kurtz '58 grant for funding Princeton's first year of participating in the MCM/ICM.

# References

American National Red Cross. 2006. RBC compatibility table. http://chapters.redcross.org/br/northernohio/INFO/bloodtype.html. Accessed 9 Feb 2007.
Dept. of Health and Human Services. 2007. OPTN online database system: Transplant data 1994-2006. http://www.optn.org/latestData/viewDataReports.asp. Accessed 9 Feb 2007.
European Medical Tourist. 2007. Kidney transplant. http://www.europeanmedicaltourist.com/transplant/kidney-transplant.html. Accessed 10 Feb 2007.
Krauss, Clifford.
2006. Canada's private clinics surge as public system falters. New York Times (28 February 2006). http://www.nytimes.com/2006/02/28/international/americas/28canada.html?pagewanted=1&ei=5088&en=25bafd924c66a0ed&ex=1298782800&partner=rssnyt&emc=rss.

Liounis, B., L.P. Roy, J.F. Thompson, J. May, and A.G. Sheil. 1988. The living, related kidney donor: A follow-up study. Medical Journal of Australia 148: 436-437.
Najarian, J.S., B.M. Chavers, L.E. McHugh, and A.J. Matas. 1992. 20 years or more of follow-up of living kidney donors. Lancet 340: 807-810.
National Institute of Diabetes and Digestive and Kidney Diseases. 2007. Kidney and urologic diseases statistics for the United States. http://kidney.niddk.nih.gov/kudiseases/pubs/kustats/index.htm.
National Organ Transplant Act, Pub. L. No. 98-507, 1984 U.S. Code Cong. & Ad. News (98 Stat.) 2339 (codified at 42 U.S.C.A. §274e (West Supp. 1985)).
Norman, Douglas J. 2005. The kidney transplant wait-list: Allocation of patients to a limited supply of organs. Seminars in Dialysis 18 (6): 456-459.
Organ Procurement and Transplantation Network. 2006. Policies and bylaws 3.5: Allocation of deceased kidneys. http://www.optn.org/PoliciesandBylaws2/policies/pdfs/policy_7.pdf.
______ 2007. Transplants by donor type. Data: View data reports. http://www.optn.org/latestData/viewDataReports.asp.
Physicians propose kidney-donor exchange. 1997. University of Chicago Chronicle 16 (19) (12 June 1997). http://chronicle.uchicago.edu/970612/kidney.shtml.
Ross, Lainie Friedman, David T. Rubin, Mark Siegler, Michelle A. Josephson, J. Richard Thistlethwaite, Jr., and E. Steve Woodle. 1997. Ethics of a paired-kidney-exchange program. New England Journal of Medicine 336 (24): 1752-1755.
Ross, Sheldon M. 2002. Introduction to Probability Models. 8th ed. New York: Academic Press.
University of Maryland Medical Center. 2007. Overview of the High PRA Rescue Protocol. http://www.umm.edu/transplant/kidney/highpra.html.
Accessed 9 Feb 2007.
Wikipedia. 2007. Kidney transplantation. http://en.wikipedia.org/wiki/Kidney_transplantation. Accessed 9 Feb 2007.

# Judges' Commentary: The Outstanding Kidney Exchange Papers

Chris Arney
Division of Mathematical Sciences
Army Research Office
Research Triangle Park, NC
david.arney1@us.army.mil

Kathleen Crowley
Psychology Dept.
The College of Saint Rose
Albany, NY 12203

# Introduction

Eight judges prepared for this year's ICM judging by studying the Kidney Exchange Problem and calibrating their criteria. Once prepared for their task, they had the opportunity to read and compare an excellent set of creative and interesting papers. It is likely that many students and some advisors would find it surprising that the judges face challenges as complex as those tackled by the ICM contestants: The judges must determine how best to evaluate, grade, and score a myriad of papers as fairly and accurately as possible over a very short period of time. Their goal is to ensure that awards are given to the best teams and that the papers published in The UMAP Journal represent the finest student work produced in the contest.

The papers were assessed in three key areas:

- effective use of current data and policies relevant to the U.S. organ transplant network to reveal supply and distribution bottlenecks and to identify means for producing improvements in the efficiency and fairness of the organ-donation network;
- application of an appropriate modeling process and appropriate use of the model to perform insightful and critical analysis; and
- integration of mathematics, science, ethical principles, and common sense to render appropriate recommendations to the decision-makers.

Overall, the judges were impressed by the considerable efforts expended by the contestants and by the prodigious and sophisticated work produced by many of the teams.
This year's problem was particularly challenging because it involved multiple tasks and required the teams to take on several diverse perspectives, ranging from mathematics and science to ethics and social policy.

# The Problem

The issues raised in the Kidney Exchange Problem are real, and the problem is timely. Organ transplant problems and opportunities are often discussed in the news, and there is an ongoing stream of new proposals seeking to improve organ transplant management. Many of these proposals are being debated by both the U.S. Congress and members of the executive branch of the government. A wide array of professional researchers, staffers, and analysts are engaged in work on many of the same questions, challenges, and procedures that the teams had to address over the contest weekend. Some of these professional working groups are also engaged in advocacy—writing letters to members of Congress or to executives in the Department of Health and Human Services, processes mirrored by two of the required tasks faced by the contestants.

The data needed for the problem were readily available on the Internet. And while there are a number of articles and even fully developed models in the literature, the problem required more than mere research: Real, robust, creative modeling and critical interdisciplinary thinking were needed to successfully complete the required analyses. Several disciplines in addition to mathematical modeling had to be included for full consideration of this problem; ethical, medical, sociological, political, cultural, and psychological perspectives were all essential components in the development of complete and creative solutions. The dilemmas surrounding the issues of organ donation are undeniably and genuinely interdisciplinary in nature, and they have global relevance.
Useful solutions to these challenges clearly require robust modeling if the current inefficiencies of present-day organ-donation systems are to be reduced or eliminated. + +# Judging Criteria + +In the end, the judges selected two papers to be designated as Outstanding. Both of these efforts, and another special paper, will be considered later in this article. The judges organized the evaluative rubric into five categories (listed below), and we use this framework to summarize the judges' perspectives and determinations. + +- Quality of the Executive Summary: Most teams demonstrated that they knew that it was important to provide a good executive summary. Moderately successful efforts only summarized the requirements and stated their recommendations. The more successful efforts also provided a logical link between the research issues, the models, and the recommendations. +- Scientific Knowledge and Application: Many papers demonstrated their significant knowledge of organ transplant science by providing well-written reports of the technical and social issues inherent to the transplant process and the organ distribution network. Many also provided excellent summaries of the widely differing issues regarding the role of government, public health policy, ethics, psychology, international culture, and social issues involved in the procurement and distribution of organs. It was clear that this problem was both very complex and demanding. Although most papers addressed the basic issues of the transplant systems, some papers addressed the complexities of this capacious issue better than others, through use of creative models and insightful analyses. The least successful papers were those that did not go beyond reporting information from the Internet or journal articles. Papers in this category sometimes included unnecessary or irrelevant scientific information, and they sometimes failed to fully integrate and/or document the information they presented. 
These kinds of papers did not fall into the highest-level categories. Some of the moderately successful papers were rather disjointed and apparently had the science section written by one part of the team and the modeling and analysis sections written by another. Although these sections may have been quite good when considered in isolation, they typically were not well-integrated and therefore did not present a strong synthesis of key elements. The most successful papers used the scientific knowledge as a basis for their models and their subsequent analysis as the basis for their policy recommendations. Almost every team was able to cite an enormous amount of information from the open literature and clearly had used Internet sources fully and effectively. But the stronger teams not only gathered an abundance of information but also examined international procedures and ideas to suggest potential improvements to the U.S. organ donation network, which sometimes stumbles along under current public policies and social constraints. In other words, these papers demonstrated that the authors had an understanding that network functionality was critically important in the design of an outstanding solution to this problem.
- Modeling: The most effective papers drew their assumptions explicitly from the scientific foundation that they had developed to build their models. As one judge noted, some of the models appeared to be hammers looking for nails—making some models so complex that the entire report was devoted to developing the model without devoting time to any thoughtful analysis or meaningful recommendations. Some teams demonstrated their abilities to make appropriate model refinements in the follow-on tasks that addressed the more interdisciplinary issues of ethics and politics.
It was important that the modeling process was well formulated and robust; but unfortunately, some papers had wonderful models that offered little explanation of how the model functioned or provided little use of the results in the analysis. + +- Analysis/Reflection: Successful papers discussed the ways in which their models were able to address the issues and tasks involved in improving the current organ donation system. The most effective modelers verified the sensitivity and robustness of their models. This problem asked many questions, and even a long weekend does not provide much time to perform all the tasks for this very complex and interdisciplinary model. The most effective papers, however, found the time to recommend new policies based on their analyses, and the policies that they recommended were fully justified by the model analysis they had performed. The weaker papers did not address the questions with effective mathematical modeling or simulation, but relied instead solely on Internet research. The papers that addressed the policy issues well seemed to show that the U.S. was not doing enough to increase the population of donors and provided plausible solutions that were verified or used by their models. The problem tasks led many teams to talk about ethical concerns; but to the dismay of the judges, many other teams did not include consideration of ethical concerns in either their models or in their analyses of the issues. + +- Communication: The quality of writing in the reports this year seemed to have slipped a bit compared to papers in previous competitions. This decline in quality may have been a consequence of the unusually high number of tasks and requirements for this year's problem compared to those in previous years. Nevertheless, this year's most effective papers demonstrated clear organization throughout the modeling process by establishing logical connections between sections of the report. 
Good communicators also understood that well-selected graphics were a highly effective means for making their points. As to the length of the papers, succinct with adequate explanation was preferred. Long, rambling papers were judged to be less effective because they created the frustration of requiring the judges to read unnecessary details and irrelevancies. Some papers hinted of good analysis but lacked sufficient clarity in their presentation. These teams apparently reasoned better than they communicated and consequently their important ideas and good arguments were not made readily apparent to the readers. The strongest papers presented the problem, discussed the data, explained their analyses, and fully developed their mathematical models. The biggest difference between the stronger and weaker papers was whether or not they were able to inform the reader about what they did, and more importantly, how they did it. Clear presentations allowed the judge to comprehend the logic and reasoning of the successful modelers. The top papers artfully blended the scientific literature with humanistic concerns, strong argument, and elegant mathematics. + +# Analysis of the Outstanding Papers + +The papers judged to be Outstanding shared several common elements: robust modeling, insightful analysis, effective communication, and a touch of creativity. What truly distinguished these papers from the less successful, however, was their passion for the problem. They demonstrated a desire not only to solve the tasks at hand, but also to improve the overall health and wellbeing of real-world patients who suffer from organ disease. It is not surprising that given the complexity of the problem, both of the Outstanding papers still contained some weaknesses. 
"Optimizing the Effectiveness of Organ Allocation" from Duke University "quantitatively analyzes kidney allocation, possible improvements to live donation, and many other strategies to improve the current process" (Executive Summary). Their model showed how list-based pairing could dramatically increase live donations, and they creatively addressed both ethical and political considerations. This team's letters to Congress and to the Director of Health Services were crisp and clear, with solid, well-supported recommendations.

The Princeton paper entitled "Complete Analysis of Kidney Transplant System using Markov Process and Computations Models" made excellent use of the team's model to investigate the effects of policy changes. The authors recommended presumed consent and paired-kidney exchange. They also modeled the psychology of donating through the use of a "consumer decision theory model to explain the decisions that potential donors face when deciding whether or not to donate a kidney" (Executive Summary). Their analysis of the marketing of kidneys led them to reject the idea of making organs available on the free market because of their astute analysis of the serious ethical concerns such a system raises, even though it could potentially supply many kidneys to the organ donation network.

One other paper, while not designated Outstanding, also deserves mention because of the high quality of its creativity and presentation style. "The Giving Tree," submitted by a team from Berkshire Community College, provided a model that was "delicately designed so that the best ethical practices compliment the most efficient strategy, all while remaining economically feasible" (Executive Summary). The team proposed the very creative concept of "mandate choice," in which potential donors are required to declare their donation preferences when they seek driver's licenses. This team also included incentive strategies to educate citizens about the benefits of organ donation.
All in all, this was a highly notable paper and one of the first that we have received so favorably from a small two-year institution.

# Plagiarism

Unfortunately, this year's contest was marred by the disqualification of strong papers because of improper referencing, over-reliance on the published work of others, and a failure to appropriately and fully acknowledge the use of sources. The contest rules state that failure to credit a source will result in disqualification from the competition, and we were disappointed that this had to be done at the conclusion of this year's competition.

# The Joy of Interdisciplinary Modeling

Talk of modeling, science, mathematics, psychology, communication, summaries, transplants, HLA, PRA, TTCC, OPN, references, algorithms, and computer programs echoed in the air; and then intense discussion of scores, rubrics, and criteria ensued as the final decisions were made. All this happened as the eight final judges came together to evaluate the finalist papers. As judges and interdisciplinary problem-solvers, we were most happy when we found papers full of excellent modeling, detailed mathematics, scientific facts, deep analysis, informative graphics, interesting solutions, successful collaboration, and especially strong evidence of student passion. This approach made mathematics all it should be: exciting, relevant, and potentially transformative.

# Conclusion

The judges congratulate all the members of the successful teams. We saw evidence that all teams recognized and struggled with the challenges of a real, large-scale, interdisciplinary problem, and we hope that all of the ICM participants learned from their experiences as a part of this process. We believe that participation in the ICM contributes to the development of contestants in their quest to become sophisticated and effective interdisciplinary modelers.
The effort and creativity demonstrated by almost every team was inspiring, and many papers served to reveal clearly the power of interdisciplinary problem-solving. We look forward to both continued improvement in the quality of the contest reports and increasing interest and participation in the annual ICM.

# Recommendations for Future Teams (with help from the triage and final judges)

- Spend as much time as you can on analysis of the model, not just its development. Do not just report the model output and data. If possible, summarize your analyses clearly in tabular or graphical form.
- Clear communication makes it easier to identify outstanding work. Check your equations to avoid typographical errors resulting in a relationship that is inconsistent with the written description. State your assumptions, limitations, and strengths, and be sure to integrate fully and appropriately your research sources with your model. Do not allow the background research to stand alone, unrelated to the model that you propose. The science that you report should be relevant to the model, and the model should reflect the science.
- Evaluate your results and discuss their implications. Explain how your results compare to similar work in the literature.
- Keep in mind that simple explanations suffice. If you are doing something super-elegant or ultra-complex, do not lose the reader in super-ultra-elegant complexity. Overly complicated models are not good ones. A significant part of the art of modeling is choosing the most important factors and using appropriate science and mathematics to simplify the problem.
- Long papers are not necessarily good papers. If you cannot describe your models clearly and succinctly, then they probably are not good models.
- The final important reminder is that any material that comes from other sources, even if paraphrased, must be carefully and completely documented; it must be placed in quotation marks if taken verbatim from another source.

# About the Authors

Chris Arney graduated from West Point and became an intelligence officer. His studies resumed at Rensselaer Polytechnic Institute with an M.S. (computer science) and a Ph.D. (mathematics). He spent most of his military career as a mathematics professor at West Point, before becoming Dean of the School of Mathematics and Sciences and Interim Vice President for Academic Affairs at the College of Saint Rose in Albany, NY. Chris has authored 20 books, written more than 100 technical articles, and given more than 200 presentations and 30 faculty development workshops. His technical interests include mathematical modeling, cooperative systems, and the history of mathematics and science; his teaching interests include using technology and interdisciplinary problems to improve undergraduate teaching and curricula; his hobbies include reading, mowing his lawn, and playing with his two Labrador retrievers. Chris is Director of the Mathematical Sciences Division of the Army Research Office, where he researches cooperative systems, particularly in information networks, pursuit-evasion modeling, and robotics. He is codirector of COMAP's Interdisciplinary Contest in Modeling and the editor for the Journal's ILAP (Interdisciplinary Lively Applications Project) Modules.

![](images/b20d2c89beff5b5b39124e676fead7fa9c844f868c4c36a1675a42dfadc41cc5.jpg)

Kathleen Crowley is a Professor of Psychology at the College of Saint Rose. Her Ph.D. in educational psychology was earned at the State University of New York at Albany.
Her teaching interests include parenting, child development, gender development, and the history of psychology; and she has taught a variety of courses at both Saint Rose and the University of Hartford in these subjects. Kathleen's research interests involve gender development, the psychology of women, and the teaching of psychology. Recently, she has written on psychology as it relates to postmodern philosophy. Always interested in using the latest technologies, Dr. Crowley has taught courses online and often uses service learning to enhance her courses. In addition to teaching, she has served as acting dean of the School of Mathematics and Sciences at Saint Rose for several years and as chair and member of many faculty committees. Enjoying travel, theatre, and film, she is the mother of two boys and active in her campus community. Recently, she took trips to London and California and was on an American Psychological Association delegation to Vietnam, Cambodia, and Hong Kong. Her latest hobbies include reading, singing, dancing, and soaking in her hot tub.

![](images/4a7e86d381241d9e6f08d8b2dd9a5667259b1304fbcbbb979c67adc28a6c2c56.jpg)

# Practitioner's Commentary: The Outstanding Kidney Exchange Papers

Sommer Gentry

Dept. of Mathematics

United States Naval Academy

Annapolis, MD 21402

gentry@usna.edu

# Mathematical Models Can Influence Public Policy on the Organ Shortage

The topic of kidney allocation and the shortage of kidneys for transplantation was a timely choice for the Interdisciplinary Contest in Modeling. In the past few years, public policy on organ donation and allocation has been changing rapidly, often in response to conclusions drawn from increasingly sophisticated mathematical models. Already in 2007, numerical projections of the significant impact of kidney paired donation in this country have prompted federal legislative and judicial action, which was necessary to clarify the indeterminate legality of paired donation.
Bills passed in the House and Senate, and a Department of Justice memorandum, state that paired donation does not violate the National Organ Transplant Act's prohibition on giving organs for valuable consideration.

The United Network for Organ Sharing (UNOS) is charged with oversight of all transplantation in the United States, including the allocation of organs from deceased donors to recipients. UNOS provides voluminous and easily accessible data on transplants at its Website http://www.unos.org. Within the past five years, UNOS has completely redrawn allocation procedures for liver and lung transplants. Recently, after statistical evidence showed that recipients with a liver allocation score of 15 or lower had shorter expected lifespans if they received a transplant than if they did not, liver allocation policy was revised to ensure that liver transplants went only to recipients who receive a survival benefit. The Department of Health and Human Services has also urged the transplant community to eliminate geographical disparities in the allocation of organs.

Organ allocation makes an appealing target for interdisciplinary research, and not just because of the very real influence that mathematical modeling can have in shaping public policy. An application that can improve health care using mathematics also has tremendous motivational value for students, and it makes a compelling advertisement of the contributions of operations research and interdisciplinary approaches to the lay public.

# The Outstanding Papers

The problem statement asks students to address a range of questions about public policy and individual decisions in organ transplantation. A strong paper, then, should clearly state what changes to national policy are being advocated, and it should use well-documented and correct analytical models to substantiate its conclusions.

Both Outstanding papers begin with dynamical models of the deceased-donor kidney waiting list.
There are more than 70,000 people on the waiting list to receive a kidney transplant from a deceased donor, and every year more people are added to the list than are removed. The team from Princeton University used a Markov-chain birth-and-death process to represent the size of the queue. The team used actual UNOS data to calculate fixed probabilities of each transition event: a new waitlist arrival increments the size of the queue, while transplantation, recovery of kidney function, or death all decrement the size of the queue. They validated their model's predictions against real data for net waiting-list additions in 2006. Importantly, the team used this model to evaluate two policies numerically: presumed consent for organ donation, and restricting the population that could join the waiting list. The team estimated annual list growth under perturbed versions of their model with increased transplantation rates or decreased waitlist arrival rates, and focused on zero waiting-list growth as the outcome of interest.

The Duke University team created a detailed discrete-event simulation in C++ to track the size of the queue as these events unfolded. By making the likelihood of death in each time period depend on the length of time that each individual had been waiting, this team captured the likely increase in deaths on the waiting list as the list grows longer. This latter property suggests that the size of the queue will stabilize when deaths on the waiting list occur at a rate that balances new additions, but the team did not explore this possibility. Because this detailed model of the waiting list contains recipient-specific information, this group was able to implement the UNOS point system for allocating kidneys and compare it to the Eurotransplant point system.
Unfortunately, the results of the comparison were hard to interpret, because some figures that the team provided lacked axis labels, had cryptic captions, and were not cited in the text.

This team ran into a familiar dilemma for operations researchers working in organ transplantation, namely, that the objective of organ allocation is ill-specified by the transplant community and has myriad reasonable formulations. In particular, should allocation try to maximize the expected number of life-years gained from transplants? If so, then African-Americans and people older than 40 will be effectively denied any opportunity for transplantation, because these groups have lower expected lifetimes after transplant. This issue is being debated because the Kidney Allocation Review Subcommittee of UNOS proposes to increase the weight given to net lifetime survival benefit in allocation decisions. Historically, fairness to disadvantaged subgroups has been included in UNOS kidney allocation objectives. As another example, because kidney recipients can wait indefinitely on dialysis before receiving their transplants, but no life-prolonging therapies are available for liver or heart recipients, liver and heart allocation favors severely ill patients who are most at risk of death without a transplant rather than those who will receive the largest survival benefit after transplant.

Some members of the transplant community view deceased-donor organs as a local resource because local residents are the donors and local professionals counsel families about donation. These stakeholders feel that allocating organs recovered in one geographical area to recipients in a different area is unfair. However, the "Final Rule" legislation of 1999 requires that UNOS allocate organs in a way that minimizes the geographically-dependent variance in waiting times, outcomes, or other performance measures [Organ Procurement and Transplantation Network 1999].
This goal has not yet been achieved, because waiting times for deceased-donor kidneys can vary by a factor of three or four between regions. The disparity is such that some hopeful recipients register in multiple regions to decrease waiting time.

Geographical aspects of transplantation were addressed well in both Outstanding papers. The models explored a tradeoff between

- achieving a high HLA (human leukocyte antigen) match, to ensure better outcomes for recipients; and
- decreasing transport distances, to reduce in-transit cold ischemic time and the associated risk of injury to the kidney.

Both teams made a crucial observation that the total cold ischemic time could be reduced even for kidneys shipped long distances, simply by speeding up the pretransit allocation process. Thus, the teams recommended that time to placement for each organ should be reduced if possible. Stefanos Zenios has published an excellent analysis of a system that could effectively reduce placement time by offering lower-quality deceased-donor organs only to those recipients who are likely to accept them, which could mean using broader geographic sharing.

Finally, both teams discussed in detail the practical aspects of transplantation that they chose to simplify in each model, and in what way the messy details might alter their results and conclusions. For instance, the Princeton University team reported that its model of donor decision making does not account for factors such as the recipient's blood type and geographic location that would affect the person's estimated lifetime if the person remains on the deceased-donor kidney waiting list. The Duke University team correctly pointed out a lack of consensus in the literature about whether HLA matching affects survival of the transplanted organ, a question that has caused difficulties in my own research.
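Waiting-list dynamics of the kind both teams modeled are easy to prototype. The sketch below simulates queue size as a continuous-time birth-and-death process, in the spirit of the Markov-chain approach described above; the function name is hypothetical and every rate passed in is a placeholder, not one of the UNOS-derived values the teams estimated.

```python
import random

def simulate_waitlist(years, arrivals_per_year, transplant_rate,
                      death_rate, recovery_rate, q0, seed=0):
    """Continuous-time birth-death simulation of waiting-list size q.

    arrivals_per_year is the birth rate; the three removal rates are
    per waiting candidate per year, so total removal intensity scales
    with the queue length. All inputs are hypothetical placeholders.
    """
    rng = random.Random(seed)
    t, q = 0.0, q0
    while True:
        birth = arrivals_per_year
        removal = (transplant_rate + death_rate + recovery_rate) * q
        total = birth + removal
        if total <= 0:                  # no further events possible
            break
        t += rng.expovariate(total)     # time to next event
        if t > years:                   # past the simulation horizon
            break
        if rng.random() * total < birth:
            q += 1                      # new waitlist arrival
        else:
            q -= 1                      # transplant, death, or recovery
    return q
```

When arrivals outpace removals, the simulated queue grows without bound, mirroring the real list's year-over-year growth; increasing the transplantation rate or restricting arrivals shifts the balance, which is the kind of policy perturbation the Princeton team examined.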
+ +# Optimization and Kidney Paired Donation + +The problem statement set out an array of different tasks for the teams, so the teams did well to address some topics in more detail than other topics. As it happened, neither of the Outstanding papers proposed optimization models for kidney paired donation. + +About one-third of recipients who have a loved one willing to be a live donor will find that the donor is incompatible. Without paired donation, these available donors do not give an organ and the recipient is added to the waiting list. In kidney paired donation, a match is made between two such incompatible pairs, so that the donor of the first pair can give to the recipient of the second pair, and vice versa. More generally, paired donation can include an exchange of kidneys among more than two incompatible pairs. There have been more living kidney donors than deceased kidney donors in recent years, and so kidney paired donation represents one of the most promising avenues for increasing the number of kidneys available. + +Embedded within paired donation is a fascinating combinatorial optimization problem with a rich history: Among a set of incompatible pairs, how can the largest number of transplants overall be arranged? This is a graph-theory problem known as maximum matching, and it was first solved by Jack Edmonds about 50 years ago. A paired donation graph has a vertex to represent each recipient and his incompatible donor, with edges connecting two vertices iff they are mutually compatible. For example, in Figure 1 many pairs could exchange with pair E, but the only pair which could exchange with pair C is pair A. + +![](images/c1d8f064cc970ff8c1cb699cd20c99d10874ea0570553157fd9e5f0d2bf3d02c.jpg) +Figure 1. A kidney paired donation graph. Each vertex represents a recipient and the recipient's incompatible donor, and each edge indicates a mutually compatible exchange. 
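Maximum matching on a toy graph like Figure 1 can be checked by brute force. The edge set below is a hypothetical reconstruction consistent with the exchanges described for Figure 1 (the full graph likely has more edges incident on pair E); a real paired-donation pool would use Edmonds' blossom algorithm rather than exhaustive search.

```python
from itertools import combinations

# Hypothetical edge set: each edge is a mutually compatible exchange
# between two incompatible (recipient, donor) pairs.
EDGES = [("A", "B"), ("A", "C"), ("B", "E"), ("E", "F")]

def is_matching(edge_subset):
    """A matching uses each vertex (incompatible pair) at most once."""
    used = [v for e in edge_subset for v in e]
    return len(used) == len(set(used))

def maximum_matching(edges):
    """Brute force over edge subsets, largest first.

    Fine for a toy graph; Edmonds' algorithm handles realistic sizes.
    """
    for r in range(len(edges), 0, -1):
        for subset in combinations(edges, r):
            if is_matching(subset):
                return list(subset)
    return []

best = maximum_matching(EDGES)
print(best, "->", 2 * len(best), "transplants")
```

For this edge set the maximum matching has two edges, so four transplants are possible. In less favorable graphs, matching pairs as soon as they arrive can strand pairs, which is why letting arrivals accumulate before matching matters.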
+ +A matching is any subset of edges which does not contain any two edges incident on a single vertex. Some of the matchings on the graph of Figure 1 are $\{\mathrm{AC},\mathrm{BE}\} ,\{\mathrm{AB,EF}\} ,$ and $\{\mathrm{AB}\}$ . However, $\{\mathrm{AB,AC}\}$ is not a matching. To use maximum matching algorithms in practice, a paired donation system must allow a number of pairs to arrive before deciding which transplants should + +occur. If instead paired donations were performed as soon as possible, and if pair A and pair B arrived first, then only two transplants would occur for this group instead of the four that would otherwise be possible. Kim and Doyle [2006; 2007] have designed two interactive puzzles that allow students to solve tricky maximum matching problems and test out their favorite heuristics for the problem. + +The team from Princeton University reported on a simulation of paired donation, but they did not comment on their matching algorithm. Their paper alluded to running a program every time a new pair arrives, which suggests that their method may not achieve the maximum number of transplants. Their results showed about $90\%$ of incompatible pairs finding another pair with a complementary incompatibility. The best available results have match rates lower than $50\%$ , but there was not sufficient detail provided to determine which modeling assumptions needed revision. Possibly the simulated blood types were not appropriately skewed towards the blood types likely to wind up in incompatible pairs. + +List paired donation is a somewhat different approach whereby the living donor gives to a person on the deceased donor waiting list in return for moving the donor's intended recipient to the top of the waiting list. The team from Duke University made a strong claim that list paired donation alone might stabilize the queue size. 
However, deceased donor kidneys survive only half as long on average as live donor kidneys, so living paired donation is always preferable to list paired donation. Our own simulations show that list paired donation would not be an important contributor to transplantation rates if living paired donation were widely available [Gentry et al. 2005].

Mathematical simulations have been indispensable in demonstrating the impact of paired donation, because UNOS does not collect any data about the incompatible donors who come forward with recipients. The missing data can be reconstructed from known statistics using discrete event simulation. An interdisciplinary approach can be very successful in influencing public debate on these issues by offering detailed projections of the impact of new policies, as exemplified by some of the teams active in paired donation research. Extensive work in this arena has come from the team of Alvin Roth, Tayfun Sönmez, and Utku Ünver in cooperation with transplant surgeon Francis Delmonico; and I have enjoyed a very productive collaboration with my husband Dorry Segev, a transplant surgeon at Johns Hopkins.

# References

Gentry, S.E., D.L. Segev, and R.A. Montgomery. 2005. A comparison of populations served by kidney paired donation and list paired donation. American Journal of Transplantation 5 (8): 1914-1921.

Kim, Scott, and Larry Doyle. 2006. Stanford Engineering Puzzle December 2006. http://soe.stanford.edu/alumni/puzzle_Dec2006.html.

______. 2007. Stanford Engineering Puzzle January 2007. http://soe.stanford.edu/alumni/puzzle_Jan2007.html.

Organ Procurement and Transplantation Network. 1999. Final rule. http://www.optn.org/policiesAndBylaws/final_rule.asp.

Ross, L.F., S. Zenios, and J.R. Thistlethwaite. 2006. Shared decision making in deceased-donor transplantation. The Lancet 368 (9532) (22 July 2006): 333-337.

Zenios, S.A., G.M. Chertow, and L.M. Wein. 2000.
Dynamic allocation of kidneys to candidates on the transplant waiting list. *Operations Research* 48 (4): 549-569.

# About the Author

Sommer Gentry studied applied mathematics and operations research at Stanford University and the Massachusetts Institute of Technology, receiving her Ph.D. in 2005. She was a Department of Energy Computational Science Graduate Fellow from 2001 to 2005. She is an Assistant Professor of Mathematics at the United States Naval Academy and a Research Associate in the Division of Transplantation at Johns Hopkins. Her research on maximizing paired donation to help ease the organ shortage has been profiled in Time and Reader's Digest and featured on the Discovery Channel. She serves as an adviser to both the United States and Canada in their efforts to create national paired donation registries.

# Author's Commentary: The Outstanding Kidney Exchange Papers

Paul J. Campbell

Mathematics and Computer Science

Beloit College

Beloit, WI 53511

campbell@beloit.edu

We all use math every day ...

[beginning sequence of episodes of the TV series Numb3rs]

# Introduction

The 2007 ICM Kidney Exchange Problem arose from discussions in my one-semester-hour seminar in Spring 2006 on the mathematics behind the TV series Numb3rs. The specific inspirational episode was "Harvest," which deals with bringing poor Third-World people to the U.S. for black-market sale of their kidneys ["Lady Shelley" 2006].

In the "Harvest" episode, one such donor dies after the operation and another potential donor is missing. The mathematician star of the series cites "optimization theory developed at Johns Hopkins University to determine the best matches between organ donors and recipients." He and his colleagues use the blood type and HLA-compatibility of the sister of the missing woman to try to identify potential recipients, despite (according to them) there being only a one-fourth chance that the sister matches the missing woman.
The team first checks the database of patients registered to receive a kidney, draws a blank, and then realizes that they are probably looking for a patient "who cannot obtain an organ in the normal way—so they wouldn't be on any official list." (We find out later that the black-marketeer patient has a "blood disorder that disqualifies him from getting a transplant.") Fortunately, the FBI finds a "potential list of customers" (with blood data) on a suspect's computer; the list has a unique "positive match," and the missing girl is rescued just as she is about to be operated on.

The UMAP Journal 28 (2) (2007) 173-184. ©Copyright 2007 by COMAP, Inc. All rights reserved. Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice. Abstracting with credit is permitted, but copyrights for components of this work owned by others than COMAP must be honored. To copy otherwise, to republish, to post on servers, or to redistribute to lists requires prior permission from COMAP.

The remark about Johns Hopkins reminded me of an earlier mention of that research in SIAM News [Cipra 2004]. Searching on the Web brought me quickly to the Web pages of Sommer Gentry that introduce her research on the mathematics behind optimizing kidney paired donation [2005].

Here I focus on algorithms for matching donors to kidneys, with particular attention to the work of Alvin E. Roth of the Harvard Business School and his associates.

# The Problem

The problem involved a variety of tasks that spanned many interdisciplinary aspects—it is a very complex problem—and teams needed to be aware of the contest guideline that "Partial solutions are acceptable." (In fact, virtually all solutions were by nature partial.)
Because of the number of tasks and their difficulty, I did not expect so many teams (273 of the 1222 completing the ICM/MCM) to tackle this problem. But then I also did not expect a $26\%$ increase in participation in the contests, either. The proportion of teams selecting the ICM problem was about the same as in 2006. + +# Matching Kidneys to Patients + +# Kidneys from Cadavers + +Kidneys become available as people die and must be transplanted very shortly after death. Thus, the problem is dynamic, and a priority scheme is needed to determine the recipient of such a cadaver kidney. Such a scheme could be provided by regulation (e.g., the Organ Procurement and Transplantation Network (OPTN) in the U.S.) or by compensation (e.g., a market for kidneys). + +# Living Donors + +Suppose that we have a group of patients in need of kidneys and a group of altruistic living people each willing to donate a kidney to any patient. + +We can model the situation as a bipartite graph, with patients in one part of the graph and donors in the other. An edge joins each donor to each patient for whom the kidney would be suitable. The graph has $n$ vertices (patients plus donors) and $m$ edges (corresponding to feasible donations). A greedy algorithm, using the concepts of alternating path and augmenting path, finds a matching with the most matches (called a maximum cardinality matching) + +in $\mathcal{O}(mn)$ time. Saip and Lucchesi [1993, 5] note other sequential algorithms offering different complexity, as well as parallel algorithms. + +This matching "saves" the most patients possible but relies on the altruism of people to become donors. 
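The alternating/augmenting-path idea mentioned above can be sketched compactly: each pass searches for an alternating path from an unplaced donor to an unmatched patient and flips it, growing the matching by one. The donor names and compatibility data below are illustrative only, not drawn from any paper.

```python
def max_bipartite_matching(suitable):
    """Maximum-cardinality bipartite matching via augmenting paths.
    suitable: dict mapping each donor to the patients that donor's
    kidney would suit. Returns patient -> donor. One O(m) augmenting
    search per donor gives the O(mn) bound cited in the text."""
    matched_to = {}                      # patient -> donor currently assigned

    def augment(donor, seen):
        # Look for an alternating path ending at an unmatched patient,
        # displacing previously placed donors along the way.
        for p in suitable[donor]:
            if p in seen:
                continue
            seen.add(p)
            if p not in matched_to or augment(matched_to[p], seen):
                matched_to[p] = donor
                return True
        return False

    for d in suitable:                   # one augmenting search per donor
        augment(d, set())
    return matched_to

# Toy altruistic-donor instance (illustrative data).
suitable = {"d1": ["p1", "p2"], "d2": ["p1"], "d3": ["p3"]}
matching = max_bipartite_matching(suitable)
print(matching)
```

In this toy instance a purely greedy assignment could strand d2, but the augmenting search displaces d1 from p1 to p2, so all three patients are "saved."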
+ +# Quality of Match + +A refinement would be to assess the "value" of each match, in terms of a single number incorporating + +- medical quality (match on ABO blood type, HLA markers, and Panel Reactive Antibody (PRA)), +- individual desirability (among other aspects, preferences for different quality kidneys and travel distance to the operation), and +- social desirability, perhaps in terms of QALY—quality-adjusted life years [Gold et al. 1996; Phillips and Thompson 2001] and a measure of equity (see Zenios et al. [2000]). + +The value of the match can be incorporated into the model as a weight for the edge, and the graph-theoretic problem generalizes to finding a matching that + +- maximizes the sum of the edge weights; or, alternatively, +- among all maximum cardinality matchings, maximizes the sum of the edge weights. + +The reason to distinguish these two situations is that maximizing the sum of edge weights might result in fewer than the maximum possible number of matches: We might get better matches but "save" fewer patients. + +The first kind of matching can be realized through the Hungarian algorithm of Kuhn (1955) in $\mathcal{O}(mn^2)$ time, and the second by an algorithm of Edmonds and Karp (1972) in $\mathcal{O}(mn\log n)$ time. Again, Saip and Lucchesi [1993, 5] note other sequential algorithms offering different complexity, as well as parallel algorithms. + +Like a priority scheme for allocation of cadaver kidneys, incorporation of any measure of social desirability of a match is a political question. Mathematicians can only highlight the tradeoffs for various schemes. + +# Dynamism + +Both the cadaver situation and the living donor situation are dynamic, in that the optimal matching may change (perhaps drastically) with entry or withdrawal of a donor or patient. 
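The distinction between the two weighted objectives above matters even on tiny examples. In the hypothetical three-edge graph below (weights assumed for illustration), maximizing total match value alone serves one pair of participants and leaves the other two unmatched, while maximizing value among maximum-cardinality matchings serves everyone.

```python
from itertools import combinations

# Hypothetical match values on four participants a, b, c, d:
# a-b and c-d are mediocre matches, b-c an excellent one.
weights = {("a", "b"): 1, ("b", "c"): 5, ("c", "d"): 1}

def matchings(edges):
    """Enumerate every matching (set of vertex-disjoint edges)."""
    for k in range(len(edges) + 1):
        for sub in combinations(edges, k):
            endpoints = [v for e in sub for v in e]
            if len(endpoints) == len(set(endpoints)):
                yield sub

all_matchings = list(matchings(weights))
by_weight = max(all_matchings, key=lambda m: sum(weights[e] for e in m))
by_size_then_weight = max(all_matchings,
                          key=lambda m: (len(m), sum(weights[e] for e in m)))

print(by_weight)            # just b-c: total value 5, two people unmatched
print(by_size_then_weight)  # a-b and c-d: value only 2, but everyone matched
```

Brute-force enumeration stands in here for the polynomial-time algorithms of Kuhn and of Edmonds and Karp cited above; the tradeoff it exposes is the same.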
# Kidney Paired Donation (KPD)

In kidney paired donation, a patient with a willing but incompatible donor is matched with another patient/donor pair such that the donor of each pair is compatible with the recipient of the other pair.

# Matching Model

A bipartite graph is not an appropriate model, since the matching now demands that if donor/patient pair $i$ donates to pair $j$, then $j$ must donate to $i$. A general graph is called for. Gentry and Segev [2005] display such a graph with the vertices in a circle and edges as chords, which can be weighted.

The algorithms for finding optimal matchings in a bipartite graph require some tweaking to handle a general graph. Edmonds's algorithm (1965) finds a maximum cardinality matching in $\mathcal{O}(mn^2)$ time, while that of Blum (1990) finds one in $\mathcal{O}(mn^{1/2})$ time. An algorithm of Galil, Micali, and Gabow solves the edge-weighted problem in $\mathcal{O}(mn\log n)$ time [Saip and Lucchesi 1993, 14]. In the "Math Behind Numb3rs" seminar, we worked through the Edmonds algorithm and its proof but found that hand implementation of it on even small graphs—necessary for really understanding it—was unwieldy.

The idea for kidney paired donation originated with Rapaport [1986] and was first implemented in Korea [Park et al. 1999]. In the U.S., about 150 such transplants have been performed.

# How Much Difference Could KPD Make?

How much difference could kidney paired donation make? In June 2007, there were 72,000 individuals awaiting kidneys, at 270 centers, an average of about 270 per center [Organ Procurement . . . 2007]. Of those 270, according to the team from Princeton University, $10\% - 15\%$, or 27 to 40, have a willing but incompatible donor. A simulation by Roth et al.
[2005a] shows that in a population of 25 donor-patient pairs (where the pair may or may not be compatible), on average 12 patients receive a kidney from their own associated compatible donor, but an additional 4 could receive a kidney with paired donation. In a larger population of 100 donor-patient pairs, the corresponding numbers are 47 from their own donor and an additional 23 from paired donation. These data suggest that one-third to one-half more live kidney transplants could take place with widespread implementation of kidney paired donation. + +Well, how many is that and how much difference would it make? In 2006, there were 17,100 transplants in all, of which 6,435 were live transplants; one-third to one-half more of the latter would be 2,100 to 3,200. That would not be enough to turn the tide: Over the course of 2006, the waiting list for a kidney grew by 6,100 (to 72,200), despite 4,200 leaving the list by dying [Organ Procurement ... 2007]. + +# Donation Circles + +The idea of kidney paired donation generalizes naturally to $n$ -way circular exchanges, in which each donor-patient pair donates to another in the circle and in turn receives a donation from a pair in the circle. A few 3-way exchanges have taken place, and one 5-way exchange has been performed [Ostrov 2006]. + +Roth et al. [2005b] explain how 3-way exchanges offer further benefits beyond 2-way exchanges, and why going to 4-way exchanges has very limited further value (because of the rarity of the AB blood type). They calculate upper bounds, based on national data for blood types and PRA, for the effect of $n$ -way exchanges. These bounds agree well with their simulations, which used "various integer programming techniques" for optimization in the case of greater than 2-way exchanges. In a population of 25 incompatible donor-patient pairs, on average 9 patients can receive a kidney via 2-way exchange, and 2 more via 3-way exchanges. 
In a population of 100 donor-patient pairs, the corresponding numbers are 50 via 2-way and an additional 10 via 3-way. Allowing 4-way or larger circles has negligible additional benefit. + +In fact, Roth et al. prove under mild conditions—mainly, that the population of donor-patient pairs is large—the remarkable result that "4-way exchange suffices": If there is a matching with the maximum number of patient-donor pairs, with no restriction on the size of exchanges, then there is a matching involving the same pairs that uses only 2-way, 3-way, and 4-way exchanges. (Perhaps the 5-way exchange in 2006 could not have been reduced to smaller exchanges because of too few donor-patient pairs at that transplant center.) + +# A Kidney Is Like a House + +Roth et al. [2004] cite an analogy between a housing market, as modeled by Shapley and Scarf [1974], and the "kidney transplant environment": + +[There are] $n$ agents, each of whom is endowed with an indivisible good, a "house." Each agent has preferences over all the houses (including his own), and there is no money in the market, trade is feasible only in houses ... [I]f we consider exchange only among patients with donors, the properties of the housing market model essentially carry over unchanged .... + +# Top Trading Cycles Algorithm + +The authors note that Shapley and Scarf attribute to David Gale a particular algorithm for clearing such a market, called the top trading cycle (TTC) algorithm: + +Each agent points to her most preferred house (and each house points to its owner). Since the number of houses is finite and since each house has an owner, there is at least one cycle in the resulting directed graph. In each such cycle, the corresponding trades are carried out, i.e. each agent + +in the cycle receives the house she is pointing to, and these agents and houses are removed from the market. + +The remaining agents express new preferences and the procedure is iterated recursively. 
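Gale's procedure is short enough to state in code. A minimal sketch, under the assumption that each agent $i$ initially owns house $i$ and submits a full preference list up front (so "expressing new preferences" each round reduces to pointing at the favorite house still remaining):

```python
def top_trading_cycles(prefs):
    """Gale's top trading cycles. prefs[a] lists houses (identified with
    their owners' ids) in a's order of preference; agent a initially
    owns house a. Returns a dict: agent -> house received."""
    remaining = set(prefs)
    assignment = {}
    while remaining:
        # Every remaining agent points at the owner of her favorite
        # remaining house; this functional graph must contain a cycle.
        points_to = {a: next(h for h in prefs[a] if h in remaining)
                     for a in remaining}
        walk, seen = [], set()
        a = next(iter(remaining))
        while a not in seen:              # follow arrows until we repeat
            seen.add(a)
            walk.append(a)
            a = points_to[a]
        cycle = walk[walk.index(a):]      # the cycle the walk ran into
        for agent in cycle:               # trade around the cycle
            assignment[agent] = points_to[agent]
        remaining -= set(cycle)           # then recurse on the rest
    return assignment

# Illustrative instance: agents 1 and 2 each prefer the other's house.
result = top_trading_cycles({1: [2, 1, 3], 2: [1, 3, 2], 3: [1, 2, 3]})
print(result)   # 1 and 2 swap houses; 3 keeps house 3
```

Because the pointing map sends every remaining agent somewhere, a cycle always exists, so the loop terminates with every agent housed.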
This system cannot be "gamed": The algorithm results in an allocation in which no coalition could have done better by trading among themselves, and it is in each agent's best interest to express true preferences [Roth 1982]. Roth et al. [2004] show that the TTC mechanism is the unique mechanism that is "individually rational, [Pareto-]efficient, and strategy-proof." + +TTC has further applications to other important current problems of allocation or matching, such as college admissions, student placement, and school choice [Kesten 2004; Sonmez 2005]. + +# Combining Cadavers and Living Donors + +The top trading cycles algorithm would suffice for allocating kidneys among patients with willing but incompatible donors, via kidney paired exchange and kidney circles. + +A complicating factor is that the kidney transplant environment also contains "unowned" cadaver kidneys. This situation corresponds to what Abdulkadiroğlu and Sönmez [1999] call the housing allocation problem with existing tenants—which, to the intrigue of the students in the Numb3rs seminar, was exactly the problem that they were facing at the time of our study: room draw for dorm rooms. + +Abdulkadiroğlu and Sönmez critique the mechanism commonly used by colleges (including my institution, Beloit College), which they dub random serial-dictatorship with squatting rights. Under this system, a student may elect to keep their current dorm room ("squat" it) for next year and thus opt out of the dorm-room lottery. The major deficiency is that a student who foregoes keeping their current room and enters the lottery may wind up with a worse room. + +# You Request My House—I Get Your Turn + +Abdulkadiroğlu and Sönmez consequently generalize the top trading cycles mechanism to a procedure that they call you request my house—I get your turn (YRMH-IGYT). All students indicate their preferences, all are in the lottery, and turns are chosen at random. 
If a student whose turn comes wants your room (and you have not already had your turn), you get the very next turn before they get to choose. So you can always keep your room if all the "better" rooms are gone. + +This seems like a great idea, one that could be completely automated; but the Numb3rs students and I had to reflect on the difficulties of changing a procedure well-established at the College. That procedure absorbs several days + +of staff time sitting for appointments with students coming to select rooms, not to mention students bolting out of classes to meet their appointments. We could see no good way to assess the likely level of improvement in overall student satisfaction, apart from just trying YRMH-IGYT for a year's room draw. However, Chen and Sonmez [2002; 2004] offer results of small-scale experiments. + +At first, YRMH-IGYT would seem to be an easy sell to students: You can't be any worse off than you already are or could be under the current system. But it seemed to us that if all students had the same rankings for rooms, then the advantages of YRMH-IGYT accrue to students already in "good" rooms: If instead of "squatting" their current "good" room under the current system, they enter the YRMH-IGYT lottery and do better by taking "top" rooms, it is in some sense at the expense of other students for whom those "top" rooms are then not available. Yilmaz [2005] offers a further critique of the fairness of TTC; shows the incompatibility of fairness ("no justified envy"), individual rationality, and strategy-proofness; and offers his own algorithm. + +# Kidney Analogy: LPD + +The kidney transplantation community independently invented YRMH-IGYT (dubbing it "indirect exchange," now more commonly known as list paired donation (LPD)): A patient's willing but incompatible donor donates to the highest-priority compatible patient on the cadaver waitlist; in return, the donor's intended recipient goes to the top of that list [Roth et al. 2004]. 
However, analogous to a student being reluctant to enter the dorm-room lottery for fear of losing their current "good" room, a donor may be unwilling to donate unless the donor's intended recipient gets a kidney at least as good as the donor's.

Roth et al. itemize the differences between the housing market and the kidney environment. The main difference is the dynamism that we mentioned earlier: No one knows when or what quality kidneys will become available on the cadaver queue, and such kidneys must be allocated and transplanted immediately. "Therefore, a patient who wishes to trade his donor's kidney in return for a priority in the cadaveric waiting list is receiving a lottery instead of a specific kidney" [2004].

# Top Trading Cycles and Chains

Roth et al. introduce the top trading cycles and chains (TTCC) mechanism for kidney exchange, a recursive procedure that generalizes TTC. Each patient points toward a kidney or toward the cadaver queue, and each kidney points to its paired recipient. In addition to cycles, this directed graph can also feature $w$-chains. A $w$-chain is a directed path on which kidneys and patients alternate and which starts at a kidney and ends at a patient pointing to the cadaver queue. The result is a chain to the "waitlist," hence the "w" in the name "w-chain." Such chains correspond to generalized indirect exchanges: The kidneys can be allocated to their immediate successor patients, with the last patient getting a high place on the cadaver queue. As before with TTC, we can resolve and remove cycles; now we can also resolve and remove w-chains, perhaps preferably w-chains of maximal length (or perhaps not, for logistical reasons). But since a kidney or a patient can be part of several w-chains, there is a policy dimension needed to complete the algorithm, and Roth et al. discuss several conceivable chain-selection rules.
They characterize TTCC using a particular class of rules as Pareto-efficient and TTCC with certain specific rules as strategy-proof. + +Segev et al. [2005] and Gentry et al. [2005] examine optimal use of kidneys through kidney paired donation in association with list paired donation. + +# Future Prospects + +Kidney paired donation, plus 3-way donations, plus TTCC, offer prospects for reducing the growth of the kidney waiting list and saving the vast cost of dialysis for patients on it. + +Roth et al. [2004] offer simulation results that demonstrate for a population of 30 donor-candidate pairs: + +- The rate of utilization of potential-donor kidneys goes from $55\%$ to $69\%$ with kidney paired exchange (this result roughly confirms that of the team from Duke University) and to $81 - 85\%$ under the TTCC mechanism with varying chain-selection rules. +- TTCC decreases the average HLA mismatch from 4.8 to 4.2-4.3. +- The average cycle size is 2.5–3 pairs and the average w-chain size is 1.8–2 pairs. + +Larger numbers of pairs (Roth et al. give results for 100 and 300) result in (on average) higher utilization rates, lower HLA mismatch, and longer cycles and chains. + +Roth et al. go on to describe the advantages of TTCC over current kidney exchange programs. + +Operations researchers (as some economists are, too) must always consider not just the problem of optimization but that of selling an improved solution. In particular, operations researcher Robert E.D. "Gene" Woolsey (Colorado School of Mines) cast as his First Law: "A manager would rather live with a problem he cannot solve than accept a solution he does not understand" [2003]. Implementing new kidney exchange procedures on a wide scale would require suitable legislation, coordination of databases, and education of patients and potential donors. + +Finally, different blood types between willing donor and patient may not be an absolute obstacle in the future. 
In September 2006, a type-A wife donated to a type-B husband, a transplant that "was made possible by desensitization, a process that removed [rejection] antibodies from [the husband's] blood and kept them away with medication" [Wausau couple . . . 2006]. Segev et al. [2006] recommend pursuing desensitization, because

even in a large-cohort live donor match, approximately half of the patients remain unmatched . . . There is subsequently little additional benefit to placing difficult-to-match patients into a list exchange program.

What about a market for kidneys? The team from Princeton University explores the economics of this possibility. They dismiss the option of a government-managed program on the basis of their conviction that government management implies bureaucratic inefficiency and slowness; but surprisingly, after touting the relative advantages of a free market in kidneys, in their conclusion they reject that, too, because of the danger of discriminatory pricing. Roth [2007] reflects on general repugnance to certain economic efficiencies, including a market for kidneys.

# Conclusion

I previously wrote about the potential and sources for using the Numb3rs TV series as a way to draw college students into appreciation of mathematics [Campbell 2006]. Instructors may find doing so more feasible with the advent in fall 2007 of a potential "textbook" for a course or seminar based on the series [Devlin and Lorden 2007].

My experience demonstrated to me that such a course also offers the instructor an opportunity to serendipitously expand horizons—to learn and engage with more mathematics as it manifests itself in life. I am glad that the TV series, its "Harvest" episode, and the course also led to the ICM Kidney Exchange Problem; I have enjoyed learning a great deal about an interdisciplinary congeries of mathematics, algorithms, medical practice, economics, public policy, and ethics.

# References

Abdulkadiroğlu, A., and T. Sönmez. 1999.
House allocation with existing tenants. Journal of Economic Theory 88: 233-260.

Campbell, Paul J. 2006. Take a Numb3r! The UMAP Journal 27 (1): 1-2.

Chen, Yan, and Tayfun Sönmez. 2002. Improving efficiency of on-campus housing: An experimental study. *American Economic Review* 92 (5): 1669-1686. http://www2.bc.edu/%7Esonmezt/house_12_aer.pdf.

_____. 2004. An experimental study of house allocation mechanisms. Economics Letters 83 (1): 137-140. http://www2.bc.edu/%7Esonmezt/house_3_final.pdf.

Cipra, Barry. 2004. OR successes run the gamut, from concrete to kidneys. SIAM News 37 (5) (June 2004). http://www.siam.org/news/news.php?id=230.

Devlin, Keith, and Gary Lorden. 2007. The Numbers Behind NUMB3RS: Solving Crime with Mathematics. New York: Plume Publishing.

Gentry, Sommer. 2005. Optimized match for kidney paired donation. http://www.optimizedmatch.com/.

Gentry, Sommer, and Dorry Segev. 2005. Math meets medicine: Optimizing paired kidney donation. http://www.dorryandsommer.com/~sommerg/kidneysSM280.ppt.

Gentry, S.E., D.L. Segev, and R.A. Montgomery. 2005. A comparison of populations served by kidney paired donation and list paired donation. American Journal of Transplantation 5 (8): 1914-1921.

Gold, M.R., J.E. Siegel, L.B. Russell, and M.C. Weinstein. 1996. Cost-Effectiveness in Health and Medicine. New York: Oxford University Press.

Kesten, Onur. 2004. Student placement to public schools in US: Two new solutions. Preprint.

"Lady Shelley." 2006. 214 Harvest. http://www.redhawk.org/content/view/309/11/. This episode of Numb3rs originally aired 1/27/06 and subsequently also on 6/14/07.

Organ Procurement and Transplantation Network. 2007. Data. http://www.optn.org/data/. Accessed 18 June 2007.

Ostrov, Barbara Feder. 2006. Organ transplant required bonding. Wisconsin State Journal (10 December 2006): A1, A8.

Park, Kiil, Jang Il Moon, Soon Il Kim, and Yu Seun Kim. 1999. Exchange donor program in kidney transplantation.
Transplantation 67 (2) (27 January 1999): 336-338. http://www.transplantjournal.com/pt/re/transplantation/fulltext.00007890-199901270-00027.htm.

Phillips, Ceri, and Guy Thompson. 2001. What is a QALY? London, UK: Hayward Medical Communications. http://www.evidence-based-medicine.co.uk/ebmfiles/WhatisaQALY.pdf.

Rapaport, F.T. 1986. The case for a living emotionally related international kidney donor exchange registry. *Transplantation Proceedings* 18 (3) (Suppl. 2): 5-9.

Roth, Alvin E. 1982. Incentive compatibility in a market with indivisibilities. Economics Letters 9: 127-132. http://kuznets.fas.harvard.edu/~aroth/papers/1982_EL_IncentiveCompatability.pdf.

_____. 2007. Repugnance as a constraint on markets. Preprint, Journal of Economic Perspectives, forthcoming. http://kuznets.fas.harvard.edu/~aroth/papers/Repugnance.pdf.

Roth, Alvin E., Tayfun Sönmez, and M. Utku Ünver. 2004. Kidney exchange. Quarterly Journal of Economics 119 (2) (May 2004): 457-488. http://kuznets.fas.harvard.edu/~aroth/papers/kidney.qje.pdf.

________. 2005a. A kidney exchange clearing house in New England. American Economic Review, Papers and Proceedings, 95 (2) (May 2005): 376-380. http://kuznets.fas.harvard.edu/~aroth/papers/KidneyAEAPP.pdf.

________. 2005b. Efficient kidney exchange: Coincidence of wants in a structured market. American Economic Review, forthcoming. http://www.nber.org/papers/w11402.

Saip, Herbert Alexander Baier, and Cláudio Leonardo Lucchesi. 1993. Matching algorithms for bipartite graphs. Technical Report DCC-03/93, Departamento de Ciência da Computação, Universidade Estadual de Campinas, March 1993. http://citeseer.ist.psu.edu/baiersaip93matching.html.

Segev, D.L., S.E. Gentry, D.S. Warren, B. Reeb, and R.A. Montgomery. 2005. Kidney paired donation and optimizing the use of live donor organs. Journal of the American Medical Association 293 (15): 1883-1890.

Segev, D.L., S.E. Gentry, and R.A. Montgomery. 2006.
Relative roles for list paired exchange, live donor paired exchange and desensitization. American Journal of Transplantation 6: 437.

Shapley, L., and H. Scarf. 1974. On cores and indivisibility. Journal of Mathematical Economics 1: 23-28.

Sönmez, Tayfun. 2005. School matching. http://www2.bc.edu/~sonmezt/Jerusalem-schools.pdf.

Wausau couple shares kidney after improbable transplant. 2006. Beloit Daily News (25 September 2006): 2A.

Woolsey, Robert E.D. 2003. Real World Operations Research: The Woolsey Papers. Marietta, GA: Lionheart Publications.

Yilmaz, Özgür. 2005. House allocation with existing tenants: A new solution. http://troi.cc.rochester.edu/~ozgr/existing_tenant.pdf.

Zenios, Stefanos A., Glenn M. Chertow, and Lawrence M. Wein. 2000. Dynamic allocation of kidneys to candidates on the transplant waiting list. *Operations Research* 48 (4) (July-August 2000): 549-569.

# Acknowledgments

I thank Sara Price '06 and Jason Marmon '06 for their enthusiastic pursuit in Spring 2006 of the mathematics behind Numb3rs, particularly in connection with the kidney exchange problem. I also thank the contributing faculty who visited the course in Spring 2006: Ricardo Rodriguez (Beloit College, Physics), Tom Sibley (Mathematics, St. John's University, Collegeville MN), and Jennifer Galovich (Mathematics, College of St. Benedict). Finally, I thank Sommer Gentry for her inspirational work on this problem, for her commentary in this issue, and for her comments on (and further references for) this commentary.

# About the Author

![](images/c572682d38594f3ba7a8aee2567b9a2a9ef05926012522680aa5c57f682085b8.jpg)

Paul Campbell is professor of mathematics and computer science at Beloit College, where he was Director of Academic Computing from 1987 to 1990. He has been the editor of The UMAP Journal of Undergraduate Mathematics and Its Applications since 1984.
\ No newline at end of file

diff --git a/MCM/1995-2008/2007MCM/2007MCM.md b/MCM/1995-2008/2007MCM/2007MCM.md
new file mode 100644
index 0000000000000000000000000000000000000000..b1c79fb598f753ec386a205d4625742deb0793ac
--- /dev/null
+++ b/MCM/1995-2008/2007MCM/2007MCM.md

# The UMAP Journal

Vol. 28, No. 3

Publisher

COMAP, Inc.

Executive Publisher

Solomon A. Garfunkel

ILAP Editor

Chris Arney

Associate Director, Mathematics Division

Program Manager, Cooperative Systems

Army Research Office

P.O. Box 12211

Research Triangle Park, NC 27709-2211

David.Arney1@arl.army.mil

On Jargon Editor

Yves Nievergelt

Department of Mathematics

Eastern Washington University

Cheney, WA 99004

ynievergelt@ewu.edu

Reviews Editor

James M. Cargal

Mathematics Dept.

Troy University—Montgomery Campus

231 Montgomery St.

Montgomery, AL 36104

jmcargal@sprintmail.com

Chief Operating Officer

Laurie W. Aragon

Production Manager

George W. Ward

Production Editor

Timothy McLean

Distribution

John Tomicek

Graphic Designer

Daiva Kiliulis

# Editor

Paul J. Campbell

Campus Box 194

Beloit College

700 College St.

Beloit, WI 53511-5595

campbell@beloit.edu

# Associate Editors

Don Adolphson, Brigham Young University

Chris Arney, Army Research Office

Aaron Archer, AT&T Shannon Research Laboratory

Ron Barnes, University of Houston-Downtown

Arthur Benjamin, Harvey Mudd College

Robert Bosch, Oberlin College

James M. Cargal, Troy University—Montgomery Campus

Murray K. Clayton, University of Wisconsin—Madison

Lisette De Pillis, Harvey Mudd College

James P. Fink, Gettysburg College

Solomon A. Garfunkel, COMAP, Inc.

William B. Gearhart, California State University, Fullerton

William C. Giauque, Brigham Young University

Richard Haberman, Southern Methodist University

Jon Jacobsen, Harvey Mudd College

Walter Meyer, Adelphi University

Yves Nievergelt, Eastern Washington University

Michael O'Leary, Towson University

Catherine A. Roberts, College of the Holy Cross

John S. Robertson, Georgia Military College

Philip D. Straffin, Beloit College

J.T. Sutcliffe, St. Mark's School, Dallas

# Membership Plus

Individuals subscribe to The UMAP Journal through COMAP's Membership Plus. This subscription also includes a CD-ROM of our annual collection UMAP Modules: Tools for Teaching, our organizational newsletter Consortium, on-line membership that allows members to download and reproduce COMAP materials, and a 10% discount on all COMAP purchases.

(Domestic) #2720 $104

(Outside U.S.) #2721 $117

# Institutional Plus Membership

Institutions can subscribe to the Journal through either Institutional Plus Membership, Regular Institutional Membership, or a Library Subscription. Institutional Plus Members receive two print copies of each of the quarterly issues of The UMAP Journal, our annual collection UMAP Modules: Tools for Teaching, our organizational newsletter Consortium, on-line membership that allows members to download and reproduce COMAP materials, and a 10% discount on all COMAP purchases.

(Domestic) #2770 $479

(Outside U.S.) #2771 $503

# Institutional Membership

Regular Institutional members receive print copies of The UMAP Journal, our annual collection UMAP Modules: Tools for Teaching, our organizational newsletter Consortium, and a 10% discount on all COMAP purchases.

(Domestic) #2740 $208

(Outside U.S.) #2741 $231

# Web Membership

Web membership does not provide print materials.
Web members can download and reproduce COMAP materials, and receive a 10% discount on all COMAP purchases.

(Domestic) #2710 $41

(Outside U.S.) #2710 $41

To order, send a check or money order to COMAP, or call toll-free 1-800-77-COMAP (1-800-772-6627).

The UMAP Journal is published quarterly by the Consortium for Mathematics and Its Applications (COMAP), Inc., Suite 3B, 175 Middlesex Tpke., Bedford, MA, 01730, in cooperation with the American Mathematical Association of Two-Year Colleges (AMATYC), the Mathematical Association of America (MAA), the National Council of Teachers of Mathematics (NCTM), the American Statistical Association (ASA), the Society for Industrial and Applied Mathematics (SIAM), and The Institute for Operations Research and the Management Sciences (INFORMS). The Journal acquaints readers with a wide variety of professional applications of the mathematical sciences and provides a forum for the discussion of new directions in mathematical education (ISSN 0197-3622).

Periodical rate postage paid at Boston, MA and at additional mailing offices.

Send address changes to: info@comap.com

COMAP, Inc., Suite 3B, 175 Middlesex Tpke., Bedford, MA, 01730 © Copyright 2007 by COMAP, Inc. All rights reserved.

# Vol. 28, No. 3 2007

# Table of Contents

# Publisher's Editorial

Math Is More: Toward a National Consensus on Improving Mathematics Education

Solomon A. Garfunkel 185

About This Issue 190

# Special Section on the MCM

Results of the 2007 Mathematical Contest in Modeling

Frank Giordano 191

Abstracts of the Outstanding Papers and the Fusaro Papers 231

When Topologists Are Politicians...

Nikifor C.
Bliznashki, Aaron Pollack, and Russell Posner 249

What to Feed a Gerrymander

Ben Conlee, Abe Othman, and Chris Yetter 261

Electoral Redistricting with Moment of Inertia and Diminishing Halves Models

Andrew Spann, Daniel Kane, and Dan Gulotta 281

Applying Voronoi Diagrams to the Redistricting Problem

Lukas Svec, Sam Burden, and Aaron Dilley 301

Why Weight? A Cluster-Theoretic Approach to Political Districting

Sam Whittle, Wesley Essig, and Nathaniel S. Bottman 315

Novel Approaches to Airline Boarding

Qianwei Li, Arnav Mehta, and Aaron Wise 333

Boarding at the Speed of Flight

Michael Bauer, Kshipra Bhawalkar, and Matthew Edwards 353

STAR: (Saving Time, Adding Revenues) Boarding/Deboarding Strategy

Bo Yuan, Jianfei Yin, and Mafa Wang 371

The Unique Best Boarding Plan? It Depends...

Bolun Liu, Xuan Hou, and Hao Wang 385

Airliner Boarding and Deplaning Strategy

Linbo Zhao, Fan Zhou, and Guozhen Wang 405

The Best Boarding Uses Buffers

Kevin D. Sobczak, Eric J. Hardin, and Bradley J. Kirkwood 421

Modeling Airplane Boarding Procedures

Bach Ha, Daniel Matheny, and Spencer Tipping 435

American Airlines' Next Top Model

Sara J. Beck, Spencer D. K'Burg, and Alex B. Twist 451

Boarding—Step by Step: A Cellular Automaton Approach to Optimising Aircraft Boarding Time

Chris Rohwer, Andreas Hafver, and Louise Viljoen 463

Judges' Commentary: The Fusaro Award Airplane Seating Paper

Peter Anspach and Marie Vanisko 479

# Publisher's Editorial

# Math Is More:

# Toward a National Consensus on Improving Mathematics Education

Solomon A. Garfunkel

Executive Director

COMAP, Inc.

175 Middlesex Turnpike, Suite 3B

Bedford, MA 01730-1459

s.garfunkel@mail.comap.com

Whether you are a parent or a politician, whether you work in business, industry, government or academia, the state of U.S.
mathematics education is of fundamental importance to you and to those whom you care about. As the importance of mathematical and quantitative thinking increases, we must become more focused as a nation on providing our children a better mathematical education. This is not simply about economic competitiveness or getting higher scores on international comparisons. Rather, it is about equipping our children with the necessary tools to be effective citizens and skilled members of the workforce in the 21st century. Mathematics as a discipline and the applications of mathematics to the world around us have grown and changed significantly in the past 50 years. Our system of mathematics education must reflect that growth and change. Quite simply, math is more. + +We want to do the best job possible with the most children possible. We are a group of mathematics educators, mathematicians, and concerned individuals committed to real and significant improvement in the performance of the complex system of mathematics education. To achieve this goal, however, we must be clear about what we mean. In this document, we specify ten planks that represent our beliefs and guide the direction of our efforts. It will take years of hard work by many people—teachers, administrators, policy makers, parents and students, mathematicians and mathematics educators, academics and practitioners across a wide spectrum—to achieve the goal of universal mathematical literacy and proficiency. The signers of this report (see list at end) commit ourselves to that effort. + +# Plank 1: Students need to see mathematics and the people who use mathematics in the broadest possible light. + +What do we mean by mathematical literacy? First, math is more than dividing decimals or solving equations. It is more than algebra or geometry as defined by a particular syllabus or set of textbooks. 
Math is the use of a graph to model a street network to solve traffic snarls; it is finding the "distance" between two strands of DNA to improve our understanding of the human species. It is about deduction, visualization, statistical and probabilistic reasoning, representation, and modeling. It is what enables our cell phones to work and our MRIs to function. It gives us insight into medicine, biology, economics, business, engineering, and the ways that we reason and make decisions. Mathematics education at all levels and in all courses must engage students with the practicality, the applicability, the power and the beauty of mathematics. This can be accomplished when students see mathematics as including skills, conceptual understandings, and a way of reasoning. + +# Plank 2: Mathematics education must be viewed as a complex system requiring coherent coordination and a long-term investment in the quality of curriculum, instruction, and assessment. + +We do not believe that there are quick fixes or magic bullets that will lead to significant improvements in mathematics education. Rather, we believe that improvements in this complex system will be the result of a series of substantive changes that are informed by research and guided by experimentation with the proper and rigorous evaluation of the results. But change of this magnitude takes time. Among other things, both established and new teachers need to learn and experience mathematics as the rich discipline that we know it to be. Professional working conditions for teachers must allow time and opportunity for developing new understandings about mathematics, its applications, and the teaching of mathematics. + +# Plank 3: Mathematics education at all levels, including advanced college programs, is a form of vocational and professional preparation. + +We must recognize that there is a compelling national (and local) interest in the state of mathematics education. 
While we do not see this as a zero-sum game, with our country (or state) vying to do better than another, our overall mathematical literacy and competence is important to our economic health. Industry, in addition to government, needs to be heavily involved. Employers are, after all, parents, and vice versa. Surely, having good high school math grades or SAT scores must be about more than getting into a good college. Being able to analyze and solve problems using quantitative reasoning is an increasingly necessary job skill. We believe that not enough emphasis has been placed on the needs of students. Their future will involve many different jobs. They will need to master current and emerging technologies. We know that they will need creativity, independence, imagination and problem-solving abilities in addition to skills proficiency. In other words, students will increasingly need + +advanced mathematical understanding and awareness of the tools mathematics provides to achieve their career goals. + +# Plank 4: A coherent set of broad national curricular goals allowing for new results from educational research should be created. + +While we believe in accountability and we recognize the need for curricular coherence, we worry about the Babel of "Standards" being designed by individual states, districts, and more nationally-based organizations and think tanks. National standards in the spirit of curricular goals can serve a unifying purpose. Standards must, however, be generic enough to allow for the evolution of content and pedagogy. Although there must be room for trying new ideas, standards should increasingly be grounded in robust research demonstrating student learning of important mathematical ideas. Standards at the grain size of individual skills must be avoided. We also believe that the present multiplicity and specificity of standards is a barrier to innovation by both the authors and publishers of mathematics materials. 
+ +# Plank 5: The quality of instruction continues to be of critical importance to the improvement of student achievement. + +The mathematics classroom is more than where students encounter formal mathematics. It is where students decide if mathematics is "for them" and where the ideas must inspire and engage. Active learning produces life-long learning. There is no substitute for curiosity, engagement, pursuit of ideas, and use of prior knowledge, followed by exploration, experimentation, practice, and mastery. The use of applications, the design of rich interactions among students, and the creative use of technologies have produced promising results when accompanied by careful attention to students' progress through well-understood learning progressions. Accountability is hollow if it is not accompanied by robust efforts to improve instruction, by using exciting materials, and by including opportunities for teachers to be learners and to experience broader views of mathematics. Our task is to introduce students to the wonders of mathematics while providing the discipline to regulate their own learning and to ensure proficiency and mastery. Students should not be viewed simply as consumers of mathematics education but as active participants with the most to gain or lose. Their voices should be solicited and taken into serious consideration. + +# Plank 6: Programs must be developed to help all students, recognizing their diverse needs, interests, talents, and levels of motivation. + +"Mathematics for All" is an important rallying cry. But to be meaningful, it requires that we recognize and act on the fact that different student populations need to be provided for differently. For a multitude of reasons, some students may be more motivated to learn than others. Some students have stronger background knowledge than others, and some learn more quickly. One size does not fit all. 
There is research that can be brought to bear on these + +issues—and we need to know and do more. We cannot afford a mathematics education system that works for the few and not the many. + +# Plank 7: We must test what we value, both locally and nationally. Mathematical literacy is becoming a survival skill. + +We strongly believe in accountability to a rich set of mathematical goals. We want students to master core facts and procedures, but this is not enough. We want conceptual understanding, problem-solving, and flexible use of mathematics to solve both pure and applied problems. Like standards, assessments must reflect our goals—most importantly, the ability to apply mathematical reasoning to analyze and attack real-world problems. If mathematical literacy includes the ability to make use of mathematics, and we believe in the importance of mathematical literacy, then we must align our testing accordingly. Testing must not be about punishment for failure but about giving students and teachers a clearer understanding of what they do and do not know. Testing should inform instruction, not determine it. + +# Plank 8: We must continue to develop and research new materials and pedagogies and translate that research into improved classroom practice. + +Education, as a scientific discipline, is a young field with an active community focused on R&D—Research on learning coupled with the Development of new and better curriculum materials. In truth, however, much of the work is better described as D&R—informed and thoughtful Development followed by careful analysis of Results. It is in the nature of the enterprise that we cannot discover what works before we create the what. Curriculum development, in particular, is best conducted in analogy to an engineering paradigm. 
To test the efficacy of an approach, we must analyze needs, examine existing programs, build an improved model program, and test it—in the same way that we build scale models to design a better bridge or building. This kind of iterative D&R leads to new and more effective materials and new pedagogical approaches that better incorporate the growing body of knowledge of cognitive science. We understand that educational research has not yet provided all of the answers to how best to help children learn mathematics. However, there is a great deal that we do know about the motivational power of applications, the effectiveness of appropriate learning technologies, the use of collaborative learning with children, and the use of lesson- and case-study programs with teachers. + +# Plank 9: Our country must make a major investment over the coming decade to sustain and rejuvenate the ranks of mathematics teachers in our nation's schools. + +Many mathematics classrooms are staffed with unqualified teachers. This is because school administrators can neither find enough qualified teachers nor provide adequate resources to upgrade staff qualifications. Mandates that every teacher be qualified won't improve the situation until there is a sufficient + +supply of mathematics teachers to meet the demand. To stave off this foreseeable crisis in our math classrooms, our nation needs to act to increase the numbers of young people entering mathematics and mathematics education disciplines in our universities and to improve significantly the continuing education of existing teachers. We must ensure that their education prepares them for current educational realities and that their working conditions as teachers permit them continuous mathematical and pedagogical improvements. We need to find more ways to support new teachers through the difficult induction years, especially young people who commit to teach in our least successful schools. 
+ +# Plank 10: We must build a sustainable system for monitoring and improving mathematics education. + +Perhaps the most important point is that our work must be sustainable. Just as with our students, we need to be there throughout the learning process—watching out for necessary course corrections and building with a long-range view. Too often in the past we have reacted to crises, whether it be Sputnik and fear of losing the space race, being overtaken economically by Japan, or out-sourcing our manufacturing jobs to China and India. Reports are written decrying the current state of affairs and funding is made available. But the need for excellent mathematics education will always be with us. We must build an infrastructure that recognizes this fact, and devotes consistent attention and resources to addressing the challenge of high quality mathematics for all, rather than a cycle of investment, neglect, investment, .... + +The authors of this document share many beliefs—that mathematics is important as a discipline, as a field full of wonder and beauty, as a tool for modeling our world, as a prerequisite for knowledgeable citizenship in a participatory democracy, and as a means to better jobs and a better quality of life. We hold strong views on the importance of education in general and mathematics education in particular. We do not agree on all things, but we are, and intend to remain, inclusive. Clearly, there is much substance and detail that can be added to these planks. We need many voices and many hands and we call on you to join with us to ensure that every child receives the best mathematics education possible and recognizes that in their future, math is more. If you support these ideas and would like to work with us to make these planks a reality and/or receive regular updates on Math is More activities, please visit + +http://www.mathismore.net/forms/form.php + +Jere Confrey North Carolina State University, Raleigh + +Gary Froelich COMAP, Inc. 
+ +Sol Garfunkel COMAP,Inc. + +Midge Cozzens Knowles Science Teaching Foundation + +John Ewing American Mathematical Society + +
Steve Leinwand American Institutes for Research

James Infante Vanderbilt University (Emeritus)

Joseph Malkevitch York College, CUNY

Henry Pollak Teachers College, Columbia University

Eric Robinson Ithaca College

Steve Rasmussen Key Curriculum Press

Alan Schoenfeld University of California, Berkeley
+ +# About This Issue + +Paul J. Campbell + +Editor + +This issue of The UMAP Journal continues a practice inaugurated in Vol. 26. It runs longer than the usual size of 92 pp—in fact, almost 300 pp. However, not all of the articles in this issue are printed in the paper copy. Some articles appear only in the Tools for Teaching 2007 CD-ROM (and at http://www.comap.com for COMAP members), which will reach members and subscribers at a later time and will also contain the entire 2007 year of Journal issues. + +However, all articles of this issue on the CD-ROM appear in the printed table of contents and are regarded as published in the Journal. In addition, the abstract of each Outstanding paper appears in the printed version. Paging of the issue runs continuously, including in sequence articles that do not appear in printed form. So, if you notice that, say, page 350 in the printed copy is followed by page 403, your copy is not necessarily defective! The articles corresponding to the intervening pages will be on the CD-ROM. + +We hope that you find this arrangement, if not entirely satisfying, at least satisfactory. It means that we do not have to procrusteanize the content of the Journal to fit a fixed number of allocated pages. For example, we might otherwise need to select only two or three Outstanding MCM papers to publish (a hard task indeed!). Instead, as in the past, we continue to bring you the full content. + +# Modeling Forum + +# Results of the 2007 Mathematical Contest in Modeling + +Frank Giordano, MCM Director + +Naval Postgraduate School + +1 University Circle + +Monterey, CA 93943-5000 + +frgiorda@nps.navy.mil + +# Introduction + +A total of 949 teams of undergraduates, from 313 institutions and 536 departments in 12 countries, spent the first weekend in February working on applied mathematics problems in the 23rd Mathematical Contest in Modeling (MCM). + +The 2007 MCM began at 8:00 P.M. EST on Thursday, February 8 and ended at 8:00 P.M. 
EST on Monday, February 12. During that time, teams of up to three undergraduates were to research and submit an optimal solution for one of two open-ended modeling problems. Students registered, obtained contest materials, downloaded the problems at the appropriate time, and entered completion data through COMAP's MCM Website. After a weekend of hard work, solution papers were sent to COMAP on Monday. The top papers appear in this issue of The UMAP Journal. + +Results and winning papers from the first 22 contests were published in special issues of Mathematical Modeling (1985-1987) and The UMAP Journal (1985-2006). The 1994 volume of Tools for Teaching, commemorating the tenth anniversary of the contest, contains all of the 20 problems used in the first 10 years of the contest and a winning paper for each year. Limited quantities of that volume and of the special MCM issues of the Journal for the last few years are available from COMAP. That 1994 volume is also available on COMAP's special Modeling Resource CD-ROM (http://www.comap.com/product/?idx=613). In addition, also available from COMAP is another CD, The MCM at 21, which contains + +all of the 20 problems from the second 10 years of the contest, a winning paper from each year, and advice from advisors of Outstanding teams. + +This year's Problem A asked teams to draw congressional districts for a state so that the districts would have the "simplest" shapes; as an application, teams had to apply their method to New York State. Problem B asked teams to devise and compare procedures for boarding and deboarding airplanes of varying sizes. The 14 Outstanding solution papers are published in this issue of The UMAP Journal, along with commentary from problem authors, contest judges, and outside experts. + +In addition to the MCM, COMAP also sponsors the Interdisciplinary Contest in Modeling (ICM) and the High School Mathematical Contest in Modeling (HiMCM). 
The ICM, which runs concurrently with MCM, offers a modeling problem involving concepts in operations research, information science, and interdisciplinary issues in security and safety. Results of this year's ICM are on the COMAP Website at http://www.comap.com/undergraduate/contests; results and Outstanding papers appeared in Vol. 28 (2007), No. 2. The HiMCM offers high school students a modeling opportunity similar to the MCM. Further details about the HiMCM are at http://www.comap.com/highschool/ contests. + +# Problem A: Gerrymandering + +The United States Constitution provides that the House of Representatives shall be composed of some number (currently 435) of individuals who are elected from each state in proportion to the state's population relative to that of the country as a whole. While this provides a way of determining how many representatives each state will have, it says nothing about how the district represented by a particular representative shall be determined geographically. This oversight has led to egregious (at least some people think so, usually not the incumbent) district shapes that look "unnatural" by some standards. + +Hence the following question: Suppose that you were given the opportunity to draw congressional districts for a state. How would you do so as a purely "baseline" exercise to create the "simplest" shapes for all the districts in a state? The rules include only that each district in the state must contain the same population. The definition of "simple" is up to you; but you need to make a convincing argument to voters in the state that your solution is fair. As an application of your method, draw geographically simple congressional districts for the state of New York. + +# Problem B: The Airplane Seating Problem + +Airlines are free to seat passengers waiting to board an aircraft in any order whatsoever. 
It has become customary to seat passengers with special needs first, followed by first-class passengers (who sit at the front of the plane). Then coach and business-class passengers are seated by groups of rows, beginning with the row at the back of the plane and proceeding forward.

Apart from consideration of the passengers' wait time, from the airline's point of view, time is money, and boarding time is best minimized. The plane makes money for the airline only when it is in motion, and long boarding times limit the number of trips that a plane can make in a day.

The development of larger planes, such as the Airbus A380 (800 passengers), accentuates the problem of minimizing boarding (and deboarding) time.

Devise and compare procedures for boarding and deboarding planes with varying numbers of passengers: small (85-210), midsize (210-330), and large (450-800). Prepare an executive summary, not to exceed two single-spaced pages, in which you set out your conclusions to an audience of airline executives, gate agents, and flight crews.

An article appeared in the *New York Times* (14 November 2006) addressing procedures currently being followed and the importance to the airline of finding better solutions. The article can be seen at: http://travel2.nytimes.com/2006/11/14/business/14boarding.html.

# The Results

The solution papers were coded at COMAP headquarters so that names and affiliations of the authors would be unknown to the judges. Each paper was then read preliminarily by two "triage" judges at either Appalachian State University (Gerrymandering Problem) or at the National Security Agency (Airplane Seating Problem). At the triage stage, the summary and overall organization are the basis for judging a paper. If the judges' scores diverged for a paper, the judges conferred; if they still did not agree on a score, a third judge evaluated the paper.

This year again, an additional Regional Judging site was created at the U.S.
Military Academy to support the growing number of contest submissions. + +Final judging took place at Asilomar Conference Center, Pacific Grove, CA. The judges classified the papers as follows: + +
| | Outstanding | Meritorious | Honorable Mention | Successful Participation | Total |
|---|---:|---:|---:|---:|---:|
| Gerrymandering Problem | 5 | 53 | 91 | 202 | 351 |
| Airplane Seating Problem | 9 | 69 | 164 | 356 | 598 |
| Total | 14 | 122 | 255 | 558 | 949 |
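Problem A turns on making "simplest" precise. One standard compactness measure for a district is the isoperimetric (Polsby-Popper) score, 4πA/P², which equals 1 for a circle and approaches 0 for sprawling or contorted shapes. The sketch below is purely illustrative and is not drawn from any contest paper; it scores a district given as a polygon.

```python
import math

def polsby_popper(vertices):
    """Isoperimetric compactness score 4*pi*A / P**2 for a simple polygon.

    1.0 for a circle; lower values indicate less compact shapes.
    `vertices` is a list of (x, y) corners in order around the boundary.
    """
    n = len(vertices)
    area2 = 0.0   # twice the signed area, via the shoelace formula
    perim = 0.0
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        area2 += x1 * y2 - x2 * y1
        perim += math.hypot(x2 - x1, y2 - y1)
    area = abs(area2) / 2.0
    return 4.0 * math.pi * area / perim ** 2

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
sliver = [(0, 0), (10, 0), (10, 0.1), (0, 0.1)]   # long, thin "gerrymander"
print(polsby_popper(square), polsby_popper(sliver))
```

A unit square scores π/4 ≈ 0.785, while the thin sliver scores about 0.03, which is why such ratio tests flag elongated districts.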
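Problem B is naturally attacked by discrete-event simulation of the aisle. The following is a deliberately minimal sketch under assumed parameters (single aisle, one passenger per row, a fixed stowing time of `STOW_TICKS`, blocking only in the aisle); it is not any team's actual model, but it shows the kind of experiment that lets boarding orders be compared.

```python
import random

STOW_TICKS = 3  # assumed ticks a passenger blocks the aisle while stowing

def board_time(order, rows):
    """Ticks to board a single-aisle plane with one passenger per row.

    `order` lists each passenger's target row in boarding sequence.
    A passenger occupies one aisle cell, advances one row per tick when
    the cell ahead is free, and blocks the aisle while stowing luggage.
    """
    aisle = [None] * rows          # aisle[i] = (target_row, stow_left) or None
    queue = list(order)            # passengers waiting to enter at row 0
    seated, t = 0, 0
    while seated < rows:
        t += 1
        for i in range(rows - 1, -1, -1):   # back first, so convoys pipeline
            if aisle[i] is None:
                continue
            target, stow = aisle[i]
            if i == target:
                if stow > 1:
                    aisle[i] = (target, stow - 1)
                else:
                    aisle[i] = None         # luggage stowed; passenger sits
                    seated += 1
            elif i + 1 < rows and aisle[i + 1] is None:
                aisle[i + 1], aisle[i] = (target, stow), None
        if queue and aisle[0] is None:      # next passenger steps aboard
            aisle[0] = (queue.pop(0), STOW_TICKS)
    return t

rows = 30
back_to_front = list(range(rows - 1, -1, -1))   # rear rows board first
random_order = list(range(rows))
random.Random(0).shuffle(random_order)
print(board_time(back_to_front, rows), board_time(random_order, rows))
```

Even this toy model reproduces a qualitative finding of the Outstanding papers: orderings that let stowing happen in parallel beat orderings that serialize it (here, strict front-to-back is the worst case, since each passenger blocks the door while stowing).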
+ +The 14 papers that the judges designated as Outstanding appear in this special issue of The UMAP Journal, together with commentaries. We list those teams and the Meritorious teams (and advisors) below; the list of all participating schools, advisors, and results is in the Appendix. + +# Outstanding Teams + +# Institution and Advisor + +# Team Members + +# Gerrymandering Papers + +"When Topologists Are Politicians... + +Duke University + +Durham, NC + +David Kraines + +Nikifor C. Bliznashki + +Aaron Pollack + +Russell Posner + +"What to Feed a Gerrymander" + +Harvard University + +Cambridge, MA + +Clifford H. Taubes + +Ben Conlee + +Abe Othman + +Chris Yetter + +"Electoral Redistricting with Moment of Inertia and Diminishing Halves Models" + +Massachusetts Institute of Technology + +Cambridge, MA + +Martin Z. Bazant + +Andrew Spann + +Daniel Kane + +Dan Gulotta + +"Applying Voronoi Diagrams to the Redistricting Problem" + +University of Washington + +Seattle, WA + +James Allen Morrow + +Lukas Svec + +Sam Burden + +Aaron Dilley + +"Why Weight? A Cluster-Theoretic Approach to Political Districting" + +University of Washington + +Seattle, WA + +Anne Greenbaum + +Sam Whittle + +Wesley Essig + +Nathaniel S. Bottman + +# Airplane Seating Papers + +"Novel Approaches to Airline Boarding" + +Duke University + +Durham, NC + +Anne Catlla + +Qianwei Li + +Arnav Mehta + +Aaron Wise + +"Boarding at the Speed of Flight" + +Duke University + +Durham, NC + +Michael Brenner + +Michael Bauer + +Kshipra Bhawalkar + +Matthew Edwards + +"STAR: (Saving Time, Adding Revenues) Boarding/Deboarding Strategy" + +National Univ. of Defense Technology + +Changsha, China + +Yi Wu + +Bo Yuan + +Jianfei Yin + +Mafa Wang + +"The Unique Best Boarding Plan? 
It Depends..."

National University of Singapore

Singapore

Yannis Yatracos

Bolun Liu

Xuan Hou

Hao Wang

"Airliner Boarding and Deplaning Strategy"

Peking University

Beijing, China

Xufeng Liu

Linbo Zhao

Fan Zhou

Guozhen Wang

"The Best Boarding Uses Buffers"

Slippery Rock University

Slippery Rock, PA

Athula R. Herat

Kevin D. Sobczak

Eric J. Hardin

Bradley J. Kirkwood

"Modeling Airplane Boarding Procedures"

Truman State University

Kirksville, MO

Steven J. Smith

Bach Ha

Daniel Matheny

Spencer Tipping

"American Airlines' Next Top Model"

University of Puget Sound

Tacoma, WA

Michael Z. Spivey

Sara J. Beck

Spencer D. K'Burg

Alex B. Twist

"Boarding—Step by Step: A Cellular Automaton Approach to Optimising Aircraft Boarding Time"

University of Stellenbosch

Stellenbosch, South Africa

Jan H. van Vuuren

Chris Rohwer

Andreas Hafver

Louise Viljoen

# Meritorious Teams

Gerrymandering Problem (53 teams)

Beijing Forestry University, College of Biological Sciences and Biotechnology, Beijing, China (Gao Mengning)

Beijing Wuzi University, Informatics College, Beijing, China (Li Zhenping)

Bemidji State University, MN (Colleen Livingston)

Carroll College, Helena, MT (Holly S. Zullo)

Carroll College, Helena, MT (Kelly S. Cline)

Cornell University, Ithaca, NY (Alexander Vladimirsky)

Duke University, Dept. of Computer Science, Durham, NC (Owen Astrachan)

Harvard University, Cambridge, MA (Clifford H. Taubes)

Harvey Mudd College, Dept. of Mathematics, Claremont, CA (Jon Jacobsen)

Harvey Mudd College, Dept. of Computer Science, Claremont, CA (Ran Libeskind-Hadas)

Huazhong University of Science and Technology, Dept. of Industrial and Manufacturing System Engineering, Wuhan, Hubei, China (Gao Liang)

James Madison University, Harrisonburg, VA (David B. Walton)

Kansas State University, Manhattan, KS (Dave R.
Auckly)

Korea Advanced Institute of Science & Technology, Daejeon, Korea (Kim Yong Jung)

MIT, Dept. of Mathematics, Cambridge, MA (Martin Z. Bazant)

MIT, Dept. of Physics, Cambridge, MA (Leonid Levitov—two teams)

National University of Defense Technology, Changsha, China (Wang Dan)

Ningbo Institute of Technology, Zhejiang University Fundamental Courses, Ningbo, Zhejiang, China (Wang Jufeng)

Northeastern University, School of Mechanical Engineering and Automation, Dept. of Modern Design and Analysis, Shenyang, Liaoning, China (He Xuehong)

Northwestern Polytechnical University, Dept. of Applied Mathematics, Xi'an, Shaanxi, China (Nie Yufeng)

Northwestern Polytechnical University, Dept. of Applied Physics, Xi'an, Shaanxi, China (Lei Youming)

Oregon State University, Corvallis, OR (Nathan L. Gibson)

Pacific University, Forest Grove, OR (John August)

Peking University, School of Mathematical Sciences, Dept. of Business Statistics and Econometrics, Beijing, China (Zhang Junni)

Rose-Hulman Institute of Technology, Dept. of Mathematics, Terre Haute, IN (David J. Rader)
Rose-Hulman Institute of Technology, Dept. of Computer Science and Software Engineering, Terre Haute, IN (Cary Laxer)
Shandong University of Science and Technology, College of Information Science and Engineering, Qingdao, Shandong, China (Pang Shanchen)
Shanghai Hongkou Institute of Education, Shanghai, China (Hu Jun)
Shenyang Institute of Aeronautical Engineering, Shenyang, Liaoning, China (Zhu Limei)
Sichuan University, Yangtze Center of Mathematics, Chengdu, Sichuan, China (Chen Bohui)
South China Normal University, Dept. of Information and Computational Science, Guangzhou, Guangdong, China (Yang Tan)
South China University of Technology, School of Computer Science and Engineering, Guangzhou, Guangdong, China (Tao Zhi-Sui)
Southern Connecticut State University, New Haven, CT (Ross B. Gingrich)
Tsinghua University, Mathematical Science, Beijing, China (Ye Jun)
U.S.
Military Academy, West Point, NY (Elisha Peterson)
University College Cork, Cork, Ireland (Dmitrii Rachinskii)
University of California-Davis, Davis, CA (Sarah A. Williams)
University of Colorado-Boulder, Dept. of Physics, Boulder, CO (Michael H. Ritzwoller)
University of Richmond, Richmond, VA (Kathy W. Hoke)
University of Science and Technology of China, Dept. of Geophysics, Hefei, Anhui, China (Zhan Zhongwen)
University of Science and Technology of China, Special Class for Gifted Young, Hefei, Anhui, China (Zhang Gaigong)
Wake Forest University, Winston Salem, NC (Miaohua Jiang)
Wesleyan College, Macon, GA (Joseph A. Iskra)
Western Washington University, Bellingham, WA (Tjalling Ypma)
Westminster College, New Wilmington, PA (Barbara T. Faires)
Wuhan University of Technology, Wuhan, Hubei, China (He Lang)
Xidian University, Xi'an, Shaanxi, China (Song Yue)
Xuzhou Institute of Technology, Xuzhou, Jiangsu, China (Jiang Yingzi)
Zhejiang University, Hangzhou, Zhejiang, China (Tan Zhiyi)
Zhejiang University City College, Dept. of Information and Computing Science, Hangzhou, Zhejiang, China (Wang Gui)
Zhejiang University of Technology, Jianxing College, Hangzhou, Zhejiang, China (Wang Shiming)
Zhuhai College of Jinan University, Zhuhai, Guangdong, China (Zhang Yuanbiao)

Airplane Seating Problem (69 teams)
Asbury College, Wilmore, KY (David L. Coulliette)
Beijing Institute of Technology, Beijing, China (Li Bing-Zhao)
Beijing Jiaotong University, Beijing, China (Fan Jixiang)
Beijing Jiaotong University, Dept. of Information Management, Beijing, China (Wang Bingtuan)
Beijing Normal University, Dept. of Physics, Beijing, China (Huang Haiyang)
Beijing Normal University, Dept. of Statistics, Beijing, China (Zhang Shumei)
Beijing University of Aeronautics and Astronautics, Dept.
of Electronic Information Engineering, Beijing, China (Feng Wei)

Bemidji State University, Bemidji, MN (Colleen Livingston)

Carroll College, Helena, MT (Sam Alvey)

Central South University, Changsha, Hunan, China (Hou Muzhou)

Civil Aviation University of China, Dept. of Computer Science and Technology, Tianjin, China (Zhang Yuxiang)

Civil Aviation University of China, Science College, Tianjin, China (Mou Deyi)

Colgate University, Hamilton, NY (Warren Weckesser)

Cornell University, Ithaca, NY (Alexander Vladimirsky)

Dalian University of Technology, Institute of University Students' Innovation, Dalian, Liaoning, China (Tao Sun)

Hangzhou Dianzi University, Hangzhou, Zhejiang, China (Shen Hao)

Harbin Institute of Technology, Dept. of Management Science & Engineering, Harbin, Heilongjiang, China (Ge Hong)

Harvey Mudd College, Dept. of Computer Science, Claremont, CA (Ran Libeskind-Hadas)

Hefei University of Technology, Hefei, Anhui, China (Du Xueqiao)

Institute of Electronic Technology, Dept. of Information Engineering University Management, Zhengzhou, Henan, China (Jia Li Xin)

Korea Advanced Institute of Science and Technology, Dept. of Mechanical Engineering, Daejeon, Korea (Joongmyeon Bae)

Lawrence Technological University, Southfield, MI (Ruth G. Favro)

Lawrence Technological University, Dept. of Physics, Southfield, MI (Valentina Tobos)

Loyola College in Maryland, Baltimore, MD (Jiyuan Tao)

McGill University, Montreal, Quebec, Canada (Nilima Nigam)

Nanjing University, Dept. of Electronic Science and Engineering, Nanjing, Jiangsu, China (Wu Haodong)

Nanjing University, Dept. of Life Science, Nanjing, Jiangsu, China (Wang Jin)

Nanjing University of Posts and Telecommunications, Nanjing, Jiangsu, China (Kong Gaohua)

National University of Defense Technology, Changsha, Hunan, China (Wu Mengda)

National University of Ireland, Galway, Ireland (Niall Madden)

Northeastern University, Dept.
of Information Science and Engineering, Shenyang, Liaoning, China (Zhao Shuying)

Northwestern Polytechnical University, Dept. of Applied Chemistry, Xi'an, Shaanxi, China (Sun Zhongkui)

Radford University, Radford, VA (Laura J. Spielman)

Renmin University of China, Beijing, China (Zhou Zemin)

Rice University, Dept. of Computational and Applied Mathematics, Houston, TX (Mark Patrick Embree)

Rice University, Dept. of Electrical and Computer Engineering, Houston, TX (Michael T. Orchard—two teams)

Rowan University, Glassboro, NJ (Hieu D. Nguyen)

Shandong University, Jinan, Shandong, China (Lu Tongchao)

Shandong University, Jinan, Shandong, China (Liu Baodong—two teams)

Shanghai University of Finance and Economics, Shanghai, China (Zhang Zhen Yu)

Shanghai University of Finance and Economics, Shanghai, China (Li Ming)

Shenyang University of Technology, Shenyang, Liaoning, China (Chen Yan)

Sichuan University, Chengdu, Sichuan, China (Hai Niu)

Sichuan University, Chengdu, Sichuan, China (Zhou Jie)

Slippery Rock University, Slippery Rock, PA (Richard J. Marchand)

South China University of Technology, Guangzhou, Guangdong, China (Pan Shaohua)

Southwest University, Chongqing, China (Deng Lei)

Southwestern University of Finance and Economics, Dept. of Economic Mathematics, Chengdu, Sichuan, China (Li Shaowen)

Sun Yat-sen University, Guangzhou, Guangdong, China (Li Yan Hui)

Trinity University, San Antonio, TX (Diane Saphire)

Tsinghua University, Beijing, China (Hu Zhiming)

University of Colorado-Boulder, Dept. of Physics, Boulder, CO (Michael H. Ritzwoller)

University of Colorado at Denver, Denver, CO (Gary A. Olson)

University of Colorado at Denver, Denver, CO (Lance W. Lana)

University of Delaware, Newark, DE (Louis F. Rossi—two teams)

University of Electronic Science and Technology of China, Dept.
of Information and + +Computation Science, Chengdu, Sichuan, China (Qin Siyi) + +University of Massachusetts Lowell, Lowell, MA (James Graham-Eagle) + +University of Stellenbosch, Stellenbosch, Western Cape, South Africa (Jan H. van Vuuren) + +University of Western Ontario, London, Ontario, Canada (Allan B. MacIsaac) + +U.S. Military Academy, West Point, NY (Robert Burks) + +U.S. Military Academy, Dept. of Systems Engineering, West Point, NY (John Willis) + +Wake Forest University, Winston Salem, NC (Miaohua Jiang) + +Westminster College, Dept. of Physics, New Wilmington, PA (Samuel Lightner) + +Wuhan University, Wuhan, Hubei, China (Hu Xinqi) + +Xavier University, Cincinnati, OH (Bernd E. Rossa) + +Youngstown State University, Youngstown, OH (George T. Yates) + +# Awards and Contributions + +Each participating MCM advisor and team member received a certificate signed by the Contest Director and the appropriate Head Judge. + +INFORMS, the Institute for Operations Research and the Management Sciences, recognized the teams from Harvard University (Gerrymandering Problem) and Duke University (Airplane Seating Problem) as INFORMS Outstanding teams and provided the following recognition: + +- a letter of congratulations from the current president of INFORMS to each team member and to the faculty advisor; +- a check in the amount of $300 to each team member; +- a bronze plaque for display at the team's institution, commemorating their achievement; +- individual certificates for team members and faculty advisor as a personal commemoration of this achievement; + +- a one-year student membership in INFORMS for each team member, which includes their choice of a professional journal plus the OR/MS Today periodical and the INFORMS society newsletter. + +The Society for Industrial and Applied Mathematics (SIAM) designated one Outstanding team from each problem as a SIAM Winner. 
The teams were from MIT (Gerrymandering Problem) and Stellenbosch University (Airplane Seating Problem). Each of the team members was awarded a $300 cash prize and the teams received partial expenses to present their results in a special Minisymposium at the SIAM Annual Meeting in Boston, MA in July. Their schools were given a framed hand-lettered certificate in gold leaf.

The Mathematical Association of America (MAA) designated one Outstanding North American team from each problem as an MAA Winner. The teams were from University of Washington (Gerrymandering Problem) and Truman State University (Airplane Seating Problem). With partial travel support from the MAA, the winning teams presented their solutions at a special session of the MAA Mathfest in Knoxville, TN in August. Each team member was presented a certificate by Richard S. Neal of the MAA Committee on Undergraduate Student Activities and Chapters.

# Ben Fusaro Award

One Meritorious paper was selected for each problem for the Ben Fusaro Award, named for the Founding Director of the MCM and awarded for the second time this year. It recognizes an especially creative approach; details concerning the award, its judging, and Ben Fusaro are in Vol. 25 (3) (2004): 195-196. The Ben Fusaro Award winners were from University of Colorado-Boulder (Gerrymandering Problem) and Rowan University (Airplane Seating Problem).

# Judging

Director

Frank R. Giordano, Naval Postgraduate School, Monterey, CA

Associate Directors

Robert L. Borrelli, Mathematics Dept., Harvey Mudd College, Claremont, CA

Patrick J. Driscoll, Dept. of Systems Engineering, U.S. Military Academy, West Point, NY

William P. Fox, Dept. of Defense Analysis, Naval Postgraduate School, Monterey, CA

# Gerrymandering Problem

Head Judge

Marvin S. Keener, Executive Vice-President, Oklahoma State University, Stillwater, OK

Associate Judges

William C. Bauldry, Chair, Dept.
of Mathematical Sciences,

Appalachian State University, Boone, NC (Head Triage Judge)

Ben Fusaro, Dept. of Mathematics, Florida State University, Tallahassee, FL

Steve Horton, Dept. of Mathematical Sciences, U.S. Military Academy,

West Point, NY (MAA Judge)

Mario Juncosa, RAND Corporation, Santa Monica, CA (retired)

Michael Moody, Olin College of Engineering, Needham, MA

David H. Olwell, Naval Postgraduate School, Monterey, CA (INFORMS Judge)

John L. Scharf, Mathematics Dept., Carroll College, Helena, MT

(Fusaro Award Judge)

Michael Tortorella, Dept. of Industrial and Systems Engineering,

Rutgers University, Piscataway, NJ (Problem Author)

# Airplane Seating Problem

Head Judge

Maynard Thompson, Mathematics Dept., Indiana University,

Bloomington, IN

Associate Judges

Peter Anspach, National Security Agency, Ft. Meade, MD (Head Triage Judge)

Kelly Black, Mathematics Dept., Union College, Schenectady, NY

Karen D. Bolinger, Mathematics Dept., Clarion University of Pennsylvania,

Clarion, PA

Jim Case (SIAM Judge)

J. Douglas Faires, Youngstown State University, Youngstown, OH

Jerry Griggs, Mathematics Dept., University of South Carolina, Columbia, SC

Veena Mendiratta, Lucent Technologies, Naperville, IL

Don Miller, Mathematics Dept., St. Mary's College, Notre Dame, IN

Kathleen M. Shannon, Dept. of Mathematics and Computer Science,

Salisbury University, Salisbury, MD (MAA Judge)

Dan Solow, Mathematics Dept., Case Western Reserve University,

Cleveland, OH (INFORMS Judge)

Marie Vanisko, Dept. of Mathematics, California State University—Stanislaus,

Turlock, CA (from Fall 07 at Carroll College, Helena, MT)

(Fusaro Award Judge)

Richard Douglas West, Francis Marion University, Florence, SC

# Regional Judging Sessions

Head Judge

Patrick J. Driscoll, Dept.
of Systems Engineering, United States Military

Academy (USMA), West Point, NY

Associate Judges

Merrill Blackman, Dept. of Systems Engineering, USMA

Tim Elkins, Dept. of Systems Engineering, USMA

Darrall Henderson, Dept. of Mathematical Sciences, USMA

Steve Henderson, Dept. of Systems Engineering, USMA

Steve Horton, Dept. of Mathematical Sciences, USMA

Michael Jaye, Dept. of Mathematical Sciences, USMA

Tom Meyer, Dept. of Mathematical Sciences, USMA

# Triage Sessions:

# Gerrymandering Problem

Head Triage Judge

William C. Bauldry, Chair, Dept. of Mathematical Sciences,

Appalachian State University, Boone, NC

Associate Judges

Jeff Hirst, Rick Klima, Greg Rhoads, and René Salinas

—all from Dept. of Math'l Sciences, Appalachian State University, Boone, NC

# Airplane Seating Problem

Head Triage Judge

Peter Anspach, National Security Agency (NSA), Ft. Meade, MD

Associate Judges

Other judges from inside and outside NSA, who wish not to be named.

# Sources of the Problems

The Gerrymandering Problem was contributed by Michael Tortorella (Dept. of Industrial and Systems Engineering, Rutgers University, Piscataway, NJ). The Airplane Seating Problem was contributed by Paul J. Campbell (Mathematics and Computer Science, Beloit College, Beloit, WI).

# Acknowledgments

Major funding for the MCM is provided by the National Security Agency and by COMAP. We thank Dr. Gene Berg of NSA for his coordinating efforts.

Additional support is provided by the Institute for Operations Research and

the Management Sciences (INFORMS), the Society for Industrial and Applied Mathematics (SIAM), and the Mathematical Association of America (MAA). We are indebted to these organizations for providing judges and prizes.

We also thank the following for their involvement and support:

- Two Sigma Investments.
(This group of experienced, analytical, and technical financial professionals based in New York builds and operates sophisticated quantitative trading strategies for domestic and international markets. The firm is successfully managing several billion dollars using highly automated trading technologies. For more information about Two Sigma, please visit http://www.twosigma.com.) + +We thank the MCM judges and MCM Board members for their valuable and unflagging efforts. Harvey Mudd College, its Mathematics Dept. staff, and Prof. Borrelli were gracious hosts to the judges. + +# Cautions + +To the reader of research journals: + +Usually a published paper has been presented to an audience, shown to colleagues, rewritten, checked by referees, revised, and edited by a journal editor. Each of the student papers here is the result of undergraduates working on a problem over a weekend; allowing substantial revision by the authors could give a false impression of accomplishment. So these papers are essentially au naturel. Editing (and sometimes substantial cutting) has taken place: Minor errors have been corrected, wording has been altered for clarity or economy, and style has been adjusted to that of The UMAP Journal. Please peruse these student efforts in that context. + +To the potential MCM Advisor: + +It might be overpowering to encounter such output from a weekend of work by a small team of undergraduates, but these solution papers are highly atypical. A team that prepares and participates will have an enriching learning experience, independent of what any other team does. + +COMAP's Mathematical Contest in Modeling and Interdisciplinary Contest in Modeling are the only international modeling contests in which students work in teams. Centering its educational philosophy on mathematical modeling, COMAP uses mathematical tools to explore real-world problems. 
It serves the educational community as well as the world of work by preparing students to become better-informed and better-prepared citizens.

# Appendix: Successful Participants

# KEY:

P = Successful Participation
H = Honorable Mention
M = Meritorious
O = Outstanding (published in this special issue)

| INSTITUTION | DEPARTMENT | CITY | ADVISOR | A B |
|---|---|---|---|---|
| **ALASKA** | | | | |
| University of Alaska Fairbanks | Computer Science | Fairbanks | Orion Sky Lawlor | H |
| **CALIFORNIA** | | | | |
| California Poly. State University | Mathematics | San Luis Obispo | Jonathan E. Shapiro | P |
| Calif. State Polytechnic U. Pomona | Mathematics and Statistics | Pomona | Ioana Mihaila | P |
| | | | Hubertus F. von Bremen | H P |
| | Physics | | Nina Abramzon | P |
| Calif. State Univ. at Monterey Bay | Mathematics and Statistics | Seaside | Hongde Hu | H P |
| | Science and Env'l Policy | | Herbert Cortez | H |
| Harvey Mudd College | Computer Science | Claremont | Ran Libeskind-Hadas | M M |
| | Mathematics | | Jon Jacobsen | M P |
| Humboldt State University | Env'l Resources Engineering | Arcata | Brad A. Finney | H |
| San Diego State University | Mathematics and Statistics | San Diego | Kristin Duncan | P |
| Sonoma State University | Mathematics | Rohnert Park | Sunil K. Tiwari | H |
| University of California Davis | Mathematics | Davis | Sarah A. Williams | M P |
| University of California Los Angeles | Mathematics | Los Angeles | Luminita Aura Vese | P |
| University of San Diego | Mathematics | San Diego | Diane Hoffoss | H P |
| | Physics | | Daniel Sheehan | P P |
| **COLORADO** | | | | |
| Colorado State University | Mathematics | Fort Collins | Michael J. Kirby | H |
| Colorado State University - Pueblo | Mathematics | Pueblo | James Louisell | P |
| University of Colorado - Boulder | Applied Mathematics | Boulder | Anne M. Dougherty | H |
| | | | Bengt Fornberg | H |
| | Physics | | Michael H. Ritzwoller | M M |
+ +
| INSTITUTION | DEPARTMENT | CITY | ADVISOR | A B |
|---|---|---|---|---|
| University of Colorado at Denver | Mathematics | Denver | Gary A. Olson | M |
| | | | Lance W. Lana | M |
| **CONNECTICUT** | | | | |
| Connecticut College | Mathematics and CS | New London | Sanjeeva Balasuriya | H |
| Sacred Heart University | Mathematics | Fairfield | Peter Loth | H |
| Southern Connecticut State University | Mathematics | New Haven | Ross B. Gingrich | M |
| **DELAWARE** | | | | |
| Charter School of Wilmington | Mathematics | Wilmington | L. Charles Biehl | P |
| University of Delaware | Mathematical Sciences | Newark | Louis F. Rossi | M M |
| **DISTRICT OF COLUMBIA** | | | | |
| George Washington University | Mathematics | Washington | Lowell Abrams | BH P |
| **FLORIDA** | | | | |
| Embry-Riddle Aeronautical University | Mathematics | Daytona Beach | Greg S. Spradlin | BH |
| Jacksonville University | Mathematics | Jacksonville | Robert A. Hollister | BP P |
| | Physics | | Paul R. Simony | P |
| University of South Florida | Mathematics | Tampa | Brian Curtin | P |
| **GEORGIA** | | | | |
| Georgia Southern University | Mathematical Sciences | Statesboro | Goran Lesaja | PH P |
| University of West Georgia | Mathematics | Carrollton | Scott Gordon | P |
| Wesleyan College | Chemistry and Physics | Macon | Charles J. Benesh | H |
| | | | Keith L. Peterson | H |
| | Mathematics | | Joseph A. Iskra | M H |
| **ILLINOIS** | | | | |
| Northern Illinois University | Mathematical Sciences | DeKalb | Ying C. Kwong | P |
| **INDIANA** | | | | |
| Franklin College | Mathematics and Computing | Franklin | John P. Boardman | P |
| Rose-Hulman Institute of Technology | CS and Software Engineering | Terre Haute | Cary Laxer | M P |
| | Mathematics | | David J. Rader | M P |
| Saint Mary's College | Mathematics | Notre Dame | Joanne R. Snow | P H |
| **IOWA** | | | | |
| Grand View College | Mathematics and CS | Des Moines | Michelle Ruse | H |
| Grinnell College | Mathematics and Statistics | Grinnell | Karen L. Shuman | P P |
| Luther College | Computer Science | Decorah | Steve Hubbard | H |
| | Mathematics | | Reginald Laursen | H P |
| Mt. Mercy College | Mathematics | Cedar Rapids | K. R. Knopp | P |
| Simpson College | Chemistry and Physics | Indianola | Werner S. Kolln | P |
| | Computer Science | | Paul Craven | H |
| | Mathematics | | Martha E. Waggoner | H P |
| **KANSAS** | | | | |
| Benedictine College | Mathematics and CS | Atchison | Linda J. Herndon | P |
| Emporia State University | Mathematics | Emporia | Brian D. Hollenbeck | P |
| Kansas State University | Mathematics | Manhattan | Dave R. Auckly | M P |
| **KENTUCKY** | | | | |
| Asbury College | Mathematics and CS | Wilmore | David L. Coulliette | P M |
| Morehead State University | Mathematics and CS | Morehead | Michael Dobranski | P |
| Northern Kentucky University | Mathematics | Highland Heights | Gail Mackin | H P |
| **LOUISIANA** | | | | |
| Louisiana Tech University | Mathematics and Statistics | Ruston | Katie A. Evans | H |
| **MAINE** | | | | |
| Colby College | Computer Science | Waterville | John E. Augustine | P P |
| | Mathematics | | Jan Holly | P |
| **MARYLAND** | | | | |
| Johns Hopkins University | Applied Mathematics and Statistics | Baltimore | Fred Torcaso | H |
| Loyola College in Maryland | Mathematical Sciences | Baltimore | Jiyuan Tao | M H |
| Salisbury University | Mathematics and CS | Salisbury | Steven M. Hetzler | H |
| | | | Troy V. Banks | H |
| Towson University | Mathematics | Towson | Mike O'Leary | P |
| Villa Julie College | Economics | Stevenson | Eileen C. McGraw | H P |
| | Mathematics | | Susan P. Slattery | P H |
| **MASSACHUSETTS** | | | | |
| College of the Holy Cross | Mathematics and CS | Worcester | Gareth E. Roberts | P |
| Harvard University | Engineering and Applied Science | Cambridge | Michael Brenner | P P |
| | Mathematics | | Clifford H. Taubes | O M |
| Massachusetts Institute of Technology | Mathematics | Cambridge | Martin Z. Bazant | O M |
| | Physics | | Leonid Levitov | M M |
| Salem State College | Mathematics | Salem | Luis P. Poitevin | P |
| Simon's Rock College | Mathematics | Great Barrington | Allen B. Altman | H P |
| | Physics | | Mike Bergman | P |
| University of Massachusetts Lowell | Mathematical Sciences | Lowell | James Graham-Eagle | M |
| **MICHIGAN** | | | | |
| Albion College | Mathematics and CS | Albion | Darren E. Mason | P |
| Lawrence Technological University | Mathematics and CS | Southfield | Ruth G. Favro | P M |
| | Physics | | Valentina Tobos | M P |
| Siena Heights University | Mathematics | Adrian | Jeff Kallenbach | P |
| | | | Timothy H. Husband | P |
| **MINNESOTA** | | | | |
| Bemidji State University | Mathematics and CS | Bemidji | Colleen Livingston | M M |
| Bethel University | Mathematics | St. Paul | William M. Kinney | H |
| Saint John's University | Mathematics | Collegeville | Robert J. Hesse | H |
| University of Minnesota Duluth | Mathematics and Statistics | Duluth | Bruce B. Peckham | H |
| **MISSOURI** | | | | |
| Drury University | English | Springfield | Ken V. Egan Jr. | P |
| | Mathematics and CS | | Keith J. Coates | P P |
| | Physics | | Bruce Callen | H |
| Northwest Missouri State University | Mathematics and Statistics | Maryville | Russell N. Euler | P |
| Saint Louis University | Aerospace and Mechanical Eng'ing | Saint Louis | Sanjay Jayaram | P P |
| Southeast Missouri State University | Mathematics | Cape Girardeau | Robert W. Sheets | P |
| Truman State University | Mathematics | Kirksville | Steven J. Smith | H O |
+ +208 The UMAP Journal 28.3 (2007) + +
| INSTITUTION | DEPARTMENT | CITY | ADVISOR | A B |
|---|---|---|---|---|
| **MONTANA** | | | | |
| Carroll College | Math. Engineering and CS | Helena | Holly S. Zullo | M |
| | | | Kelly S. Cline | M |
| | Natural Sciences | | Sam Alvey | M H |
| **NEBRASKA** | | | | |
| Hastings College | Mathematics | Hastings | David B. Cooke | P |
| Nebraska Wesleyan University | Mathematics and CS | Lincoln | Kristin A. Pfabe | P |
| Wayne State College | Mathematics | Wayne | Timothy L. Hardy | P P |
| **NEW HAMPSHIRE** | | | | |
| Rivier College | Mathematics and CS | Nashua | William E. Bonnice | P |
| **NEW JERSEY** | | | | |
| Princeton University | Mathematics | Princeton | Robert Calderbank | H |
| Richard Stockton College of New Jersey | Mathematics | Pomona | Wesley S. Cross | P |
| Rowan University | Mathematics | Glassboro | Hieu D. Nguyen | M |
| | | | F. Olcay Ilicasu | P |
| | Physics and Astronomy | | Eduardo V. Flores | P |
| **NEW MEXICO** | | | | |
| New Mexico Inst. of Mining and Technology | Mathematics | Socorro | John D. Starrett | P |
| New Mexico State University | Mathematical Sciences | Las Cruces | Mary M. Ballyk | P |
| **NEW YORK** | | | | |
| Clarkson University | Computer Science | Potsdam | Kathleen R. Fowler | P P |
| | Mathematics | | Joseph D. Skufca | H H |
| Colgate University | Mathematics | Hamilton | Warren Weckesser | M P |
| Columbia University | Applied Physics and Applied Math. | New York | David E. Keyes | P |
| Concordia College-New York | Mathematics | Bronxville | John F. Loase | H P |
| Cornell University | Mathematics | Ithaca | Alexander Vladimirsky | M M |
| | Op'ns Research & Ind'l Eng'ng | | Eric Friedman | P P |
| Ithaca College | Computer Science | Ithaca | Ali Erkan | H |
| | Mathematics | | John C. Maceli | P |
| | Physics | | Bruce G. Thompson | H |
| Nazareth College | Mathematics | Rochester | Daniel Birmajer | P |
| Rensselaer Polytechnic Institute | Chemical and Biochem. Engineering | Troy | Shekhar Garde | H H |
| | Mathematical Sciences | Troy | Peter R. Kramer | P H |
| Union College | Mathematics | Schenectady | Paul D. Friedman | P |
| U. S. Military Academy | Mathematics | West Point | Elisha Peterson | M |
| | | | Robert Burks | M |
| | Systems Engineering | | John Willis | M P |
| Westchester Community College | Mathematics | Valhalla | Marvin L. Littman | P |
| **NORTH CAROLINA** | | | | |
| Appalachian State University | Mathematical Sciences | Boone | Katherine J. Mawhinney | P P |
| Davidson College | Economics | Davidson | Fred H. Smith | H |
| | Mathematics | | Timothy P. Chartier | P P |
| | Physics | | Timothy H. Gfroerer | P |
| Duke University | Computer Science | Durham | Owen Astrachan | M O |
| | Mathematics | | Anne Catlla | O |
| | | | David Kraines | O |
| High Point University | Mathematics | High Point | Brian I. Fulton | H |
| Meredith College | Mathematics and CS | Raleigh | Cammey E. Cole | P |
| NC School of Science and Math. | Mathematics | Durham | Daniel J. Teague | H P |
| Wake Forest University | Mathematics | Winston Salem | Miaohua Jiang | M M |
| Western Carolina University | Geosciences & Natural Resource Mgmt | Cullowhee | David Kinner | P |
| | Mathematics and CS | | Erin McNelis | P |
| | | | Jeffrey K. Lawson | P |
| **OHIO** | | | | |
| Bowling Green State University | Mathematics and Statistics | Bowling Green | Juan P. Bes | P |
| Malone College | Mathematics | Canton | David W. Hahn | P |
| Miami University | Mathematics and Statistics | Oxford | Doug E. Ward | H |
| University of Dayton | Mathematics | Dayton | Youssef N. Raffoul | P |
| Xavier University | Mathematics and CS | Cincinnati | Bernd E. Rossa | M |
| Youngstown State University | Civil/Env'l and Chemical Engineering | Youngstown | Scott Martin | P |
| | Mathematics and Statistics | | George T. Yates | H M |
| INSTITUTION | DEPARTMENT | CITY | ADVISOR | A B |
|---|---|---|---|---|
| **OREGON** | | | | |
| Eastern Oregon University | Physics | La Grande | Anthony A. Tovar | P |
| Lewis and Clark College | Mathematical Sciences | Portland | Elizabeth Stanhope | P |
| Linfield College | Computer Science | McMinnville | Daniel K. Ford | P H |
| | Mathematics | | Julia D. Fredericks | P P |
| Oregon Institute of Technology | Mathematics | Klamath Falls | Jim Fischer | P |
| Oregon State University | Mathematics | Corvallis | Nathan L. Gibson | M |
| Pacific University | Mathematics | Forest Grove | John August | M |
| | | | Michael Boardman | P |
| Portland State University | Mathematics and Statistics | Portland | Gerardo Lafferriere | P |
| Western Oregon University | Mathematics | Monmouth | Maria G. Fung | P |
| **PENNSYLVANIA** | | | | |
| Altoona Area High School | Mathematics | Altoona | Erin S. Wisor | P P |
| Bloomsburg University | Mathematics, CS, and Statistics | Bloomsburg | Kevin K. Ferland | P |
| Lafayette College | Mathematics | Easton | Thomas Hill | P |
| Shippensburg University | Mathematics | Shippensburg | Paul T. Taylor | P |
| Slippery Rock University | Mathematics | Slippery Rock | Richard J. Marchand | H M |
| | Physics | | Athula R. Herat | P O |
| University of Pittsburgh | Mathematics | Pittsburgh | Jonathan E. Rubin | P |
| Westminster College | Mathematics and CS | New Wilmington | Barbara T. Faires | M P |
| | Physics | | Samuel Lightner | M P |
| **SOUTH CAROLINA** | | | | |
| College of Charleston | Mathematics | Charleston | Amy N. Langville | P P |
| Francis Marion University | Mathematics | Florence | David Szurley | P |
| | Physics and Astronomy | | David Anderson | P |
| Midlands Technical College | Mathematics | Columbia | John R. Long | H P |
| University of South Carolina | Mathematics | Columbia | Lincoln Lu | P |
| **SOUTH DAKOTA** | | | | |
| Mount Marty College | Computer Science | Yankton | Bonita Gacnik | H |
| | Mathematics | | Stephanie A. Gruver | P P |
| **TENNESSEE** | | | | |
| Lipscomb University | Mathematics | Nashville | Mark A. Miller | P |
| Tennessee Tech University | Mathematics | Cookeville | Andrew J. Hetzel | P |
| **TEXAS** | | | | |
| Rice University | Computational and Applied Math. | Houston | Mark P. Embree | P M |
| | Electrical & Computer Engineering | | Michael T. Orchard | M M |
| Trinity University | Engineering Science | San Antonio | William Collins | H |
| | Mathematics | | Brian Miceli | P |
| | | | Diane Saphire | M |
| **VIRGINIA** | | | | |
| Eastern Mennonite University | Math and Sciences | Harrisonburg | Leah S. Boyer | P P |
| Godwin High School | Science Math. and Tech. Ctr | Richmond | Ann W. Sebrell | H |
| James Madison University | Mathematics and Statistics | Harrisonburg | Anthony Tongen | H |
| | | | David B. Walton | M |
| Longwood University | Mathematics and CS | Farmville | M. Leigh Lunsford | P P |
| Maggie Walker Governor's School | Mathematics | Richmond | John A. Barnes | H H |
| | Science | | Harold Houghton | P |
| Radford University | Mathematics and Statistics | Radford | Laura J. Spielman | M P |
| University of Richmond | Mathematics and CS | Richmond | Kathy W. Hoke | M |
| Virginia Western | Mathematics | Roanoke | Steve T. Hammer | P |
| Virginia Western Community College | Mathematics | Roanoke | Ruth A. Sherman | P |
| | Physics | | Gerald D. Benson | P |
| **WASHINGTON** | | | | |
| Central Washington University | Mathematics | Ellensburg | Stuart F. Boersma | H |
| Heritage University | Mathematics | Toppenish | Richard Swearingen | P |
| Pacific Lutheran University | Mathematics | Tacoma | Bryan C. Dorner | P |
| | Mathematics | | Mei Zhu | P |
| Seattle Pacific University | Computer Science | Seattle | Elaine Weltz | P |
| | Electrical Engineering | | Melani Plett | P |
| Seattle Pacific University | Mathematics | Seattle | Wai Lau | H P |
| University of Puget Sound | Mathematics | Tacoma | Michael Z. Spivey | O |
| University of Washington | Applied and Computational Math. | Seattle | Anne Greenbaum | O H |
| | Mathematics | | James A. Morrow | O H |
| Washington State University | Mathematics | Pullman | Mark Schumaker | P |
| Western Washington University | Computer Science | Bellingham | Saim Ural | P P |
| | Mathematics | | Tjalling Ypma | M P |
INSTITUTIONDEPARTMENTCITYADVISORAB
WISCONSIN
Beloit CollegeMathematics and CSBeloitPaul J. CampbellP
Edgewood CollegeMathematicsMadisonSteven PostP
University of Wisconsin - La CrosseMathematicsLa CrosseBarbara A. BennieP
AUSTRALIA
U. of Southern QueenslandMathematics and ComputingToowoombaSergey A. SuslovH
CANADA
Brandon UniversityMathematics and CSBrandonDoug A. PickeringH
Dalhousie UniversityMathematics and StatisticsHalifaxGeorg W. HofmannP
Queen's UniversityMathematics and StatisticsKingstonNavin KashyapP
York UniversityMathematics and StatisticsTorontoHongmei ZhuH H
University of Western OntarioApplied MathematicsLondonAllan B. MacIsaacM
McGill UniversityMathematics and StatisticsMontrealNilima NigamPM
Stephen W. DruryP
University of SaskatchewanMathematics and StatisticsSaskatoonJames A. BrookeP
CHINA
Anhui
Anhui UniversityApplied MathematicsHefeiYanhong BaoP
Network and Information EngineeringHefeiQuanbing ZhangP
StatisticsHuayou ChenP
Hefei University of TechnologyApplied MathematicsXueqiao DuM H
MathematicsYoudu HuangP
Yongwu ZhouP
Nonlinear Science CenterGifted YoungHefeiChuanming WeiP
U. of Science and Technology of ChinaAutomationHefeiXuli LeH
Electronic Engineering and Information Sci.Qiang MengH
GeophysicsZhongwen ZhanM
Gifted YoungGaigong ZhangM
Zijing DingH
Beijing
Beihang UniversityAdvanced EngineeringBeijingShangzhi LiH
Automation Sci. and Electr. Eng'ngHaiyan SunPH
Beijing Forestry UniversityBio. Sciences and Biotech.BeijingLi HongJunP
Mengning GaoM
InformationMengning GaoP
MathematicsLi HongJunP
ScienceSheng LiuP
Xiaochun WangP
Beijing High School No. 4MathematicsBeijingWei LiP
EnglishShi WangPP
Foreign LanguageFang FangP
Beijing Information Science and Technology UniversityMathematicsBeijingXue ChunyanPP
Beijing Insitute of TechnologyMathematicsBeijingQun RenH
Gui-Feng YanH
Hong-Ying ManH
Bing-Zhao LiM
Beijing Jiaotong UniversityApplication MathematicsBeijingJixiang FanMP
Computation MathematicsYongguang YuP
Computer ScienceXiaoxia WangPH
ElectronicsChengchao YouP
Information EngineeringYongsheng WeiHP
Information ManagementBingtuan WangM
MathematicsBingtuan WangH
PhysicsXiaoming HuangP
SoftwareWei LuP P
Gang YuanPP
StatisticsZhouhong WangH
Traffic EngineeringLiang WangH
Jun WangP
Beijing Language and Culture UniversityComputer ScienceBeijingGuilong LiuPP
InformationXiaoxia ZhaoP
MathematicsRou SongP
Beijing Normal UniversityMathematicsQing HeH
Haiyang HuangP
Hang QiuP
Qing HeHP P
PhysicsHaiyang HuangM
Remote SensingDai YongjiuH
StatisticsHengjian CuiP
Shumei ZhangM
Yong LiP
Fu Lai LiuH
Yong LiH
Statistics and Financial MathematicsCui HengjianP
Water ScienceHonghui WangP
Beijing University of Aeronautics and AstronauticsAdvanced EngineeringBeijingLinping PengP
Electronic Info. EngineeringFeng WeiM
ScienceWu SanxingP
Beijing University of Chemical TechnologyElectric ScienceBeijingGuangfeng JiangH
Kaisu WuP
MathematicsWenyan YuanP
Xinhua JiangP
Beijing University of Posts and Telecomm.Applied MathematicsBeijingHongxiang SunP
Zuguo HeHP
Applied PhysicsJinkou DingH
CS and TechnologyZuguo HeH
Electronic EngineeringQing ZhouP
Information EngineeringWenbo ZhangH
Telecomm. EngineeringJianhua YuanH
Beijing University of TechnologyApplied ScienceBeijingZhen WangP
Yi XueP
Beijing Wuzi UniversityInformaticsBeijingZhenping LiMH
Tian DeliangHP
Cheng XiaohongP P
Central University of Finance and EconomicsMathematicsBeijingXiuguo WangP P
MathematicsXianjun YinH
MathematicsZhaoxu SunP
China Agricultural UniversityHonors Programme (Life Science) ScienceBeijingXuewei WangP
Hui ZouH
Jun Feng LiuHH
Xu Hua LiuP
China University of GeosciencesGeosciences and ResourcesBeijingDeng YanHH
Water Resources and Environmental ScienceZhaodou ChenPP
China University of PetroleumMathematics and PhysicsBeijingXiaoguang LuH
North China Electric Power UniversityAutomationBeijingWen TanH
Electrical EngineeringPan ZhiH
Electric Power EngineeringLi Guo DongP
Electrical EngineeringXirong ZhangP
MathematicsQiu QiRongP
Mathematics and PhysicsQi Rong QiuP
Peking UniversityApplied MathematicsBeijingMinghua DengHP
Business Statistics and EconomicsJunni ZhangM
Computer ScienceXiaolin WangH
FinancePeng HeHH
Financial MathematicsXin YiP P
Guiding Centre for Students' Extracurricular ActivitiesLiang LuPP
MathematicsXufeng LiuO H
Quantum Electronics InstituteZhigang ZhangH
Scientific and Engineering ComputingPingwen ZhangP
Renmin University of ChinaApplied MathematicsBeijingLitao HanH
Computer ScienceWang WeiH
Information SchoolYunyan YangP
Management of Information SystemsDeying LiP
MathematicsYunyan YangP
Zhou ZeminM
216 The UMAP Journal 28.3 (2007)

INSTITUTION   DEPARTMENT   CITY   ADVISOR   A B
Tsinghua UniversityChemistryBeijingXianqing QiuP
Mathematical ScienceZhiming HuP
Mathematical ScienceZhiming HuM
Mathematical ScienceJun YeMH
Chongqing
Chongqing UniversityChemistryChongqingLi ZhiliangHP
Computer Software EngineeringWu KaiguiP
MathematicsLiu QiongfangP
Mathematics and Physics, Information and Computational ScienceGong QuP
Southwest UniversityMathematics and StatisticsChongqingXuegao ZhengP
Chunlei TangH
Lei DengM
Wendi WangH
Fujian
Fujian Normal UniversityComputer ScienceFuzhouChangfeng MaP
Huiling LinP
MathematicsShenggui ZhangP
Qinhua ChenP
Fujian University of TechnologyMathematics and PhysicsFuzhouLin LiP
Hua Qiao UniversityMathematicsQuanzhouHai Zhou SongP
Xiamen UniversityAutomationXiamenLangcai CaoP
PhysicsGuozhen SuBP P
Guangdong
Guangzhou UniversityMathematicsGuangzhouFu RonglinP
Shang YadongP
Wang XiaofengP
Jinan UniversityComputer ScienceGuangzhouShi Zhuang LuoP
ShiQi YeP
MathematicsDaqiang HuH
Suohai FanP
Jinan University, Zhuhai CollegeComputer ScienceZhuhaiGuangqing LuH P
Yuanbiao ZhangM
Shenzhen PolytechnicPackaging EngineeringZhiwei WangP
Industrial Training CenterShenzhenHongmei TianP
Dongping WeiH
Mechanical and Electrical EngineeringKanzhen ChenH
South China Normal UniversityInformation and Computational Sci.GuangzhouTan YangM
MathematicsShaoHui ZhangH P
StatisticsQuanXin ZhuH
South China University of TechnologyCS and EngineeringGuangzhouZhi-Sui TaoM
Mathematical SciencesShen-Quan LiuP
Shaohua PanM
Ping HuangP
Sun Yat-Sen UniversityComputer ScienceGuangzhouXiaoMing LiuH
GeographyZuo Jian YuanH
MathematicsYan Hui LiM
StatisticsXiaoLing YinH
Guangxi
Guangxi Teacher Education UniversityMathematicsNanningChengDong WeiP
Mathematics and CSJianWei ChenP
XiongFa MaiP
Guangxi University of Finance and EconomicsMathematics and StatisticsNanningYuanyuan TanP
Guilin University of Electronic TechnologyMathematics and CSGuilinXingxing HeP
Yongxiang MoH
Information and CommunicationSunyong WuP
University of GuangxiApplied MathematicsNanningXiaoceng WuPP
Computing ScienceYuejin LuP
Information ScienceZhongxing WangPP
Operational ManagementRuxue WuP P
Guizhou University for NationalitiesMathematics and CSGuiyangZhensheng HongP
Hebei
North China Electric Power UniversityMathematicsBaodingGendai GuP
Peng LiH
Mathematics and PhysicsBo XiongH
Shenghua WangP
Po ZhangH
Yagang ZhangH
Heilongjiang
Daqing Petroleum InstituteMathematicsDaqingYang YunfengHP
Kong LingbinPP
Harbin Engineering UniversityInfo. and Computer ScienceHarbinYu FeiP
MathematicsLuo YueshengH
Shen JihongHH
Harbin Medical UniversityBioinformaticsHarbinWang Qiang HuHH
Harbin University of Science and TechnologyMathematicsHarbinDongmei LiP
Fengqiu LiuP
Guangyue TianH
Zuobao CaoP
Heilongjiang UniversityMathematical SciencesHarbinWeijun MaPP
Jia Musi UniversityMathematicsJiamusiZhang JuhongPP
Northeast Agricultural UniversityAgricultural EngineeringHarbinFang Ge LiP
CS and TechnologyFang Ge LiP
Qiufeng WuP
Information and Computing ScienceYa Zhuo ZhangP
Harbin Institute of TechnologyEnvironmental Science and EngineeringHarbinTong ZhengH
Management Science and EngineeringHong GeMH
Wei ShangHP
MathematicsChiping ZhangH
Guanghong JiaoPP
Kean LiuPP
Harbin Institute of Tech. (cont'd)MathematicsShouting ShangH
Guofeng FanP
Xianyu MengP
Chiping ZhangP
Xilian WangH P
ScienceYunfei ZhangHP
Heilongjiang Inst. of Science and Tech.Mathematics and MechanicsHarbinHong DuP
Henan
Information Engineering UniversityInformation SecurityZhengzhouZhang Xiao YongP
Surveying and MappingZhang GuoliangP
ManagementJia Li XinM
Hubei
Huazhong University of Science and Tech.Electronics and Information EngineeringWuhanYan DongP
Industrial and Manufacturing Sys. Eng'ngLiang GaoMP
MathematicsYizhi WangP
Wuhan UniversityElectronic InformationWuhanYuanming HuP
MathematicsWuhanHu XinqiM
Luo ZhuangchuH
Mathematics and StatisticsYuanming HuH
Shihua ChengP
Xinqi HuP
Aijiao DengP P
Liuyi ZhongP P
Wuhan University of TechnologyAutomobile EngineeringWuhanGao FeiP
Control EngineeringHuang XiaoweiP
Yang WenxiaP
Zhou JunH
Zhu HuapingP
MathematicsChen JianyeP
Chu YangjieP
He LangM
Zhu HuiyingH
Wuhan University of Technology (cont'd)StatisticsTian WufengH
Mao ShuhuaP
Hunan
Central South UniversityApplied MathematicsChangshaZheng ZhoushunP
Qin XuanyunP
Geographical Information SystemsZhang HongyanP
Information and CSZhang HongYanH
Math'l Science and Computing Tech.Hou MuzhouM
Zhu ShihuaP
Hunan UniversityApplied MathematicsChangshaXiaopei LiP
CS and TechnologyHao WuP
Probability and StatisticsHan LuoH
Liping WangP
National University of Defense TechnologyApplied MathematicsChangshaDan WangMP
MathematicsZiyang MaoP
Yi WuOP
Mengda WuMP
PhysicsMeihua XieP
Xiaojun DuanPH
Inner Mongolia
Inner Mongolia UniversityAutomationHuhehaoteZhuang MaP
MathematicsHaitao HanP
Jiangsu
China University of Mining and TechnologyApplied MathematicsXuzhouWu ZongxiangP
Information and Electrical EngineeringGong DunweiP
MathematicsZhou ShengwuP P
Institute of System EngineeringMathematicsZhenjiangHonglin YangPP
Jiangsu UniversityMathematicsZhenjiangGuilong GuiP P
ScienceYimin LiPP
Nanjing Univ. of Science and TechnologyApplied MathematicsNanjingPeibiao ZhaoP
Zhipeng QiuP
Nanjing University of Science and Tech. (cont'd)StatisticsLiwei LiuP
Zhang ZhengjunP
Nanjing UniversityElectronic Science and EngineeringNanjingHaodong WuM P
Qian ChenH
FinanceDixin ZhangP
Life ScienceJin WangM
MathematicsWeihua HuangP P
PhysicsHuimin ShaoP
Fa LiuH
Nanjing University of Posts and Telecomm.Mathematics and PhysicsNanjingXu LiWeiP
Kong GaohuaM
Qiu ZhonghuaH
Shi AijuH P
Nantong UniversityElectrical EngineeringNantongGuoping LuH
ScienceYuehua GuoP
Zhao Min GroupP
Nonlinear Scientific Research CenterMathematicsZhenjiangGang XuPH
PLA (People's Liberation Army) University of Science and TechnologyApplied Mathematics and PhysicsNanjingShi HanshengH
Communication EngineeringLiu ShoushengP
ScienceShen JinrenP
Teng JiajunP
Southeast UniversityMathematicsNanjingEnshui ChenPP
Feng WangP P
Dan HeHP
Liyan WangHH
Xuzhou Institute of TechnologyMathematicsXuzhouJiang YingziM P
Li SubeiH P
Jiangxi
Gannan Normal UniversityComputerGanzhouYan ShenhaiP
MathematicsXie XianhuaH
Xu JingfeiH
Nanchang UniversityMathematicsNanchangChen TaoH
Chen YujuP
Chuanrong LiaoP
Xinsheng MaH
Jilin
Beihua UniversityBasic CoursesJilin CityHongwei ZhaoP
Yuncai WeiH
Zhaojun ChenP
Ming ZhaoP
Jilin Teachers' Institute of Engineering and TechnologyScienceChangchunChangchun LiP
Jilin UniversityCommunication EngineeringChangchunChunling CaoH
XiuLing YaoP
MathematicsPeichen FangPP
Jinying LiuP
Lu XianruiH
Qingdao HuangP
Shaoyun ShiP P
Yang CaoP
Zhao ShishunP
Liaoning
Dalian Fisheries UniversityScienceDalianZhang LifengP
Dalian Maritime UniversityMathematicsDalianY. J. ZhangH P
Dalian Nationalities Innovation CollegeInnovation Education CenterDalianChen Xing WenP P
Tian YunH P
Dalian Nationalities UniversityCS and EngineeringDalianYan De JunPP
Liu RuiP
Liu Xiang DongP
Dean's OfficeGe Ren DongP
Guo QiangP
Ma Yu MeiP
Xue YeP
Dalian Nationalities University (cont'd)Dean's OfficeLi Xiao NiuH
Liu Jian GuoP
Zhang Heng BoP
Fu JieP
Dalian Navy AcademyComputer ScienceDalianYin Cheng yiP P
Operation ResearchFeng JieH P
Dalian Neosoft Institute of InformationInformation Technology and Business ManagementDalianYuxin ZhaoP
Dalian UniversityInformation EngineeringDalianLiu GuangzhiP
Dong XiangyuP P
MathematicsGang JiataiH
Liu ZixinH
Tan XinxinH
Dalian University of TechnologyApplied MathematicsDalianMingfeng HeP
Cai YuP
Geng XinghuaP
He MingfengH
Wang YiH
Xiao DiP
City InstituteHongzeng WangP
Lianfu LiH
Meng LinP
Xubin GaoP
Institute of University Students' InnovationFeng LinP
He MingfengP
Sun TaoM P
Wu ZhenyuP P
Xi WanwuP
Software SchoolJiang HeP
Li ZheP P
Xu ShengjunP
Zhe LiP
Liaoning Province Shiyan High SchoolYear 08 (Grade 2)ShenyangChunzhi ZhangP
Northeastern UniversityAI and RoboticsShenyangFeng PanP
Yunzhou ZhangP
Computer SystemHuilin LiuH
Control and SimulationPeifeng HaoH
Jianjiang CuiP
Information Science and EngineeringChengdong WuH
Shuying ZhaoM
Modern Design and AnalysisXuehong HeM
Shenyang Inst. of Aero. EngineeringInformation and CSShenyangShiyun WangP
ScienceShenyangWeifang LiuP
Limei ZhuM
Feng ShanHH
Feng ShanP
Shenyang Normal UniversityMathematics and System ScienceShenyangLi XiaoyiP
Liu YuzhongP
Meng XianjiP
Shenyang Pharmaceutical UniversityMathematics Teaching and ResearchShenyangXiang Rong WuP P
Shenyang University of TechnologyMathematicsShenyangYan ChenM P
Shenyang UniversityMathematicsShenyangGuiyan MuP
Shaanxi
North University of ChinaMathematicsTaiyuanYang MingP
Wang PengP
ScienceYang XiaofengP
North University of China FenxiaoElectrical EngineeringTaiyuanXiaoren FanH P
Foreign LanguageYongjian RenP
Northwest A&F UniversityScienceYanglingWang JingminP P
Northwest UniversityPhysicsXi'anYingjie DuP
Qingyan DongH
Northwestern Polytechnical UniversityApplied ChemistryXi'anGenjiu XuP
Sun ZhongkuiM
Applied MathematicsPeng GuohuaH
Nie YufengM
Northwestern Polytech. University (cont'd)Applied PhysicsYouming LeiM
Natural and Applied ScienceSun HaoH
Xiao HuayongH P
Yong XuH
Taiyuan University of TechnologyMathematicsTaiyuanYiqiang WeiP
Xi'an Communication InstituteComputer ScienceXi'anHong WangP
Electronic EngineeringXinshe QiP
MathematicsGuo LiP P
PhysicsDongsheng YangP
Xi'an Jiaotong UniversityApplied MathematicsXi'anXiaoliang HeP
Zhuosheng ZhangH
MathematicsFeng LiuP
Lizhou WangH
Xidian UniversityApplied MathematicsXi'anYue SongM
Computational MathematicsHongyun MengP
Industrial and Applied MathematicsFeng YeH
Guoping YangH
Shandong
Shandong UniversityComputer Science and TechnologyJinanXinshun XuH
Xinshun XuP
Mathematics and System ScienceBaodong LiuM
Shuxiang HuangP P P
Tongchao LuM P
Baodong LiuPM P
Shandong University (Weihai)Applied MathematicsWeihaiZhulou CaoP P P
Shandong University of Science and Tech.Information Science and EngineeringQingdaoShanchen PangM H
Shanghai
China Textile UniversityMathematicsShanghaiChen GangH
Donghua UniversityInformation Science and TechnologyShanghaiJie QiH
Yongsheng DingP
ScienceYong GeP
Junmei HouP
Mengyu HuP
East China Normal UniversityStatisticsShanghaiYiming ChengH
East China University of Science and TechnologyMathematicsShanghaiLiu ZhaohuiP
Qin YanP P
PhysicsSu ChunjieH
Fudan UniversityCS and EngineeringShanghaiHaibin KanH
International FinanceHongzhong LiuHH
MathematicsYuan CaoP
Zhijie CaiP
Mechanics and Engineering ScienceSheng CuiP
Fudan University Research Ctr for Nonlinear ScienceApplied MathematicsShanghaiWei LINP
Shanghai Finance UniversityApplied MathematicsShanghaiYumei LiangP
FinanceRui LiP P
Shanghai Financial and Economic U.EconomicsShanghaiZheng XuP
Shanghai Foreign Language SchoolAdministrationShanghaiLiqun PanH P
Yu SunH P
MathematicsFeng XuPP
Weiping WangP P
Principal's OfficeLiang TaoHP
Shanghai Hongkou Inst. of Educ.MathematicsShanghaiJun HuMP
Shengyang YeH P
Shanghai Hongkou Youth CenterMathematicsShanghaiJian TianP
Shanghai Jiading No. 1 Senior HSShanghaiXie Xilin and Fang YunpingP P
Shanghai Jiaotong UniversityMathematicsShanghaiBaorui SongHP
Jianguo HuangP P
Shanghai Normal UniversityApplied MathematicsShanghaiJizhou ZhangP
Applied Mathematics and StatisticsYongbing ShiP
Haiyan XuP
Computational MathematicsQian GuoP
Shanghai UniversityMathematicsShanghaiDong Hua WuPH
ShanghaiYouhua HeP
ScienceYuanDi WangP
Wei HuangPP
Shanghai University of Finance and EconomicsApplied MathematicsShanghaiZanzan LiP
Dawu YuP
Li MingM
Zhen Yu ZhangM
Chengdong DongH
Yu ZhuH
FinanceShanghaiShuyuan PanP
Shanghai Youth Center of Science and Technology EducationMathematicsShanghaiGan ChenP
Scientific TrainingGan ChenH
Sino European School of Technology Tongji UniversityFundamental Science and TechnologyShanghaiYang YongjianHP
Environmental Science and Eng'ngShanghaiHailong YinPP
MathematicsHualong ZhangPP
Jin LiangP
SoftwareJialiang XiangHH
Xiongdad ChenP
University of Shanghai for Science and TechnologyScienceShanghaiJia GaoPH
Yucai Senior High SchoolMathematicsShanghaiZhenwei YangPH
Sichuan
Chengdu University of TechnologyInformation ManagementChengduChen GuodongP
Huang GuangxinP
Sichuan UniversityApplied MathematicsChengduHai NiuM
MathematicsYang WengP
Zhou JieM
Li Xiao binH
Shuchao ZouH
Polymer Science and EngineeringMing Shu FanP
Yangtze Center of MathematicsBohui ChenM
Southwestern University of Finance and Econ.Business AdministrationChengduSun Yun LongP
Economic MathematicsDai DaiP
Shaowen LiM
Yunlong SunP
Univ. of Elec. Science and Technology of ChinaApplied MathematicsChengduGao QingH H
Biomedical EngineeringWang ZhuH
Information and Computation ScienceQin SiyiM
Xu QuanziH
Tianjin
Civil Aviation University of ChinaAir Traffic ControlTianjinNie RuntuP
Zhaoning ZhangH
CS and TechnologyYuxiang ZhangM
Liu ShanP
ScienceChen Shang DiP
Yongxin GaoP
Deyi MouM
Tian MingH
Nankai UniversityEconomic TechnologyTianjinTan XuH
Informatics and ProbabilityTianjinXingwei ZhouH
Chunsheng ZhangH
Jishou RuanP
Yunnan
Yunnan UniversityComputer ScienceKunmingWang RuiH
StatisticsJie MengP
Telecomm. EngineeringGuanghui CaiP
Zhejiang
Hangzhou Dianzi UniversityApplied PhysicsHangzhouChengjia LiH
Zhifeng ZhangP
Information and MathematicsHao ShenM
Zheyong QiuH
Shaoxing UniversityMathematicsShaoxingHu JinjiePP
Sun JianxinP
Zhejiang Gongshang UniversityInformation and Computing ScienceHangzhouHua JiukunH H
MathematicsZhu LingHP
Zhejiang Normal UniversityCS and TechnologyJinhuaWenqing BaoH
Zhejiang Normal University (cont'd)MathematicsXinzhong LuH
Qiusheng QiuP
Zhejiang Sci-Tech UniversityApplied PhysicsHangzhouHu JueliangH
MathematicsLuo HuaP
Shi GuoshengP
Zhejiang UniversityMathematicsHangzhouQifan YangP
Zhiyi TanMM
Ningbo Institute of TechnologyNingboJufeng WangMH
Zhening LiHH
ScienceHangzhouJiaer FeiH H
Zhejiang University City CollegeCS and TechnologyHangzhouHuizeng ZhangH
Information and Computing ScienceGui WangMH
Xusheng KangH
Zhejiang University of Finance and Econ.Mathematics and StatisticsHangzhouFulai WangP
Ji LuoH
Zhejiang University of Science and TechnologyScienceHangzhouMingjun WeiP
Yongzhen ZhuP
Zhejiang University of TechnologyJianxing CollegeHangzhouWenxin ZhuoH
Shiming WangMM
MathematicsMinghua ZhouP
Xuejun WuP
FINLAND
Helsingin MatematikkalukioMathematicsHelsinkiEsa I. LappiP P
Päivölä College of MathematicsComputer SciencesTarttilaJukka IlmonenP
MathematicsJanne J. E. PuustelliiP
University of HelsinkiMathematics and StatisticsHelsinkiPetri OlaP
INDONESIA
Institut Teknologi BandungMathematicsBandungRieske HadiantiH P
IRELAND
National University of IrelandMathematical PhysicsGalwayPetri T. PiiroinenPH
MathematicsNiall MaddenM P
University College CorkApplied MathematicsCorkDmitrii RachinskiiM
Liya A. ZhornitskayaP
MathematicsDonal J. HurleyH
Benjamin W. McKayP
University College DublinMathematical SciencesDublinMaria G MeehanP
Ted CoxP
JAMAICA
University of TechnologyChemical EngineeringKingstonNilza G. Justiz-SmithH
KOREA
Korea Adv. Inst. of Science and TechnologyMathematical SciencesDaejeonYong Jung KimMH
Mechanical EngineeringJoongmyeon BaeM
PhysicsHawoong JeongP
NEW ZEALAND
Victoria University of WellingtonMathematics, Statistics, and CSWellingtonMark J. McGuinnessH
SINGAPORE
National University of SingaporeMathematicsSingaporeGongyun ZhaoPP
Statistics and Applied ProbabilityYannis YatracosO
SOUTH AFRICA
University of PretoriaMathematics and Applied MathematicsPretoriaAnsie F. HardingH
University of StellenboschApplied MathematicsStellenboschJan H. van VuurenO M
Editor's Note: For team advisors from China, I formerly followed the New York Times in listing family name first, a convention that presumably arose because Chinese address each other family name first. In most Chinese names, the family name is a single syllable and the given name is two syllables, so in most cases it is easy to identify the family name. However, when family name and given name each have one syllable, it is not easy, even for native speakers, to distinguish them in English transliteration. Because I have always doubted the wisdom of the Times convention, because Chinese Outstanding teams have requested that family names be listed last, and because even with expert help I cannot be completely correct, this year I have left the names in the order given on the registration form.
# When Topologists Are Politicians ...

Nikifor C. Bliznashki

Aaron Pollack

Russell Posner

Duke University

Durham, NC

Advisor: David Kraines

# Summary

Former Supreme Court Justice Sandra Day O'Connor once noted that any politician who did not do everything to secure power for the party "ought to be impeached." Though Congress argues that wild election districts such as "a pair of earmuffs" are inherently fair and reasonable, they are so counterintuitive that such claims can be exceedingly difficult to believe.

Defining the big picture of what is "fair" can be left to philosophers—or computers. Using a novel method, we divide states into districts of equal population, with each district as compact and elementary as possible, where compactness is defined as the moment of inertia of the district with respect to the population density. By not examining any other demographic data in the grouping, we avoid many of the biases that people may impose. We obtain districts considerably more compact than current congressional districts for Ohio and New York.

Since it is constitutionally sound to group people into congressional districts by "shared interests," we extend the problem by allowing other demographic data to be considered in the formation of such districts. We form revised districts that seek to preserve uniformity of these qualities. We identify how suitable these solutions are and determine their advantages and disadvantages.

Finally, we propose alternative districting techniques that take into account county boundaries and natural boundaries.

The text of this paper appears on pp. 249-260.

# What to Feed a Gerrymander

Ben Conlee

Abe Othman

Chris Yetter

Harvard University

Cambridge, MA

Advisor: Clifford H. Taubes

# Summary

Gerrymandering, the practice of dividing political districts into winding and unfair geometries, has a deleterious effect on democratic accountability and participation.
Incumbent politicians have an incentive to create districts to their advantage (California in 2000, Texas in 2003), so one proposed remedy for gerrymandering is to adopt an objective, possibly computerized, methodology for districting.

We present two efficient algorithms for solving the districting problem by modeling it as a Markov decision process that rewards traditional measures of district "goodness": equality of population, contiguity, preservation of county lines, and compactness of shape. Our Multi-Seeded Growth Model simulates the creation of a fixed number of districts for an arbitrary geography by "planting seeds" for districts and specifying particular growth rules. The result of this process is refined in our Partition Optimization Model, which uses stochastic domain hill-climbing to make small changes in district lines to improve goodness. As an extension, we include an optimization to minimize projected inequality in district populations between redistrictings.

As a case study, we implement our models to create an unbiased, geographically simple districting of New York using tract-level data from the 2000 Census.

The text of this paper appears on pp. 261-280.

# Electoral Redistricting with Moment of Inertia and Diminishing Halves Models

Andrew Spann

Daniel Kane

Dan Gulotta

Massachusetts Institute of Technology

Cambridge, MA

Advisor: Martin Z. Bazant

# Summary

We propose and evaluate two methods for determining congressional districts. The models contain explicit criteria only for population equality and compactness, but we show that other fairness criteria, such as contiguity and city integrity, are present too.

The Moment of Inertia Method creates districts whose populations are within $2\%$ of the mean district size, minimizing the sum of the squares of the distances between each census tract and its district's centroid, weighted by population. We prove that this model gives convex districts.
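The objective just described, the population-weighted sum of squared distances from each census tract to its district's centroid, can be sketched in a few lines. This is a minimal illustration only; the `(x, y, population)` tract layout and the function name are our assumptions, not the team's code:

```python
def moment_of_inertia(tracts):
    """Population-weighted moment of inertia of one district.

    `tracts` is a list of (x, y, population) triples in a planar
    projection; the data layout is an illustrative assumption.
    """
    total_pop = sum(p for _, _, p in tracts)
    # Population-weighted centroid of the district.
    cx = sum(x * p for x, _, p in tracts) / total_pop
    cy = sum(y * p for _, y, p in tracts) / total_pop
    # Sum of squared distances to the centroid, weighted by population.
    return sum(p * ((x - cx) ** 2 + (y - cy) ** 2) for x, y, p in tracts)

# A clustered district scores far lower than a stretched-out one.
compact = [(0, 0, 100), (1, 0, 100), (0, 1, 100), (1, 1, 100)]
stretched = [(0, 0, 100), (5, 0, 100), (10, 0, 100), (15, 0, 100)]
assert moment_of_inertia(compact) < moment_of_inertia(stretched)
```

Minimizing the total of this quantity over all districts, subject to the population-equality constraint, is what pushes such methods toward compact, convex shapes.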
In the Diminishing Halves Method, the state is recursively halved by lines perpendicular to best-fit lines through the centers of census tracts.

From U.S. Census 2000 data, we extract the latitude, longitude, and population count of each census tract. By parsing data at the tract level instead of the county level, we model with high precision. We run our algorithms on data from New York as well as Arizona (small), Illinois (medium), and Texas (large).

We compare the results to current districts. Our algorithms return districts that are not only contiguous but also convex, aside from borders where the state itself is nonconvex. We superimpose city locations on district maps to check for community integrity. We evaluate our proposed districts with various quantitative measures of compactness.

The initial conditions do not greatly affect the Moment of Inertia Method. We run variants of the Diminishing Halves Method and find that they do not improve on the original. Based on our results, district shapes should be convex, and city boundaries and contiguity can be emergent properties rather than explicit considerations. We recommend our Moment of Inertia Method, as it consistently performed the best.

The text of this paper appears on pp. 281-299.

# A Voronoi Model for Districting

Benjamin O. Barrow

Andrew F. Glugla

John B. Shelton

University of Colorado

Boulder, CO

Advisor: Michael H. Ritzwoller

# Summary

The U.S. Constitution allots each state a number of seats in the House of Representatives proportional to the state's population. However, it says nothing about how the districts associated with those seats should be defined. This silence allows politicians to modify district borders to their advantage in future elections, a practice known as "gerrymandering."

We offer a model for establishing congressional districts in a fair and unbiased manner. We provide rigorous definitions of "fairness" and "simplicity."
For simplicity, and to remove any political bias, we ignore all census statistics except population density. We also ignore transportation infrastructure, since political bias can often affect the construction of roads, railways, and airports.

Our model uses a Voronoi diagram to divide a state into districts. We place Voronoi points on a state map and divide the state into regions, each enclosing one point. All territory within a given region is closer to that region's Voronoi point than to any other Voronoi point, resulting in simple, convex polygonal areas. The model then uses a logically derived law to move the Voronoi points and thus trade territory between neighboring districts. Lower-population districts gain population, and greater-population districts lose population, until all districts arrive at population equilibrium.

Our Voronoi model reliably produces workable results, though the final district borders are moderately sensitive to the initial positions of the Voronoi points (particularly in areas with lower population-density gradients). The model's greatest strengths and greatest weaknesses both revolve around stability: It is not guaranteed to converge to a stable district configuration. However, in every real-world situation for which we tested it, it produced stable district borders. We believe that its great testing success easily outweighs any uncertainty associated with its use.

[EDITOR'S NOTE: This Meritorious paper won the Ben Fusaro Award for the Gerrymandering Problem. The full text of the paper does not appear in this issue of the Journal.]

# Why Weight? A Cluster-Theoretic Approach to Political Districting

Sam Whittle

Wesley Essig

Nathaniel S. Bottman

University of Washington

Seattle, WA

Advisor: Anne Greenbaum

# Summary

Political districting has been a contentious issue in American politics over the last two centuries. Since the landmark case of Baker v.
Carr (1962), in which the U.S. Supreme Court ruled that the constitutionality of a state's legislated districting is within the jurisdiction of a federal court, academics have attempted to produce a rigorous system for districting a state.

We propose both a modified form of classical K-means clustering and the shortest-splitline algorithm to accomplish impartial redistricting. We apply our methods to redistricting New York and, as further examples, Texas and Colorado. Both methods use only population-density data and state boundaries as inputs and run in a feasible amount of time.

Our criteria for successful redistricting include contiguity, compactness, and sufficiently uniform population.

The K-means method produces districts similar to convex polygons, and the splitline method guarantees that the resulting districts have piecewise-linear boundaries. The K-means method has the advantage of allowing seeding of the district centers. With proper seeding, the centers of the generated districts roughly correlate with the existing districts, but the resulting boundaries are vastly simpler.

The text of this paper appears on pp. 301-313.

# Applying Voronoi Diagrams to the Redistricting Problem

Lukas Svec

Sam Burden

Aaron Dilley

University of Washington

Seattle, WA

Advisor: James Allen Morrow

# Summary

Gerrymandering is an issue plaguing legislative redistricting. We present a novel approach to redistricting that uses Voronoi and population-weighted Voronoique diagrams. Voronoi regions are contiguous, compact, and simple to generate. Regions drawn with Voronoique diagrams attain small population variance and relative geometric simplicity.

As a concrete example, we apply our methods to partition New York State. Since New York must be divided into 29 legislative districts, each should receive roughly $3.44\%$ of the population. Our Voronoique diagram method generates districts with an average population of $(3.34 \pm 0.74)\%$.
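The unweighted assignment step at the heart of these Voronoi approaches is simple: each census tract joins the district of its nearest generating point, and the resulting population shares tell us how far the partition is from equality. A minimal sketch with a hypothetical data layout (the actual methods then move the points, or weight the distances, to drive the shares toward $1/29$ each):

```python
def voronoi_districts(tracts, seeds):
    """Assign each tract to its nearest seed (an ordinary Voronoi
    partition) and return each district's share of total population.

    `tracts`: list of (x, y, population); `seeds`: list of (x, y).
    Illustrative data layout, not any team's implementation.
    """
    pops = [0.0] * len(seeds)
    for x, y, p in tracts:
        # Squared Euclidean distance suffices for nearest-seed lookup.
        nearest = min(range(len(seeds)),
                      key=lambda i: (x - seeds[i][0]) ** 2 + (y - seeds[i][1]) ** 2)
        pops[nearest] += p
    total = sum(pops)
    return [pop / total for pop in pops]

# Two seeds split a uniform 4x4 grid of equal-population tracts evenly.
tracts = [(x, y, 1) for x in range(4) for y in range(4)]
shares = voronoi_districts(tracts, [(0.5, 1.5), (2.5, 1.5)])
assert shares == [0.5, 0.5]
```

Because every Voronoi cell is an intersection of half-planes, districts produced this way are automatically convex and contiguous; only the population balance needs iterative adjustment.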
We discuss possible refinements that might result in smaller population variation while maintaining the simplicity of the regions and the objectivity of the method.

The text of this paper appears on pp. 315-332.

# Boarding at the Speed of Flight

Michael Bauer

Kshipra Bhawalkar

Matthew Edwards

Duke University

Durham, NC

Advisor: Anne Catlla

# Summary

We seek an efficient method for boarding a commercial airplane that accommodates unpredictable human behavior, with a framework that allows us to compare and contrast different procedures. Passenger dependencies, bottlenecks, and the rate of interferences are critical factors in airplane boarding time.

Boarding without seating assignments is fastest, since each person is in the correct order for their flexible seat choice; it removes all interferences and makes the boarding time depend solely on the entrance rate of passengers into the plane. Hoping to emulate the performance of this method, which we call "random greedy," we design a new algorithm to model its average seating order: the parabola boarding scheme, which breaks the plane into parabola-shaped zones.

We use a discrete-time simulation engine to model current boarding schemes as well as the parabola and random greedy algorithms. The zone-boarding schemes outside-in, pyramid, and parabola are almost identical in performance; back-to-front and alternating rows are significantly worse.

We examine the effects of scheme-independent parameters on boarding time. Ensuring a fast rate of people entering the plane and fast luggage stowage are both critical; an airline could reduce boarding time by improving either of these, regardless of boarding scheme.

By varying both the rate of people entering the plane and the time to stow luggage, we find a correlation between average boarding time and the difference between best and worst times.
The random greedy algorithm has the smallest difference; outside-in, pyramid, and parabola have equal differences. Faster boarding algorithms are also more reliable and allow for tighter scheduling.

The best boarding algorithms do not have assigned seating. If, however, an airline feels that assigned seating is mandatory for customer satisfaction, then any of outside-in, pyramid, or parabola will result in a consistently fast boarding time with minimum deviation from average times and will be a marked improvement over the traditional back-to-front boarding method.

The text of this paper appears on pp. 333-352.

# Novel Approaches to Airplane Boarding

Qianwei Li

Arnav Mehta

Aaron Wise

Duke University

Durham, NC

Advisor: Owen Astrachan

# Summary

Prolonged boarding not only degrades customers' perceptions of quality but also affects total airplane turnaround time and therefore airline efficiency [Van Landeghem 2002].

The typical airline uses a zone system, in which passengers board the plane from back to front in several groups. The efficiency of the zone system has come into question with the introduction and success of the open-seating policy of Southwest Airlines.

We use a stochastic agent-based simulation of boarding to explore novel boarding techniques. Our model organizes the aircraft into discrete units called "processors." Each processor corresponds to a physical row of the aircraft. Passengers enter the plane and are moved through the aircraft based on the functionality of these processors. During each cycle of the simulation, each row (processor) can execute a single operation. Operations accomplish functions such as moving passengers to the next row, stowing luggage, or seating passengers. The processor model tells us, from an initial ordering of passengers in a queue, how long the plane will take to board, and produces a grid detailing the chronology of passenger seating.
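A toy version of this row-as-processor idea shows how a single aisle serializes boarding. The sketch below assumes one passenger per row, a fixed two-tick stow cost, and fully deterministic behavior; it illustrates the cycle-based mechanism only, not any team's actual model or parameters:

```python
def board_time(queue, n_rows, stow_ticks=2):
    """Tick-based sketch of rows as processors: each tick a row can pass
    its passenger forward, stow luggage, or seat the passenger.

    `queue` lists each passenger's target row (0 = front row).
    The two-tick stow cost and one-per-row seating are toy assumptions.
    """
    aisle = [None] * n_rows        # per row: (target, remaining stow ticks)
    waiting = list(queue)
    seated, ticks = 0, 0
    while seated < len(queue):
        ticks += 1
        # Front rows act first, so a cell frees up before the passenger behind moves.
        for r in range(n_rows - 1, -1, -1):
            if aisle[r] is None:
                continue
            target, stow = aisle[r]
            if r == target:
                if stow > 1:
                    aisle[r] = (target, stow - 1)   # still stowing luggage
                else:
                    aisle[r] = None                  # sit down, clearing the aisle
                    seated += 1
            elif aisle[r + 1] is None:
                aisle[r + 1], aisle[r] = (target, stow), None
        # One passenger per tick enters at row 0 if it is clear.
        if waiting and aisle[0] is None:
            aisle[0] = (waiting.pop(0), stow_ticks)
    return ticks

# With one passenger per row, strict back-to-front order never blocks the
# aisle in this toy, while front-to-back stalls the whole queue at the door.
assert board_time([3, 2, 1, 0], 4) < board_time([0, 1, 2, 3], 4)
```

In this stripped-down setting strict back-to-front ordering is ideal; the papers' finding that back-to-front *zones* perform poorly arises only once many passengers per zone compete for the same stretch of aisle, which requires the richer stochastic models described above.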
We extend our processor model with a genetic algorithm to search the space of passenger configurations for innovative and effective patterns. This algorithm employs the biological techniques of mutation and crossover to seek locally optimal solutions.

We also integrate a Markov-chain model of passenger preference into our processor model, to produce a simulation of Southwest-style boarding, where seats are not assigned but are chosen by individuals based on environmental constraints (such as seat availability).

We validate our model using tests for rigor in both robustness and sensitivity. Our model makes predictions that correlate well with empirical evidence.

We simulate many different a priori configurations, such as back-to-front, window-to-aisle, and alternate half-rows. When normalized to a random boarding sequence, window-to-aisle—the best-performing pattern—improves efficiency by $36\%$ on average. Even more surprising, the most common technique, zone boarding, performs even worse than random.

We recommend a hybrid boarding process: a combination of window-to-aisle and alternate half-rows. This technique is a three-zone process, like window-to-aisle, but it allows family units to board first, simultaneously with window-seat passengers.

The text of this paper appears on pp. 353-370.

# STAR: (Saving Time, Adding Revenues) Boarding/Deboarding Strategy

Bo Yuan

Jianfei Yin

Mafa Wang

National University of Defense Technology

Changsha, China

Advisor: Yi Wu

# Summary

Our goal is a strategy to minimize boarding/deboarding time.

- We develop a theoretical model to give a rough estimate of airplane boarding time, considering the main factors that may cause boarding delay.
- We formulate a simulation model based on cellular automata and apply it to different sizes of aircraft. We conclude that outside-in is optimal among current boarding strategies in both minimizing boarding time (23-27 min) and ease of operation.
Our simulation results agree well with theoretical estimates. +- We design a luggage distribution control strategy that assigns row numbers to passengers according to the amount of luggage that they carry onto the plane. Our simulation results show that the strategy can save about $3\mathrm{min}$ . +- We build a flexible deboarding simulation model and fashion a new inside-out deboarding strategy. +- A $95\%$ confidence interval for boarding time under our strategy has a half-width of 1 min. + +We also do sensitivity analyses of the occupancy of the plane and of passengers taking the wrong seats, which show that our model is robust. + +The text of this paper appears on pp. 371-384. + +# The Unique Best Boarding Plan? It Depends... + +Bolun Liu + +Xuan Hou + +Hao Wang + +National University of Singapore + +Advisor: Yannis Yatracos + +# Summary + +We devise and compare strategies for boarding and deboarding planes of varying capacity. We clarify what properties a good strategy should have. We apply the same assumptions regarding basic boarding procedure, inner structure of planes, and behavior of passengers to all the cases. + +For boarding, we study prevailing strategies and a seemingly excellent strategy, seat-by-seat, proposed in past literature, and categorize them into two types, assigned-seating and open-seating. We develop a model and a simulation for each type. Our criteria identify two good candidates, reverse-pyramid and open-seating. We develop our own comprehensive strategy, simulate it, and compare it with those two. However, the optimal boarding strategy is not the same for different planes. Some values of parameters, such as the passengers' luggage size and weight, greatly influence the final result. Based on these discoveries, we suggest how to modify a boarding procedure in practice to make it optimal. 
+ +For deboarding, a simple strategy beats a complicated one; but we still give a theoretically optimal model, then modify it to achieve a concise strategy applicable in practice. + +The text of this paper appears on pp. 385-404. + +# Airliner Boarding and Deplaning Strategy + +Linbo Zhao +Fan Zhou +Guozhen Wang +Peking University +Beijing, China + +Advisor: Xufeng Liu + +# Summary + +To reduce airliner boarding and deplaning time, we partition passengers into groups that board in an arranged sequence. We assume that first-class and business-class passengers board first; our model treats only economy class. Since deplaning is the converse process of boarding, a strategy for boarding gives a strategy for deplaning. + +We develop a model of interferences among passengers, which determine boarding time. We try to find a strategy with the least interferences. By running Lingo, we tackle the resulting nonlinear integer programming problem and obtain near-optimal strategies for fixed numbers of groups. This model supports the outside-in and reverse-pyramid strategies. + +We develop another model to give a global lower bound for interferences. We also prove that individual boarding sequence, which boards passengers one by one in a particular order, attains that lower bound. + +We develop code in $\mathrm{C}++$ to simulate boarding strategies and test various strategies for three airliners: Canadair CRJ-200 (small), Airbus A320 (midsize) and Airbus A380 (large). Individual boarding sequence, reverse-pyramid, and outside-in are the best three strategies in terms of both average boarding time and its standard deviation. + +We test strategies under various luggage loads and levels of occupancy, with and without late passengers and those with special needs. Outside-in and reverse-pyramid are stable under variation of parameters, whereas individual boarding sequence is extremely sensitive, though not to luggage. 
+ +Our conclusions discredit traditional back-to-front strategies and support individual boarding sequence, reverse-pyramid, and outside-in. The more groups, the worse the situation with back-to-front. Taking cost into consideration, random sequencing should also be recommended. + +Finally, we analyze deplaning and see how its time can be minimized. + +The text of this paper appears on pp. 405-420. + +# Loading and Unloading Passenger Airliners: A Simulation Approach + +Walter Jacob +Joshua Dunn +Matthew Oster +Rowan University +Glassboro, NJ + +Advisor: Hieu D. Nguyen + +# Summary + +Grounded planes cost airlines money. A major factor in determining the grounding time of an airliner is the time that it takes to board passengers. An optimal boarding method would therefore reduce costs to the airlines and maximize profits by reducing the time the plane has to be on the ground and also enabling the airlines to offer more flights. + +Our assumptions were made with the real-world situation in mind. The result is a model that behaves well and parallels the results of other contemporary research efforts. The considerations upon which our constraints are based reflect the deterministic nature of the model. + +We performed a series of empirical tests to obtain acceptable ranges for parameters such as passenger walking speed and time required to stow carry-on luggage in an overhead compartment. Four different seating methods were tested: open seating, back to front seating, Wilma, and our own modified reverse pyramid seating. + +Although each method has its own benefits, we concluded that Wilma outperformed the competing methods for the widest number of configurations. In our simulations Wilma offered an average decrease of $1.8\mathrm{min}$ in small planes, $5.1\mathrm{min}$ in medium planes, and $2.6\mathrm{min}$ in large planes. Our model performed very well with the tested scenarios and scales easily to cover other situations. 
+ +[EDITOR'S NOTE: This Meritorious paper won the Ben Fusaro Award for the Airplane Seating Problem. The full text of the paper does not appear in this issue of the Journal.] + +# Best Boarding uses Buffers + +Kevin D. Sobczak + +Eric J. Hardin + +Bradley J. Kirkwood + +Slippery Rock University + +Slippery Rock, PA + +Advisor: Athula R. Herat + +# Summary + +By constructing a mathematical model of human behavior, we find: + +- Back-to-front block loading is the least efficient boarding method. As passengers enter the aircraft in groups, aisle congestion becomes greatest at the front of the plane, consequently increasing the time required for the next group to enter and take their seats. Aisle congestion in this case is primarily attributed to the time for a passenger to navigate the aisle and reach the assigned seat if obstructed by another passenger sitting in the same row. +- Small planes and large planes exhibit minimal turnaround times. Small planes have a single aisle but few passengers, hence little congestion. In large planes, multiple aisles and decks offset the congestion found in single-aisle midsize planes; a large plane can be modeled as several small planes. +- Boarding strategies are optimized when $10\%$ of the passengers are late. Fewer passengers enter initially, so there is less congestion. When passengers enter late, congestion that would otherwise have occurred is averted. + +Our first observation concurs with researchers who suggest abandoning back-to-front boarding in favor of more-elaborate schemes [Finney 2006; van den Briel et al. 2004; Ferrari and Nagel 2005]; however, these new models make erroneous assumptions about human behavior. A comprehensive scheme must include the time to navigate a congested aisle, stow luggage, and maneuver through a filled row if necessary. We recommend the following: + +- Abandon back-to-front block boarding and consider alternatives. 
We suggest a hybrid group-boarding method utilizing a rotating seating arrangement that incorporates back-to-front and window-to-aisle seating.
- Incorporate a second aisle into midsize aircraft.
- Reduce carry-on luggage.
- Queue passengers into lines prior to gangway entry.

The text of this paper appears on pp. 421-434.

# Modeling Airplane Boarding Procedures

Bach Ha

Daniel Matheny

Spencer Tipping

Truman State University

Kirksville, MO

Advisor: Steven J. Smith

# Summary

We describe two models that simulate the process of passengers boarding an aircraft and taking their seats. Using these models, we simulate common boarding procedures on popular aircraft to analyze efficiency. The second model is more ambitious and tries to model the situation more accurately, but even the first one addresses the major problems involved in boarding an airplane.

From running the simulations and analyzing the data, we find that the fastest and most consistent procedures are outside-in and reverse-pyramid. Both allow those closest to the windows to be seated first and proceed inward (though reverse-pyramid is slightly more complex). Reverse-pyramid is slightly faster.

The text of this paper appears on pp. 435-450.

# American Airlines' Next Top Model

Sara J. Beck

Spencer D. K'Burg

Alex B. Twist

University of Puget Sound

Tacoma, WA

Advisor: Michael Z. Spivey

# Summary

We design a simulation that replicates the behavior of passengers boarding airplanes of different sizes according to procedures currently implemented, as well as a plan not currently in use. Variables in our model are deterministic or stochastic and include walking time, stowage time, and seating time. Boarding delays are measured as the sum of these variables. We physically model and observe common interactions to accurately reflect boarding time.

We run 500 simulations for various combinations of airplane sizes and boarding plans.
We analyze the sensitivity of each boarding algorithm, as well as the passenger movement algorithm, for a wide range of plane sizes and configurations. We use the simulation results to compare the effectiveness of the boarding plans. We find that for all plane sizes, the novel boarding plan Roller Coaster is the most efficient. The Roller Coaster algorithm essentially modifies the outside-in boarding method. The passengers line up before they board the plane and then board the plane by letter group. This allows most interferences to be avoided. It loads a small plane $67\%$ faster than the next best option, a midsize plane $37\%$ faster than the next best option, and a large plane $35\%$ faster than the next best option. + +The text of this paper appears on pp. 451-462. + +# Boarding—Step by Step: A Cellular Automaton Approach to Optimising Aircraft Boarding Time + +Chris Rohwer + +Andreas Hafver + +Louise Viljoen + +University of Stellenbosch + +Stellenbosch, South Africa + +Advisor: Jan H. van Vuuren + +# Summary + +We model the boarding time for the aircraft using a cellular automaton. We investigate possible solutions and present recommendations about effective implementation. + +The cellular automaton model is implemented in three stages: + +- Initialisation of the seating layout for a chosen aircraft type and assignment of seats to passengers +- The sorting of passengers according to various proposed boarding methods +- "Propagating" the passengers through the aisle(s) of the aircraft and seating them at their assigned places. + +The rules governing the automaton take into account various factors. Among these are the load factor (percentage filled) of the craft, different walking speeds of passengers walking through the aisle, and time delays from stowing luggage and obstructions by other passengers during the seating process. The algorithm accommodates predefined aircraft layouts of common aircraft and also user-defined aircraft layouts. 
+ +We modeled and tested various boarding strategies for efficiency with regard to total boarding time and average boarding time per passenger. Thus, our approach focuses not only on optimisation of the process in favour of the airlines, but also yields information regarding convenience to passengers. Random boarding (where passengers with assigned seat numbers enter the plane in a random sequence) was used as a point of reference. Among other strategies tested were boarding the plane in groups from either end, boarding from seats farthest from the aisles toward the aisles, and combinations of these approaches. + +We conclude that boarding strategies starting farthest away from the entrance or farthest away from the aisles yield shorter boarding times than random boarding. The most successful methods are combinations of these strategies, their detailed implementation depending on the exact layout/size of the aircraft. The method yielding the shortest total boarding time is not necessarily the one with shortest average boarding time per passenger. By considering standard deviations of total and individual boarding times over many iterations of the simulation, we can derive conclusions regarding the stability/consistency of the specific boarding strategies and how evenly the waiting time is distributed amongst the passengers. + +By selecting appropriate strategies, time savings of 2-3 min for small and medium aircraft could be achieved. For a custom 800-seat aircraft with two aisles, more than 6 min could be saved compared to random boarding. Having compared our results to actual turnaround times quoted by airlines, we believe them to be realistic. + +The text of this paper appears on pp. 463-478. + +# When Topologists Are Politicians… + +Nikifor C. 
Bliznashki

Aaron Pollack

Russell Posner

Duke University

Durham, NC

Advisor: David Kraines

# Summary

According to Supreme Court Justice William Brennan, former Supreme Court Justice Sandra Day O'Connor once noted that any politician who did not do everything to secure power for the party "ought to be impeached" [Toobin 2003]. Though Congress argues that wild election districts such as "a pair of earmuffs" are inherently fair and reasonable, they are so counterintuitive that such claims can be exceedingly difficult to believe.

Defining the big picture of what is "fair" can be left to philosophers—or computers. Using a novel method, we divide states into districts of equal population, with each district as compact and elementary as possible, where compactness is defined as the moment of inertia of the district with respect to the population density. By not examining any other demographic data in the grouping, we avoid many of the biases that people may impose. We obtain districts considerably more compact than current congressional districts for Ohio and New York.

Since it is constitutionally sound to group people into congressional districts by "shared interests," we extend the problem by allowing other demographic data to be considered in the formation of districts, forming revised districts that seek to preserve uniformity of these qualities. We identify how suitable these solutions are and determine their advantages and disadvantages.

Finally, we propose alternative districting techniques that take into account county boundaries and natural boundaries.

The UMAP Journal 28 (3) (2007) 249-259. ©Copyright 2007 by COMAP, Inc. All rights reserved. Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice.
Abstracting with credit is permitted, but copyrights for components of this work owned by others than COMAP must be honored. To copy otherwise, to republish, to post on servers, or to redistribute to lists requires prior permission from COMAP.

# Introduction

When topologists are politicians . . . the dimension of New York's 13th congressional district might not even be an integer. Districting is usually handled by a political and partisan body, such as the state legislature or the governor. Unsurprisingly, partisans district to maximize the number of representatives their party will have in Congress. This process is called gerrymandering, after Elbridge Gerry, governor of Massachusetts in 1812, who famously approved a congressional district that resembled a salamander.

Gerry's salamander of 1812 could not hold a candle to such well-known districts of today as Louisiana's "the 'Z' with drips," or Pennsylvania's "supine seahorse" and "upside-down Chinese dragon" districts. Such awkward and complex districts can lose sight of the primary goal of the House of Representatives as outlined in the U.S. Constitution: to provide regional representation to the people. We seek a "fair" and "simple" districting that maximizes accessibility of all people to regional representation, while providing a partitioning insensitive to partisan motives.

We define an objective function $F$ whose minimum gives what we define to be the best districting. We apply this method to New York and to Ohio.

# Definitions

- Block: A unit of area that corresponds to a fixed number of people. Since population densities vary, block sizes vary. A block is marked in the plane by a pair of $(x,y)$ coordinates.
- District: A collection of a fixed number of blocks (thus having a constant population).
- Capitol: The average of the coordinates of each block in the district, an approximation of the center of its population.
- Fairness: In Shaw v. Reno [1993], the U.S.
Supreme Court mentioned that acceptable ways of districting a state include "compactness, contiguity, [and] respect for political subdivisions or communities defined by actual shared interests." By compactness, the justices were alluding to a vague notion that congressional districts should be more like squares or circles than "spitting amoebas" (read: Maryland's Third District). +- Compactness: Suppose that a district $D$ contains $n$ blocks, $z_{1},\ldots ,z_{n}$ , with capitol $c$ . Compactness $C$ is the variance of the spatial distribution of the population: + +$$ +C = \sum_ {i = 1} ^ {n} \| z _ {i} - c \| ^ {2}. +$$ + +When $C$ is small, we conclude that the district's constituents live within a relatively small area. + +- Shared Interests $(S)$ : We assign a vector to each block, giving one component to each interest, and minimize the sum of the variances of the components over the district. That is, given vectors $v_{i}$ associated with blocks $z_{i}$ and mean interest vector $\vec{\mu} = \frac{1}{n}\sum_{i=1}^{n}\vec{v}_{i}$ , the shared interest is + +$$ +S = \sum_ {i = 1} ^ {n} \| \vec {v} _ {i} - \vec {\mu} \| ^ {2}. +$$ + +# A Note on Shared Interest + +Though race, gender, age, and religion are important issues for many, legal ambiguities exist in the use of such measures. Because the benefit is yet unproven of either grouping together or dispersing such groups among congressional districts, we do not use these data. Though measuring political affiliation is an entirely legal and often implemented districting tool, we remain nonpartisan to avoid inadvertently favoring one party over another. + +# Specific Formulation of the Problem + +We measure fairness of a partition of a state into congressional districts. Let $P = \{D_1, \ldots, D_k\}$ be a partition of blocks into congressional districts, such that each district is contiguous and has the same number of blocks. 
We define

$$
f \left(D _ {i}\right) = w _ {1} C _ {i} + w _ {2} S _ {i},
$$

where $w_{1}$ and $w_{2}$ are positive weights that are the same for all districts. We define globally

$$
F \left(P\right) = \sum_ {i = 1} ^ {k} f \left(D _ {i}\right).
$$

We seek a partition that minimizes $F$ .

# Assumptions

- We have accurate data about a state's population, geographical layout, and other relevant factors.
- The population represented by one block is small enough to ensure that districts have negligibly different numbers of people.
- The initial assignment of blocks to districts is random enough to assure that the districting to which our algorithm converges is near the global minimum.

# Background and Goals

Weaver and Hess [1963] set the standard for computerized nonpartisan districting. Using integer programming methods, a set of capitals (called "LDs" in their paper) was matched with blocks ("EDs") so as to minimize moment of inertia, i.e., our $C$ . Repeatedly, the LDs were relocated to the appropriate centers of mass and then the EDs were redistributed to each LD until the moment of inertia hit a local minimum. By repeating many times with a large number of initial conditions, they hoped to approximate the global minimum and thus derive the most compact districting. Though precise, integer programming algorithms on large sets of data are extremely time-consuming. Though Weaver and Hess's methods found applicability at the county and small-state level at best, their landmark work paved the way for the development of a variety of approaches.

We create a model that expands on theirs with the following goals:

- Find the ideal partition $P^{*}$ , i.e., the one that minimizes $F$ globally.
- The method to find $P^*$ should be versatile—able to find the ideal partition for a wide variety of shared interest functions $S$ .
- The method should be scalable—able to handle large quantities of data quickly.
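Under these definitions, $f$ and $F$ are direct to compute. The following is a minimal sketch under our own assumptions about data layout (blocks as $(x, y)$ tuples, one interest vector per block); the function names are ours, not the paper's.

```python
def capitol(blocks):
    """Capitol of a district: the mean (x, y) of its blocks."""
    n = len(blocks)
    return (sum(x for x, _ in blocks) / n, sum(y for _, y in blocks) / n)

def compactness(blocks):
    """C: sum of squared distances from each block to the capitol."""
    cx, cy = capitol(blocks)
    return sum((x - cx) ** 2 + (y - cy) ** 2 for x, y in blocks)

def shared_interest(vectors):
    """S: sum of squared distances from each interest vector to the mean."""
    n = len(vectors)
    mu = [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]
    return sum(sum((vi - mi) ** 2 for vi, mi in zip(v, mu)) for v in vectors)

def objective(partition, interests, w1=1.0, w2=1.0):
    """F(P): sum over districts of f(D) = w1*C + w2*S."""
    return sum(w1 * compactness(blocks) + w2 * shared_interest(vecs)
               for blocks, vecs in zip(partition, interests))
```

For example, a square district of four blocks at $(0,0)$, $(2,0)$, $(0,2)$, $(2,2)$ has capitol $(1,1)$ and $C = 4 \times 2 = 8$; if all four blocks share the same interest vector, $S = 0$.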
# Friendly Trader Method

Our method starts with an initial arrangement of blocks into districts and moves blocks between districts to decrease $F$ . Let the blocks be arranged into $n$ districts. By our method, district $D$ attempts to trade blocks to reduce its $f(D)$ . However, our districts are "friendly traders"—they conduct only trades that make the districts as a whole better off (i.e., reduce $F$ ). Our districts are so friendly, in fact, that they will execute trades that raise $f(D)$ so long as $F$ decreases. Since the composition of our districts changes after each trade, capitols must be recalculated at each step. The problem of finding a minimum then reduces to finding and executing all trades that reduce $F$ until no more exist.

# How Are Blocks Determined?

We obtained demographic data from the 2000 U.S. Census [U.S. Census Bureau n.d.]. New York State is partitioned into roughly 5,000 tracts, each with a specific population and coordinates in latitude and longitude. For each minor civil division, we assign one block per 250 people, rounding population to the nearest 250. We spread these blocks evenly within their minor civil division. Thus, each block has the same population, and population density corresponds to block density.

# How Are Districts Initialized?

We try to devise an initial partition with low $F$ . First, we arbitrarily choose 29 blocks to be district capitols. Each capitol's position is assumed to be the center of population and its interests the mean interests of the district. One by one, each capitol picks blocks that "fit well" with the capitol's location and interests. The process is like a professional sports draft, where teams take turns picking players who suit each team well. After all the blocks are assigned to capitols, trading of blocks begins.

# How Do We Maximize Compactness?

Let $D_{1}$ and $D_{2}$ be two districts and $b_{k}$ a block in $D_{1}$ .
By moving $b_{k}$ from $D_{1}$ to $D_{2}$ , we form districts $D_{1}'$ and $D_{2}'$ . Define

$$
\Delta F (D _ {1}, D _ {2}, b _ {k}) = f (D _ {1} ^ {\prime}) + f (D _ {2} ^ {\prime}) - f (D _ {1}) - f (D _ {2}).
$$

To determine which trades to make, we first find a block $b_{*12}$ in $D_1$ such that $\Delta F(D_1, D_2, b_{*12}) \leq \Delta F(D_1, D_2, b_k)$ for all blocks $b_k$ in $D_1$ . Let us call $b_{*12}$ the best block from $D_1$ to $D_2$ .

Now we can define a fully connected directed graph $G$ , with the vertices of $G$ the districts $D_{j}$ and the edge $v_{i} \rightarrow v_{j}$ having length $\Delta F(D_{i},D_{j},b_{*ij})$ , where $b_{*ij}$ is the best block from $D_{i}$ to $D_{j}$ ; the length is negative if the trade decreases $F$ . To reduce $F$ by trades, we search for cycles of negative length in $G$ . (The length of a cycle is the sum of the lengths of the edges composing the cycle, counting multiplicity if an edge appears more than once.) A cycle with negative length corresponds to a group of trades that reduces $F$ . Hence, finding good trades of blocks between districts reduces to finding cycles of negative length in the digraph.

That problem in turn reduces to the simpler problem of finding a shortest path between any two vertices—that is, a path of minimum length in which no edge is used more than once. The Bellman-Ford-Moore algorithm modifies a standard shortest-path algorithm to find any negative cycle [Cherkassy and Goldberg 1999]. (This algorithm finds only the first negative cycle encountered and thus gives no choice of cycle.) Once a trade is found, it is completed and capitols are recalculated. We reject any trade that, after recalculation of capitols, actually increases $F$ . Since we make only trades that strictly reduce $F$ , when $F$ can no longer decrease through trading, we have achieved a local minimum.
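The negative-cycle search on the trade digraph can be sketched with the Bellman-Ford-Moore relaxation cited above. This is an illustrative reconstruction, not the paper's program: vertices are district indices, and each edge triple $(i, j, \ell)$ is assumed to carry the length $\ell = \Delta F(D_i, D_j, b_{*ij})$.

```python
def find_negative_cycle(n, edges):
    """Bellman-Ford-Moore search for a negative cycle in a digraph.

    `edges` is a list of (u, v, length) triples over vertices 0..n-1.
    Returns a list of vertices forming a cycle of negative total length,
    or None if every cycle has nonnegative length.
    """
    dist = [0.0] * n          # virtual source at distance 0 to every vertex
    pred = [None] * n
    x = None
    for _ in range(n):        # n relaxation passes over all edges
        x = None
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                pred[v] = u
                x = v         # remember the last vertex relaxed
    if x is None:             # no relaxation in the final pass: no cycle
        return None
    for _ in range(n):        # step back n times to land on the cycle
        x = pred[x]
    cycle, v = [x], pred[x]
    while v != x:
        cycle.append(v)
        v = pred[v]
    return cycle[::-1]
```

A cycle returned here corresponds to a chain of best-block trades whose net effect lowers $F$; after executing the trades, capitols are recalculated and the search repeats.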
# Results

We implemented the algorithm in a computer program and simulated the process for New York and for Ohio. No matter the starting configuration for the state, we always ended up with the same shape for the districts, leading us to believe that these data sets are big and diverse enough always to converge to a unique global minimum.

Figure 1 shows our apportionment of New York, calculated to make the regions most compact ( $w_{2} = 0$ ). We then redid New York using compactness as a guide but with the objective function $F$ weighted toward preserving population density. The results are shown in Figure 2.

![](images/e1140b3263c5e22957b92912a4fc9fa9ad2e9ea5ef29e1493044c49e074e93e4.jpg)
Figure 1. The most compact apportionment of New York, with close-up of New York City on right.

![](images/a3b435c88717587291c69111f9151473cc9b510ea60e7c15094929b18af24a3d.jpg)

![](images/77acf61539edd50ffbdd530b954a00ada2bc42b26e2a008889ed6142113ca956.jpg)
Figure 2. Apportionment of New York based both on compactness and population density.

![](images/6f85e25d549848f8fdbfca377157b80d0c4f6abaf5c23efc288dce4a1fd82c4f.jpg)

We also tested our algorithm on Ohio to check that our method is applicable in other circumstances. In Figure 3, Ohio is partitioned using only compactness as a guide; Figure 4 uses the same function $F$ as Figure 2.

In Ohio, congressional districts are designed with preserving county lines in mind. In Figure 6, we attempt not to split counties between congressional districts. We thus add a term in $f$ to take into account county separation. This idea can easily be extended to natural boundaries such as rivers and highways.

![](images/bf654c51daa916aef64b9afb2d4cdbc8fc6d2c90b5542a22ea8cdcfef00208ed.jpg)
Figure 3. Districting of Ohio, based solely on compactness.

![](images/00401c26c1dd9028d4504ac31ad02c7dec81bc11defbf09449c5c24f1485ab81.jpg)
Figure 4. Districting of Ohio, based on both compactness and population density.
![](images/f26a60ddfbe7447b0992918760ccb540a6fb89581719117c7542db2466636e4f.jpg)
Figure 6. Districting of Ohio, based solely on preserving county boundaries.

# Analysis of Results

Our results are primarily visual and not numerical. One weakness of our method is the lack of quantifiable data for comparison. Since $F$ itself can be varied, normalizing it for use from one application to another is far from trivial. In addition, the remarkable reproducibility of our results given a wide variety of initial conditions almost entirely eliminates the need for separate numerical results.

In Figures 2 and 4, after the sorting of blocks, we drew a line around all boundary blocks in each district and smoothed it. Compared to current congressional districts, those produced by our algorithm are a vast improvement in simplicity, corresponding to a reduction in $C$ by a factor of 7 for Ohio and by a factor of about 22 for New York. Since our model evaluates districting through distance, it promotes star-shaped districts (in which the capitol is connected to every point in the district by a straight line) rather than fully concave districts, improving accessibility and reducing complexity of the districts. Our model handles the problems of districting New York and Ohio wonderfully, achieving very simple, reasonable results.

However, interest-weighted models are where our model really shines (Figures 2 and 4). Since we chose not to gauge common interest by controversial factors such as race or age, we chose the tamest quantity possible: population density at each block. We felt that this quantity would be useful to group districts by, since urban issues tend to differ from rural issues, and thus city slickers and farmers alike could obtain representation for their grievances.

In some ways, our model already favors uniform population density across districts.
Since blocks in urban areas are more densely packed, they naturally migrate to the same district. Our adjustment then merely increases the population-density component, producing a noticeable change for Ohio, with tightened districts around the major cities of Columbus in the center, Cincinnati in the southwest, and Cleveland in the northeast, as well as a tightening of the districts around the densely populated Bronx and Queens in New York City. These solutions improve compactness and uniformity of population density compared to current congressional districting (quite unsurprisingly); they also demonstrate the existence of a range of reasonable solutions that satisfy the goals of compactness, simplicity, and fairness.

Since many states have independent county governments (including New York and Ohio), we ran an alternative solution set for Ohio in which we tried to preserve county lines. Our interest vectors assigned one component to each county, so that the mean vector $\vec{\mu}$ tabulates the number of blocks in each county. We then weighted our solution with the goal of preserving county lines. Figure 6 shows the great success of this solution. In most districts, boundaries coincide almost perfectly with county lines. The advantage of the county-based districting solution is clear: Since citizens pay taxes to and receive services from county governments, allowing counties to have exclusive congressional representation allows for easier handling of issues on the local level. Surprisingly, such lines can be taken into consideration with little consequence on compactness or simplicity.

# Why Our Model is Fair

The question of fairness is much more difficult. To give an example of this challenge, we consider the Fourteenth Amendment to the U.S. Constitution, which states that all races are strictly equal under the law. However, the Voting Rights Act of 1965 states that the government will assist in facilitating the voting of minority areas.
Thus, even the government itself has trouble deciding whether "fair" involves helping the often-disadvantaged to realize their rights or involves giving every person exactly the same treatment. We argue that our model is fair because it remains passive and uninvolved. It only takes a set of directives (i.e., the function $F$ ) and produces a solution that divides the region into relatively uniform districts with respect to $F$ . If nonpartisan goals are incorporated, such as uniform population density or compactness, a nonpartisan solution arises.

However, any component of data can be (and likely has been) misused. For example, African Americans tend to be affiliated with the Democratic Party, while those in rural areas tend to be Republican. Thus, race and population density can be delicately used to achieve partisan aims. Our model is fair because it assigns no judgment to any of these considerations. Rather, it is designed to improve the citizen's access to attentive and diligent representation in order to maximize every individual's rights and powers.

# Strengths and Weaknesses of Model

Our model achieves all of the initial goals. It is fast, can handle large quantities of data, and is flexible. Though we did not test all possibilities, we showed that our model optimizes state districts for any of a number of variables. If we had input income, poverty, crime, or education data into our interest function, we could have produced high-quality results with virtually no added difficulty. Our method is also robust: it divides areas into fairly simple, contiguous, and uniform regions and consistently leads to useful minima.

The primary weakness of our model is the absence of good nicknames for our districts—somehow districts such as "egg" and "sort of diamond-shaped thing" don't raise any excitement.
+ +Though we achieved solid equilibria, our model in no way guarantees that it will ever find a global solution. To see this, consider a rectangle with different sides, and assume that we have 10 points at each of its vertices. Moreover, let the districts initially be the long sides. It is easy to check that no trade will occur, and thus this configuration is a local, but not a global, minimum. + +The other primary weakness of our model is our lack of metrics for comparison. Though compactness and shared interest levels are appropriate measures for comparison of two models within a state, we lack invariant metrics for assessing the quality of one districting versus another. + +# Food for Thought + +Given the proper data, our model can do much more than merely political districting. At heart, it simply attempts to group regions into smaller parts, unified by whatever characteristics desired. For example, if a governing body wanted to determine where to build police stations or hospitals, it could input weighted crime, health, poverty, and/or age statistics into the model. The model could then partition the area into small regions united not only by spatial relations but also by needs and desires. Thus, our model could help politicians and authorities most effectively deploy public resources and services. Ironically, our nonpartisan partitioning method could be a politician's best friend! Of course, by inputting political affiliation data, politicians could identify partisan strongholds in order to plan campaigns. + +# Conclusion + +We set forth an algorithm to determine congressional districts, given data on location, population, and any other factors desired. The algorithm is intended to be fair, or nonpartisan, in stark contrast to the political process of gerrymandering. 
Characteristics that we consider to be fundamental in the division of a state into congressional districts include contiguousness, compactness, and shared interests or concerns among a district's citizens. + +We assume that a state can be divided into blocks of small constant population and interpret the problem of congressional district apportionment as the distribution of these blocks to the districts so that each district contains a fixed number of blocks (and therefore all districts have the same population). Furthermore, we define an objective function $F$ that measures the quality of a distribution. Finding good partitions is equivalent to finding distributions with low values of $F$ . Our goal, therefore, was to find a partition that minimizes $F$ . This is a useful formulation of the problem because, if all agree to use this method beforehand, the existence of a global minimum of $F$ (our problem is finite) guarantees that if this minimum is found, there can (should) be no partisan squabbling as to the legitimacy of the solution obtained. + +Admittedly, our algorithm is only guaranteed to find local minima of $F$ . However, simulations with random initial starting values seem to converge to the same final apportionment, suggesting that the local minima that our algorithm finds are close to the global minima. Additionally, while we have implemented certain particulars to quantify shared interests of citizens in a + +district, our procedure for determining congressional districts is flexible; with a simple change in the particulars, it can partition blocks into districts under other criteria. + +# References + +Cherkassy, B.V., and A.V. Goldberg. 1999. Negative-cycle detection algorithms. Mathematical Programming 85: 277-311. +Toobin, Jeffrey. 2003. The great election grab. New Yorker 79 (38) (8 December 2003): 63-80. +Shaw v. Reno, 509 U.S. 630 (1993). +U.S. Census Bureau. n.d. FactFinder. http://factfinder.census.gov/home/saff/main.html?lang=en. 
+Weaver, James B., and Sidney W. Hess. 1963. A procedure for nonpartisan districting: Development of computer techniques. Yale Law Journal 73: 288-308. + +Pp. 261-450 can be found on the Tools for Teaching 2007 CD-ROM. + +# What to Feed a Gerrymander + +Ben Conlee + +Abe Othman + +Chris Yetter + +Harvard University + +Cambridge, MA + +Advisor: Clifford H. Taubes + +# Summary + +Gerrymandering, the practice of dividing political districts into winding and unfair geometries, has a deleterious effect on democratic accountability and participation. Incumbent politicians have an incentive to create districts to their advantage (California in 2000, Texas in 2003); so one proposed remedy for gerrymandering is to adopt an objective, possibly computerized, methodology for districting. + +We present two efficient algorithms for solving the districting problem by modeling it as a Markov decision process that rewards traditional measures of district "goodness": equality of population, continuity, preservation of county lines, and compactness of shape. Our Multi-Seeded Growth Model simulates the creation of a fixed number of districts for an arbitrary geography by "planting seeds" for districts and specifying particular growth rules. The result of this process is refined in our Partition Optimization Model, which uses stochastic domain hill-climbing to make small changes in district lines to improve goodness. We include as an extension an optimization to minimize projected inequality in district populations between redistrictings. + +As a case study, we implement our models to create an unbiased, geographically simple districting of New York using tract-level data from the 2000 Census. + +# What is Gerrymandering? + +Gerrymandering is division of an area into political districts that give advantage to one group. Frequently, this involves concentrating "unfavorable" + +voters in a few districts to ensure that "favorable" voters will win in many more districts. 
To squeeze unfavorable voters into a few districts, gerrymandering creates snaky and odd-shaped regions. The eponymous label arose when politician Elbridge Gerry pioneered this technique in the early 19th century and his opponents claimed that the districts resembled salamanders (Figure 1).

![](images/0b024c9aa4613845fe57da94b2b4077518f5b3c0149b3a4847c32af302936e05.jpg)
Figure 1. The original "Gerry-mander" from the Boston Centinel (1812). Source: Wikipedia [2007], which in turn was cropped from U.S. Department of the Interior [2007].

# Basic Terminology

- Packing: Forcing a disproportionately high concentration of a particular group into one district to lessen their impact in nearby districts.
- Cracking: Spreading members of a group into several districts to reduce their impact in each of these districts.
- Forfeit district: A district where group $A$ packs the members of group $B$ so that group $B$ wins this district but loses several surrounding districts that $B$ might win with a different districting scheme.
- Wasted vote: A vote cast by a member of group $A$ in a district where $A$ is already assured victory, so the vote has no bearing on the result. In general, the group with more wasted votes is made worse off by a districting plan.

# Why Is It So Bad?

Gerrymandering reduces electoral competition within districts, since cracking/packing makes elections uncompetitive. Further, incumbent representatives are in no real danger of losing elections, so they do not campaign vigorously, which can lead to lower voter turnout. Exacerbating the problem, incumbents' increased advantage means that they have less incentive to govern based on their constituents' interests, so democratic accountability and engagement mutually deteriorate.

Gerrymandering also presents the practical problem that it is difficult to explain to voters why district shapes are so labyrinthine.
Some districts connect demographically-similar but geographically-distant regions using thin filaments (Figure 2). "Niceness" of district shape almost always takes a back seat to political and racial concerns. Example: In the 2000 California realignment, Democrats and Republicans united to design incumbent-favoring districts, which resulted in the re-election of all of the 153 incumbents in 2004. How can one argue that this is in voters' best interests? + +![](images/6e638f188d8bb75b0b0f2ad94cc11a21cccdf71c6518701bf6e189d852305dc3.jpg) +Congressional District 4 +Figure 2. A present-day gerrymander, the Illinois 4th congressional district. The two "earmuffs" are connected by a narrow band along Interstate 294. Source: Wikipedia [2007], in turn cropped from U.S. Department of the Interior [2007]. + +However, gerrymandering can be considered appropriate in specific situations. For instance, the Arizona Legislature gerrymandered a division between the historically hostile Hopi and Navajo tribes even though the Hopi reservation is entirely surrounded by the Navajo reservation. + +# The Legality of Gerrymandering + +Though gerrymandering is objectionable to many, it is legal. The Voting Rights Act of 1965, which eliminated poll taxes and other discriminatory voting policies, may have inadvertently increased gerrymandering. One interpretation of the Act is that it mandates nondiscriminatory election results, which has led to a strange reversal of vocabulary in which creating "majority-minority" districts is considered beneficial. These gerrymandered districts are packed + +with minorities to guarantee minority representation in Congress. + +However, in Shaw v. Reno (1993), and later in Miller v. Johnson (1995), the Supreme Court ruled that racial/ethnic gerrymanders are unconstitutional. Nevertheless, Hunt v. Cromartie (1999) approved of a seemingly racial gerrymandering because the motivation was mostly partisan rather than racial. 
The recent case League of United Latin American Citizens v. Perry (June 2006) held that states are free to redistrict as often as they like so long as the redistrictings are not purely racially motivated. + +# Assumptions and Notation + +# What Can We Consider When Districting? + +1. Population equality between districts (legally mandated) +2. Contiguity of districts (legally mandated, excepting islands) +3. Respect for legal boundaries (counties, city limits, townships) +4. Respect for natural geographic boundaries +5. Compactness of district shapes +6. Respect for human-made boundaries (highways, parks, etc.) +7. Respect for socioeconomic similarity of constituents +8. Similarity to past district boundaries +9. Partisan political concerns +10. Desire to make districts (un)competitive +11. Racial/ethnic concerns +12. Desire to protect (or unseat) incumbent politicians + +In our model, we consider only the top seven factors. The case SC State Conference of Branches v. Riley (1982) ruled that past districts (factor 8) are a legitimate tool for creating new districts, but we ignore past districtings, since they are heavily biased by factors 9-12, all related to political or racial concerns. + +# Geography and Similar Characteristics + +The U.S. Census Bureau provides data on legal, natural, and human-made boundaries as well as socioeconomic similarity of regions. In each census, the United States population is divided at several levels of accuracy, the smallest of which are: blocks (40 people on average), block groups (1,500 people), and tracts + +(4,500 people). We follow the practice in Young [1988] by districting based on tracts. + +Census tract boundaries normally follow visible features, but may follow governmental unit boundaries and other non-visible features, and they always nest within counties. 
Census tracts are designed to be relatively homogenous units with respect to population characteristics, economic status, and living conditions at the time the users established them.

[Caliper Corporation n.d.]

For these reasons, we believe that tracts are acceptably small and homogenous to use as a base unit. Further, a tract is completely contained in a county, so we can easily check whether or not a district breaks county integrity.

# Notation

Let $n$ be the number of districts and $m$ the number of census tracts. We denote districts by $D_{i}$ , tracts by $T_{l}$ , and the set of all tracts by $\Gamma = \{T_{l}\}_{1\leq l\leq m}$ , which we call a state. Denote the set of all districts at a particular time by $\Delta = \{D_i\}_{1\leq i\leq n}$ ; we call this a partition of the state.

# Adjacency

Define the symmetric relation $T_{p} \sim T_{q}$ for tract pairs $(T_{p}, T_{q})$ that are adjacent. Let $d(T_{l})$ be the district to which $T_{l}$ belongs. We also naturally extend the definition of $d$ to sets of tracts.

Define the neighbor set of tract $T_{l}$ by $a_{T}(T_{l}) = \{T_{p} \in \Gamma \mid T_{l} \sim T_{p}\}$ , all tracts neighboring $T_{l}$ ; and define $a_{D}(T_{l}) = d(a_{T}(T_{l}))$ to be the set of all districts containing neighbors of $T_{l}$ . Every tract borders at least one other tract, so all $a_{T}(T_{l})$ and $a_{D}(T_{l})$ are nonempty.

# Borders

Let the border of district $D_{i}$ be $\partial D_{i} = \{T_{l} \in D_{i} \mid a_{D}(T_{l}) \neq \{D_{i}\} \}$ , the set of tracts in $D_{i}$ adjacent to at least one district other than $D_{i}$ . The interior of district $D_{i}$ is $I_{i} = D_{i} \setminus \partial D_{i}$ , the set of tracts in $D_{i}$ whose neighbors are all in $D_{i}$ . Let $m_{i} = |D_{i}|$ be the number of tracts in $D_{i}$ and $b_{i} = |\partial D_{i}|$ the number of tracts in $\partial D_{i}$ .
+ +The frontier of $D_{i}$ is $F_{i} = \left(\cup_{T_{l}\in D_{i}}a_{T}(T_{l})\right)\setminus D_{i}$ , the set of tracts outside of $D_{i}$ that border the boundary tracts of $D_{i}$ . + +# Counties + +We denote a county by $C_j$ and the set of all counties by $\Lambda$ . Districts can (and often do) break county boundaries, but tracts are contained entirely within counties, so a county is a set of tracts. Districts are also sets of tracts, so we interpret $D_i \cap C_j$ as the set of tracts in both district $D_i$ and county $C_j$ . + +# Population + +Let the population of the state be $P$ and let $\bar{p} = P / n$ be the optimal district size. We use the function $p(\cdot)$ to denote the population of an object; for instance, $p(T_l)$ and $p(C_j)$ are the populations of tract $T_l$ and county $C_j$ , respectively. We use the shorthand $p_i = p(D_i)$ for the population of districts. + +Table 1 is a useful reference of these numerous definitions. + +Table 1. Variables and their meanings. + +
| Variable | Definition |
|---|---|
| $n$ | Number of congressional districts |
| $D_i$ | The $i$th district ($1 \leq i \leq n$) |
| $\Delta$ | Set of all districts in a state, a partition |
| $m$ | Number of census tracts |
| $T_l$ | The $l$th tract ($1 \leq l \leq m$) |
| $\Gamma$ | Set of all tracts in a state |
| $d(T_l)$ | District to which tract $T_l$ belongs |
| $T_p \sim T_q$ | Tracts $T_p$ and $T_q$ are adjacent |
| $a_T(T_l)$ | Set of tracts adjacent to tract $T_l$ |
| $a_D(T_l)$ | Set of districts containing tracts neighboring $T_l$ |
| $\partial D_i$ | Border of $D_i$, tracts that neighbor another district |
| $I_i$ | Interior of $D_i$, tracts that do not neighbor another district |
| $m_i$ | Number of tracts in $D_i$ |
| $b_i$ | Number of tracts in $\partial D_i$ |
| $F_i$ | Set of all tracts outside of $D_i$ that border $\partial D_i$ |
| $C_j$ | The $j$th county |
| $c(T_l)$ | The county to which tract $T_l$ belongs |
| $c(D_i)$ | The set of counties containing district $D_i$ |
| $P$ | Total population of the state |
| $\bar{p}$ | Average population of a district |
| $p(\cdot)$ | Population of an arbitrary object |
| $p_i$ | Population of district $D_i$ |
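The set-valued definitions in Table 1 translate directly into code. The following is a minimal Python sketch, not part of the paper itself; the tract identifiers, grid adjacency, and district assignment are hypothetical, purely for illustration:

```python
# Toy state: six tracts in a 2x3 grid; the relation ~ is "shares an edge".
adjacent = {("T1", "T2"), ("T2", "T3"), ("T4", "T5"), ("T5", "T6"),
            ("T1", "T4"), ("T2", "T5"), ("T3", "T6")}
adjacent |= {(q, p) for p, q in adjacent}        # ~ is symmetric

district = {"T1": "D1", "T2": "D1", "T4": "D1",  # d(T_l): tract -> district
            "T3": "D2", "T5": "D2", "T6": "D2"}

def a_T(t):
    """Neighbor set of tract t."""
    return {q for p, q in adjacent if p == t}

def a_D(t):
    """Set of districts containing neighbors of t."""
    return {district[q] for q in a_T(t)}

def border(d):
    """Border of d: its tracts adjacent to at least one other district."""
    return {t for t in district if district[t] == d and a_D(t) != {d}}

def interior(d):
    """Interior of d: its tracts whose neighbors all lie in d."""
    return {t for t in district if district[t] == d} - border(d)

def frontier(d):
    """Frontier of d: tracts outside d that border d's boundary tracts."""
    D = {t for t in district if district[t] == d}
    return {q for t in D for q in a_T(t)} - D
```

On this toy grid, `border("D1")` is `{"T2", "T4"}` and `frontier("D1")` is `{"T3", "T5"}`.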
# Past Models

Cirincione et al. [2000] judge the quality of a districting plan based on equal population, preservation of county integrity, and district area compactness. They require that district populations differ by no more than $1\%$ from exact equality of number of constituents and that districts be point-contiguous. They construct districts by picking a random block group, then adding additional block groups to the new district until the population reaches $\bar{p}$ . At this point, they repeat the process starting with a new random block group. Compactness is based on minimum bounding rectangles, and county integrity is encouraged by "randomly" selecting new block groups with a preference for block groups in counties already in the emerging district.

Mehrotra et al. [1998] and Garfinkel and Nemhauser [1970] implement a "branch-and-price" method in the optimization step. They first obtain a districting and then optimize over constraints such that population sizes are allowed to vary. In a final step, they split up population units to ensure population equality. They define compactness in a graph-theoretical manner, where connected nodes are adjacent tracts. They define the "center" of a district to be the tract with the smallest maximum distance to any other tract. They consider a district compact when the sum of distances from each node to the center is small.

We do not use their measure, since it does not uniquely define the center of a graph, and (contrary to their claims) does allow for oddly-shaped districts, such as a district whose graph is a star-shaped tree with one tract in the center and many noncontiguous paths emanating from it. Such a tree structure is one salient feature of gerrymandering.

We also do not use a "branch-and-price" method of optimization.
Following suggestions of Nagel [1965] and Kaiser [1966], we employ a local search algorithm in which tracts are swapped between existing districts to maximize the objective function. + +# Measuring Compactness + +The notion of compactness of a planar region has no uniformly accepted definition. Young [1988] suggests that any reasonable measure of compactness should consider population units (census tracts in our case study) as indivisible but laments that no one measure seems to work well for all geographic configurations. + +Young's measures include the maximum total perimeter, the relative height and width, and the moment of inertia of the district. All these fail to consider both perimeter and area simultaneously. + +The Isoperimetric Theorem states that the quantity $A / P^2$ , the ratio of the area $A$ of a planar region (not necessarily contiguous) to the square of its perimeter, is maximized at $1 / 4\pi$ when the region is circular. We define compactness of a region as the ratio $4\pi A / P^2$ . This ratio is 1 for the circle, with higher values indicating greater compactness. The compactness of a square is $4\pi / 16 \approx 0.785$ , an upper bound for compactness of any rectangle. + +This ratio is a good measure of "regularity" of a region. Specifically, any shear of factor $s$ applied to a circle decreases the compactness by a factor of $s$ , and any concave region has lower compactness than its convex hull. In fact, the convex hull of a concave region has greater area and smaller perimeter. + +# The Multi-Seeded Growth Model + +We take a two-stage approach to finding the best districting. In the Multi-Seeded Growth Model (MSGM), we find an initial allocation of $n$ districts so that the partition has modest levels of population equality and county preservation. + +Our Partition Optimization Model (POM) edits and improves the rough sketch from MSGM. + +The reason that we use two phases is speed. 
Our initial inclination was to allocate tracts randomly to the $n$ districts and then optimize by swapping tracts to improve some objective function. However, a random initial configuration is so far from the global maximum that the search might take years.

The MSGM generates a very crude districting that ensures district contiguity and tries for population equality and county preservation. Its districts are unacceptable for an actual plan but save enormous amounts of computing time.

# How It Works

We grow the $n$ districts simultaneously until they cover the state.

We start by allocating the entire state to a dummy district $D_0$ , and then allocate $n$ tracts that serve as the initial "seeds" for the final districts, such that each $D_i$ begins as only a single tract. While $|D_0| > 0$ , we consider the set $S$ of all possible moves that involve taking a tract from $D_0$ while preserving contiguity. That is:

$$
S \left(D _ {0}; D _ {1}, \dots , D _ {n}\right) = \bigcup_ {i = 1} ^ {n} \bigcup_ {T _ {l} \in F _ {i}} M \left(T _ {l}, D _ {0}, D _ {i}\right),
$$

where $M(T_{l},D_{i},D_{j})$ represents a move of tract $T_{l}$ from $D_{i}$ to $D_{j}$ and $F_{i}$ is the frontier of $D_{i}$ , the set of tracts that border $D_{i}$ . Each move is scored by the desirability of the partition that would result if we were to accept only that move. We perform the top $3\%$ of moves. This method preserves contiguity, because by definition any $T_{l}\in F_{i}$ must be contiguous with $D_{i}$ , and thus the $D_{i}$ are contiguous at each step.

Even though in the MSGM we do not consider moves between two "true" districts (rather, we consider only moves between a true district and the dummy district), the score of a move does not exist in isolation. Consider two adjacent districts $D_{i}$ and $D_{j}$ , a shared frontier tract $T_{l} \in F_{i} \cap F_{j}$ , and an unshared frontier tract $T_{k} \in F_{i} \cap F_{j}^{c}$ .
The acceptance of $M(T_{k},D_{0},D_{i})$ alters the heuristic value of every move associated with $F_{i}$ , which could potentially affect the optimality of further moves with $D_{i}$ , such as the acceptance of $M(T_{l},D_{0},D_{i})$ rather than $M(T_{l},D_{0},D_{j})$ . Furthermore, the acceptance of $M(T_{l},D_{0},D_{i})$ likely expands the size of $F_{i}$ . Perhaps an optimal move opens up in this new frontier that we do not even consider, because we have not calculated its value.

It would be better to perform only the best move, but such a strategy is too computationally intensive. We compromise by taking in each step an elite fraction of the moves before recalculating $S$ and the values of its associated moves. In this respect, our approach is analogous to the strategy of modified policy iteration for solving a Markov decision problem, in which a fixed number of rounds of value iteration are made between policy iterations. The tradeoff of possible inefficiency is more than compensated for by the speed gain, especially considering that the solution obtained by MSGM will be further refined by POM.

The MSGM scheme uses a variable number of moves between recalculating the value of the frontier. Our scheme causes us to be delicate in our selections of tract allocations, making moves virtually one at a time at the beginning and end of the MSGM. By focusing on the beginning and end of the problem, we attempt to avoid having a single district grow too large through inefficient allocation.

Unlike Cirincione et al. [2000], we use random initial seeds weighted by population rather than seeds equally spaced around the state. The process works as follows: While there are still random seeds to be selected, we find a candidate initial seed tract $T_{l}$ in $D_{0}$ . We accept $T_{l}$ as an initial seed with probability $p(T_{l}) / \bar{p}$ so that tract selection is proportional to population.
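This population-weighted acceptance step can be sketched as rejection sampling. The Python illustration below reflects our reading of the scheme; the tract populations and the helper name `select_seeds` are hypothetical:

```python
import random

def select_seeds(tract_pops, n, rng):
    """Pick n distinct seed tracts. A candidate tract l is accepted with
    probability p(T_l) / p_bar (capped at 1), so that selection is
    proportional to population."""
    p_bar = sum(tract_pops.values()) / n       # optimal district population
    tracts = list(tract_pops)
    seeds = set()
    while len(seeds) < n:
        cand = rng.choice(tracts)              # candidate seed tract in D_0
        if cand not in seeds and rng.random() < min(1.0, tract_pops[cand] / p_bar):
            seeds.add(cand)
    return seeds

pops = {f"T{l}": 100 * l for l in range(1, 21)}  # hypothetical tract populations
seeds = select_seeds(pops, n=4, rng=random.Random(0))
```

With this weighting, heavily populated tracts are accepted more often, which tends to place seeds in dense areas such as a metropolitan region.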
The MSGM algorithm produces the best initial results when all the districts have the same population rather than the same number of tracts. The geographically optimal placement of five (or fewer) starting seeds in the NYC Metropolitan area and Long Island evinces the fallibility of the equidistant initial-seed method.

The heuristic by which we rank candidate moves has two components: a population score and a county score.

# Population Score

We want to minimize egregious disparities in population between districts. The population component of our heuristic should give the highest score to a district when $p_i = \bar{p}$ . Additionally, we want to penalize large deviations from the optimal population, so our function should be concave down.

Let $f(p_i)$ be the population heuristic score for district $D_i$ with population $p_i$ . We use a piecewise definition for $f$ :

$$
f(p_i) = \left\{ \begin{array}{ll} M \sqrt{\frac{p_i}{\bar{p}}}, & \text{if } p_i \leq \bar{p}; \\ M - \frac{4M}{p_i^2} (p_i - \bar{p})^2, & \text{if } p_i > \bar{p}. \end{array} \right.
$$

Notice that $f$ is steeper for values $p_i > \bar{p}$ because we do not want growing districts to engulf too much population; we penalize deviations above $\bar{p}$ more severely than deviations below $\bar{p}$ . Figure 3 shows the function $f$ .

![](images/95ab9d65a8d9288575c4635c36789e352765b856a056a21a7a5ec24a8f4e6f36.jpg)
Figure 3. MSGM heuristic for population.

# County Preservation Score

We measure a district's county preservation score in terms of the percentage of counties that it completes on a population basis. To encourage growing districts to add remaining tracts in nearly complete counties, the marginal value of adding these should increase with the fraction of the county's population already contained in that district. To accomplish this, we use the square of the proportion contained in a county.
The county score $g$ for a district $D_{i}$ is: + +$$ +g \left(D _ {i}\right) = \sum_ {C _ {j} \in \Lambda} \left(\frac {\sum_ {T _ {l} \in D _ {i} \cap C _ {j}} p \left(T _ {l}\right)}{p \left(C _ {j}\right)}\right) ^ {2}. \tag {1} +$$ + +For instance, if a district completely contains one county and contains $30\%$ of each of two other counties' populations, its score would be $(1^{2} + 0.3^{2} + 0.3^{2}) = 1.18$ . Figure 4 shows a plot of the county score that a district receives based on what percentage of a county's population said district contains. + +![](images/5b737024f774a58417046df6629575a29d4d1e4154512db8f4894d611d5d5fbc.jpg) +Figure 4. MSGM heuristic for county completeness. + +# Partition Optimization Model + +We refine the MSGM solution through local search. + +# The Objective Function + +The only characteristics of the district and the county that we use are the populations $p(P) = \{p_i\}_{1 \leq i \leq n}$ , the compactness measures $c(P) = \{c_i\}_{1 \leq i \leq n}$ , and the fractions $\rho(P) = \{\rho_{i,r} | 1 \leq i \leq n, 1 \leq r \leq c\}$ of the population of county $r$ that is contained in district $i$ . We would like our score function $s(P) = s(p(P), c(P), \rho(P))$ to have the following properties: + +1. The score function should be unimodal as a function of $p_i$ , with mode at $p_i = \bar{p}$ . +2. The score should increase more by adding tracts that lie in $\chi(D_i)$ , so that we prefer having as few districts as possible in a given county. +3. The score should increase more by adding tracts that increase the sum of all compactness measures by the greatest amount. + +These desiderata suggest that we consider the three vectors $p(P), c(P), \rho(P)$ independently of one another in the score function. In other words, we would like our score function to be a separable function of these three vectors, that is, have the form + +$$ +s (P) = f \big (p (P) \big) + g (c (P)) + h (\rho (P)), +$$ + +where $f,g,h$ are functions. 
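The county score (1) is straightforward to compute from tract-level data. A minimal Python sketch follows (the data layout and the helper name `county_score` are hypothetical); it reproduces the 1.18 example above:

```python
def county_score(district_tracts, county_of, pop, county_pop):
    """g(D_i): sum over counties of the squared fraction of each county's
    population contained in the district, per equation (1)."""
    contained = {}                         # county -> population inside D_i
    for t in district_tracts:
        c = county_of[t]
        contained[c] = contained.get(c, 0) + pop[t]
    return sum((contained[c] / county_pop[c]) ** 2 for c in contained)

# A district that fully contains county A (pop. 1000) and 30% of the
# populations of counties B and C (pop. 1000 each):
county_of  = {"a1": "A", "a2": "A", "b1": "B", "c1": "C"}
pop        = {"a1": 600, "a2": 400, "b1": 300, "c1": 300}
county_pop = {"A": 1000, "B": 1000, "C": 1000}
score = county_score(["a1", "a2", "b1", "c1"], county_of, pop, county_pop)
# score = 1^2 + 0.3^2 + 0.3^2 = 1.18
```

The squaring makes the marginal reward of absorbing the rest of a nearly complete county larger than that of nibbling at a fresh one.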
+ +# One (Wo)man, One Vote + +The state has total population $P$ and average population of $\bar{p} = P / n$ per district. Letting $p_i$ be the population in district $i$ , we consider three potential metrics for the population variance between districts: + +1. Variance: $\operatorname{Var}(p_1, p_2, \ldots, p_n)$ +2. Maximum deviation: $\max \{|p_i - \bar{p} |\}$ +3. Maximum difference: $\max \{p_i\} -\min \{p_i\}$ + +For each measure, lower values are preferable and the minimum is 0. We submit that variance is the best alternative. To see why, consider two possible population distributions between districts: + +- Situation A: One district has a population of $1.05\bar{p}$ , one has $0.95\bar{p}$ , and all of the others have $\bar{p}$ . +- Situation B: Half of the districts have population $1.05\bar{p}$ and half have $0.95\bar{p}$ (any left-over odd district has $\bar{p}$ ). + +In Situation A, only two districts are different from the ideal population level $\bar{p}$ ; but in Situation B, very few districts have population $\bar{p}$ . So a good metric should rank $B$ worse than $A$ . Clearly, the variance of populations is higher in $B$ than in $A$ , so variance passes this test. The maximum deviation test gives $0.05\bar{p}$ for both $A$ and $B$ , and the maximum difference gives $0.1\bar{p}$ for both. + +We see that variance is the best measure of similarity, since it factors in the pairwise difference in all district populations. + +By penalizing extreme variation away from $\bar{p}$ , MSGM creates districts with approximate population equality. However, in one typical run, the final populations of districts vary from 600,000 to 700,000, an unacceptable difference. 
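The comparison of Situations A and B is easy to verify numerically. The following Python check is purely illustrative (the concrete numbers $\bar{p} = 100{,}000$ and $n = 10$ are hypothetical):

```python
def variance(pops):
    """Population variance of a list of district populations."""
    mean = sum(pops) / len(pops)
    return sum((p - mean) ** 2 for p in pops) / len(pops)

p_bar = 100_000
n = 10
# Situation A: one district at 1.05*p_bar, one at 0.95*p_bar, rest at p_bar.
A = [105_000, 95_000] + [p_bar] * (n - 2)
# Situation B: half the districts at 1.05*p_bar, half at 0.95*p_bar.
B = [105_000] * (n // 2) + [95_000] * (n // 2)

max_dev  = lambda pops: max(abs(p - p_bar) for p in pops)   # max deviation
max_diff = lambda pops: max(pops) - min(pops)               # max difference
```

As claimed, the variance of B exceeds that of A, while the maximum deviation ($0.05\bar{p}$) and maximum difference ($0.1\bar{p}$) are identical for both and so cannot distinguish them.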
# Compactness

To measure the compactness of a district, we would ideally use our compactness measure:

$$
c _ {i} = \frac {\operatorname {Area} (D _ {i})}{[ \operatorname {Perimeter} (D _ {i}) ] ^ {2}},
$$

such that:

$$
g \big (c (P) \big) = \beta \sum_ {i = 1} ^ {n} c _ {i},
$$

where $\beta$ is some constant.

Unfortunately, we could not calculate the perimeter of an arbitrary tract (the C++ library that we used to interact with our census data shapefiles featured massive memory leaks for large-scale union operations, questionable accuracy for pairwise unions, and seemingly arbitrary calculations of intersection length).

Yet it is a poor craftsman who blames the tools, so we adopt a different measure of compactness. The clustering coefficient provides a rough approximation for compactness:

$$
c c (D _ {i}) = \frac {\sum_ {T _ {l} \in D _ {i}} | \{T _ {k} \in D _ {i} \mid T _ {k} \sim T _ {l} \} |}{\binom {m _ {i}} {2}},
$$

such that

$$
g \big (c (P) \big) = \beta \sum_ {i = 1} ^ {n} c c (D _ {i}),
$$

where $\beta$ is some constant. The clustering coefficient provides a ratio of the total number of interdistrict boundaries to the maximum possible number of interdistrict boundaries. If all tracts were uniformly shaped, this measure would prize square- and circle-shaped districts, while winding single-tract-width districts would be penalized. However, given the asymmetry of tract shapes, this measure does little to reflect negatively upon district shapes such as the dumbbell (two circular clusters of tracts connected by a narrow band of tracts). In general, however, the clustering coefficient values adding to districts tracts that are "close" and removing from districts those tracts that are auxiliary.

# County Preservation

We adopt the same county preservation measure (1) used in the MSGM, with the option of adding a scaling factor to the entire function to refine empirical performance.
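The two compactness measures in this section can be sketched in a few lines of Python. This is illustrative only: the isoperimetric check uses a unit square, and the clustering coefficient runs on a hypothetical three-tract adjacency list:

```python
import math

def isoperimetric(area, perimeter):
    """4*pi*A / P^2: equals 1 for a circle, smaller for less compact shapes."""
    return 4 * math.pi * area / perimeter ** 2

# A unit square scores 4*pi/16 = pi/4, the best any rectangle can do.
square = isoperimetric(1.0, 4.0)

def clustering_coefficient(tracts, adjacent):
    """Sum over tracts of in-district neighbor counts, divided by the
    number of tract pairs C(m_i, 2)."""
    m = len(tracts)
    internal = sum(1 for t in tracts for u in tracts if (t, u) in adjacent)
    return internal / math.comb(m, 2)

# Toy district: three tracts in a row (each adjacency stored in both orders).
adj = {("T1", "T2"), ("T2", "T1"), ("T2", "T3"), ("T3", "T2")}
cc = clustering_coefficient(["T1", "T2", "T3"], adj)
```

Because `internal` counts each shared boundary from both sides, a tightly clustered district scores higher than a winding single-tract-wide chain of the same size.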

# Search Method and Neighborhood Function

To refine our solution from MSGM, we must move tracts between districts. Yet the space of all possible contiguous moves is too large. We consider a range of possible moves with respect to only one district and perform the best move on this dramatically reduced state space.

By selecting our target district at random in each iteration, our strategy is best described as stochastic domain hill climbing, a method that combines the best aspects of both random and deterministic local search methods. We perform optimal moves while avoiding getting stuck trying to increase the score of only a single district. Simple first-order moves on the district level, that is, adding or removing individual tracts, cannot reduce the variance metric to the extremely low standard that we demand, so we include second-order moves, that is, "swaps"—both an add and a remove within a single operation.

If the maximum connectedness of any tract on the graph is $k$, checking all adds and removes separately for district $D_i$ involves considering

$$
\mathcal{O}(k \cdot |\partial D_i| + |F_i|) = \mathcal{O}(k m_i)
$$

moves, while looking at all swaps involves considering $\mathcal{O}(k \cdot |\partial D_i| \cdot |F_i|) = \mathcal{O}(k m_i^2)$ moves. One might contend that checking every district for first-order moves would be a better algorithm, since it would take $\mathcal{O}(\sum_{i=1}^{n} k m_i) = \mathcal{O}(n k m_i)$ heuristic evaluations. One could even supplement such an algorithm with a degree of randomness, to avoid being caught in a loop of futility, by employing simulated annealing, stochastic hill climbing, or tabu search on the resulting list of possible future states.
We found, however, that checking for second-order moves provides far better empirical results with acceptable time performance, while an algorithm enumerating all possible second-order states, requiring $\mathcal{O}(\sum_{i=1}^{n} k m_i^2) = \mathcal{O}(n k m_i^2)$ heuristic evaluations, was too slow to be effective.

The heart of POM is Algorithm 1. For simplicity and readability:

- $M_{\mathrm{add}}(D_i)$ is the set of all moves in which we add a frontier tract to $D_i$,
- $M_{\mathrm{remove}}(D_i)$ is the set of all moves in which we remove a border tract from $D_i$, and
- $M^{-1}$ is the move inverse to $M$, such that applying both $M$ and $M^{-1}$ in turn has no effect.

Input: Iteration count iter, initial partition $P$.

Output: Final partition $P$.

```
count ← 0
while count < iter do
    curscore ← s(P)
    D ← randomDistrict()
    bestscore ← curscore
    foreach M_a ∈ {∅} ∪ M_add(D) do
        foreach M_r ∈ {∅} ∪ M_remove(D) do
            performMove(M_a)
            performMove(M_r)
            if isContiguous(P) then
                tmpscore ← s(P)
                if tmpscore > bestscore then
                    bestscore ← tmpscore
                    bestadd ← M_a
                    bestremove ← M_r
                end
            end
            performMove(M_a⁻¹)
            performMove(M_r⁻¹)
        end
    end
    if bestscore > curscore then
        performMove(bestadd)
        performMove(bestremove)
    end
    count ← count + 1
end
return P
```

Algorithm 1: Stochastic domain hill-climbing algorithm for districting.

We guarantee that the solution remains contiguous by never considering moves that would break contiguity, and we perform a move only if it increases the score of the current state.

# Achieving Absolute Equality

U.S. law mandates that the populations of the districts in a state be equal to within one person according to the census data [Karcher v. Daggett (1983)]. We deal with entire census tracts, so our algorithm cannot meet that standard.
This last step must be implemented by splitting tracts between districts.

To our knowledge, equalizing populations below the level of whole population units (no smaller than block groups) has not been addressed in the literature. Clearly, the simplest way to do this is to split one of the border tracts. While we do not implement this, we describe a methodology for it.

Let $G$ be a graph whose vertices are the districts and whose edges are the pairs of bordering districts. Define tract splitting to be the process of splitting a border tract into two disjoint areas, with its population allocated between the two bordering districts. If we can find a pair of districts such that splitting a border tract between them gives both districts populations within one person of the mean population, then we would optimally do so and ignore those two districts for the remainder of the algorithm. However, to guarantee that the algorithm finishes, we require that the graph $G$ remain connected (otherwise, $G$ may divide into two or more connected components whose constituent districts cannot attain populations equal to the overall mean). Taking out two districts at a time by splitting only a single tract splits the fewest possible tracts.

We search for an edge of $G$ such that removal of its two vertices and all edges emanating from them leaves a new graph $G_1 \subset G$ that is connected. We call the deletion of a single vertex from a graph that leaves the graph connected a paring. If such a pair of vertices exists, we perform the double paring, run the algorithm on $G_1$, and continue until all districts have equal population. If no such pair of districts exists, we instead perform a single paring, ensuring that the removed district has population $\bar{p}$ before removing it.

There always exists an edge on a connected graph $G$ that permits a double paring of $G$, except for a very specialized set of connected graphs. However:

Theorem.
Every connected graph permits a paring.

A proof of this theorem is given in the Appendix.

We recursively update the districts to get population equality. We iteratively pare the graph $G$ of districts such that each time we pare a district or pair of districts, those districts have populations equal to the population mean. By the theorem, this process always ends with all districts having equal population.

The algorithm removes at least one vertex from $G$ at each step, and the whole algorithm can therefore be performed with $(n - d)$ tract splittings, where $n$ is the number of districts and $d$ is the number of double parings performed.

# Case Study: New York

# The Data

The 2000 census for New York State contains 4,907 tracts, some with no population [Empire State Development 2007]. These empty tracts are the "holes" on our maps. Trimming them leaves 4,827 tracts to examine.

# Results

Running the MSGM on our initial allocation gives 29 haggard districts, with populations varying from 281,000 to 970,000. We use this solution as a starting point. Though our algorithmic process of refinement is stochastic, generally more than $90\%$ of the moves in any run involve swaps; this is particularly true at the very end of a run, where population differences between districts are minute. As a result, swapping provides a way to adjust population smoothly and also "cleans up" tattered fringes of districts, increasing their compactness even amid vigorous population changes. After refinement, district populations ranged from 652,561 to 655,760.

The results in Figures 5-7 demonstrate a partitioning into contiguous, compact, and reasonable districts. Furthermore, the simulations that produced these visually pleasing results also achieve extremely high degrees of population equality and county preservation.
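The refinement loop of Algorithm 1 can be sketched in miniature. Everything below is hypothetical: tracts are cells of a 4×4 grid with made-up populations, the score rewards population equality only, and there are just two districts, so each pass considers the second-order moves (an add, a remove, or both) between the random target district and its neighbor, applying the best one only if it improves the score and keeps both districts contiguous.

```python
import random
from statistics import pvariance

SIZE = 4  # hypothetical 4x4 grid of tracts

def neighbors(t):
    x, y = t
    return [(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if 0 <= x + dx < SIZE and 0 <= y + dy < SIZE]

POP = {(x, y): x + y + 1 for x in range(SIZE) for y in range(SIZE)}  # made-up populations

def contiguous(tracts):
    seen, stack = set(), [min(tracts)]
    while stack:
        t = stack.pop()
        if t not in seen:
            seen.add(t)
            stack.extend(u for u in neighbors(t) if u in tracts)
    return seen == tracts

def score(partition):
    # Toy heuristic: negative variance of district populations (higher is better).
    return -pvariance([sum(POP[t] for t in d) for d in partition])

def best_swap(p, i, j):
    """Best second-order move between districts i and j; either half may be a no-op."""
    best, move = score(p), None
    adds = [a for a in p[j] if any(u in p[i] for u in neighbors(a))] + [None]
    removes = [b for b in p[i] if any(u in p[j] for u in neighbors(b))] + [None]
    for a in adds:
        for b in removes:
            di = (p[i] | {a} if a else set(p[i])) - {b}
            dj = (p[j] | {b} if b else set(p[j])) - {a}
            if not di or not dj or not contiguous(di) or not contiguous(dj):
                continue  # reject moves that would break contiguity
            s = score([di, dj] if i == 0 else [dj, di])
            if s > best:
                best, move = s, (di, dj)
    return move

districts = [{(x, y) for x in range(2) for y in range(SIZE)},
             {(x, y) for x in range(2, SIZE) for y in range(SIZE)}]
start = score(districts)
rng = random.Random(0)
for _ in range(40):
    i = rng.randrange(2)            # random target district
    move = best_swap(districts, i, 1 - i)
    if move:
        districts[i], districts[1 - i] = move
assert score(districts) > start and all(contiguous(d) for d in districts)
```

The full POM works the same way but over dozens of districts and thousands of tracts, with the composite score described earlier rather than population variance alone.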

# Analysis of the Models

# Solving the Problem

By combining the Multi-Seeded Growth Model with the Partition Optimization Model, we create fair and geometrically compact districts. The districts conform to well-accepted measures of goodness: population equality, contiguity, preservation of county boundaries, and compactness of shape.

The districts produced are both simple and fair. Geometric simplicity is measured by compactness, as determined by how close the members of a district live relative to one another. Additionally, our method penalizes splitting counties between several districts, so that neighboring citizens, who have similar concerns, have the same representative. The fairness of our methodology is evident in its indifference to partisan politics, incumbent protection, and race/ethnicity.

# Strengths

The model successfully generates district partitions that simultaneously excel against the standard metrics of county integrity, compactness, and population equality.

![](images/eeb3fda8abd689182aab400e24b4ca5c140403fcefd3cd22d568487a170b16d6.jpg)

![](images/2a99f2fa0b67bdba1a1ecd625c6f5ce6fe31e7d9dceebf20b2d6df65fc1fd9f8.jpg)
Figure 5. New York congressional districts from the POM.

![](images/bc5217fe2fb48a64a7553e430a92a2bc729c327e2cadd9a4333af9e92629dbe6.jpg)
Figure 6. NYC metro-area POM.
Figure 7. Close-up of the Albany area POM.

Unlike other models in the literature, we provide an algorithm for reducing population differences to at most one person by breaking up a minimal number of tracts.

The model runs independently of the distribution of population, and works well both in low- and high-density locales, and with regular and oddly-shaped census tracts. This is evidenced by the successful districtings that our model produces in rural, small-city, and large metropolitan areas (Figures 5-7).

The algorithm can generate districts for a large state in less than an hour.

# Weaknesses

The model assumes contiguity of the entire state; so in cases where contiguity cannot be forced, such as Hawaii or Michigan, we must change the algorithm slightly. One solution could be to divide the state into several regions and run the model separately on each region, allocating the proportionally correct number of representatives to each region based on population.

The model appears to tend toward creating districts that are either very low- or very high-density, instead of splitting smaller population centers among a number of districts. Since political affiliation and race are likely correlated with population density, the algorithm may inadvertently separate demographic groups into different districts, which could be viewed as gerrymandering. Yet another camp would argue that it is appropriate to divide urban, suburban, and rural areas into separate districts, since their residents have different concerns.

# Conclusion

Since the 19th century, Elbridge Gerry's lizard has grown into a terrible, twisting serpent, eating away at our democracy. It is time to put gerrymanders on a healthier diet.

# References

Barkan, J.D., P.J. Densham, and G. Rushton. 2006. Space matters: Designing better electoral systems for emerging democracies. *American Journal of Political Science* 50 (4): 926-939.
Bong, C., and Y. Wang. 2006. A multi-objective hybrid metaheuristic for zone definition procedure. *International Journal of Services Operations and Informatics* 1 (1/2): 146-164.
Caliper Corporation. n.d. About census summary levels. http://www.caliper.com/Maptitude/Census2000Data/SummaryLevels.htm.
Cirincione, C., T.A. Darling, and T.G. O'Rourke. 2000. Assessing South Carolina's 1990s Congressional redistricting. *Political Geography* 19: 189-211.
Empire State Development. 2007. New York State Data Center. Census 2000. http://www.empire.state.ny.us/nysdc/popandhous/Census2000.asp.
ePodunk Inc. 2007.
New York: Population change, 2000 to 2003. http://www.epodunk.com/top10/countyPop/coPop33.html.
Garfinkel, R.S., and G.L. Nemhauser. 1970. Optimal political districting by implicit enumeration techniques. *Management Science* 16 (4): B495-B508.
Hunt v. Cromartie, 526 U.S. 541 (1999).
Karcher v. Daggett, 462 U.S. 725 (1983).
Kaiser, H. 1966. An objective method for establishing legislative districts. *Midwest Journal of Political Science* 10 (2): 210-213.
League of United Latin American Citizens v. Perry, 548 U.S. (2006).
Luttinger, J.M. 1973. Generalized isoperimetric inequalities. *Proceedings of the National Academy of Sciences of the United States of America* 70: 1005-1006.
Macmillan, W. 2001. Redistricting in a GIS environment: An optimisation algorithm using switching-points. *Journal of Geographical Systems* 3 (2): 167-180.
_____, and T. Pierce. 1994. Optimization modeling in a GIS framework: The problem of political districting. In *Spatial Analysis and GIS*, edited by S. Fotheringham and P. Rogerson. Bristol, UK: Taylor and Francis.
Mehrotra, A., E.L. Johnson, and G.L. Nemhauser. 1998. An optimization based heuristic for political districting. *Management Science* 44 (8): 1100-1114.
Miller v. Johnson, 515 U.S. 900 (1995).
Nagel, S. 1965. Simplified bipartisan computer redistricting. *Stanford Law Review* 17: 863-899.
Shaw v. Reno, 509 U.S. 630 (1993).
SC State Conference of Branches, etc. v. Riley, 533 F. Supp. 1178 (D.S.C. 1982), affirmed 459 U.S. 1025.
U.S. Department of the Interior. 2007. Printable maps. http://nationalatlas.gov/printable/congress.html#list.
Weaver, J.B., and S.W. Hess. 1963. A procedure for nonpartisan districting: Development of computer techniques. *Yale Law Journal* 73 (1): 287-308.
Wikipedia. 2007. Gerrymandering. http://en.wikipedia.org/wiki/Gerrymandering.
Young, H.P. 1988. Measuring the compactness of legislative districts. *Legislative Studies Quarterly* 13: 105-115.

# Appendix: Proof of Theorem

Theorem.
Every connected graph permits a paring.

Proof: We prove, by strong induction on the number $y$ of vertices, the stronger statement that every connected graph $G$ with at least two vertices has at least two parings. The claim clearly holds for $y = 2$: removing either vertex leaves a single vertex, which is trivially connected.

Suppose that the claim holds for all connected graphs with at least 2 and at most $k$ vertices, where $k \geq 2$, and let $G$ be a connected graph with $k + 1$ vertices. If no vertex of $G$ is a cut vertex, then every vertex of $G$ is a paring, and since $y \geq 3$ we are done. Otherwise, take a vertex $v$ whose removal disconnects $G$, and let $G_1, G_2$ be two of the connected components into which $G - \{v\}$ is divided. Note that $v$ is adjacent to every component of $G - \{v\}$ (otherwise $G$ itself would be disconnected), and that no edges join distinct components.

We claim that $G_1$ contains a vertex whose removal leaves $G$ connected. If $G_1$ consists of a single vertex $v_1$, then the only neighbor of $v_1$ is $v$, so removing $v_1$ leaves $G$ connected. Otherwise, by the induction hypothesis there exist vertices $v_1, v_2$ of $G_1$ such that removal of either one leaves $G_1$ connected. If both $v_1, v_2$ are adjacent to $v$, then removal of $v_1$ leaves $G$ connected: letting $G_1' = G_1 - \{v_1\}$, the graph $G - \{v_1\}$ consists of $G_1' \cup \{v\}$ together with the remaining components, all of which are connected and attached to $v$. If one of $v_1, v_2$, say $v_1$, is not adjacent to $v$, then $v$ must be adjacent to some other vertex of $G_1$, so again $G_1' \cup \{v\}$ is connected, as are the remaining components attached to $v$, and removing $v_1$ leaves $G$ connected.

The same argument applied to $G_2$ yields a second such vertex, so $G$ has at least two parings. This proves the result by induction.

![](images/a1409c37046eabecd9787cc014ba9e7e6e5a80f84732764478be009364083d5b.jpg)
Abe Othman, Chris Yetter, and Ben Conlee.
+ +# Electoral Redistricting with Moment of Inertia and Diminishing Halves Models + +Andrew Spann + +Daniel Kane + +Dan Gulotta + +Massachusetts Institute of Technology + +Cambridge, MA + +Advisor: Martin Z. Bazant + +# Summary + +We propose and evaluate two methods for determining congressional districts. The models contain explicit criteria only for population equality and compactness, but we show that other fairness criteria such as contiguity and city integrity are present, too. + +The Moment of Inertia Method creates districts whose populations are within $2\%$ of the mean district size, minimizing the sum of the squares of distances between the district's centroid and each census tract (weighted by population size). We prove that this model gives convex districts. + +In the Diminishing Halves Method, the state is recursively halved by lines perpendicular to best-fit lines through the centers of census tracts. + +From U.S. Census 2000 data, we extract the latitude, longitude, and population count of each census tract. By parsing data at the tract level instead of the county level, we model with high precision. We run our algorithms on data from New York as well as Arizona (small), Illinois (medium), and Texas (large). + +We compare the results to current districts. Our algorithms return districts that are not only contiguous but also convex, aside from borders where the state itself is nonconvex. We superimpose city locations on district maps to check for community integrity. We evaluate our proposed districts with various quantitative measures of compactness. + +The initial conditions do not greatly affect the Moment of Inertia Method. We run variants of the Diminishing Halves Method and find that they do not + +improve over the original. Based on our results, district shapes should be convex, and city boundaries and contiguity can be emergent properties, not explicit considerations. 
We recommend our Moment of Inertia Method, as it consistently performed the best. + +# Assumptions and Justifications + +# About States + +- The Earth's geometry is Euclidean. No state is so big that the spherical shape of the earth significantly distorts distance calculations obtained from Euclidean geometry. +- County lines are not inherently more significant than other boundaries. Some states attempt to not split counties when determining districts, and other states give only slight consideration to county lines. Since several of New York's counties are too big to use as discrete units for dividing representatives, and representing county boundaries in the model is difficult, we instead use the census tract as our base unit of population. +- Deviation from the current district division is not a major factor. There are no inherent transitional problems with switching to a completely new division if it can be shown to have a higher degree of fairness. +- District populations may vary by as much as $2\%$ from the average value. We use the $2\%$ allowance to get around problems with our data on populations not being fine enough. The error could be made smaller if census blocks were used instead of census tracts. + +# About Census Data + +- Census data are always accurate. There are no other reasonable data. +- Census tracts individually satisfy fair apportionment criteria. No U.S. census tract is gerrymandered; there is no political benefit to doing so. +- All population in a census tract can be approximated as located at a single point. For data input to our program, we read in latitude and longitude for each census tract. We assume that the entire population of a census tract is located at this point. Since we have 6,398 census tracts for New York State, none of which have more than $4\%$ of the population for a congressional district (and most of which are considerably smaller), this does not provide a very severe discretization problem. 
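The first assumption (Euclidean geometry) can be spot-checked numerically. The sketch below compares the great-circle distance with a flat-Earth (equirectangular) approximation across roughly the full extent of New York State; the corner coordinates and the choice of projection are ours, not the authors'.

```python
from math import radians, sin, cos, asin, sqrt, hypot

R = 6371.0  # mean Earth radius in km

def great_circle(p, q):
    """Haversine distance between (lat, lon) points given in degrees."""
    lat1, lon1, lat2, lon2 = map(radians, p + q)
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * R * asin(sqrt(a))

def flat(p, q, ref_lat):
    """Euclidean (equirectangular) approximation around reference latitude ref_lat."""
    lat1, lon1, lat2, lon2 = map(radians, p + q)
    return R * hypot(lat2 - lat1, (lon2 - lon1) * cos(radians(ref_lat)))

# Rough opposite corners of New York State (approximate coordinates).
a, b = (40.5, -79.8), (45.0, -71.8)
err = abs(flat(a, b, 42.75) - great_circle(a, b)) / great_circle(a, b)
assert err < 0.01  # well under 1% even corner-to-corner, so the assumption is mild
```

At state scale the planar approximation is off by a small fraction of a percent, supporting the assumption.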

# Literature Review

Gerrymandering has attracted scholarly attention for decades.

Attempts to assign districts with computers began in the 1960s and 1970s with models by Hess et al. [1965], Nagel [1965], and Garfinkel and Nemhauser [1970]. These methods typically represent population as a series of weighted $(x, y)$ coordinates and attempt to draw equal-population districts based on compactness and contiguity. The methods' criteria for compactness vary; a collection of compactness metrics is reviewed in Young [1988]. Computer resources were limited in this era, and Garfinkel and Nemhauser even report being unable to compute a 55-county state. For a more detailed review of early papers, see Williams [1995].

Many versions of the redistricting problem are NP-hard [Altman 1997]. Recent papers have tried graph theory [Mehrotra et al. 1998], genetic algorithms [Bacao et al. 2005], statistical physics [Chou and Li 2006], and Voronoi diagrams [Galvao et al. 2006]. There are also papers such as Cirincione et al. [2000] that are intellectual and stylistic descendants of the early papers but use modern computer resources to enable finer population blocks and tighter convergence criteria.

We use a moment-of-inertia model similar in formulation to Hess et al. [1965] but with differences in optimization.

# Criteria for Fair Districting

We list factors considered in districting and explain which ones we choose. Later, we describe the specific expressions of these criteria in our two models and their mathematical consequences. Core criteria are:

- Equality of population. The population difference between two districts can vary only by at most a certain number of people, usually on the order of $5\%$.
- Contiguity. Each district must be topologically simply connected.
- Compactness. There are differing opinions on how to quantitatively define compactness, but all agree that small wandering branches are bad.

A criterion not emphasized in the literature is convexity, a stronger form of contiguity: Any two points in the region can be connected by a straight line segment contained within the region. This disallows the holes and extraneous arms that contribute to most poorly-shaped districts. The worst cases for a convex region are districts with sharp angles or very elongated shapes.

Other criteria [Nagel 1965; Williams 1995] serve one of two purposes:

- Targeted homogeneity or heterogeneity. Nagel explicitly expresses a desire to use predicted voting data to create "safe districts" and "swing districts," where the outcomes of elections are more predictable or less predictable, respectively. The stated reasons for this involve balancing the state's districts so that some parts of a state have experienced candidates who are stable to long-term change while other parts are more responsive. Other papers discuss clustering groups based on race, economic status, age, or other demographic data into a district where statewide minorities have a local majority.
- Similarity to boundaries or precedent. Whenever possible, people in the same city should have the same representative. Likewise, it can be viewed as unfair to a representative if the people represented change too quickly. It also makes sense for districts to follow rivers, lakes, mountains, and other natural boundaries where appropriate. Usually, boundary-or-precedent objectives are accomplished by keeping county boundaries intact whenever possible.

These optional criteria conflict with the earlier core criteria.

The explicit criteria should be as minimalist as possible, so that more-complicated measures of good districting emerge rather than being forced. Additionally, with complicated objectives, politicians could gerrymander by tweaking parameters of the objective function.

We explain why we do not include the two optional criteria listed above:

- We do not consider targeted homogeneity or heterogeneity criteria because we consider it highly unethical to write a computer program to draw districts that benefit a particular candidate or party, even if the stated reasons appear well-intentioned. The goal of computer assignment of districts is to eliminate all manipulations of this form, so including criteria of this form in the objective function is unacceptable.
- Although the use of existing county or natural boundaries might work well for small states with a high ratio of counties to congressional districts, the county borders of New York are ill-suited for this purpose. New York has only 62 counties but 29 representatives. Following county borders whenever possible but splitting counties where reasonable involves much more work in preprocessing data to incorporate county information and still places pressure on creating noncompact districts.

We formulate a methodology that involves only equality of population and compactness.

# Moment of Inertia Method

# Description

By equality of population, we mean that no district's population should differ by more than $2\%$ from the mean population per district in the state. There does not appear to be any clear court-mandated tolerance for population difference [Williams 1995], so we simply pick a reasonable number that is within the feasibility of computation based on the discretized units of census tracts. We could tighten the bounds further if we were willing to tolerate an increase in computational time and use smaller divisions, such as census blocks.

Young [1988] lists eight different measures of compactness, none of which is perfect. The most intuitive definition is to minimize the expected squared distance between pairs of people in a district.
This has the nice physical interpretation of being analogous to the moment of inertia (if the distance is Euclidean). Papers such as Galvao et al. [2006] minimize inertia based on travel-time distance (adjusted for roads, lakes, etc.) rather than absolute distance; but we consider only absolute distance, which is easier to find. Also, if district borders are affected by travel time, then it is possible to gerrymander by constructing strategic roads or bridges.

# Response to Prior Literature Commentary

Young [1988] finds two problems with moment of inertia as a measure of compactness:

- It gives good ratings to "misshapen districts so long as they meander within a confined area."
- There is a significant bias based on the area of the district (the moment of inertia uses squared distances).

In response to the first objection, we get districts that are not only contiguous but also convex (except where they meet nonconvex state lines). We draw districts where it must be possible to travel between any two points in a district in a straight line without leaving the district. This eliminates the first of Young's concerns, since the cited examples of misshapen districts, such as spirals, that cause moment of inertia to predict poorly all have the property of being nonconvex.

The concern about bias toward large-area districts is perhaps more serious. If the complaint is true, then the moment-of-inertia compactness criterion could lead to stretched or awkward urban districts so as to smooth out larger neighboring districts. In our experimental runs, this problem was not severe.

# Mathematical Interpretation

We describe the mathematics of the moment-of-inertia criterion and its objective function. We derive an important result: Any local minimum of our objective function should consist of a collection of convex districts (except where the state border is nonconvex).
+ +We use the average squared distance between two people in the same district as a measure of the misshapeness of that district. We assume a Euclidean metric. Let $\operatorname{E}[x]$ and $\operatorname{Var} x$ represent the expectation and variance of a random variable $x$ . Let the coordinates of two randomly chosen people in the district + +be $(x_{1},y_{1})$ and $(x_{2},y_{2})$ , and let the coordinates of an arbitrary randomly chosen person be $(x,y)$ . Then our measure is + +$$ +\operatorname {E} \left[ (x _ {1} - x _ {2}) ^ {2} + (y _ {1} - y _ {2}) ^ {2} \right] = 2 \operatorname {V a r} x + 2 \operatorname {V a r} y = 2 \operatorname {E} \left[ | (x, y) - (\overline {{x}}, \overline {{y}}) | ^ {2} \right], +$$ + +where $(\overline{x},\overline{y})$ is the center of mass of people in the district. Furthermore, this quantity is increased if $(\overline{x},\overline{y})$ is replaced by another point. + +Let there be $N$ people in the state to be divided into $k$ districts. Our objective is equivalent to partitioning the people into $k$ sets $S_{1},\ldots ,S_{k}$ of equal size, and picking points $p_1,\dots ,p_k$ to minimize + +$$ +\sum_ {i = 1} ^ {k} \sum_ {x \in S _ {i}} d (x, p _ {i}) ^ {2}, +$$ + +where $d$ is Euclidean distance. Taking the points $p_i$ to be fixed, we find that even if we allow ourselves to split a person between districts (which we do not do in the actual program), we can recast this as a linear programming problem. We let $m_{x,i}$ be the proportion of $x$ that is in district $i$ . We then have + +$$ +m _ {x, i} \geq 0; \tag {1} +$$ + +and for any $x$ + +$$ +\sum_ {i} m _ {x, i} = 1. \tag {2} +$$ + +The restriction of district sizes says that for any $i$ , we must have + +$$ +\sum_ {x} m _ {x, i} = \frac {N}{k}, \tag {3} +$$ + +where $N$ is the total population of the state. The objective function is + +$$ +\sum_ {x, i} m _ {x, i} d (x, p _ {i}) ^ {2}. 
$$

A global minimum exists since $0 \leq m_{x,i} \leq 1$, implying that our domain is compact. By linear-programming duality, at the point that minimizes the objective, the objective function can be written as a positive linear combination of the tightly satisfied constraints in the solution. For this linear combination, let $C_i$ be the coefficient of (3), $D_x$ the coefficient of (2), and $E_{x,i}$ the coefficient of (1). The $C_i$ and $D_x$ are arbitrary, but $E_{x,i} \geq 0$, with equality unless $m_{x,i} = 0$. Comparing the $m_{x,i}$ coefficients of our objective and this linear combination of constraints, we get that

$$
d(x, p_i)^2 = C_i + D_x + E_{x,i}.
$$

If $m_{x,i} > 0$, then $E_{x,i} = 0$, hence $E_{x,i} \leq E_{x,j}$ for all $j$. In particular, person $x$ can be only in a district $i$ for which $E_{x,i} = d(x,p_i)^2 - C_i - D_x$ is minimal, or equivalently, for which $d(x,p_i)^2 - C_i$ is minimal. Therefore, for the optimal solution, there are numbers $C_i$ such that the $i$th district is the set of people $\{x : d(x,p_j)^2 - C_j \text{ is minimized for } j = i\}$. Furthermore, these regions are uniquely defined up to exchanging people at the boundaries.

The next thing to note is that the $i$th district is defined by the inequalities

$$
d(x, p_i)^2 - C_i \leq d(x, p_j)^2 - C_j. \tag{4}
$$

Rotating and translating the problem so that $p_i = (0, 0)$ and $p_j = (a, 0)$, and letting $x = (x, y)$, (4) reduces to

$$
x^2 + y^2 - C_i \leq (x - a)^2 + y^2 - C_j,
$$

or

$$
2ax \leq a^2 + C_i - C_j.
$$

Therefore, each district is defined by a number of linear inequalities. Hence, we have shown that our measure has the nice property that the optimal districts with fixed $p_i$ are convex, so any local minimum of our objective function should consist of a partition into convex regions.
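The convexity result can be checked empirically. With fixed centers $p_i$ and offsets $C_i$ (the values below are arbitrary), assigning each point to the district minimizing $d(x, p_i)^2 - C_i$ yields the cells of a power diagram, and the midpoint of any two points in a cell stays in that cell.

```python
import numpy as np

rng = np.random.default_rng(0)
centers = rng.uniform(0.0, 1.0, size=(4, 2))  # arbitrary fixed centers p_i
C = rng.uniform(0.0, 0.1, size=4)             # arbitrary offsets C_i

def district(x):
    """Index i minimizing d(x, p_i)^2 - C_i (a power-diagram cell)."""
    return int(np.argmin(((x - centers) ** 2).sum(axis=1) - C))

pts = rng.uniform(0.0, 1.0, size=(300, 2))
labels = np.array([district(x) for x in pts])
# Convexity check: midpoints of same-cell pairs are assigned to the same cell.
for i in range(4):
    cell = pts[labels == i]
    for a in range(len(cell)):
        for b in range(a + 1, len(cell)):
            assert district((cell[a] + cell[b]) / 2) == i
```

The check succeeds for every pair because the difference $d(x,p_j)^2 - C_j - (d(x,p_i)^2 - C_i)$ is affine in $x$ (the quadratic terms cancel), exactly the half-plane argument of the derivation above.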
+ +# Computational Complexity + +It would be nice to compute the global optimum, but we probably cannot do so in general. Adapting the linear program above, we wish to minimize + +$$ +\sum_ {i} \operatorname {V a r} X _ {i} +$$ + +where $X_{i}$ is a randomly chosen person in district $i$ . This is equal to + +$$ +\sum_ {i} \left(\frac {k}{N} \sum_ {x} m _ {x, i} | \vec {x} | ^ {2} - \frac {k ^ {2}}{N ^ {2}} \left| \sum_ {x} m _ {x, i} \vec {x} \right| ^ {2}\right). +$$ + +Notice that the term + +$$ +\sum_ {i} \sum_ {x} m _ {x, i} | \vec {x} | ^ {2} = \sum_ {x} | \vec {x} | ^ {2} \sum_ {i} m _ {x, i} = \sum_ {x} | \vec {x} | ^ {2} +$$ + +is a constant. Hence, we wish to maximize the sum of the squares of the magnitudes of the centers of mass of the districts. This is an instance of quadratic programming where we try to maximize a positive semidefinite objective function. Since general quadratic programming is NP-hard, it seems likely that it is not easy to find a global maximum for our problem. On the other hand, we have + +shown that even local maxima have many properties that we want, e.g., convexity. Furthermore, these local maxima are significantly easier to find—e.g., from the quadratic programming formulation using the simplex method. + +Unfortunately, the quadratic programming approach leads to an optimization involving $kN$ variables, which can be quite large. Instead, we consider the formulation where to have a local maximum we need to pick $p_i$ and $C_i$ (thus defining our districts by " $x$ goes in the district $i$ for which $d(x, p_i)^2 - C_i$ is minimal") in such a way that the districts have the correct size and so that $p_i$ is the center of mass of the $i$ th district. This will imply that we have a local maximum of the quadratic program, since near our solution (up to first order) our objective function is + +$$ +C - \sum_ {x, i} m _ {x, i} d (x, p _ {i}) ^ {2}, \tag {5} +$$ + +for some constant $C$ . 
Since we have a global maximum of (5), moving a small amount in any direction within our constraint set does not decrease our objective, up to first order. Furthermore, since our objective is positive semidefinite, we are at a local maximum. This formulation is much better, since we are now left with only $3k$ degrees of freedom for $k$ districts.

# Comparison to Hess et al.

This procedure is very similar to that of Hess et al. [1965]. They too attempted to minimize the summed moments of inertia of districts. They also converged on their solution via an iterative technique that alternates between finding the best districts for given centers and finding the best centers for given districts. Our approach differs from theirs in two main points: the method of finding new districts for given centers, and the general philosophy toward achieving exact population equality. Both of these differences stem from our having finer data (Hess et al. used 650 enumeration districts for dividing Delaware into 35 state House and 17 state Senate seats, whereas we have 10 times as many census tracts) and more computational power. We cannot determine exactly what algorithm Hess et al. used to find optimal districts with given centers, other than that it was a "transportation algorithm"—possibly the linear programming formulation from earlier, or a min-cost-matching formulation. We have many more census tracts to work with and use an algorithm better adjusted to this problem. We also have different perspectives about how to even out population. Our fundamental units are sufficiently small that we can simply run our algorithm, adjusting district sizes in a natural way until all districts are within $2\%$ of the desired population. Hess et al. used a solution method that divided fundamental units of population between districts and later had to perform post-iteration checks and alterations so that units were no longer split while population equality still worked out.
This readjustment has the potential to increase moments of inertia and could theoretically lead to a failure to converge.

# Diminishing Halves Method

As an alternative against which to compare our Moment of Inertia algorithm, we use the Method of Diminishing Halves proposed by Forrest [1964].

# Definition

The Diminishing Halves Method splits the state into two nearly-equal-sized districts and recurses on each of the two halves. The idea is to split into relatively compact halves. Forrest does not specify exactly how the state must be split in two at each step, but rather argues that the splitting method could be adjusted based on preferences for keeping counties intact or other goals.

Suppose that we run a least-squares regression on the latitude and longitude coordinates of the state's census tracts. We would expect that dividing along this best-fit line would be a bad idea, since the line would probably cut major cities in half or run a long distance across the state. If we take a line perpendicular to the best-fit line, then hopefully we get the opposite properties. Therefore, we divide the state at each stage with a line perpendicular to the best-fit line of the census tracts. We are not aware of this specific criterion being used in previous literature.

# Mathematical Interpretation

The best-fit line, an approximation of the shape of the state, is of the form

$$
\left(X - \overline {{X}}\right) \sin \theta + \left(Y - \overline {{Y}}\right) \cos \theta = 0.
$$

The left-hand side is the distance of a point $(X,Y)$ from the line. We minimize

$$
\operatorname {E} \left(\left(X - \overline {{X}}\right) \sin \theta + \left(Y - \overline {{Y}}\right) \cos \theta\right) ^ {2} = \sin^ {2} \theta \operatorname {Var} X + 2 \sin \theta \cos \theta \operatorname {Cov} X Y + \cos^ {2} \theta \operatorname {Var} Y.
$$

This value is minimized when

$$
\sin \theta \cos \theta \, (\operatorname {Var} X - \operatorname {Var} Y) + (\cos^ {2} \theta - \sin^ {2} \theta) \operatorname {Cov} X Y = 0,
$$

or when

$$
\tan (2 \theta) = \frac {- 2 \operatorname {Cov} X Y}{\operatorname {Var} X - \operatorname {Var} Y}.
$$

To divide the population into $k$ districts, we divide the state by a line perpendicular to the best fit that splits the population in the ratio $\left\lfloor \frac{k}{2} \right\rfloor : \left\lceil \frac{k}{2} \right\rceil$ . When we must divide into an odd half and an even half, the ceiling half goes to the southern side.

# Experimental Setup

# Extraction of U.S. Census Data

We used a Perl script to extract data at the census tract level from the 2000 U.S. Census [U.S. Census Bureau 2001]. For New York, there are 6,661 tracts in the database, 6,398 of which have nonzero population. We extract the population along with the latitude and longitude of a point from each tract. The tracts have populations from 0 to 24,523, with a median of 2,518. We model the population density by assuming that the entire population of a tract is located at the coordinates given. We adjust for the fact that one degree of latitude and one degree of longitude represent different lengths on the Earth's surface by having our program internally multiply all longitudes by the cosine of the average latitude. We also extracted data for Arizona (small—8 congressional representatives), Illinois (medium—19 representatives), and Texas (large—32 representatives).

# Implementation in C++

We use a C++ program to compute an approximate local minimum of our Moment of Inertia objective function. We do so without splitting census tracts between districts, and this discretization requires a little lenience about the exact sizes of our districts (we allow them to vary from the mean by as much as 2%).
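The core alternation—assign tracts by minimizing $d(x,p_i)^2 - C_i$, nudge the offsets $C_i$ to balance district populations, then recenter each $p_i$ at its district's center of mass—can be sketched as follows. This is an illustrative Python simplification, not our C++ implementation; the data, step size, and iteration count are made up:

```python
# Local search for the Moment of Inertia objective: alternate between
# offset-based assignment, population balancing, and recentering.
# Illustrative sketch only, not the C++ program described in the text.
import math
import random

def district_of(x, centers, C):
    """District i minimizing |x - p_i|^2 - C_i."""
    return min(range(len(centers)),
               key=lambda i: (x[0] - centers[i][0]) ** 2
                           + (x[1] - centers[i][1]) ** 2 - C[i])

def solve(tracts, k, iters=200, step=0.05):
    """tracts: list of (x, y, population) triples."""
    random.seed(1)
    centers = [(t[0], t[1]) for t in random.sample(tracts, k)]
    C = [0.0] * k
    for _ in range(iters):
        labels = [district_of((x, y), centers, C) for x, y, _ in tracts]
        pops = [0.0] * k
        for l, (_, _, p) in zip(labels, tracts):
            pops[l] += p
        # Grow the smallest district and shrink the largest, as in the text.
        C[min(range(k), key=lambda i: pops[i])] += step
        C[max(range(k), key=lambda i: pops[i])] -= step
        # Recenter each district at its population-weighted centroid.
        for i in range(k):
            members = [(x, y, p) for (x, y, p), l in zip(tracts, labels) if l == i]
            if members:
                w = sum(p for _, _, p in members)
                centers[i] = (sum(x * p for x, _, p in members) / w,
                              sum(y * p for _, y, p in members) / w)
    return labels, centers, C

# Two well-separated groups of equal-population tracts end up in
# different districts of equal population.
labels, centers, C = solve([(0, 0, 1), (1, 0, 1), (10, 0, 1), (11, 0, 1)], 2)
print(labels)
```

A production version would also halve and reverse the step size on overshoot, as described in the text.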
We attempt to converge to a local optimum via two steps. First we pick guesses for the points $p_i$ . We then numerically solve for the $C_i$ that make the district sizes correct, giving us some potential districts. We allow a variation of $2\%$ from the mean, beginning with a $20\%$ allowable deviation in the first few iterations and tightening the constraint on subsequent iterations. We then pick the centers of mass of the new districts as new values of $p_i$ , and repeat for as long as necessary. Each step of this procedure decreases the summed moment of inertia $\sum_{x,i} m_{x,i} \, d(x, p_i)^2$ , because our two steps consist of finding the optimal districts for given $p_i$ and finding the optimal $p_i$ for given districts. We find the correct values of $C_i$ by alternately increasing the smallest district and decreasing the largest one. When this adjustment overshoots the necessary value, we halve the step size for that district, and when it overshoots by too much, we reverse the change. For New York, convergence to the final districts took a couple of minutes.

After determining our districts, we output them to a PostScript file that displays the census tracts color-coded by district, so that one can visually assess compactness. Finally, we computed some of the compactness measures discussed in Young [1988].

We also created a C++ program to implement the Diminishing Halves Method.

# Measures of Compactness

We need an objective method for determining how successful our program is at creating compact districts. Young [1988] gives several measures for the compactness of a region. We use some of these to compare our districts with those produced by other methods. Since our algorithms generate convex districts except where the state border is nonconvex, we compute all of these measures on the convex hull of our districts, so that the results are not unfairly affected by awkwardly shaped state borders.

# Definitions

Inverse Roeck test.
Let $C$ be the smallest circle containing the region $R$ . We measure $\operatorname{Area}(C) / \operatorname{Area}(R)$ , a number larger than 1, with smaller numbers corresponding to more compact regions. This is the reciprocal of the Roeck test as phrased in Young; we have inverted it so that all of our measures in this section have smaller numbers corresponding to more compact regions.

Length-Width test. Inscribe the region in the rectangle with largest length-to-width ratio. This ratio is greater than 1, with numbers closer to 1 corresponding to more compact regions.

Schwartzberg test. We compute the perimeter of the region divided by the square root of $4\pi$ times its area. By the isoperimetric inequality, this quantity is at least 1, with equality if and only if the region is a disk. This test considers a region compact if the value is close to 1.

# Calculation in Mathematica

We used the tests above to check compactness of the proposed districts. We implemented the tests in Mathematica with the aid of the Convex Hull and Polygon Area notebooks [Weisstein 2004; 2006].

For the Roeck test, we compute the area of the polygon by triangulating it. We find the circumradius by noting that if every triple of vertices can be inscribed in a disk of radius $R$ , then the entire polygon fits into some disk of radius $R$ . This is because a set of points all fit in a disk of radius $R$ centered at $p$ if and only if the disks of radius $R$ about these points intersect at $p$ . Let $D_i$ be the disk of radius $R$ centered at the $i$ th point. If every triple of points can be covered by a single disk of radius $R$ , then any three of the $D_i$ intersect. Therefore, by Helly's theorem [Weisstein 1999], all the disks intersect at some point, and hence the disk of radius $R$ at this point covers the entire polygon. Hence, for every triple of points, we need the radius of the smallest disk containing all three.
This is either half the length of the longest side, if the triangle formed is obtuse, or the circumradius otherwise.

For the Length-Width test, we pick potential orientations for our rectangle in increments of $\pi / 100$ radians. At each increment, we project our points parallel and perpendicular to a line with that orientation. The extremal projections determine the bounding sides of our rectangle. We choose the value from the orientation that yields the largest length-to-width ratio.

Calculating the Schwartzberg test is straightforward.

# Results for New York

Figure 1 presents maps of the Moment of Inertia Method districts, the districts from the Diminishing Halves Method, and the actual current congressional districts of New York.

Our program's raw output plots the latitude and longitude coordinates of each census tract, using a different color and symbol for each district. The state border and black division lines are added separately. There appears to be a slight color bleed across the borderlines near crowded cities, but this is due to the plotting symbols having nonzero width. Zooming in on our plot while the data are still in vector form (before rasterization) shows that our districts are indeed convex.

# Discussion of Districts

Both methods produce more compact-looking results than the current districts. Some current New York districts legitimately try to respect county lines, but there are a few egregious offenders, such as Districts 2, 22, and 28, where the boundaries conform to neither county lines nor good compactness. The current District 22 has a long arm that connects Binghamton and Ithaca, and the current District 28 hugs the border of Lake Ontario to connect Rochester with Niagara Falls and the northern part of Buffalo. Both of our methods allow Ithaca and Binghamton to be in the same district, but without stretching the district to the land west of Poughkeepsie. Buffalo and Rochester are kept separate in both of our models.
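The triple-based circumradius computation and two of the measures above can be sketched in Python; the unit square serves as a check (its inverse Roeck value is $\pi/2$ and its Schwartzberg value is $2/\sqrt{\pi}$), and the $O(n^3)$ loop over vertex triples is fine at this problem size:

```python
# Inverse Roeck and Schwartzberg tests for a polygon, using the
# Helly-style triple argument from the text to find the enclosing radius.
# Illustrative sketch; our actual computations were done in Mathematica.
import itertools
import math

def shoelace_area(poly):
    n = len(poly)
    return abs(sum(poly[i][0] * poly[(i + 1) % n][1]
                   - poly[(i + 1) % n][0] * poly[i][1]
                   for i in range(n))) / 2.0

def triple_radius(a, b, c):
    """Radius of the smallest disk containing a, b, c: half the longest
    side for a right/obtuse/degenerate triple, the circumradius otherwise."""
    s = sorted(math.dist(p, q) for p, q in ((a, b), (b, c), (c, a)))
    if s[2] ** 2 >= s[0] ** 2 + s[1] ** 2:
        return s[2] / 2.0
    return s[0] * s[1] * s[2] / (4.0 * shoelace_area([a, b, c]))  # abc/(4K)

def enclosing_radius(poly):
    """By Helly's theorem, a disk radius covering every vertex triple
    covers the whole polygon; the largest triple radius is needed."""
    return max(triple_radius(*t) for t in itertools.combinations(poly, 3))

def inverse_roeck(poly):
    R = enclosing_radius(poly)
    return math.pi * R * R / shoelace_area(poly)

def schwartzberg(poly):
    n = len(poly)
    perim = sum(math.dist(poly[i], poly[(i + 1) % n]) for i in range(n))
    return perim / math.sqrt(4.0 * math.pi * shoelace_area(poly))

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(inverse_roeck(square))   # pi/2 ~ 1.571
print(schwartzberg(square))    # 2/sqrt(pi) ~ 1.128
```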
Our data do not contain information about county lines. However, both of our methods do a good job of keeping the major cities of New York intact. Buffalo and Rochester are divided into at most two districts in our methods, instead of three under the current districting. The Diminishing Halves Method has a cleaner division for Rochester, but the Moment of Inertia Method handles Syracuse much better.

Both methods produce districts with linear boundaries. The Diminishing Halves Method has a tendency to create more sharp corners and elongated districts, whereas the Moment of Inertia Method produces rounder districts. The Diminishing Halves Method tends to produce regions that are almost all triangles and quadrilaterals. Where three districts meet under the Diminishing Halves Method, the odds are that one of the angles is a $180^{\circ}$ angle. The Moment of Inertia Method does a better job of spreading out the angles of three intersecting regions and thus results in more pleasant district shapes.

![](images/fe12f91c02f7b9f8b5f869da6f3aad2a2b232e78be7124fbdf0ced69eb311a61.jpg)
a. Current (adapted from U.S. Department of the Interior [2007]).

![](images/1c33c977a2d7507e0befa5242f50e4f2b3a3510684163bed499a329e8ed557f4.jpg)
b. Moment of Inertia Method.
Figure 1. New York districts.

![](images/dcae6b5f6265e89d7ba7587d9daf67da515ec336542ff1ed0e8f02937145e37b.jpg)
c. Diminishing Halves Method.
Figure 1 (continued). New York districts.

That Greater New York City contains roughly one-half of the state's population is convenient for the Diminishing Halves Method. However, that algorithm does not deal very well with bodies of water. This shortcoming leads to the creation of one noncontiguous district (marked as "Non-contiguous" in Long Island Sound in Figure 1c). Overall, the shapes given by the Moment of Inertia Method look rounder and more appealing.

# Compactness Measures

Table 1 lists results of the compactness tests.
Smaller numbers correspond to more compact regions.

Table 1.
Mean and standard deviation for compactness measures of districts; smaller is better.

| Districts | Inverse Roeck Test | Schwartzberg Test | Length-Width Test |
|---|---|---|---|
| NY (Moment of Inertia) | 2.29 ± 0.66 | 1.64 ± 0.62 | 1.91 ± 0.61 |
| NY (Diminishing Halves) | 2.50 ± 0.87 | 1.74 ± 0.69 | 1.91 ± 0.77 |
+ +According to these measures, the Moment of Inertia Method does marginally better than the Diminishing Halves Method. The diminishing halves numbers appear to be larger by about one-seventh of a standard deviation. This probably is caused by a few of the more misshapen districts. + +All three measures are calibrated so that the circle gives the perfect measurement of 1. Roughly speaking, the Roeck test measures area density, the Length-Width test measures skew in the most egregious direction, and the Schwartzberg test measures overall skewness. Each measure tells us approximately the same thing: the Moment of Inertia Method performs a little bit better than the Diminishing Halves Method. + +It would be desirable to compare the numbers in Table 1 to the current districts, but there are two reasons why we cannot do this: + +- The data that we used do not offer congressional district identification at the census-tract level. To compute compactness, we would need to choose a finer population unit, hence the numbers would not be directly comparable to those in Table 1. +- All of our districts in both methods are convex except for where the state border is nonconvex. This is not true for the current districts, and it is unclear how useful the compactness numbers are at comparing convex districts to nonconvex districts. + +# Results for Other States + +To test how well our algorithms perform on states with different sizes, we also computed districts for Arizona (small—8 districts), Illinois (medium—19 districts), and Texas (large—32 districts). [EDITOR'S NOTE: We omit the corresponding figures and specific analysis.] + +# Compactness Measures + +The Diminishing Halves Method produces consistently worse results by all three measures. This fact suggests (and the maps seem to confirm) that this fault is due largely to producing a small number of very elongated districts. + +Given the evidence, we recommend the Moment of Inertia Method over the Diminishing Halves Method. 
+ +# Sensitivity to Parameters + +To test for robustness, we tweak some of the parameters to the Moment of Inertia Model and test variants of the Diminishing Halves Method. + +# Initial Condition + +We ran the Moment of Inertia Model on each of the states with three different random seeds. The results were almost identical each time. + +# Population Equality Criterion + +We ran the New York case of the Moment of Inertia Model using a $5\%$ allowable deviation from the mean in district population instead of a $2\%$ allowable deviation. We observed no significant change in the results. + +# Variants of the Diminishing Halves Method + +We modified our criterion for determining the dividing line in the Diminishing Halves Method to use a mass-weighted best-fit line, weighted to account for different census tracts containing different numbers of people. We ran this modified method on New York, Arizona, and Illinois. We also tried a modification of the Diminishing Halves Method on the New York case that draws vertical and horizontal (longitude and latitude) lines. + +In all these modified cases, results were visibly much worse. The modified methods tended to split cities into more districts than the original method. + +# Strengths and Weaknesses + +# Strengths: + +- Emergent behavior from simple criteria. We specify criteria only for population equality and compactness. We satisfy contiguity and city integrity without explicitly trying to do so. +- Simple, intuitive measure of complexity of districts. In the Moment of Inertia Method, our measure of the noncompactness of a district gives a model that is easy to understand and does not use arbitrary constants that could be tuned to gerrymander districts. +- Results in convex districts. Both models produce districts guaranteed to be convex, aside from where the state border is nonconvex. This provides a fairly strong argument for the compactness of the resulting districts. +- Easily computable. 
Our final districting can be computed in a few minutes. +- Nice-looking final districts. The districts that we get appear very nice. + +# Weaknesses: + +- No theoretical bounds on convergence time. We could not prove that our algorithm converges in reasonable time, although it has done so in practice. + +- Potential for elongated smaller districts. Some of the smaller districts produced by the Moment of Inertia Method may be stretched to accommodate larger districts. The Diminishing Halves Method may not correctly divide regions such as discs or squares that are not described well by a best-fit line. +- Does not respect natural or cultural boundaries. Our algorithms do not take natural or cultural boundaries into account. Doing so would have the advantage of not having district boundaries cross rivers but could place pressure on making districts noncompact and allow for loopholes that could be exploited by malicious politicians. +- Does not necessarily find the global optimum. Our Moment of Inertia algorithm finds only a local minimum. This leads potentially to some non-determinism in the resulting districts, which could allow gerrymandering; but the amount is small. +- Can only draw new districts, not determine if existing districts are gerrymandered. Cirincione et al. [2000] give a pseudoconfidence interval analysis to assess whether South Carolina's 1990 redistricting had been gerrymandered. We do not perform such analysis here. + +# Conclusion + +We formulated and tested two methods for assigning congressional districts with a computer. + +The Moment of Inertia Method searches for the answer that satisfies the intuitive criterion that people within the same district should live as close to each other as possible. We implemented this method and obtain results that would not have been computationally feasible in the 1960s and 1970s. + +The Diminishing Halves Method recursively divides the population in half, which is very simple to explain to voters. 
To avoid elongated districts and to cut along sparsely populated areas rather than densely populated regions, our implementation chooses a dividing line perpendicular to the statistical best-fit line through the latitude and longitude coordinates of the census tracts. + +We have some concrete recommendations for state officials: + +- Processing data at the census tract level or finer is computationally feasible. It would not be unreasonable to process at the block group level if the extra resolution would be beneficial. +- Districts should be convex. Most models in the literature check only for contiguity. However, even severely gerrymandered districts such as Arizona District 2 satisfy contiguity. Requiring all districts to be convex greatly reduces the potential for political abuse. + +- City boundaries and contiguity of districts should be emergent properties, not explicit considerations. Neither of our methods explicitly requires districts to be contiguous, yet the districts they generate are not only contiguous but convex. Neither of our methods attempts to preserve city or county boundaries, yet the Moment of Inertia Method does a good job of keeping cities together whenever reasonable. It is probably sensible for smaller states with a high ratio of counties to congressional representatives to be concerned with county boundaries; but for New York, where there are comparatively few counties, looking at city integrity instead of county integrity is more reasonable. +- A good algorithm can handle states of different sizes. Algorithms that perform well on large states might not yield good results for a small state with only one or two large cities. We tested our algorithms on states of different sizes; the Moment of Inertia Method behaves well in all cases. +- We recommend a moment of inertia compactness criterion. 
The Moment of Inertia Method, compared to the Diminishing Halves Method,

- consistently produces more visually appealing districts,
- has better results on the compactness tests, and
- does a better job of respecting city boundaries.

# References

Altman, M. 1997. The computational complexity of automated redistricting: Is automation the answer? *Rutgers Computer and Technology Law Journal* 23: 81-136.
Bacao, F., V. Lobo, and M. Painho. 2005. Applying genetic algorithms to zone design. *Soft Computing* 9: 341-348.
Chou, C.I., and S.P. Li. 2006. Taming the gerrymander—Statistical physics approach to political districting problem. *Physica A* 369: 799-808.
Cirincione, C., T.A. Darling, and T.G. O'Rourke. 2000. Assessing South Carolina's 1990s Congressional redistricting. *Political Geography* 19: 189-211.
Forrest, E. 1964. Apportionment by computer. *American Behavioral Scientist* 8 (4) (December 1964): 23, 35.
Galvao, L.C., A.G.N. Novaes, J.E.S. de Cursi, and J.C. Souza. 2006. A multiplicatively-weighted Voronoi diagram approach to logistics districting. *Computers and Operations Research* 33: 93-114.
Garfinkel, R.S., and G.L. Nemhauser. 1970. Optimal political districting by implicit enumeration techniques. *Management Science* 16 (4): B495-B508.
Hess, S.W., J.B. Weaver, H.J. Siegfeldt, J.N. Whelan, and P.A. Zitlau. 1965. Nonpartisan political redistricting by computer. *Operations Research* 13 (6): 998-1006.
Mehrotra, Anuj, Ellis L. Johnson, and George L. Nemhauser. 1998. An optimization based heuristic for political districting. *Management Science* 44 (8): 1100-1114.
Nagel, S.S. 1965. Simplified bipartisan computer redistricting. *Stanford Law Review* 17: 863-899.
U.S. Census Bureau. 2001. Census 2000 Summary File 1 [New York]. http://www2.census.gov/census_2000/datasets/Summary_File_1/New_York/.
U.S. Department of the Interior. 2007. National Atlas of the United States. Printable Maps. Congressional Districts—110th Congress.
http://nationalatlas.gov/printable/congress.html.
Weisstein, Eric W. 1999. Helly's theorem. From MathWorld—A Wolfram Web Resource. http://mathworld.wolfram.com/HellysTheorem.html.
______ 2004. Polygon area. From MathWorld—A Wolfram Web Resource. http://mathworld.wolfram.com/PolygonArea.html.
______ 2006. Convex hull. From MathWorld—A Wolfram Web Resource. http://mathworld.wolfram.com/ConvexHull.html.
Williams, J.C. 1995. Political redistricting—A review. *Papers in Regional Science* 74: 13-39.
Young, H.P. 1988. Measuring the compactness of legislative districts. *Legislative Studies Quarterly* 13: 105-115.

# Why Weight? A Cluster-Theoretic Approach to Political Districting

Sam Whittle
Wesley Essig
Nathaniel S. Bottman
University of Washington
Seattle, WA

Advisor: Anne Greenbaum

# Summary

Political districting has been a contentious issue in American politics over the last two centuries. Since the landmark case of *Baker v. Carr* (1962), in which the U.S. Supreme Court ruled that the constitutionality of a state's legislated districting is within the jurisdiction of a federal court, academics have attempted to produce a rigorous system for districting a state.

We propose both a modified form of classical K-means clustering and the shortest-splitline algorithm to accomplish impartial redistricting. We apply our methods to redistricting New York and, as further examples, Texas and Colorado. Both methods use only population-density data and state boundaries as inputs and run in a feasible amount of time.

Our criteria for successful redistricting include contiguity, compactness, and sufficiently uniform population.

The K-means method produces districts similar to convex polygons, and the splitline method guarantees that the resulting districts have piecewise linear boundaries. The K-means method has the advantage of allowing seeding of the district centers.
The centers of the generated districts then roughly correlate to the existing districts, by proper seeding, but the resulting boundaries are vastly simpler.

# Introduction

The writers of the Constitution created the House of Representatives to be the branch of government most responsive to the people. The reality is just the opposite. Though representatives are elected every two years, almost 400 of the 435 seats are not effectively contestable, because of gerrymandering. With the immensely detailed data and nearly unlimited computing power available to politicians today, gerrymandering has been elevated to an art. With only the requirements that districts be connected and all have equal population, it is possible to pinpoint candidates and place them in a different district than their neighbors [Toobin 2003].

Though undemocratic, gerrymandering is nearly always legal (see, for instance, Backstrom [1986]) and has been used to obtain striking results. In 2002, only four incumbent representatives lost their bid for re-election—the lowest total ever [Toobin 2003]. We argue that any attempt to restructure legislative districts fairly must ignore the human factors that overwhelmingly determine current redistricting.

Defining a measure of compactness is essential to ensure fair districts. Both methods that we offer produce districts that at first glance are clearly simpler than the existing ones. We use the centers of the existing districts as seeds for a clustering algorithm. Thus, the new districts have some correlation to the existing districts, but their boundaries are determined in a fair manner. The core of many districts will be roughly the same, while the boundaries will be dramatically simpler. This effectively counteracts the effects of gerrymandering, without being overly difficult to implement.
+ +# Plan of Attack + +Our goal is an algorithm to divide a region into $k$ districts that satisfy some heuristic definition of fairness. To accomplish this, we must do the following: + +- Define fairness and simplicity. +- State assumptions and constraints. +- Define metrics for comparing algorithms. + +# Defining Simplicity + +We say that district $A$ is simpler than district $B$ if $A$ is contiguous and more compact than $B$ . + +- Contiguity. A district is contiguous if it is arcwise-connected; that is, if one can travel from any point $a$ to any other point $b$ in $A$ while remaining entirely within $A$ . If $A$ contains regions separated by bodies of water, $A$ is contiguous if all regions are connected by water and each region is arcwise-connected. + +- Compactness. Intuitively, a district is compact if it does not meander excessively. This is a hard concept to formalize; many authors give only a hasty definition, and some even argue that compactness is ambiguous to the point of being irrelevant. Nonetheless, we attempt a suitable definition. + +# Towards a Suitable Definition of Compactness + +Young [1988] gives compelling reasons for abandoning all previous definitions of compactness (see the Appendix). But Young does not consider the following adjusted version of the Schwartzberg Test, which is alluded to in Garfinkel and Nemhauser [1970]: + +Definition 1 District $A$ is more compact than district $B$ if + +$$ +\frac {4 \pi \mathrm {A r e a} _ {A}}{(\mathrm {P e r i m e t e r} _ {A}) ^ {2}} > \frac {4 \pi \mathrm {A r e a} _ {B}}{(\mathrm {P e r i m e t e r} _ {B}) ^ {2}}. +$$ + +We call the quantity $4\pi$ Area / Perimeter² the compactness quotient. + +For a circle of radius $r$ , this ratio is equal to 1. It is well-known that the shape with the largest ratio of area to squared perimeter is the circle (see, for instance, Folland [2002]) so the compactness quotient is between 0 and 1. 
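Definition 1 is easy to evaluate for polygonal approximations of district boundaries; a minimal Python sketch using the standard shoelace area formula (the sample polygons are illustrative, not actual districts):

```python
# Compactness quotient 4*pi*Area / Perimeter^2 for a simple closed polygon.
# Sample polygons are illustrative, not actual district boundaries.
import math

def compactness_quotient(poly):
    n = len(poly)
    # Shoelace formula for the polygon's area (indices wrap around).
    area = abs(sum(poly[i][0] * poly[(i + 1) % n][1]
                   - poly[(i + 1) % n][0] * poly[i][1]
                   for i in range(n))) / 2.0
    perim = sum(math.dist(poly[i], poly[(i + 1) % n]) for i in range(n))
    return 4.0 * math.pi * area / perim ** 2

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(compactness_quotient(square))  # pi/4 ~ 0.785

# A regular 100-gon approximates a circle, so its quotient is near 1.
ngon = [(math.cos(2 * math.pi * k / 100), math.sin(2 * math.pi * k / 100))
        for k in range(100)]
print(compactness_quotient(ngon))    # ~ 0.9997
```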
As seen in Figure 1, a compactness quotient of 0.13 is visually quite bad. Using the fact given in Bourke [1988] that the area of a non-self-intersecting closed $N$ -gon, with the $k$ th vertex in counterclockwise order equal to $(x_k, y_k)$ and indices taken modulo $N$ (so that $(x_{N+1}, y_{N+1}) = (x_1, y_1)$ ), is equal to

$$
\frac {1}{2} \sum_ {i = 1} ^ {N} \left(x _ {i} y _ {i + 1} - x _ {i + 1} y _ {i}\right),
$$

we calculated the compactness quotients of several actual districts by approximating their boundaries by piecewise linear segments. Two of New York's more sprawling districts have compactness quotients 0.097 and 0.101 (Figure 2)—even worse than the gerrymander in Figure 1! The two most compact districts in New York, the 26th and 21st, have compactness quotients 0.406 and 0.498.

We decide that the mean for any state should be at least 0.6, so that the average district would be better than the best current districts in New York. Furthermore, we insist that 0.25 should be more than 2 standard deviations below the mean. It is not possible to require that all districts have quotient greater than 0.25, since several districts inevitably have most of their border coincide with the state border.

# Defining Fairness

Almost all unfairness occurs when political and social measures factor into redistricting decisions. Concentrating supporting voters in a single district,

![](images/75ddb263b9b9128ae78ecff681adee75c984621105dcd89e37da14134b2df019.jpg)
Figure 1. The compactness quotients of the circle, square, and gerrymander are $1, \pi /4 \approx 0.79$ , and $23\pi /576 \approx 0.13$ , respectively.

![](images/d95523454751ec46f1959aff934db4b2bbffab254ee2ab2295eb62da67225fd2.jpg)
Figure 2. Current New York districts 8 and 28 (in dark shade), with compactness quotients of 0.097 and 0.101. Source: U.S. Department of the Interior [2007].
diluting opposing voters over several districts, placing two incumbents in the same district and forcing them to run against each other, and isolating minorities (see Toobin [2003] and Hayes [1996]) are all the result of districting being controlled by those who attempt to skew voting patterns. In general:

- Unfair districting stems from either human biases or poorly designed algorithms.

Our computer simulations use only population density and the boundary of the state, so the determination of districts is completely unbiased. While a district may be unfair on a local scale, in that it divides up a community with a common interest—for instance, a community of apple-growers may be split between two districts—on the national scale, such imbalances will even out. Because of this, there will be no pathological examples of disproportionate representation.

# Applying the Theory of Data Clustering

Data clustering is the classification of observations (or objects) into groups. The main benefits of a cluster-theoretic algorithm are:

- Data clustering often reveals an internal structure that may not have been initially apparent.
- It is easier to work with a small number of clusters than with a large number of raw data points.

The philosophy of data clustering is to divide data into a (not necessarily fixed) number of clusters, with the elements in a cluster somehow similar. Data clustering is often applied to problems that deal with a large number of variables, and it is usually very difficult to determine the "proper" way to cluster data [Afifi and Clark 1984]. We apply data clustering in the following way:

- Split the state into small, discrete units. Our units correspond to geographic locations of interpolated census population measurements [Center for International Earth Science Information Network 2007].
- Determine some partition of these units into clusters. Note that the only variables present are the location and population of each unit.
After defining a method for ordering the preference of cluster arrays, we might suppose we are done with the problem: All that is left is to look at all possible cluster arrays and choose the best one. However, this turns out not to be feasible. Abramowitz and Stegun [1968] prove that the number of ways of sorting $n$ observations into $m$ groups is a Stirling number of the second kind:

$$
S _ {m} ^ {(n)} = \frac {1}{m !} \sum_ {k = 0} ^ {m} (- 1) ^ {m - k} {\binom {m} {k}} k ^ {n}.
$$

For instance, there are more than $10^{15}$ ways to sort 25 objects into 5 groups. We need an algorithmic process to determine an appropriate array of clusters.

# Cluster-Theoretic Districting

# The K-means Algorithm

# Standard Algorithm

The K-means algorithm is an iterative method for data clustering [Shapiro and Stockman 2001]. Let $D = \{\mathbf{x}_j\}_{j=1}^N \subset \mathbb{R}^n$ be the data to be clustered, and let $S = \{\mathbf{s}_j\}_{j=1}^K$ be a set of seeds. Suppose that we desire to partition $D$ into $K$ clusters; let the $i$th cluster be denoted by $C_i$. Associate to the $i$th cluster a geographical center, denoted by $\mathbf{c}_i$. Given a distance function $f: \mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R}$, the K-means algorithm proceeds as follows.

- Initialization: For all $C_i$, let $\mathbf{c}_i = \mathbf{s}_i$.
- Iteration:
  - Assign points to clusters: For all $\mathbf{x} \in D$, associate $\mathbf{x}$ to a cluster $C_i$ whose center $\mathbf{c}_i$ minimizes $f(\mathbf{x}, \mathbf{c}_i)$.
  - Update cluster centers: Redefine $\mathbf{c}_i = \left(\sum_{\mathbf{x} \in C_i} \mathbf{x}\right) / \left(\sum_{\mathbf{x} \in C_i} 1\right)$.
- Repetition: If updating cluster centers changes at least one cluster center, repeat the iteration step. Otherwise, stop.

# Weighted Algorithm

To generate districts of appropriate population, we add a weighting system to the standard algorithm.
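As a baseline, the standard iteration above can be sketched as follows; this is an illustrative implementation for 2-D points with Euclidean distance, not the authors' code:

```python
import math

def kmeans(data, seeds, max_iter=100):
    """Standard K-means: assign each point to its nearest center,
    move each center to the mean of its cluster, and repeat until
    no center moves."""
    centers = [tuple(map(float, s)) for s in seeds]
    for _ in range(max_iter):
        clusters = [[] for _ in centers]
        for x in data:
            nearest = min(range(len(centers)),
                          key=lambda i: math.dist(x, centers[i]))
            clusters[nearest].append(x)
        new_centers = [
            tuple(sum(coord) / len(pts) for coord in zip(*pts)) if pts
            else centers[i]  # keep an empty cluster's center in place
            for i, pts in enumerate(clusters)
        ]
        if new_centers == centers:
            break
        centers = new_centers
    return centers, clusters

# two obvious clusters; seeds at rough guesses
centers, clusters = kmeans([(0, 0), (0, 1), (10, 0), (10, 1)],
                           seeds=[(0, 0), (10, 0)])
print(centers)  # [(0.0, 0.5), (10.0, 0.5)]
```

The weighted variant described next changes only the update steps, so this skeleton carries over.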
Let each cluster correspond to a legislative district. Let $D = \{\mathbf{x}_j\}_{j=1}^N \subset \mathbb{R}^2$ be the set of census coordinates. Thus, $\mathbf{x}_i \in D$ corresponds to the position of a population measurement. Define a population function $p: D \to \mathbb{R}$ such that $p_i$ is the population at the coordinates specified by $\mathbf{x}_i$. A cluster $C_j$ is defined by its set of points in $\mathbb{R}^2$, its center $\mathbf{c}_j \in \mathbb{R}^2$, and some weight $\alpha_j$. Define $f$ to be the Euclidean distance function in $\mathbb{R}^2$. Our weighted K-means algorithm proceeds as follows:

- Initialization: Using the standard K-means algorithm, assign points to clusters and centers to appropriate positions.
- Iteration:
  - Assign points to clusters: For all $\mathbf{x} \in D$, associate $\mathbf{x}$ to a cluster $C_i$ whose center $\mathbf{c}_i$ minimizes $f(\mathbf{x}, \mathbf{c}_i)$.
  - Update cluster centers: Redefine $\mathbf{c}_i = \left(\sum_{\mathbf{x} \in C_i} p_i \mathbf{x}\right) / \left(\sum_{\mathbf{x} \in C_i} p_i\right)$.
  - Update cluster weights: Redefine

$$
\alpha_ {j} = g \left(\sum_ {\mathbf {x} _ {i} \in C _ {j}} p _ {i} ,\ \alpha_ {j}\right),
$$

where $g$ is defined below.

- Repetition: If the properties of the clusters are within tolerance, stop. Otherwise, repeat the iteration step.

By adjusting the weights, we control the growth or decay of the clusters. If the weight of a cluster increases, data points are more likely to be grouped in other clusters. Similarly, decreasing the weight helps to increase the population of a cluster. Thus, the weight function $g: \mathbb{R} \times \mathbb{R} \to \mathbb{R}$ is crucial in the performance of the algorithm.
We define:

$$
g (p, w) = w \sqrt {\frac {i}{i _ {0}}} + w \cdot \frac {p}{p _ {0}} \cdot \sqrt {1 - \frac {i}{i _ {0}}},
$$

where $i$ is the current iteration, $i_0$ is the maximum number of iterations, and $p_0$ is the desired population for each cluster. Towards the beginning of the algorithm, $i / i_0$ is small, causing the second term to dominate the weight function. As $i$ increases, the weight fluctuates less because the first term begins to dominate. This formula enables the weights to change rapidly at the beginning of the iterative process, causing the clusters to vary greatly between iterations. However, by the end of the algorithm, the weights do not change as readily, allowing stabilization over an optimal clustering. This is somewhat similar to simulated annealing, where initial negative actions allow the algorithm to escape local optima and the probability that a negative action is taken decreases over time.

# Splitline Algorithm [Smith 2007]

# Method

The idea behind the shortest splitline algorithm is quite simple:

- Start with the number of districts for the state. Divide that number in two as evenly as possible, using integers.
- Find the shortest line that divides the state into two parts such that the ratio of their populations is the same as the ratio determined in the previous step.
- Repeat this process recursively on the subdivided parts until the number of parts is the same as the number of districts.

At every step, the division is just a line, so the resulting districts have piecewise-linear boundaries. Using the shortest line ensures that the districts will have a good compactness quotient.

# Demonstration

Figure 3 is a demonstration of the splitline algorithm creating 5 districts from 15 people; there need to be 3 people in each district. The ratio 3:2 is the most balanced integer ratio that 5 can be divided into.
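The balanced integer split used at each level of the recursion can be sketched as follows (the function names are ours):

```python
def split_ratio(n):
    """Most balanced integer split of a district count n."""
    return ((n + 1) // 2, n // 2)

def split_counts(n):
    """Full recursion tree of district counts; each leaf is one district."""
    if n == 1:
        return 1
    a, b = split_ratio(n)
    return (split_counts(a), split_counts(b))

print(split_ratio(5))   # (3, 2): a 15-person state is first cut 9 : 6
print(split_counts(5))  # (((1, 1), 1), (1, 1))
```

Each internal node of the tree corresponds to one shortest-line cut, with the populations on either side in the ratio returned by `split_ratio`.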
At step 1, the algorithm divides the state into two regions with 9 and 6 people, the correct ratios for 3 and 2 districts. At step 2, it acts recursively on the 2 subdivisions. Thus, the region with 6 people is divided into regions with 3 each, with no more subdivision needed. The other region is divided into regions with 6 and 3 people. At the third and final step, the last region is split in two and the process is complete. By using the shortest line at each step, none of the shapes ends up with an unsatisfactory compactness quotient.

![](images/7a9c57d8ae13c1ef0d1f09a6f94eee9de66036d68cb00671324ad9b5a2446efc.jpg)
Figure 3. An illustration of the splitline method.

![](images/c539d030b0673ee6dba907aff9497ea70ffe997b274aab20904dec3925fa98c7.jpg)
Figure 4. A proposed redistricting of New York, using the K-means algorithm.

# Districting of New York State

# K-means Algorithm

The results given by the K-means algorithm in Figure 4 are generally quite good. Traditionally, when applying cluster-theoretic algorithms to redistricting, it is common practice to split off any regions with particularly high population density and apply the algorithm to those regions separately (see, for instance, Garfinkel and Nemhauser [1970]). This was not needed for the K-means algorithm: Even though the maximum population density of New York City is roughly 2,000 times the mean population density of the state of New York, the K-means algorithm produced results within our tolerance levels.

To confirm that the weighted K-means algorithm is an effective aid for determining districts, we also used the algorithm to redistrict Texas. Texas is a good choice because it is large and contains a variety of population densities.

The K-means algorithm worked well overall, with only a few districts outside our target tolerance.
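The qualitative behavior of the weight schedule $g$ used in the weighted K-means runs can be checked numerically. In this sketch $i$, $i_0$, and $p_0$ are passed as explicit parameters, and the specific values are illustrative:

```python
import math

def g(p, w, i, i0, p0):
    """Weight schedule: early on (i small) the weight tracks the cluster's
    population ratio p/p0; late in the run the weight freezes."""
    return w * math.sqrt(i / i0) + w * (p / p0) * math.sqrt(1 - i / i0)

# a cluster at half its target population (p = 500, p0 = 1000):
print(g(500, 1.0, i=0, i0=100, p0=1000))    # 0.5: weight halves, cluster grows
print(g(500, 1.0, i=100, i0=100, p0=1000))  # 1.0: weight frozen at the end
```

This matches the annealing analogy in the text: under-populated clusters see their weights cut sharply in early iterations, while weights at the final iteration are returned unchanged.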
# Splitline Algorithm

To obtain results within our desired tolerance, it was necessary to calculate the districts of New York City separately from the remainder of the state. A limitation of the splitline algorithm is that it does not guarantee contiguity of districts (see Figure 5). However, it produces contiguous (and, furthermore, convex) districts for a convex state.

# Conclusions

We conclude that both the K-means algorithm and the splitline algorithm are viable methods for fair and simple redistricting. K-means produced much better results for New York: the population deviation

$$
\max _ {\text {all districts}} \left| 1 - \frac {\text {cluster population}}{\text {target population}} \right|
$$

is no more than $2.5\%$. As an interesting note, while the unweighted K-means method clusters data into regions with piecewise-linear boundaries, inclusion of the weight function effectively rounds the boundaries of the produced districts. These rounder districts have superb compactness coefficients. K-means also has a visually appealing output and meets all other criteria.

The splitline algorithm results are not quite as satisfactory; however, we believe that this is a result of our implementation and not of the algorithm itself. Even our flawed version of splitline produced districts simpler than the current districts in New York. Our implementation could achieve districts with either even population or high compactness coefficients, but not both simultaneously. It is also difficult to enforce the contiguity requirement in regions with a highly irregular border. When the splitline algorithm is applied to states with convex boundaries, there are no discontinuities; furthermore, every district is convex. In the case of simple states, the splitline algorithm works well, perhaps even better than the K-means algorithm.
Its intuitive simplicity is also likely to make shortest-splitline more appealing to the public.

Both K-means and splitline are deterministic: that is, when each algorithm is applied to a fixed problem, and all parameters are constant, the final result is unique. Some authors have expressed the opinion that any good districting algorithm is deterministic [Hayes 1996]. There is one human element involved in the K-means algorithm: The choice of seeds is made, in some sense, subjectively, by the person implementing the algorithm. This factor could be completely eliminated by randomly picking the seeds, but this is not the most desirable solution. Random seeds can produce solutions far from the global optimum of the optimization function and can require many more iterations to get an answer within a given tolerance level. The natural choice is to use the approximate centers of existing districts as seeds. At first, this may seem contrary to our goal of reversing the effects of gerrymandering. A closer analysis of gerrymandering shows that this is not true. Gerrymandering relies on intricately carving districts based on data that are invisible to our algorithm—say, ethnicity, income level, or political affiliation.

![](images/c9a1642b9b09caf40dc34fe75bdb824d47caf2326bde6bb2d80b2915b4e9c585.jpg)

![](images/d8d5ef3c106446317dc431fd203cc06359bd0948c6ba283a93a3dbdf56d515f8.jpg)
Figure 5. A proposed redistricting of New York, using the splitline algorithm and calculating the districts within New York City separately from the remaining districts.

The K-means algorithm clearly performs better on more-complex data sets. The splitline algorithm should not be abandoned, but our final recommendation is that

The $K$-means algorithm quickly and deterministically produces districts that are simple and fair, and applying this algorithm would produce a drastic improvement over current districts in any state.

# References

Afifi, A.A., and Virginia Clark. 1984.
Computer-aided Multivariate Analysis. Belmont, CA: Lifetime Learning Publications.
Abramowitz, M., and I.A. Stegun. 1968. Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables. Washington, DC: U.S. Government Printing Office.
Anderberg, Michael R. 1973. Cluster Analysis for Applications. New York: Academic Press.
Backstrom, Charles H. 1986. The Supreme Court prohibits gerrymandering: A gain or a loss for the states? The State of American Federalism 17 (3): 1-33.
Baker v. Carr, 369 U.S. 186 (1962).
Bourke, Paul. 1988. Calculating the area and centroid of a polygon. http://local.wasp.uwa.edu.au/~pbourke/geometry/polyarea/. Accessed 11 February 2007.
Center for International Earth Science Information Network. 2007. Socioeconomic data and applications center. http://sedac.cisesin.columbia.edu/. Accessed 8 February 2007.
Folland, Gerald B. 2002. Advanced Calculus. Upper Saddle River, NJ: Prentice Hall.
Garfinkel, R.S., and G.L. Nemhauser. 1970. Optimal political districting by implicit enumeration techniques. Management Science 16 (8): B495-B508.
Hayes, Brian. 1996. Machine politics. American Scientist 84 (6): 522-526.
Shapiro, Linda G., and George C. Stockman. 2001. Computer Vision. Upper Saddle River, NJ: Prentice Hall.
Smith, Warren B. 2007. Examples of our unbiased district-drawing algorithm in action. http://rangevoting.org/RangeVoting.html. Accessed 12 February 2007.
Toobin, Jeffrey. 2003. The great election grab. New Yorker 79 (38) (8 December 2003): 63-80.
U.S. Department of the Interior. 2007. National Atlas of the United States. Printable Maps. Congressional Districts—110th Congress. http://nationalatlas.gov/printable/congress.html.
Young, H.P. 1988. Measuring the compactness of legislative districts. Legislative Studies Quarterly 13 (1): 105-115.
# Appendix: Definitions of Compactness

As discussed by Young [1988], the following definitions of compactness are representative of those favored in past and present scholarship.

- Visual test. A district is more compact if it appears to be more compact.
- Roeck test. Find the smallest circle containing the district and take the ratio of the district's area to that of the circle. This ratio is always between 0 and 1; the closer to 1, the more compact the district.
- Schwartzberg test. Construct the adjusted perimeter of the district by connecting with straight lines points on the district boundary where three or more constituent units (i.e., census tracts) from any district meet. Divide the length of the adjusted perimeter by the perimeter of a circle with area equal to that of the district.
- Length-width test. Find a rectangle enclosing the district and touching it on all four sides, such that the ratio of length to width is a maximum. The closer the ratio to 1, the more compact the district.
- Taylor's test. Construct the adjusted perimeter of the district by connecting by straight lines those points on the district boundary where three or more constituent units (i.e., census tracts) from any district meet. At each such point the angle formed is "reflexive" if it bends away from the district and "non-reflexive" otherwise. Subtract the number of reflexive from the number of non-reflexive angles and divide by the total number of angles. The resulting number is always between 0 and 1; the closer to 1, the more compact the district.
- Moment of Inertia test. Locate the geographical center $\mathbf{c}_i$ of each census tract $i$ in the district. Select an arbitrary point $\mathbf{x}$ and calculate the square of the distance from $\mathbf{x}$ to $\mathbf{c}_i$ , multiplied by the population of tract $i$ .
The sum of these numbers is the district's moment of inertia about $\mathbf{x}$ . The point that gives the minimum moment of inertia is the center of gravity of the district. The smaller the moment of inertia about the center of gravity, the more compact the district.
- Boyce-Clark test. Determine the center of gravity of the district and measure the distance from the center to the outside edges of the district along equally-spaced radial lines. Compare the percentages by which each radial distance differs from the average radial distance, and find the average of the percentage deviations over all radial lines. The closer the result is to 0, the more compact is the district.
- Perimeter test. Find the sum of the perimeters of all the districts. The shorter the total perimeter, the more compact the districting plan.

![](images/39ea3055c328403855d86276e9784205842ef5a8628c271ac3ccdfce79f96152.jpg)
Members of both Outstanding teams from the University of Washington in the Gerrymander Problem. Top row, from left: Sam Whittle, Aaron Dilley, Sam Burden, advisor Jim Morrow; bottom row: Lukas Svec, Wesley Essig, Nate Bottman. Not shown: Advisor Anne Greenbaum.

# Applying Voronoi Diagrams to the Redistricting Problem

Lukas Svec
Sam Burden
Aaron Dilley
University of Washington
Seattle, WA

Advisor: James Allen Morrow

# Summary

Gerrymandering is an issue plaguing legislative redistricting. We present a novel approach to redistricting that uses Voronoi and population-weighted Voronoiesque diagrams. Voronoi regions are contiguous, compact, and simple to generate. Regions drawn with Voronoiesque diagrams attain small population variance and relative geometric simplicity.

As a concrete example, we apply our methods to partition New York State. Since New York must be divided into 29 legislative districts, each receives roughly $3.44\%$ of the population. Our Voronoiesque diagram method generates districts with an average population of $(3.34 \pm 0.74)\%$ .
We discuss possible refinements that might result in smaller population variation while maintaining the simplicity of the regions and objectivity of the method.

# Introduction

Defining Congressional districts has long been a source of controversy in the U.S. Since the district-drawers are chosen by those in power, the boundaries are often created to influence future elections by grouping an unfavorable minority demographic with a favorable majority; this process is called gerrymandering. It is common for districts to take on bizarre shapes, spanning slim sections of multiple cities and crisscrossing the countryside in a haphazard fashion. The only lawful restrictions on legislative boundaries stipulate that districts must contain equal populations, but the makeup of the districts is left entirely to the district-drawers.

In the United Kingdom and Canada, districts are more compact and intuitive. The success of these countries in mitigating gerrymandering is attributed to turning over boundary-drawing to nonpartisan advisory panels. However, these independent commissions can take two to three years to finalize a new division plan, calling their effectiveness into question. It seems clear that the U.S. should establish similar unbiased commissions yet make some effort to increase the efficiency of these groups. Accordingly, our goal is to develop a small toolbox to aid in the redistricting process. Specifically, we create a model that draws legislative boundaries using simple geometric constructs.

# Current Models

The majority of methods for creating districts fall into two categories: ones that depend on a current division arrangement (most commonly counties) and ones that do not. Most fall into the former category. By using current divisions, the problem is reduced to grouping these divisions in a desirable way using a multitude of mathematical procedures. Mehrotra et al.
[1998] use graph partitioning theory to cluster counties, achieving total population variation of around $2\%$ of the average district size. Hess et al. [1965] use an iterative process to define population centroids, use integer programming to group counties into equally populated districts, and then reiterate the process until the centroids reach a limit. Garfinkel and Nemhauser [1970] use iterative matrix operations to search for district combinations that are contiguous and compact. Kaiser begins with the current districts and systematically swaps populations with adjacent districts [Hamilton 1966]. All of these methods use counties as their divisions, since counties partition the state into a relatively small number of sections. This is necessary because most of the mathematical tools they use become slow and imprecise with many divisions. (Equivalently, they become unusable in the continuum limit, as the state is divided into ever-finer sections.) Thus using small divisions, such as zipcodes (which on average are one-fifth the size of a county in New York), becomes impractical.

The other category of methods is less common. Forrest's method repeatedly divides a state into halves while maintaining population equality until the required number of districts is reached [Hamilton 1966; Hess et al. 1965]. Hale et al. create pie-shaped wedges about population centers; this creates homogeneous districts that all contain portions of a large city, suburbs, and less populated areas [Hamilton 1966]. These approaches are noted for being the least biased, since their only consideration is population equality and they do not use preexisting divisions. Also, they are straightforward to apply. However, they do not consider any other possibly important considerations for districts, such as geographic features of the state or how well they encompass cities.
+ +# Developing Our Approach + +Since our goal is to create new methods, we focus on creating district boundaries independently of current divisions. It is not obvious why counties are a good beginning point for a model: Counties are created in the same arbitrary way as districts, so they may also contain biases. Many of the division-dependent models end up relaxing their boundaries from county lines so as to maintain equal populations, which makes the initial assumption of using county divisions all the more unnecessary and also allows for gerrymandering if the relaxation method is not well regulated. + +Treating the state as continuous (i.e., without considering pre-existing divisions) does not lead to any specific approach. If the Forrest and Hale et al. methods are any indication, we should focus on keeping cities within districts and introduce geographical considerations. (These conditions do not have to be considered if we treat the problem discretely, because current divisions, like counties, are probably dependent on prominent geographical features.) + +Goal: Create a method for redistricting by treating the state continuously. We require the final districts to contain equal populations and to be contiguous. Additionally, the districts should be as simple as possible and optimally take into account important geographical features. + +# Notation and Definitions + +- Compactness: One way to look at compactness is the ratio of the area of a bounded region to the square of its perimeter: + +$$ +C _ {R} = \frac {A _ {R}}{p _ {R} ^ {2}} = \frac {1}{4 \pi} \mathcal {Q}, +$$ + +where $C_R$ is the compactness of region $R$ , $A_R$ is the area, $p_R$ is the perimeter and $\mathcal{Q}$ is the isoperimetric quotient. We do not explicitly use this equation, but we keep this idea in mind when we evaluate our model. + +- Contiguous: A set $R$ is contiguous if it is pathwise-connected. 
- Decomposition: Process in which a state is divided into regions using Voronoi and Voronoiesque diagrams.
- Degeneracy: The number of districts represented by one generator point.
- Generator point: A node of a Voronoi diagram.
- Population center: A region of high population density.
- Simple: Simple regions are compact and convex.
- Voronoi diagram: A partition of the plane with respect to $n$ nodes such that the points in the same region as a node are closer to that node than to any other node.
- Voronoiesque diagram: A variation of the Voronoi diagram based on equal masses of the regions.

# Theoretical Evaluation of our Model

How we analyze our model's results is tricky, since there is disagreement in the literature on key issues. Population equality is well-defined: By law, the populations within districts have to be the same to within a few percent of the average population per district.

Creating a successful redistricting model also requires contiguity. In accordance with state law, districts need to be pathwise-connected. This requirement is meant to maintain locality and community within districts. It does not, however, restrict districts from including islands if the island's population is below the required population level for a district.

Finally, there is a desire for the districts to be simple. This is the most ambiguous criterion. Most commonly, simplicity of districts is gauged by compactness (which by no means leads to a unique definition of simple). Taylor [1973] defines simple as a measure of divergence from compactness due to indentation of the boundary and gives an equation relating the nonreflexive and reflexive interior angles of a region's boundary. Young [1988] provides seven more measures of compactness. The Roeck test is the ratio of the area of a region to the area of the smallest circle containing it.
The Schwartzberg test takes the ratio of the adjusted perimeter of a region to the perimeter of a circle whose area is the same as that of the region. The moment of inertia test measures relative compactness by comparing "moments of inertia" of different district arrangements. The Boyce-Clark test compares the distances from points on a district's boundary to the center of mass of that district, where zero deviation among these distances is most desirable. The perimeter test compares different district arrangements by computing the total perimeter of each; a smaller perimeter means more compact. Finally, there is the visual test, which decides simplicity based on intuition [Young 1988].

Young [1988] notes that "a measure [of compactness] only indicates when a plan is more compact than another." Thus, not only is there no consensus on how to analyze redistricting proposals, it is also difficult to compare them.

Finally, we remark that the above list only constrains the shape of generated districts without regard to any other potentially relevant features—for example, how well populations are distributed or how well the new district boundaries conform with other boundaries, like those of counties or zipcodes. Even with this short list, we are not in a position to define simplicity rigorously. What we can do, however, is argue for which features of proposed districts are simple and which are not. This is in line with our goal, since this list can be provided to a districting commission, which decides how relevant those simple features are. We do not explicitly define simplicity; rather, we loosely evaluate it based on the overall contiguity, compactness, convexity, and intuitiveness of the model's districts.

![](images/9abba37138eee1a5278e29478c5af14b10c6ac53c7747216584968b6d6fb8e26.jpg)
Figure 1. Illustration of Voronoi diagram generated with Euclidean metric. Note the compactness and simplicity of the regions.
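Of the tests just listed, the moment-of-inertia test lends itself to direct computation. A sketch with hypothetical tract data, using the fact that the population-weighted mean of the tract centers minimizes the weighted moment of inertia:

```python
def center_of_gravity(centers, pops):
    """Population-weighted mean of tract centers; this is the point
    minimizing the weighted moment of inertia."""
    total = sum(pops)
    return tuple(sum(p * c[k] for c, p in zip(centers, pops)) / total
                 for k in range(2))

def moment_of_inertia(centers, pops, point):
    """Sum over tracts of population times squared distance to `point`."""
    return sum(p * ((c[0] - point[0]) ** 2 + (c[1] - point[1]) ** 2)
               for c, p in zip(centers, pops))

# hypothetical district of two equally populated tracts
tracts, pops = [(0.0, 0.0), (2.0, 0.0)], [100, 100]
cg = center_of_gravity(tracts, pops)
print(cg)                                   # (1.0, 0.0)
print(moment_of_inertia(tracts, pops, cg))  # 200.0
```

Comparing this quantity across candidate districtings gives the relative ranking the test describes; smaller is more compact.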
# Method Description

Our approach depends heavily on Voronoi diagrams. We begin with a definition, describe their useful features, and motivate their application to redistricting.

# Voronoi Diagrams

A Voronoi diagram is a set of polygons, called Voronoi polygons, formed with respect to $n$ generator points in the plane. Each generator $p_i$ is contained within a Voronoi polygon $V(p_i)$ with the following property:

$$
V (p _ {i}) = \{q \mid d (p _ {i}, q) \leq d (p _ {j}, q), i \neq j \},
$$

where $d(x,y)$ is the distance from point $x$ to point $y$ . That is, $V(p_{i})$ is the set of all points $q$ that are closer to $p_{i}$ than to any other $p_{j}$ . The Voronoi diagram is the set $\mathbf{V} = \{V(p_1),\ldots ,V(p_n)\}$ of all Voronoi polygons (see Figure 1).

Of the many possible metrics, we use the three most common:

- Euclidean metric: $d(p,q) = \sqrt{(x_p - x_q)^2 + (y_p - y_q)^2}$
- Manhattan metric: $d(p, q) = |x_p - x_q| + |y_p - y_q|$
- Uniform metric: $d(p, q) = \max \left\{ |x_p - x_q|, |y_p - y_q| \right\}$

![](images/8bbb8efc3e447c6247a623b56c7582f563a746e82ed2dfb554c30e461fa5e54c.jpg)
Figure 2. The process of "growing" a Voronoiesque diagram with respect to a population density. Only three generator points are used. Panels from left to right show successive iterations.

# Useful Features of Voronoi Diagrams

- The Voronoi diagram for a set of generator points is unique and produces contiguous polygons.
- The nearest generator point to $p_i$ determines an edge of $V(p_i)$ .
- The polygonal lines of a Voronoi polygon do not intersect the generator points.
- When working in the Euclidean metric, all regions are convex.

The first property tells us that regardless of how we choose the generator points, we generate unique regions. The second property implies that each region is defined in terms of the surrounding generator points, while in turn each region is relatively compact.
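Deciding which Voronoi polygon contains a point reduces to nearest-generator assignment under the chosen metric; a minimal sketch (the generator coordinates are illustrative):

```python
import math

def euclidean(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def manhattan(p, q):
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def uniform(p, q):
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

def voronoi_region(q, generators, d=euclidean):
    """Index of the generator whose Voronoi polygon contains q
    (ties go to the first nearest generator)."""
    return min(range(len(generators)), key=lambda i: d(q, generators[i]))

gens = [(0, 0), (4, 0), (2, 3)]
print(voronoi_region((1, 0), gens))             # 0, under the Euclidean metric
print(voronoi_region((3, 2), gens, manhattan))  # 2
```

Rasterizing a map and applying `voronoi_region` to every cell reproduces the partition in Figure 1, with the metric argument selecting among the three geometries above.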
These features of Voronoi diagrams effectively satisfy two of the three criteria for partitioning a region: contiguity and simplicity.

# Voronoiesque Diagrams

The second method that we use to create regions is a modification of Voronoi diagrams; we call them Voronoiesque diagrams. One way to visualize the construction of Voronoi diagrams is to imagine shapes (determined by the metric) that grow radially outward at the same constant rate from each generator point. In the Euclidean metric, these shapes are circles; in the Manhattan metric, they are diamonds; in the Uniform metric, they are squares. As the regions intersect, they form the boundary lines for the regions. We define Voronoiesque diagrams to be the boundaries defined by the intersections of these growing shapes. The fundamental difference between Voronoi and Voronoiesque diagrams is that Voronoiesque diagrams do not grow the shapes radially outward at a constant rate; instead, their radial growth is defined with respect to a real function on a subset of $\mathbb{R}^2$ (representing the space on which the diagram is being generated) (see Figure 2).

More rigorously, we define a Voronoiesque diagram to be the intersections of the $\mathcal{V}_i^{(t)}$ s, where $\mathcal{V}_i^{(t)}$ is the Voronoiesque region generated by the generator point $p_i$ at iteration $t$ . With every iteration,

$$
\mathcal {V} _ {i} ^ {(t)} \subset \mathcal {V} _ {i} ^ {(t + 1)}
$$

and

$$
\int_ {\mathcal {V} _ {i} ^ {(t)}} f (x, y) \, d A = \int_ {\mathcal {V} _ {j} ^ {(t)}} f (x, y) \, d A
$$

for all $\mathcal{V}_i^{(t)}, \mathcal{V}_j^{(t)}$ representing different regions. The manner in which the $\mathcal{V}_i^{(t)}$ s are grown radially from one iteration to the next is determined by the metric used.

What is useful about Voronoiesque diagrams is that their growth can be controlled by requiring that the area under the function $f$ over each region be the same at every iteration. In our model, we take $f$ to be the population distribution of the state.
Thus, the above equation is a statement of population equality. Also, when $f$ is constant, the regions grow at a constant rate until they intersect, so the resulting diagram is Voronoi.

The final consideration for using Voronoiesque diagrams is determining the locations of the generator points.

# Determining Generator Points

Our first approach is to place generator points at the $m$ largest population peaks that are well distributed throughout the state, where $m$ is the required number of districts. Doing this keeps larger cities within the boundaries that we will generate with Voronoi or Voronoiesque diagrams and makes sure that the generator points are well dispersed throughout the state. One problem arises when a city is so large that, for the districts to hold the same number of people, the city must be divided among several districts. A perfect example is New York City, which contains enough people for 13 districts.

Our second approach is to choose the largest peaks in the population distribution and assign each peak a weight. The weight for each generator point is the number of districts into which the population surrounding that peak needs to be divided. We call this weight the degeneracy of the generator point. We begin assigning generator points to the highest populated cities with their corresponding degeneracies until the sum of all the generator points and their respective degeneracies equals $m$ . As we will see with New York, this method works well.

# Creating Regions

Once we have our generator points, we generate our diagrams:

- First, generate the diagram using the given generator points.
- Second, for each generated region (called a subdivision) with some degeneracy $r$ , create $r$ new generator points within that subdivision by finding the $r$ largest population density peaks.

![](images/79653b0f1398839ce1e5d2571efebaa1dfd9cd917ba7aa0716aea82a523152c8.jpg)
Figure 3. Creating divisions by first subdividing the map.
Left: Population density distribution of hypothetical map with five desired districts. Middle: A subdivision of the map into two regions generated from two unshown generator points. Right: Final division of each subregion from the middle figure into desired final divisions.

- Third, create another diagram within that subdivision using the $r$ generator points.

We call this three-step process a decomposition (Figure 3).

# Redistricting in New York State

We begin by explaining our choice of generator points at population centers. Then we describe several methods for generating Voronoi and Voronoiesque diagrams from these points and present the corresponding results. Finally, we discuss how to use these diagrams to create political districts for New York.

# Population Density Map

The U.S. Census Bureau maintains a database of census tract-level population statistics; when combined with boundary data for each tract, it is possible to generate a density map with a resolution no coarser than 8,000 people per region. Unfortunately, our limited experience with the Census Bureau's data format prevented us from accessing these data directly, and we contented ourselves with a 792-by-660 pixel approximation to the population density map [Irwin 2006].

We loaded this raster image into Matlab and generated a surface plot where height represents population density. To remove artifacts introduced by using a coarse lattice representation for finely distributed data, we applied a 6-pixel moving-average filter to the density map. The resulting population density is shown in Figure 4.

# Limitations of the Image-Based Density Map

The population density image that we used yields a density value, for every one-third of a square mile, from the following set (measured in people per

![](images/8884569fc13953fad498f97d54565c5d9acfe010e95530c639dcd1fa1a89297a.jpg)
a. White areas represent high population density over New York.

Figure 4.
New York State population density map. Data from the 792-by-660 pixel raster image at Irwin [2006]; color and height indicate the relative population density at each point.
![](images/25195497a077439e5ba032e195b47fa3d84f98f98ad2975df34941f95c160e3b.jpg)
b. Angled view: Clearer view of population distribution over New York.

square mile):

$\{0, 10, 25, 50, 100, 250, 500, 1000, 2500, 5000\}$.

This provides a decent approximation for regions with a density smaller than 5,000 people/sq. mi. However, the approximation breaks down at large population centers; New York City's average population density is 26,403 people/sq. mi. [Wikipedia 2007].

# Selecting Generator Points

Our criteria for redistricting stipulate that the regions must contain equal populations. New York State must be divided into 29 congressional districts, so each region must contain $3.44\%$ of the state's population. Since a state's population is concentrated primarily in a small number of cities, we use local maxima of the population density map as candidates for generator points.

If we were simply to choose the 29 highest peaks of the population density map as generator points, they would be contained entirely in the largest population centers and would not be distributed evenly over the state. For the largest population centers, we instead assign a single generator point with a degeneracy. After all the generator points have been assigned, we generate a Voronoi diagram for the state. Then we return to each region with a degenerate generator point, find generator points for that region, and generate a Voronoi diagram from them. See Figure 3 for an illustration of a decomposition before and after subdivision.

We subdivide the region around New York City into 12 subregions, Buffalo into 3 subregions, and Rochester and Albany into 2 subregions.
This roughly corresponds to the current allocation, where New York City receives 14 districts, Buffalo gets 3, and Rochester and Albany both get roughly 2. New York City's population is underestimated, since the average density there far exceeds our data's density range. With a more detailed data set, our method would call for more-accurate degeneracy values for each subdivision.

# Applying Voronoi Diagrams to NY

The simplest method that we consider for generating congressional districts is to generate the Voronoi diagram from a set of generator points. We achieve this by iteratively "growing" regions outward with the function $f$ constant. That way, the regions grow at a constant rate, and hence the resulting diagram is Voronoi. A region's growth is limited at each step by its radius in a certain metric; we considered the Euclidean, Manhattan, and Uniform metrics. Once the initial diagram has been created, a new set of generator points for subdivisions is chosen and those regions are subdivided using the same method. The decomposition of New York is seen in Figures 5-6.

![](images/2fb1db2cce728b1538a07eb6464019aa673995e5847217d18fcb9cf20b52673a.jpg)
a. Regions created using the Manhattan metric before subdivisions are implemented.

![](images/67e1cd18eb538c5b2c1063027e91878143e20bf1dfe63efa981ba0b2b9dde286.jpg)
b. Regions created using the Manhattan metric after subdivisions are implemented. Subdivisions are created in New York City, Buffalo, Rochester, and Albany.
Figure 5. Implementation of Voronoi diagrams with the Manhattan metric, in three steps: assigning degeneracies to generator points, using the points to generate regions, and creating subregions generated by degenerate points. Only the last two steps are depicted. (Dots in each region represent generator point locations.)

![](images/6cdfbc936ea1e3316b33b2832195a1a1049c661bc500cee9b075849ccb1496cf.jpg)

a. Regions created using the Manhattan metric before subdivisions.
Average Population $= (3.5 \pm 2.2)\%$.

![](images/937e8016e8f31ecde27ca953955b161ba7e1bca62377787a299675d172cb5e09.jpg)

b. Regions created using the Euclidean metric before subdivisions. Average Population $= (3.7 \pm 2.6)\%$.

![](images/cd60cabb757364512a2877a8ffbe46b0317c8c321cbf5eda9d386c27cefbd45e.jpg)

c. Regions created using the Uniform metric before subdivisions. Average Population $= (3.7 \pm 2.6)\%$.

Figure 6. Voronoi diagrams generated with three distance metrics before subdivision of densely populated regions. (Dots in each region represent generator point locations.)

Each metric produces relatively simple districts, though the Manhattan metric has simpler boundaries and yields a slightly smaller population variance between regions.

# Applying Voronoiesque Diagrams to NY

Though our simple Voronoi diagrams produced simple regions with a population mean near the desired value, the population variance between regions is enormous. In this sense, the simple Voronoi decomposition fails to meet one of our main goals. However, the Voronoi regions are so simple that we prefer to augment this method with population weights rather than abandon it entirely.

Figure 7 shows the result of this decomposition, along with exploded views of the two regions that were subdivided more than twice in the refinement stage of the diagram generation. The population in each region varies from $2.44\%$ to $6.15\%$.

# Precisely Defining Boundary Lines

It is not satisfactory to say that the regions created by our models should define the final boundary locations. At the least, boundaries should be adjusted so that they don't accidentally divide houses between two districts.
However, given the scale at which the Voronoi and Voronoiesque diagrams were drawn, it seems reasonable to assume that their boundaries could be modified to trace existing boundaries—such as county lines, zipcode boundaries, or city streets—without changing their general shape or average population appreciably. For example, the average area of a zipcode in New York State is 10 sq. mi. (and Manhattan has roughly 200 city blocks per square mile), while the smallest of our Voronoi regions is 73 sq. mi. and the average is 2,000 sq. mi. Therefore, it seems reasonable that we could approximate the boundaries of our Voronoi and/or Voronoiesque diagrams by pre-existing boundaries.

# Analysis

# New York State Results

In terms of simplicity of generated districts, our Voronoi-diagram method is superior, particularly when applied with the Manhattan metric: The generated regions are contiguous and compact, while their boundaries—being polygonal—are about the simplest that could be expected. However, this method falls short in achieving equal population distribution among the regions.

When we modify the Voronoi diagram method to generate population-weighted Voronoiesque regions, we cut the population variance by a factor of four—from $\pm 2.8\%$ to $\pm 0.7\%$—while suffering a small loss in the simplicity of the resulting regions.

![](images/304d720d607f7a1d3b1c0f6ae886fe204928df9964c2e6fd0f5fabfa6c7fa06d.jpg)
a. Overall New York Voronoiesque regions.

![](images/7de92f166e5535c2712826f49532e7c36940d862fdefe3270edbc7ab9cd6b944.jpg)
b. Exploded view of regions around Buffalo.

![](images/a97f9f7df79a76e088461739ca1fa4ed488f138bc70fbcd45f1c62aa466c79f6.jpg)
c. Exploded view of regions around Long Island.
Figure 7. Districts created by the Voronoiesque diagram for New York. Average population per region $= (3.34 \pm 0.74)\%$. (Dots in each region represent generator point locations.)
In particular, regions in the Voronoiesque diagrams appear to be less compact, and their boundaries are more complicated, than their Voronoi diagram counterparts, though contiguity is still maintained.

Any implementation of a diagram generated by either method would have to make small localized modifications to ensure that the district boundaries make sense from a practical perspective. Though this would appear to open the door to politically biased manipulation, the size of the necessary deviations (on the order of miles) is small enough compared to the size of a Voronoi or Voronoiesque region (on the order of tens or hundreds of miles) to make the net effect of these variations insignificant. Therefore, though we have provided only a first-order approximation to the congressional districts, we have left little room for gerrymandering.

# General Results

# Population Equality

The largest problem with this requirement occurs when we try to make regions too simple. Typically, our Voronoi method has the most room for error here. If a state has high population density peaks, with population density decreasing relatively uniformly away from each peak, then the region populations will differ considerably: In this situation, the ratios of populations between regions are roughly equal to the ratios of their areas. However, our final method focuses primarily on population, so equality is much easier to regulate.

# Contiguity

Contiguity problems arise if the state itself has little compactness, like Florida, or has some sort of oceanic sound, like Washington. The first two methods focus on population density without acknowledging the boundaries of the state. So it is possible for a district to be divided by a geographic obstruction, such as a body of water or a mountain range. Our final method fixes this by growing in increments, which allows regions to grow not over but around specified obstacles.
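This incremental, population-weighted growth can be sketched in code. The following is a toy version, not our actual implementation: the function name, the list-of-lists grid, and the 4-neighbor (Manhattan-style) growth rule are illustrative assumptions. Regions grow one cell at a time from their generator points; the region currently holding the least population always grows next, which keeps populations nearly equal at every iteration, and obstacle cells are never claimed, so regions grow around them.

```python
from collections import deque

def voronoiesque(pop, generators, obstacles=frozenset()):
    """Population-equalizing region growth, a sketch of the Voronoiesque idea.

    pop:        2-D list of population per cell (the function f).
    generators: list of (row, col) generator points.
    obstacles:  set of (row, col) cells that may never be claimed, so
                regions grow around (not over) them.
    Returns (label, claimed): an integer label grid (-1 = unclaimed)
    and the population claimed by each region.
    """
    rows, cols = len(pop), len(pop[0])
    label = [[-1] * cols for _ in range(rows)]
    frontiers, claimed = [], []
    for i, (r, c) in enumerate(generators):
        label[r][c] = i
        frontiers.append(deque([(r, c)]))
        claimed.append(pop[r][c])
    while any(frontiers):
        # Always expand the region that currently holds the least
        # population; this enforces near-equal populations per iteration.
        i = min((k for k in range(len(generators)) if frontiers[k]),
                key=lambda k: claimed[k])
        r, c = frontiers[i].popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and label[nr][nc] == -1 and (nr, nc) not in obstacles):
                label[nr][nc] = i            # claim the neighboring cell
                claimed[i] += pop[nr][nc]
                frontiers[i].append((nr, nc))
    return label, claimed
```

On a uniform population grid this reduces to a discrete Voronoi diagram, matching the observation above that constant $f$ yields Voronoi regions.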
# Compactness

The Voronoi diagram method creates convex regions. Though the Voronoiesque method cannot guarantee convexity, the form of a region is similar in shape and size to the corresponding Voronoi region. A nice property of Voronoi regions is that we can make slight adjustments to the boundaries while still maintaining convexity (see below). This is good for taking population shifts across districts into account between redistrictings.

![](images/ed3dbae9338101a8582c01e5aa7cde3ec96e3c1e4cf79d704df373ff7f2e2249.jpg)
Figure 8. Illustration of Voronoiesque diagram generation that takes geographic obstacles into account.

# Improving the Method

# Boundary Refinement

The Voronoi approach is good at generating polygonal districts but less successful at maintaining population equality. One improvement is vertex repositioning. Adjacent districts generated by this method share a vertex common to at least three boundaries. From this vertex extends a finite number of line segments that partially define the boundaries of these adjacent regions. Connecting the endpoints of these segments yields a polygon. We are then free to move the common vertex anywhere in the interior of this polygon while still maintaining convexity; thus we can redraw boundaries between regions to equalize population.

In the Voronoiesque method, too, there are ways to adjust population inequality: Starting with the region with the lowest population, systematically increase the area of low-population regions while decreasing the area of the neighboring high-population regions.

# Geographic Obstacles

Our methods don't consider geographic features such as rivers, mountains, and canyons. The Voronoiesque method, however, has the potential to incorporate these features: The same algorithm that detects intersections between Voronoiesque regions can detect a defined geographic boundary and stop growing in that direction. An illustration of this idea is shown in Figure 8.
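Both refinements start from a plain Voronoi assignment. For the three metrics we use, that assignment can be sketched as follows (a hypothetical helper on a toy grid, not our Matlab code):

```python
def voronoi_labels(rows, cols, generators, metric="manhattan"):
    """Label each grid cell with the index of its nearest generator point.

    Minimal sketch of the plain Voronoi construction; the metric names
    and grid representation are illustrative.
    """
    def dist(p, q):
        dx, dy = abs(p[0] - q[0]), abs(p[1] - q[1])
        if metric == "euclidean":          # growing circles
            return (dx * dx + dy * dy) ** 0.5
        if metric == "manhattan":          # growing diamonds
            return dx + dy
        if metric == "uniform":            # Chebyshev: growing squares
            return max(dx, dy)
        raise ValueError(metric)

    return [[min(range(len(generators)),
                 key=lambda i: dist((r, c), generators[i]))
             for c in range(cols)] for r in range(rows)]
```

Ties go to the lower-indexed generator; the `manhattan` and `uniform` options correspond to the growing diamonds and squares described earlier.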
# Conclusion

Our model requires only a state's population distribution but can optionally incorporate county, property, and geographic considerations.

Our Voronoiesque model satisfies our proposed goal: We supply a model with which a redistricting committee can generate district boundaries that are simple and contiguous and that produce districts with equal populations. In particular, Voronoiesque diagrams redistrict New York very well. What is particularly attractive about our methods is that generating the districts is intuitive and accessible to the general public. The computer generation process takes less than 10 seconds.

# References

Garfinkel, R.S., and G.L. Nemhauser. 1970. Optimal political districting by implicit enumeration techniques. *Management Science* 16 (8): B495-B508.
Hamilton, Howard D. 1966. *Reapportioning Legislatures*. Columbus, OH: Charles E. Merrill Books, Inc.
Hess, S.W., J.B. Weaver, H.J. Siegfeldt, J.N. Whelan, and P.A. Zitlau. 1965. Non-partisan political redistricting by computer. *Operations Research* 13 (6): 998-1006.
Irwin, Jim. 2006. http://upload.wikimedia.org/wiki/en/e/e2/New_York_Population_Map.png.
Mehrotra, Anuj, Ellis L. Johnson, and George L. Nemhauser. 1998. An optimization based heuristic for political districting. *Management Science* 44 (8): 1100-1114.
Taylor, Peter J. 1998. A new shape measure for evaluating electoral district patterns. *American Political Science Review* 67 (3): 947-950.
U.S. Census Bureau. 2005. 2005 First Edition TIGER/Line data. Feb. 2007.
______. 2007. Census 2000 Summary File 1 [New York]. http://www2.census.gov/census_2000/datasets/Summary_File_1/New_York/.
Wikipedia. 2007. New York City. http://en.wikipedia.org/wiki/New_York_City.
Young, H.P. 1988. Measuring the compactness of legislative districts. *Legislative Studies Quarterly* 13 (1): 105-115.

![](images/0e145809c3d9f8f08fff7fb399fa7a1ead59876ec7602b2d5d8a48fc5bf62af0.jpg)
Aaron Dilley.
![](images/48e4f0f8e8605e17e2e147e067698cef6790c0d42dfeab1945707889927ecb11.jpg)
Lukas Svec, advisor Jim Morrow, and Sam Burden.

# Boarding at the Speed of Flight

Michael Bauer

Kshipra Bhawalkar

Matthew Edwards

Duke University

Durham, NC

Advisor: Anne Catlla

# Summary

We seek an efficient method for boarding a commercial airplane that accommodates unpredictable human behavior, with a framework that allows us to compare and contrast different procedures. Passenger dependencies, bottlenecks, and the rate of interferences are critical factors for airplane boarding time.

Boarding without seating assignments is fastest, since each person is in the correct order for their flexible seat choice; it removes all interferences and makes the boarding time depend solely on the entrance rate of passengers into the plane. Hoping to emulate the performance of this method, which we call "random greedy," we design a new algorithm to model its average seating order: the parabola boarding scheme, which breaks the plane into parabola-shaped zones.

We use a discrete-time simulation engine to model current boarding schemes as well as the parabola and random greedy algorithms. The zone-boarding schemes outside-in, pyramid, and parabola are almost identical in performance; back-to-front and alternating rows are significantly worse.

We examine the effects of scheme-independent parameters on boarding time. Ensuring a fast rate of people entering the plane and fast luggage stowage are both critical; an airline could reduce boarding time by improving either of these, regardless of boarding scheme.

By varying both the rate of people entering the plane and the time to stow luggage, we find a correlation between average boarding time and the difference between best and worst times. The random greedy algorithm has the smallest difference; outside-in, pyramid, and parabola have equal differences.
Faster boarding algorithms are also more reliable and allow for tighter scheduling.

The best boarding algorithms do not have assigned seating. If, however, an airline feels that assigned seating is mandatory for customer satisfaction, then any of outside-in, pyramid, or parabola will result in a consistently fast boarding time with minimum deviation from average times and will be a marked improvement over the traditional back-to-front boarding method.

# Introduction

Short of a single minor detail, the airplane boarding problem would be easily solved using a very simple algorithm. Given his performance in the film *Snakes on a Plane* [2007], we know Samuel L. Jackson is an optimal de-boarder of snakes from planes. Assuming that he maintains equal effectiveness with people, simply invert his role and you have an optimal passenger boarding algorithm. We model people as snakes, play the film in reverse, and determine the boarding time!

# Conventions

# Terminology

- Passenger: A person traveling on the plane who is not part of the crew.
- Boarding Scheme: An assignment of zones or groups according to which passengers board the plane.
- Interference: An event in which a passenger cannot progress towards their seat because of another passenger blocking the way.

# Variables

- $C$: the number of columns in the plane, which is also the number of seats in a row.
- $R$: the number of rows in the plane. For the most part, we ignore distinctions between seating classes or treat them separately.
- $B$: the time for a person to stow luggage, assumed to be constant in our preliminary analysis but allowed to vary in the simulation.
- $v$: the walking speed of passengers, constant.
- $s$: the time for an already-seated passenger to get up and out into the aisle to let another passenger pass, constant in our preliminary analysis but variable in the simulation.
- $\lambda$: the rate at which people enter the plane through the main door, constant since any variability among passengers is mitigated by walking down the jet-bridge to the plane.

# Assumptions

- Passengers with physical limitations, families with infants, and passengers advanced in years board the plane before other passengers. The time for this "pre-boarding" is a constant overhead that airlines cannot avoid.
- First-class passengers board separately.
- All passengers during general boarding walk at the same speed, limited more by the environment (aisle size, people in the way) than by physical capacity. Passengers board and walk independently; that is, no groups wait for one another. Family members are assigned seats next to one another.
- We confine our analysis to the interior of the plane, ignoring terminal effects beyond requiring that gate agents supply passengers at a certain rate. If the plane cannot "process" passengers quickly enough, they queue in the jet-bridge. The interior of the plane is regular and symmetric, with all rows of equal size.
- All planes fly at maximum capacity, and all passengers are present when their zone is called and board obediently with it. Empty seats only speed up the process. Late or noncompliant passengers can be accounted for by adding a time overhead.
- We confine our recommendations and analysis to methods that do not overly alter the status quo. We analyze ticketless methods for comparison but seek the best boarding method for ticketed contexts. We further consider only zone-based boarding calls, assuming that it is logistically impossible to require passengers to line up in any verifiable order.

# Motivation and Subproblems

What if all variables involving passenger boarding could be controlled? How would we schedule the boarding optimally? We would use a modified version of the outside-to-inside method.
We first order the passengers into groups of equal size $R$ by the following set of criteria, in descending order of priority:

- Individual location in row: Window has highest priority, aisle has least.
- Side of plane: left side of plane has priority over right side.
- Row number: Rows in back have priority over those in front.

The following algorithm then boards the plane optimally:

Each group proceeds down the aisle until each person reaches their row (since people are in order, they all reach their rows simultaneously). They step into the first seat in their row and begin stowing their luggage. During this time, the next group commences down the aisle. The only time when a group might stall in the aisle is if $B > 2R/v$, in which case every other group must wait in the aisle for $B - 2R/v$ seconds. (This accounts for the additional term in the second part of (1) below.)

The ideal boarding algorithm places a lower bound on the time to board an airplane:

Ideal boarding time $= \left\{ \begin{array}{ll} C \dfrac{R}{v} + B, & B \leq \dfrac{2R}{v}; \\ C \dfrac{R}{v} + B + \left( \left\lceil \dfrac{C}{2} \right\rceil - 1 \right) \left( B - \dfrac{2R}{v} \right), & B > \dfrac{2R}{v}. \end{array} \right.$ (1)

Key points about the operation of the algorithm are:

- The main aisle is continuously busy unless passengers have to wait for people in their row to finish stowing luggage.
- Passengers are "pipelined" to minimize the blocking effect of stowing luggage.

Imperfect ordering forces us to consider the following:

- Random orderings: How out of order are people, and how does this affect other dependencies in boarding?
- Flow rates: How long does it take people to enter the plane and walk down the aisle without blocking it?
- Luggage: How large is the luggage, and how long does it take to stow?

All of these introduce dependencies into the system.
Randomness prevents us from determining the occurrence or duration of these dependencies and therefore forces us to design boarding schemes capable of tolerating their effects.

One way to remove dependencies is to force people to continue moving as far back in the plane, and as far over in a row, as they can without being blocked. We return to this random greedy approach later, since it represents the intuitive motivation for our best airplane boarding scheme.

# Predicting Bottlenecks with Queuing Theory

One model for airplane boarding is a stochastic process, a collection of random variables that must take on a value at every state, with states indexed by a parameter (in our case, time) [Trivedi 2002]. Queuing theory deals with analyzing how the random variables in stochastic processes interact. Traditionally, queuing theory is used to determine the average throughput of a system. While the plane-boarding problem does not possess a quantity directly corresponding to throughput, we gain a better understanding of bottlenecks and their effects by using this approach.

We partition the plane into a series of queues. We place a "processor" at each row. This processor corresponds to a passenger making a decision at this point either to keep walking or to stop and enter their row. Each processor has a queue that stores passengers. A queue has size 1 and, if full, stops the processor feeding it; this represents people backing up when someone stops in the aisle. A diagram of this layout is in Figure 1.

![](images/da3ef0e770a945aa4165e6c71e621185ac16ab3bf9d91699b9c82f7e01a9e680.jpg)
Figure 1. A queuing-theory model of airplane boarding.

In Figure 1, $\mu_k$ is the processing rate, the average walking speed of passengers. Each $p_k$ represents the probability with which passengers divert into their rows or continue walking in the aisle. In some cases, people take longer to get into their rows, depending upon how long it takes to stow their luggage.
The processor associated with that row then takes longer to process that job, causing the flow of people through the aisle to stall. Drawbacks of this model are that all passengers are eventually supposed to leave the system (i.e., get into their seats) and that it doesn't accurately reflect that each row should only ever hold $C$ passengers; so we do not use the queuing-theory model as our main model. However, it gives us useful knowledge concerning bottlenecks in the aisle.

To convert the open system shown in Figure 1 to a closed-form system that can be solved by queuing theory, we use Jackson's Theorem (no, not Samuel L. Jackson again!) [1957]:

An open system can be represented by a feedback loop if the rate of processing at each processor is augmented proportional to the rate of flow prior to that processor.

In our application, all rates depend on $p_0$, because the rate into each queue depends on the output of the previous processor, so terms cancel under our assumptions for the value of any $p_j$. Using this theorem, we can redraw our airplane model as in Figure 2. The closed form allows us to determine the

![](images/5dfb90d02475681d3f191ff1987c98e0e28580a68bde0f698a8ce6114a43eddd.jpg)
Figure 2. A closed-form queuing model of airplane boarding.

probability of having a given number of passengers at a specific node at a given time. We let $\rho(k_0, k_1, \ldots, k_{n-1})$ be the probability of $k_i$ people in position $i$ in the aisle. Conceptually, this implies that we have an $n$-dimensional state space, since the number of passengers at each node is potentially different.

We now write down conditions that $\rho$ must satisfy and use these to find an equation for $\rho$.

The first condition is that $\rho$ must "conserve" passengers by maintaining flow of passengers into and out of each state in the state space.
This ensures that passengers are never "lost" in the system:

$$
\begin{array}{l} \left(\lambda + \sum_{j=0}^{n-1} \mu_j\right) \rho(k_0, k_1, \ldots, k_{n-1}) = \\ \quad \lambda \rho(k_0 - 1, k_1, \ldots, k_{n-1}) + \mu_{n-1} \rho(k_0, \ldots, k_{n-2}, k_{n-1} + 1) \\ \quad {} + \sum_{j=0}^{n-2} \mu_j \rho(\ldots, k_j + 1, k_{j+1} - 1, \ldots). \end{array}
$$

We also need to define the boundary states of the state space, which must ensure that no state can have a negative number of passengers at any processor:

$$
\begin{array}{l} (\mu_0 + \lambda)\, \rho(k_0, 0, 0, \ldots, 0) = \mu_1 \rho(k_0, 1, 0, \ldots, 0) + \lambda \rho(k_0 - 1, 0, 0, \ldots, 0), \qquad k_0 > 0; \\ (\mu_{n-1} + \lambda)\, \rho(0, 0, \ldots, k_{n-1}) = \mu_{n-2} \rho(0, 0, \ldots, 1, k_{n-1} - 1) + \mu_{n-1} \rho(0, 0, \ldots, k_{n-1} + 1), \qquad k_{n-1} > 0; \\ \lambda \rho(0, 0, \ldots, 0) = \mu_0 \rho(1, 0, \ldots, 0). \end{array}
$$

Lastly, all probabilities must sum to 1:

$$
\sum_{k_{n-1} \geq 0} \sum_{k_{n-2} \geq 0} \cdots \sum_{k_0 \geq 0} \rho(k_0, k_1, \ldots, k_{n-1}) = 1.
$$

We can then extend the solution presented in Trivedi [2002] from a two-processor chain and see that $\rho$ has the form

$$
\rho(k_0, k_1, \ldots, k_{n-1}) = \prod_{j=0}^{n-1} (1 - \rho_j)\, \rho_j^{k_j},
$$

where each $\rho_j$ takes the form

$$
\rho_j = \frac{\lambda}{\mu_j}
$$

and $\mu_j$ is the rate of processing of the $j$th processor. From Trivedi, we know that the bottleneck of the system occurs at the processor with the largest $\rho_j$.

We now consider a random ordering of people entering the plane, in which a passenger turns into a given row with probability $1/n$ or continues walking with probability $(n-1)/n$.
We assume that in the original system $\mu_0 = \mu_1 = \cdots = \mu_{n-1}$, and therefore the nodes in the closed system have utilizations $\rho_j = (n-j)\lambda/\mu_j$ for all $j$. This implies that $\rho_0$ is the largest in the system and is therefore the bottleneck. If we recursively apply this argument to an airplane with $n-1$ rows, we see that the bottleneck will always be the first processor. We can then recognize three important properties of airline boarding:

- The critical bottleneck for boarding is always the first row in the plane.
- The severity of the main bottleneck is linearly proportional to the number of rows in the plane.
- The farther back a row is, the less it contributes to bottlenecking.

# Effects of Row and Column Interferences

Boarding gets more complicated when people board out of order, which leads to row interferences and column interferences that hold up traffic. Here we use probabilistic estimation to assess which zone configurations are affected least by shuffling the passengers in a given zone. For the sake of simplicity, we analyze a plane with 6 seats per row, but the analysis generalizes. First, we develop some lemmas, based on the assumption that passengers board in random order.

Row interferences occur when a passenger sitting in an aisle or middle seat has to get up to let in the person who has the window seat or the middle seat. We calculate the expected number of times that a passenger has to get up if the passengers sitting in a row of $k$ seats board in random order.

Lemma 1 The expected number of interferences in a row of $k$ people is $k(k-1)/4$.

In particular, the expected number of interferences for 3 seats is $3/2$.

When a passenger stands in the aisle to stow luggage, the passengers behind must wait. We assume that a passenger can proceed to the right row and stow luggage as long as the passenger is not blocked by another passenger stowing luggage.
The lemma below finds the longest sequence of passengers who can be stowing their luggage at once. If the rows are numbered in increasing order from the back of the plane to the front, the problem reduces to finding a longest increasing subsequence of row assignments among the passengers, since these passengers can then proceed to their seats and stow their bags.

Lemma 2 (Kiwi 2006) The expected length of the longest increasing subsequence in a permutation of $\{1, 2, \ldots, k\}$ is asymptotically $2\sqrt{k}$.

The proof is quite involved, and we do not discuss it. The lemma tells us that if $k$ passengers sitting in different rows board the plane at once, then $2\sqrt{k}$ of them can proceed to their seats and stow luggage without encountering an interference. If we have $m$ people spread over $k$ rows, then it will take them $\lfloor m/(2\sqrt{k}) \rfloor B$ time to stow their luggage.

We use these lemmas to estimate the boarding time for a group of passengers seated in different configurations.

# Configuration 1: Dense Distribution over Rows

The zone is composed of $m$ passengers spread densely over $k$ rows. For 6 passengers in a row, dense means that all 6 are in the same zone. The expected number of row interferences for this configuration is $\frac{3}{2} \cdot 2k = 3k$ (two 3-seat halves per row, over $k$ rows). The boarding time for this zone is approximately

$$
T = \left\lfloor \frac{m}{2\sqrt{k}} \right\rfloor B + 3ks,
$$

where $B$ is bag-stowage time and $s$ is the time for a passenger to get out of their seat to allow a fellow passenger to pass and then sit down again. The time for people to walk down the aisle can be ignored, since in this case it is overshadowed by bag stowage and reseating.

# Configuration 2: Sparse Distribution over Rows

The zone is composed of $m$ passengers sparsely distributed over $k$ rows, meaning at most two passengers in a row, mostly on different sides of the aisle.
Having a sparse distribution totally eliminates the effect of reseating time but makes walking time the critical factor. The walking time for this configuration is roughly $kv$, where $v$ is the time to walk from one row to the next. Thus, the total time for boarding this group is

$$
T = \left\lfloor \frac {m}{2 \sqrt {k}} \right\rfloor B + k v.
$$

# Boarding Schemes

Currently used boarding systems include:

Back-to-front: (Air Canada, Alaska, American, British Airways, Continental, Frontier, Midwest, Spirit, Virgin Atlantic [Finney 2006]) The most widely used boarding scheme. Passengers are divided into zones and board at the front door in back-to-front order.

Outside-in: (Delta and United Airlines) Passengers are boarded windows first, followed by middle seats, with aisle seats boarding last.

Reverse-pyramid: (US Airways on some routes) This scheme boards people in a V-like manner, with rear middle and window seats boarding first, followed by rear aisle and front aisle seats.

No assigned seats: (Southwest Airlines) Ostensibly the fastest boarding scheme. Passengers are not assigned seats and may sit anywhere. This scheme has not been widely copied, since it does not lead to high customer satisfaction and is often likened to a cattle car.

Figure 3 offers a visual comparison. We color seats according to the order in which they fill, with red (dark) earlier and green (light) later. The entry door is at the top, and the back row is at the bottom. We include an ordering named "Parabolas" that we introduce later.

![](images/17f45b5cb0e09445ded80424e8bef93d5789e77f6a785e0d64c1dd933a0ba3d0.jpg)
Figure 3. Seat ordering schemes.

# Simulation Design and Details

We produced a comprehensive, flexible boarding simulator that we use to compare boarding algorithms and the effects of various situations. Our simulation techniques were inspired by stochastic Petri nets, finite time-step simulations, and cellular automata [Marelli et al. 1998].
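The core of such a finite time-step loop can be sketched in a few lines. The sketch below is deliberately minimal (single aisle, one aisle cell per row, a fixed stowage delay, no reseating penalties); all parameters are illustrative, not the values used in our trials:

```python
def simulate_boarding(target_rows, stow_ticks=3):
    """Minimal time-step boarding loop. target_rows lists each
    passenger's assigned row (row 0 at the door), in boarding order.
    Each tick a passenger advances one row if the cell ahead is free,
    or blocks the aisle for stow_ticks ticks at the assigned row
    before sitting. Returns the number of ticks to seat everyone."""
    n_rows = max(target_rows) + 1
    aisle = [None] * n_rows        # aisle[i] = (target_row, stow_left)
    queue = list(target_rows)      # passengers waiting at the door
    seated, t = 0, 0
    while seated < len(target_rows):
        t += 1
        for i in range(n_rows - 1, -1, -1):  # back to front, so moves cascade
            p = aisle[i]
            if p is None:
                continue
            row, left = p
            if row == i:                     # at assigned row: stow, then sit
                if left > 1:
                    aisle[i] = (row, left - 1)
                else:
                    aisle[i] = None
                    seated += 1
            elif i + 1 < n_rows and aisle[i + 1] is None:
                aisle[i + 1], aisle[i] = p, None  # step one row toward the back
        if queue and aisle[0] is None:       # next passenger enters the plane
            aisle[0] = (queue.pop(0), stow_ticks)
    return t
```

Even this toy loop reproduces the aisle-as-pipeline effect: boarding orders that spread consecutive passengers over many rows keep more aisle cells busy per tick.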
+ +# Process + +Our simulation model runs through time in small intervals. At each interval, it moves each participant in the simulation according to rules defined by the + +input parameters. Certain events take extra time and create blocks for other participants in the model. For example, a passenger stowing luggage blocks the aisle for a certain amount of time. + +# Plane + +The plane is a variably-sized rectangular grid of seats with a single aisle for passenger movement in the center of the columns and a single door for entry at the beginning of the aisle. The space between rows (pitch in industry terms) and between columns is adjustable. + +# Behavior Modeling + +Passengers can board with either assigned or unassigned seats. + +- Assigned seats: Passengers move to their seats as fast as walking speed allows, waiting as necessary for obstacles to clear. They make no mistakes in moving to their assigned seats. +- Unassigned seats: Passengers walk as far back as possible before sitting. If the aisle is blocked, they sit in the current row to avoid waiting standing up. When they sit, they are generous and move all the way toward the window to save future passengers' time. + +A passenger has an associated seating delay time for moving into their row, which corresponds to the time to stow luggage, wait for already-seated passengers to move out of the way, move in, and get settled. The seating delay rises as more people sit in the row, reflecting decreasing space in the overhead compartment and accompanying longer time to find space for a bag. + +When a person stowing luggage blocks the aisle and someone else comes up behind them, there is a certain pass percentage representing the chance that the blocked person can pass by and proceed to a seat farther along. + +# Parameters + +The simulation is run with a passenger input rate, affected by the gate check-in speed, that is, how fast passengers are processed in the terminal. 
Passengers have a constant walking speed when not blocked.

With assigned seats, passengers are typically called in groups, with each group some approximately contiguous segment of seats. The group size is variable, and passengers within each group are randomly ordered. Groups themselves satisfy certain seating assignment schemes, for example, ordering groups back to front.

# Parameter Estimation

For our simulation trials, we use the following default values and distributions. Estimated values reflect our own judgment; parameters dependent on the plane size will be specified later. Times are in seconds.

- Walking Speed $= 140\ \mathrm{cm/s}$. This varies based (at least) on the age and gender distribution of the passengers. We used the FAA evacuation simulation requirements that call for a simulated plane's population to be at least 40% female, at least 35% over age 50, and at least 15% both. Our average distribution is balanced male and female with 40% over age 50. The average comfortable walking speeds based on age and gender are from Bohannon [1997].

Affected by: Passenger demographics, aisle width, ceiling height, number and size of bags per person.

- Seating Delay $= U[10,20] + P_{c} + P_{r}$. The seating delay is uniformly distributed and includes the compartment-filling penalty $P_{c}$ and the row-out-of-order penalty $P_{r}$.

Affected by: Other penalties, plus row spacing, luggage size and number, compartment size and layout, and passenger demographics.

- Compartment Penalty $= 3p$, where $p$ is the number of people already seated in your row.

Affected by: Size and layout of overhead compartment, luggage size and number.

- Pass Rate = 0.05 (an estimate).

Affected by: Aisle width, passenger demographics, luggage size and number.

- Row Out-of-Order Penalty $= 15p$, where $p$ is the number of people who must move to let you into your seat.

Affected by: Row spacing, passenger demographics, aisle width.
- Entry Delay = 5.0 (estimate). The time between successive passengers entering the plane.

Affected by: Check-in procedure, flight attendant behavior, luggage size and number, out-of-plane characteristics.

# Summary

The simulation model is configurable. We use it to test different strategies and measure the effects of certain changes on the process. We can model

- different types and sizes of planes with varying interior configurations (aisle width, seat spacing, overhead compartment size);
- passengers with and without assigned seats in many arrangements and zone groupings;
- the effects of luggage count, luggage size, compartment size, and stowing speed; and
- the effects of the gate check-in process speed.

# Deriving a New Scheme

Random boarding with unassigned seats tends to be fastest [Finney 2006]. Despite this fact, many airlines do not adopt it because it often leads to low customer satisfaction. We derive a new seating method inspired by the seating patterns of passengers in a random assignment-less environment.

From our earlier analysis, the best strategy would move passengers as far back as possible and also ensure that passengers boarding within a block are spread out over several rows. We use this intuition to develop heuristics.

Seats that are always the first ones to be filled are assigned to the first zone. The next group of seats to be filled is assigned to the second zone, and so on (Figure 4). This zone assignment gives a boarding scheme for passengers with assigned seats. Since in the simulation these passengers had minimal interference with one another, we expect similar results even with shuffling within zones.

We observe that the zones returned by our learning algorithm resemble parabolas; hence we define the zones as seats highlighted by different parabolas, centered near the far end of the plane and the center of the rows, superimposed on the seating chart.
The parabolas get steeper for higher-numbered zones, since we are boarding aisles at that time. + +![](images/f4d7c588afb1ec4fd5f99e1bc2fc4f32e26edab70defa791ff7819648edfe24a.jpg) +Figure 4. Results of no-assignment seating simulation. Blue-green (light) are passengers seated first and orange-red (dark) are passengers seated last. Traces of equal height take the shape of parabolas. + +We wrote a computer program to compute these parabolas for planes of arbitrary size. We refer to this method of assigned seat grouping as the Parabola boarding method. + +# Relative Effect of Parameters + +We vary input parameter values to determine their impact. We perform these simulations using the default parameters from above and the plane layout of a Boeing 757-200 (39 rows, 6 columns). + +# Walking Speed + +We analyze the effect of passenger walking speed in Figure 5. We vary it from the approximate comfortable walking speed of a 70-year-old female to the approximate maximum walking speed of a 70-year-old male [Bohannon 1997]. Boarding time is not always lowered by increasing walking speed (except in the back-to-front scheme). This fact reflects our key insight from queuing theory analysis that the entry rate is a more critical bottleneck. Ensuring high walking speed is not critical. + +# Luggage Stowage Time + +We analyze the effect of changing the luggage stowage time in Figure 6. Specifically, we change the average value of the uniform distribution from which we select stowage time. This value has a large effect on the boarding time, following our insight that keeping the aisle full or "pipelined" is important: If we slow the process at this pipeline, performance suffers. Ensuring quick luggage stowage is critical. + +# Plane Entrance Rate + +We analyze the effect of changing the plane entrance rate in Figure 7. Increasing the delay between successive entries (that is, lowering the rate of incoming passengers) increases the time to board. 
At a certain large value, all seat assignment methods become equivalent, presumably because no bottlenecks form—since passengers enter so slowly (effectively, each passenger enters independently, one at a time, without conflicts), queueing and overflow effects do not emerge. Ensuring adequate plane entrance speed is critical. + +# Intra-Row Movement Time + +In Figure 8, we look at the effects of changing the time to shuffle in and out of a row to let in a fellow passenger. Increasing the row movement time raises the boarding time marginally for back-to-front and alternating rows but not for the other algorithms. However, this is to be expected: The other methods are designed to avoid row conflicts, with passengers almost always arriving in outside-in order. So decreasing row movement time is not critical, particularly because we can avoid its effects with certain algorithms. + +![](images/9e0767dd20586e0fdbe666a179b9fcdff9a8c666fbc46961bd39e363a04eb18c.jpg) +Figure 5. Boarding time as a function of walking speed $v$ . + +![](images/56539cac1c04c8310a955f7cd1e782fbdeb2db56732a961f4d0e318877074ace.jpg) +Figure 6. Boarding time as a function of luggage stowage time $B$ . + +![](images/968953c8ad4b2fd662184a75b68bb6aa50b4fce4c295acbd10c1ef41a58c1c68.jpg) +Figure 7. Boarding time as a function of plane entrance rate $\lambda$ . + +![](images/fd939486dc9388efe6d4d30add15cb746d61e7d0b1d8d66956c163e9facdc682.jpg) +Figure 8. Boarding time as a function of intra-row conflicts. + +# Summary + +We have analyzed the relative impact of the parameters of our model for a representative airplane. Two factors are of key importance: average luggage stowage time and plane entrance rate. 
Ranking the various strategies in light of their performance on these factors produces the ordering:

- Outstanding: No assigned seats
- Meritorious: Outside-in, reverse-pyramid, parabola
- Honorable Mention: Alternating rows
- Limited Success: Back-to-front

Some insight into this ordering comes from comparing average seating order after mixing within groups vs. without seat assignments (Figure 9). Reverse-pyramid and parabola most closely approximate the order achieved by the fast no-assignments method. We conclude that outside-in captures most of the key benefits, since in general it is as fast as the other two while being a less-close approximation of the random greedy model.

![](images/3b5939b5f1c44eab2f943e7c73f06f42430660f57302fbf7e9398b76e2b74f12.jpg)
Figure 9. Average seating order with group mixing.

# Strategy Robustness and Dependability

Average boarding speed is not the only measure of success. A fast boarding method does no good if once a week it takes twice as long as its average; airlines need to depend on a consistent time to produce achievable and reliable schedules. Therefore, we prefer boarding methods that vary little between worst and best cases. We simulated boarding times for our various schemes over 500 trials, with results in Table 1.

Table 1. Time range for each boarding method.

| Algorithm | Time Range (min) |
| --- | --- |
| No assigned seats | 0.7 |
| Outside-in | 2.6 |
| Parabola | 2.8 |
| Reverse-pyramid | 3.1 |
| Alternating rows | 4.6 |
| Back-to-front | 6.2 |

The smallest spread between longest and shortest load times is for no assigned seats. Interestingly, there is a direct correlation between time to board and variability in boarding time. The outside-in, reverse-pyramid, and parabola methods have similar boarding times and distributions. Similarly, back-to-front and alternating rows take the longest and have the largest spread. To some extent, these results suggest that a faster boarding algorithm is also more dependable; however, this may not be true in all cases.

# Model Generalization

Our simulation assumes that the plane is boarded from one end of the seating area, with passengers walking down aisles at the center of the rows. But some planes have several aisles or passengers boarding on different levels. Our model can easily be generalized to accommodate different plane designs and layouts. We divide the problem into sections; each section is modeled as its own plane, with entry rates changed appropriately. Depending on whether the sections board serially or in parallel, the times are added or compared (and the maximum taken).

# Specific Results

We apply our model to various real-world planes of different sizes to compare the speed of the boarding processes. We let outside-in represent outside-in, reverse-pyramid, and parabola, which are similar in timing. We apply our generalization techniques to model multi-aisle, multi-class, and multi-level planes with given configurations [Airbus ... 2007, Boeing ... 2007] (Table 2).

The results support our previous conclusions. In several cases, back-to-front is quite close to outside-in, perhaps because the plane entry rate was not high enough.

Table 2. Simulation boarding times (min) for multi-aisle, multi-class, and multi-level planes.
| Plane | Passengers | Unassigned | Back-to-Front | Outside-In |
| --- | --- | --- | --- | --- |
| DC 9-40 | 125 | 12 | 17 | 14 |
| Airbus A320 | 164 | 14 | 21 | 17 |
| Boeing 757-200 | 234 | 20 | 26 | 23 |
| Boeing 747-400 | 313 | 31 | 31 | 32 |
| Airbus A380 | 555 | 35 | 35 | 36 |
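The serial/parallel composition rule used for multi-section planes can be written down directly. A small sketch (the section structure and times below are illustrative, not taken from our trials):

```python
def total_time(plan):
    """Combine per-section boarding times: sections boarded one after
    another add their times; sections boarded simultaneously contribute
    their maximum. A plan is either a number (one section's time) or a
    tuple ('serial' | 'parallel', [subplans])."""
    if isinstance(plan, (int, float)):
        return plan
    kind, parts = plan
    times = [total_time(p) for p in parts]
    return sum(times) if kind == 'serial' else max(times)

# e.g. two decks boarded in parallel, each deck's two cabins in series
plan = ('parallel', [('serial', [8, 9]), ('serial', [7, 7])])
print(total_time(plan))  # 17
```

Nesting these tuples reproduces any serial/parallel decomposition of a plane into sections.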
# Conclusion

While our approaches and models are effective and produce results, there remain several model weaknesses:

- We assume independent, perfect-knowledge, infallible passengers who always put their luggage directly above themselves, as well as other perfect scenarios (planes of equally-sized rows, jet-bridges of constant flow instead of stairs or buses that bring passengers to the plane).
- There are several areas of the problem that we left untested because they seemed to be of secondary importance, such as varying the number of zones.
- Our comparison of boarding algorithms is simulation-based and therefore by nature not exhaustive. There may be better algorithms that we did not test, such as single-zone random boarding or rotating row-group zones.
- We stay within the current boarding paradigm so as not to produce too much uncomfortable change for passengers. However, greater improvements might be obtained if a wider range of choices were available; simple examples might be assigning passengers only to a row and letting them choose a seat there, or hiding money under one seat to encourage speedy boarding.

Overall, we believe the strengths inherent in our approach overcome many of the weaknesses and allow us to make useful recommendations:

- Our multilayered approach produced key insights.
- Our simulation model can be extended to test new algorithms and situations with minimal changes.
- We provide a relative ranking of factors affecting boarding speed, not just a ranked list of algorithms. An airline can still make improvements if it doesn't want to switch its process, or if it already has a fast process.

# Summary

Our key observations are:

- The aisle is the main bottleneck, especially near the entrance, and it is necessary to "pipeline" passengers to maintain a high throughput.
- The rate of passengers entering is also critical, since it determines the maximum rate at which passengers can proceed down the aisle and be seated.
- Sending in passengers with closely-situated seating assignments in short time intervals results in numerous interferences and increases boarding time. Instead, passengers should enter by zones that distribute seats over several rows.

Our simulations confirm these insights and show that boarding schedules that follow these rules perform better in terms of both speed and reliability.

We offer the following recommendations to airlines to improve their boarding time, turnaround time, and ultimately their bottom line:

- Passenger entry speed: The faster passengers enter the plane, the faster it boards. This means that ticket-checking should be as quick as possible, hence staffed with enough gate agents. Flight attendants should be stationed at critical junctions (such as entrances to aisles in a multi-aisle plane) to direct passengers to the correct rows.
- Luggage stowage time: The faster passengers stow their bags and sit down, the faster the plane boards. Stowage time can be reduced by changing or enforcing carry-on luggage limits and by having flight attendants assist passengers with large or heavy bags.
- Switch from back-to-front to another boarding method. Outside-in boarding provides a $10\% - 30\%$ advantage over back-to-front. Forgoing assigned seats results in a further $10\% - 30\%$ advantage over outside-in. Faster methods are also considerably more reliable: Outside-in has a time range $50\%$ smaller than back-to-front.

# References

Airbus aircraft families. 2007. http://www.airbus.com/en/aircraftfamilies. Accessed 11 February 2007.
Boeing: Commercial airplanes, products. 2007. http://www.boeing.com/commercial/products.html. Accessed 11 February 2007.
Bohannon, R.W. 1997. Comfortable and maximum walking speed of adults aged 20-79 years: reference values and determinants.
Age and Ageing 26: 15-19.
Finney, Paul Burnham. 2006. Loading an airliner is rocket science. New York Times (14 November 2006). http://travel2.nytimes.com/2006/11/14/business/14boarding.html. Accessed 10 February 2007.
Jackson, J.R. 1957. Networks of waiting lines. Operations Research 5: 518-527.
Kiwi, M. 2006. A concentration bound for the longest increasing subsequence of a randomly chosen involution. Discrete Applied Mathematics 154 (13): 1816-1823.
Marelli, Scott, Gregory Mattocks, and Remick Merry. 1998. The role of computer simulation in reducing airplane turn time. Aero Magazine (4th Quarter 1998). http://www.boeing.com/commercial/aeromagazine/aero_01/textonly/t01txt.html.
Snakes on a Plane. 2007. http://www.imdb.com/title/tt0417148/. Accessed 11 February 2007.
Trivedi, K.S. 2002. Probability and Statistics with Reliability, Queuing and Computer Science Applications. 2nd ed. New York: Wiley-Interscience.
van den Briel, Menkes H.L., J. René Villalobos, Gary L. Hogg, Tim Lindemann, and Anthony V. Mule. 2005. America West Airlines develops efficient boarding strategies. Interfaces 35 (3) (May-June 2005): 191-200.

![](images/c2f0f279e91d099c8481737cfaba75ca4099b4c512248579b79ade69afb7ea2d.jpg)

Kshipra Bhawalkar, Matthew Edwards, Michael Bauer, and advisor Anne Catlla.

# Novel Approaches to Airplane Boarding

Qianwei Li
Arnav Mehta
Aaron Wise
Duke University
Durham, NC

Advisor: Owen Astrachan

# Summary

Prolonged boarding not only degrades customers' perceptions of quality but also affects total airplane turnaround time and therefore airline efficiency [Van Landeghem 2002].

The typical airline uses a zone system, where passengers board the plane from back to front in several groups. The efficiency of the zone system has come into question with the introduction and success of the open-seating policy of Southwest Airlines.

We use a stochastic agent-based simulation of boarding to explore novel boarding techniques.
Our model organizes the aircraft into discrete units called "processors." Each processor corresponds to a physical row of the aircraft. Passengers enter the plane and are moved through the aircraft based on the functionality of these processors. During each cycle of the simulation, each row (processor) can execute a single operation. Operations accomplish functions such as moving passengers to the next row, stowing luggage, or seating passengers. The processor model tells us, from an initial ordering of passengers in a queue, how long the plane will take to board, and produces a grid detailing the chronology of passenger seating.

We extend our processor model with a genetic algorithm to search the space of passenger configurations for innovative and effective patterns. This algorithm employs the biological techniques of mutation and crossover to seek locally optimal solutions.

We also integrate a Markov-chain model of passenger preference into our processor model, to produce a simulation of Southwest-style boarding, where seats are not assigned but are chosen by individuals based on environmental constraints (such as seat availability).

We validate our model with tests of both robustness and sensitivity. Our model makes predictions that correlate well with empirical evidence.

We simulate many different a priori configurations, such as back-to-front, window-to-aisle, and alternate half-rows. When normalized to a random boarding sequence, window-to-aisle, the best-performing pattern, improves efficiency by $36\%$ on average. Even more surprising, zone boarding, the most common technique, performs worse than random.

We recommend a hybrid boarding process: a combination of window-to-aisle and alternate half-rows. This technique is a three-zone process, like window-to-aisle, but it allows family units to board first, simultaneously with window-seat passengers.

# Survey of Previous Research

# Discrete Random Process

Bachmat et al.
[2006] propose a discrete boarding process in which passengers are assigned seats before boarding. The inputs to the process are an index for the position of each passenger in the queue and a seat assignment for each passenger. Additionally, the researchers define the aisle space that each passenger occupies, the time it takes to clear the aisle once the designated row is reached, and the distance between consecutive rows. The first two parameters are sampled from distributions defined by the researchers.

The model considers the travel path of each passenger. The passenger moves down the aisle until reaching an obstacle, which is either the back of a queue or a person who is preparing to sit in their row. Passengers who arrive at their row clear the aisle after a delay time; after that, the passengers behind continue down the aisle.

The researchers define an ordering relation between passengers. Each passenger is assigned a pointer to the last passenger who blocked their path. By following the trail of passengers, the longest chain in the ordering ending at any particular passenger can be identified. This chain specifies the number of rounds needed for the simulation.

# Other Simulation Studies

Van Landeghem [2002] simulates different patterns of boarding sequences, based on a plane with 132 seats divided into 23 rows, with Rows 1 and 23 having 3 seats and the others having 6. The first objective is to reduce total boarding time; the second objective is to augment the quality perception of the passengers by evaluating the average and maximum individual boarding times as seen by the passengers. Van Landeghem simulates calling passengers to board at random or by block (contiguous full rows), half-block (contiguous rows, port or starboard halves only), row, half-row, or individual seat. The shortest boarding time is by seat (in a particular order).
+ +# Model Overview + +We present a simulation model that can be considered a stochastic agent-based approach. + +We treat the plane as a line, with destinations (seats) at regular distances along the line. Each passenger, modeled as an agent, moves along the line until reaching the assigned seat. Each agent has a speed constrained by the slowest person in front. + +This model takes into account the topology of the airplane. Each row is a discrete unit. We call these units processors, since they determine the rate that an individual moves through the system. Each processor has a queue, a list of people waiting to be processed by it (and hence moved to the next node of the system). Each agent has a destination processor, the row of the assigned seat. + +We consider two major collision parameters. A scenario where a passenger is waiting for another passenger to stow baggage is a baggage collision. We also model seat collisions: when a passenger is sitting between another passenger and that passenger's seat (for example, the passenger with an assigned window seat must move around a passenger in an aisle seat). + +We attempt to optimize boarding time based on the order in which passengers enter the plane, via a genetic algorithm over the search space of all possible orderings. Crossovers and mutations are defined so that no seats are "lost." + +Our final model includes a Markov chain to model passenger preferences in an open seating environment. This model simulates a boarding process like the one used by Southwest Airlines. + +# Details of the Model + +# Basic Model + +We use a compartmental model, calling the compartments "processors," each physically analogous to the space of one row. Differing layouts of processors can model varied plane topologies. + +Each passenger is randomly assigned a seat. These seats are not necessarily unique; they are uniformly drawn from all seats on the plane. 
A seat is represented as a coordinate pair $(c, r)$ , where $r$ is a row and $c$ is a seat number. + +Passengers move based on the function of the processors. The processors are in series, with each processor having the next as an output (Figure 1). + +Since movement is performed by processors pushing passengers from one row to the next, each passenger stores only that passenger's destination. A passenger who reaches a processor waits in a first-in/first-out queue to be + +![](images/da63fd6d828e6122f885a3b689feeb308e9f2660915d81ed142267c8520d0b4a.jpg) +Figure 1. Processor-based model. + +processed (people cannot move around one another while in the aisle). The initial state of the plane is that all passengers are queued at the first processor. + +In each iteration, each processor looks at the destination of the passenger and performs one of the functions: + +- Pass. A passenger who passes moves from the current processor to the end of the queue of the next processor. +- Fumble. With a certain small probability, the processor will do nothing this cycle (a bag gets caught in the aisle, a passenger trips, or some other time-wasting random event occurs). A fumble is not equivalent to time spent stowing baggage or rearranging passengers; our basic model accounts only for random time-wasting events. +- Sit Down. If this row contains the assigned seat for the passenger currently in the processor, the passenger leaves the aisle and is seated. +- Idle. If there is no passenger in the processor (and the queue is empty), the processor does nothing. + +The processors run sequentially from back to front until every passenger is seated. + +# Assumptions + +- The initial configuration is that all passengers are queued at the first row. In actuality, all passengers are initially queued at the ticket counter, where their boarding passes are scanned and they walk a short distance to the plane. 
Hence, a more realistic alternative would be a Poisson arrival process from the ticket counter to the queue for the first row. However, this additional process is unnecessary because of the high speed at which boarding passes are scanned, which approximates the speed of normal walking. Hence, passengers reach the queue at a much higher rate than they are moved forward through the plane; the queue at the first processor forms instantly when the first passengers walk into the plane.

- There is no idle time between the first passenger entering the first queue and the last passenger doing so. The airline could wait until there is no queue left before calling additional passengers to board. However, doing this is never to the airline's advantage.
- Special-needs and business-class passengers have already boarded; airlines have an obligation to these customers for early boarding. We start our simulation clock after these special classes of passenger have already boarded and deal only with the bulk passenger class.
- Every passenger functions individually. We expect improved efficiency when passengers travel in groups, since groups are self-organizing (the individuals in a group do not collide with one another).

# Extended Model

# Seat assignment

The initial model assigns seats randomly and without uniqueness. We remedy this by enforcing a one-to-one correspondence between passengers and seats.

# Assumptions

- The plane is fully booked and every seat is occupied. This assumption allows us to optimize over the worst-case scenario.

# Seat collisions

A common occurrence is a passenger needing to cross over a seated passenger. To account for such a seat collision, we implement a new processor function:

- Rearrange. This cycle is spent waiting for the aisle to clear after the seat collision. This operation reduces the seat collision counter by 1.

A seat collision has an associated time penalty that depends on the type of collision.
When there is a seat collision, the processor for that row spends a number of cycles equal to the time-cost sorting out the collision. During that time, no other passengers can enter the processor (though they can enter the processor queue).

We determined values for the seat collision costs by physical experimentation involving multiple trials over a simulated plane row. All seat collisions have the same time cost, except that the penalty for crossing over two passengers is about $50\%$ more than for crossing over a single passenger. We expect the variation among passengers to be small.

# Luggage

A major factor in boarding times is stowing hand luggage. As the overhead bins fill, it takes longer to stow a bag. Hence, we developed a statistical model of luggage stowing. Luggage stowing is performed by the processor at a given row using the command:

- Stow. This cycle is spent by a passenger stowing a bag in the overhead bin. The stowing counter is decreased by 1.

For luggage stowage times, we use a Weibull distribution because of its flexibility in shape and scale. The probability density function is

$$
f (x, \kappa , \lambda) = \frac {\kappa}{\lambda} \left(\frac {x}{\lambda}\right) ^ {\kappa - 1} e ^ {- \left(\frac {x}{\lambda}\right) ^ {\kappa}},
$$

where $\lambda$ is a scaling parameter, $\kappa$ is a shape parameter, and $x$ is the number of people who have entered the plane. Its cumulative distribution function

$$
F \left(x, \kappa , \lambda\right) = 1 - e ^ {- \left(\frac {x}{\lambda}\right) ^ {\kappa}}
$$

is a measure of the additional time to stow hand luggage as the plane fills up. The waiting time of passenger $x$ is

$$
\lceil c F (x, \kappa , \lambda) + N \rceil ,
$$

where $c$ is a measure of the additional time to store baggage when the plane is full and $N$ is a Gaussian noise parameter that accounts for the nonuniformity of the boarding process.
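The waiting-time formula can be sampled directly; in the sketch below, the shape $\kappa$, scale $\lambda$, full-plane cost $c$, and noise level are illustrative values chosen for the example, not fitted parameters:

```python
import math
import random

def stow_wait(x, kappa=2.0, lam=100.0, c=6.0, noise_sd=1.0):
    """Waiting time (in cycles) for passenger number x to stow a bag:
    ceil(c * F(x; kappa, lam) + N), clamped at zero. F is the Weibull
    CDF, so the wait grows from ~0 toward c as the plane fills."""
    F = 1.0 - math.exp(-((x / lam) ** kappa))  # Weibull CDF at passenger x
    N = random.gauss(0.0, noise_sd)            # Gaussian noise term
    return max(0, math.ceil(c * F + N))

random.seed(0)
waits = [stow_wait(x) for x in range(1, 181)]  # a 180-passenger plane
```

Early passengers see waits near zero; late passengers see waits near $c$ plus noise.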

# Queue Size

The initial model assumes that each processor has an unlimited queue. This makes sense for the initial processor, whose queue consists of all passengers lined up along the loading ramp. However, for a processor inside the aircraft, the queue actually takes up physical space. We cap all processor queues but the first at a length of 2, which corresponds well with the ratio of aisle length to passenger size.

# Multiple Aisles

We model multi-aisle planes as processor sets with multiple pipelines. Using this technique, planes of arbitrary sizes, topologies, and entrance points can be modeled. We describe here the technique for modeling a double-aisle plane, such as the Boeing 777.

As in the single-aisle model, all passengers are initially queued at a single processor. For a double-aisle plane, this processor represents the junction point at the entry of the plane. No passengers are assigned seats in this row. From the first processor, a passenger who is passed along may move to either of two different processors. Each of these two processors begins a serial chain of processors akin to a single-aisle plane. Each passenger chooses an aisle based on seat assignment. As in real aircraft, certain rows of the plane are widened so that a passenger can move from one aisle to the other.

Some passengers (for example, those in the middle of a row) have seats equidistant from the two aisles; they take the first available aisle and can switch aisles at junction points.

This procedure can be generalized to four-aisle aircraft as well, such as the forthcoming Airbus A380. In that aircraft, not all aisles connect: A passenger cannot move across from an upstairs aisle to a downstairs one.

We can also simulate a plane with the gate in the middle, or with two or more gates, by changing the configuration of processors. Thus, our procedure can be used to simulate any plane. 
+ +# Assumptions + +- All passengers choose the correct aisle, as usually happens, since a steward is positioned at the junction point (i.e., the first processor) to direct traffic. To make this choice easier, the airline could have color-coded boarding passes. +- Only passengers with middle-seat assignments switch aisles. + +# Deplaning + +Our processor-based model can handle deplaning. The processors are reversed: They push passengers from the back of the plane towards the front. Time spent retrieving baggage follows an opposite distribution from the base model: The first passengers must spend more time retrieving luggage than later passengers. Furthermore, there are no seat collisions; everybody clears out of the plane in order. The destination of all passengers during deplaning is the front of the plane. + +# Assumptions + +- Deplaning is uncoordinated. Though some variant of aisle-to-window deplaning is likely the fastest, we believe that any coordinated deplaning method would greatly decrease customer satisfaction. For example, an aisle-to-window deplaning process would cause window seat passengers near the front of the plane to have to wait for virtually the entire plane to disembark. In any case, it is impossible to control the movement of the passengers. + +# Genetic Algorithm + +We used our model above to find the average times taken by various boarding techniques, including back-to-front and window-to-aisle. However, such known orderings may not be optimal. + +Since the set of all possible orderings is vast, we need a heuristic to explore parts of the space that interest us the most. This heuristic, if it converges, gives an optimum that—while unlikely to be a global optimum—will be a strong local optimum. + +To perform this search, we implement a genetic algorithm, a type of search algorithm that derives the principles of its functioning from evolutionary biology. A genetic algorithm models a solution as a set of "organisms." 
In our case, an organism is one possible arrangement of passengers in line waiting to board the plane. The algorithm begins with a set of organisms called the population. Each organism in the population is run through our processor model, and, based on the time that it takes for all passengers to be seated, given a "fitness" score. + +Based on the scores, some organisms are selected to survive, while others die. Organisms with the highest score have the highest survival probability. Organisms that survive are kept in the population, and the others are deleted. The population is replenished by the addition of new organisms. New organisms are either offspring of two surviving organisms from the previous round or randomly generated. The algorithm runs for a set number of generations, at which point it returns the best organism remaining in the pool. + +The core of a genetic algorithm is the evolution of the population over time. Over a significant number of generations (for our model, around 60), the algorithm converges. The convergence is a local maximum; the point of convergence is dependent on the initial random population of individuals. The point of convergence is reached using the properties of mutation and crossover. + +# Mutation and Crossover + +Mutation is the process by which an organism changes from one generation to the next. A crossover is the genetic offspring of two individual organisms. We account for both types of evolution in our model. + +We first must consider what the genome or "DNA" of our organisms looks like. An organism is a listing of passengers and seats in order (see Table 1). + +Mutations are relatively simple. During a mutation, a random, sequential section of the DNA is chosen and moved to a different location. A mutation of the initial DNA could look like the bottom row of Table 1, which permutes the seat assignments among the passengers. + +Crossovers are more complicated. 
A special property of our solution space is the one-to-one correspondence between passengers and seats. This means that the order of seat numbers in the DNA can be switched, but the seat numbers must stay the same. In normal DNA, a sequential piece of one organism's DNA is exchanged with the corresponding sequence of the other organism. Due to + +# Table 1. + +# Example of mutation. + +Original organism + +
| Passenger   | 1   | 2   | 3  | 4   | 5  |
|-------------|-----|-----|----|-----|----|
| Seat number | 22A | 23C | 7A | 30F | 2B |
+ +# Mutated organism + +
| Passenger   | 1  | 2   | 3   | 4   | 5  |
|-------------|----|-----|-----|-----|----|
| Seat number | 7A | 30F | 22A | 23C | 2B |
+ +the one-to-one correspondence property of our data, we cannot use this type of crossover: If the two sequences chosen did not have the same set of seats, our offspring would not have a valid genetic code. + +Hence, we formulate a new form of crossover that preserves the elements of a DNA code but changes its order (Figure 2). + +![](images/50bb293060d517a380db71ddd441f8aaa472150d7b79a9efb087c67c517eb79c.jpg) +Figure 2. Processor-based model. + +The crossover algorithm first chooses a sequence of seats from the genome of the first organism. It then identifies the indices of these seats in the second organism. The genomes of the two organisms are rearranged such that the ordering of the selected seats is switched between the two organisms, while all other seat assignments remain the same. In the example in the figure, the seat sequence (3...4) is selected as the crossover. The indices of (3...4) are 3 and 4 in the first organism and 1 and 5 in the second. After the crossover, the indices of (3...4) are 1 and 5 in the first organism and 3 and 4 in the second. The order of all the other seats remains the same, but their indices are shifted due to the change in location of 3 and 4. + +# Population Seeding + +We ran the genetic algorithm in two configurations, in the first determining the initial population randomly and in the second "seeding" it (adding nonrandom organisms). For seeding, we added two examples of each of the tested types of boarding configuration (e.g., window-to-aisle and back-to-front). Seeding helps the algorithm approach the global maximum, since the beginning population then contains individuals that have high fitness. + +# The Southwest Model + +# Model Overview + +In the Southwest system, passengers board in order of arrival with no assigned seats. 

In our model, seat preferences are encapsulated in a matrix

$$
\mathbf {B} = \left[ \begin{array}{c c c c} b _ {1} & b _ {2} & \ldots & b _ {6} \end{array} \right],
$$

which represents the spatial arrangement of seats in each row: Elements $b_{1}$ and $b_{6}$ represent the relative preferences for window seats, elements $b_{2}$ and $b_{5}$ those for middle seats, and elements $b_{3}$ and $b_{4}$ those for aisle seats.

The passenger's desire to sit at a given row, to move forward, or to move backward is encapsulated in a transition matrix,

$$
\mathbf {P} = \left[ \begin{array}{c c c c} a _ {1, 1} & a _ {1, 2} & \ldots & a _ {1, 3 0} \\ a _ {2, 1} & a _ {2, 2} & \ldots & a _ {2, 3 0} \\ \vdots & \vdots & \ddots & \vdots \\ a _ {3 0, 1} & a _ {3 0, 2} & \ldots & a _ {3 0, 3 0} \end{array} \right],
$$

where $\mathbf{P}$ satisfies

$$
0 \leq \mathbf {P} _ {i, j} \leq 1, \quad 1 \leq i, j \leq N;
$$

$$
\sum_ {j = 1} ^ {N} \mathbf {P} _ {i, j} = 1, \quad 1 \leq i \leq N.
$$

Element $a_{i,i}$ represents the passenger's desire to sit in row $i$ . Element $a_{i,i+1}$ represents the passenger's desire to move forward to row $(i+1)$ . Element $a_{i,i-1}$ represents the passenger's desire to move back to row $(i-1)$ . Although a passenger may prefer to move back a row, that is not possible in our model.

Since $\mathbf{P}$ is an irreducible aperiodic transition matrix, Markov-chain theory tells us that there is a stationary distribution $\bar{\pi}$ , which gives the probability that a passenger will end up at a particular row.

The model incorporates each passenger's decision-making, the passenger's location within the plane, and environmental constraints. In deciding whether to move forward or to sit at the current row, a passenger first considers the current location. If the passenger is at the end of the plane, there is no option but to sit in the last row. 
If the number of available seats in front of the passenger's current position exceeds the number of people ahead of the passenger, then the passenger can move forward to the next row; if not, the passenger has to sit in the current row. A passenger cannot move backwards. + +As the passenger progresses, preferences need to be adjusted: + +- As the passenger moves forward, there are fewer rows to consider. + +- As the plane fills, certain rows no longer have available seats to consider. + +In both cases, preferences are redistributed so that the relative preferences between all remaining available rows remain the same. Similarly, when seats in a particular row are occupied, the passenger's preference for a particular seat in that row is readjusted so that the relative preferences for available seats remain the same and the sum of seat preferences across the row is 1. Therefore, the preferences for each standing passenger are recomputed each time a passenger finds a seat. + +When a passenger gets to a row, the decision of whether to sit is governed by a random process that favors the row according to the relative preference that passenger has for that particular row. + +After a passenger decides to sit at a given row, if the row contains more than one available seat, his choice of where to sit is governed by a random process that favors each seat according to the passenger's relative preference. + +From a macro perspective, each passenger makes the decision of where to sit autonomously. This decision is driven, however, by certain preferences and their corresponding probabilities that lend order to the seating sequence in the plane. In each cycle, the model recomputes the preferences of each passenger for each particular row and each seat. + +# Assumptions + +- The movement of passengers along the aisle of the plane is unidirectional. Additionally, passengers are aware of the number of people and available seats in front of them. 
They will not move forward unless the number of available seats exceeds the number of people in front of them.
- All passengers share a common propensity to sit at any given row or to move forward along the aisle. Because passengers prefer seats closer to the front of the aircraft, the desire to sit at any given row is greater than the desire to move forward.
- All passengers share a common preference for seats, favoring window over aisle and aisle over middle. Having a wall or empty space on one side makes a seat more appealing than being flanked by passengers on both sides; the window seat is most preferable because it offers a view and a place to rest one's head.
- The decision to sit in a particular row is independent of the decision to sit at a particular seat in that row. In most cases, passengers first decide on row preference and then decide which seat they prefer.
- When a row of seats is filled, the probability that a passenger sits in that particular row becomes zero. The probability previously attributed to that row is then redistributed proportionally among the unfilled rows according to the preference probability already attributed to them. This process ensures that the relative preferences of the unfilled rows remain the same.

# Boarding Patterns

Although our algorithm may be used to model planes of any size, we focus on a standard 180-person plane with 30 rows and 6 seats in every row.

# Random Boarding

This boarding process is used as a baseline for comparison to other models. The process involves the random assignment of seats to passengers in the boarding queue, followed by the boarding simulation.

# Window-to-Aisle Boarding

Window-to-aisle boarding involves filling up all the window seats, followed by the middle seats, and then the aisle seats. In Figure 3, black tiles represent the earliest passengers to enter the plane and white tiles represent later passengers. The darkness of each tile decreases with increasing passenger numbers in the boarding queue. 

![](images/dc87ac206728992981042fe03ee16630874b0340e1c0820fad76150807fbe33d.jpg)
Figure 3. Window-to-aisle boarding.

This "outside-in" method eliminates all seat collisions. The sequence of window-seat passengers is random; likewise, the orders of passengers with middle and aisle seats are each independently random. Thus, this boarding pattern still demonstrates significant baggage collisions from passengers interfering with one another's passage to their seat row.

# Alternating Half-Rows Boarding

The plane is split into two halves along the aisle and one half is filled before the other half starts boarding. Each half is filled by loading every third row starting from the back. Once we reach the front, the process is repeated from the second-to-last row, followed by the third-to-last row (Figure 4). The rows are filled in a random order, so there may be seat collisions. Each row must be filled before the next row can start loading. Once passengers in one half of the plane have all boarded, the second half begins boarding with the same process.

![](images/8d52d7bae406e0ba55db2ba4bf3c69b289c8483a18cfae5c3056fa2f5fb31f87.jpg)
Figure 4. Alternating half-rows boarding.

# Zone Boarding

In this boarding pattern, the plane is split into contiguous and evenly divided zones based on row number. The passengers in each zone are then randomly assigned to a seat in that zone. The zone farthest back in the plane boards first, followed by the next farthest, and so on till we reach the front of the plane (Figure 5). Passengers in a particular zone must board the plane before passengers in the next zone can begin boarding.

# Rotating-Zone Boarding

Rows are filled from back to front in an alternating fashion. Thus, the seats in the back row are filled first, followed by the seats in the front row, the seats in the second-to-last row, and so on till we reach the middle rows of the plane (Figure 6). 
The seats in each row are assigned randomly and the passengers of a row must board before the passengers for the next row board. + +![](images/a8499e348811a8d4914c9cdae3384a2e9dae017f45263290d4fac2a04eb7e72d.jpg) +Figure 5. Zone boarding. + +![](images/062ee034c40684fdf4026760d6a6d9c19f75038b3f76b410fd64224d1fc1dc82.jpg) +Figure 6. Rotating-zone boarding. + +# Results + +We evaluated the efficiency of each seating pattern by averaging 30 runs of each simulation with 35 trials per simulation on a 180-person plane (30 by 6). Each run used a randomly-generated seating arrangement within the constraints of the pattern. The waiting times were normalized to the average waiting time of a randomly-generated seating arrangement. The normalization value was derived from analysis of 50 different random patterns. + +Results are shown in Table 2. + +Window-to-aisle is the most efficient; it eliminates seat collisions but its randomized column arrangement allows baggage collisions. + +Table 2. Average normalized times for boarding patterns. + +
| Boarding pattern         | Time |
|--------------------------|------|
| Deplaning                | 0.48 |
| Window-to-aisle          | 0.64 |
| Seeded genetic algorithm | 0.67 |
| Alternate half-rows      | 0.73 |
| Genetic algorithm        | 0.81 |
| Random                   | 1    |
| Southwest model          | 1.09 |
| Back-to-front            | 1.10 |
| Rotating-zone            | 1.71 |
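The normalization behind Table 2 is simple: average the simulated boarding times for a pattern and divide by the average time under random boarding. A sketch (the numbers in the test values are made up, not the paper's data):

```python
def normalized_time(pattern_times, baseline_times):
    """Average boarding time for a pattern, divided by the average
    time of the random-boarding baseline (inputs are hypothetical)."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(pattern_times) / mean(baseline_times)
```

By construction, random boarding normalizes to 1, and smaller values are faster.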

Alternate half-rows minimizes spatial overlap between alternating groups of 3 passengers; any seat collision is not large enough (in a spatial sense) to extend to the following half-row. However, this localized congestion also explains why alternate half-rows is slower than window-to-aisle. It is possible that the time for this scenario is overstated, since three passengers walking to a half-row may self-organize.

Back-to-front, the most common boarding technique, performs surprisingly poorly—worse than random, due to local congestion propagating to waiting passengers.

Rotating-zone presents collisions of the same sort as alternate half-rows. However, while in half-row boarding there are potentially 6 collisions among 3 passengers, rotating-zone boarding allows for 15 collisions among 6 people.

Southwest boarding suffers because passengers share a preference for seats closest to the exit, which can increase queuing early in the plane, and for aisle seats over middle seats, which also increases seat collisions.

The genetic algorithm applied to a random seating arrangement reached a steady-state solution that is most likely a local minimum for that problem instance. We ran the simulation multiple times, and the results displayed similar properties.

The seeded genetic algorithm resulted in a hybrid between window-to-aisle and alternating half-row boarding. Window seats fill first, followed by the middle and then the aisle seats, while on one side of Figure 7 there are distinct alternating bands every third row. This hybrid boarding process (Figure 7) forms a strong local optimum, and the minimum it obtains is quasistable. We do not notice any influence from the rotating-zone or back-to-front boarding patterns, indicating that these populations were not as "fit" as the former two and were eliminated from the gene pool. 
This boarding pattern allows for small families to board together.

![](images/bfe910f5d33346c748dbd58e2320bd817db75342228bb6fc3ea107a1559b7c90.jpg)
Figure 7. Seeded genetic algorithm.

Deplaning is $25\%$ faster than window-to-aisle boarding and is consequently less useful to optimize.

# Sensitivity and Robustness Testing

The robustness of our model is a measure of how it performs in extreme cases; a robust model is one that does not break down in such cases. The sensitivity of our model is a measure of the effect of small parameter changes; a good model should show small changes in response to small parameter changes.

Our model is well behaved; that is, it does not exhibit chaotic behavior. Small changes in parameters produce small changes in results, indicating good sensitivity.

# Baggage

We eliminated the delay from stowing luggage, a key factor responsible for aisle collisions. We expected that window-to-aisle boarding would benefit more than alternate half-rows boarding. We observe a $26\%$ decrease in time for window-to-aisle and a $16\%$ decrease for alternating half-rows, consistent with our prediction.

# Seat collisions

We eliminated time delays due to seat collisions. We expected that random boarding would perform as well as window-to-aisle, since the primary contribution to delay time will be aisle collisions. Our simulation performs as expected, with only a $2.3\%$ difference in times.

# Queuing

We allowed an unlimited queue for each processor. We expected elimination of local congestion and increases in efficiency for zone boarding. Our simulation confirms that zone boarding is then $25\%$ more efficient than random boarding.

# Discussion and Conclusions

# Results

To identify better boarding techniques, we employed a simulation model based on a stochastic agent-based approach. We simulated boarding sequences with embedded stochastic variability, including aisle and row congestion. 

We find through simulation that window-to-aisle boarding is the most efficient. However, aisle congestion remains significant due to the random sequencing of passengers within the same boarding group. This in turn contributes to substantial delays due to the stowing of luggage.

Alternate half-row boarding is the next most efficient. Its speed derives from minimization of aisle congestion, despite seat collisions in each half-row.

We could both eliminate seat collisions and minimize aisle congestion by specifying the sequence of each passenger in the boarding queue; but such a method would not be practical, since it would require all passengers to arrive at the gate punctually and gate agents to spend time organizing passengers.

In general, seat collisions have relatively less impact near the end of boarding, because the time to stow luggage increases.

# Optimal Recommendation

We found a hybrid between alternate half-rows and window-to-aisle to be a locally optimal solution. We recommend hybrid boarding because it offers the versatility of both group and individual boarding. In this solution, the first boarding call is for families and window passengers. Since families self-organize, minimizing collisions, we expect hybrid boarding to be more efficient in practice than predicted by simulation.

# Strengths and Weaknesses

# Strengths

- Processor-based model has few input parameters, leading to good robustness and sensitivity.
- Genetic algorithm explores and optimizes known configurations.
- Variety of boarding patterns explored, including planned layouts, genetic optimization, and passenger preference.
- Accounts for all major factors involved in plane boarding.
- Simulates both boarding and deplaning processes.
- Uses a variety of modeling techniques in an integrated holistic model.

# Weaknesses

- Parameters have to be derived from physical measurements. 

- Genetic algorithm has high computational requirements and cannot identify a global optimum.
- Does not account for non-uniform preferences among passengers.

# Future Work

- Identify at which rows bottlenecks occur at any given time point.
- Investigate efficient deplaning algorithms.
- Better quantify passenger seating preferences.

# References

Bachmat, Eitan, Daniel Berend, Luba Sapir, Steven Skiena, and Natan Stolyarov. 2005. Analysis of airplane boarding times. http://www.cs.bgu.ac.il/~ebachmat/ORrevisionfinal.pdf.
Bachmat, Eitan, Daniel Berend, Luba Sapir, Steven Skiena, and Natan Stolyarov. 2006. Analysis of airplane boarding via space-time geometry and random matrix theory. Journal of Physics A: Mathematical and General 39 (29): L453-L459. http://www.cs.bgu.ac.il/~ebachmat/jpafinal-new.pdf.
Disney, R., and P. Kiessler. 1987. Traffic Processes in Queueing Networks: A Markov Renewal Approach. Baltimore, MD: Johns Hopkins University Press.
Finney, Paul Burnham. 2006. Loading an airliner is rocket science. New York Times (14 November 2006). http://travel2.nytimes.com/2006/11/14/business/14boarding.html.
Lawler, Gregory F. 2006. Introduction to Stochastic Processes. 2nd ed. Boca Raton, FL: Chapman and Hall / CRC.
van den Briel, Menkes H.L., J. Rene Villalobos, and Gary L. Hogg. 2003. The aircraft boarding problem. Proceedings of the 12th Industrial Engineering Research Conference (IERC), article number 2153. http://www.public.asu.edu/~dbvan1/papers/IERC2003MvandenBriel.pdf.
Van Landeghem, H. 2002. A simulation study of passenger boarding times in airplanes. http://citeseer.ist.psu.edu/535105.html.

# STAR: (Saving Time, Adding Revenues) Boarding/Deboarding Strategy

Bo Yuan

Jianfei Yin

Mafa Wang

National University of Defense Technology

Changsha, China

Advisor: Yi Wu

# Summary

Our goal is a strategy to minimize boarding/deboarding time.

- We develop a theoretical model to give a rough estimate of airplane boarding time, considering the main factors that may cause boarding delay. 

- We formulate a simulation model based on cellular automata and apply it to different sizes of aircraft. We conclude that outside-in is optimal among current boarding strategies, both in minimizing boarding time (23–27 min) and in simplicity of operation. Our simulation results agree well with theoretical estimates.
- We design a luggage distribution control strategy that assigns row numbers to passengers according to the amount of luggage that they carry onto the plane. Our simulation results show that the strategy can save about 3 min.
- We build a flexible deboarding simulation model and fashion a new inside-out deboarding strategy.
- A $95\%$ confidence interval for boarding time under our strategy has a half-width of 1 min.

We also do sensitivity analyses of the occupancy of the plane and of passengers taking the wrong seats, which show that our model is robust.

# Introduction

Airline boarding and deboarding has been studied extensively in the operations research literature. U.S. domestic carriers lose \$220 million per year in revenue from take-off delays [Funk 2003].

We examine strategies for boarding and deboarding planes with varying numbers of passengers, trying to minimize the boarding and deboarding time.

# Literature Review

Marelli et al. [1998] designed a computer program called PEDS (Passenger Enplaning/Deplaning Simulation) that used a probabilistic discrete-event simulation to simulate boarding methods. PEDS predicted that it would take 22 min to board a Boeing 747-200. However, the paper did not lay out the boarding procedure.

Van Landeghem [2000] stated that the fastest boarding strategy is individually boarding by seat and row number, and the second fastest is a back-to-front "alternate half-row" boarding system, which was cited to take 15.8 min. He also proposed strategies with small numbers of boarding groups that are both faster and more robust against disturbances. 
A problem with the data is that only five replications were done for each boarding procedure tested [Pan 2006].

Later, van den Briel et al. [2003] showed that a reverse-pyramid boarding strategy could reduce an airplane's turn time by 3–5 min compared to a traditional back-to-front boarding approach. The boarding time depends on events called "interferences."

Unfortunately, all of these researchers used simulations based on small or mid-size airplanes that do not extend to the much larger aircraft under development today. Our approach and results apply to airplanes of all sizes.

# Basic Assumptions

- First-class passengers board first. Hence, our simulation considers only economy-class passengers.
- Passengers do not try to pass other passengers in the aisle. The aisles are narrow, so passengers have to wait to move until there are no "obstacles" in front of them.
- A "call-off" system is used. Passengers board in ordered groups; gate agents announce which group is to board.
- A passenger does not take the wrong seat and does not walk past the row of the right seat. Such mistakes would inevitably delay boarding.

# Reasons for Boarding Delay

# Normal Delay

"Interference" is the main reason for boarding delay. Van den Briel et al. [2003; 2005] divide boarding interferences into two types:

- Aisle interference: Since the aisle is narrow, allowing only one passenger to proceed forward at a time, aisle interference occurs when a passenger stows luggage. To do this, the passenger must stand in the aisle for a moment, thereby acting as an "obstacle" for passengers behind.
- Seat interference: This kind of interference occurs when a passenger is stalled by another one or two passengers sitting in the same half-row. Because of the limited space between contiguous rows, this passenger must ask the passengers already sitting in their seats to stand up and move into the aisle.

# Abnormal Delay

Passengers take the wrong seats, or are late. 
These behaviors can hardly be avoided. Because of their complexity and variety, we don't take them into consideration. Our main objective is to reduce seat and aisle interference.

# Theoretical Estimate Model

We consider boarding time as made up of two parts:

- Free boarding time $t_{\mathrm{free}}$ , the total time if all passengers board without any interference or delay.
- Interference time $t_{\mathrm{inter}}$ , the total interference time, including aisle interference and seat interference.

So the total boarding time is

$$
T _ {\mathrm {total}} = t _ {\mathrm {free}} + t _ {\mathrm {inter}}. \tag {1}
$$

# Free Boarding Time

We consider the passengers as a steady flow that pours into the plane at a rate of $v_{\mathrm{flow}}$ passengers per minute. So the free boarding time is

$$
t _ {\mathrm {free}} = \frac {n}{v _ {\mathrm {flow}}}, \tag {2}
$$

where $n$ is the number of passengers.

# Interference Time

# Seat Interference

We assume that the times to get from the seat to the aisle and to get back are the same, both denoted by $t_{S}$ . Suppose that three passengers on the same side of a row are assigned to the same boarding group, passengers sitting in positions A (window), B (middle), and C (aisle). There are six equally likely kinds of seat interferences, corresponding to the boarding orders ABC, ACB, BAC, BCA, CAB, CBA. We calculate the interference time for each case. Take ACB as an example: The window-seat passenger boards first, followed by the aisle-seat passenger; then the middle-seat passenger needs the aisle-seat passenger to get up and move to the aisle, the middle-seat passenger moves from the aisle to the seat, and the aisle-seat passenger sits back down again. So the interference time is $3t_{S}$ . The results are shown in Table 1.

Table 1. Seat interference time by boarding order. 
| Boarding order          | ABC | ACB    | BAC    | BCA    | CAB    | CBA    |
|-------------------------|-----|--------|--------|--------|--------|--------|
| Seat interference time  | 0   | $3t_S$ | $3t_S$ | $5t_S$ | $6t_S$ | $8t_S$ |
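The entries in Table 1 follow from a simple rule: a passenger blocked by $k$ already-seated passengers costs $2k+1$ units of $t_S$ ($k$ stand up, the passenger slides in, $k$ sit back down), and an unblocked passenger costs nothing. A short Python enumeration (a sketch, not the authors' code) reproduces the table and its average:

```python
from fractions import Fraction
from itertools import permutations

DEPTH = {"A": 2, "B": 1, "C": 0}   # seats in from the aisle

def interference(order):
    """Seat-interference time, in units of t_S, for one boarding order."""
    seated, total = [], 0
    for p in order:
        k = sum(1 for q in seated if DEPTH[q] < DEPTH[p])  # blockers
        if k:
            total += 2 * k + 1   # k stand up, p slides in, k sit down
        seated.append(p)
    return total

times = {"".join(o): interference(o) for o in permutations("ABC")}
average = Fraction(sum(times.values()), len(times))   # = 25/6
```

Averaging over the six equally likely orders gives exactly $\frac{25}{6} t_S$, matching the formula that follows.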

The average seat interference time for 3 passengers in the same half-row is

$$
\bar {t} _ {S} = \frac {25}{6} t _ {S}.
$$

With $n$ passengers boarding, the total seat interference time is

$$
t _ {S: \mathrm {inter}} = \bar {t} _ {S} \cdot \frac {n}{3} = \frac {25}{6} t _ {S} \frac {n}{3}. \tag {3}
$$

# Aisle Interference

Let $P_{1}, \ldots, P_{n}$ be the passengers in order in the queue, with corresponding row numbers $r_{1}, \ldots, r_{n}$ . We say $P_{i}$ blocks $P_{j}$ if $r_{i} < r_{j}$ . We take the number of blockings as the number of aisle interferences; that is, when we calculate the total interference time, we do not consider the situation in which two or more blockings happen simultaneously. For example, for passengers $P_{1}, \ldots, P_{5}$ in rows 1, 4, 5, 2, and 3, $P_{1}$ blocks $P_{2}$, $P_{2}$ blocks $P_{3},$ and $P_{4}$ blocks $P_{5}$ . But actually, after $P_{1}$ is seated, $P_{2}$ and $P_{4}$ can stow luggage simultaneously, and only $P_{3}$ and $P_{5}$ need to wait (two intervals of interference) to stow luggage. To simplify the calculations, we think of this as a total of three intervals of interference.

As a result, to calculate the number of aisle interferences, we need to calculate only the number of instances of $r_i < r_{i+1}$ . Since the order of passengers is random, the number $i$ of aisle interferences is a random variable. We assume that every permutation is equally likely, so the average number of aisle interferences is

$$
I = \frac {1}{n !} \sum i (r _ {1}, \dots , r _ {n}),
$$

where we sum over all permutations. The permutations can be divided into $n! / 2$ pairs, the two members of each pair being reverses of each other; together, each pair has $(n - 1)$ instances of $r_i < r_{i + 1}$ . Hence

$$
I = \frac {n - 1}{2}.
$$

With $t_{L}$ for the average time to stow luggage, the total aisle interference time is

$$
t _ {A: \mathrm {inter}} = \frac {n - 1}{2} \cdot t _ {L}. 
\tag{4}
$$

From (1)-(4), we get the total boarding time as

$$
T = \frac{n}{v_{\mathrm{flow}}} + \frac{25}{6} t_S \frac{n}{3} + \frac{n-1}{2} t_L.
$$

# Data Collection

# Aircraft of Different Sizes

We base our computer simulations on three types of airplane of different sizes: Airbus A320 (small—124 seats), Airbus A300 (midsize—266 seats), Airbus A380 (large—555 seats).

# Experimental Data

We could not collect the needed data by experimenting or by interviewing airline executives. Fortunately, this work has already been done by van den Briel et al. [2003] as cited by Pan [2006]. They found the following average times:

- Get-on time (time between gate agent and gate—assuming one gate agent): 9.0 s.
- To advance one row: 0.95 s.
- Stowage: 7.1 s.
- Seat interference time: 9.7 s.

# Cellular Automata Simulation Algorithm

In the cellular automata model of boarding, each cell is designated as a passenger, a barrier, an aisle cell, or a seat. The model restricts individual movements on the plane and computes total boarding time. Time, position, and passenger behavior are each discrete quantities. The passenger compartment is specified as a grid of rectangular cells, while time is incremented in a convenient time step. During one simulation time step (STS), a passenger can move only one cell (row), and all cells representing passengers are processed once, in random order. The simulation iterates time steps, updating passengers' states and positions, until all passengers are seated.

# Call-off Function

Before passengers board the plane, they are usually divided by a gate agent into groups, often by consecutive rows, for boarding efficiency. We develop our call-off function with three steps:

1. Divide the seats into groups according to a specific strategy. For example, in implementing outside-in, we put the seats in one column into a group.
2. Generate a random order number in each group.
3.
Queue the groups consecutively.

# Enplane Simulation Function

# Simulation of the Next Passenger Boarding

The get-on time has an exponential distribution, with mean that we estimate to be 10 STS.

# Individual Behavior Judgments

What do passengers do in each time step?

- Stand still when there is an obstacle.
- Move forward by one cell toward the seat when there is free space in front.
- Stow luggage. This behavior needs a counter to record its STS, because it requires more than one step.
- Wait out seat interference, when a passenger already seated must stand up and let another passenger move in. This also needs a counter.

# Simulation Results and Analysis

We simulate common boarding strategies, including random, back-to-front, rotating-zone, outside-in, and reverse-pyramid [Finney 2006]. Back-to-front and rotating-zone allow us to choose the number of rows per group; we try 4, 6, and 8 to see how the variation affects the strategies. Similarly, reverse-pyramid can vary in layers; we choose 2, 3, and 4 layers to analyze.

# Simulation Results

We simulate each boarding strategy 100 times; the results are in Table 3.

Table 3. Simulation results for strategies.
| Strategy | Rows (or layers) | Average time (min) | Seat interference | Aisle interference |
|---|---|---|---|---|
| Random | — | 24.7 | 25 | 2 |
| By rows (one row per group) | — | 32.7 | 25 | 5 |
| Back-to-front #1 | 4 | 25.7 | 25 | 1 |
| Back-to-front #2 | 6 | 25.7 | 25 | 2 |
| Back-to-front #3 | 8 | 25.7 | 35 | 3 |
| Rotating #1 | 4 | 25.7 | 25 | 3 |
| Rotating #2 | 6 | 25.7 | 35 | 4 |
| Rotating #3 | 8 | 25.7 | 25 | 4 |
| Outside-in | — | 23.0 | 4 | 2 |
| Reverse-pyramid #1 | 2 | 23.0 | 4 | 3 |
| Reverse-pyramid #2 | 3 | 23.0 | 4 | 2 |
| Reverse-pyramid #3 | 4 | 23.0 | 4 | 2 |
# Analysis of the Simulation Results

- The more rows in a group, the shorter the boarding time. This is really unexpected! Usually, we would think that if we divide the passengers into more groups before boarding in accordance with a boarding strategy, the passengers will be better organized and board the plane more efficiently. But to our surprise, our simulations run in the opposite direction. Take back-to-front as an example. When a group contains 8 rows, the boarding time is 24.6 min; but when there are 4 rows per group, the boarding time increases to 25.0 min. With the two extremes (i.e., one row per group vs. all the passengers as one group), the contrast is even more obvious: 32 min vs. 24 min.

How could this happen? Through analysis of the simulation processes, we find that two or more interferences can happen at the same moment (Figure 1) without influencing the boarding process adversely. With more rows in a group, simultaneous interferences increase but boarding time decreases.

- Dividing passenger groups according to their columns, as in outside-in and reverse-pyramid, avoids seat interference and reduces aisle interference. This is easy to understand. If we divide the groups by rows, passengers in the same row get on the plane together and try to stow their luggage at the same time. Dividing the groups by columns instead staggers the times at which passengers stow luggage into the same overhead bin, which leads to fewer aisle interferences.

# Optimal Strategy

Based on the above analysis, we conclude that dividing passenger groups by columns is more efficient than dividing by rows. The two best strategies are outside-in (23.0 min) and reverse-pyramid (22.7 min). Although reverse-pyramid takes a little less time, outside-in is easier to operate for both gate agents and passengers. Considering this, we choose outside-in as our boarding strategy.
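Before comparing against the simulations, the aisle-interference average $I = (n-1)/2$ used in our analytical model can itself be verified by brute-force enumeration for small $n$ (a sketch; rows are taken as a permutation of $1, \dots, n$):

```python
import math
from fractions import Fraction
from itertools import permutations

def average_ascents(n):
    # Average number of adjacent queue pairs with r_i < r_(i+1),
    # taken over all n! equally likely orders of rows 1..n.
    total = sum(
        sum(1 for i in range(n - 1) if perm[i] < perm[i + 1])
        for perm in permutations(range(1, n + 1))
    )
    return Fraction(total, math.factorial(n))

# Matches the pairing argument: each permutation and its reverse
# together contribute exactly n - 1 ascents.
for n in range(2, 7):
    assert average_ascents(n) == Fraction(n - 1, 2)
```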
# Cross-Validation between Theoretical and Simulation Models

We compare the results from the simulation with the results of our analytical model, where we had total boarding time as

$$
T = \frac{n}{v_{\mathrm{flow}}} + \frac{25}{6} t_S \frac{n}{3} + \frac{n-1}{2} t_L.
$$

Using parameter value estimates from van den Briel et al. [2003], we have

$$
\bar{t}_S = \frac{25}{6} t_S = 9.7\ \mathrm{s}, \qquad t_L = 7.1\ \mathrm{s}.
$$

We also estimate

$$
\frac{1}{v_{\mathrm{flow}}} = 4.5\ \mathrm{s}.
$$

For the A320, we have $n = 126$, for which we calculate the total boarding time to be 23.2 min, a value that agrees closely with our simulation time.

# Mid-size Planes

We extend our simulation model and boarding strategies to midsize aircraft such as the A300: outside-in takes 24.6 min, reverse-pyramid takes 24.4 min.

The A300 has two aisles in economy class, with most (although not all) rows in a 2-4-2 seat configuration. Correspondingly, we adjust our simulation algorithm. Since there are two aisles but only one boarding gate, we divide the passengers into two lines and assume that they don't get into the wrong aisle.

The two strategies are again comparable in average boarding time; again, considering simplicity, we recommend outside-in.

# Large Planes

We extend our simulation model and boarding strategies to large aircraft such as the Airbus A380, with two decks and 555 seats in three classes.

Usually, the A380 opens two gates at the front of the plane to let passengers board; one leads directly to the upper deck (where all business seats and a small portion of the economy seats are located), and the other goes to the main deck (where most economy seats are located).

Since seats in the upper deck are more spread out, it takes less time to board than the main deck.
So we consider only the boarding process on the main deck, which is similar to that of a midsize plane, with two aisles and most rows in a 3-4-3 seat configuration. Both outside-in and reverse-pyramid take 26.8 min. We still recommend outside-in.

# Luggage Distribution Control (LDC)

# A Creative New Boarding Strategy

We offer a brand-new idea to reduce boarding time. During ticket-check time, the passengers are assigned numbers according to how many pieces of luggage they will take onto the plane. Although we do not completely control the order in which passengers board, we can control the distribution of passengers with different amounts of luggage.

A passenger in the last row of the plane blocks nobody when stowing luggage; a passenger in the front row blocks all passengers behind. Let $P(r)$ denote the probability that a passenger in row $r$ blocks other passengers behind; $P(r)$ is a decreasing function of $r$. The expected aisle interference time that this passenger causes is

$$
t_{A:I} = P(r) t_L,
$$

where $t_L$ is the time to stow the luggage. As for seat interference, it has no direct connection with the row number; we simply define the average seat interference time as $t_{S:I}$. So the total expected interference time is

$$
T_{\mathrm{total}:I} = \sum_{r=1}^{n} (t_{A:I} + t_{S:I}) = \sum_{r=1}^{n} P(r) t_L + T_{S:I},
$$

where $T_{S:I} = \sum_{r=1}^{n} t_{S:I}$ is a constant.

A passenger with more luggage increases the total. To reduce the effect on interference time, we want to put this passenger as far back as possible.

# Simulation Results of LDC

Through simulation, we compare the outside-in and reverse-pyramid strategies with our LDC strategy. With LDC, boarding times for all sizes of aircraft are reduced by 2-3 min. That is because we send passengers with much luggage to the back of the plane, which reduces the number of interferences.
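The ordering effect behind LDC can be illustrated with a small sketch. The blocking probability $P(r) = (R - r)/R$ and the per-piece stow time are illustrative assumptions on our part, not values fixed by the model; any decreasing $P(r)$ gives the same qualitative conclusion.

```python
# Compare sum_r P(r) * t_L for two seatings of the same luggage profile.
# P(r) = (R - r)/R and a 7.1 s stow time per piece are assumptions.
R = 30                # rows, numbered front (r = 1) to back (r = R)
t_piece = 7.1         # assumed stow time per piece of luggage, in s

# 60% one piece, 30% two, 10% three (the LDC luggage distribution).
pieces = [1] * 18 + [2] * 9 + [3] * 3

def expected_interference(seating):
    # seating[r-1] = pieces of luggage of the passenger seated in row r
    return sum((R - r) / R * t_piece * p for r, p in enumerate(seating, start=1))

front_heavy = sorted(pieces, reverse=True)   # most luggage in front
ldc = sorted(pieces)                         # LDC: most luggage in back
print(expected_interference(front_heavy) > expected_interference(ldc))  # True
```

By the rearrangement inequality, pairing the largest luggage counts with the smallest blocking probabilities (the back rows) minimizes the sum, which is exactly the LDC principle.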
+ +# How to Implement LDC? + +Before passengers board, they exchange their ticket for a boarding card with their seat number. Our strategy is to assign seat numbers according to the amount of carry-on luggage. For the distribution of number of pieces of luggage, we use $60\%$ have one piece, $30\%$ have two pieces, and $10\%$ have three. + +We divide the seats from back to front in these proportions. We assign to passengers a seat in the group according to number of pieces of luggage; if seats in that group are exhausted, we still follow our basic principle: the more luggage a passenger takes, the farther back the seat. + +# Orderly Deboarding + +# Deboarding Strategies + +Most airlines conduct deboarding without any organization. As a result, passengers in the front rows can easily get off first, stalling those behind, much like an inverse back-to-front procedure. This process is still faster than boarding. However, if we could adopt a strategy like outside-in, that is, let aisle passengers all get their luggage and get off the plane, then the middle passengers, and finally window passengers, we could fully use the aisle space without interference, leading to higher efficiency. + +We put forward the deboarding strategies reversed from boarding strategies: random, front-to-back, inside-out, and V (the strategy derived from the reverse-pyramid boarding strategy). + +# Deboarding Simulation Model + +We develop a simulation model to compare deboarding strategies. Differing from the boarding process, deboarding has its own characteristics, as follows: + +- All passengers start in different positions ("their own seat") and go to the same destination ("outside"). +- There is no seat interference, since in most cases passengers in the same row will leave from aisle seat to window seat. +- In the boarding simulation model, passengers enter the plane one by one, forming a queue. During deboarding, the passengers are a crowd and everyone tries to get out of the plane first. 
# Rush to One Goal: Object Position

During deboarding, passengers occupy the aisle. We cannot move the passengers according to a fixed order, as in the boarding process, but have to consider the conflict that one position may be wanted by several passengers. Therefore, we define the concept of object position, the position that a passenger wants to get into in the next time step. Our simulation program allows passengers to move forward by one cell in one time step; it finds passengers' object positions before moving them, determines which passengers want to move to the same object position, determines which passengers cannot move because of obstacles, and then confirms which passengers can move forward and which must stay still. If an object position is wanted by several passengers, we randomly choose one to move, and the others have to wait.

# Applicability

Our deboarding strategies divide passengers into several groups and then let the groups deboard in order. We define a PAD (Passengers Allowed to Deboard) set, a set of passengers allowed to deboard together.

# Simulation Results and Analysis for Small Planes

We simulate each proposed deboarding strategy 100 times. Inside-out took 9.9 min, V 10.25, random 12.6, and front-to-back 14.0.

Compared with random and front-to-back, inside-out is better because it makes full use of the whole aisle, while the other two strategies use the aisle only partly. The main reason that the V-strategy does no better, we think, is that it needs more groups and does not make full use of the aisle at the beginning and end of deboarding.

Is there any better strategy? Can inside-out be improved? During deboarding, passengers in the plane can still get their luggage as long as the aisle near their seats is empty. But during boarding, passengers who haven't boarded can do nothing but wait.
Considering this, we find that there is no need to make the next group of passengers wait to deboard until the previous group is completely off the plane. We modify our model so that when a proportion $\alpha$ of the previous group still remains on board, we allow the next group to start to deboard—our advanced inside-out strategy. We find that $\alpha = 15\%$ to $20\%$ yields the best results, a deboarding time of about 9.4 min instead of 9.9 min. There is no need to get an exact optimal value of $\alpha$, since it would be almost impossible for the flight crew to implement an optimal strategy exactly.

# Deboarding with Luggage Distribution Control

If the airline uses our LDC boarding strategy, we already know the distribution of luggage. In this case, our simulation program does not need to judge whether a passenger has to get luggage and how long that takes. We simulate the inside-out strategy with different values of $\alpha$ under the luggage distribution given by our LDC boarding strategy. Again, $\alpha = 15\%$ to $20\%$ gives the best results. The deboarding time too is reduced by 2-3 min; our LDC strategy reduces not only boarding time but also deboarding time, because we put the passengers who need less time to get their luggage in the front of the plane. (The optimal value of $\alpha$ is not sensitive to the distribution of luggage.)

# Results for Midsize and Large Planes

When we apply the advanced inside-out strategy to midsize and large planes:

- The optimal value of $\alpha$ increases to 20-30%. The reason is possibly the increased number of rows in the deck.

# Testing of Simulation Models

Are our simulation results reliable? We apply probability theory.

We ran each simulation model 100 times. The times are independent trials from the same distribution. According to the Central Limit Theorem, the sample mean has approximately a normal distribution.
As a result, we can make an interval estimate [Rozanov 1969]:

$$
T = \overline{X} \pm \frac{s}{\sqrt{n}} t_{\alpha/2, n-1},
$$

where $s$ is the sample standard deviation and $n = 100$. We choose 95% confidence. We find for each boarding strategy an interval of ±1 min, meaning that our simulation results are reliably consistent.

# Sensitivity Analyses

In reality, boarding and deboarding times are influenced by various random events. Will these factors influence our simulation results?

- Occupancy level below 100%, that is, empty seats. To show how occupancy affects our simulation results, we resimulate the strategies under occupancies from 20% to 90%. Result: If occupancy is more than 90%, there are no distinguishable changes in results with variation in time step size. If it is below 90%, the boarding time is quite short and therefore affects the choice of boarding strategy very little.
- Passengers (especially those flying for the first time) may get into the wrong aisle in a midsize or large plane, which has more than one aisle. So we test strategies under a wrong-aisle probability of 5%. Result: The boarding time increases by an average of 3 min. That is a long time! Proper guidance from the cabin crew is essential on midsize and large planes.

- Our boarding strategies can be implemented on all kinds of aircraft, because the outside-in strategy divides passengers by columns, so small variability in seat configuration won't affect our boarding strategy much.

# Further Discussion

# Passing

Our simulation models assume that passengers do not try to pass other passengers in the aisle. But in reality, research indicates that on average one person in 10 does this.

# Boarding Stairs

We assume a boarding bridge, but in reality boarding stairs may be used (e.g., on the Airbus A380).
The difference is that the airport must send a bus to take the passengers from the waiting room to the boarding stairs. Airports want to make full use of the bus and carry as many passengers as possible. As a result, boarding in groups according to our strategy is hard to implement. But if the number of passengers in the bus equals the number in each group, we can still adopt our boarding strategy. When they are not equal, we adopt the following boarding strategy: Let $R$ be the number of rows in the deck, with $R = pm + q$, where $m$ is the half-capacity of the bus, $p$ and $q$ are integers, and $q < m$. We implement outside-in for the $pm$ rows in front; the other passengers form one group and get on the plane randomly.

# Disobedient Deboarders

Some passengers do not follow directions. We introduce an obedience factor $\beta$, the proportion of obedient passengers, picked at random. Disobedient passengers get off the plane whenever they get the chance, regardless of whether it is their turn. When fewer than 40% of passengers are obedient, any strategy is useless.

# Strengths and Weaknesses

# Strengths

- We develop a simple theoretical model that gives a rough estimate of airplane boarding time, considering the main factors that may cause boarding delay.
- We design a new boarding strategy that assigns seats according to amount of luggage, which could save about 3 min in boarding.
- With 95% confidence, our simulation results fluctuate by only 1 min.

# Weaknesses

- We don't consider the weight balance of the plane. Usually, the passenger and luggage distribution on the plane should be as uniform as possible.
- There are differences in seat configuration between our model and some actual planes.

# References

CAAC Resource. 2006. First flight of Airbus A380. http://www.carnoc.com/css/carnoc2.
Finney, Paul Burnham. 2006. Loading an airliner is rocket science. New York Times (14 November 2006).
http://travel2.nytimes.com/2006/11/14/business/14boarding.html.
Funk, M. 2003. The visualization of the quantification of the commodification of air travel or: Why flying makes you feel like a rat in a lab cage. Popular Science 263 (5) (November 2003): 67-74.
Marelli, Scott, Gregory Mattocks, and Remick Merry. 1998. The role of computer simulation in reducing airplane turn time. Aero Magazine (4th Quarter 1998). http://www.boeing.com/commercial/aeromagazine/aero_01/textonly/t01txt.html.
Pan, Matthew. 2006. [No title.] http://www.public.asu.edu/~dbvan1/papers/MatthewPanEssay.pdf.
Rozanov, Y.A. 1969. Probability Theory. New York: Dover.
van den Briel, Menkes H.L., J. René Villalobos, and Gary L. Hogg. 2003. The aircraft boarding problem. Proceedings of the 12th Industrial Engineering Research Conference (IERC), article number 2153. http://www.public.asu.edu/~dbvan1/papers/IERC2003MvandenBriel.pdf.
van den Briel, Menkes H.L., Tim Lindemann, and Anthony V. Mule. 2005. America West Airlines develops efficient boarding strategies. Interfaces 35 (3) (May-June 2005): 191-200.
Van Landeghem, H. 2000. A simulation study of passenger boarding times in airplanes. http://citeseer.ist.psu.edu/535105.html.

# The Unique Best Boarding Plan? It Depends…

Bolun Liu

Xuan Hou

Hao Wang

National University of Singapore

Advisor: Yannis Yatracos

# Summary

We devise and compare strategies for boarding and deboarding planes of varying capacity. We clarify what properties a good strategy should have. We apply the same assumptions regarding basic boarding procedure, inner structure of planes, and behavior of passengers to all the cases.

For boarding, we study prevailing strategies and a seemingly excellent strategy, seat-by-seat, proposed in past literature, and categorize them into two types, assigned-seating and open-seating. We develop a model and a simulation for each type. Our criteria identify two good candidates, reverse-pyramid and open-seating.
We develop our own comprehensive strategy, simulate it, and compare it with those two. However, the optimal boarding strategy is not the same for different planes. Some parameter values, such as passengers' luggage size and weight, greatly influence the final result. Based on these discoveries, we suggest how to modify a boarding procedure in practice to make it optimal.

For deboarding, a simple strategy beats a complicated one; but we still give a theoretically optimal model, then modify it to achieve a concise strategy applicable in practice.

# Introduction

Planes produce revenue while flying; thus, it is important to minimize turnaround time, the time between flights that a plane spends on the ground.

For many airlines, boarding is the bottleneck. Reduction of boarding time results in profit increases while potentially increasing passenger satisfaction [van den Briel et al. 2003].

A number of different strategies have been implemented: back-to-front, rotating-zone, outside-in, and so on. We consult existing research and find the key issues in describing boarding mathematically and designing a good boarding algorithm.

Finally, we analyze deboarding and see how its time can be minimized.

# Judging Standards

Efficiency: Minimizing total boarding time is our primary target.

Passenger satisfaction: We use two measures to describe passenger satisfaction: the proportion of passengers satisfied, and average individual boarding time.

Feasibility: Whether and how the strategy is applicable in reality. Generally speaking, the more complicated the boarding strategy, the less feasible it is.

Shorter total boarding time is preferred by both airlines and passengers. There are two measures of boarding time: total boarding time (which affects airline profits directly) and average individual boarding time (which influences passengers' satisfaction with the airline and thus indirectly airline profits).
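The two time measures can be made precise with a small sketch; the entry and seated times below are made up purely for illustration.

```python
# Each passenger: (time of entering the aisle, time of sitting down), in s.
# The numbers are hypothetical illustrative data.
passengers = [(0, 95), (9, 130), (18, 60), (27, 150), (36, 110)]

# Total boarding time: first entry to last passenger seated.
total = max(seated for _, seated in passengers) - min(entry for entry, _ in passengers)

# Average individual boarding time: mean of (seated - entry).
average_individual = sum(s - e for e, s in passengers) / len(passengers)

print(total, average_individual)  # 150 91.0
```

The two measures need not move together: a strategy can shorten the total while lengthening some individuals' waits, which is why both appear in our judging standards.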
We cannot find industry standards for passenger satisfaction and feasibility. However, we can define them descriptively, rather than merely numerically.

# Assumptions

- We consider only coach and business-class passengers. Passengers with special needs and first-class passengers are only a small portion of all passengers and are seated before the majority, so their boarding time is assumed to be fixed and has little influence on our models.
- Each passenger is allowed one piece of carry-on luggage and one personal item (purse, computer, briefcase, small tote, etc.) [Airline Carry-On Luggage Regulations 2007]. Each passenger puts any luggage into the overhead compartment and takes any personal item to the seat. Occasional withdrawal of excessive carry-on luggage is not in our scope.
- When boarding starts, all passengers have arrived at the boarding gate. Missing and late passengers are not in our scope.
- The aisles of planes are narrow and do not allow a passenger to pass another passenger in the aisle.
- The boarding time of an individual passenger is from entry into the plane (and thus the aisle) to sitting down.
- The total boarding time is from the entry of the first passenger into the plane (and thus the aisle) to when the last passenger sits down.
- In all our simulations:

- For an instance of small planes, we take the Boeing 737.
- For an instance of midsize planes, we take the Airbus A340, which has its 264 seats typically arranged 2-4-2 with two aisles.
- For an instance of large planes, we take the Airbus A380. It is double-deck, typically with 350 seats arranged 3-4-3 in the main deck and 176 seats arranged 2-4-2 in the upper deck. When boarding, passengers of either deck board at the same time, through two doors, each of which is the front door of its deck.

# The Assigned-Seating Model

We establish a model of assigned-seating boarding strategies, first for small planes, with only one aisle, then for larger planes.
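As a minimal consistency check on the cabins assumed above (the Boeing 737's exact seat count is not specified, so it is omitted), the stated totals match whole numbers of rows:

```python
# Rows implied by the stated seat totals and per-row layouts.
cabins = {
    "A340":       {"layout": (2, 4, 2), "seats": 264},
    "A380 main":  {"layout": (3, 4, 3), "seats": 350},
    "A380 upper": {"layout": (2, 4, 2), "seats": 176},
}

for name, c in cabins.items():
    per_row = sum(c["layout"])
    rows, remainder = divmod(c["seats"], per_row)
    assert remainder == 0, name  # each total is a whole number of rows
    print(f"{name}: {rows} rows of {per_row} seats")
```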
# The Boarding Process

- The gate agent announces boarding, then calls groups one after another. The passengers of a called group start queueing at the gate, with occasional conflicts and controversy about position in the queue.
- The agent checks boarding passes before passengers enter the plane.
- Passengers board through the front door. For a long-haul plane, passengers may board through an additional rear door.
- Passengers enter the plane. Because the aisles in the plane are narrow, when a passenger is putting luggage into the overhead compartment, or stops for another reason, the passengers behind have to line up. As soon as the passenger enters the seat row, the passengers behind can move again.
- Each passenger switches from moving in the aisle to moving in the seat row after putting luggage into the overhead compartment, and finally sits. If there are other passengers between the passenger and the assigned seat, the passenger has to wait to pass through.
- When the last passenger gets seated, boarding ends.

# Detailed Algorithm

# Parameters

- $x_1, \ldots, x_n$ The sequence of passengers in the boarding queue.
- $l$ The distance between rows (leg room), including the thickness of the seats. It is a fixed value for one plane but may differ for different models of planes or different airlines.
- $r$ The number of rows of seats in the plane.
- $S$ The aisle length of the plane, where $S = lr$.
- $m$ The number of boarding groups.
- $G_1, \ldots, G_m$ The sequence of boarding groups, according to the position of their assigned seats. The smaller the subscript, the earlier the group boards.
- $r_q$ The row of the $q$th passenger's assigned seat.
- $t_q$ The time when the $q$th passenger enters the plane (thus enters the aisle). We assume that $t_1 = 0$; that is, we set the time to be 0 when the first passenger enters the plane.
- $T_q$ A random variable denoting the time difference between the $q$th and $(q+1)$st passengers' entry into the plane; $T_q = t_{q+1} - t_q$, with mean $\overline{T}$ and variance $\sigma_T^2$.
- $S_q$ The position of the $q$th passenger's assigned seat. We define it as $S_q = lr_q$.
- $w_q$ The aisle space that the $q$th passenger occupies, luggage and safe distance between two passengers included. We assume that $w_q$ is a random variable following a normal distribution with mean $\bar{w}$ and variance $\sigma_w^2$.
- $v_q$ The moving speed of the $q$th passenger in the aisle; it follows a normal distribution with mean $\bar{v}$ and variance $\sigma_v^2$.
- $a_q T_{q,L}$ The time that the $q$th passenger spends putting luggage into the overhead compartment. $T_{q,L}$ follows a triangular distribution [Van Landeghem and Beuselinck 2000] with mean $\overline{T}_L$ and variance $\sigma_L^2$. The coefficient $a_q$ is a random variable relevant to the size, weight, and shape of $x_q$'s luggage; it follows a normal distribution with mean $\bar{a}$ and variance $\sigma_a^2$. The variables $a_q$ and $T_{q,L}$ are independent.
- $T_{q,0}$ The time that the $q$th passenger spends to pass through an empty seat when moving in a row. It follows a triangular distribution with mean $\overline{T}_0$ and variance $\sigma_0^2$.

- $T_{q,1}$ The time that the $q$th passenger spends to pass through a seat in which another passenger is sitting when moving in a row; it follows a triangular distribution with mean $\overline{T}_1$ and variance $\sigma_1^2$. This includes the time for the seated passenger to stand up and for the $q$th passenger to pass through.
- $T_{q,S}$ The time that the $q$th passenger takes to sit; it follows a triangular distribution with mean $\overline{T}_S$ and variance $\sigma_S^2$.
- $x_q(t)$ The position in the aisle of the $q$th passenger at time $t$.
We define $x_1(t_0)$ as the position of the aisle's entrance and set $x_1(t_0) = 0$.

# Mathematical Assumptions

- We divide passengers into groups $G_1, \ldots, G_m$. Passengers in the same group queue randomly. Then all the group queues connect with each other, in order of subscript from small to large, to form a total queue. The two extremes are: just one group (so the position of each passenger in the total queue is randomly determined), and the number of groups equal to the number of passengers (each group consists of only one passenger, and the position of each passenger in the total queue is fixed by the assigned seat).
- There is no waiting time between two consecutive groups.
- When the $q$th passenger has entered the plane (thus enters the aisle), the passenger is in one of two states: moving with constant speed $v_q$ or standing still. Standing still can further be divided into three categories:

- Someone ahead is putting luggage in, so the aisle is congested and the queue cannot move forward.
- The passenger is stowing luggage.
- The passenger has finished stowing luggage, but the aisle seat of the passenger's row is occupied by someone else.

- The $q$th passenger has three states when in the seat row:

- Passing through an empty seat (with time $T_{q,0}$);
- Passing through a seat occupied by another passenger (with time $T_{q,1}$);
- Waiting (standing still).

- The passenger sits down (with time $T_{q,S}$).

# The Algorithm

We now propose a detailed algorithm to calculate both total boarding time and individual boarding time for any assigned-seating boarding strategy.

# Motion in the Aisle

The $q$th passenger's motion is determined if we know the situation in the aisle ahead, which is determined only by the passengers entering earlier, namely, the 1st to the $(q-1)$st passengers.
We record the position and some other information about each passenger and use iteration to do the calculation.

Let $W_q(t)$ be the interval of space in the aisle that passenger $q$ occupies at time $t$:

$$
W_q(t) = \left\{ \begin{array}{ll} \left[ \max \{0, x_q(t) - \frac{1}{2} w_q\}, \min \{S, x_q(t) + \frac{1}{2} w_q\} \right], & x_q(t) \neq 0; \\ \emptyset, & x_q(t) = 0. \end{array} \right.
$$

We assume that $x_q(t) = 0$ from the moment that the $q$th passenger enters the seat row (the passenger "disappears from the aisle").

Let $C_q(t)$ be the space in the aisle occupied by all the passengers in front of $x_q$ at time $t$:

$$
C_q(t) = \left\{ \begin{array}{ll} \cup_{p=1}^{q-1} W_p(t), & q > 1; \\ \emptyset, & q = 1. \end{array} \right.
$$

From the time $t_q$ when $x_q$ enters the aisle, we define

$$
t_{q,k} = \left\{ \begin{array}{ll} t_q, & k = 0; \\ t_q + 0.1k, & k = 1, \ldots. \end{array} \right.
$$

We let the computer calculate and record $W_q(t_{q,k})$ and $C_q(t_{q,k})$ every 0.1 s. Let

$$
x_q(t_{q,k+1}) = \left\{ \begin{array}{ll} \min \{x_q(t_{q,k}) + 0.1 v_q, S_q\}, & W_q(t_{q,k}) \cap C_q(t_{q,k}) = \emptyset; \\ x_q(t_{q,k}), & \text{otherwise}. \end{array} \right.
$$

In the first case, passenger $q$ moves at speed $v_q$ during the next 0.1 s, stopping at the assigned seat row if it is reached within that step. In the second case, passenger $q$ stays unmoved during the next 0.1 s.

We denote the time when $x_q(t_{q,k}) = S_q$ (the passenger has reached the assigned row) by $t_{q,k_0}$. When $x_q$ reaches the assigned row, $x_q$ starts to stow luggage, taking time $a_q T_{q,L}$.
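The aisle update above can be sketched as follows. This is a simplified single-aisle step function: the stowing stop is omitted (a passenger leaves the aisle immediately on reaching the assigned row), and the widths, speeds, and seat positions in the example are illustrative numbers.

```python
DT = 0.1  # time step in s, as in the text

def interval(p, aisle_len):
    # Aisle space W occupied by a passenger: [x - w/2, x + w/2], clipped.
    return (max(0.0, p["x"] - p["w"] / 2), min(aisle_len, p["x"] + p["w"] / 2))

def overlaps(a, b):
    return a[0] < b[1] and b[0] < a[1]

def step(passengers, aisle_len):
    # passengers: dicts with position x, width w, speed v, seat position S,
    # in queue order (index 0 entered first); x is None once the passenger
    # has left the aisle. Passengers are updated in queue order within a step.
    for q, p in enumerate(passengers):
        if p["x"] is None:
            continue
        ahead = [interval(a, aisle_len) for a in passengers[:q] if a["x"] is not None]
        if not any(overlaps(interval(p, aisle_len), c) for c in ahead):
            p["x"] = min(p["x"] + DT * p["v"], p["S"])
        if p["x"] >= p["S"]:
            p["x"] = None  # enters the seat row: "disappears from the aisle"

ps = [
    {"x": 4.0, "w": 0.5, "v": 1.0, "S": 10.0},  # passenger 1, ahead
    {"x": 3.7, "w": 0.5, "v": 1.0, "S": 6.0},   # passenger 2, right behind
]
step(ps, aisle_len=20.0)  # passenger 1 advances; passenger 2 is blocked
```

Iterating `step` until every `x` is `None` plays out the whole aisle phase; the full model would additionally hold each passenger at $S_q$ for the stowing time $a_q T_{q,L}$ before removal.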
# Motion in a Row

We say a seat is "occupied" only if a passenger is passing through it or the passenger to whom this seat belongs is getting seated; otherwise, the seat is not occupied, even if the passenger to whom this seat belongs has already been seated (since another passenger can pass through it). Then, $x_{q}$ at time $(t_{q,k_0} + a_qT_{q,L})$ finishes stowing luggage. We now check whether the aisle seat of the row is occupied:

- If it is occupied, $x_{q}$ waits in the aisle until it is clear.
- If not occupied, $x_{q}$ enters the row (thus is no longer in the aisle) and occupies the space of the aisle seat at time $(t_{q,k_{0}} + a_{q}T_{q,L}) + 0.1$.

- If this is the assigned seat, $x_{q}$ then spends time to get seated. During this time period, nobody else can pass through or occupy this seat. After $x_{q}$ sits down, another passenger can pass through this seat.
- If this is not the assigned seat, $x_{q}$ spends time $T_{q,0}$ (if nobody is sitting in it) or $T_{q,1}$ (if someone is sitting in it) to pass through it. During this time period, nobody else can pass through or occupy this seat. After $x_{q}$ passes through this seat, we check whether the next seat is occupied, in the same manner, until $x_{q}$ gets seated.

# Simulation

# The Seven Candidates

Most airlines use one of the following six boarding strategies [Airplane Boarding ... 2007]:

Back-to-front (US Airways, Air Canada, British Airways). Divide the seats into 4-6 blocks and board passengers from the back block to the front block.

Rotating-zone (AirTran). Divide the seats into 4-6 blocks, and board passengers in the sequence (back block, front block, next back block, next front block, etc.).

Random (Northwest). Impose no sequence and let passengers enter randomly.

Block (Delta). Divide the seats into two or three blocks, then divide each block into a window-seats block and a non-window-seats block.
Label the back window-seats as block 1, the back non-window-seats as block 2, the second back window-seats as block 3, etc.

Reverse-pyramid (US Airways). Rear window and middle first, front window and middle next, followed by rear aisle, then front aisle.

Outside-in (United). Window seats first, followed by middle, then aisle seats.

Figure 1 shows the patterns of the above six boarding strategies. The numbers are group numbers.

# The 7th Approach

Van Landeghem and Beuselinck [2000] include one of the extreme boarding strategies that we raised earlier—seat-by-seat. In their simulation, this strategy performed rather well. Thus, we also include this strategy in our simulation.

To be concrete, we use the following boarding sequence.

In the first round, one seat from each row is called; in the second round, a second seat of each row is called; etc. In each round, the gate agent always calls from back row to front row. This eliminates aisle delay (delay due to staying in the aisle), except when the last passenger in the previous round (having a front seat) has not finished stowing luggage when the first passenger in the next round (having a back seat) enters the plane.

Van Landeghem and Beuselinck claim that this is the best strategy. But they assume that the time for a passenger to pass through an empty seat is the same as to pass through a seat with a person in it. In our model, we distinguish these two times.

In addition, we let window seats be filled first, followed by middle seats, and then aisle seats. Following this strategy, we have no row delay whatsoever.

# The Simulation

- We simulate the seven boarding strategies only for a small plane, the Boeing 737.
- We assume variable and parameter values as in Table 1.

Table 1. Assumed variable and parameter values.
| Symbol | Value | Symbol | Value | Symbol | Value | Symbol | Value |
|---|---|---|---|---|---|---|---|
| $l$ | 0.7 m | $r$ | 23 | $\bar{w}$ | 1 m | $\sigma_{w}$ | 0.2 m |
| $\bar{v}$ | 1 m/s | $\sigma_{v}$ | 0.2 m/s | $\overline{a_{q}T_{q,L}}$ | 30 s | $\sigma_{a_{q}T_{q,L}}$ | 10 s |
| $\bar{T}$ | 9 s | $\sigma_{T}$ | 3 s | $\bar{T}_{0}$ | 1 s | $\sigma_{T_{0}}$ | 0.2 s |
| $\bar{T}_{1}$ | 5 s | $\sigma_{T_{1}}$ | 5 s | $\bar{T}_{S}$ | 0.5 s | $\sigma_{T_{S}}$ | 0.1 s |
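Table 1 specifies each stochastic quantity only by a mean and a standard deviation. One natural way to realize such quantities in a simulation (our assumption; the paper does not name a distribution family) is to draw normal variates truncated at zero:

```python
import random

def draw_positive(mean, sd, rng=random):
    """Draw a normal variate with the given mean and SD, redrawing
    until the value is positive (times and widths must be > 0).
    The normal family is our assumption, not the paper's."""
    while True:
        value = rng.gauss(mean, sd)
        if value > 0:
            return value

rng = random.Random(2007)
stow_time = draw_positive(30.0, 10.0, rng)  # a_q * T_{q,L} from Table 1
entry_gap = draw_positive(9.0, 3.0, rng)    # T-bar between passenger entries
```

With the means and SDs of Table 1, truncation at zero changes the effective means only negligibly (the normal mass below zero is tiny for these parameter values).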
+ +- We iterate the simulation 100 times, obtaining the sample mean and sample variance of total boarding time and average individual boarding time. Table 2 and Figures 2-3 show the results. + +Table 2. Simulation results for boarding times for the strategies, in order of increasing average total time. + +
| Strategy | Total mean | Total SD | Average mean | Average SD |
|---|---|---|---|---|
| Seat-by-seat | 15.6 | 0.11 | 1.6 | 0.05 |
| Reverse-pyramid | 22.4 | 0.56 | 2.5 | 0.24 |
| Outside-in | 22.8 | 1.06 | 2.2 | 0.26 |
| Block | 24.7 | 0.35 | 2.6 | 0.15 |
| Random | 25.2 | 0.51 | 2.5 | 0.14 |
| Back-to-front | 26.2 | 0.39 | 2.9 | 0.18 |
| Rotating-zone | 26.9 | 0.83 | 3.1 | 0.22 |
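The entries in Table 2 are ordinary sample statistics over the 100 runs. As a minimal sketch (the three run times below are made-up illustrations, not simulation output):

```python
import statistics

def summarize(run_times):
    """Sample mean and sample standard deviation (n-1 denominator)
    of boarding times collected over repeated simulation runs."""
    return statistics.mean(run_times), statistics.stdev(run_times)

mean_time, sd_time = summarize([15.5, 15.6, 15.7])  # illustrative values only
```

`statistics.stdev` uses the n-1 (sample) denominator, matching the usual estimator for repeated simulation runs.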
![](images/9a53d0e522d0134281c7d25a8cf9d9be74662ab5ff071ea811fe8bb397600a51.jpg)
Figure 2. Average total boarding times for the strategies.

![](images/a26e52f79d2ed2bcfc393624da30dff3d0309eeeacaba8e9b0d5fd50565d57c9.jpg)
Figure 3. Average individual boarding times under the strategies.

# Results Analysis

The results of the above simulation indicate that

- Seat-by-seat is the most efficient boarding strategy. However, its boarding process is too complicated. Thus, it is hard to execute in practice, and we eliminate it.
- Reverse-pyramid and outside-in are the next most efficient. Reverse-pyramid is better in having smaller variation in boarding time.

In addition, since reverse-pyramid and outside-in are widely used, they are feasible and satisfactory, hence have the best overall quality.

# More on Reverse-Pyramid

Seat-by-seat and outside-in are two extremes of reverse-pyramid: Seat-by-seat is reverse-pyramid with the most boarding groups, while outside-in is reverse-pyramid with the fewest boarding groups.

This discovery inspires us to study reverse-pyramid further and identify the following properties of a reverse-pyramid-type strategy:

- Each boarding group has approximately the same number of passengers.
- After each group finishes boarding, there must be a passenger sitting behind each seated passenger (except for back-row passengers).
- After each group finishes boarding, the numbers of passengers are decreasing from the window columns to the aisle columns. Moreover, unless all the seats in a column have been occupied, they are strictly decreasing.
- Every passenger is permitted to board only at least one round after the passenger who will sit abreast and closer to the window.
- The seats for each group are symmetric with respect to the axis of the plane.

We claim that the above five properties are necessary and sufficient conditions for a reverse-pyramid-type strategy.
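Properties like these can be checked mechanically for a candidate grouping. The sketch below (our own encoding, not the paper's) tests the third property in its non-strict form: `groups[r][c]` is the boarding-group number of the seat in row `r`, with column index 0 at the window and the highest index at the aisle:

```python
def window_to_aisle_ok(groups, g):
    """Check the non-strict form of property 3: after groups 1..g have
    boarded, the count of seated passengers per column must not
    increase from the window column toward the aisle column."""
    n_cols = len(groups[0])
    seated = [sum(1 for row in groups if row[c] <= g) for c in range(n_cols)]
    return all(seated[c] >= seated[c + 1] for c in range(n_cols - 1))

# Outside-in on a 3-row half-cabin: window seats are group 1,
# middle seats group 2, aisle seats group 3.
outside_in = [[1, 2, 3] for _ in range(3)]
assert all(window_to_aisle_ok(outside_in, g) for g in (1, 2, 3))
```

An aisle-first grouping such as `[[3, 2, 1]] * 3` fails the check already after group 1 boards, so it is not of reverse-pyramid type.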
Additionally, we suggest the following three criteria when grouping the seats for a reverse-pyramid-type strategy:

- For a single-aisle plane, just follow the above five points.
- For a double-aisle plane, first divide the seats into two halves by the axis of the plane. If the seats in the middle part have an odd number of columns, then for the seats lying on the axis, assign every second seat (starting from the first one) to one half and the rest to the other half. After that, for either half, group the seats following the above five points.
- Each group contains at least about 30 seats. Divide all the seats into 4-8 groups.

With these grouping criteria, we can group seats for any size plane, thereby simulating reverse-pyramid for midsize and large planes.

# The Open-Seating Model

# Description

Passengers board in the order of their check-in: the earlier you check in, the earlier you can board. In addition, since seats are not assigned to passengers in advance, a passenger can choose from the open seats at boarding time. Thus, passengers boarding earlier have a wider range of choices and are more likely to select a satisfactory seat.

Southwest Airlines uses this open-seating policy. Their boarding procedure is described as follows [Southwest Airlines 2007]:

Customers are assigned to Group A, B, or C on their boarding passes, in the order in which they check in. Groups are called in alphabetical order, with passengers rushing to occupy the seat of their choice.

# Gains and Losses

# Gains

Compared to assigned-seating strategies, open-seating is more efficient [Finney 2006], for two reasons:

- Passengers in the same group compete with one another for seats, so passengers hurry.
- Most passengers have broad, rather than strict, preferences for particular kinds of seats. Therefore, when there is congestion, a passenger will probably choose a seat before the congestion, rather than wait for the aisle to clear.
# Losses

Although minimizing boarding time is our primary goal, the level of satisfaction is also important. Customer reviews of Southwest Airlines reveal that many people do not like open-seating. The main types of dissatisfaction can be categorized as follows.

- People may want to sit next to friends and relatives, but with open-seating this may not happen.
- Some people are used to boarding with seats assigned in advance. They do not want to rush or "compete" with others.
- Some people not distinguished as having "special needs" still have comparatively low capability of "competing" for seats.

# Algorithm

To compare open-seating with assigned-seating strategies, we need to calculate the boarding time for the open-seating strategy. With open-seating, however, there may be unpredictable events. For example, two passengers may conflict when they want to choose the same seat. To propose a systematic algorithm, we must make more assumptions to simplify the behaviors of passengers.

# Assumptions

- We use Southwest's open-seating strategy of three groups (A, B, and C). Group A passengers enter the plane first, followed by Group B and then Group C.
- All passengers prefer the same types of seats (we specify these later).
- All passengers behave rationally and politely. That is, when a seat a passenger prefers is taken by someone else, the passenger finds another seat.
- Once a passenger is seated, the passenger does not move again.
- Passengers move faster than in assigned-seating boarding.

# Passenger Preferences

We categorize seats into three classes of preference for each passenger: most preferred, preferred, and not preferred. A passenger first checks for most-preferred seats; if there are none, the passenger considers seats in the next class.
For each passenger, every preference class consists of only two types of seats: a certain column (for example, window seats), and/or a certain range of adjacent rows (for example, seats in Rows 10-15). A passenger who has both types of preference in a class has a preference between them.

# The Boarding Process for a Passenger

For a Group A passenger:

- When the passenger just enters the plane (and thus the aisle), if there is someone stowing luggage at row $i$, the passenger checks whether there are most-preferred seats in front of row $i$, chooses the nearest one if there is one, and otherwise moves forward to wait just behind the passenger stowing luggage at row $i$.
- When the passenger at row $i$ finishes stowing luggage and clears row $i$, our passenger moves on and repeats the above if there is someone stowing luggage at row $j > i$, iterating until there is no one in the way.
- The passenger now checks whether there are any most-preferred seats among all the seats from the row where he/she is to the last row and chooses the nearest available one; if none is available, the passenger checks whether there are preferred seats in this area. If there are, the passenger chooses the nearest one; if not, the passenger chooses the nearest empty seat.

- A Group B passenger's behavior is similar, except that the passenger also checks whether there are preferred seats every time after checking for most-preferred seats and not finding one.
- A Group C passenger's behavior is similar, except that the passenger also checks whether there are empty seats every time after checking for preferred seats and not finding one.

# Simulation

Table 3 shows the results of our computer simulations with the same parameter values as before, together with results for a combined strategy to be developed in the next section. We do not simulate open-seating for large planes.
Since they are for long flights, a larger proportion of passengers prefer assigned-seating, and boarding time has less impact on airline profits.

Table 3. Simulation results for boarding times for the open-seating, reverse-pyramid, and combined strategies.
| Strategy | Total mean (small) | Total SD (small) | Average mean (small) | Average SD (small) | Total mean (midsize) | Total SD (midsize) | Average mean (midsize) | Average SD (midsize) |
|---|---|---|---|---|---|---|---|---|
| Open-seating | 19.6 | 0.3 | 1.7 | 0.2 | 37.4 | 1.5 | 1.5 | 0.3 |
| Reverse-pyramid | 22.4 | 0.6 | 2.5 | 0.2 | 40.8 | 1.2 | 1.5 | 0.3 |
| Combined | 21.1 | 0.6 | 2.3 | 0.3 | 38.9 | 1.3 | 1.6 | 0.2 |
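The seat-choice rule that the simulation applies to each open-seating passenger can be sketched as follows (a sketch; the data layout and the names `free_seats` and `classes` are ours, not the paper's):

```python
def choose_seat(free_seats, current_row, classes):
    """Choose among seats at or behind current_row, scanning preference
    classes in order (most preferred first) and taking the nearest seat
    in the first class that contains one; otherwise fall back to the
    nearest empty seat, as in the Group A rule described above.

    free_seats: dict mapping row number -> set of free column letters
    classes: list of predicates on (row, col), in preference order
    """
    candidates = [(row, col)
                  for row in sorted(free_seats)
                  if row >= current_row
                  for col in sorted(free_seats[row])]
    for prefers in classes:
        liked = [seat for seat in candidates if prefers(seat)]
        if liked:
            return min(liked, key=lambda seat: seat[0] - current_row)
    if candidates:  # no preferred seat left: nearest empty seat
        return min(candidates, key=lambda seat: seat[0] - current_row)
    return None

free = {10: {"A", "C"}, 12: {"F"}, 14: {"B"}}
window = lambda seat: seat[1] in ("A", "F")  # hypothetical "most preferred" class
```

For a passenger standing at row 11 who most prefers window seats, `choose_seat(free, 11, [window])` picks seat 12F; from row 13, no window seat remains behind, so the nearest empty seat, 14B, is chosen instead.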
# A Comprehensive Model

# Motivation

A good boarding strategy should perform well in three aspects: efficiency (short total boarding time), passenger satisfaction (short average individual boarding time), and feasibility.

Although the three representative boarding strategies—reverse-pyramid (outside-in), seat-by-seat, and open-seating—each have advantages, they also have drawbacks:

- Reverse-pyramid/outside-in has a longer boarding time than open-seating.
- Open-seating dissatisfies a notable number of passengers.
- Seat-by-seat is not feasible in practice.

We are then motivated to develop a comprehensive model, with the aim of including all their advantages and eliminating their drawbacks.

# Boarding Strategy

We group the seats as in reverse-pyramid. We divide the groups and their corresponding seats into two categories, with $G_{1}, \ldots, G_{i}$ in Category 1 (passengers who board in the manner of open-seating) and $G_{i+1}, \ldots, G_{m}$ in Category 2 (passengers who board based on assigned-seating).

By devising the boarding strategy in this way, we ensure that

- Passengers who prefer assigned-seating are assured a fixed seat at check-in.
- Passengers who prefer open-seating boarding can select seats at will.
- This strategy will beat the open-seating policy in regard to passenger satisfaction.

When the boarding starts:

- Passengers who chose open-seating board first, group by group. We let the number of these groups be $\min\{3, i\}$. But we first mark out certain seats, reserved specifically for open-seating passengers.
- Then the assigned-seating passengers also board group by group.

# Fixing the Number of Groups

In our new model, the new parameter is $i$, the number of groups for open-seating. The value of $i$ varies with the number of groups (that reverse-pyramid has initially), the model of plane, and the ratio of passengers preferring the two boarding manners.
We assume that a proportion $A$ of passengers prefer open-seating and a proportion $(1 - A)$ prefer assigned-seating. Since a fixed optimal grouping scheme of reverse-pyramid is determined by the seat plan of the plane, there is a unique $i_0$ that satisfies

$$
\frac {\sum_ {j = 1} ^ {i _ {0}} n _ {j}}{n} \leq 1 - A, \qquad \frac {\sum_ {j = 1} ^ {i _ {0} + 1} n _ {j}}{n} \geq 1 - A,
$$

where $n_j$ is the number of passengers in group $j$ and $n$ is the total number of passengers.

The value of $i$ can be set to either $i_0$ or $(i_0 + 1)$; the choice depends on the airline's weighting between efficiency and passenger satisfaction. If the airline considers efficiency more important, it should let more passengers board according to open-seating and set $i = i_0 + 1$; if the airline considers passenger satisfaction more important, it should let more passengers board with assigned seats and set $i = i_0$.

When $i = 0$ or $i = 1$, we do not provide open-seating; by the inequalities above, this happens when $A$ is large (for example, $85\%$). The size of a plane is usually related to the length of the trip (small planes for short trips, bigger planes for longer trips). Usually, the longer the trip, the more passengers prefer assigned seating and the less the impact of boarding time on profits. Hence, for larger planes, the portion of seats for open-seating is smaller.

# Advantages

- Based on the simulation results, reverse-pyramid-type strategies have the highest efficiency in the assigned-seating model. In addition, the grouping in the reverse-pyramid boarding strategy is good because

- It has comparatively more groups. This gives us more flexibility to arrange the seats for the two boarding types.
- If the value of $i$ does not go to extremes, each boarding type will have seats of all features (against the window, beside the aisle, in the front, in the back, next to each other, and so on).
This enhances the range of choices for passengers of both boarding types.

- This boarding strategy considers preferences of passengers, so it is quite likely to have higher customer satisfaction.
- Its boarding process is open-seating passengers first, followed by passengers boarding in reverse-pyramid. This process takes into account possible confusion during the open-seating passengers' boarding and prevents most passengers from witnessing that confusion.
- Both boarding manners are feasible and in use, so we infer that the new combined strategy is feasible too.

# Testing the New Model

# Comparison

Table 3 earlier also displays results for the combined strategy, which has shorter total boarding time and shorter average individual boarding time than reverse-pyramid but is longer on both counts than open-seating. Therefore, we suggest:

- For a small plane (85-210 passengers), use open-seating or the combined strategy.
- For a midsize plane (210-330 passengers), use the combined strategy.
- For a large plane (450-800 passengers), results not displayed show little difference between reverse-pyramid and the combined strategy (both take over an hour).

# Sensitivity Analysis

We repeated the simulation many times, using different values for the parameters. The resulting data show that, no matter which strategy we use, the following two input parameters have the biggest impact on boarding time:

- $\overline{T}$, the expected time difference between two adjacent passengers' entry to the plane.
- $\bar{a}$, the expected value of the random coefficient that reflects the size, weight, and shape of the passengers' luggage.

We give the following suggestions for decreasing total boarding time:

- For small planes, to the extent that it is bearable for passengers, lower the limits for carry-on luggage size and weight.
- For midsize and large planes, have more training for the gate agents, or more gate agents.
Since large planes are usually for long flights, passengers often need more luggage, so it is not proper to set the luggage limit too low.

# Deplaning

# An Ideal Model

Total deplaning time is the time from the first passenger standing up to the last passenger leaving the plane.

We assume:

- The speed of all passengers moving in the aisle when deplaning is constant.
- The time that all passengers spend retrieving luggage is constant.

The deplaning queue is an imaginary queue that is formed if we join all passengers in a line in the order that they deplane, and if the passengers who have left the plane would continue moving forward in the queue.

We now propose our ideal deplaning strategy:

- Passengers in aisle seats on one side of the aisle (say all the C seats) stand up at the same time, which is the beginning of the deplaning process. They take their luggage and leave the plane as a whole.
- As soon as the last passenger (23C) leaves the 23rd row, the passenger in seat 23D moves into the aisle, instantly occupies it, and takes luggage—but does not (yet) move forward.
- As soon as the 23C passenger passes row 22, the 22D passenger moves into the aisle, instantly occupies it, and takes luggage—but does not (yet) move forward.
- The passengers in the D seats behave in the same manner, until the 1D passenger is in the aisle with luggage. Then all the D-seat passengers move to leave the plane as a whole.
- Thereafter, all the B, E, A, and F passengers repeat what the D passengers did.
- When the last passenger (23F) leaves the plane, the deplaning ends.

Using the above strategy, total deplaning time is minimized. The argument is as follows.

There are only five segments of unoccupied space in the deplaning queue, and all appear in front of the row 1 passengers. Every passenger has to spend time in the aisle taking luggage.
When a row 1 passenger is taking luggage, the passenger who deplanes before that passenger is moving in the deplaning queue. Thus, these five segments of unoccupied space cannot be eliminated, no matter what deplaning strategy we use.

We then say that all the passengers are divided into six "rounds" by those five segments of unoccupied space.

# Reality

The airlines have little control over the behavior of passengers deplaning, so any detailed strategy would be very difficult to implement. Therefore, instead, we now propose a concise criterion for passengers to deplane.

We use again the concept of "rounds," but with a small modification: Each round has 23 passengers, coming from all 23 rows; for each row, the passenger stepping out is whichever passenger on either side of the aisle is nearest to it.

In practice, the crew can announce the criterion before passengers deplane. Even with occasional violations, we do better.

# Conclusion

Open-seating beats the existing assigned-seating strategies in total boarding time, but it loses in terms of passenger satisfaction. Seat-by-seat wins in both aspects, but it is not feasible in reality and thus not useful.

We combine open-seating with the most efficient feasible assigned-seating strategy, namely, reverse-pyramid (outside-in). We expect the combined strategy to be feasible and good on all of our criteria.

As for deplaning, airlines do not have as much control as they do in boarding, so a complex plan cannot be executed well. Thus, it is better to have a simple procedure than a detailed but infeasible one.

# Strengths and Weaknesses

# Reverse-Pyramid Boarding Strategy

# Strengths

- The most efficient of the prevailing assigned-seating strategies.
- Methodical, so there will be little confusion during boarding.
- Passengers can probably sit next to friends and relatives as they wish.

# Weaknesses

- Not as efficient as open-seating.
- Requires more staff to execute and control boarding.
# Open-Seating Boarding Strategy

# Strengths

- Higher efficiency than all the assigned-seating boarding strategies.
- For some people, especially the young, it is attractive.
- Looks simple and requires less staffing to execute.

# Weaknesses

- Passenger dissatisfaction is the vital drawback.

# Combined Boarding Strategy

# Strengths

- More efficient than traditional assigned-seating boarding strategies.
- Meets the needs of different types of passengers, thus probably making the airline more satisfactory and popular.

# Weaknesses

- It might be tedious for airlines to set the proportions, and surveys may be needed.

# References

Finney, Paul Burnham. 2006. Loading an airliner is rocket science. http://travel2.nytimes.com/2006/11/14/business/14boarding.html. Accessed 10 February 2007.

Luggage Online, Inc. 2007. Airline carry-on luggage regulations. http://www.luggageonline.com/about_Airlines.cfm. Accessed 9 February 2007.

Marelli, Scott, Gregory Mattocks, and Remick Merry. 1998. The role of computer simulation in reducing airplane turn time. Aero Magazine (4th Quarter 1998). http://www.boeing.com/commercial/aeromagazine/aero_01/textonly/t01txt.html.

Southwest Airlines. 2007. Boarding process. http://www.southwest.com/travel_center/boarding_process.html. Accessed 10 February 2007.

van den Briel, Menkes H.L. n.d. Airplane boarding. http://www.public.asu.edu/~dbvan1/projects/boarding/boarding.htm. Accessed 9 February 2007.

_____, J. René Villalobos, and Gary L. Hogg. 2003. The aircraft boarding problem. In Proceedings of the 12th Industrial Engineering Research Conference (IERC-2003), No. 2153, CD-ROM. http://www.public.asu.edu/~dbvan1/papers/IERC2003MvandenBriel.pdf.

Van Landeghem, H., and A. Beuselinck. 2000. Reducing passenger boarding time in airplanes: A simulation based approach. European Journal of Operational Research 142: 294-308. Copy on request from menkes@asu.edu.
+ +![](images/9e7d81f58393b49f399401db8924af122e76802c74fb4259cfd3f0094e108d24.jpg) + +Bolun Liu, Hao Wang, advisor Yannis Yatracos, and Xuan Hou. + +# Airliner Boarding and Deplaning Strategy + +Linbo Zhao + +Fan Zhou + +Guozhen Wang + +Peking University + +Beijing, China + +Advisor: Xufeng Liu + +# Summary + +To reduce airliner boarding and deplaning time, we partition passengers into groups that board in an arranged sequence. We assume that first-class and business-class passengers board first; our model treats only economy class. Since deplaning is the converse process of boarding, a strategy for boarding gives a strategy for deplaning. + +We develop a model of interferences among passengers, which determine boarding time. We try to find a strategy with the least interferences. By running Lingo, we tackle the resulting nonlinear integer programming problem and obtain near-optimal strategies for fixed numbers of groups. This model supports the outside-in and reverse-pyramid strategies. + +We develop another model to give a global lower bound for interferences. We also prove that individual boarding sequence, which boards passengers one by one in a particular order, attains that lower bound. + +We develop code in $\mathrm{C + + }$ to simulate boarding strategies and test various strategies for three airliners: Canadair CRJ-200 (small), Airbus A320 (midsize) and Airbus A380 (large). Individual boarding sequence, reverse-pyramid, and outside-in are the best three strategies in terms of both average boarding time and its standard deviation. + +We test strategies under various luggage loads and levels of occupancy, with and without late passengers and those with special needs. Outside-in and reverse-pyramid are stable under variation of parameters, whereas individual boarding sequence is extremely sensitive, though not to luggage. 
+ +Our conclusions discredit traditional back-to-front strategies and support individual boarding sequence, reverse-pyramid, and outside-in. The more groups, the worse the situation with back-to-front. Taking cost into consideration, random sequencing should also be recommended. + +Finally, we analyze deplaning and see how its time can be minimized. + +# Introduction + +Airliner turnaround time is an important factor in determining airplane productivity [van den Briel et al. 2005], and boarding time constitutes an important part of turnaround time. Boarding time is the period between when the first passenger enters the plane and when the last passenger is seated. + +Deplaning is also an essential part of turnaround time. We regard deplaning as the converse process of boarding, so an efficient boarding strategy brings out an equally efficient deplaning strategy. + +A common approach in boarding is to partition passengers into several groups and board the groups in a sequence: + +- Back-to-front: This is a traditional strategy. +- Outside-in: Window seats first, middle seats second, and aisle seats last. +- Reverse-pyramid: Discovered by van den Briel et al. [2005]. It is in use. +- Individual boarding sequence: Passengers are called to board one by one according to their seat number. This strategy is criticized as impractical; no known airline uses it. +- Random: There is no sequencing at all. +- Rotating: The deck is divided into blocks from back to front. The back block is called first, then the front block, the next back block third, continuing until the blocks meet in the middle. +- Free seat choice: Some airlines do not preassign seats to passengers; passengers choose their seat after boarding [Ferrari and Nagel 2005]. + +# Previous Work + +The back-to-front procedure is used by many major airlines [Finney 2006]. Recent research [Ferrari and Nagel 2005; Van Landeghem n.d.; Van Landeghem and Beuselinck 2002; van den Briel et al. 
2005] discredits the effectiveness of back-to-front and recommends a version of outside-in, such as reverse-pyramid. These versions perform with similar efficiency, as substantiated by simulations.

Previous work tends to compare known boarding strategies. Research based purely on simulation is not fully satisfactory because of its lack of rigor. Among previous work, van den Briel et al. [2005] seems to offer the only well-rounded research. They combine an analytical model, simulation, and practical implementation, but use a complex nonlinear integer programming problem. We simplify their model and do similar nonlinear integer programming.

Ferrari and Nagel [2005]:

- point out deficiencies in the simulation of Van Landeghem and Beuselinck [2002];
- offer more details on passenger behavior, such as a bin occupancy model and seating model, which they use to describe the effect of seat interferences;
- consider the passengers' seat preferences, while van den Briel et al. do not; and
- pay special attention to robustness.

Van Landeghem and Beuselinck [2002] is an excellent work, with data from the national carriers' database and from interviews with gate agents and method engineers. They give distributions for walking speed, time to store luggage, and time to sit. They also consider the distribution of the number of luggage items. They conclude that the most effective strategy is the outside-in by-seat strategy but do not offer a rigorous argument to defend that conclusion. They find that random sequencing, used frequently today, performs well compared to most other strategies, which is somewhat surprising; only 9 of the 46 strategies that Van Landeghem and Beuselinck study do better than random. They conclude that in taking a structured approach to boarding, one should beware of making things far worse by choosing a wrong way of sequencing.

The data from Van Landeghem and Beuselinck [2002] and Van Landeghem [n.d.] have a strong impact on our simulation.
# Problem Analysis and Basic Approach

It is difficult to arrange passengers precisely in a designated sequence. One basic solution is to partition the passengers into several groups, such as by row number or column letter.

Our basic boarding strategy is based on group partition. Groups board in a particular order, but passengers in the same group enter in a random sequence.

# Assumptions

- Within a group, all sequences of passengers are equally likely; and permutations within different groups are independent of one another.
- We simplify the behavior of passengers. In reality, a passenger walks at a speed between zero and an upper bound; in our models, a passenger either keeps still or moves at a constant speed, though different passengers walk at their personal constant walking speeds. A static passenger does not require any time to accelerate to usual walking speed.
- The airliner deck is a rectangle. In reality, the shape resembles a rectangle with its corners truncated.
- We consider deplaning to be the converse of boarding except in the cases of random sequencing and the free-seat-choice strategy. Thus, we consider only boarding.
- We follow the traditional strategy of assigning a constant time for first-class and business-class passengers to board before economy class.
- Some large airliners, such as the Boeing 747 and Airbus A330-300, have two parallel aisles. We divide a two-aisle airliner into two equal halves and treat each half as a single plane with only one aisle. (Sometimes the two halves are not exactly symmetric, as with the Boeing 747.)

# Model I: Interferences

We describe passenger behavior with two parameters, the walking speed and the expected time to stow luggage. It seems impossible to obtain an explicit closed formula for expected boarding time, so we use expected boarding interference instead.

We use the definitions of van den Briel et al. [2005].
Boarding interference is a passenger blocking another passenger's access to his or her seat. There are two types: seat interferences and aisle interferences. Seat interferences occur when passengers seated close to the aisle block other passengers from being seated in the same row. Aisle interferences occur when passengers stowing luggage block other passengers' access to seats.

# Model Description

Suppose that the plane has a single aisle. Let the groups be numbered and let groups enter the airliner in sequence $1, 2, \ldots$ .

Assumption To avoid seat interference as often as possible, if passenger A of Group $i$ and passenger B of Group $j$ sit on the same half of a row (the same left row or the same right row), with $i < j$ , we assume that A sits closer to the window than B.

With this assumption, we can describe a group partition and a sequence of groups by two matrices:

$$
(x_{i,\mathrm{r}}^{j}), \quad (x_{i,\mathrm{l}}^{j}), \qquad j = 1, \ldots, n; \quad i = 1, \ldots, m,
$$

where $n$ is the number of groups and $m$ is the number of rows. Group $j$ consists of $x_{i,\mathrm{r}}^{j}$ passengers from the right half of row $i$ and $x_{i,\mathrm{l}}^{j}$ passengers from the left half of row $i$ .
The sum of all the entries in the two matrices is the number of passengers on the plane.

Due to the assumption, seat interferences occur only within the same group, so we have

$$
\operatorname{E}[\text{seat interferences}] = \sum_{j=1}^{n} \sum_{i=1}^{m} \left[ \frac{1}{2} \binom{x_{i,\mathrm{r}}^{j}}{2} + \frac{1}{2} \binom{x_{i,\mathrm{l}}^{j}}{2} \right], \tag{1}
$$

$$
\operatorname{E}[\text{aisle interferences within Group } j] = \frac{1}{s_{j}} \left[ \binom{s_{j}}{2} + \sum_{i=1}^{m} \binom{x_{i,\mathrm{r}}^{j} + x_{i,\mathrm{l}}^{j}}{2} \right], \tag{2}
$$

where

$$
s_{j} = \sum_{i=1}^{m} \left( x_{i,\mathrm{r}}^{j} + x_{i,\mathrm{l}}^{j} \right),
$$

and the expected number of aisle interferences between consecutive groups Group $j$ and Group $j+1$ is

$$
\frac{1}{s_{j} s_{j+1}} \sum_{i=1}^{m} \left[ \left( x_{i,\mathrm{r}}^{j} + x_{i,\mathrm{l}}^{j} \right) \sum_{t=i}^{m} \left( x_{t,\mathrm{r}}^{j+1} + x_{t,\mathrm{l}}^{j+1} \right) \right]. \tag{3}
$$

Then the expected number of aisle interferences is the sum of expected aisle interferences within groups plus the sum of expected aisle interferences between groups.

Equation (1) is interpreted as follows: each pair of passengers in the same group and the same row on the same side (both right or both left) has probability $1/2$ of seat interference, since exactly one of their two possible boarding orders causes a seat interference.

For aisle interference, for each ordered pair of passengers there are $s_j - 1$ positions for the two passengers to board one after another, leaving $(s_j - 2)!$ ways for the remaining passengers in this group to board. Thus, the probability of such a situation is $(s_j - 1)(s_j - 2)! / s_j!
= 1 / s_j$, and there are

$$
\binom{s_{j}}{2} + \sum_{i=1}^{m} \binom{x_{i,\mathrm{r}}^{j} + x_{i,\mathrm{l}}^{j}}{2}
$$

pairs that can cause interference; this gives (2). In a similar way, we can calculate the expected aisle interferences of two consecutive groups. The only way for two passengers in different groups to cause an interference is for them to be the last of the previous group and the first of the next one. This happens with probability $1 / (s_{j} s_{j+1})$. Counting all the interfering pairs gives (3).

We define the evaluation of a strategy to be

$$
(\text{expected aisle interferences}) + \lambda\, (\text{expected seat interferences}), \tag{4}
$$

where $\lambda$ is a positive number determined by the time needed to stow luggage and the constant walking speed of passengers.

# Optimal Strategy in a Weak Sense

Because the number of aisle interferences between each pair of consecutive groups is no more than 1, the total of aisle interferences between different groups is less than the number of groups. When the number of groups is not very large, which is often the case, we neglect aisle interferences between different groups and concentrate on seat interferences and aisle interferences within the same group.

The first term in (2) is in fact constant when summed over $j$. The other terms in (1) and (2) are convex and monotonically increasing functions of $x_{i,\mathrm{r}}^{j}$ and $x_{i,\mathrm{l}}^{j}$.

With aisle interferences between different groups neglected, and the number of groups and number of passengers in each group all fixed, the total number of expected interferences is a sum of convex functions. Therefore, the strategy with the fewest interferences must have the property that, within the same group, the numbers of passengers in different half-rows differ by at most 1.
Moreover, in the best strategy, the numbers of passengers in different rows also differ by at most one.

For instance, the outside-in and reverse-pyramid strategies have the above properties. This indicates that outside-in and reverse-pyramid might be optimal strategies when the number of groups is not large.

# Results

As in van den Briel et al. [2005], let $\lambda = 1$. For each strategy, we can compute the total expected interferences using (4). We use Lingo 8.0 to search for the optimal strategy with the least expected interferences, with the number of groups fixed. The task is to determine the two matrices representing a strategy with the least interferences. We are faced with a nonlinear integer programming problem. The objective function is

(expected aisle interferences) + (expected seat interferences).

Such a problem is NP-hard [van den Briel et al. 2005].

Due to the limitation of our computers, we could not determine the global optimal solution for a 60-passenger or larger plane, even in several hours.

Nevertheless, we ran Lingo, stopped the software after some tens of minutes of search, and observed the best solution found to that point. In rare cases, the lower bound of the objective function equaled the least interferences found, which means that our computer found a global minimum. In many cases, the interferences of the best strategies found by the computer are slightly greater than the lower bound of the objective function.

Table 1 gives the results from Lingo for different airliner structures. Triples in the table denote structures of airliners; for instance, "2,3,11" means an airliner with 2 columns of seats on one side of the aisle, 3 columns of seats on the other, and 11 rows. Setting aside the incompleteness of the computations, the best strategies found, as anticipated, are consistent with the theoretical results of the previous subsection.

Table 1. Results from LINGO.
| Airliner type | Structure | Number of groups | Best known strategy | Bound by Lingo |
|---|---|---|---|---|
| Airbus A380 | 2,3,15 | 2 | 47.0 | 46.4 |
| | | 3 | 39.6 | 37.5 |
| | | 4 | 40.4 | 33.1 |
| | | 5 | 40.7 | 30.8 |
| | | 6 | 40.3 | 29.2 |
| | 2,3,11 | 2 | 35.0 | 35.0 |
| | | 3 | 29.7 | 28.5 |
| | | 4 | 29.8 | 25.3 |
| | | 5 | 30.1 | 23.5 |
| | 2,3,9 | 2 | 29.1 | 29.1 |
| | | 3 | 24.6 | 24.1 |
| | | 4 | 24.8 | 21.3 |
| | | 5 | 24.9 | 19.4 |
| Airbus A320 | 3,3,26 | 2 | 106.5 | 101.0 |
| | | 3 | 80.5 | 80.5 |
| | | 4 | 81.0 | 71.4 |
| | | 5 | 81.3 | 65.6 |
| | | 6 | 81.8 | 61.7 |
| | | 7 | 81.4 | 58.9 |
| | | 8 | 83.3 | 56.8 |
| Canadair CRJ-200 | 2,2,14 | 2 | 29.5 | 29.5 |
| | | 3 | 29.8 | 25.1 |
| | | 4 | 29.1 | 22.7 |
| | | 5 | 30.1 | 21.5 |
| | | 6 | 30.6 | 20.3 |
| Part of Boeing 747 | 2,2,19 | 2 | 39.5 | 39.5 |
| | | 3 | 39.8 | 33.3 |
| | | 4 | 40.3 | 30.1 |
| | | 5 | 40.8 | 28.2 |
| | | 6 | 45.1 | 27.0 |
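Equation (2) is easy to check numerically. The sketch below is ours (not the authors' Lingo model); per the derivation above, it assumes that an aisle interference occurs exactly when a passenger boards immediately behind someone whose assigned row is at or before the follower's own row, so the stower blocks the follower.

```python
import random
from math import comb

def expected_aisle_interferences(row_counts):
    """Equation (2): (1/s_j) [ C(s_j,2) + sum_i C(x_i,2) ], where
    x_i is the number of the group's passengers assigned to row i."""
    s = sum(row_counts)
    return (comb(s, 2) + sum(comb(x, 2) for x in row_counts)) / s

def simulate_aisle_interferences(rows, trials=40000, seed=1):
    """Monte Carlo estimate: shuffle the group's boarding order and count
    adjacent pairs in which the passenger ahead stops to stow at a row
    at or before the follower's row (an aisle interference)."""
    rng = random.Random(seed)
    order = list(rows)          # one entry per passenger: the assigned row
    total = 0
    for _ in range(trials):
        rng.shuffle(order)
        total += sum(1 for ahead, behind in zip(order, order[1:])
                     if ahead <= behind)
    return total / trials
```

For row counts (2, 1, 1), i.e., passengers assigned to rows 1, 1, 2, and 3, equation (2) gives $\left[\binom{4}{2} + \binom{2}{2}\right]/4 = 1.75$, and the Monte Carlo estimate agrees to within sampling error.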
Surprisingly, three of the strategies we obtained are exactly outside-in, with the rest resembling either reverse-pyramid or outside-in.

# Model II: Individual Boarding Sequence

Practical problems set aside, the most efficient strategy comes out of the finest group partition, in which each passenger corresponds to a group and each group consists of exactly one passenger. Passengers are arranged to enter the airliner in a designated order. Using this partition, the best solution is the individual boarding sequence strategy.

In minimizing interferences, there is an obvious lower bound: the back-seat passenger in each column must be blocked while the one just before is stowing luggage. Also, a front-seat passenger stowing luggage must block the next passenger in sequence. Hence, interference occurs at least $(n - 1)$ times, where $n$ is the number of columns, since every back-seat passenger of a column causes an interference except the first to board. This minimum can actually be attained when each back-seat passenger follows a front-seat passenger and there are no other interferences—which is an individual boarding sequence.

Certainly, this strategy is often considered seriously impractical. Van Landeghem and Beuselinck [2002] argue that comparable systems exist today. We devise a system that can be used to make the finest partition: At the airport, there is plenty of time between check-in and when passengers are allowed to enter. Airlines can assign numbers and letters to waiting seats at the airport. Passengers can be seated there according to their seats on the airliner. The airline can call passengers to board in sequence by the numbered waiting seats.

# Stochastic Simulations

To establish a simulation that can test various strategies and treat various kinds of airliners, we wrote a program in C++.
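The C++ program itself is not reproduced in the paper. The following Python sketch (ours; class and function names are illustrative) shows the core of the discrete-time, continuous-space scheme described in the assumptions that follow: 0.5 s ticks, per-passenger constant walking speeds, a 0.6 m minimum spacing in the aisle, no passing, and a pause at the assigned row to stow luggage. Boarding groups and the seating model are omitted for brevity.

```python
TICK = 0.5        # seconds per time step
ROW_PITCH = 0.84  # metres between rows
MIN_GAP = 0.6     # minimum spacing between passengers in the aisle

class Passenger:
    def __init__(self, row, speed, stow_time):
        self.row = row              # assigned row, numbered from the front
        self.speed = speed          # constant walking speed (m/s)
        self.stow_left = stow_time  # seconds of stowing still to do
        self.pos = 0.0              # position along the aisle (m)
        self.seated = False

def board(passengers):
    """Run the boarding process to completion; return elapsed time (s).
    `passengers` is the boarding order; each enters at position 0 once
    the doorway is clear, walks to row * ROW_PITCH, stows, and sits."""
    t = 0.0
    queue = list(passengers)
    aisle = []                      # currently in the aisle, front to back
    while queue or aisle:
        t += TICK
        for i, p in enumerate(aisle):           # advance frontmost first
            target = p.row * ROW_PITCH
            limit = aisle[i - 1].pos - MIN_GAP if i > 0 else float("inf")
            p.pos = min(p.pos + p.speed * TICK, target, limit)
            if p.pos == target:                 # at own row: stow, then sit
                p.stow_left -= TICK
                if p.stow_left <= 0:
                    p.seated = True
        aisle = [p for p in aisle if not p.seated]
        if queue and (not aisle or aisle[-1].pos >= MIN_GAP):
            aisle.append(queue.pop(0))          # next passenger steps in
    return t
```

Because a follower can never move past the position of the passenger ahead minus the minimum gap, a passenger stowing luggage automatically blocks everyone behind, which is the aisle-interference mechanism described above.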
To test boarding strategies, our simulation should reproduce the boarding process as closely as possible, so that the assumptions about human behavior are tenable and the data describing passenger boarding behavior accord with reality. Our simulation is based on discrete time and continuous space; each time step is $0.5\mathrm{s}$.

# Assumptions and Details in the Simulation

- Business/economy assumption: First-class and business-class passengers board before economy-class passengers, in an assigned constant time.
- The aisle of economy class is narrow and cannot contain two passengers abreast. Thus, if a passenger is stowing luggage, the passenger behind in the aisle must wait and cannot pass.
- Between consecutive boarding groups, there is no time interval. That is, Group $i$ boards in the wake of Group $(i - 1)$.
- In free-seat-choice boarding strategies, passengers prefer seats more toward the window. Thus, the window seat is passengers' favorite and the aisle seat is the most unpopular. However, there is no preference for a particular row.
- There is only one way to reach each seat. Later, we discuss the validity of this assumption in multi-aisle airliners.
- In the aisle, passengers can move only toward the back. If a passenger's seat is more toward the front than where the passenger is, the passenger has to go to the back and wait for the aisle to clear at the end of boarding.
- A passenger walks at an individual constant walking speed, but different passengers have different walking speeds. The walking speed distribution is a triangular distribution with lower limit $0.28\mathrm{m/s}$, mode $0.365\mathrm{m/s}$, and upper limit $0.45\mathrm{m/s}$, based on observations by Van Landeghem and Beuselinck [2002]. This distribution is the sum of two continuous uniform distributions.
- The distance between consecutive passengers must not be smaller than $0.6\mathrm{m}$.
A passenger who walks fast enough to violate this limitation stops where the minimum is attained.
- We exclude several low-probability events: a passenger falls down, a passenger mistakes another's seat for the passenger's own, or a seated passenger leaves the seat voluntarily (e.g., for the toilet).
- The distance between rows is $33.25\ \mathrm{in} = 0.84\ \mathrm{m}$ [Wikipedia n.d.].

# Bin Occupancy Model

As in Ferrari and Nagel [2005], we suppose that there is an overhead bin for each row on each side of the aisle and every passenger is assigned a random number of pieces of luggage, according to the probabilities in Table 2.

Table 2. Luggage distribution at normal and high load.
| Number of pieces | 0 | 1 | 2 | 3 |
|---|---|---|---|---|
| Normal load | 5% | 55% | 30% | 10% |
| High load | 5% | 20% | 55% | 20% |
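Table 2's distribution is straightforward to sample. A minimal Python helper (ours, for illustration; not the authors' C++ code):

```python
import random

# Probabilities of carrying 0, 1, 2, or 3 pieces of luggage (Table 2).
LUGGAGE_DIST = {
    "normal": [0.05, 0.55, 0.30, 0.10],
    "high":   [0.05, 0.20, 0.55, 0.20],
}

def sample_luggage(load, rng=random):
    """Draw one passenger's number of carry-on pieces."""
    return rng.choices([0, 1, 2, 3], weights=LUGGAGE_DIST[load])[0]

def mean_pieces(load):
    """Expected number of pieces per passenger."""
    return sum(k * p for k, p in enumerate(LUGGAGE_DIST[load]))
```

Under these probabilities, the expected number of pieces per passenger is 1.45 at normal load and 1.90 at high load, so a high load puts roughly 30% more luggage into the bins.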
The time that a passenger needs to stow luggage depends on how much luggage and the occupancy of the overhead bin, as follows:

$$
t_{\mathrm{sl}} = 2.4 \left( 2 + \frac{n_{\mathrm{bin}} + n_{\mathrm{l}}}{2} \times n_{\mathrm{l}} \right)
$$

when $n_{\mathrm{l}}$ is positive, where

- $t_{\mathrm{sl}}$ is the time to stow all pieces of luggage (seconds),
- $n_{\mathrm{bin}}$ is the number of pieces of luggage already in the bin, and
- $n_{\mathrm{l}}$ is the number of pieces of luggage carried by the passenger.

We let $t_{\mathrm{sl}} = 0$ when $n_{\mathrm{l}} = 0$.

Fractional results for $t_{\mathrm{sl}}$ are rounded to the nearest half-integer. The values of $n_{\mathrm{bin}}$ refer to the corresponding half-row's overhead bin; passengers always put their luggage into the bin corresponding to their half-row. In reality, if the overhead bin gets full, passengers have to move to other rows to find a bin. This fact is not reproduced directly by the simulation; however, note that $t_{\mathrm{sl}}$ becomes rather large for full bins.

# The Seating Model

Our seating model is also inherited from Ferrari and Nagel [2005]. The time that passengers need to sit down depends on the number of interfering passengers (seat interferences) who are already seated. Those interfering passengers have to get out of their row and then sit down again after the new passenger sits. The mathematical form of this is

$$
t_{s} = t_{p} + 2 t_{p} n_{s} = t_{p} (1 + 2 n_{s})
$$

when $n_s$ is positive, where

- $t_s$ is the total time for seating (seconds);
- $t_p$ is the time used to get from the seat into the aisle or back (seconds), $t_p = 3.6$; and
- $n_s$ is the number of occupied seats in front of the passenger's seat.

We let $t_s = 0$ when $n_s = 0$.

# Additional Assumptions for Free Seat Choice

Modeling free-seat-choice strategies is not easy. We need more assumptions.

- Passengers are supposed to be sagacious.
That is to say, they know the best kind of seat that they can obtain under the worst situation, and they know instantly how many seats of that kind are guaranteed to be available to them. They sit in a seat of that kind with probability $1/n$, where $n$ is the estimated number of available seats of that kind.
- Queued passengers may lose patience and accept a "bad" seat.
- If a passenger arrives at the last row with free seats, the passenger sits there.
- Passengers do not change their walking direction to find seats.
- We do not consider free seat choice in a two-aisle airliner, where two passengers could reach a seat from the two aisles at the same time.

# Simulation Results

Using the simulation software that we built in C++, we simulated each boarding strategy 50 times. The average boarding time indicates the performance of a strategy, while the standard deviation reflects its robustness.

The airliner in these simulations has full occupancy. Passengers are considered to carry a normal load of luggage as indicated in Table 2.

We tested 14 strategies for the Canadair CRJ-200, 16 for the Airbus A320, and 9 for the Airbus A380. These strategies come from back-to-front, outside-in, reverse-pyramid, random sequencing, rotating-zone, free-seat-choice, individual sequencing, and two strategies produced by computer from Model I. Except for individual boarding sequence, all strategies are currently practicable, with several in wide use. The notation "back-to-front 3" means a back-to-front strategy with 3 groups.

# Canadair CRJ-200

We consider the CRJ-200 as a typical small airliner, with a rectangular 14-row deck and two columns on either side of the aisle. We tested 14 boarding strategies, with the results in Table 3.

Table 3. Statistics from simulation of the boarding times of strategies for the CRJ-200 (min).
| Strategy | Average | SD |
|---|---|---|
| Individual boarding sequence | 3.7 | 0.3 |
| Reverse-pyramid 3 | 6.5 | 0.7 |
| Strategy 1 from Model I | 6.8 | 0.6 |
| Strategy 2 from Model I | 6.8 | 0.5 |
| Free seat choice | 6.8 | 0.3 |
| Outside-in 2 | 6.8 | 0.6 |
| Outside-in 4 | 7.7 | 0.6 |
| Random | 7.8 | 0.8 |
| Back-to-front 2 | 8.1 | 0.6 |
| Back-to-front 3 | 8.2 | 0.7 |
| Back-to-front 4 | 8.4 | 0.7 |
| Rotating-zone 4 | 8.5 | 0.9 |
| Rotating-zone 3 | 8.6 | 0.8 |
| Rotating-zone 5 | 9.1 | 0.6 |
- The simulation supports the claim from Model II that individual boarding sequence has the shortest boarding time, only $3.7\mathrm{min}$; all other strategies need at least $6\mathrm{min}$. Moreover, individual boarding sequence's standard deviation is the smallest.
- The simulation results of Strategy 1 and Strategy 2 from Model I are both satisfactory, which substantiates Model I.
- Among all strategies currently in use, reverse-pyramid, outside-in, and free-seat-choice are the soundest, with boarding time approximately $6.5\mathrm{min}$. The standard deviation for free-seat-choice is quite small.
- The traditional back-to-front strategy is the most disappointing, with average boarding time over $8\mathrm{min}$.
- Since the free-seat-choice and random-sequencing strategies are easy to perform (they require no extra effort), they are acceptable choices.

# Airbus A320 (Midsize)

The Airbus A320 is a typical midsize airliner. We consider it to have a rectangular 26-row deck with three columns on either side of the aisle. We tested 16 boarding strategies; the two new ones are back-to-front 5 and back-to-front 6.

The results are completely analogous to those for the CRJ-200. Ironically, the more groups, the worse the situation is for the rotating-zone strategy and for back-to-front. Both are even worse than random sequencing.

# Airbus A380 (Large-Size)

We take the Airbus A380 as a typical large airliner, with two decks. We divide its upper deck into two halves; we divide the lower deck into three parts horizontally and then divide each of the three into two halves. Thus, the two-deck economy class is divided into eight parts, and each part is treated as a single airliner. We assume that the A380 has two entrances in the front. Passengers are not allowed to cross between halves, so there is only one way to reach each seat.

Our strategy is a combination of strategies for each of the eight parts of the A380.
Thus, there are many possible combinations, of which we selected nine to test. Individual boarding sequence performs best again, with the least standard deviation. We also find that large airliners are less sensitive to boarding strategies than smaller ones.

# Sensitivity Analysis

In our simulations to this point, we assume that the airliner is full and passengers carry a normal load of luggage. Also, we exclude the possibility that passengers are late to board and neglect passengers with special needs, who usually board first. Here we analyze the effect of these possibilities.

# More Luggage

We compare a normal load with a high load of luggage for a full A320 with no late passengers. Individual boarding sequence still performs excellently with great robustness; its increase is below $10\%$. The remaining strategies all increase by $25\%$ to $30\%$. These results indicate that sensitivity to luggage load should not be considered an important criterion in choosing boarding strategies.

# Occupancy

We ran simulations with an A320 at $62.5\%$ occupancy, as in Van Landeghem and Beuselinck [2002], with no late passengers and a normal load of luggage. We randomly selected $37.5\%$ of seats to be unoccupied. All strategies perform well. In comparison with full occupancy, however, individual boarding sequencing shows the least sensitivity. Among the remaining strategies, reverse-pyramid, outside-in, and the strategies from Model I have greater sensitivity; and back-to-front and rotating strategies are quite sensitive.

# Late Passengers

A late passenger is one who does not board when the passenger's group is allowed to board but who reaches the gate before boarding ends. We assume that late passengers are not allowed to board before non-late passengers. We randomly choose passengers to be late, with probability $25\%$, for a full A320 with a normal load of luggage.
As expected, individual boarding, which requires the most precise sequencing, is the most sensitive to late passengers, requiring $70\%$ more time than without late passengers—but still requiring less time than other strategies, all of which vary by at most $12\%$ (increase or decrease) from their times without late passengers. One can think of increasing late passengers as making a strategy more like random sequencing; in fact, random sequencing has almost the same average boarding time with or without late passengers.

# Passengers with Special Needs

We follow the tradition that passengers with special needs, such as the handicapped and mothers with children, board first. We compare an A320 with $5\%$ special-needs passengers to one with no such passengers, with full occupancy and a normal load of luggage. At this level of passengers with special needs, average boarding time is not sensitive.

# Strengths and Weaknesses

# Strengths

- Our simulation can deal with different types of airliners, ranging from a 20-passenger jet to a two-deck Airbus A380.
- We give a theoretical proof to support the excellence of individual boarding sequence; this proof is not found elsewhere.

# Weaknesses

- Interference counts overestimate boarding time.
- In Model I, our computer did not allow us to make a complete computation to obtain the global minimum.
- The data that we use to simulate passenger boarding behavior are specifically intended to model a certain type of airliner. However, passenger behavior, such as walking speed and time to stow luggage, varies in different types of airliner [Marelli et al. 1998].
- In the simulation of free-seat-choice, a passenger is unrealistically expected to foresee blocking ahead.
- We oversimplify the preferences for seats and rows, neglecting the possibility that some passengers love aisle seats and some passengers prefer front rows.
- Considering the need to maintain balance in flight, passengers should not be seated randomly in an airliner that is not at full occupancy. However, we neglect this point.

# Conclusions

- With Model I, we translate the original problem into a nonlinear integer programming problem. Running Lingo, we obtain nearly optimal strategies with the number of groups fixed. The results from Lingo are outside-in strategies, and many of them are only slightly different from reverse-pyramid.
- With Model II, we give a proof that individual boarding sequence is the best strategy, except for its impracticability.
- We test several kinds of boarding strategies. The results are in accordance with the results from Models I and II, in terms of average and standard deviation of boarding time and in terms of robustness. Moreover, random sequencing is acceptable. Unfortunately, the traditional back-to-front is worst in many major aspects.
- Sensitivity toward luggage load should not be considered important.
- The more late passengers, the more a best strategy moves toward random sequencing. Individual sequencing is extremely vulnerable to late passengers.

# References

Ferrari, P., and K. Nagel. 2005. Robustness of efficient passenger boarding in airliners. Transportation Research Board Annual Meeting, paper number 05-0405. Washington, DC.

Finney, Paul Burnham. 2006. Loading an airliner is rocket science. New York Times (14 November 2006). http://travel2.nytimes.com/2006/11/14/business/14boarding.html.

Marelli, Scott, Gregory Mattocks, and Remick Merry. 1998. The role of computer simulation in reducing airplane turn time. Aero Magazine (4th Quarter 1998). http://www.boeing.com/commercial/aeromagazine/aero_01/textonly/t01txt.html.

van den Briel, Menkes H.L., J. René Villalobos, Gary L. Hogg, Tim Lindemann, and Anthony V. Mule. 2005. America West Airlines develops efficient boarding strategies. Interfaces 35 (3) (May-June 2005): 191-201.

Van Landeghem, H. n.d.
A simulation study of passenger boarding times in airliners. http://citeseer.ist.psu.edu/535105.html.

_____, and A. Beuselinck. 2002. Reducing passenger boarding time in airliners: A simulation approach. European Journal of Operational Research 142: 294-308.

Wikipedia. n.d. Airline seat. http://en.wikipedia.org/wiki/Airline-seat.

![](images/5afcf71a906e5061b7860493cee2b3521a83cdf9031c462f94e2452518c51775.jpg)
Fan Zhou, Guozhen Wang, and Linbo Zhao.

# Best Boarding Uses Buffers

Kevin D. Sobczak

Eric J. Hardin

Bradley J. Kirkwood

Slippery Rock University

Slippery Rock, PA

Advisor: Athula R. Herat

# Summary

By constructing a mathematical model of human behavior, we find:

- Back-to-front block loading is the least efficient boarding method. As passengers enter the aircraft in groups, aisle congestion becomes greatest at the front of the plane, consequently increasing the time required for the next group to enter and take their seats. Aisle congestion in this case is primarily attributed to the time for a passenger to navigate the aisle and reach the assigned seat if obstructed by another passenger sitting in the same row.
- Small planes and large planes exhibit minimal turnaround times. Small planes have a single aisle but few passengers, hence little congestion. In large planes, multiple aisles and decks offset the congestion found in single-aisle midsize planes; a large plane can be modeled as several small planes.
- Boarding strategies are optimized when $10\%$ of the passengers are late. Fewer passengers enter initially, so there is less congestion. When passengers enter late, congestion that would otherwise have occurred is averted.

Our first observation concurs with researchers who suggest abandoning back-to-front boarding in favor of more-elaborate schemes [Finney 2006; van den Briel et al. 2004; Ferrari and Nagel 2005]; however, these new models make erroneous assumptions about human behavior.
A comprehensive scheme must include the time to navigate a congested aisle, stow luggage, and maneuver through a filled row if necessary. We recommend the following:

- Abandon back-to-front block boarding and consider alternatives. We suggest a hybrid group-boarding method utilizing a rotating seating arrangement that incorporates back-to-front and window-to-aisle seating. This approach decreases congestion that otherwise might accumulate near the front of the airplane, by loading a first group of passengers into rear seating while another group is consecutively loading into the front seats. This trend is carried out until the last group is seated in the center of the aircraft.
- Incorporate a second aisle into midsize aircraft. Midsize aircraft tend to display the worst boarding times due to the absence of a second aisle. A second aisle would ease congestion and cut turnaround time nearly in half.
- Reduce carry-on luggage. Reducing the amount of carry-on luggage greatly decreases aisle congestion.
- Queue passengers into lines prior to gangway entry. Increased order greatly reduces aisle congestion.

# Introduction

It is common to board an aircraft in zones or groups. The most-used method is boarding blocks of seats, from the rear moving toward the front. This method is more efficient than boarding the entire plane at once but is one of the least efficient schemes that we tested. Several factors have to be taken into consideration in determining boarding times: passengers entering the plane are assumed to be unsorted within their boarding groups, and not all passengers arrive on time. Our model incorporates all these factors and provides surprising and consistent results.

Our computer simulation can evaluate various boarding schemes on aircraft of varying sizes. It has factors for late passengers and will search for worst- and best-case boarding times based on randomly arranged passengers.
The most efficient method that we found moves passengers onto the plane from window to aisle, back to front, implementing rotation buffers. This method even allows for some passengers to be late without severely affecting efficiency. We report results for boarding time, tolerance to passenger arrival, and boarding-time predictability for several boarding schemes and aircraft.

# Boarding Strategies and Terminology

We define boarding strategies by rows, columns, and groups.

Block boarding: Each block is a group, and a block consists of a number of rows. Typically, groups of passengers are seated in blocks sequentially from the back to the front.

Buffering: Buffering places empty seats between sequentially seating groups, so that congestion and delay of the first group does not interfere with seating the second group. For example, a plan may seat block 1, consisting of rows 30-25, and then sequentially block 2 of rows 22-17. The aisle by rows 23 and 24 will be filled temporarily by busy passengers from block 1 and will not interfere with block-2 passengers. Rows 23 and 24 will be seated later.

Column seating: Column seating seats passengers by columns instead of by rows. This strategy is typically implemented from the window to the aisle to minimize row congestion.

Reverse-pyramid scheme: The reverse-pyramid scheme seats passengers in V-shaped groups, starting at the back in the aisle and propagating forward to a window. This method minimizes row congestion while maximizing group size.

# Assumptions

# Logistical Assumptions

We assume that all planes are entered exclusively from the front and that passengers sit one to a seat.

Our model does not account for the time that it takes a person to sit down in an empty row or a seat unobstructed by another passenger. This is because the moment that the passenger leaves the aisle, they can no longer add to aisle congestion and the total seating time for the group.

According to the U.S.
Department of Transportation [2007], airplanes fly at $79\%$ capacity on average. With this information, we make three assumptions:

- Since passengers are randomly seated, the empty seats are randomly dispersed, and thus there is no need to reseat passengers for balance purposes.
- Boarding times are based on $100\%$ capacity, but group sizes and buffers can be adjusted, based on expected capacities, to minimize boarding times.
- All two-level airplanes are boarded with two-level jetways and board both levels simultaneously.

Large two-aisle planes, possibly with two decks, have configurations similar to several small airplanes; hence the strategies for small planes will have comparable efficiencies for large planes.

We assume that the time required to seat those with special needs is nearly constant. Although "9.7% ... of men and women, aged 16-64 report a sensory, physical, mental, or self-care disability in the United States" [Employment and Disability Institute 2007], the percentage among travelers will be lower, due to monetary limits and ability to travel. A midsize plane with capacity 300 running at $79\%$ capacity would carry 237 passengers, of whom less than $9.7\%$—perhaps
To this effect, the time for a passenger to stow luggage depends only on the number of people seated in the row and the number of luggage-volume units already taken up. + +# Behavioral Assumptions + +A fundamental assumption is that passengers are willing to bypass localized aisle congestion, resulting in a time cost. But when aisle congestion becomes large-scale, passengers become averse to bypassing larger numbers of people. This assumption of human behavior is accounted for by disallowing groups of passengers to bypass other groups who are blocking the way to their seats, but allowing passengers in the same group to pass each other in the aisle. Our model of localized passing predicts seating times better than popular models that assume that passengers do not pass others in the aisle. + +All constants have been estimated to the best of our ability but would require experimental determination. Due to the nature of our passenger time model, discrepancies between our constants and actual values will have little effect on the relative efficiencies of the strategies that we investigate. + +We assume that every passenger walks with a constant speed, since the speed of a line is dictated by the slowest passenger. + +We assume that when business- and first-class passengers seat themselves, they have a greater average passenger speed and smaller constants corresponding to bypassing others in the aisle, stowing luggage, and traversing occupied seats. The reasons are that business- and first-class aisles are wider, first-class passengers carry less luggage because their trips tend to be shorter, and individual seating areas in business- and first-class are about $4.5\mathrm{m}^3$ as opposed to $1.2\mathrm{m}^3$ in coach [Ferrari and Nagel 2005]. + +# Methods + +We devised a highly dynamic object-oriented model in Java, which can import group assignments from a text file. We collected data from the simulation from several different configurations and aircraft sizes. 

# Group Boarding Time Model

We assume that the marginal time increase for a group to be seated, with respect to each additional person, is

$$
t(p) = C_{1} p + C_{2} \alpha + C_{3} \beta + C_{4} \gamma,
$$

where

- $p$ is the number of people whom the passenger needs to pass in the aisle,
- $\alpha$ is the number of people who need to move to let the passenger reach the seat from the aisle,
- $\beta$ is the amount of luggage stored in the overhead compartment, and
- $\gamma$ is the number of rows that the passenger must traverse.

We let $w$ and $l$ be the width and length of the aircraft. Conceiving of our model as a continuous model, we have

$$
t(p) = \frac{dT}{dp} = C_{1} \bar{p} + C_{2} \bar{\alpha} + C_{3} \bar{\beta} + C_{4} \bar{\gamma},
$$

where $\bar{p}$ is linear in $p$ and $\bar{\alpha}, \bar{\beta}, \bar{\gamma}$ are constant. Integrating both sides, we find that the time per group $T(p)$ is

$$
T(p) = \frac{C_{1}}{2} \bar{p}^{2} + C_{2} \bar{\alpha} \bar{p} + C_{3} \bar{\beta} \bar{p} + C_{4} \bar{\gamma} \bar{p}.
$$

Because our model is discrete, and $p, \alpha, \beta, \gamma$ fluctuate randomly depending on the arrangement of passengers within the group, $dT/dp$ cannot be represented accurately in continuous form. However, by taking a Riemann sum with $\Delta p = 1$ person, $T(p)$ can be computed as

$$
T(p) = \sum_{n=1}^{p} \left( C_{1} p_{n} + C_{2} \alpha_{n} + C_{3} \beta_{n} + C_{4} \gamma_{n} \right) \Delta p,
$$

where the subscript $n$ denotes the values for the $n$th passenger of the group.

# Simulation Algorithm

Our simulation models boarding as a queue while mathematically modeling human behavior. The dynamic simulation can compute aircraft boarding times for different grouping configurations with no modification to the code. The model also can loop through several different configurations and can iterate each configuration to acquire an average seating time with error bounds.
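The group-time Riemann sum defined earlier can be sketched directly in code. This is a minimal illustration, not the authors' Java implementation; the constants $C_1,\dots,C_4$ and the per-passenger tuples are hypothetical placeholders (the paper notes that such constants would require experimental calibration).

```python
# Hypothetical constants C1..C4, in abstract time units.
C1, C2, C3, C4 = 1.5, 1.0, 0.5, 0.2

def group_boarding_time(passengers):
    """Discrete Riemann sum T(p) = sum_n t_n with dp = 1 passenger.

    Each entry of `passengers` is (p_n, alpha_n, beta_n, gamma_n):
    people to bypass in the aisle, seated people who must move,
    luggage units already in the bin, and rows to traverse.
    """
    return sum(C1 * p + C2 * a + C3 * b + C4 * g
               for p, a, b, g in passengers)
```

For a two-passenger group, for example, `group_boarding_time([(0, 0, 0, 10), (1, 1, 1, 10)])` sums the marginal times of the two passengers.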
Because the problem assumes that passengers are not arranged within the group, passengers are randomly shuffled within the groups, and each run yields slightly different results.

# Running the Model

Parameters are the number of rows, number of seats per row, location of the aisle, number of groups, group configuration file, and number of iterations over which to average the solution (in most cases, 1,000).

# Late Passengers

We assume that a specified percentage of randomly selected passengers arrive late.

# Group Boarding Time

The total time to board the aircraft is related to the time to board each group, which in turn is based on the time to board each individual within that group and on interactions between consecutive boarding groups. There are two cases for which we make provision:

- (the more common case) A waiting condition occurs when a boarding group is directly adjacent to, or farther back in the aircraft than, the previously boarded group. In this case, the second group waits until the previous group is seated. The total boarding time gains the previous group's seating time minus the time required for the latter group to approach the previous group.
- The other condition occurs when there is a gap between two consecutive boarding groups. This is known as a buffer condition, and the time added to the total is the entering time of the previous group.

The two exceptions are the first and last groups. The first group initially has no interactions with any other group and does not itself contribute to the total time; its time factors in once the second group's interaction is determined. The last group's interaction is determined using one of the two cases above, and then the time for the final group is added to the total time.
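The two interaction cases can be sketched as follows. This is a simplified illustration under assumed field names, not the authors' Java code; the per-group times would come from the group boarding time model.

```python
def total_boarding_time(groups):
    """Combine per-group times under the two interaction cases.

    Each group is a dict with hypothetical fields:
      seat_time     - time for the group to finish seating
      approach      - time for the next group to walk up behind this one
      enter_time    - time for this group to enter when buffered
      overlaps_next - True if the next group must wait behind this group
    """
    total = 0.0
    for g in groups[:-1]:
        if g["overlaps_next"]:           # waiting condition
            total += g["seat_time"] - g["approach"]
        else:                            # buffer condition
            total += g["enter_time"]
    total += groups[-1]["seat_time"]     # the final group seats in full
    return total
```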

# Results

After considering the implementation of buffering zones, block boarding methods, and variations on column-boarding techniques, we were able to use our model to identify the most efficient boarding schemes. When cabin configurations are kept constant between different-sized aircraft, different boarding schemes are better suited to different sizes of aircraft. We show the results for midsize aircraft in Figure 1.

![](images/d9de48537c3adfba74e0f6d7e918a28ad570af63f42e8cdf46fa50d8447e26ba.jpg)
Figure 1. Boarding times for various boarding methods, for a 240-passenger plane.

# Back-to-Front Block Boarding

Passengers entering the aircraft in four blocks take twice as long to board as the same number of passengers boarding with a five-row buffer zone. As passengers board the airplane, congestion from passengers heading toward the back of the airplane and stowing luggage slows the advance of entering passengers. Because we assume that a passenger requires 1.5 time steps to maneuver past another in the aisle, group seating time rises accordingly. This prolonged seating period prevents the next group from entering, since members of different groups cannot interfere with one another. With a buffer zone, congestion is reduced: As the first group advances toward the rear of the airplane and proceeds to take seats, the second group enters, leaving a five-row buffer zone, allowing the second group to be seated as well without any additional interference from the first group, consequently reducing the overall boarding time.

# Half-Block Boarding

A variation of back-to-front is the half-block boarding method, which uses the aisle to divide the passengers into more groups. Because it is more organized, it is nearly twice as efficient as the traditional block boarding system.

# Column Boarding

Column boarding is very inefficient if each column forms a single boarding group, so that the number of groups equals the number of columns in the plane. Column-filling with an unbuffered group-boarding scheme yields only marginally better results. Only when the columns are themselves broken into smaller groups are the advantages of column-filling evident. Column-filling completely eliminates aisle congestion caused by seat crossovers. Incorporating segmented columns into our model decreases boarding time, because increasing the number of groups enhances the order of the scheme.

# Reverse-Pyramid

A hybrid system based on a back-to-front rotating column arrangement, seating passengers from the window seats to the aisle seats, is dubbed the reverse-pyramid. A simulation based on this scheme shows only marginal increases in boarding time compared to block boarding from back to front. The fundamental principle behind reverse-pyramid is reducing aisle congestion by permitting groups to enter in staggered column configurations that minimize seat crossings. A significant drawback, however, is that for it to be most effective, each column that is sent in must be further sectioned. Most airlines use a six-group configuration that most resembles a "V" as the airplane begins to fill up. So if a more robust model is desired, further complication is needed in properly dividing the entering segments of the reverse pyramid.

# Buffer Zone Implementation

Schemes that incorporate a buffer are sensitive to most variables. Buffer systems are successful in small and midsize aircraft, because more travelers can be boarded without interfering with other groups; however, the introduction of late arrivals into a buffer system causes the scheme to fail.

# Back-to-Front Boarding

This most common strategy is, ironically, the most undesirable.
As passengers enter the plane, congestion immediately builds from back to front as passengers in the back must pass others in the aisles, wait to stow luggage, and maneuver past other passengers to access a seat.

# Hybrid Boarding Methods

The most effective boarding scheme is one that encourages the simultaneous boarding of passengers from window to aisle and from back to front of the aircraft, while incorporating a buffer. Due to the ordered nature of this scheme, the buffer zone is affected very little by late passengers. Our hybrid scheme has virtually no aisle congestion due to seat crossovers and luggage stowage, because passengers file in from the windows toward the aisle and from back to front. Late passengers are treated as an independent group of their own and hence do not interfere with groups currently boarding.

# Deboarding

Since passengers are already in an ordered system, aircraft deboarding lends itself well to random passenger exiting. There will be no congestion caused by passengers crossing others within the same row, because everyone has the same incentive to deboard the airplane. Our results show that a more ordered system boards the fastest; therefore, the most ordered system will deboard in the most efficient manner.

# Tardiness

Buffer systems are sensitive to tardiness, since each successive late passenger tends to increase the boarding time. Block boarding methods actually experience an increase in efficiency when late passengers board! This results from an absence of congestion that would otherwise be present; however, block boarding methods are the least efficient methods in general. Any improvement on the block boarding scheme merely makes an inefficient system slightly more efficient; it is still not preferable to the other methods available.

# Sensitivity

# Random Passenger Order within Groups

There is a weak inverse power relation between the number of groups and the difference between the maximum and minimum boarding times relative to the average boarding time. This relationship can be understood intuitively: As the number of groups increases, the randomness decreases, until the number of groups equals the number of passengers. In that case, there is no randomness; every passenger is sent directly to their seat, and every boarding simulation will give the same result. Thus, strategies with more groups have greater predictability.

Taking the number of groups for each strategy into consideration, random entry within groups affected our strategies in the following ways:

- Block methods have a minimal number of groups to board; thus, the method presently used by airlines is unpredictable.
- Column-seating methods have minimal group numbers and, when taking group number into account, high predictability. The goal of column-seating is to minimize time cost by minimizing the row congestion that results from passengers entering the rows out of order. In other words, column-seating is designed to be resistant to random entry order.
- Buffering systems are by far the most sensitive to random passenger order. This is no surprise, because the benefit of the buffer system is maximized when the overflow from a leading group fills exactly the rows up to where the following group's seating begins. As random amounts of passenger bypassing increase or decrease, congestion due to the leading group increases or decreases as well, resulting in more or less overflow and thus decreasing the efficiency.
- Pyramid schemes show predictability greater than block seating but less than column-seating. This may be due to the column-oriented advantages shared with column-seating, diminished by the increase in group size.
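The intuition that more groups means less randomness can be checked with a toy Monte Carlo experiment. This is not the authors' simulator; it is a hypothetical stand-in in which each out-of-order pair within a group (an inversion) costs one unit of row congestion.

```python
import random

def toy_boarding_time(n_passengers, n_groups, rng):
    """Split passengers into equal groups, shuffle the entry order within
    each group, and charge one time unit per out-of-order pair, a crude
    proxy for row congestion."""
    size = n_passengers // n_groups
    time = 0
    for g in range(n_groups):
        entry = list(range(g * size, (g + 1) * size))
        rng.shuffle(entry)
        time += sum(1 for i in range(len(entry))
                      for j in range(i + 1, len(entry))
                      if entry[i] > entry[j])
    return time

def spread(n_groups, trials=200, n=120, seed=0):
    """Max minus min boarding time over repeated shuffles."""
    rng = random.Random(seed)
    times = [toy_boarding_time(n, n_groups, rng) for _ in range(trials)]
    return max(times) - min(times)
```

With a single group the spread of simulated boarding times is large; with one group per passenger it vanishes, mirroring the predictability argument above.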

# Airplane Configurations

Typically, efficient strategies remain efficient regardless of plane configuration. Differences in relative efficiencies are noticeable but generally negligible, with some exceptions.

The primary exception is buffer strategies. Buffer-strategy effectiveness depends on congestion overflow into the aisle, which in turn depends on plane configuration. Column-seating methods are also a surprising exception. In one simulation of boarding a small plane of 208 passengers, column-boarding performed the worst of any strategy. It also performed poorly in a midsize plane of 300 passengers, whereas it performed well in an airplane of only 240 passengers. These results are a direct consequence of plane configuration; while uncertainties in column-boarding strategies are noticeable, they are not enough to influence this trend, which is based on average times over 1,000 boarding trials.

# Late Passengers

Inevitably, some passengers arrive late. We assume that all late passengers are called to board as a final group, to be seated without a strategy. With this in mind, we can analyze the sensitivity of different strategies to the number of late passengers. Late passengers have the following effects (see Figure 2):

- Block methods become more efficient as passengers arrive late. As block size increases, block methods become maximally efficient at a higher rate of late passengers. The following example illustrates the inefficiencies of block methods and the magnitude of the effect that late passengers have on time efficiency: In midsize aircraft (240 passengers), blocks of 10 to 20 rows have minimal boarding times when late-passenger percentages are $21\%$ to $35\%$. The boarding time of every block method tested decreases as the number of late passengers increases from zero. Not only does the boarding time decrease, but some block configurations become as much as $15\%$ faster at the late percentages yielding maximum efficiency.

- Column-seating becomes slightly more efficient as the rate of late passengers increases; overall, column-seating is resistant to late arrivals.
- Buffering methods are the most susceptible to late passengers, who decrease utilization of the buffer rows. Adding rows that are not being seated or used to store overflow decreases efficiency. The efficiency of buffer systems quickly decreases when parameters change.
- Pyramid schemes act much like column-seating with respect to late passengers. This is no surprise, because pyramid schemes are a form of column-seating.

# Buffer Size

The buffer system is the most sensitive of all the seating strategies. Counterintuitively, it can be one of the most efficient methods if the buffer size is chosen properly. If the buffer is too small, the following group waits. If the buffer is too big, the rows between the last overflow passenger and the first passenger of the following group are not utilized. Rows not utilized for seating or storing overflow decrease efficiency. Because buffer size depends on the overflow of a leading group due to congestion, and the size of the overflow depends on plane configuration and flight capacity, the efficiencies of buffer systems are extraordinarily delicate.

# Vacant Seats

The percentage of vacant seats has little effect on relative strategy efficiencies.

![](images/4746c2a1dc00d39eb03ed2f6b146a47ff44c7165c268bb855afa61db5f03ca4a.jpg)
Figure 2. Effect of late passengers on boarding time, for various boarding methods.

# Conclusion

# Future Work

We suspect that, if groups of passengers were allowed to pass each other, the resulting aisle congestion could be modeled as a constant times the number of people in the group divided by the number of rows the group spans.

Realistically, passengers may not appreciate column-seating because it separates parties. Further complexity can be added to our model to allow parties to sit together.
To this end, we could test whether airlines truly must trade customer convenience for the benefits of column-seating. We strongly suspect that allowing parties to board together does not undermine the benefits of column-seating.

Due to the sensitivity but great potential of buffer systems, quickly applicable algorithms for determining optimal buffer zones would be valuable. These algorithms would depend on group size, expected tardiness, expected carrying capacity, and plane configuration.

Additionally, it would be useful to investigate the effects of adding preliminary order to passengers waiting in the terminal. Using a calling system (such as colored signs) to add preliminary order in the terminal could allow for greater order entering the plane without customer confusion and inconvenience.

# Closing Remarks

Not only are presently employed block strategies inefficient, but they are usually the most inefficient strategies. They lack the order needed to minimize row congestion, and they facilitate the accumulation of aisle congestion.

The best method for boarding airplanes is to board primarily from window to aisle, secondarily from back to front, and furthermore to use a buffer system. We assume that preliminary order can be applied to passenger groups still in the terminal, to alleviate the large number of groups required to employ our strategy. Our method is resilient to random entry within groups and is thus predictable compared to other methods. We assume that predictability is heavily valued by airlines and recommend our strategy for that reason as well as for maximum time efficiency.

# References

Employment and Disability Institute. 2007. Disability statistics: Online resource for U.S. disability statistics. http://www.ilr.cornell.edu/edi/disabilitystatistics/.
Ferrari, Pieric, and Kai Nagel. 2005. Robustness of efficient passenger boarding in airplanes.
*Transportation Research Record: Journal of the Transportation Research Board* 1915: 44-54. http://fgvsp01.vsp.tu-berlin.de/biblio/53/01/15nov04.pdf.
Finney, Paul Burnham. 2006. Loading a plane is rocket science. *New York Times* (14 November 2006). http://travel2.nytimes.com/2006/11/14/business/14boarding.html.
U.S. Department of Transportation. 2007. Bureau of Transportation Statistics. http://www.bts.gov.
van den Briel, Menkes H.L., J. René Villalobos, and Gary L. Hogg. 2004. The aircraft boarding problem. In *Proceedings of the 12th Industrial Engineering Research Conference (IERC-2003)*, No. 2153, CD-ROM. http://www.public.asu.edu/~dbvan1/papers/IERC2003MvandenBriel.pdf.

![](images/10a415d8876a0d20a99c88ac66337be7eca2d557725aab81cc0f2f6db7909b44.jpg)

Eric Hardin, Kevin Sobczak, and Bradley Kirkwood.

# Modeling Airplane Boarding Procedures

Bach Ha

Daniel Matheny

Spencer Tipping

Truman State University

Kirksville, MO

Advisor: Steven J. Smith

# Summary

We describe two models that simulate the process of passengers boarding an aircraft and taking their seats. Using these models, we simulate common boarding procedures on popular aircraft to analyze efficiency. The second model is more ambitious and tries to model the situation more accurately, but even the first addresses the major problems involved in boarding an airplane.

From running the simulations and analyzing the data, we find that the fastest and most consistent procedures are outside-in and reverse-pyramid. Both allow those closest to the windows to be seated first and proceed inward (though reverse-pyramid is slightly more complex). Reverse-pyramid is slightly faster.

# Introduction

It would seem that the quickest way to load passengers onto a plane would be simply to line all the passengers up in order of seat assignment, starting with the back-row window seats and working up to the front-row aisle seats, and march them onto the plane in that order. However, this is far from "simple"; the logistics would be extremely difficult to manage, not to mention that customers would dislike being forcibly lined up.

The response of airlines has been to try to control the randomness in passenger boarding order by seating passengers in groups, thereby localizing the disorder to a particular part of the plane. The traditional approach is back-to-front boarding, where a certain number of rows are seated, starting from the back and working forward. Other procedures include:

- **Outside-in:** Also called WilMA (for Window, Middle, Aisle): passengers with window seats are seated first, followed by those with middle seats, and finally those with aisle seats.
- **Rotating-zone:** Similar to back-to-front, except that after a set of rows in the back of the plane is seated, a set of rows in the front is seated. Back rows and front rows are alternated until the plane is full.
- **Reverse-pyramid:** Reverse-pyramid resembles a mix of outside-in and back-to-front, giving preference to seats as far back and to the outside as possible. First seated would be the back half of the window seats, then the back middle seats and the front window seats, then the back aisle seats and the front middle seats, and finally the front aisle seats.
- **Random:** Some airlines purposely do not try to control the order of passengers on the plane. Random seating can be done with or without some seats assigned. Often the plane is still boarded in stages, with the passengers lumped into groups by check-in time or by another method.

Much research has been done to determine which procedure is fastest. While some studies have been analytic in nature [Bachmat et al.
2006a, 2006b; Van den Briel et al. 2003], most have adopted the approach of simulating the phenomenon [Ferrari and Nagel 2004; Bazargan et al. 2006; Van den Briel n.d.]. One problem with available simulations is that they focus on at most one plane size and type. In particular, in many models, a small plane with one aisle and three seats on each side of the aisle is taken to be representative of all aircraft. In reality, most planes that carry more than a small number of passengers have two aisles, and some have two floors as well. + +We describe a model that aims to address these problems. We simulate aircraft boarding for any size plane (number of passengers), any layout (number of aisles, number of seats in each row), and most importantly, any order for seating passengers. We use the model to estimate the relative efficiency of various boarding procedures. + +# Motivating the Model + +The process is slowed when boarding passengers have to interact, events we call interferences. Aisle interference occurs when a passenger cannot continue down the aisle to the seat because the aisle is blocked. Seat interference occurs when a passenger can only reach the seat by going past already occupied seats. Seat interferences increase the time it takes to sit down and may also lead to prolonged aisle interference if the people currently sitting down must get into the aisle to let a person in. + +# Assumptions + +- The plane is full. While this is not always true, if a plane is far below capacity, any boarding method will probably work well. +- All passengers are in the economy class and there are no special-needs passengers. First-class passengers pay a premium price and expect to be seated first. We model only the seating of economy passengers. Likewise, we do not take into account pre-boarding by passengers with special needs such as the disabled, the elderly, and those traveling with small children. +- There are no late passengers. 
All passengers in a particular boarding group get in line to board as soon as they are called. +- All pieces of carry-on luggage are the same and there is always enough room for them. Passengers enter the plane with a randomly assigned number of bags (within a reasonable range such as 0-3). If there are already passengers seated in a particular row, it may take longer for an arriving passenger to stow bags, but the passenger can eventually do so. +- No one can pass a passenger in the aisle. This is the principal cause of aisle interference. In reality, a passenger might be able to squeeze past another, but we do not allow this. This is a reasonable assumption because in general passing a person in the narrow aisle of an airplane is a difficult and slow task anyway. +- There is only one entrance. Some airlines allow boarding from two entrances, but the majority of airlines have only a single entrance. Moreover, planes that allow multiple entrances tend to be small, where the boarding is already not as difficult as in large planes. +- All seats are assigned. This assumption primarily affects the random boarding procedure, for which we assume that the order of passengers entering the plane is completely random but each passenger has a unique seat that they are headed for when they enter. In unassigned random seating, there is the added problem that the passengers likely do not have a specific seat in mind when they get on the plane. When they enter, they head to either a seat that they consider "desirable" or to a seat that is easy to get to. We do not model this choice process. +- Every passenger sits in their assigned seat. Any seat-switching happens after take-off and so does not affect the boarding time. +- Deboarding is always the same. This is probably the most significant assumption that we make. 
We assume that the passenger unloading process will be the same no matter what the boarding process was, or will be the reverse of the boarding process, an assumption that seems to match the way airlines currently operate.

# Estimating Parameters

There are few data for the speed at which passengers board planes, the time that it takes to stow luggage, and so on (though Bazargan et al. [2006] have some estimates). Thus, we had to estimate these quantities ourselves; consequently, the boarding times from our model probably do not correspond directly to actual times. Our model strives to compare boarding procedures, which can be done by taking a standard set of "reasonable" values. We later discuss what happens to the model when parameter values are varied.

# Modeling Airplane Boarding Procedures

We built two separate simulation models, which we refer to as the Array Model (AM) and the Graph Model (GM), both implemented in Python. In general, the AM is more simplistic, while the GM tries to simulate the situation more accurately.

In the Array Model, the plane is represented internally by a two-dimensional array. Some columns of the array are aisles, and the rest are seats. The seats are either occupied or unoccupied, as are the aisle cells. In this model, a person in the aisle completely blocks all the people behind. The AM can simulate different boarding procedures as well as different plane sizes and types, but with only one type of plane geometry at a time.

The Graph Model represents the plane internally by a graph, where the nodes and edges are weighted to represent the delay associated with crossing that node or traversing that edge. While more complicated, the GM is more flexible, allowing passengers to pass one another in the aisle (subject to a corresponding time delay, of course) and allowing for different plane geometries in different sections of the plane.
This gives the possibility of modeling first-class seating as well, where the seating structure is different from economy seating. Also, in larger planes there is a second level of seating, which might have a different configuration from the primary level.

In the following two sections, we give the details of the two models and the relevant parameters in use. Then we give results for the tests that we did using the AM and the GM.

# The Array Model

The array model is inspired by the Game of Life, devised by mathematician John Horton Conway. We treat the layout of seats and aisles as a matrix of four different values: occupied and unoccupied seats, and occupied and unoccupied locations in aisles. The only objects that interact with this matrix are passengers. By moving up and down, right and left, inside the matrix, passengers change cell values in the matrix. Of course, passengers cannot move freely; they must follow certain rules that depend on the current layout of the matrix.

# Parameters

The AM is based on several parameters. To make the model more faithful to the real-world situation, we assign most parameters a normal probability distribution, so that they vary slightly from run to run.

- Interval boarding time: The time for the airline staff to check a passenger's boarding pass. Default: mean 4 s, standard deviation 1 s.

- Time sensitivity: The time step at which our model updates itself. The default value is 0.25 s.

- Luggage stowing time: This value depends strongly on the number of passengers already seated in the row (whose luggage is already stowed). The more passengers, the longer it takes for a new passenger to stow luggage. By default, the mean time and its standard deviation are 4 s and 1 s for an empty row, 8 s and 2 s for a row with one passenger, and 14 s and 3 s for a row with more than one passenger.
(In all the airplanes that we examined, no seat has more than two seats between it and an aisle, so we need to consider at most two passengers already seated.)

- Seating time: Like the luggage stowing time, this value depends greatly on the number of passengers already there. By default, we set the mean time and its standard deviation to 3 s and 1 s for an empty row, 7 s and 2 s for a row with one passenger, and 17 s and 3 s for a row with more than one passenger.

The behavior of the model as it handles seat and aisle interference can be seen best by examining a screenshot of the AM simulating a plane using the back-to-front procedure (Figure 1).

# The Graph Model

The graph model builds the airplane seating from a graph of nodes, each connected bidirectionally to applicable adjacent nodes in the four cardinal directions. Each node may contain an occupant. Aisles have connections in all directions; seats have connections horizontally.

Passengers are tracked as they pass through the plane. Each is randomly assigned, with uniform probability, zero, one, or two carry-on bags that must be stowed before the passenger is seated. It is assumed that the passenger takes the aisle closest to the assigned seat in every case, crossing no more seats than is absolutely necessary.

![](images/d5177951efa5029f02ce9a84558f7c2b19ced18aa929113706da4b99a96328f2.jpg)
Figure 1. Screenshot for the AM model.

Storage bins are considered shared among several rows, usually two or three. The time required to load one additional bag into an overhead bin is proportional to the square root of the number of bags currently there. Thus, the time to load all the bags is on the order of the $3/2$ power of the number of bags.

Because of the structural flexibility present in the model, it can emulate planes with inconsistent geometries, such as the Boeing 767-400 with 2-2-2 seating in the front and 2-3-2 in the back.
We can also, although with more difficulty, implement planes with two floors, such as the Airbus 380.

The graph model can use a smaller sample size because of its recomputation of random data: Every time a node's delay is computed, it is re-randomized; thus, a single run incorporates a broad sample of random data. For this reason, we consider 200 runs per configuration sufficient to represent accurately the performance of a configuration.

# Parameters

- Aisle-aisle movement delay: How long it takes a person to move one node along an aisle. Default: 2 s.
- Aisle-seat movement delay: How long it takes to move from an aisle to a seat. Default: 3 s.
- Seat-aisle movement delay: How long it takes to move from a seat to an aisle. Default: 3.5 s.
- Seat-seat movement delay: How long it takes to move from one seat to another. Default: 7 s.

# Strengths

One strength is accurate simulation of shared luggage bins: A passenger loading a bag into a bin two rows ahead may influence the loading time of a piece of luggage elsewhere. In addition, luggage bins are shared by both sides of an aisle, which accurately models people's tendency to put luggage on either side of the aisle.

Another is that aisle spaces are allocated for people moving across already-taken seats, simulating the requirement that everyone clear the occupied seats for the newcomer to move in. This accentuates the effectiveness of modifications to the strategies, such as the even-odd variation or the staggered variation.

A third strength is that if there is an aisle, an empty seat, a filled seat, and the target seat, in that order, and a passenger moves into the empty seat on the way to the target seat, the passenger can get to the target seat only when the aisle is clear, according to the rationale that all swapping must be done through an aisle.
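The square-root bin-loading rule described above can be sketched as follows. The unit base cost and the constant `c` are hypothetical, chosen only to illustrate the $3/2$-power growth of total loading time.

```python
import math

def bin_load_time(n_bags, c=1.0):
    """Total time to load n bags into one shared bin: stowing a bag on
    top of k bags already present costs a base unit plus c*sqrt(k)."""
    return sum(1.0 + c * math.sqrt(k) for k in range(n_bags))

# Since sum_{k<n} sqrt(k) grows like (2/3) n^{3/2}, the total loading
# time is on the order of the 3/2 power of the number of bags.
```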
# Weaknesses

One weakness of the model is that it does not simulate people traveling very far to get to an empty luggage bin. Rather, luggage bins are assumed to have unlimited capacity, and people do not prefer bins with smaller loading delays.

Further, in our model, when a passenger enters the aisle to make room for someone who needs access to an inner seat, the passenger does not move toward the front of the plane; if the cell toward the back cannot be taken, then there is extra delay. This is somewhat inaccurate.

# Using the Model

# Plane Configurations

To analyze how the model reacts to different types of planes, we developed several plane configurations based on actual popular plane configurations.

# Small Planes:

- S1: 3-3 (three seats, an aisle, and three more seats), 23 rows, 138 seats. (Based on the Airbus 320)
- S2: 2-3-2, 25 rows, 175 seats. (Based on the Boeing 767-200)

# Midsize Planes:

- M1: 2-3-2, 35 rows, 245 seats. (Based on the Boeing 767-400)
- M2: 2-4-2, 40 rows, 320 seats. (Based on the Airbus A300-600)

# Large Planes:

- L1: 3-4-3, 40 rows, 400 seats. (Based on the Boeing 747)
- L2: 3-4-3, 40 rows on the bottom level; 2-4-2, 30 rows on the top level; 640 seats total. (Based on the Airbus 380)

# Array Model Tests

For each plane configuration, we ran the simulation for the following boarding styles: back-to-front (BF), outside-in (OI), reverse-pyramid (RP), random (R), and rotating-zone (RZ). Data were gathered for 1,000 runs of each procedure. Shown in Figure 2 are a boxplot and the mean and standard deviation for the two planes of each size and each boarding procedure.

![](images/1c8c20e628a9bb9fdd7187e09643963b2aba1bb4258108d1790d68ab78c71c72.jpg)
Figure 2. Results based on the array model: Small planes.

Plane S1:
| Procedure | Mean | SD |
| --- | --- | --- |
| BF | 919 | 77 |
| OI | 580 | 12 |
| RP | 578 | 14 |
| R | 743 | 58 |
| RZ | 701 | 10 |
+ +![](images/8c86e7fd68761f34cc412d2123f66bb54e2535f460eb8d9cb5a39bdeefcd7e9d.jpg) +Plane S2: 2-3-2, 25 rows, 175 Seats (Boeing 767-200) + +Plane S2: + +
| Procedure | Mean | SD |
| --- | --- | --- |
| BF | 702 | 10 |
| OI | 707 | 5 |
| RP | 697 | 5 |
| R | 720 | 10 |
| RZ | 701 | 10 |
+ +Except for Plane S1, where it performed horribly, back-to-front boarding actually performed quite well. This suggests that the conventional wisdom of the airline carriers is well-founded. Random boarding did not perform particularly well on any configuration, contrary to the opinions of airlines that are beginning to adopt it. The only procedures that consistently performed the + +![](images/1fc80ee809e343712d9b1ea84498e95486ea723cb891d109190a2d151fee720a.jpg) +Plane M1: 2-3-2, 35 rows, 245 seats (Boeing 767-400) + +Plane M1: + +
| Procedure | Mean | SD |
| --- | --- | --- |
| BF | 974 | 10 |
| OI | 989 | 5 |
| RP | 978 | 5 |
| R | 1001 | 10 |
| RZ | 974 | 10 |
+ +![](images/cae61d114872aa7cdb0065ee4ce8aecd2294297a4a3a93625c10c08087ef168e.jpg) +Plane M2: 2-4-2, 40 rows, 320 seats (Airbus A300-600) + +Plane M2: + +
| Procedure | Mean | SD |
| --- | --- | --- |
| BF | 1279 | 12 |
| OI | 1290 | 5 |
| RP | 1279 | 5 |
| R | 1309 | 12 |
| RZ | 1290 | 16 |
+ +![](images/a287e47212fec7ed2bd1bb4a93b13973c24a7e8a09f9be10039e307c69029cd7.jpg) +Plane L1: 3-4-3, 40 rows, 400 seats (Boeing 747) +Figure 2 (continued). Results based on the array model: Midsize and large planes. + +Plane L1: + +
| Procedure | Mean | SD |
| --- | --- | --- |
| BF | 1636 | 30 |
| OI | 1611 | 7 |
| RP | 1605 | 6 |
| R | 1666 | 29 |
| RZ | 1661 | 33 |
+ +![](images/2d9af428745187c8307aa6a53b245546101bb30b0f200c71e0a0126c58fac18a.jpg) +Plane L2: 3-4-3, 40, 2-4-2, 30, 640 seats (Airbus 380) + +Plane L2: + +
| Procedure | Mean | SD |
| --- | --- | --- |
| BF | 2593 | 15 |
| OI | 2598 | 5 |
| RP | 2592 | 5 |
| R | 2624 | 17 |
| RZ | 2596 | 17 |
fastest were outside-in and reverse-pyramid, with reverse-pyramid having a slight advantage.

However, the most interesting aspect of the data is not the mean times but rather the standard deviations. Outside-in and reverse-pyramid consistently had standard deviations almost half those of the other procedures. This is very important: Besides wanting to make boarding as fast as possible, airlines need to keep on schedule, so they need not just the fastest procedure but the most consistently fast one. With this in mind, outside-in and reverse-pyramid seem the clear winners according to the AM.

# Graph Model Tests

As with the AM, we ran the five boarding strategies on each of the six plane configurations (Figure 3). Since the GM includes more randomization internally, we ran each simulation only 200 times, which still gives a $95\%$ confidence interval about the mean with a radius of less than five time units, which is enough precision for our purposes.

![](images/38038b5c70ae0e0d1ea270e32a8f3641d25335191eee80a9e1c11bad4f84497c.jpg)
Figure 3. Results based on the graph model: Small planes.

Plane S1:
| Procedure | Mean | SD |
| --- | --- | --- |
| BF | 1479 | 74 |
| OI | 1103 | 22 |
| RP | 1081 | 22 |
| R | 1326 | 58 |
| RZ | 1575 | 80 |
+ +![](images/180926490105408f73f047a2bc2e8c71c24b47a44a1fc1d6ff2d61c9cc54bb95.jpg) + +Plane S2: + +
| Procedure | Mean | SD |
| --- | --- | --- |
| BF | 1712 | 21 |
| OI | 1749 | 21 |
| RP | 1715 | 20 |
| R | 1773 | 29 |
| RZ | 1760 | 28 |
+ +The data generated by the Graph Model are not easily interpretable. As in the Array Model, back-to-front does quite well; outside-in does not do quite as well as before. However, reverse-pyramid is still the best boarding strategy. + +![](images/d05a0225162dc812928fd1a2784d641ee4102b5ec918f7fbd13a8696d4989db8.jpg) +Plane M1: Boeing 767-400 + +![](images/27d3bacb7e05c5b8f021725517bf3e522d86386f30b425193c7709b4c8c2e515.jpg) +Plane M2: Airbus A300-600 + +Plane M1: + +
| Procedure | Mean | SD |
| --- | --- | --- |
| BF | 1710 | 21 |
| OI | 1749 | 22 |
| RP | 1716 | 19 |
| R | 1771 | 25 |
| RZ | 1756 | 27 |
+ +Plane M2: + +
| Procedure | Mean | SD |
| --- | --- | --- |
| BF | 3063 | 31 |
| OI | 3112 | 29 |
| RP | 3062 | 24 |
| R | 3142 | 33 |
| RZ | 3117 | 27 |
+ +![](images/70d325dce21c2528914d17c6446997ea38f4c4380980e46c33dae263dfee01ad.jpg) +Plane L1: Boeing 747 +Figure 3 (continued). Results based on the graph model: Midsize and large planes. + +![](images/d6f9f7eadf89b479e705519e223afb09aa452021df7b535989ac2ab06185b3f8.jpg) +Plane L2:Airbus 380 + +Plane L1: + +
| Procedure | Mean | SD |
| --- | --- | --- |
| BF | 3098 | 38 |
| OI | 3093 | 27 |
| RP | 3050 | 25 |
| R | 3161 | 37 |
| RZ | 3141 | 3 |
+ +Plane L2: + +
| Procedure | Mean | SD |
| --- | --- | --- |
| BF | 4862 | 30 |
| OI | 4885 | 31 |
| RP | 4849 | 31 |
| R | 4927 | 39 |
| RZ | 4913 | 37 |
Reverse-pyramid remains the best boarding strategy according to this model. The only plane for which it performed worse than another strategy was the M1 plane, where it was beaten by back-to-front by less than 7 time units. The standard deviations for reverse-pyramid are in general less than those of the other strategies, though not by nearly as much as in the Array Model.

One very perplexing aspect of the Graph Model data is the actual numerical values returned. Even more so than the Array Model, the Graph Model has not been tuned to actual time, so the time units in the results cannot really be taken as seconds. Still, planes S1, S2, and M1 all have simulation values in the range of 1,000 to 1,800 despite different plane sizes, yet the values for M2 and L1 jump dramatically to the range of 3,000 to 3,300. We had expected more gradual growth as plane size increased.

# New Boarding Strategies

One benefit of the flexibility of the Graph Model is that we can devise our own boarding strategies and see how they compare with common ones. We tried taking an existing strategy and modifying it so that for a given group, the passengers in even-numbered rows board first, followed by those in odd-numbered rows. We also considered the "staggered" modification, which is similar to the even-odd modification, except that even-numbered rows are boarded on one side of an aisle and odd-numbered rows on the other side. These modifications apply on top of an existing strategy, so the original groupings remain. For instance, in OI.EO (outside-in, modified by even-odd), the even-numbered window seats board first, then the odd-numbered window seats, then the even-numbered middle seats, and so on.

These alternative strategies generally shave a few time units off the simulation time, but the relative results are the same. Shown in Figure 4 is a graph comparing the 15 boarding procedures on the M2 plane.
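The even-odd and staggered modifications amount to reordering each existing boarding group. A sketch in Python (our own illustration; the `(row, side)` record layout and the `"L"`/`"R"` side labels are assumptions, not part of the model as published):

```python
# Sketch of the even-odd (EO) and staggered modifications applied to one
# boarding group of an existing strategy. The original group order is kept;
# each group is merely split into two waves.

def even_odd(group):
    """Board the group's even-numbered rows first, then its odd-numbered rows."""
    evens = [p for p in group if p["row"] % 2 == 0]
    odds = [p for p in group if p["row"] % 2 == 1]
    return evens + odds

def staggered(group):
    """Board even rows on one side of the aisle together with odd rows on the
    other side, then the remaining seats (our reading of the description)."""
    first = [p for p in group if (p["row"] % 2 == 0) == (p["side"] == "L")]
    rest = [p for p in group if (p["row"] % 2 == 0) != (p["side"] == "L")]
    return first + rest
```

For OI.EO, each outside-in group (window, then middle, then aisle seats) would be passed through `even_odd` before boarding.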
# Evaluating the Model

# Array Model Sensitivity

To test the sensitivity of the Array Model, we ran the simulation 100 times each on the M2 configuration, for one value higher and one value lower than the default of each of interval boarding time, luggage stowage time, and seating time. The changes alter the boarding time, but we were primarily concerned with whether the qualitative assessment of the different procedures changes as well.

![](images/e8475e659869821d92d1ca4a0450868b0f3280f67510dcf8c860a4c6b7371b5e.jpg)
M2: Default, Even-Odd, and Staggered Strategies
Figure 4. Comparison of modified and original strategies.

# Boarding Time

We ran the simulation with parameter values of (2, 0.5) (that is, a mean of 2 and a standard deviation of 0.5) and of (7, 3), and compared them to the default value of (4, 1). The relationship among the outside-in, reverse-pyramid, and random procedures remains about the same; back-to-front and rotating-zone become comparatively faster as the boarding time increases (meaning that passengers are entering the plane at a slower rate). However, we must keep in mind that in the example where the mean boarding time is 7 (one passenger boards every 7 s), the total time to load the plane is over 2,000 s (more than 30 min). The difference in means is less than 1 min, which is not significant. What this means is that as the boarding time increases, the gains of one procedure over another become relatively smaller.

# Stowage Time

For luggage stowage time, we ran the model with parameter values (2, 0.5), (5, 1), (10, 2)—meaning (2, 0.5) for an empty row, (5, 1) for a row with one person already seated, and (10, 2) for a row in which two people are already seated—and with the values (8, 2), (14, 3), (20, 4), in addition to the default values (4, 1), (8, 2), (14, 3).

The model is not very sensitive to changes in luggage stowage time.
The times returned by the simulation rose slightly as stowage time increased, but not by much, especially compared to how the simulation times changed with boarding time.

# Seating Time

The default seating time is (3, 1), (7, 2), (17, 3); the other parameter values run were (2, 0.5), (4, 1), (8, 2) and (6, 2), (12, 3), (25, 5). As with stowage time, the Array Model is not very sensitive to changes in seating time.

# Graph Model Sensitivity

Sensitivity testing for the Graph Model involved choosing six parameters and running the simulation on each plane with each parameter low, high, and normal. More specifically, all but one parameter were kept normal, and that one parameter was set either low or high. Low was defined to be $50\%$ of normal and high was $175\%$. The parameters that we chose were:

- Seat-to-seat movement delay per node
- Aisle-to-seat movement delay per node
- Seat-to-aisle movement delay per node
- Aisle-to-aisle movement delay per node
- Luggage bin loading delay per bag
- Delay between successive passengers boarding the aircraft

We ran 25 tests for each configuration, a configuration being a setting of the six variables, a plane, and a loading system. Since the analysis for the Array Model showed that there is little variation, we tested only the original version of each boarding system. Although we found some outliers in the sensitivity analysis, we believe that the results are representative enough to draw our conclusions, given the degree of randomness internal to the model.

The primary source of variation in the Graph Model is the delay between passengers boarding the aircraft. Total boarding time is almost directly proportional to the time between individual passengers, suggesting that the main bottlenecks occur outside, rather than inside, the plane. When we decreased the mean boarding delay from $7\mathrm{s}$ to $3.5\mathrm{s}$, the average boarding time was reduced by nearly half.
This is the same situation as in the Array Model, where the largest variation was found by modifying the passenger boarding rate.

Other sensitivities were far less noticeable. The aisle-to-aisle transfer time has the next-largest impact on the result, but the variation of the boarding times is well within $15\%$ for a $50\%$ change in the variable. After aisle-to-aisle transfer time comes luggage loading time, whose variation is closer to $10\%$. The others quickly drop off to below $8\%$.

# Strengths and Weaknesses

# Array Model

The strength of the Array Model is mostly its conceptual simplicity. It represents a fairly simple view of an aircraft and its passengers. It is also easy to modify to accommodate different plane configurations and boarding strategies.

The main weakness is likewise its conceptual simplicity. A lot more could be done to model the aircraft boarding process more accurately. For instance, instead of having the parameters be decided according to a normal distribution at the beginning of each run, a more accurate version might give each individual passenger randomly chosen parameter values. Also, a more accurate model might relax some of the assumptions that we made, by allowing passengers to pass in the aisle in certain situations, allowing for late passengers, and modeling first- and business-class passengers.

# Graph Model

The purpose of the Graph Model was to address the major weaknesses of the Array Model. The Graph Model is even more flexible in allowing different plane geometries, handles passenger interactions more intelligently, and incorporates randomness at the individual level. However, it is not without its problems. It still treats some aspects of passenger behavior and luggage storage in a naive fashion.
More importantly, it is even less tuned to actual times than the Array Model, so it would take a large amount of effort to use the model to generate precise time estimates for the boarding procedures.

# Conclusions

Despite these problems, we feel that both our models capture the essence of the plane boarding process. From the Array Model data, we can conclude with fair confidence that the best boarding strategies are reverse-pyramid and outside-in, due to their fast times and low variation. However, from the results of the Graph Model, we had to revise our conclusions slightly regarding the outside-in strategy, which did not perform particularly well there. The reverse-pyramid strategy still performed best in the Graph Model, so it remains our primary recommendation.

Reverse-pyramid is a bit complicated to implement, so outside-in might still be a good strategy. We must also remember that traditional back-to-front boarding performed well too, so it might not be worthwhile for an airline to switch away from it. Further, a small speed increase can be gained by implementing an even-odd or staggered variation. For an airline trying to squeeze every last bit of efficiency out of its boarding procedures, a variation of the reverse-pyramid is the best bet.

# References

Bachmat, E., D. Berend, L. Sapir, S. Skiena, and N. Stolyarov. 2006a. Analysis of airplane boarding times. Working paper. http://www.public.asu.edu/~dbvan1/papers/orsubmit.pdf.

______. 2006b. Optimal boarding policies for thin passengers. Working paper. http://www.public.asu.edu/~dbvan1/papers/thin.pdf.

Bazargan, Massoud, Juan Ruiz, and Victor Cole. 2006. Aircraft boarding strategies: Simulation study. AGIFORS Airline Operations 2006 Conference. www.agifors.org/document.go?documentId=1586&action=download.

Demerjian, Dave. 2006. Airlines try smarter boarding. Wired News (9 May 2006). http://www.wired.com/news/technology/0,70689-0.html?tw=wn_index_1.

Ferrari, P., and K.
Nagel. 2004. Robustness of efficient passenger boarding in airplanes. http://www.vsp.tu-berlin.de/publications/airplaneboarding/15nov04.pdf.

Finney, Paul B. 2006. Loading an airliner is rocket science. New York Times (14 Nov 2006). http://travel2.nytimes.com/2006/11/14/business/14boarding.html.

van den Briel, Menkes H.L. n.d. Airplane boarding. http://www.public.asu.edu/dbvan1/projects/boarding/boarding.htm. Accessed 9 February 2007.

_____, J. René Villalobos, and Gary L. Hogg. 2003. The aircraft boarding problem. In Proceedings of the 12th Industrial Engineering Research Conference (IERC-2003), No. 2153, CD-ROM. http://www.public.asu.edu/~dbvan1/papers/IERC2003MvandenBriel.pdf. Accessed 9 February 2007.

# American Airlines' Next Top Model

Sara J. Beck

Spencer D. K'Burg

Alex B. Twist

University of Puget Sound

Tacoma, WA

Advisor: Michael Z. Spivey

# Summary

We design a simulation that replicates the behavior of passengers boarding airplanes of different sizes according to procedures currently implemented, as well as a plan not currently in use. Variables in our model are deterministic or stochastic and include walking time, stowage time, and seating time. Boarding delays are measured as the sum of these variables. We physically model and observe common interactions to accurately reflect boarding time.

We run 500 simulations for various combinations of airplane sizes and boarding plans. We analyze the sensitivity of each boarding algorithm, as well as the passenger movement algorithm, for a wide range of plane sizes and configurations. We use the simulation results to compare the effectiveness of the boarding plans. We find that for all plane sizes, the novel boarding plan Roller Coaster is the most efficient. The Roller Coaster algorithm essentially modifies the outside-in boarding method: The passengers line up before they board the plane and then board the plane by letter group. This allows most interferences to be avoided.
It loads a small plane $67\%$ faster than the next-best option, a midsize plane $37\%$ faster, and a large plane $35\%$ faster.

# Introduction

The objectives in our study are:

- To board (and deboard) planes of various sizes as quickly as possible.
- To find a boarding plan that is both efficient (fast) and simple for the passengers.

With this in mind:

- We investigate the time for a passenger to stow luggage and clear the aisle.
- We investigate the time for a passenger to clear the aisle when another passenger is seated between them and their seat.
- We review the current boarding techniques used by airlines.
- We study the floor layouts of planes of three different sizes, to compare the efficiency of a given boarding plan as plane size increases and layouts vary.
- We construct a simulator that mimics typical passenger behavior during the boarding processes under different techniques.
- We realize that little time savings is possible in deboarding while maintaining customer satisfaction.
- We calculate the time elapsed for a given plane to load under a given boarding plan by tracking and penalizing the different types of interferences that occur during the simulations.
- We suggest a new plan, as an alternative to the boarding techniques currently employed, and assess it using our simulator.
- We make recommendations regarding the algorithms that proved most efficient for small, midsize, and large planes.

# Interferences and Delays for Boarding

There are two basic causes of interference: someone blocking a passenger in an aisle, and someone blocking a passenger in a row. Aisle interference occurs when the passenger ahead of you has stopped moving and is preventing you from continuing down the aisle toward the row with your seat.
Row interference occurs when you have reached the correct row but already-seated passengers between the aisle and your seat prevent you from immediately taking your seat. A major cause of aisle interference is a passenger experiencing row interference.

We conducted experiments, using lined-up rows of chairs to simulate rows in an airplane and a team member with outstretched arms to act as an overhead compartment, to estimate parameters for the delays caused by these actions. The times that we found through our experimentation are given in Table 1.

Table 1. Delays caused by common boarding activities.
| Boarding activity | Time (s) |
| --- | --- |
| Walking 1 row of seats | 1 |
| Carry-on stowage | 6 |
| Clearing aisle when you must get by someone seated in the aisle seat | 4 |
| Clearing aisle when you must get by people seated in the aisle seat and adjacent seat | 4 |
| When person seated on the aisle must get up | 6 |
| When person seated in middle seat must get up | 6 |
| When two people must get up | 7 |
| When no one is in the aisle and you can squeeze by the middle person | 1 |
We use these times in our simulation to model the speed at which a plane can be boarded. We model separately the delays caused by aisle interference and row interference. Both are simulated using a mixed distribution defined as follows:

$$
Y = \max \{ 2, X \},
$$

where $X$ is a normally distributed random variable whose mean and standard deviation are fixed in our experiments. We opt for a distribution that is partially normal, with a minimum value of 2, after reasoning that common alternative distributions (such as the exponential) are too prone to produce unrealistically small values. We find that the average row interference time is approximately 4 s with a standard deviation of 2 s, while the average aisle interference time is approximately 7 s with a standard deviation of 4 s. These values are slightly adjusted based on our team's cumulative experience on airplanes.

# Typical Plane Configurations

Essential to our model are industry standards regarding common layouts of passenger aircraft of varied sizes. We use an Airbus 320 to model a small plane (85-210 passengers) and the Boeing 747 for a midsize plane (210-330 passengers). Because of the lack of large planes available on the market, we modify the Boeing 747 by eliminating the first-class section and extending the coach section to fill the entire plane, which puts the Boeing 747 close to its maximum capacity. This modified Boeing 747 has 55 rows, all with the same dimensions as the coach section in the standard Boeing 747. Airbus is in the process of designing planes that can hold up to 800 passengers; the Airbus A380 is a double-decker with an occupancy of 555 people in three different classes. We exclude double-decker models from our simulation because it is the larger, bottom deck that is the limiting factor, not the smaller upper deck.
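The interference-delay distribution described in this section (a normal draw floored at the stated minimum of 2 s, using the measured means and standard deviations) can be sampled as follows. This is our own sketch; the function names are ours, not the authors'.

```python
import random

# Sketch of the mixed interference-delay distribution: a normal draw with a
# hard floor of 2 s, so no unrealistically small delays are produced.

def interference_delay(mean, sd, floor=2.0):
    """One delay sample: normal with the given mean/SD, never below `floor`."""
    return max(floor, random.gauss(mean, sd))

def row_delay():
    """Row interference: mean 4 s, SD 2 s (values measured in the experiments)."""
    return interference_delay(4.0, 2.0)

def aisle_delay():
    """Aisle interference: mean 7 s, SD 4 s."""
    return interference_delay(7.0, 4.0)
```

Because the floor truncates the lower tail, the effective mean of the sampled delays sits slightly above the nominal mean of the underlying normal.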
# Current Boarding Techniques

We examine the following industry boarding procedures:

- random-order
- outside-in
- back-to-front (for several group sizes)

Additionally, we explore this innovative technique not currently used by airlines:

- "Roller Coaster" boarding: Passengers are put in order before they board the plane, in a style much like those used by theme parks in filling roller coasters. Passengers are ordered from the back of the plane to the front, and they board in seat-letter groups. This is a modified outside-in technique, the difference being that passengers in the same group are ordered before boarding. Figure 1 shows how this ordering could take place. By doing this, most interferences are avoided.

![](images/6a818444063ab6a61499890ef69563cb7484ab9eadce39166e1ece963506f8ac.jpg)
Figure 1. Roller Coaster boarding before passengers reach the boarding gate.

# Current Deboarding Techniques

Planes are currently deboarded in an aisle-to-window and front-to-back order. This deboarding method comes out of the passengers' desire to be off the plane as quickly as possible. Any modification of this technique could lead to customer dissatisfaction, since passengers may be forced to wait while others seated behind them on the plane are deboarding.

# Boarding Simulation

We search for the optimal boarding technique by designing a simulation that models the boarding process and running the simulation under different plane configurations and sizes along with different boarding algorithms. We then compare which algorithms yield the most efficient boarding process.

# Assumptions

The environment within a plane during the boarding process is far too unpredictable to be modeled exactly. To make our model more tractable, we make the following simplifying assumptions:

- There is no first-class or special-needs seating.
Because the standard industry practice is to board these passengers first, and because they generally make up a small portion of the overall plane capacity, any changes in the overall boarding technique will not apply to these passengers. +- All passengers board when their boarding group is called. No passengers arrive late or try to board the plane early. +- Passengers do not pass each other in the aisles; the aisles are too narrow. +- There are no gaps between boarding groups. Airline staff call a new boarding group before the previous boarding group has finished boarding the plane. +- Passengers do not travel in groups. Often, airlines allow passengers boarding with groups, especially with younger children, to board in a manner convenient for them rather than in accordance with the boarding plan. These events are too unpredictable to model precisely. +- The plane is full. A full plane would typically cause the most passenger interferences, allowing us to view the worst-case scenario in our model. +- Every row contains the same number of seats. In reality, the number of seats in a row varies due to engineering reasons or to accommodate luxury-class passengers. + +# Implementation + +We formulate the boarding process as follows: + +- The layout of a plane is represented by a matrix, with the rows representing rows of seats, and each column describing whether a row is next to the window, aisle, etc. The specific dimensions vary with each plane type. Integer parameters track which columns are aisles. +- The line of passengers waiting to board is represented by an ordered array of integers that shrinks appropriately as they board the plane. + +- The boarding technique is modeled in a matrix identical in size to the matrix representing the layout of the plane. This matrix is full of positive integers, one for each passenger, assigned to a specific submatrix, representing each passenger's boarding group location. 
Within each of these submatrices, seating is assigned randomly, to represent the random order in which passengers line up when their boarding groups are called.
- Interferences are counted in every location where they occur within the matrix representing the plane layout. These interferences are then cast into our probability distribution defined above, which gives a measurement of time delay.
- Passengers wait for interferences around them to resolve before moving closer to their assigned seats; if an interference is found, the passenger waits until the time delay has counted down to 0.
- The simulation ends when all delays caused by interferences have counted down to 0 and all passengers have taken their assigned seats.

# Strengths and Weaknesses of the Model

# Strengths

- It is robust for all plane configurations and sizes. The boarding algorithms that we design can be implemented on a wide variety of planes with minimal effort. Furthermore, the model yields reasonable results as we adjust the parameters of the plane; for example, larger planes require more time to board, while planes with more aisles load more quickly than similarly sized planes with fewer aisles.
- It allows for reasonable amounts of variance in passenger behavior. While more thorough experimentation might find a superior stochastic distribution describing the delays associated with interferences, our simulation can be readily altered to incorporate such advances.
- It is simple. We made an effort to minimize the complexity of our simulation, allowing us to run more simulations in a given amount of time and minimizing the risk of exceptions and errors.
- It is fairly realistic. Watching the model execute, we can observe passengers boarding the plane, bumping into each other, taking time to load their baggage, and waiting around as passengers in front of them move out of the way.
Its ability to incorporate such complex behavior, and still reduce it to a tractable form, is key to completing our objective.

# Weaknesses

- It does not account for passengers other than economy-class passengers.
- It cannot simulate structural differences in the boarding gates that could possibly speed up the boarding process. For instance, some airlines in Europe board planes from two different entrances at once.
- It cannot account for people being late to the boarding gate.
- It does not account for passenger preferences or satisfaction.

# Results and Data Analysis

For each plane layout and boarding algorithm, we ran 500 boarding simulations, calculating the mean time and standard deviation. The standard deviation matters because reliable loading times are important for scheduling flights.

We simulated the back-to-front method for several possible group sizes. Because of the difference in the number of rows in the planes, not all group sizes could be implemented on all planes.

# Small Plane

For the small plane, Figure 2 shows that all boarding techniques except the Roller Coaster slowed the boarding process compared to random boarding. As more and more structure is added to the boarding process, while passenger seat assignments remain random within each boarding group, passenger interference backs up more and more. When passengers board randomly, gaps are created between passengers: Some move to the back while others seat themselves immediately upon entering the plane, temporarily preventing anyone else from stepping off the gate and onto the plane. These gaps prevent passengers who board early and must travel to the back of the plane from interfering with many passengers behind them. However, when we implement the Roller Coaster algorithm, seat interference is eliminated, and the only passenger causing aisle interference is the very last one to board from each group.
Interestingly, the small plane's boarding times for all algorithms are greater than the corresponding boarding times for the midsize plane! This is because the number of seats per row per aisle is greater in the small plane than in the midsize plane.

# Midsize Plane

The results of the simulations of the midsize plane, shown in Figure 3, are comparable to those for the small plane.

![](images/4d221a1a67b5d7354df0f8e02b3310dee41a316afd73b20d307af99c093f1ede.jpg)
Figure 2. Results of boarding strategies on small aircraft.

Again, the Roller Coaster method proved the most effective.

![](images/5fadf9fd7dfe6b5b749a3eaa6915c0dfedfd7c6d2ab19cb2463facd9e2c8ed44.jpg)
Figure 3. Results of boarding strategies on midsize aircraft.

# Large Plane

Figure 4 shows that the boarding time for a large aircraft, unlike the other plane configurations, drops when moving from the random boarding algorithm to the outside-in boarding algorithm. Observing the movements of the passengers in the simulation, it is clear that because of the greater number of passengers in this plane, gaps are more likely to form between passengers in the aisles, allowing passengers to move unimpeded by those already on board. However, both instances of back-to-front boarding created too much structure to allow these gaps to form. Again, because it eliminates row interference, Roller Coaster proved to be the most effective boarding method.

![](images/8d5b9a4e1812920bee4a9ded731b53206e023350a7a855bf90bb72441400c1a3.jpg)
Figure 4. Results of boarding strategies on large aircraft.

# Overall

The Roller Coaster boarding algorithm is the fastest algorithm for any plane size. Compared to the next-fastest boarding procedure, it is $35\%$ faster for a large plane, $37\%$ faster for a midsize plane, and $67\%$ faster for a small plane.
The Roller Coaster boarding procedure also has the added benefit of a very low standard deviation, thus allowing airlines a more reliable boarding time. The boarding time for the back-to-front algorithms increases with the number of boarding groups and is always slower than a random boarding procedure.

The idea behind a back-to-front boarding algorithm is that interference at the front of the plane is avoided until passengers in the back sections are already on the plane. A flaw in this procedure is that having everyone line up in the same portion of the aisle can cause a bottleneck that actually increases the loading time. The outside-in ("Wilma," or window, middle, aisle) algorithm performs better than the random boarding procedure only for the large plane. The benefit of the random procedure is that it evenly distributes interferences throughout the plane, so that they are less likely to impact very many passengers.

# Validation and Sensitivity Analysis

We developed a test plane configuration with the sole purpose of implementing our boarding algorithms on planes of all sizes, varying from 24 to 600 passengers, with either one or two aisles.

We also examined capacities as low as $70\%$; the trends that we see at full capacity are reflected at these lower capacities. The back-to-front and outside-in algorithms do start to perform better, but this increase in performance is relatively small, and the Roller Coaster algorithm still substantially outperforms them. Under all circumstances, the algorithms that we test are robust: They assign passengers to seats in accordance with the intention of the boarding plans used by airlines and move passengers in a realistic manner.

# Recommendations

We recommend that the Roller Coaster boarding plan be implemented for planes of all sizes and configurations for boarding non-luxury-class and non-special-needs passengers.
As planes increase in size, the Roller Coaster plan's margin over the next-best method decreases; but we are confident that the Roller Coaster method will prove robust. We recommend boarding groups that are traveling together before boarding the rest of the plane, since such groups would otherwise cause interferences that slow the boarding. Ideally, such groups would be ordered before boarding.

# Future Work

It is inevitable that some passengers will arrive late and not board the plane at their scheduled time. Additionally, we believe that the amount of carry-on baggage permitted would have a larger effect on the boarding time than the specific boarding plan implemented; modeling this would prove insightful. We also recommend modifying the simulation to reflect groups of people traveling (and boarding) together; this is especially important to the Roller Coaster boarding procedure, and it is why we recommend boarding such groups before boarding the rest of the plane.

# References

Airbus S.A.S. 2007. Aircraft families / A320 family. http://www.airbus.com/en/aircraftfamilies/a320/.

Boeing. 1995. 747 family. http://www.boeing.com/commercial/747family/index.html.

van den Briel, Menkes H.L., J. René Villalobos, and Gary L. Hogg. 2003. The aircraft boarding problem. Proceedings of the 12th Industrial Engineering Research Conference (IERC), article number 2153. http://www.public.asu.edu/~dbvan1/papers/IERC2003MvandenBriel.pdf.

Wikipedia. 2007. Boeing 747. http://en.wikipedia.org/wiki/Boeing_747.

![](images/b8b0c7988eeb9cb4568548c5b4cf99b751ea848b8f12932a084c62ed9da348d6.jpg)
Alex Twist, Spencer K'Burg, Sara Beck, and advisor Mike Spivey.

Pp. 463-478 can be found on the Tools for Teaching 2007 CD-ROM.

# Boarding—Step by Step: A Cellular Automaton Approach to Optimising Aircraft Boarding Time

Chris Rohwer

Andreas Hafver

Louise Viljoen

University of Stellenbosch

Stellenbosch, South Africa

Advisor: Jan H.
van Vuuren

# Summary

We model the boarding time for an aircraft using a cellular automaton. We investigate possible solutions and present recommendations about effective implementation.

The cellular automaton model is implemented in three stages:

- initialisation of the seating layout for a chosen aircraft type and assignment of seats to passengers;
- sorting of the passengers according to various proposed boarding methods; and
- "propagating" the passengers through the aisle(s) of the aircraft and seating them at their assigned places.

The rules governing the automaton take into account various factors. Among these are the load factor (percentage filled) of the craft, different walking speeds of passengers in the aisle, and time delays from stowing luggage and from obstruction by other passengers during the seating process. The algorithm accommodates predefined layouts of common aircraft and also user-defined aircraft layouts.

We modelled and tested various boarding strategies for efficiency with regard to total boarding time and average boarding time per passenger. Thus, our approach focuses not only on optimisation of the process in favour of the airlines, but also yields information regarding convenience to passengers. Random boarding (where passengers with assigned seat numbers enter the plane in a random sequence) was used as a point of reference. Among other strategies tested were boarding the plane in groups from either end, boarding from the seats farthest from the aisles toward the aisles, and combinations of these approaches.

We conclude that boarding strategies starting farthest away from the entrance or farthest away from the aisles yield shorter boarding times than random boarding. The most successful methods are combinations of these strategies, with their detailed implementation depending on the exact layout/size of the aircraft.
The method yielding the shortest total boarding time is not necessarily the one with the shortest average boarding time per passenger. By considering standard deviations of total and individual boarding times over many iterations of the simulation, we can draw conclusions about the stability/consistency of the specific boarding strategies and how evenly the waiting time is distributed amongst the passengers.

By selecting appropriate strategies, time savings of 2-3 min for small and medium aircraft could be achieved. For a custom 800-seat aircraft with two aisles, more than 6 min could be saved compared to random boarding. Having compared our results to actual turnaround times quoted by airlines, we believe them to be realistic.

# Automata Theory and Its Relevance

A cellular automaton is an algorithm that determines the time development of a given system. If the algorithm is fed an initial configuration of the system, a finite set of fixed rules determines how the system develops. A time-step structure is used, such that the algorithm advances incrementally, with all of its rules applied at every time-step.

We used this approach to model various boarding strategies. We created a set of rules to govern how passengers move in the aisle(s) of a plane and what happens when they take their seats. Then we tested various strategies for boarding by changing the order in which passengers entered the plane. Ultimately, we compared relative boarding times for the different strategies (averaged over many iterations) to select the most time-effective strategy. The algorithm was implemented in Matlab.

# The Algorithm

The simulation consists of three main parts:

- an input vector of the passengers,
- a set of rules describing the behaviour of passengers in the plane, and
- a seating plan of the plane (flexible for various sizes/sections of planes), represented as a matrix.
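As a concrete illustration, the passenger vector might look as follows in Python (the paper's implementation was in Matlab; the class and function names here are our own, and the speed values anticipate the probabilities given under the Assumptions below):

```python
import random
from dataclasses import dataclass

@dataclass
class Passenger:
    number: int        # passenger number, used to track the element through the matrices
    seat: int          # assigned seat number in the seating-plan matrix
    speed: float       # probability of advancing one aisle element per time-step
    klass: str         # 'first', 'business', or 'economy'
    boarding_time: int = -1   # filled in once the passenger is seated

def make_passenger_vector(n_seats: int, load_factor: float) -> list[Passenger]:
    """Build the input vector; its order is the order in which passengers board."""
    n_passengers = int(load_factor * n_seats)
    seats = random.sample(range(n_seats), n_passengers)   # one passenger per seat
    vector = []
    for i, seat in enumerate(seats):
        r = random.random()   # 2% disabled/frail, 10% children, 88% healthy adults
        speed = 0.3 if r < 0.02 else (0.7 if r < 0.12 else 1.0)
        vector.append(Passenger(i, seat, speed, 'economy'))
    return vector

# A boarding strategy is then just a reordering of this vector, e.g. random boarding:
vector = make_passenger_vector(n_seats=150, load_factor=0.78)
random.shuffle(vector)
```

Under this view, every strategy in the paper differs only in the sort applied to `vector` before boarding begins.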
The arrangement of the input vector determines the sequence in which passengers enter the plane. For instance, if the strategy is to board window-seat passengers first, the input vector is sorted so that these passengers are at the "front" end of the vector. The vector (which is essentially a lookup table) also contains the following information for each passenger:

- passenger number (to track elements moving through the matrices);
- seat number;
- walking speed of the passenger (dependent on whether the passenger is a healthy adult, a child, or a passenger with a disability);
- class of the passenger (first class, economy class, etc.); and
- the passenger's individual boarding time (determined when the passenger is seated).

The rules governing the behaviour (or rather, propagation) of the passengers in the plane take into consideration the passengers' walking speeds. We assumed that in the space of one seat row, two consecutive passengers can stand in the aisle. Thus, the aisle of the plane (also modelled as a vector) was created such that it has two elements for every seat row it passes. According to the rules, a passenger can move ahead in the aisle only if the element in front is unoccupied.

# Assumptions

- The aircraft has a single entrance. Most airports have facilities for only one boarding entrance to each plane.
- Not all passengers walk at the same speed. We created three categories of passengers who move through the plane at different speeds. The notion of speed is difficult to implement in a cellular automaton, because of the finite time-step nature of the algorithm. Thus, we used probabilities. The rules were constructed in such a way that a healthy adult definitely advances one matrix (or rather, aisle) element per time-step. Since children move slightly slower, they advance only with a probability of 0.7.
Lastly, disabled, frail, or handicapped people move the slowest and thus advance with a probability of 0.3. In this way, a notion of speed is introduced, whereby slow passengers hold up the faster ones in the aisle. It is also assumed that passengers do not pass one another in the aisle.

- The distribution of the three categories of passengers is: $2\%$ disabled, frail, or handicapped; $10\%$ children; and the remaining $88\%$ healthy adults. These assumptions are based on semi-educated guesses, since very few data on this matter are available.
- When a passenger gets to the row of the allocated seat, the passenger must stow hand luggage, blocking the aisle for 5 time-steps.
- When a passenger reaches the row of the allocated seat, a further time penalty is introduced, depending on how many seated people the passenger has to pass over to reach the seat. This time allows for passengers to move out of their seats and into the aisle to let the given passenger pass. During this time, obstruction occurs in the aisle, leading to a time delay. This delay was implemented using a quadratic rule: a fixed time delay multiplied by the square of the number of seated passengers in the way. We consider this realistic, since several people moving into the aisle would cause a larger time delay for other passengers trying to pass them.
- The time units quoted in the results section are arbitrary and represent individual steps of the cellular automaton. Nonetheless, the time delays are scaled so that their magnitude, in terms of movement of passengers in the aisle, is reasonable. The scale was calculated as follows:

- A healthy adult passenger advances one element in the aisle vector during each time-step if not obstructed. This distance is approximately $0.5\mathrm{m}$.
- The average walking speed of a healthy adult in an aircraft is taken to be $0.75 \, \text{m/s}$.
- Thus, one algorithmic time-step corresponds to roughly 0.67 s.
Based on these assumptions, the delays were calculated as described above.

- Most planes have more space per person in first class and business class than in economy class. Thus, we implemented large time delays for luggage stowing in economy class, smaller ones in business class, and the smallest in first class.
- We assume that passengers move in only one direction in the aisle during boarding, since they all have allocated seat numbers and can (we hope) read.

The model is later expanded to accommodate larger planes with two aisles, where similar assumptions are made.

# Step-by-Step Explanation of the Algorithm

First, a seating plan is loaded, in the form of a matrix whose elements represent the seats in the plane, numbered sequentially. Our code was constructed such that either a fixed, predetermined seating plan could be loaded (for specified aeroplane layouts) or a seating plan with a chosen number of rows and seats per row could be used.

The passenger vector is then initialized. A load factor is chosen, which determines what fraction of available seats is occupied. This, of course, affects the length of the passenger vector. (The length of the passenger vector is equal to (load factor) $\times$ (total number of available seats).) Each element in this vector has a passenger number and corresponding values for this passenger's seat, speed, and class. This vector is rearranged in different ways for the various boarding strategies, by changing the sequence of the passengers before they enter the plane, so that, for instance, passengers with window seats board first.

Next, a vector representing the aisle is created. This vector has two elements per seat row. Each vector element can contain a single passenger. As passengers move into the aisle, their passenger numbers are stored in this vector.

The propagation/motion of passengers through the aisle is implemented in finite time-steps.
The aisle is checked element-by-element from the rear of the plane. When a passenger is encountered in an element, a check is made as to whether another passenger is present in the aisle element directly ahead. If that element is unoccupied, the passenger moves into it with the probability (speed) associated with that individual. This check continues through the entire aisle until the entrance of the plane is reached. If the aisle element at the entrance of the plane is unoccupied, another passenger is extracted from the passenger vector and fed into the aisle vector. Then another time-step iteration is initiated. The process can be summarised as a sequential checking of the entire aisle (and propagation of passengers through it) during each time-step, together with the feeding of new passengers from the passenger vector into the aisle vector.

After any passenger advances one element in the aisle, a check is made as to whether the passenger has reached the row of the assigned seat. If so, the row containing the seat is checked for seated passengers obstructing the path, and the described time delays are implemented. A delay for the loading of each passenger's hand luggage is also initiated when the passenger reaches the row of the allocated seat. Qualitatively, these delays result in the passenger spending a number of time-steps stationary in the aisle.

When the time delays expire, the passenger is "seated" and is removed from the aisle.

The entire algorithm is iterated until all passengers are in their allocated seats and the aisle is empty. The time taken for this entire process is recorded and stored.

For a given initial setup of the passenger vector, the entire simulation was run over several iterations to obtain statistically relevant values for:

- Average time taken for the entire boarding process.
This is an indication of how effective the boarding process is, since it is in the interest of the airlines to minimize the total boarding time.
- Standard deviation of this average total boarding time over all iterations. The absolute standard deviation is a quantitative measure of the consistency of the boarding procedure, i.e., how sensitive the strategy is to randomness. The relative standard deviation (absolute standard deviation divided by average total boarding time) is a qualitative measure of the consistency of the boarding times and allows comparison between the various strategies and aircraft types.
- Average time (and its standard deviation) that it takes each passenger from entry until being seated. The standard deviations show how uniform/consistent the boarding time per passenger is.

For larger planes and for custom layouts of the seating plan, we implemented an option for a second aisle vector. The algorithm is carried out as above, with the sequential checking procedure simply carried out in both aisles during each time-step. Yet some modifications are required: Still only one line of passengers enters the plane, and this line has to split into the two aisles. This is done by checking whether the passenger at the entrance to the plane sits in the left half or the right half of the plane, and feeding the passenger into the relevant aisle vector. If the seating layout of the plane has an odd number of seats per row, passengers sitting in the middle seats enter the aisle that first has an unoccupied first element. If the first elements of both aisles are open, the passenger enters either aisle with a $50\%$ probability. The rest of the algorithm progresses as for the single-aisle case.

# Description, Implementation and Results

We describe the various boarding strategies, and the qualitative results for each, in detail. Table 1 gives numerical results.
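Before turning to the individual strategies, the automaton sweep just described can be sketched in Python (the paper's code was in Matlab; the names, the two-elements-per-row convention, and the `SHUFFLE_DELAY` constant are our own illustrative assumptions):

```python
import random
from dataclasses import dataclass

@dataclass
class Passenger:       # minimal stand-in for an entry of the passenger vector
    seat_row: int      # row of the assigned seat (row 0 nearest the entrance)
    speed: float       # probability of advancing one aisle element per time-step
    blockers: int = 0  # seated passengers to climb over in the assigned row

LUGGAGE_DELAY = 5      # time-steps to stow hand luggage (from the Assumptions)
SHUFFLE_DELAY = 4      # assumed constant in the quadratic seat-shuffle penalty

def time_step(aisle, passengers, delay, queue):
    """One sweep of the automaton: check the aisle from the rear; a passenger at
    the assigned row counts down the luggage/shuffle delay and then sits, while
    any other passenger advances one element (with probability = speed) if the
    element ahead is free; finally a new passenger enters at element 0 if it is
    empty.  Returns the passengers seated during this sweep."""
    seated = []
    for pos in range(len(aisle) - 1, -1, -1):
        p = aisle[pos]
        if p is None:
            continue
        if pos == 2 * passengers[p].seat_row + 1:       # two elements per row
            k = passengers[p].blockers
            delay[p] = delay.get(p, LUGGAGE_DELAY + SHUFFLE_DELAY * k * k) - 1
            if delay[p] <= 0:                           # delays expired: sit down
                aisle[pos] = None
                seated.append(p)
        elif pos + 1 < len(aisle) and aisle[pos + 1] is None \
                and random.random() < passengers[p].speed:
            aisle[pos + 1], aisle[pos] = p, None        # advance one element
    if aisle[0] is None and queue:
        aisle[0] = queue.pop(0)                         # feed in the next passenger
    return seated
```

Calling `time_step` repeatedly until the queue and aisle are empty, and counting the sweeps, gives the total boarding time in automaton steps.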
The load factors used in the simulations are based on statistics obtained from Transport Canada [2004]. In-depth analysis of the results follows the descriptions.

# 1. Random boarding

Description and algorithmic implementation

Here the seat numbers in the passenger vector are arranged at random, so that the sequence of passengers entering the aisle at the front of the aircraft is random. Random boarding is common and is thus used as a reference for comparison with the other methods.

Table 1. Results for various aircraft and boarding methods (originally in an Appendix).
[Table 1 is not legible in this reproduction. It listed, for each aircraft type and boarding method, the average total boarding time, its absolute and relative standard deviations, and the average boarding time per passenger.]
# Results

Random boarding yielded results that were, in general, faster only than those of the methods that board the plane from front to back. This method never obtained the worst results in any of the measured categories.

# 2. Dividing passengers into three groups and beginning boarding with the rear group

Description and algorithmic implementation

First the passenger vector is arranged randomly. By finding the highest available seat number, the seating plan is divided into three equal groups, and the seat numbers in the passenger vector are arranged into these three groups. The group at the back of the plane boards first, the middle group second, and the front group last. The three groups are internally still arranged at random.

# Results

For all aircraft sizes, this method yielded faster total average boarding times than random boarding. For larger aircraft, the average boarding time per passenger was larger than for random boarding. This may be explained by each individual's initial seating time: With random boarding, seating near the front begins very soon after boarding commences, as people seat themselves randomly; with this method, people start seating themselves only after the rear passengers have moved to the back of the plane. Thus, the aisle has to be traversed before any seating occurs. This effect is more pronounced if the plane is large and the aisle is longer.
The relative deviation of passenger boarding times was lower than that of the random method, which implies that these individual times are more uniformly distributed.

# 3. Dividing passengers into three groups and beginning boarding with the front group

Description and algorithmic implementation

As before, the available seats are divided into three equal groups. The passenger vector is arranged in such a way that boarding commences with the front group. The middle group boards second and the rear group last. As before, the three groups are internally still arranged at random.

# Results

This method performs worst in almost all aspects. However, the relative standard deviations of the total boarding time are among the best. The poor performance of this method can be explained by the congestion of passengers near the entrance to the plane, since the front-seated passengers board first and obstruct flow through the aisle(s). This method was not tested on the largest aircraft, since it was evidently the most ineffective boarding strategy.

# 4. Beginning boarding by filling window seats first

Description and algorithmic implementation

First, the passenger vector is arranged randomly. By checking which seat numbers are in the first and last columns of the seating matrix, passengers with window seats (arranged randomly) are extracted from the passenger vector, which is then rearranged in such a way that these passengers board first. The rest of the passengers are queued behind them at random.

Results

This method is faster than random seating but is outperformed by seating in groups from the back of the craft to the front. The standard deviation in total boarding time is small for all aircraft and is the best in this category for the largest aircraft.

5.
Beginning boarding by filling window seats first, and dividing passengers into three groups and beginning boarding with the rear group

Description and algorithmic implementation

As above, the window seats are extracted and placed at the front of the passenger vector. The passenger vector is then divided into three groups (front, middle and back), and boarding commences with the group at the back of the craft.

Results

This method is a good improvement on merely commencing boarding with window seats. Thus far, it yields the best results for average total boarding time; but the average time per passenger is not the best.

6. Beginning boarding by filling window seats first, and dividing passengers into three groups and beginning boarding with the front group

Description and algorithmic implementation

As above, passengers with window seats are placed at the front of the passenger vector. The passenger vector is then divided into three groups (front, middle and back), and boarding commences with the group at the front of the craft.

Results

Especially with large aircraft, this method performed poorly. As with method 3, this can be attributed to congestion at the entrance of the plane.

7. Dividing passengers into three groups, beginning with the back group, and extracting window seats

Description and algorithmic implementation

Again the passengers are grouped into front, middle and back, with the back group at the beginning of the passenger vector. The window seats are then extracted and placed at the front of the vector. Boarding begins with the window seats (arranged back to front) and continues with the remaining seats (grouped back to front).

Results

This method is the best of the methods mentioned thus far, with overall good performance in all aspects.

8.
Filling seats inwards towards the aisle(s)

Description and algorithmic implementation

For planes with a single aisle: Each passenger's seat is located in the seating-plan matrix, and its distance (in terms of seats) from a window seat is calculated. The passenger vector is then rearranged so that the passengers are ordered by this distance from the window seats, beginning with the smallest distance (i.e., with the window seats themselves). The plane fills up from the window seats towards the middle of the plane (which is the aisle).

For planes with two aisles: Essentially, the plane is divided into two halves, each aisle being the centre of one half. For simplicity, only planes with even numbers of seats between the two aisles were considered, so that the middle of the plane is easily located. Each half of the plane is then treated as in the previous boarding strategy (that is, as if it were an individual plane with one aisle), and the passenger vector is arranged such that seating begins with the passengers farthest from the aisles and ends with the passengers closest to the aisles.

# Results

This method is an improvement on the strategy of boarding window seats first (method 4). For all aircraft except the smallest (Fokker 50), the average total boarding time is shorter than for method 4. However, it is not among the best methods in any particular aspect, though both the average boarding time and the total boarding time are fairly stable (that is, they have relatively small standard deviations).

# 9. The passengers are first sorted in groups from back to front, and these groups are further sorted towards the aisle(s)

Description and algorithmic implementation

As in strategy 8, the seats are arranged to fill towards the aisle(s). The seats are then further sorted into three groups, and boarding commences with the back group.
The left-hand table in Figure 1 shows the way in which the passenger vector is sorted before boarding for a simple aircraft layout with one aisle. The numbers in the figure show the order in which seats from the various sections are sorted into the passenger vector.

# Results

This method performs best on most measures (as is clear from inspection of Table 1). For small aircraft, it is the fastest method. Throughout, the standard deviation of passenger boarding times is good, as is the absolute standard deviation of total boarding time.

# 10. The passengers are first sorted towards the aisle(s) and then further divided into groups from back to front

Description and algorithmic implementation

Again the three groups are created, from the back of the craft to the front. Then the passengers are sorted within the groups such that the seats farthest from the aisles board first and those closest to the aisles board last. The right-hand table in Figure 1 shows how the passenger vector is sorted for a simple aircraft layout with one aisle; low numbers seat first.

# Results

For total boarding time of the largest aircraft, this method yields the best result. For other aircraft, it also performs well in this regard. However, this strategy is not very consistent, since the standard deviations of the total boarding times are among the highest, especially for large aircraft. For all aircraft sizes, this method yields a shorter average boarding time per passenger than method 9.

![](images/ce93c43727acd81fcc0e987a28789cc232fd25ad0957e371719ef0a40b49f47b.jpg)
Figure 1. Illustration of seating strategies 9 and 10 (low numbers seat first).

![](images/66f399d5a6d67b34e80e2672ea0ebbd515994ff9db047b69d9cfdae50e0416c5.jpg)

# Short Summary of Results

For small aircraft (roughly 50 seats), methods 9, 5, and 7 yield the best average total boarding times.
For slightly larger aircraft (roughly 150 seats), methods 10, 7, and 9 yield the best average total boarding times.

For medium aircraft (roughly 300 seats), methods 7, 9, and 5 yield the best average total boarding times.

For large aircraft (roughly 800 seats), methods 10, 9, and 7 yield the best average total boarding times.

It is thus clear that

methods 5, 7, 9 and 10 are the most efficient strategies.

They have in common that they begin boarding with passengers seated in the rear of the plane. Furthermore, they implement a further sorting criterion (for instance, boarding window seats first or filling the columns of the plane towards the aisles).

Random boarding was among the three most inefficient methods for all plane classes.

# Sensitivity Analysis

To see how sensitive the algorithm is to variations in its parameters, we carried out additional simulations.

- Changing the percentage of disabled/frail/handicapped passengers

We carried out several simulations with varying percentages of disabled or handicapped passengers. Some methods were affected more strongly by these changes than others.

As an example, we provide results from a simulation with a Boeing 777-200 (midsize aircraft), with the same load factor as used previously (0.78), changing the proportion of passengers assumed to be handicapped from $2\%$ to $6\%$. The resulting increases in average total boarding time for methods 1, 9, and 10 were $16\%$, $13\%$, and $19\%$; the percentage change does not vary greatly among the methods.

- Investigating the effect of various load factors on the total boarding time

From the results obtained, we chose one method (method 9) and ran the simulation over a range of load factors. We found a strong linear relation between load factor and average total boarding time. We conclude that it is fairly irrelevant which load factor is used to compare boarding procedures, provided the same load factor is used for all.
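The two-level sorts behind the most efficient strategies (methods 9 and 10) can be sketched as follows; this is our reconstruction under assumed names, taking ascending sort order as boarding order, with row 0 at the front of the plane:

```python
from dataclasses import dataclass

@dataclass
class Seat:          # minimal stand-in for one entry of the passenger vector
    row: int         # seat row, 0 = front of the plane
    dist: int        # seats between this seat and the aisle (window seat = max)

def back_group(seat: Seat, n_rows: int) -> int:
    """Three equal row groups; the rear group gets the smallest key (boards first)."""
    return 2 - seat.row * 3 // n_rows

def sort_method_9(seats, n_rows):
    """Method 9: back-to-front groups first; each group filled window-to-aisle."""
    return sorted(seats, key=lambda s: (back_group(s, n_rows), -s.dist))

def sort_method_10(seats, n_rows):
    """Method 10: window-to-aisle first; each column band split back-to-front."""
    return sorted(seats, key=lambda s: (-s.dist, back_group(s, n_rows)))
```

The two methods use the same two keys and differ only in which key is primary, which is exactly the distinction drawn between them above.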
# Advantages of Our Model

The model can be customised to accommodate any seat-plan specification (an aircraft with either one or two aisles). Two decks on a plane could simply be modelled as two separate aircraft with their individual seating configurations.

Our measurements are reliable in that they are based on 200 iterations of each boarding procedure, so the accuracy of the specific boarding measurements is good.

We strove to keep all parameters fixed during the simulation of each boarding strategy. This ensures that if a parameter has been allocated an unrealistic value, then this fault has a reduced effect on the outcome of the experiment and, more specifically, on the comparison of boarding strategies.

Many different boarding strategies were implemented. Some were combined with others to yield more substantial results. In essence, any of the boarding strategies can be combined in the model to produce many more procedures. We did, however, select strategies that we consider realistic and that yield an informative spread of data and results.

# Improvements, Comments

- We do not address deboarding strategies. We are convinced, though, that it is significantly more challenging to set up effective boarding strategies, since boarding involves arranging passengers before they enter the plane; in a sense, deboarding is inherently a more structured process. We also believe that a simple reversal of the more effective boarding procedures should save time during deboarding.
- The speeds of passengers are strongly discrete in the model. Distinctions are made only among frail/handicapped passengers, children, and healthy adults. It would have been more realistic to implement a probability distribution of walking speeds.
- Throughout the model, the assumption is made that passengers move in only one direction in the aisle.
This does not accommodate the fact that sometimes people move against the stream of passengers in the aisle.
- In our model, passengers cannot pass one another in the aisle. This is not entirely realistic, though in most cases passengers would probably wait for a person ahead of them to stow luggage rather than pass. Allowing passing would affect the outcomes of some boarding strategies (for instance, boarding from the front would then yield a significantly shorter boarding time). Nonetheless, we believe that our model is inherently systematic and that the obtained results are meaningful and believable.
- It would be sensible to allow for two doors into the plane. However, one door at the front of the plane and one at the rear could be likened to boarding two separate planes, each from one entrance.
- We could have investigated how division into more groups would have affected the simulations. Nonetheless, we believe that it is not reasonable for airport staff to have to divide passengers into so many groups, as this process itself would be time-consuming.
- Special seat allocations and boarding strategies for disabled people could have been considered, but this would not have affected the final outcome of the simulations greatly, because of the small percentage of disabled passengers. Perhaps one strategy should have involved seating all disabled passengers at the front of the plane, so that they neither obstruct motion through the aisles nor have to walk as far in the plane.
- Many of our boarding strategies do not allow passengers to board in groups (for instance, mothers with their children), and this could cause inconvenience when implemented in the real world.
- The various delays in the boarding algorithm were based on guesses that we made by comparing the average walking speed of a healthy person to the average time that we assumed would be needed to stow hand luggage.
These delays were then implemented in the finite time-step nature of the algorithm, and could have been researched more accurately.

# Conclusion

The task assigned to us was to devise and test various strategies for boarding procedures for various classes of aircraft. Using an algorithm based on a finite time-step cellular automaton, we obtained some clear results.

From our simulations, we find that certain boarding procedures result in significant savings of time. The most efficient strategies all apply two filters to the passengers before the plane is boarded: one filter sorts passengers so that those farthest from the entrance board first, and the other sorts passengers according to seat columns. Nonetheless, methods that disrupt boarding of groups of adjacently seated passengers may be logistically difficult to implement or even irritate passengers (e.g., method 10).

The most outstanding of these are implemented as follows (see Figure 2):

Method A: The aircraft is divided into groups seated from rear to front, and each of these groups is further sorted from window (and centre) seating towards the aisle(s).

Method B: The passengers are first sorted from window seats (and centre seats) towards the aisle(s). They are then further divided into groups from rear to front. (The figure illustrates how this would be implemented in a two-aisle aircraft.)

Differences in total average boarding time between the most efficient and least efficient strategies are up to 3 min for small planes and 5–6 min for larger planes.

Essentially, an optimised seating strategy merely divides the plane into sections and dictates which sections are boarded first.
Thus, once a strategy is selected, it would be easy to include these sections on the tickets of passengers and to have the passengers group themselves as desired (for instance, by allotting areas at the boarding gates to the various passenger groups, or calling them separately for boarding). In this way, the organisation of the passengers into the desired sequence need not imply a significant time delay at all. The trade-off between shortened boarding time and required organisation time is especially pronounced with small aircraft.

Methods with the best total boarding times do not necessarily guarantee the shortest average boarding time per individual passenger. This is important to note, since the implementation of such a strategy may infringe on the passengers' convenience. The average boarding time per individual passenger for some methods was greater than that for random boarding, especially for large aircraft. This is not necessarily relevant, since random boarding has a larger spread in individual boarding times. Furthermore, the differences in average boarding time per passenger in relation to the total boarding times of these methods were relatively small. Yet a rigorous sorting of passengers prior to boarding may very well be perceived as irritating by many passengers. This effect should be minimised by performing the sorting efficiently and in a simple manner (as suggested above).

![](images/236e11fbec7d4f776277180703f29c5d144257abbc81c79087122cced7ec080a.jpg)
Figure 2. Implementation of Methods A and B in a two-aisle aircraft.

It is easier to group people by seat row of the aircraft than by column, since passengers often travel in groups and some sorting methods would disrupt these groups. Thus Method A would most probably be more practical to enforce than Method B. Passengers not abiding by the desired procedure could disrupt the boarding process.

In conclusion, we recommend Method A.
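As a concrete illustration of Method A, the boarding order it prescribes can be generated in a few lines of Python. This is a sketch, not the code used in the simulations; the 30-row, six-column single-aisle layout and the five-row group size are hypothetical parameters.

```python
def method_a_order(rows=30, cols=6, group_rows=5):
    """Boarding order under Method A: rear-to-front groups of rows,
    each group sorted from window (and centre) seats towards the aisle.
    Assumes a single-aisle cabin with the aisle between columns
    cols//2 - 1 and cols//2 (e.g. A B C | D E F)."""
    def aisle_distance(c):
        # Window seats are farthest from the aisle.
        return (cols // 2 - 1 - c) if c < cols // 2 else (c - cols // 2)

    order = []
    # Rearmost group of rows boards first.
    for start in range(rows - group_rows, -1, -group_rows):
        group = [(r, c)
                 for r in range(start, start + group_rows)
                 for c in range(cols)]
        # Within a group, seats farther from the aisle board earlier.
        group.sort(key=lambda seat: -aisle_distance(seat[1]))
        order.extend(group)
    return order

order = method_a_order()
print(order[:3])  # → [(25, 0), (25, 5), (26, 0)]: rear group, window seats first
```

Method B would simply swap the two sort criteria: sort the whole cabin by distance from the aisle first, and only then split into rear-to-front groups.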
Airports and airlines could further shorten boarding times by making infrastructure changes, such as allowing passengers to board from both ends of the plane.

We do not believe that implementing these structured boarding strategies in the real world would result in an administrative waste of time that outweighs the potential boarding time savings.

# References

Finney, Paul Burnham. 2006. Loading an airliner is rocket science. New York Times (14 November 2006). http://travel2.nytimes.com/2006/11/14/business/14boarding.html.

KLM Airways. n.d. http://www.klm.com/travel/fi_en/travel_information/on_board/seatingplans/index.htm.

Transport Canada Aviation Forecasting. 2004. Passenger load factors. http://www.tc.gc.ca/pol/en/airforcasting/assumptionreport2004/assum20047.htm.

![](images/e513b5993ab3945a338b400663c5180d1386d1d012b4c0735b32078a51cf0eb3.jpg)

Top, from left: Advisor J.H. Van Vuuren, Chris Rohwer; bottom: Andreas Hafver, Louise Viljoen.

# Judges' Commentary:

# The Fusaro Award Airplane Seating Paper

Marie Vanisko

Dept. of Mathematics

Carroll College

Helena, MT

mvanisko@carroll.edu

Peter Anspach

National Security Agency

Ft. Meade, MD

anspach@aol.com

The Ben Fusaro Award for the 2007 Airplane Seating Problem went to a team from Rowan University in Glassboro, New Jersey. Their paper was designated Meritorious; it fell just short of the Outstanding designation due to an error in one of their equations and some questionable results. However, this paper exemplified some outstanding characteristics:

- it presented a high-quality application of the complete modeling process;
- it demonstrated noteworthy originality and creativity in its modeling effort; and
- it was well written, in a clear expository style, making it a pleasure to read.

The students were asked to devise and compare procedures for boarding and deboarding planes with varying numbers of passengers.
They were also asked to prepare an executive summary for an audience of airline executives, gate agents, and flight crews, in which they explained their findings.

Addressing real-world problems involves formulating a mathematical description of the problem, solving the mathematical model, interpreting the mathematical solution, and critically evaluating the model.

Before a team could formulate a mathematical description of the problem, it was necessary to do research to estimate reasonable values for the parameters to be used. The Rowan University team began by looking at current boarding procedures and came up with a detailed list of sources of boarding delays, including the storing of carry-on luggage. Based on their assumptions, it was clear that the team members had considered many issues associated with the boarding process, and that they justified each assumption. Certain assumptions might be questionable, for example, that in terms of the implication on boarding time, "random seating and assignment seating can be thought to be equivalent." However, as long as the team used this assumption consistently in their simulation models, it was considered allowable. It should be mentioned that such assumptions help to distinguish Outstanding papers from Meritorious papers. In setting up their simulation model, the Rowan University team considered

- the time that it takes passengers to walk to their seats;
- the service time, which includes time for stowing luggage, based on the size and quantity of luggage; and
- seating time.

The rather important detail of distinguishing separate times for these activities was overlooked by many other teams. The Rowan team determined walking speed using an accelerometer and included the results in their model, using a uniformly distributed random variable, together with a factor that allowed for a decrease in walking speed as the number of passengers increased.
They used a uniform distribution over the interval 11.5 to 14.5 seconds to estimate stowing time for luggage. The number of passengers with carry-on luggage was estimated with a log function of a uniformly distributed random variable. Seating time was a function of which column (window, middle, aisle) passengers were in. Although the level of mathematics used in this model may not have been as high as in some other papers, the team utilized it very well. Overall, the Rowan model was quite simple, but the parameters used were very clearly spelled out. This is what judges look for when simulations are done.

Their simulation models covered four different seating methods:

- open seating, where passengers are lined up randomly;
- back-to-front seating;
- outside-in seating (WilMA); and
- modified reverse-pyramid seating, in which the outer columns are seated first, followed by open seating of the rest of the plane.

To test the efficiency of their model, the team ran Matlab simulations with several types of small, medium, and large aircraft. Reports were given for the mean, median, and variance of the simulated results for each type of seating on each type of plane. Frequency histograms were also given for each category. This type of reporting clearly demonstrated the results of their simulations.

However, the judges did not feel that all the results were reasonable, and this was a reason for the Meritorious designation rather than Outstanding. If the team had acknowledged the unreasonableness of some of their results, that would have been more acceptable. Nevertheless, this paper is a very good example of mathematical modeling. The team is to be congratulated for using mathematics to create their own model to solve the problem at hand, in a clear and solid example of the modeling process.
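The random components attributed to the Rowan team can be sketched as follows. This is an illustrative reconstruction in Python (the team worked in Matlab); the walking-speed bounds, the form of the congestion factor, the carry-on probability, and the seat-time constants are assumed values, since the commentary reports only the stowing-time interval.

```python
import random

# Stowing time: uniform over 11.5-14.5 s, as reported in the commentary.
def stow_time():
    return random.uniform(11.5, 14.5)

# Seating time by seat column; these constants are illustrative
# placeholders, not the team's actual parameters.
SEAT_TIME = {"window": 6.0, "middle": 4.0, "aisle": 2.0}

def walk_speed(low=0.8, high=1.2, n_boarded=0, congestion=0.005):
    """Uniformly distributed walking speed (bounds in m/s, assumed),
    reduced as more passengers have boarded; the functional form of
    the congestion factor is likewise an assumption."""
    return random.uniform(low, high) / (1.0 + congestion * n_boarded)

def service_time(column, carry_on_prob=0.6):
    """Time a passenger blocks the aisle at the row: sitting down,
    plus stowing if the passenger carries luggage on board (the
    probability here is assumed; the paper derived the number of
    carry-on passengers from a log function of a uniform variate)."""
    t = SEAT_TIME[column]
    if random.random() < carry_on_prob:
        t += stow_time()
    return t
```

Drawing `stow_time`, `walk_speed`, and `service_time` once per passenger per simulation run, and repeating runs per boarding method, reproduces the kind of mean/median/variance reporting the commentary praises.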
# About the Authors

Marie Vanisko has retired from Cal State Stanislaus and moved back to Montana, where she taught for 31 years at Carroll College and was a visiting professor at the U.S. Military Academy at West Point. She chairs a College Board committee for the SAT Subject Tests in Mathematics and serves on a national joint committee of the National Council of Teachers of Mathematics and the Mathematical Association of America (MAA). For each of the past two years, Marie has co-directed an MAA Tensor Foundation grant project for high school girls, entitled Preparing Women for Mathematical Modeling, with the hope of encouraging more young women to select careers that involve mathematics. She serves as a judge for the COMAP MCM and HiMCM and has also been active in the MAA PMET (Preparing Mathematicians to Educate Teachers) project.

# Statement of Ownership, Management, and Circulation (All Periodicals Publications Except Requester Publications)
1. Publication Title: The UMAP Journal
2. Publication Number: 0197-3622
3. Filing Date: 9/7/07
4. Issue Frequency: Quarterly
5. Number of Issues Published Annually: Four (4)
6. Annual Subscription Price: $104.00
7. Complete Mailing Address of Known Office of Publication (Not printer): COMAP, Inc., 175 Middlesex Tpk., Suite 3B, Bedford, MA 01730. Contact Person: John Tomicek. Telephone: 781/862-7878 x130
+ +8. Complete Mailing Address of Headquarters or General Business Office of Publisher (Not printer) + +# SAME + +9. Full Names and Complete Mailing Addresses of Publisher, Editor, and Managing Editor (Do not leave blank) + +Publisher (Name and complete mailing address) + +Solomon Garfunkel, 175 Middlesex Tpk., Suite 3B, Bedford, MA 01730 + +Editor (Name and complete mailing address) + +Paul Campbell, 700 College St., Beloit, WI 53511 + +Managing Editor (Name and complete mailing address) + +Tim McLean, 175 Middlesex Tpk., Suite 3B, Bedford, MA 01730 + +10. Owner (Do not leave blank. If the publication is owned by a corporation, give the name and address of the corporation immediately followed by the names and addresses of all stockholders owning or holding 1 percent or more of the total amount of stock. If not owned by a corporation, give the names and addresses of the individual owners. If owned by a partnership or other unincorporated firm, give its name and address as well as those of each individual owner. If the publication is published by a nonprofit organization, give its name and address.) + +
Full Name: Consortium for Mathematics and Its Applications, Inc. (COMAP, Inc.)

Complete Mailing Address: 175 Middlesex Tpk., Suite 3B, Bedford, MA 01730
11. Known Bondholders, Mortgagees, and Other Security Holders Owning or Holding 1 Percent or More of Total Amount of Bonds, Mortgages, or Other Securities. If none, check box

![](images/d3aa69a07b41fa1313f91b3a3908f84c314b32dad0a499405648d62b7d40f73a.jpg)
Full NameComplete Mailing Address
+ +12. Tax Status (For completion by nonprofit organizations authorized to mail at nonprofit rates) (Check one) + +The purpose, function, and nonprofit status of this organization and the exempt status for federal income tax purposes. + +□ Has Not Changed During Preceding 12 Months +□ Has Changed During Preceding 12 Months (Publisher must submit explanation of change with this statement) + +
13. Publication Title: The UMAP Journal
14. Issue Date for Circulation Data: 11/16/07
| 15. Extent and Nature of Circulation | Average No. Copies Each Issue During Preceding 12 Months | No. Copies of Single Issue Published Nearest to Filing Date |
| --- | --- | --- |
| a. Total Number of Copies (Net press run) | 720 | 730 |
| b. Paid Circulation (By Mail and Outside the Mail): (1) Mailed Outside-County Paid Subscriptions Stated on PS Form 3541 (include paid distribution above nominal rate, advertiser's proof copies, and exchange copies) | 650 | 670 |
| (2) Mailed In-County Paid Subscriptions Stated on PS Form 3541 (include paid distribution above nominal rate, advertiser's proof copies, and exchange copies) | 0 | 0 |
| (3) Paid Distribution Outside the Mails Including Sales Through Dealers and Carriers, Street Vendors, Counter Sales, and Other Paid Distribution Outside USPS® | 30 | 35 |
| (4) Paid Distribution by Other Classes of Mail Through the USPS (e.g., First-Class Mail®) | 0 | 0 |
| c. Total Paid Distribution (Sum of 15b(1), (2), (3), and (4)) | 680 | 705 |
| d. Free or Nominal Rate Distribution (By Mail and Outside the Mail): (1) Free or Nominal Rate Outside-County Copies Included on PS Form 3541 | 0 | 0 |
| (2) Free or Nominal Rate In-County Copies Included on PS Form 3541 | 15 | 21 |
| (3) Free or Nominal Rate Copies Mailed at Other Classes Through the USPS (e.g., First-Class Mail) | 0 | 0 |
| (4) Free or Nominal Rate Distribution Outside the Mail (Carriers or other means) | 0 | 0 |
| e. Total Free or Nominal Rate Distribution (Sum of 15d(1), (2), (3), and (4)) | 15 | 21 |
| f. Total Distribution (Sum of 15c and 15e) | 695 | 726 |
| g. Copies not Distributed (See Instructions to Publishers #4, page #3) | 5 | 16 |
| h. Total (Sum of 15f and g) | 700 | 742 |
| i. Percent Paid (15c divided by 15f times 100) | 98 | 98 |
+ +16. Publication of Statement of Ownership +If the publication is a general publication, publication of this statement is required. Will be printed in the Third issue of this publication. + +17. Signature and Title of Editor, Publisher, Business Manager, or Owner + +![](images/d0d4eb5c9a613ced6c8aa1b629f419e766518b8b4301c9b66cdfcc8b9f4f1fe6.jpg) + +I certify that all information furnished on this form is true and complete. I understand that anyone who furnishes false or misleading information on this form or who omits material or information requested on the form may be subject to criminal sanctions (including fines and imprisonment) and/or civil sanctions (including civil penalties). \ No newline at end of file diff --git a/MCM/1995-2008/2008ICM/2008ICM.md b/MCM/1995-2008/2008ICM/2008ICM.md new file mode 100644 index 0000000000000000000000000000000000000000..13b72b21c9d56f2ceeec43bbe429dc5e7a13be59 --- /dev/null +++ b/MCM/1995-2008/2008ICM/2008ICM.md @@ -0,0 +1,2306 @@ +# The U + +# M + +# Publisher + +COMAP, Inc. + +# Executive Publisher + +Solomon A. Garfunkel + +# ILAP Editor + +Chris Arney + +Division Chief, Mathematical Sciences + +Program Manager, Cooperative Systems + +Army Research Office P.O.Box 12211 + +Research Triangle Park, NC 27709-2211 + +david.arney1@arl.army.mil + +# On Jargon Editor + +Yves Nievergelt + +Dept. of Mathematics Eastern Washington Univ. + +Cheney, WA 99004 + +ynievergelt@ewu.edu + +# Reviews Editor + +James M. Cargal + +Mathematics Dept. + +Troy University—Montgomery Campus +231 Montgomery St. + +Montgomery, AL 36104 + +jmcargal@sprintmail.com + +# Chief Operating Officer + +Laurie W. Aragón + +# Production Manager + +George W. Ward + +# Production Editor + +Joyce Barnes + +# Distribution + +John Tomicek + +# Graphic Designer + +Daiva Chauhan + +# AP Journal + +# Vol. 29, No. 2 + +# Editor + +Paul J. Campbell + +Beloit College + +700 College St. 
+ +Beloit, WI 53511-5595 + +campbell@beloit.edu + +# Associate Editors + +Don Adolphson + +Chris Arney + +Aaron Archer + +Ron Barnes + +Arthur Benjamin + +Robert Bosch + +James M. Cargal + +Murray K. Clayton + +Lisette De Pillis + +James P. Fink + +Solomon A. Garfunkel + +William B. Gearhart + +William C. Giauque + +Richard Haberman + +Jon Jacobsen + +Walter Meyer + +Yves Nievergelt + +Michael O'Leary + +Catherine A. Roberts + +John S. Robertson + +Philip D. Straffin + +J.T. Sutcliffe + +Brigham Young Univ. + +Army Research Office + +AT&T Shannon Res. Lab. + +U. of Houston-Downtn + +Harvey Mudd College + +Oberlin College + +Troy U.-Montgomery + +U. of Wisc.—Madison + +Harvey Mudd College + +Gettysburg College + +COMAP, Inc. + +Calif. State U., Fullerton + +Brigham Young Univ. + +Southern Methodist U. + +Harvey Mudd College + +Adelphi University + +Eastern Washington U. + +Towson University + +College of the Holy Cross + +Georgia Military College + +Beloit College + +St. Mark's School, Dallas + +# Institutional Web Membership (Web Only) + +Institutional Web Memberships does not provide print materials. Web memberships allow members to search our online catalog, download COMAP print materials, and reproduce for classroom use. + +(Domestic) #2830 $449 (Outside U.S.) #2830 $449 + +# Institutional Membership (Print Only) + +Institutional Memberships receive print copies of The UMAP Journal quarterly, our annual CD collection UMAP Modules, Tools for Teaching, and our organizational newsletter Consortium. + +(Domestic) #2840 $289 (Outside U.S.) #2841 $319 + +# Institutional Plus Membership (Print Plus Web) + +Institutional Plus Memberships receive print copies of the quarterly issues of The UMAP Journal, our annual collection UMAP Modules, Tools for Teaching, our organizational newsletter Consortium, and on-line membership that allows members to search our online catalog, download COMAP print materials, and reproduce for classroom use. 
+ +(Domestic) #2870 $569 (Outside U.S.) #2871 $599 + +For individual membership options visit www.comap.com for more information + +To order, send a check or money order to COMAP, or call toll-free 1-800-77-COMAP (1-800-772-6627). + +The UMAP Journal is published quarterly by the Consortium for Mathematics and Its Applications (COMAP), Inc., Suite 3B, 175 Middlesex Tpke., Bedford, MA, 01730, in cooperation with the American Mathematical Association of Two-Year Colleges (AMATYC), the Mathematical Association of America (MAA), the National Council of Teachers of Mathematics (NCTM), the American Statistical Association (ASA), the Society for Industrial and Applied Mathematics (SIAM), and The Institute for Operations Research and the Management Sciences (INFORMS). The Journal acquaints readers with a wide variety of professional applications of the mathematical sciences and provides a forum for the discussion of new directions in mathematical education (ISSN 0197-3622). + +Periodical rate postage paid at Boston, MA and at additional mailing offices. + +Send address changes to: info@comap.com + +COMAP, Inc., Suite 3B, 175 Middlesex Tpke., Bedford, MA, 01730 + +© Copyright 2007 by COMAP, Inc. All rights reserved. + +# Vol. 29, No. 2 2008 + +# Table of Contents + +# Publisher's Editorial + +A New Memory + +Solomon A. Garfunkel 93 + +# Special Section on the ICM + +Results of the 2008 Interdisciplinary Contest in Modeling + +Chris Arney 97 + +Evaluation and Improvement of Healthcare Systems + +Luting Kong, Yiyi Chen, and Chao Ye 113 + +Better Living through Math: An Analysis of Healthcare Systems + +Denis Aleshin, Bryce Lampe, and Parousia Rockstroh 135 + +The Most Expensive is Not the Best + +Hongxing Hao, Xiangrong Zeng, and Boliang Sun 155 + +Judges' Commentary: The Outstanding Healthcare Papers + +Sarah Root, Rodney Sturdivant, and Frank Wattenberg 169 + +Reviews 175 + +# Publisher's Editorial + +# A New Memory + +Solomon A. 
Garfunkel

Executive Director

COMAP, Inc.

175 Middlesex Turnpike, Suite 3B

Bedford, MA 01730-1459

s.garfunkel@comap.com

This past year, COMAP began publication of a new journal. Unlike almost all other COMAP products and publications, this journal did not originate with a specific COMAP project; rather, it came to us from its editors. We are excited about the innovative and unique nature of this publication, and we are proud to unabashedly promote it here.

The International Journal for the History of Mathematics Education is the only journal that is entirely devoted to the world history of mathematics education. The major aim of this journal is to provide mathematics education with its memory, in order to reveal the insights achieved in earlier periods (ranging from ancient times to the late 20th century) and to unravel the fallacies of past events.

The journal is published twice a year, and each issue is roughly 100 pages long. It features research papers, notes, book reviews, and interviews. The Chief Editor is Gert Schubring, Bielefeld University, Germany; the Managing Editor is Alexander Karp, Teachers College, Columbia University, USA.

# Editorial Board

Abraham Arcavi (Israel)

Ahmed Djebar (France/Algeria)

Fulvia Furinghetti (Italy)

Hélène Gispert (France)

Jeremy Kilpatrick (USA)

Leo Rogers (Great Britain)

Harm Jan Smid (Netherlands)

Elena Ausejo (Spain)

Eileen Donoghue (USA)

Paulus Gerdes (Mozambique)

Wann-Sheng Horng (Taiwan)

João Bosco Pitombeira (Brazil)

Yasuhiro Sekiguchi (Japan)

# Excerpt

Here is an excerpt from our third issue, December 2007, in which we interviewed Dr. Henry Pollak (former Director of Mathematics and Statistics Research at Bell Laboratories, now at Teachers College, Columbia University), who recounted some of his experiences through so many years in the field.

![](images/72cd4668ef30f0b66feeb72612092ff072e5c9d0a684ac305fc6c07f36754d33.jpg)
Henry Pollak.
There was the famous 1963 letter [Bers et al. 1962] that complained about the "New Math"; it was signed and published both in the American Mathematical Monthly and by NCTM [National Council of Teachers of Mathematics]. I signed that letter, as did one other person from SMSG [School Mathematics Study Group], and I did it for a very good reason, that is that I thought SMSG satisfied the conditions that had been written down. [Morris] Kline and [Max M.] Schiffer and others didn't think so, but I did and it's not that I was so crazy. It's that I lived in the middle of discrete mathematics. I lived at the forefront of understanding how important that is for applications. So the structural part of SMSG was beautiful preparation for discrete mathematics. And the trouble is that people forget that telephony involves not just transmission, which is classical mathematical physics, but also switching, which was discrete mathematics in its most difficult form. Terribly difficult and terribly interesting. And so I lived in both of these, Bell Labs and SMSG, you see, and so to me, what was going on in SMSG was beautiful for the modern applications of mathematics, and I think that the classical applied mathematicians who signed this letter didn't understand this. It's extremely difficult to get over the idea that something could be as important as the thousands of years of success of applying mathematics to physics. I mean, classical analysis is an enormous edifice of success, and yet the success of discrete mathematics in the last fifty years is enormous as well. So I think that that is why I signed that letter. And nobody in SMSG ever complained about my signing that letter [laughs], because they understood. But I think a lot of the people at the time didn't understand how applied mathematics was changing. Nowadays, and even by 1963, the definition of applied mathematics is no longer just classical analysis applied to classical physics. It is a lot more than that.
And when you look at all of applicable mathematics SMSG is excellent preparation for that. It's exactly what you want. [Karp 2007, 74-75] + +More detailed information about the journal can be found at http://www.comap.com/historyjournal/index.html + +# References + +Bers, Lipman, et al. 1962. On the mathematics curriculum of the high school. American Mathematical Monthly 69 (3) (March 1962): 189-193. + +Karp, Alexander. 2007. Interview with Henry Pollak. The International Journal for the History of Mathematics Education 2 (2): 67-89. http://www.comap.com/historyjournal/archives.htm. + +# About the Author + +Solomon Garfunkel, previously of Cornell University and the University of Connecticut at Storrs, has dedicated the last 25 years to research and development efforts in mathematics education. He has served as project director for the Undergraduate Mathematics and Its Applications (UMAP) and the High School Mathematics and Its Applications (HiMAP) Projects funded by NSF, and directed three telecourse projects including Against All Odds: Inside Statistics, and In Simplest Terms: College Algebra, for the Annenberg/CPB Project. He has been the Executive Director of COMAP, Inc. since its inception in 1980. Dr. Garfunkel was the project director and host for the series, For All Practical Purposes: Introduction to Contemporary Mathematics. He was the Co-Principal Investigator on the ARISE Project, and is currently the Co-Principal Investigator of the CourseMap, ResourceMap, and WorkMap projects. In 2003, Dr. Garfunkel was Chair of the National Academy of Sciences and Mathematical Sciences Education Board Committee on the Preparation of High School Teachers. 
+ +# Modeling Forum + +# Results of the 2008 Interdisciplinary Contest in Modeling + +Chris Arney, ICM Co-Director +Division Chief, Mathematical Sciences Division +Program Manager, Cooperative Systems +Army Research Office +PO Box 12211 +Research Triangle Park, NC 27709-2211 +David.Arney1@arl.army.mil + +# Introduction + +A record total of 380 teams from three countries spent a weekend in February working in the 10th Interdisciplinary Contest in Modeling (ICM). They confronted an open-ended interdisciplinary modeling problem involving public health policy concerning healthcare systems. This year's contest began on Thursday, Feb. 14 and ended on Monday, Feb. 18, 2008. During that time, teams of up to three undergraduate or high school students researched, modeled, analyzed, solved, wrote, and submitted their solutions. After the weekend of challenging and productive work, the solution papers were sent to COMAP for judging. Three of the top papers, which were judged to be Outstanding by the expert panel of judges, appear in this issue of The UMAP Journal. + +COMAP's Interdisciplinary Contest in Modeling (ICM) along with its sibling contest, the Mathematical Contest in Modeling (MCM), involves students working in teams to model and analyze an open problem. Centering its educational philosophy on mathematical modeling, COMAP supports the use of mathematical tools to explore real-world problems. It serves society by developing students as problem-solvers in order to become better informed and prepared as citizens, contributors, consumers, workers, and community leaders. The ICM and MCM are examples of COMAP's efforts in working towards its goals. + +This year's public health problem was challenging in its demand for teams to utilize many aspects of science, mathematics, and data analysis in their modeling. 
The problem required teams to understand the data and nature of healthcare systems and to model the complex policy questions associated with financing and managing a nation's healthcare system. To accomplish their tasks, the students had to weigh many difficult and complex considerations: political, social, psychological, and technological issues all had to be taken into account, along with several challenging requirements needing scientific and mathematical analysis. The problem also included the ever-present requirements of the ICM: thorough data analysis, research, creativity, and effective communication. The author of the problem was Kathleen Crowley, Professor of Psychology at the College of Saint Rose.

All members of the 380 competing teams are to be congratulated for their excellent work and dedication to modeling and problem solving. The judges remarked that this year's problem was challenging and demanding in many aspects of modeling and problem solving.

Next year, we will move to an environmental science theme for the contest problem. Teams preparing for the 2009 contest should consider reviewing interdisciplinary topics in the area of environmental issues.

# The Healthcare Problem

# Finding the Good in Healthcare Systems

Nations have systems for providing healthcare for their residents. Issues that are often of concern to people and are often in the news include which system is better and whether current systems can be improved. Aspects of these systems vary widely between nations: how they are funded; whether services are delivered through public, private, or non-profit organizations; whether public insurance is universal for all residents; who is eligible for assistance; what care is covered; whether the latest medical procedures are available; and how much is required as user fees.
Other factors that are often debated in determining the quality of care include: coverage for complementary care (glasses, dental, prostheses, prescription drugs, etc.); which diseases are the most critical in improving overall health; percentage of GDP spent on healthcare; percentage of healthcare costs that goes toward labor/administration/malpractice insurance; ratio of public to private spending on healthcare; per capita spending on healthcare; growth of per capita spending on healthcare; number of participating physicians; per capita sick days; fairness of care in terms of age, race, gender, socio-economic class; and many more. Adding to the complications are health related factors such as personal exercise, food availability, climate, occupations of citizens, and smoking habits. + +The World Health Organization (WHO), an agency of the United Nations, is a source of data on health factors. The annual World Health Report + +http://www.who.int/whr/en/index.html + +assesses global health factors, and World Health Statistics + +http://en.wikipedia.org/wiki/World_Health_Organisation provides health statistics for the countries in the U.N. The production and dissemination of health statistics is a major function of WHO. To many people, these data and the associated analyses are considered unbiased and very valuable to the world community. There are many other sources of reliable health data available. + +Part I: Describe several different outcomes (metrics) that could be used to evaluate the effectiveness of a country's healthcare system, such as average life expectancy of its residents. What metric would you use to make comparisons between existing and potential systems? Can you combine your metrics to make them even more useful in measuring quality? + +Part II: Identify current sources of data that provide the raw data needed to compute the metrics you have identified above. You may need to modify your list of metrics based on the availability of data. 
Explain why you have selected those data and demonstrate how they can be used to assess and compare the relative effectiveness of healthcare systems as they exist in different countries. + +Part III: Choose at least three of the most important and viable metrics for comparing healthcare systems. Justify why these are the most useful for this purpose. Can any of these help measure the historical change in an existing healthcare system? Are they measurable and can the data be easily collected? + +Part IV: Use your three (or more) metrics to compare the United States healthcare system with one other country which is considered to have good healthcare using the most recent year for which you have data. Which country has the better healthcare system? Is your answer definitive? + +Part V: Using your metrics compare the United States and one other country which is considered to have poor healthcare using the most recent year for which you have data. Which country has the better healthcare system? + +Part VI: Pick a country's (U.S. or other) healthcare system and restructure it to improve the system based on your metrics. Build predictive models to test various changes to determine if the changes will improve the overall quality of the system. Suggest major change(s) that can improve the system. + +# The Results + +The 380 solution papers were coded at COMAP headquarters so that names and affiliations of the authors were unknown to the judges. Each paper was then read preliminarily by "triage" judges at the U.S. Military Academy at West Point, NY. At the triage stage, the summary, the model description, and overall organization are the primary elements in judging a paper. Final judging by a team of modelers, analysts, and subject-matter experts took place in April. The judges classified the 380 submitted papers as follows: + +
| Problem | Outstanding | Meritorious | Honorable Mention | Successful Participation | Total |
|---------|-------------|-------------|-------------------|--------------------------|-------|
| Healthcare | 3 | 53 | 175 | 149 | 380 |
+ +The three papers that the judges designated as Outstanding appear in this special issue of The UMAP Journal, together with a commentary by the judges. We list those three Outstanding teams and the 53 Meritorious teams (and advisors) below. The complete list of all participating schools, advisors, and results is provided in the Appendix. + +# Outstanding Teams + +# Institution and Advisor + +# Team Members + +"Evaluation and Improvement of Healthcare Systems" + +Beijing University of Posts and Telecommunications + +Beijing, China + +Qing Zhou + +Luting Kong + +Yiyi Chen + +Chao Ye + +"The Most Expensive Is Not the Best" + +National University of Defense Technology + +Changsha, China + +Ziyang Mao + +Hongxing Hao + +Xiangrong Zeng + +Boliang Sun + +"Better Living through Math: + +An Analysis of Healthcare Systems" + +Harvey Mudd College + +Claremont, CA + +Darryl Yong + +Denis Aleshin + +Bryce Lampe + +Parousia Rockstroh + +# Meritorious Teams (53) + +Anhui University, China (Huayou Chen) + +Asbury College, Wilmore, KY (Duk Lee) + +Beijing Jiaotong University, China (Hong Zhang) + +Beijing University of Posts, China (Tianping Shuai) + +Beijing Language and Culture University, China (Xiaoxia Zhao) + +Central University of Finance and Economics, China (Zhaoxu Sun) + +China University of Mining and Technology, China (2 teams) (Xingyong Zhang) (Shujuan Jiang) + +Chongqing University, China (2 teams) (Renbin He) (Jian Xiao) + +College of Science of Harbin Engineering, China (Yu Fei) + +Dalian University of Technology, China (4 teams) (Mingfeng He) (Liang Zhang) (Zhe Li) (Shengjun Xu) + +Donghua University, China (Xiaofeng Wang) + +East China University of Science and Technology, China (Lu Xiwen) + +Fudan University, School of Management, China (Zhongyi Zhu) + +Hangzhou Dianzi University, China (2 teams) (Zhifeng Zhang) (Chengjia Li) + +Harbin Institute of Technology, China (3 teams) (Zhu Lei) (Xiaofeng Shi) (Xuefeng Wang) + +Harbin University of Science and Technology, 
China (Fengqiu Liu) + +Harvey Mudd College, Claremont, CA (Darryl Yong) + +Huazhong University of Science and Technology, China (Yan Dong) + +James Madison University, Harrisonburg, VA (Hasan Hamdan) + +Jinan University, China (Shizhuang Luo) + +Nanjing University, China (2 teams) (Huikun Jiang) (Li Wei Xu) + +National University of Defense Technology, China (2 teams) (Mengda Wu) (Yi Wu) + +Peking University, China (2 teams) (Yuxin Liu) (Yuan Wang) + +Peking University Health Sciences Center, China (Termison) + +Sichuan University, China (Hai Niu) + +Sichuan Agricultural University, China (Shi Du) + +South China Normal University, China (Xiuxiang Liu) + +South China Agricultural University, China (YanKe Zhu) + +South China University of Technology, China (2 teams) (WeiJian Ding) (ShenQuan Liu) + +Shandong University at Weihai, China (Zhulou Cao) + +Tsinghua University, China (Mei Lu) + +University of Science and Technology of China, China (2 teams) (Hong Zhang) + +(Yuanbo Zhang) + +Xi'an Jiaotong University, China (Zhuosheng Zhang) + +Yuanpei College, China (Liman Sha) + +Zhejiang University, China (2 teams) (Qifan Yang) (Zhiyi Tan) + +Zhejiang Gongshang University, China (Zhu Ling) + +Zhuhai College of Jinan University China (3 teams) (YuanBiao Zhang) + +(Zhiwei Wang) (Yuanbiu Zhang) + +# Awards and Contributions + +Each participating ICM advisor and team member received a certificate signed by the Contest Directors and the Head Judge. Additional awards were presented to the National University of Defense Technology team from the Institute for Operations Research and the Management Sciences (INFORMS). + +# Judging + +Contest Directors + +Chris Arney, Division Chief, Mathematical Sciences Division, + +Army Research Office, Research Triangle Park, NC + +Joseph Myers, Dept. of Mathematical Sciences, U.S. Military Academy, + +West Point, NY + +Associate Director + +Rodney Sturdivant, Dept. of Mathematical Sciences, + +U.S. 
Military Academy, West Point, NY

Judges

Ben Cole, National Security Agency, Ft. Meade, MD

William C. Dowdy, U.S. Army Medical Department Activity,

West Point, NY

John Kobza, Dept. of Industrial Engineering, Texas Tech University,

Lubbock, TX

Sarah Root, Dept. of Industrial Engineering, University of Arkansas,

Fayetteville, AR

Frank Wattenberg, Dept. of Mathematical Sciences, U.S. Military Academy,

West Point, NY

Triage Judges

Dept. of Mathematical Sciences, U.S. Military Academy, West Point, NY:

Amanda Beecher, Randy Boucher, Robert Burks, Gabriel Costa, Jong Chung, Ben Cole, Brian Davis, Eric Drake, J. Dzwonchyk, Amy Erickson, Keith Erickson, Douglas Fletcher, Gregory Graves, Michael Harding, Alex Heidenberg, Heather Jackson, Anthony Johnson, Jerry Kobylski, Elizabeth Morseman, Joseph Myers, Donald Outing, Jack Picciuto, Todd Retchless, Jon Roginski, Tyler Smith, Brian Souhan, Rodney Sturdivant, Patrick Sullivan, Edward Swim, Krista Watts, Brian Winkel, and Robyn Wood

# Acknowledgments

We thank:

- INFORMS, the Institute for Operations Research and the Management Sciences, for its support in judging and providing prizes for the INFORMS winning team;
- IBM for its support of the contest;
- all the ICM judges and ICM Board members for their valuable and unflagging efforts; and
- the staff of the U.S. Military Academy, West Point, NY, for hosting the triage and final judging.

# Cautions

To the reader of research journals:

Usually a published paper has been presented to an audience, shown to colleagues, rewritten, checked by referees, revised, and edited by a journal editor. Each of the student papers here is the result of undergraduates working on a problem over a weekend; allowing substantial revision by the authors could give a false impression of accomplishment. So these papers are essentially au naturel.
Light editing has taken place: minor errors have been corrected, wording has been altered for clarity or economy, style has been adjusted to that of The UMAP Journal, and the papers have been edited for length. Please peruse these student efforts in that context. + +To the potential ICM Advisor: + +It might be overpowering to encounter such output from a weekend of work by a small team of undergraduates, but these solution papers are highly atypical. A team that prepares and participates will have an enriching learning experience, independent of what any other team does. + +# Editor's Note + +As usual, the Outstanding papers were longer than we can accommodate in the Journal, so space considerations forced me to edit them for length. It was not possible to include all of the many tables and figures. + +In editing, I endeavored to preserve the substance and style of the paper, especially the approach to the modeling. + +—Paul J. Campbell, Editor + +# Appendix: Successful Participants + +KEY: + +$\mathrm{P} =$ Successful Participation +$\mathrm{H} =$ Honorable Mention +$\mathbf{M} =$ Meritorious +$\mathrm{O} =$ Outstanding (published in this special issue) + +
INSTITUTIONDEPT.CITYADVISORC
CALIFORNIA
Cal. State U. at Monterey BayScience and Env'l PolicySeasideHerbert CortezP
Harvey Mudd CollegeMathematicsClaremontDarryl YongO
Harvey Mudd CollegeMathematicsClaremontDarryl YongM
Harvey Mudd CollegeMathematicsClaremontRachel LevyH
Univ. of California, DavisMathematicsDavisKarl BeutnerP
IOWA
Simpson CollegeMathematicsIndianolaWilliam SchellhornH
Simpson CollegeMathematicsIndianolaDebra CzarneskiP
Simpson CollegeMathematicsIndianolaRick SpellerbergP
KENTUCKY
Asbury CollegeMathematics and CSWilmoreDuk LeeH
Asbury CollegeMathematics and CSWilmoreDuk LeeM
Asbury CollegeMathematics and CSWilmoreKen RietzH
MARYLAND
Villa Julie CollegeMathematicsStevensonEileen McGrawP
MONTANA
Carroll CollegeMath/Eng'ng/CSHelenaKelly ClineH
Carroll CollegeMath/Eng'ng/CSHelenaKelly ClineP
Carroll College, MTMath/Eng'ng/CSHelenaMark ParkerP
NEW JERSEY
Princeton UniversityMathematicsPrincetonIngrid DaubechiesH
NEW YORK
Clarkstown HS SouthMathematicsWest NyackMary GavioliP
NORTH CAROLINA
Duke UniversityMathematicsDurhamBianca SantoroP
VIRGINIA
James Madison UniversityMath and StatHarrisonburgHasan HamdanM
Virginia Commonwealth U.Biomedical EngineeringRichmondDianne PawlukH
WASHINGTON
Seattle Pacific UniversityElectrical EngineeringSeattleMelani PlettP
HONG KONG
City Univ. of Hong KongManagement SciencesHong KongLigang ZhouP
Hong Kong Baptist Univ.MathematicsKowloonMan Lai TangP
INDONESIA
Institut Teknologi BandungMathematicsBandungEdy SoewonoP
Institut Teknologi BandungMathematicsBandungKuntjoro SidartoH
CHINA
Anhui
Anhui UniversityStatisticsHefeiHuayou ChenM
Anhui UniversityStatisticsHefeiLigang ZhouH
Hefei University of TechnologyApplied MathematicsHefeiYongwu ZhouH
Hefei University of TechnologyComp'l MathematicsHefeiYoudu HuangP
University of Sci. & Tech. of ChinaGifted YoungHefeiYuanbo ZhangM
University of Sci. & Tech. of ChinaStatistics and FinanceHefeiHong ZhangM
Beijing
Beihang UniversityElectronic EngineeringBeijingWu SanxingH
Beijing Electron. Sci. & Tech. Inst.Basic EducationBeijingPeiqun WuH
Beijing Institute of TechnologyMathematicsBeijingHua-Fei SunH
Beijing Institute of TechnologyMathematicsBeijingBing-Zhao LiP
Beijing Institute of TechnologyMathematicsBeijingGuifeng YanH
Beijing Institute of TechnologyMathematicsBeijingChunguang XiongH
Beijing Institute of TechnologyMathematicsBeijingQun RenP
Beijing Institute of TechnologyMathematicsBeijingXiuling MaP
Beijing Jiaotong UniversityApplied MathematicsBeijingJing ZhangH
Beijing Jiaotong UniversityChemistryBeijingYongsheng WeiH
Beijing Jiaotong UniversityChemistryBeijingYongsheng WeiP
Beijing Jiaotong UniversityMathematicsBeijingBingtuan WangH
Beijing Jiaotong UniversityMathematicsBeijingZhouhong WangH
Beijing Jiaotong UniversityMathematicsBeijingZhouhong WangH
Beijing Jiaotong UniversityMathematicsBeijingShangli ZhangP
Beijing Jiaotong UniversityMathematicsBeijingJun WangH
Beijing Jiaotong UniversityMathematicsBeijingXiaoming HuangH
Beijing Jiaotong UniversityMathematicsBeijingHong ZhangM
Beijing Jiaotong UniversityMathematicsBeijingPengjian ShangH
Beijing Jiaotong UniversityMathematicsBeijingKeqian DongP
Beijing Jiaotong UniversityMathematicsBeijingFaen WuP
Beijing Jiaotong UniversityMathematicsBeijingXiaoxia WangP
Beijing Jiaotong UniversityMathematicsBeijingZhonghao JiangH
Beijing Jiaotong UniversityMathematicsBeijingLiang WangP
Beijing Jiaotong UniversityMathematicsBeijingGuozhong LiuP
Beijing Jiaotong UniversityMathematicsBeijingGuozhong LiuH
Beijing Jiaotong UniversityMathematicsBeijingXun ChenP
Beijing Jiaotong UniversityMathematicsBeijingXun ChenP
Beijing Jiaotong UniversityPhysicsBeijingBingtuan WangP
Beijing Jiaotong UniversityPhysicsBeijingBingtuan WangH
Beijing Jiaotong UniversityStatisticsBeijingWeidong LiH
Beijing Language and Culture U.Computer ScienceBeijingGuilong LiuH
Beijing Language and Culture U.Computer ScienceBeijingYanbing FengH
Beijing Language and Culture U.Computer ScienceBeijingYanbing FengH
Beijing Language and Culture U.Information SciencesBeijingXiaoxia ZhaoM
Beijing Language and Culture U.Information SciencesBeijingXiaoxia ZhaoP
Beijing Language and Culture U.Information SciencesBeijingPing YangH
Beijing Normal UniversityMathematical SciencesBeijingZhengru ZhangH
Beijing U. of Posts & Telecomm.Applied MathematicsBeijingZuguo HeH
Beijing U. of Posts & Telecomm.Applied PhysicsBeijingJinkou DingP
Beijing U. of Posts & Telecomm.Applied PhysicsBeijingJinkou DingP
Beijing U. of Posts & Telecomm.Communication Eng'ngBeijingZuguo HeH
Beijing U. of Posts & Telecomm.CS and TechnologyBeijingHongxiang SunP
Beijing U. of Posts & Telecomm.Economics and MgmtBeijingTianping ShuaiM
Beijing U. of Posts & Telecomm.Electronic EngineeringBeijingQing ZhouO
Beijing U. of Posts & Telecomm.Electronics Info. Eng'ngBeijingXinchao ZhaoH
Beijing U. of Posts & Telecomm.Information EngineeringBeijingWenbo ZhangH
Beijing U. of Posts & Telecom.Information EngineeringBeijingWenbo ZhangP
Beijing Univ. of Astro. & Aero.Computer ScienceBeijingHongying LiuH
Beijing University of Chem. Tech.Chemistry ScienceBeijingKaisu WuP
Beijing University of Chem. Tech.MathematicsBeijingDamin LiuH
Beijing University of Chem. Tech.MathematicsBeijingWenyan YuanH
Central Univ. of Finance and Econ.Applied MathematicsBeijingXiaoming FanP
Central Univ. of Finance and Econ.Applied MathematicsBeijingZhaoxu SunM
Central Univ. of Finance and Econ.Applied MathematicsBeijingXianjun YinH
Central Univ. of Finance and Econ.Applied MathematicsBeijingDonghong LiH
China University of GeosciencesInformation TechnologyBeijingZhaodou ChenH
China University of GeosciencesInformation TechnologyBeijingZhaodou ChenH
China University of GeosciencesMathematicsBeijingGuangdong HuangP
China University of GeosciencesMathematicsBeijingYan DengP
China University of GeosciencesMathematicsBeijingYan DengP
China University of GeosciencesMathematicsBeijingLinlin ZhaoP
Peking U., Health Sciences CtrMathematicsBeijingPKU BiomathsH
Peking U., Health Sciences CtrMathematicsBeijingPKU BiomathsM
Peking U., Health Sciences CtrMathematicsBeijingPKU StatisticsH
Peking U., Health Sciences CtrMathematicsBeijingPKU StatisticsH
Peking U., Health Sciences CtrMathematicsBeijingPKU BiostatisticsH
Peking U., Health Sciences CtrMathematicsBeijingPKU BiostatisticsH
Peking U., Health Sciences CtrMathematicsBeijingPKU SciencesH
Peking U., Health Sciences CtrMathematicsBeijingPKU SciencesH
Peking U., Health Sciences CtrMathematicsBeijingPKU BiosciencesH
Peking U., Health Sciences CtrMathematicsBeijingPKU MathsP
Peking UniversityExtracurricular ActivitiesBeijingMingming YuH
Peking UniversityMicroelectronicsBeijingYuan WangM
Peking UniversityPhysicsBeijingYuxin LiuM
Peking UniversityProbability and StatisticsBeijingMinghua DengH
Peking UniversitySci. and Eng'ng ComputingBeijingPeng HeH
Peking UniversityYuanpei CollegeBeijingLiman ShaM
Tsinghua UniversityMathematical SciencesBeijingMei LuM
Tsinghua UniversityMathematical SciencesBeijingMei LuP
Tsinghua UniversityMathematical SciencesBeijingJinxing XieP
U. of Int'l Business and Econ.Int'l Trade and Econ.BeijingBaomin DongH
U. of Int'l Business and Econ.Int'l Trade and Econ.BeijingBaomin DongP
Chongqing
Chongqing UniversityApplied ChemistryChongqingZhiliang LiP
Chongqing UniversityApplied MathematicsChongqingRenbin HeM
Chongqing UniversityInfor. and Comp'l ScienceChongqingQu GongP
Chongqing UniversityInformation and Comp'l Sci.ChongqingJian XiaoM
Chongqing UniversitySoftware EngineeringChongqingBin CaiP
Southwest UniversityStatisticsChongqingXueqiao ZhengP
Southwest UniversityStatisticsChongqingJianjun YuanH
Third Military Medical Univ.Basic MedicineChongqingWanchun LuoH
Third Military Medical Univ.MathematicsChongqingMingkui LuoP
Third Military Medical Univ.MathematicsChongqingMingkui LuoP
Fujian
Fujian Agriculture and Forestry U.Life SciencesFuzhouWu LongP
Fujian Normal UniversityComputer ScienceFuzhouShenggui ZhangP
Xiamen UniversityMathematicsXiamenLi ShiyinH
Guangdong
Jinan UniversityComputer ScienceGuangzhouChuanlin ZhangH
Jinan UniversityMathematicsGuangzhouDaiqiang HuH
Jinan UniversityMathematicsGuangzhouShizhuang LuoM
Jinan University, Zhuhai Coll.Computer ScienceZhuhaiZ.-W. WangH
Jinan University, Zhuhai Coll.Math. ModelingZhuhaiYuanbiu ZhangP
Jinan University, Zhuhai Coll.Math. ModelingZhuhaiYuanbiu ZhangM
Jinan University, Zhuhai Coll.MathematicsZhuhaiYuanBiao ZhangM
Jinan University, Zhuhai Coll.MathematicsZhuhaiYuanBiao ZhangH
Jinan University, Zhuhai Coll.Packaging EngineeringZhuhaiY.-B. ZhangH
Jinan University, Zhuhai Coll.Packaging EngineeringZhuhaiZhiwei WangM
Shenzhen PolytechnicIndustrial Training CenterShenzhenTianlin LeiP
Shenzhen PolytechnicMech'l and Electrical Eng'ngShenzhenKanzhen ChenH
South China Agricultural Univ.Software TechnologyGuangzhouYanKe ZhuM
South China Agricultural Univ.Software TechnologyGuangzhouYanKe ZhuH
South China Agricultural Univ.MathematicsGuangzhouQingMao ZengH
South China Normal Univ.Applied MathematicsGuangzhouDing WeijianM
South China Normal Univ.Applied MathematicsGuangzhouLiu ShenQuanM
South China Normal Univ.Applied MathematicsGuangzhouLiu ShenQuanH
South China Normal Univ.Info. Sci. and ComputationGuangzhouLiling XieH
South China Normal Univ.MathematicsGuangzhouShaohui ZhangH
South China Normal Univ.MathematicsGuangzhouXiuxiang LiuM
Sun Yat-Sen (Zhongshan) Univ.Dept. of Earth SciencesGuangzhouZuoJian YuanH
Sun Yat-Sen (Zhongshan) Univ.MathematicsGuangzhouXiaoLing YinP
Hebei
North China Electric Power U.Mathematics & PhysicsBaodingHuifeng ShiH
Heilongjiang
Harbin Engineering UniversityApplied MathematicsHarbinZhu LeiM
Harbin Engineering UniversityApplied MathematicsHarbinZhu LeiP
Harbin Engineering UniversityInformation and CSHarbinYu FeiH
Harbin Engineering UniversityInformation and CSHarbinYu FeiM
Harbin Engineering UniversityMathematicsHarbinShen JihongH
Harbin Engineering UniversityMathematicsHarbinShen JihongH
Harbin Engineering UniversityMathematicsHarbinLuo YueshengH
Harbin Institute of TechnologyComputer ScienceHarbinJin WuH
Harbin Institute of TechnologyEnv'l Sci. & Eng'ngHarbinTong ZhengH
Harbin Institute of TechnologyEnv'l Sci. & Eng'ngHarbinTong ZhengP
Harbin Institute of TechnologyManagement Sci. & Eng'ngHarbinHong GeH
Harbin Institute of TechnologyManagement Sci. & Eng'ngHarbinXuefeng WangM
Harbin Institute of TechnologyManagement Sci. & Eng'ngHarbinXuefeng WangP
Harbin Institute of TechnologyMathematicsHarbinXilian WangP
Harbin Institute of TechnologyMathematicsHarbinXilian WangH
Harbin Institute of TechnologyMathematicsHarbinShouting ShangH
Harbin Institute of TechnologyMathematicsHarbinBaodong ZhengP
Harbin Institute of TechnologyMathematicsHarbinGuanghong JiaoH
Harbin Institute of TechnologyMathematicsHarbinBoping TianP
Harbin Institute of TechnologyNetworkingHarbinXiaoping JiH
Harbin Institute of TechnologyComputer ScienceHarbinZheng KuangH
Harbin Institute of TechnologyCS and TechnologyHarbinLili ZhangP
Harbin Institute of TechnologyMathematicsHarbinChiping ZhangH
Harbin Institute of TechnologyMathematicsHarbinDongmei ZhangH
Harbin Institute of TechnologyShiyan School, Math.HarbinXiaofeng ShiM
Harbin Univ. of Sci. & Tech.MathematicsHarbinFengqiu LiuM
Harbin Univ. of Sci. & Tech.MathematicsHarbinShanqiang LiH
Heilongjiang Inst. of Sci. & Tech.Mathematics and MechanicsHarbinYajiang ZhangH
North East Agricultural Univ.Electric EngineeringHarbinYaZhuo ZhangP
North East Agricultural Univ.Info. & Computing Sci.HarbinYaZhuo ZhangP
North East Agricultural Univ.Info. & Computing Sci.HarbinFangge LiP
North East Agricultural Univ.Mechanical EngineeringHarbinFangge LiP
Hubei
Huazhong U. of Sci. & Tech.CS and TechnologyWuhanKe ShiH
Huazhong U. of Sci. & Tech.CS and TechnologyWuhanKe ShiP
Huazhong U. of Sci. & Tech.Electronics and Info. Eng'ngWuhanYan DongM
Huazhong U. of Sci. & Tech.Ind'l and Mfg Systems Eng'ngWuhanLiang GaoH
Wuhan UniversityMathematics and StatisticsWuhanYuanming HuP
Wuhan UniversityElectronic InformationWuhanHu YuanmingH
Hunan
Central South UniversityMetallurgical EngineeringChangshaShihua ZhuH
Central South UniversityEngineering ManagementChangshaZhoushun ZhengH
Central South UniversityFire Fighting EngineeringChangshaKunnan YiH
Central South UniversityTraffic Equipment & Info. Eng'ngChangshaCheng LiuP
Hunan UniversityMathematics & EconometricsChangshaHan LuoP
Hunan UniversityMathematics & EconometricsChangshaChuanxiu MaH
Hunan UniversityMathematics & EconometricsChangshaShangjiang GuoH
Hunan UniversityMathematics & EconometricsChangshaLiping WangP
National Univ. of Defense Tech.Mathematics and Systems Sci.ChangshaZiyang MaoO
National Univ. of Defense Tech.Mathematics and Systems Sci.ChangshaZiyang MaoH
National Univ. of Defense Tech.Mathematics and Systems Sci.ChangshaMengda WuM
National Univ. of Defense Tech.Mathematics and Systems Sci.ChangshaYi WuM
Inner Mongolia
Inner Mongolia UniversityMathematicsHuhhotHaitao HanP
Jiangsu
China U. of Mining and Tech.CS and TechnologyXuzhouJiang ShujuanM
China U. of Mining and Tech.MathematicsXuzhouZhou ShengwuH
China U. of Mining and Tech.MathematicsXuzhouZhang XingyongM
China U. of Mining and Tech.MathematicsXuzhouWu ZongxiangH
Nanjing UniversityIntensive InstructionNanjingMingwen XiaoH
Nanjing U. of Posts & Telecomm.Mathematics and PhysicsNanjingQiu ZhonghuaH
Nanjing U. of Posts & Telecomm.Mathematics and PhysicsNanjingYe JunH
Nanjing U. of Posts & Telecomm.Mathematics and PhysicsNanjingLiWei XuM
Nanjing Univ. of Sci. & Tech.Applied MathematicsNanjingChungen XuP
Nanjing Univ. of Sci. & Tech.Applied MathematicsNanjingYongshun LiangP
Nanjing Univ. of Sci. & Tech.MathematicsNanjingJun ZhangH
Nanjing UniversityMathematicsNanjingHuang HuaP
Nanjing UniversityMathematicsNanjingHuikun JiangH
Nanjing UniversityMathematicsNanjingHuikun JiangM
Nanjing UniversityMathematicsNanjingWeihua HuangP
PLA University of Sci. & Tech.Applied Mathematics & Phys.NanjingTian ZuoweiH
PLA University of Sci. & Tech.Institute of MeteorologyNanjingZheng QinP
PLA University of Sci. & Tech.Institute of SciencesNanjingShi HanshengH
Southeast UniversityMathematicsNanjingEnshui ChenP
Southeast UniversityMathematicsNanjingEnshui ChenP
Southeast UniversityMathematicsNanjingZhizhong SunH
Southeast UniversityMathematicsNanjingRui DuP
Southeast UniversitySci. & Tech. OfficeNanjingZhiqiang ZhangH
Southeast UniversitySci. & Tech. OfficeNanjingZhiqiang ZhangP
Southeast Univ. at JiulonghuMathematicsNanjingDaoyuan ZhuP
Southeast Univ. at JiulonghuMathematicsNanjingFeng WangP
Southeast Univ. at JiulonghuMathematicsNanjingQibao JiangP
Xuzhou Institute of Tech.MathematicsXuzhouJiang Yingzi H
Jilin
Beihua UniversityMathematicsJilin CityJiang XiaoweiP
Beihua UniversityMathematicsJilin CityFeng XiaoxiaP
Beihua UniversityMathematicsJilin CityZhao MingP
Jilin Teachers Inst. of Sci. & Tech.Basic ScienceChangchunChangchun LiP
Jilin UniversityMathematicsChangchunChunling CaoH
Jilin UniversityMathematicsChangchunShaoyun ShiP
Jilin UniversityMathematicsChangchunYongkui ZouP
Jilin UniversityMathematicsChangchunYao XiulingH
Jilin UniversityMathematicsChangchunWang GuomingH
Jilin UniversityMathematicsChangchunWang GuomingH
Jilin UniversityMathematicsChangchunChunling CaoH
Liaoning
Dalian Fisheries UniversityMathematicsDalianZhang LishiP
Dalian Fisheries UniversityScienceDalianZhang LifengP
Dalian Maritime UniversityApplied MathematicsDalianDong YuP
Dalian Maritime UniversityMathematicsDalianYun ZhangH
Dalian Maritime UniversityMathematicsDalianNaxin ChenP
Dalian Maritime UniversityMathematicsDalianShuqin YangP
Dalian Maritime UniversityMathematicsDalianGuoyan ChenP
Dalian Maritime UniversityMathematicsDalianSheng BiH
Dalian Maritime UniversityMathematicsDalianXiaoyan ShenH
Dalian Maritime UniversityMathematicsDalianXiaoyan ShenP
Dalian Nationalities UniversityCS and Eng'ngDalianLiu RuiP
Dalian Nationalities UniversityCS and Eng'ngDalianXiangdong LiuH
Dalian Nationalities UniversityCS and Eng'ngDalianDejun YanP
Dalian Nationalities UniversityDean's OfficeDalianLiu YanH
Dalian Nationalities UniversityDean's OfficeDalianFu JieP
Dalian Nationalities UniversityDean's OfficeDalianHengbo ZhangH
Dalian Nationalities UniversityDean's OfficeDalianJinzhi WangH
Dalian Nationalities UniversityDean's OfficeDalianJinzhi WangP
Dalian Nationalities UniversityDean's OfficeDalianXue YeP
Dalian Nationalities UniversityInnovation Education CtrDalianRixia BaiH
Dalian Nationalities UniversityInnovation Education CtrDalianXinwen ChenH
Dalian Nationalities UniversityInnovation Education CtrDalianLiu YanP
Dalian UniversityInformation EngineeringDalianDong XiangyuP
Dalian UniversityMathematicsDalianTan XinxinP
Dalian UniversityMathematicsDalianLiu GuangzhiH
Dalian UniversityMathematicsDalianLiu ZixinH
Dalian UniversityMathematicsDalianZhang ChengH
Dalian UniversityMathematicsDalianLiu ZixinH
Dalian University of TechnologyApplied MathematicsDalianMingfeng HeH
Dalian University of TechnologyApplied MathematicsDalianZhenyu WuP
Dalian University of TechnologyApplied MathematicsDalianYu CaiP
Dalian University of TechnologyApplied MathematicsDalianMingfeng HeM
Dalian University of TechnologyApplied MathematicsDalianLiang ZhangM
Dalian University of TechnologyCity InstituteDalianHongzeng WangP
Dalian University of TechnologyInnovation ExperimentDalianLin FengP
Dalian University of TechnologyInnovation ExperimentDalianLin FengP
Dalian University of TechnologyInnovation ExperimentDalianWanwu XiH
Dalian University of TechnologyInnovation ExperimentDalianTao SunP
Dalian University of TechnologyInnovation ExperimentDalianXinghua GengH
Dalian University of TechnologyInnovation ExperimentDalianLin FengH
Dalian University of TechnologyInnovation ExperimentDalianQiuhui PanH
Dalian University of TechnologyInnovation ExperimentDalianTao SunH
Dalian University of Tech.Software SchoolDalianZhe LiM
Dalian University of Tech.Software SchoolDalianShengjun XuM
Dalian University of Tech.Software SchoolDalianHe JiangP
Dalian University of Tech.Software SchoolDalianMing ZhuP
Dalian University of Tech.Software SchoolDalianTie QiuP
Shenyang Inst. of Aero. Eng'ngElectronicsShenyangLin LiH
Shenyang Inst. of Aero. Eng'ngElectronicsShenyangWeifang LiuH
Shenyang Inst. of Aero. Eng'ngInformation and CSShenyangShiyun WangP
Shenyang Inst. of Aero. Eng'ngInformation and CSShenyangLi WangP
Shenyang Inst. of Aero. Eng'ngInformation and CSShenyangYong JiangP
Shenyang Inst. of Aero. Eng'ngNorth Science and Tech.ShenyangLiu WeifangH
Shenyang Inst. of Aero. Eng'ngNorth Science and Tech.ShenyangLi LinP
Shenyang Inst. of Aero. Eng'ngScienceShenyangFeng ShanH
Shenyang Inst. of Aero. Eng'ngScienceShenyangLimei ZhuP
Ningxia
North Univ. for MinoritiesForeign LanguageYinchuan CityYuanyuan YanP
Shaanxi
Northwest A&F UniversityScienceYanglingWang JingminH
Northwest A&F UniversityScienceYanglingJingMin WangP
Northwest A&F UniversityScienceYanglingJingMin WangH
Northwest UniversityPhysicsXi'anQingyan DongP
Taiyuan Institute of Tech.Electrical EngineeringTaiyuanXiao Ren FanH
Taiyuan Institute of Tech.Electr. Ass'n of Sci. & Tech.TaiyuanFan XiaorenH
Xi'an Jiaotong UniversityMath. Teach. and Exp'tXi'anJicheng LiP
Xi'an Jiaotong UniversityInformation ScienceXi'anZhuosheng ZhangM
Xidian UniversityApplied MathematicsXi'anXiaogang QIH
Xidian UniversityComp'l MathematicsXi'anXuewen MUH
Xidian UniversityIndustrial & App. Math.Xi'anShuisheng ZHOUH
Xidian UniversityScienceXi'anFeng YeP
Xi'an Jiaotong UniversityInformation ScienceXi'anYuan YiH
Shandong
Shandong UniversityComputer Science Tech.JinanHong LiuH
Shandong UniversityLife Science & Biotech. Ed.JinanDuohong ShengP
Shandong UniversityMathematics & Sys. Sci.JinanJian ChenP
Shandong UniversityMathematics & Sys. Sci.JinanJian ChenH
Shandong UniversityMathematics & Sys. Sci.JinanHuang ShuxiangH
Shandong UniversityMathematics & Sys. Sci.JinanHuang ShuxiangP
Shandong UniversitySoftWare CollegeJinanZhang SihuaP
Shandong Univ. at WeihaiInfo. Sci. & Eng'ngWeihaiHuaxiang ZhaoH
Shandong Univ. at WeihaiInfo. Sci. & Eng'ngWeihaiYang BingH
Shandong Univ. at WeihaiInfo. Sci. & Eng'ngWeihaiYang BingH
Shandong Univ. at WeihaiApplied MathematicsWeihaiYangBing SongHuiMinH
Shandong Univ. at WeihaiApplied MathematicsWeihaiZhulou CaoM
Shanghai
Donghua UniversityScienceShanghaiSurong YouP
Donghua UniversityApplied MathematicsShanghaiYong GeP
Donghua UniversityGlorious Sun Bus. & MgmtShanghaiXiaofeng WangM
East China Univ. of Sci. & Tech.MathematicalShanghaiLu XiwenM
East China Univ. of Sci. & Tech.MathematicalShanghaiQian XiyuanP
East China Univ. of Sci. & Tech.MathematicalShanghaiSu ChunjieP
Fudan UniversityMathematical SciencesShanghaiYuan CaoH
Fudan UniversityMathematical SciencesShanghaiZhijie CaiP
Fudan UniversityStatisticsShanghaiZhongyi ZHUM
Shanghai Finance UniversityMathematicsShanghaiYumei LiangH
Shanghai Finance UniversityMathematicsShanghaiRongqiang CheP
Shanghai Finance UniversityMathematicsShanghaiKeyan WangH
Shanghai Jiaotong UniversityMathematicsShanghaiLiuqing XiaoP
Shanghai Jiaotong UniversityXuhui Branch, Math.ShanghaiXiaojun LiuP
Shanghai Nanyang Model HSMathematicsShanghaiJian CuiH
Tongji UniversityEnv'l Sci. & Eng'ngShanghaiHailong YinH
Tongji UniversitySoftwareShanghaiXiongdA ChenP
Taiyuan Institute of TechnologyElectrical EngineeringTaiyuanXiao Ren FanH
Taiyuan Institute of TechnologyElectr. Ass'n of Sci. & Tech.TaiyuanFan XiaorenH
Sichuan
Chengdu University of TechnologyInformation ManagementChengduWei HuaP
Sichuan Agricultural UniversityLife SciencesYa'anShi DuM
Sichuan UniversityElectrical Eng'ng & Info.ChengduXiaoshi LiuP
Sichuan UniversityMathematicsChengduHai NiuM
Sichuan UniversityPhysical Science and Tech.ChengduMin GongH
Univ. of Elec. Sci. & Tech. of ChinaApplied MathematicsChengduDu HongfeiH
Univ. of Elec. Sci. & Tech. of ChinaApplied MathematicsChengduDu HongfeiP
Univ. of Elec. Sci. & Tech. of ChinaInfo. and Comp'n Sci.ChengduZhang YongH
Tianjin
Hebei University of TechnologyMaterial Sci. and Eng'ngTianjinChangqing XuH
Nankai UniversityAutomationTianjinDu LilunP
Zhejiang
Hangzhou Dianzi UniversityApplied PhysicsHangzhouZhifeng ZhangM
Hangzhou Dianzi UniverityInfo. & Mathematics Sci.HangzhouZheyong QiuH
Hangzhou Dianzi UniverityInfo. & Mathematics Sci.HangzhouChengjia LiM
Hangzhou Dianzi UniverityMathematics & PhysicsHangzhouZongmao ChengH
Shaoxing UniversityMathematicsShaoxingHu JinjieH
Shaoxing UniversityMathematicsShaoxingYao YanyunH
Zhejiang Gongshang UniversityInformation and Comp. Sci.HangzhouLi YinfeiP
Zhejiang Gongshang UniversityMathematicsHangzhouZhu LingM
Zhejiang Gongshang UniversityMathematicsHangzhouZhu LingH
Zhejiang Normal UniversityComputer ScienceJinhuaYoutian QuH
Zhejiang Normal UniversityComputer ScienceJinhuaYoutian QuP
Zhejiang Normal UniversityComputer ScienceJinhuaTingting CenP
Zhejiang Normal UniversityMathematicsJinhuaXinzhong LuH
Zhejiang Normal UniversityMathematicsJinhuaXinzhong LuH
Zhejiang Normal UniversityMathematicsJinhuaGuolong HeP
Zhejiang Sci-Tech UniversityPsychologyHangzhouHan ShuguangP
Zhejiang U. of Finance and Econ.Mathematics and StatisticsHangzhouFulai WangH
Zhejiang UniversityMathematicsHangzhouZhiyi TanM
Zhejiang UniversityMathematicsHangzhouQifan YANGM
Zhejiang UniversityNingbo Institute of Tech.NingboWei LiuP
Zhejiang UniversityNingbo Institute of Tech.NingboWei LiuH
Zhejiang University City CollegeCS and TechnologyHangzhouHuizeng ZhangH
Zhejiang University City CollegeCS and TechnologyHangzhouHuizeng ZhangP
Zhejiang University City CollegeCS and TechnologyHangzhouXueyong YuH
Zhejiang University of Sci. & Tech.MathematicsHangzhouMingjun WeiP
Zhejiang University of Tech.MathematicsHangzhouMinghua ZhouP
Zhejiang University of Tech.MathematicsHangzhouXuejun WuH
Zhejiang University of Tech.MathematicsHangzhouXuejun WuH
Zhejiang University of Tech.Zhejiang CollegeHangzhouJun LuP
# Evaluation and Improvement of Healthcare Systems

Luting Kong

Yiyi Chen

Chao Ye

Beijing University of Posts and Telecommunications

Beijing, China

Advisors: Qing Zhou and Zuguo He

# Summary

To evaluate the effectiveness of healthcare systems, we describe metrics in three categories: resources, performance, and inequity. In the Incomplete-Induction Model, we use the Variance Analysis method to evaluate the significance of each metric. The four most important metrics are the percentage of GDP spent on healthcare, the ratio of general government expenditure on health to private expenditure, health-adjusted life expectancy, and health inequity.

We combine the metrics into two integrative metrics, the ratio of resources to performance and health inequity, using the Analytic Hierarchy Process. The two metrics make up the Evaluation Vector.

To compare the effectiveness of different health systems by means of the Evaluation Vector, we construct two comparison models. In Model 1, we compare based on relative disparity. In Model 2, we introduce a coordinate system in which a vector stands for a healthcare system. The effectiveness of the system is reflected by the length of the vector: A smaller length stands for a better system.

In Task IV and Task V, we choose Brazil for its good healthcare system and India for its poor one. According to the two comparison models, both systems are better than that of the U.S. We then analyze the relationship between resources and system effectiveness in order to explain why the Indian system is better.

In Task VI, we analyze the U.S. system and put forward suggestions to improve it. Then we build a model to investigate the influence of the changes. In addition, we measure the historical change in the system. Generally, its effectiveness is increasing, but the growth rate has slowed recently.

We also analyze the strengths and weaknesses of each model.
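The Analytic Hierarchy Process step of the summary can be illustrated with a small sketch. The pairwise-comparison matrix below is a hypothetical example on Saaty's 1-9 scale, not the authors' actual judgments, and the three categories are the ones named in the summary.

```python
import numpy as np

# Hypothetical pairwise-comparison matrix for three metric categories:
# resources, performance, inequity. Entry A[i][j] says how much more
# important category i is judged to be than category j (Saaty 1-9 scale).
A = np.array([
    [1.0, 1/3, 1/2],
    [3.0, 1.0, 2.0],
    [2.0, 1/2, 1.0],
])

def ahp_weights(A, iters=200):
    """Approximate the principal eigenvector of A by power iteration,
    normalized so the weights sum to 1 (the standard AHP weighting)."""
    w = np.ones(A.shape[0]) / A.shape[0]
    for _ in range(iters):
        w = A @ w
        w /= w.sum()
    return w

weights = ahp_weights(A)
# With this matrix, "performance" (index 1) receives the largest weight.
```

In a full AHP analysis one would also compute the consistency ratio of the comparison matrix before trusting the weights; this sketch omits that check.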
# Solution of Task 1

# Description and Analysis

We put forward a method to measure a country's healthcare system. To simplify the problem, we first abstract the system as a simplified input-output system (Figure 1).

![](images/0bdd5811a53d7b55ac3cb2283150633b5c65dda929d7b98a449fc0bf600012e2.jpg)
Figure 1. Healthcare as a simplified input-output system.

Sufficient resources must be put in to guarantee that the system functions well. Viewed in isolation, the more resources the system gets, the better it will be. However, once input is linked to output, the better system is not the one with more resources but the one with a low input-output ratio. Later we discuss how to use the metric of resources to measure a healthcare system.

Output reflects the system's performance: The better the system is, the more output it will produce; we define performance later.

How the system operates also can't be ignored, since that affects the whole health situation of the country, such as the distribution of resources and the health level in different areas. These factors are expressed by the metric of Inequities.

# Metrics

# Resources

A good healthcare system needs adequate resources: human resources, material resources, and financial resources:

- Human resources are the population engaged in medical careers, including physicians, nurses, pharmacists, and other health workers.
- Material resources are the hardware facilities in the medical system, such as hospitals and hospital beds.

- Financial resources include three aspects:

- The percentage of GDP spent on healthcare.
- The percentage of total government expenditure spent on healthcare.
- The ratio of government spending to private spending on health. Clearly, in a good health system this ratio is high.

# Performance

- Health level. The main objective of a health system is improving health [WHO 2001].
We choose disability-adjusted life expectancy (DALE) and infant mortality as criteria, the combination of which can be used to evaluate the level of health.

- Disability-adjusted life expectancy. DALE is the life expectancy at birth adjusted for disability [WHO 2001]. It is a comprehensive measure of the global burden of disease and the trends of population health level [Mathers et al. 2001].
- Infant mortality rate. The infant mortality rate is a significant indicator of medical level: High-medical-level countries have a low infant mortality rate.

- Health-service coverage. Health-service coverage comprises several factors, such as the immunization coverage of 1-year-olds and the percentage of the population with public insurance. A good health system should provide healthcare for all of its citizens. Usually, developed countries have high rates in both of these.
- Responsiveness. Responsiveness measures how the system performs relative to non-health aspects, meeting or not meeting a population's expectation of how it should be treated [WHO 2001]. The notion of responsiveness is composed of seven elements, including [WHO 2001]:

- respect for dignity,
- confidentiality,
- autonomy to participate in choices about one's own health,
- prompt attention,
- amenities of adequate quality,
- access to social support networks, and
- freedom to select which individual or organization delivers one's care.

The seven points above lead to a general metric of responsiveness; below, we discuss how to combine them.

# Inequities

- Inequities in health. A healthcare system is not so perfect if the health level varies widely between different categories of the population, even in countries with a rather good health status on average [WHO 2001]. To describe inequities in health, we use life expectancy in terms of age, race, gender, socioeconomic class, and so on. If every category has the same life expectancy, the system is fair in terms of health level.
- Inequities in responsiveness. As with health level: If some people are treated with courtesy and others are not, there are inequities in responsiveness.
- Fairness of financial contribution. To be fair, the expenditure each household faces should be distributed according to ability to pay rather than by risk of illness [WHO 2001]. That means that a household should not become impoverished to obtain healthcare, and rich households should pay more towards the system than poor households [Gakidou et al. 2000].

# The Combination of Metrics

We devise a composite measure of the three metrics: Resources, Performance, and Inequities.

# Analytical Hierarchy Process

- Divide layers. We divide the metrics into several layers, as Figures 2-5 show.

![](images/230fcdb6d8cc3498b6530fbad382a4fc359f944536c4e004ac7eba267c3854eb.jpg)
Figure 2. Resources.

![](images/d4731f75d6a1a34b08847c19fdc90f3571d2b7e016cb0e6737751181c859a859.jpg)
Figure 3. Performance.

- Evaluation Vector. A good system should use the least resources possible to produce performance, so we use the ratio of Resources to Performance to evaluate the system's effectiveness.

The other metric is the inequity index. Since the two metrics may not have the same magnitude, it is not appropriate to add or multiply them.

![](images/c91c2dd6daef6508debcaeb5eb40af9ce01843a8572ff72ce3ce3738e2e94ba3.jpg)
Figure 4. Inequities.

![](images/0bf5650f9cf0fd3436ae5df46df11ae945447843b3f21f1f228b6e3937384bfb.jpg)
Figure 5. Evaluation.

Hence, we form an evaluation vector (EV) consisting of the two metrics:

$$
\mathrm{EV} = \left(\frac{\text{resources}}{\text{performance}}, \text{inequities}\right).
$$

This is our final composite measure to evaluate the effectiveness of a healthcare system. When both components of the vector are lower, the system is better.
# Determine Weights

We specify the calculation of one metric, Resources; the others can be calculated in the same way. After comparing the effect of two criteria in the same layer on the layer above, we can construct the conjugated-comparative matrix with Saaty's Rule [Jiang 1993]. For example, $a_{12}$ indicates the difference in effect on Resources between Human Resources and Financial Resources. Let $M_1$ be the conjugated-comparative matrix for Resources and $M_2$ the one for Financial Resources:

$$
M_1 = \left[\begin{array}{lll} 1 & 2 & 3 \\ \frac{1}{2} & 1 & 2 \\ \frac{1}{3} & \frac{1}{2} & 1 \end{array}\right], \qquad M_2 = \left[\begin{array}{lll} 1 & 1 & 2 \\ 1 & 1 & 1 \\ \frac{1}{2} & 1 & 1 \end{array}\right].
$$

After processing the matrices with the summation method [Jiang 1993], we obtain the weight vectors:

$$
w_1 = (.539, .297, .164), \qquad w_2 = (.41, .33, .26).
$$

So we can form the formulas:

$$
\text{Resources} = .539 \times \mathrm{FR} + .297 \times \mathrm{HR} + .164 \times \mathrm{MR},
$$

$$
\text{Financial Resources} = .41 \times \mathrm{Asp}_1 + .33 \times \mathrm{Asp}_2 + .26 \times \mathrm{Asp}_3,
$$

where $\mathrm{Asp}_1$, $\mathrm{Asp}_2$, and $\mathrm{Asp}_3$ here denote the three aspects of financial resources, and our other notations are defined in Table 1.

Table 1. Symbols used.
| Abbreviation | Meaning |
|---|---|
| HR | Human resources |
| MR | Material resources |
| FR | Financial resources |
| HL | Health level |
| HSC | Health service coverage |
| RL | Responsiveness level |
| DALE | Disability-adjusted life expectancy |
| HALE | Health-adjusted life expectancy |
| IMR | Infant mortality rate |
| IH | Inequities of health |
| IR | Inequities of responsiveness |
| I | Inequities metric |
| R | Responsiveness metric |
| FFD | Fairness in financial distribution |
| Asp$_i$ | Seven aspects of responsiveness |
| RP | Resources/performance ratio |
| EV | Evaluation vector |
| L | Length of the evaluation vector |
| TH | Total expenditure on health as % of GDP |
| GHtoPH | Ratio of government expenditure on health care to private expenditure |
| GHtoG | Government expenditure on health as percentage of total government expenditure |
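The summation method used above to derive the weight vectors $w_1$ and $w_2$ can be sketched as follows; this is an illustrative reimplementation (not the authors' code), applied to the pairwise-comparison matrices from Determine Weights:

```python
# AHP weights by the summation method: normalize each column of the
# pairwise-comparison matrix, then average the normalized rows.

def ahp_weights(matrix):
    n = len(matrix)
    col_sums = [sum(row[j] for row in matrix) for j in range(n)]
    normalized = [[row[j] / col_sums[j] for j in range(n)] for row in matrix]
    return [sum(row) / n for row in normalized]

M1 = [[1, 2, 3], [1/2, 1, 2], [1/3, 1/2, 1]]   # Resources
M2 = [[1, 1, 2], [1, 1, 1], [1/2, 1, 1]]       # Financial Resources

print([round(w, 3) for w in ahp_weights(M1)])  # [0.539, 0.297, 0.164]
print([round(w, 2) for w in ahp_weights(M2)])  # [0.41, 0.33, 0.26]
```

The rounded outputs reproduce $w_1$ and $w_2$ exactly.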
# Formulas

Using a similar method, we arrive at the following equations:

$$
\text{Performance} = .49 \times \mathrm{HL} + .31 \times \mathrm{HSC} + .2 \times \mathrm{RL},
$$

$$
\text{Health Level} = .6 \times \mathrm{DALE} + .4 \times (1 - \mathrm{IMR}),
$$

$$
\text{Responsiveness} = \frac{1}{7} \sum_{i=1}^{7} \mathrm{Asp}_i,
$$

$$
\text{Inequities} = .4 \times \mathrm{IH} + .4 \times \mathrm{IR} + .2 \times \mathrm{FFD}.
$$

With these formulas and our basic criterion, we easily get the evaluation vector to evaluate the effectiveness of a healthcare system.

# Strengths and Weaknesses

The Analytical Hierarchy Process method is a good combination of qualitative and quantitative analysis, and it gives the weights conveniently. But it involves a certain subjectivity.

# Solution of Task II

# Modify the List of Metrics and Calculate Each

In Task I, we listed three total metrics and several small metrics. But data for some metrics are unavailable, so we need to modify our list of metrics. In this task, we take the U.S. as an example.

# Data Disposal

For the sake of consistency, we need to process the original data, which we denote as $V_{\mathrm{original}}$.

Step 1: Find the maximum and minimum values in the whole table, denoted by $V_{\mathrm{max}}$ and $V_{\mathrm{min}}$. The adjusted value is

$$
V_{\mathrm{adjusted}} = \frac{V_{\mathrm{original}} - V_{\mathrm{min}}}{V_{\mathrm{max}} - V_{\mathrm{min}}}.
$$

Step 2: If the metric has only one factor, we can simply use $V_{\text{adjusted}}$. If the metric consists of several factors, we give each one the weight determined in Task I.

# Neglected Metrics

We neglect the metrics of responsiveness inequities and fairness of financial contribution because we lack data.
To quantify responsiveness, WHO surveyed 35 countries, giving scores in seven aspects; but data for the U.S. are absent [WHO 2007]. Thus, we delete this factor. Without the metric of responsiveness, we adjust the weights in calculating the metric Performance:

$$
\text{Performance} = .613 \times \mathrm{HL} + .387 \times \mathrm{HSC}.
$$

# Selected Metrics

Resources

- Human resources (Table 2):

$$
\mathrm{HR} = .25\,(\text{physicians} + \text{nurses} + \text{dentists} + \text{pharmacists}),
$$

where the numbers are measured per thousand of population.

- Material resources (Table 3): We choose hospital beds per 10,000 population to reflect the amount of material resources.

Table 2. Human resources (per thousand of population).
| Year: 2000 | Physicians | Nurses | Dentists | Pharmacists |
|---|---|---|---|---|
| U.S. | 2.56 | 9.37 | 1.63 | 0.88 |
| Max, 35 countries | 5.91 | 15.2 | 1.63 | 3.14 |
| Min, 35 countries | 0.02 | 0.11 | 0 | 0 |
| Normalized U.S. value | .43 | .61 | 1 | .28 |
Table 3. Material resources (hospital beds per 10,000).
| Year: 2003 | Beds |
|---|---|
| U.S. | 33 |
| Max, 35 countries | 132 |
| Min, 35 countries | 2 |
| Normalized U.S. value | .24 |
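A minimal sketch of the min-max adjustment behind the normalized rows of Tables 2 and 3 (the beds maximum is taken as 132, the value consistent with the normalized result of .24):

```python
def minmax(v, vmin, vmax):
    """Min-max adjustment from the Data Disposal step."""
    return (v - vmin) / (vmax - vmin)

# Physicians per 1,000 (Table 2): U.S. 2.56 between min 0.02 and max 5.91.
print(round(minmax(2.56, 0.02, 5.91), 2))  # 0.43
# Hospital beds per 10,000 (Table 3): U.S. 33 between min 2 and max 132.
print(round(minmax(33, 2, 132), 2))  # 0.24
```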
- Financial resources (Table 4):

* TH = Total expenditure on health as % of GDP
* GHtoPH = Ratio of government expenditure on health care to private expenditure
* GHtoG = Government expenditure on health as percentage of total government expenditure.

$$
\mathrm{FR} = \text{Financial resources} = .33\,\mathrm{TH} + .41\,\mathrm{GHtoPH} + .26\,\mathrm{GHtoG}.
$$

Since the usual calculation makes the normalization result for GHtoPH an extreme outlier, we calculate it instead by

$$
V_{\mathrm{adjusted}} = \frac{\ln V_{\mathrm{original}} - \ln V_{\mathrm{min}}}{\ln V_{\mathrm{max}} - \ln V_{\mathrm{min}}}.
$$

Table 4. Financial resources as percentage of GDP.
| Year: 2004 | TH % | GHtoPH % / % | GHtoG % |
|---|---|---|---|
| U.S. | 15.4 | 44.7 / 55.3 | 18.9 |
| Max, 35 countries | 16.6 | 98.8 / 1.2 | 33.4 |
| Min, 35 countries | 1.6 | 12.9 / 87.1 | 1.4 |
| Normalized U.S. value | .92 | .27 | .55 |
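The log-based adjustment can be checked against the normalized value .27 reported for GHtoPH in Table 4, assuming the ratio is formed as public/private expenditure before the transform:

```python
import math

def log_minmax(v, vmin, vmax):
    """Log-scale min-max adjustment used for the GHtoPH ratio."""
    return (math.log(v) - math.log(vmin)) / (math.log(vmax) - math.log(vmin))

us, lo, hi = 44.7 / 55.3, 12.9 / 87.1, 98.8 / 1.2  # Table 4 ratios
print(round(log_minmax(us, lo, hi), 2))  # 0.27
```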
Performance

- Health level (Table 5):

* Disability-adjusted life expectancy (DALE): In our data, there is no information about DALE, so we use HALE, health-adjusted life expectancy, to substitute for it.

$$
\mathrm{HL} = \text{Health level} = .6\,\mathrm{HALE} + .4\,(1 - \mathrm{IMR}).
$$

* Infant mortality.

Table 5. Health level.
| | HALE (2002): Male | Female | Ave. | Infant mortality (2005, per 1,000 live births) |
|---|---|---|---|---|
| U.S. | 67 | 71 | | 7 |
| Max, 35 countries | 72 | 78 | | 165 |
| Min, 35 countries | 27 | 30 | | 2 |
| Normalized U.S. value | .89 | .85 | .87 | .031 |
- Health service coverage (Table 6):

We choose percentages of immunization coverage, plus TB treatment success, to evaluate the level of health service coverage:

* Measles = immunization coverage among one-year-olds with one dose of measles
* Diphtheria = immunization coverage among one-year-olds with three doses of diphtheria, tetanus toxoid and pertussis (DTP3)
* HepB3 = immunization coverage among one-year-olds with three doses of Hepatitis B (HepB3)
* TB = tuberculosis treatment success (%)

$$
\text{Coverage} = .25\,(\text{Measles} + \text{Diphtheria} + \text{HepB} + \text{TB}).
$$

Table 6. Health service coverage (percentages).
| | Measles (2005) | Diphtheria (2005) | HepB3 (2005) | TB (2004) |
|---|---|---|---|---|
| U.S. | 93 | 96 | 92 | 61 |
| Max, 35 countries | 99 | 99 | 99 | 100 |
| Min, 35 countries | 23 | 8 | 20 | 9 |
| Normalized U.S. value | .92 | .97 | .91 | .61 |
- Inequities

We choose the probability of dying aged $< 5$ years per 1,000 live births (under-5 mortality rate) by place (rural or urban). To our disappointment, data are not available or not applicable for Africa, the Americas, and Europe. Therefore, to analyze the healthcare system in the U.S., we use "infant mortality by race" to indicate inequity.

Table 7. Health inequity in the U.S.: Under-5 mortality in 2004.
| | Under-5 mortality | Normalized (relative to Black/AA) |
|---|---|---|
| White | 5.7 | .12 |
| Black or African-American | 13.2 | 1 |
| American Indian or Alaska Native | 8.4 | .44 |
| Asian or Pacific Islander | 4.7 | .09 |
| Hispanic or Latino | 6.5 | .67 |
Table 7 shows high variability, indicating disparity among races and consequently severe health inequity.

# Comparison of Healthcare Systems

We construct two models to compare the effectiveness of healthcare systems.

# Model 1

Let $\mathrm{EV}_i$ be the evaluation vector of system $i$: $\mathrm{EV}_i = (R_i, I_i)$, where $R_i$ is the ratio of Resources to Performance and $I_i$ is the inequity index.

# Design of the Model

We construct the comparison function

$$
f(\mathrm{EV}_1, \mathrm{EV}_2) = \frac{R_2 - R_1}{\max(R_1, R_2)} + \frac{I_2 - I_1}{\max(I_1, I_2)}.
$$

The first term is the relative disparity of the resources/performance ratio between the two systems; the second term is the relative disparity of the inequity index.

If $f(\mathrm{EV}_1, \mathrm{EV}_2) > 0$, then system 1 is better than system 2.

# Model Expansion

In our function, the two metrics (resources/performance ratio and inequity index) have equal weight. They could be weighted otherwise.

# Model 2

# Basic Assumption and Symbol Definition

As before, EV is the evaluation vector, with components $R$ (ratio of Resources to Performance) and $I$ (index of inequity). The length of the vector, $L = \sqrt{R^2 + I^2}$, measures the effectiveness of the healthcare system.

# Basic Model

All the points on the same circle correspond to evaluation vectors of the same length (Figure 6); in other words, the systems have the same effectiveness. Consequently, a system could adjust its internal distribution of resources: It could sacrifice the resources/performance ratio to improve the inequity index, or vice versa.

![](images/8c8bd6bc89fbb644381f83275810cacd37dde9f06a0b7f908779d910672b666d.jpg)
Figure 6. Two healthcare systems of equal effectiveness.

To compare systems, we draw concentric circles according to the evaluation vectors. A system with a smaller circle is better.
# Strengths and Weaknesses

# Model 1

- The calculation in Model 1 is simple and clear. The model can be easily understood.
- Model 1 can be used to compare any two healthcare systems.
- The weights of the resources/performance ratio and the inequity index can be adjusted flexibly.

# Model 2

- Compared to Model 1, Model 2 is more visual and intuitive.
- Further development of Model 2 can deal with two indexes not of the same order of magnitude. [EDITOR'S NOTE: We omit this elaboration.]
- In Model 2, the weights of the resources/performance ratio and the inequity index are equal.

# Solution of Task III

# The Incomplete-Induction Model

In Task II, we modified the list of metrics. However, some metrics are not so important. We now use the Incomplete-Induction Model to select the most important metrics.

We select metrics that are applicable to most countries' systems. According to the WHO [2001], the metric Inequities is indispensable in evaluating the effectiveness of a health system. So we need to choose other metrics only from among the 14 in the two major factors Resources and Performance.

# Design of the Model

Step 1: Choose $N$ countries to analyze.

Step 2: For each country $i$, obtain the resources/performance ratio $\mathrm{RP}_i^0$ (the first component of the evaluation vector) by the method of Task II.

Step 3: Delete the $j$th metric and calculate $\mathrm{RP}_i^j$ using the other 13 metrics.

Step 4: Let $P_{j} = \sum_{i = 1}^{N}\left(\mathrm{RP}_{i}^{j} - \mathrm{RP}_{i}^{0}\right)^{2}$.

Step 5: Choose the metrics associated with the two (or more) largest $P_{j}$s.

Step 6: Some metrics belong to Resources while others belong to Performance, so we adjust the selection if the metrics chosen are all from Resources or all from Performance.
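Steps 1-5 amount to a leave-one-out sensitivity ranking. A sketch, where `rp` stands in for the Task II resources/performance calculation and the toy values are purely illustrative:

```python
def metric_importance(countries, metrics, rp):
    """Rank metrics by P_j: the summed squared shift in the
    resources/performance ratio when metric j is deleted."""
    baseline = {c: rp(c, metrics) for c in countries}
    p = {}
    for j, m in enumerate(metrics):
        reduced = metrics[:j] + metrics[j + 1:]
        p[m] = sum((rp(c, reduced) - baseline[c]) ** 2 for c in countries)
    return sorted(metrics, key=lambda m: p[m], reverse=True)

# Toy illustration: metric 'a' dominates, so deleting it shifts RP most.
vals = {'x': {'a': 10.0, 'b': 1.0, 'c': 1.0}}
rp = lambda c, ms: sum(vals[c][m] for m in ms) / len(ms)
print(metric_importance(['x'], ['a', 'b', 'c'], rp)[0])  # a
```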
# Result

We choose for our analysis 10 countries, from different regions of different continents, from different levels of development, and with different healthcare systems. In other words, they are representative of the whole world: Argentina, Egypt, Finland, Ghana, Honduras, Japan, Syria, Thailand, and the U.S.

The three metrics with the highest values of $P_{j}$ are all submetrics of Resources. Consequently, we keep the first two only and substitute for the third the fourth-ranking metric, which is from Performance. Including the metric for Inequity, which we regard as mandatory, the set of four metrics is:

- $M_{1} =$ total expenditure on health as percentage of GDP,
- $M_2 =$ ratio of public to private expenditure on health,
- $M_{3} = \mathrm{HALE}$, and
- $M_4 =$ Inequities.

# Application

The resources/performance ratio $R$ can be expressed in terms of the four selected metrics as

$$
R = \frac{.446 M_1 + .554 M_2}{M_3},
$$

where the weights calculated in Task II are renormalized as

$$
\frac{.33}{.33 + .41} = .446, \qquad \frac{.41}{.33 + .41} = .554.
$$

In Task IV, we discuss how to calculate the metric for inequities.

# Measure the Historical Change

We use the four selected metrics to evaluate a system's historical change; we take the U.S. as our example.

# The Change in $M_{1}$ and $M_{2}$

We show the trend of $M_{1}$ and $M_{2}$ in Figure 7. Their values increase, which means that the whole nation (especially the government) has attached increasing importance to medical treatment.

![](images/560a719351be3cd4146014a2d7cfe3163aad30446dcc7f6702dd53c2700e6585.jpg)
Figure 7. Trends in U.S. total expenditure on health as percentage of GDP ($M_{1}$, lower curve) and in the ratio of public to private expenditure on health ($M_{2}$, upper curve).
# The Change in $M_{3}$: HALE

HALE is the most direct and obvious criterion to reveal the health level of the population. Because HALE is a new metric (in use only since 2000), we can't get enough historical data. Under the circumstances, we use a similar metric, life expectancy, to substitute for HALE. Figure 8 shows the trend.

![](images/7561efd0421c64057dd93a52a63b548a77c9642e818433863238807ce3729785.jpg)
Figure 8. Trend in U.S. life expectancy at birth (years, both sexes).

# The Change in $M_4$: Inequities

A good healthcare system aims at not only improvement of the health level but also reduction of health inequity. If inequity is reduced or even eliminated, the system is considered to be improved. Recall that we measure inequity in terms of infant deaths per 1,000 live births. Figure 9 shows improvement in the early 1990s and little change since.

![](images/89b3392b16bc57c8dafe2bc136067e661e5cd1b5886d9d2bb08f1769fe684307.jpg)
Figure 9. Trend in U.S. healthcare inequity (as measured by infant deaths per thousand live births for different groups).

# Solution of Task IV

Brazil's universal public healthcare system creates enormous value for the people there; hence, Brazil is considered to have good healthcare.

# Calculation of Health Inequities

To measure health inequities in Brazil, we choose three metrics: infant mortality by place (rural and urban), by wealth, and by education level of the mother. In this way, we get three ratios, $a_1$, $a_2$, and $a_3$.

The most equitable situation is a ratio equal to 1; the extent of deviation from 1 shows the unfairness of the system. We use the natural logarithm of the original data to normalize the extent of deviation. The bigger the absolute value is, the worse the fairness is:

$$
V_{\mathrm{adjusted}} = \frac{|\ln V_{\mathrm{original}}|}{|\ln V_{\mathrm{max}}|}.
$$

Summing the $V_{\text{adjusted}}$ values with appropriate weights, we easily get the index of health inequities of Brazil.

The index of health inequities of India can be calculated in the same way.

# Comparison

The normalized data for the four metrics for the U.S., Brazil, and India are in Table 8.

Table 8. Comparison of countries.
| | % of GDP | Public/Private | HALE | Inequities | EV |
|---|---|---|---|---|---|
| U.S. | .92 | .27 | .82 | .67 | (.64, .67) |
| Brazil | .48 | .33 | .67 | .68 | (.59, .68) |
| India | .27 | .06 | .54 | .66 | (.24, .66) |
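Applying the two comparison models of Task II to the rounded entries of Table 8 yields the same conclusions (a sketch; the paper's exact figures were presumably computed from unrounded data, so only signs and orderings are checked):

```python
import math

def compare(ev1, ev2):
    """Model 1: f > 0 means system 1 is better."""
    (r1, i1), (r2, i2) = ev1, ev2
    return (r2 - r1) / max(r1, r2) + (i2 - i1) / max(i1, i2)

def length(ev):
    """Model 2: a smaller vector length means a better system."""
    return math.hypot(*ev)

ev_us, ev_brazil = (0.64, 0.67), (0.59, 0.68)  # EV entries from Table 8
print(compare(ev_us, ev_brazil) < 0)           # True: Brazil's system is better
print(length(ev_brazil) < length(ev_us))       # True: Brazil's circle is smaller
```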
The health-adjusted life expectancy in Brazil is shorter than in the U.S., but the U.S. puts more resources into its system in terms of percentage of GDP spent on healthcare. The inequity index in Brazil is a little higher than in the U.S., which means that the distribution of healthcare is more balanced in the U.S.

Using isolated metrics to compare, it's hard to say which system is better. Therefore, we compare using the evaluation vector.

- Compare by Model 1: $f(\mathrm{EV}_{\mathrm{U.S.}}, \mathrm{EV}_{\mathrm{Brazil}}) = -0.05 < 0$, so by our comparison principle, the system in Brazil is better.
- Compare by Model 2: $L_{\mathrm{U.S.}} = .92$, $L_{\mathrm{Brazil}} = .91$. Smaller is better, hence the system in Brazil is better.

# Solution of Task V

Compared to other countries, India ranks very low in percentage of GDP spent on healthcare, while the U.S. ranks high; moreover, the Indian government covers little of residents' expenditure on healthcare.

The lack of resources leads to low output. The health-adjusted life expectancy in India is shorter than in the U.S. However, we must take into account that India has far fewer medical resources. The ratio of resources to performance in India is much lower than in the U.S.; in other words, India's system is better than the U.S.'s.

In terms of inequities, the two countries are almost at the same level.

Even with increasing resources, the effectiveness of a system won't improve without limit. When the amount of resources is below the critical point, the effectiveness of the system increases sharply as resources grow. But when resources are above the critical point, effectiveness grows much more slowly as they increase.

Figure 10 shows India at point A and the U.S. at point B. Therefore, India's system has broad prospects for development.
To improve the effectiveness of its system, India should put more resources into the system, such as increasing the percentage of GDP spent on healthcare, building more hospitals, and adding healthcare workers.

![](images/f1e07e419122353042333c6039c243008df0c09a13849addbb206411451b007f.jpg)
Figure 10. Relative status of the healthcare systems of the U.S. (A) and India (B).

For the U.S., more resources can't bring higher effectiveness. The way to improve the system is to make some change in policy. We discuss detailed measures next.

# Solution of Task VI

We choose the U.S. healthcare system for further study.

# Introduction

For the U.S., both criteria $R$ and $I$ are at a high level. But high input doesn't return correspondingly high output. "The reasons for the especially high cost of healthcare in the U.S. can be attributed to a number of factors, ranging from the rising costs of medical technology and prescription drugs to the high administrative costs resulting from the complex multiple payer system in the U.S." [Bureau of Labor Education 2001]. So we need to restructure the system based on our four metrics.

# Restructuring the System

Modeling: With the four metrics, we can simplify the healthcare system as in Figure 11.

![](images/a68473383a1d9bb18a48af61ba6ce50eb7ba51528e48d22ed4cd597252e5479d.jpg)
Figure 11. Simplified U.S. healthcare system.

Suppose the initial evaluation vector is

$$
\mathrm{EV}_0 = \left(\frac{\mathrm{res}_0}{\mathrm{per}_0}, I_0\right).
$$

The quantity $\mathrm{res}_0$ is determined by $M_1$ and $M_2$, while $M_3$ and $M_4$ reflect the levels of $\mathrm{per}_0$ and $I_0$, respectively.

Since $M_1$ and $M_2$ are inputs of the system while $M_3$ and $M_4$ are outputs, we can describe the system with two functions:

$$
M_3 = f(M_1, M_2), \qquad M_4 = g(M_1, M_2).
$$

# Simplifying the Model

Important considerations are:

- Life expectancy $(M_3)$ is more sensitive to change in total expenditure on health $(M_1)$ than inequities $(M_4)$ is.
- Altering the ratio of public expenditure to private $(M_2)$ produces a more sudden response in inequities $(M_4)$ than in life expectancy $(M_3)$.

Thus, the model can be simplified to two single-variable functions:

$$
M_3 = f(M_1), \qquad M_4 = g(M_2).
$$

# Constructing the Functions

# $M_{3}$, Life Expectancy.

The U.S. spends $15.4\%$ of GDP on health, the highest percentage in the world. The input and output of its health system have reached saturation: Despite putting more resources into the system, we get little more output, which doesn't match the high input.

For a health system, the growth rate is low when the input (expenditure) is too small or too large but high when the input is appropriate. So we choose the logistic model to describe the function for $M_3$:

$$
M_3 = \frac{ab}{b + (a - b)\exp(-cM_1)}.
$$

The value of the function is $b$ when the independent variable is 0, which stands for the HALE when a country spends none of its GDP on health. We use the U.S. value for the year 1900, so we take $b = 47.3$. The value of the function is $a$ as the independent variable goes to infinity, which stands for the saturation level of HALE. The highest life expectancy now is about 78; thus we take $a = 80$. We use data from 2004 and get $c = 0.201$. Therefore,

$$
M_3 = \frac{80 \times 47.3}{47.3 + (80 - 47.3)\exp(-0.201 M_1)}.
$$

# $M_4$, Inequities.

In our opinion, $M_4$ will decrease as $M_2$ (ratio of public to private expenditure) increases. For the sake of convenience, we select an inversely proportional function:

$$
M_4 = \frac{k}{M_2}.
$$

We use data from 2004 and get $k = 0.548$. Therefore,

$$
M_4 = \frac{0.548}{M_2}.
$$

# Putting Forward Measures

We consider several measures that alter one of the two inputs or both; accordingly, the two outputs vary.

1. Altering the ratio of government expenditure on health to private expenditure. In the U.S. system, the main use of government expenditure on health is to improve the health level of low-income people. Altering this ratio can change the level of inequity.

2. Limiting the rise of total expenditure on health as percentage of GDP, to keep it constant at an acceptable level. Though there has been a sharp increase in total expenditure on health as percentage of GDP, the health level doesn't improve much. That is to say, it has reached a saturation point.
3. Limiting the items covered and the scope of public insurance. In the existing system, public insurance covers a lot of items, some of which may be unnecessary.
4. Increasing the coverage of public insurance.
5. Strictly limiting the use of new medicines, medical equipment, facilities, and medical technology. Research on these has cost too much, and some outcomes are not so important in improving the overall health level.
6. Regulating the cost of medicine.
7. Reducing excessive medical treatment.
8. Promoting positive competition between hospitals to reduce patients' costs for medicine and medical treatment.

All these measures can be divided into three groups by their effect on the inputs:

- Group A (affects only $M_{1}$): Measures 2, 3, 5, 6, 7, 8
- Group B (affects only $M_2$): Measure 1
- Group C (affects both $M_{1}$ and $M_{2}$): Measure 4

# Testing Various Changes

Maybe some measures can improve the healthcare system while others have the opposite effect. Therefore, we have to quantify how each kind of measure affects the system.

In Task IV, we got the evaluation vector for the U.S. In this task, we take $M_{1}$ and $M_{2}$ as the inputs of the system and $M_{3}$ and $M_{4}$ as the outputs.
Because we are analyzing only one country, without comparing it to another, we can't normalize the original data. If we calculate the vector in the same way as in Task IV, abnormal values may result. So we modify the calculation method:

$$
\mathrm{EV} = (R, I): \qquad R = \frac{M_1}{M_3}, \qquad I = M_4 = \frac{0.548}{M_2}.
$$

So the initial evaluation vector of the U.S. is

$$
\mathrm{EV}_0 = (R_0, I_0) = (.206, .67).
$$

- Measures in Group A affect only the total expenditure on health as percentage of GDP $(M_{1})$. Suppose that its initial value changes by $5\%$. Calculating $M_{3}$ gives:

  - If $+5\%$: $\mathrm{EV}_1 = (.214, .67)$.
  - If $-5\%$: $\mathrm{EV}_1 = (.196, .67)$.

  Hence, reasonably decreasing the total expenditure on health as percentage of GDP can improve the healthcare system of the U.S.

- The measure in Group B affects only the ratio of public expenditure to private $(M_2)$. Suppose that its initial value changes by $5\%$. Calculating $M_4$ gives:

  - If $+5\%$: $\mathrm{EV}_1 = (.206, .638)$.
  - If $-5\%$: $\mathrm{EV}_1 = (.206, .705)$.

  Hence, increasing the ratio of public expenditure to private can improve the healthcare system of the U.S.

- The measure in Group C affects both $M_{1}$ and $M_{2}$. Suppose that the initial values change by $5\%$. Calculating $M_{3}$ and $M_{4}$ gives:

  - Case a: If $M_1 + 5\%$ and $M_2 + 5\%$: $\mathrm{EV}_1 = (.214, .638)$.
  - Case b: If $M_1 + 5\%$ and $M_2 - 5\%$: $\mathrm{EV}_1 = (.214, .705)$.
  - Case c: If $M_1 - 5\%$ and $M_2 + 5\%$: $\mathrm{EV}_1 = (.196, .638)$.
  - Case d: If $M_1 - 5\%$ and $M_2 - 5\%$: $\mathrm{EV}_1 = (.196, .705)$.

Evidently Case c is the best and Case b is the worst.
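The perturbation tests can be reproduced with the fitted functions for $M_3$ and $M_4$; an illustrative check (the paper's exact EV entries come from its own data, so only the qualitative directions are asserted here):

```python
import math

A, B, C, K = 80.0, 47.3, 0.201, 0.548  # logistic and inverse-model parameters

def m3(m1):
    """Logistic model: life expectancy vs. % of GDP spent on health."""
    return A * B / (B + (A - B) * math.exp(-C * m1))

def ev(m1, m2):
    """Evaluation vector (R, I) = (M1 / M3, M4)."""
    return m1 / m3(m1), K / m2

m1_0, m2_0 = 15.4, 44.7 / 55.3  # U.S. inputs for 2004
r0, i0 = ev(m1_0, m2_0)

# Group A: cutting M1 by 5% lowers R; Group B: raising M2 by 5% lowers I.
assert ev(0.95 * m1_0, m2_0)[0] < r0
assert ev(m1_0, 1.05 * m2_0)[1] < i0
print(round(i0, 2))  # 0.68
```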
The measure in Group C is coverage of public medical insurance. Increasing it on the one hand increases total expenditure on health as percentage of GDP but on the other hand also increases the ratio of public expenditure to private. So such an increase is similar to Case a.

# Strengths and Weaknesses

We have built a model that reveals how the system works, based on the four metrics that we created in Task III. Its parts combine well. Also, it is easy and convenient to test the measures with the model. But there are some weaknesses in the simplification: A single-independent-variable function is not the best description of a healthcare system.

# Suggestion: Major Changes

From the results above, we find two major changes that can improve the system:

- Decrease total expenditure on health as percentage of GDP.
- Increase the ratio of government expenditure on health to private expenditure.

# References

Bureau of Labor Education, University of Maine. 2001. The U.S. health care system: Best in the world, or just the most expensive? http://dll.umaine.edu/ble/U.S.%20HCweb.pdf. Accessed 15 Feb 2008.
Gakidou, Emmanuela, Christopher J.L. Murray, and Julio Frenk. 2000. Measuring preferences on health system performance assessment. GPE Discussion Paper No. 20. Geneva, Switzerland: World Health Organization. http://www.who.int/healthinfo/paper20.pdf. Accessed 15 Feb 2008.
Jiang, Qiyuan. 1993. Mathematical Modeling [in Chinese]. 2nd ed., pp. 224-244. Beijing, China: High Education Press.
Mathers, Colin D., Ritu Sadana, Joshua A. Salomon, Christopher J.L. Murray, and Alan D. Lopez. 2001. Estimates of DALE for 191 countries: Methods and results. GPE Discussion Paper No. 16. Geneva, Switzerland: World Health Organization. http://www.who.int/healthinfo/paper16.pdf. Accessed 15 Feb 2008.
National Center for Health Statistics. 2007. Health, United States, 2007. Washington, DC: U.S. Government Printing Office. http://www.cdc.gov/nchs/data/hus/hus07.pdf. Accessed 16 Feb 2008.
+World Health Organization (WHO). 2000. The World Health Report 2000—Health Systems: Improving Performance. Geneva, Switzerland: World Health Organization. http://www.who.int/whr/2000/en/. Accessed 15 Feb 2008. +______ 2007. World Health Statistics 2007. Geneva, Switzerland: WHO Press. http://www.who.int/whosis/whostat2007.pdf. Accessed 15 Feb 2008. + +![](images/54375d4547681f42730149971688c79f72b9e3b340c64eb44b843adcb806339c.jpg) +From left to right: Advisor Qing Zhou, team members Luting Kong, Yiyi Chen, Chao Ye, and advisor Zuguo He. + +# Better Living through Math: An Analysis of Healthcare Systems + +Denis Aleshin +Bryce Lampe +Parousia Rockstroh + +Harvey Mudd College Claremont, CA + +Advisor: Darryl Yong + +# Summary + +Compelled by the great disparities among healthcare systems across the globe, we create a mathematical model to predict key areas for improvement in stunted healthcare systems. We first establish a framework for discussing and comparing healthcare systems; using data taken from the World Health Organization, we use this framework to rank the systems of the U.S., Sweden, and Nigeria. Our rankings agree with previous studies. + +Using a probabilistic model incorporating economic factors, we investigate the effects of various changes to the U.S. system and develop a strategy to improve its rank. Our results indicate that the U.S. should place more emphasis on the prevention of illness, and it should shift toward a more-centralized system so as to make care more accessible to lower- and middle-class individuals. + +# Introduction + +While the U.S. has historically spent more per capita on healthcare than most other countries, the U.S. has seen little improvement in healthcare, and even the U.S. Congress admits that the system is far from the best [1993]. Although healthcare is a significant voting issue, Americans remain confused as to what the remedy for their healthcare should be [Hitti 2008]. 
Additionally, recent problems such as medical tourism—traveling to foreign countries for healthcare—have reinforced the apparent need for reform [Kher 2006], but uncertainty remains as to what reforms should be implemented.

We provide a guideline for improving U.S. healthcare. We offer a framework for comparing and predicting various aspects of healthcare systems. We define important terms and identify metrics for measuring quality. We use the combined metrics to rank the healthcare systems of the U.S., Nigeria, and Sweden; these rankings agree with previous literature and support the effectiveness of our metrics.

We present a predictive model for a healthcare system that can account for different economic classes. Tests run with this model suggest that putting more emphasis on prevention of illness and shifting toward more-centralized healthcare would greatly benefit the U.S.

# Defining Healthcare

# What is Healthcare?

Healthcare is the utilization of medical knowledge with the intent of maintaining or restoring an individual's health of body or mind. A healthcare system is a network of facilities and workers with the purpose of administering healthcare to a country's population.

# Quality of Healthcare

The quality of a healthcare system should reflect how proficient it is at keeping individuals healthy. However, what is considered healthy can change over time, so we define our terms to accommodate changes in medical opinion.

The Organization for Economic Co-operation and Development (OECD), a large organization concerned with improving international living standards, defines quality of a national healthcare system as:

The degree to which health services for individuals and populations increase the likelihood of desired health outcomes and are consistent with current professional knowledge. [2004]

A health outcome is a measurable statistic associated with some feature of the overall health of a nation.
We take desirable health outcomes to be universal, and we classify a health outcome as desirable or undesirable depending on the current consensus of the medical community. For example, an increase in a population's average lifespan should always be desired over a decrease, and fewer smokers in a population should always be desired over more smokers [Peto and Lopez 2000]. + +# Metrics for Assessing Quality + +We define a metric for the quality of a country's healthcare system as a measurement of something that is capable of impacting a health outcome. A desirable metric is associated with a desirable health outcome (e.g., average access to medical care, frequency of contraceptive use, frequency of immunizations), and vice versa for an undesirable metric (e.g. occurrence of diseases, waiting times for doctors, unaffordable costs). + +Due to the large differences in how healthcare is provided throughout the world, some metrics—especially those impacted by culture or geographic conditions—might be inappropriate for comparisons between nations. That is, for a metric to be an effective measure of quality, it should measure something that is impacted directly by health systems and it should be influenced by as few outside factors as possible. + +# Quality Criteria + +The OECD has identified three primary components of success of any healthcare system: + +- promotion of good health, +- prevention of illness, and +- treatment and diagnosis of illness [Kelley and Hurst 2006]. + +Additionally, the OECD has compiled a list of metrics that best measure the quality of each of these components [2004]. We use a slightly modified version of the OECD's description for a healthcare system; we consider a system to consist of the following components: + +Prevention. 
Since promotion of good health and prevention of illness primarily apply only to healthy populations, we treat these two components as a single component, measured by metrics suggested by the OECD for their prevention and promotion components [2004].

Accessibility. People are kept away from treatment or diagnosis by the lack of nearby healthcare facilities, unavailability of staff, and the price of care [Feldstein 2006]. A healthcare system cannot be effective if it cannot be reached by its population. Metrics for this parameter should measure the system's ability to accommodate people's needs in these respects.

Treatment. This component is unchanged from the OECD definition; the quality of this component should be measured by metrics suggested by the OECD for their treatment and diagnosis component [2004].

# Which Metrics to Use

Two common metrics for healthcare quality are life expectancy and infant mortality rate, but both are influenced by factors beyond the control of a reasonable healthcare system [O'Neill and O'Neill 2007]. Life expectancy can be considered more a measure of quality of life than quality of healthcare; it does not distinguish between treatable causes of death (e.g., disease) and other causes (e.g., war). Similarly, infant mortality rates are strongly influenced by cultural, social, and educational factors. Because of these outside forces, comparisons made with only these metrics are not reliable [O'Neill and O'Neill 2007].

We follow guidelines of the OECD, which has concluded that an effective metric is best characterized by three things:

First, it [must] capture an important performance aspect [of the healthcare system]. Second, it [must] be scientifically sound. And third, it [must be] potentially feasible. [2004]

# Data for Metrics

The World Health Organization (WHO) offers an abundance of statistics relating to healthcare, which are widely believed to be accurate and unbiased.
We rely on the WHO as the primary source for health outcomes associated with our metrics.

# Our Metrics

We choose metrics based on the recommendations of the OECD [2004] and the availability of data in the WHO database [2008]. We group them by component of health, as set out earlier.

# Prevention

Obesity. This metric reflects the emphasis that a healthcare system places on healthy dietary habits as well as the public's desire to adopt those habits. Data for this metric are readily available from the WHO as "Adults aged $>15$ years who are obese."

Prevalence of contraceptives. Contraceptives prevent both unwanted pregnancies and the spread of sexually-transmitted diseases. The majority of abortions are performed due to unwanted pregnancies; abortions have substantial long-term consequences in women, both psychologically and medically [OECD 2004]. This metric responds to measures taken by a healthcare system to reduce risks of unprotected sex. Data are available from WHO as "contraceptive prevalence rate."

Smoking. Reducing smoking has traditionally been the responsibility of healthcare systems. This metric is a measure of how susceptible the public is to beneficial influence from the healthcare system [OECD 2004]. Data are available from WHO as "prevalence of current tobacco use among adults aged $>15$ years."

Immunizations. These metrics quantify how proficient a healthcare system is at preventing and controlling communicable diseases [OECD 2004]. WHO offers data for diphtheria, measles, tetanus toxoid, hepatitis B, and pertussis immunizations in one-year-olds [WHO 2008]. We take an additional data set for polio immunizations from Earth Trends [n.d.].

Low birth weight. This metric is an indicator of the prenatal care that at-risk mothers receive. It reflects a healthcare system's ability to identify risk factors in patients as well as its capacity for preventing those factors from causing serious harm [OECD 2004].
Data are available from WHO as "low birth weight, newborns."

# Accessibility

Abundance of medical personnel. This indicates the availability of professionals capable of administering care to the population. The WHO provides several data sets for this metric, including the proportions of physicians, nurses, midwives, dentists, and pharmacists in the population.

Abundance of medical facilities. This metric measures the proximity to healthcare systems. Data for this metric are limited; the WHO provides data only for "medical beds per 100,000 population."

Affordability for individuals. This metric measures how much money individuals pay for care. Data for this metric are not directly available from WHO; instead, we derive them from its "private spending" and "out of pocket spending" statistics.

# Treatment

Success of treatments. This metric should reflect a healthcare system's level of care. The OECD suggests using the readmission rates for patients who have suffered congestive heart failure [2004], but these data are not widely available. Hence, we resort to using the "tuberculosis detection rate" and "tuberculosis treatment success" data provided by the WHO as an alternative.

# Meta-Metrics

It would be convenient to combine all the metrics in a meaningful way; we propose an algorithm for computing what we call meta-metrics. Begin by selecting a healthcare component; for each of the metrics corresponding to this component, do the following:

- Determine the maximum and minimum values of the metric for a large sample of countries; if a large sample is not available, then the metric cannot be used reliably.
- Scale each country's datum linearly into the interval $[0,1]$, where the minimum value is mapped to 0 and the maximum value to 1.
- If the metric is undesirable (e.g., prevalence of obesity), subtract the scaled values from 1 to transform the metric into a desirable metric (e.g., lack of obesity).
Then calculate the average value of all metrics associated with a country and define this number to be the country's meta-metric value for the chosen healthcare component.

A meta-metric represents how well a country performs, on average, relative to the rest of the world for a given healthcare component. A value close to 1 signifies that the component delivers care of the highest quality currently available; a value near 0 signifies that the country delivers some of the poorest-quality care. Because of their compactness, meta-metrics are easy to use for comparisons between existing and potential healthcare systems.

# Comparing Healthcare Systems

# United States

The U.S. is the only developed country that does not employ universal coverage [Torrens 1978]. Instead, healthcare is different for every person and consists of a loose association of coverage plans provided by private sources, the government, and employers. The average middle-class person is usually covered by some sort of insurance and employs a private physician in sole charge of managing the individual's healthcare. Physicians exercise substantial influence on the U.S. system because of their position in healthcare administration, as well as general tendencies of policy to favor private medical practice. This influence leads to the question of whether physicians or the federal government should control healthcare. More pressing issues are also troubling the U.S., as the increasing health budget is yielding little advance in the overall quality of care [Torrens 1978].

To test the effectiveness of the meta-metrics, we compare several countries for which there is a clear ranking of healthcare already established. Based on "financial fairness," the WHO ranked the healthcare systems of Sweden, the U.S., and Nigeria as 12th, 54th, and 180th in the world [2000b].

Meta-metric values, calculated from the metrics and processes described earlier, are given in Table 1.

Table 1.
+Meta-metrics. + +
| Meta-metric | U.S. | Sweden | Nigeria |
|---|---|---|---|
| Prevention | .68 | .79 | .54 |
| Accessibility | .61 | .80 | .23 |
| Treatment | .52 | .38 | .37 |
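The meta-metric procedure described in the **Meta-Metrics** section (worldwide min/max scaling, inversion of undesirable metrics, then averaging) can be sketched in a few lines of Python; the metric names and values below are invented for illustration and are not WHO data.

```python
def scale(value, lo, hi, desirable=True):
    """Linearly map a raw metric value into [0, 1]; invert if the metric is undesirable."""
    s = (value - lo) / (hi - lo)
    return s if desirable else 1.0 - s

def meta_metric(country_data, world_ranges, desirability):
    """Average the scaled metrics for one healthcare component."""
    scaled = [scale(country_data[m], *world_ranges[m], desirability[m])
              for m in country_data]
    return sum(scaled) / len(scaled)

# Invented example: two prevention metrics for a hypothetical country.
ranges = {"immunization": (20.0, 99.0), "obesity": (2.0, 40.0)}   # worldwide (min, max)
desirable = {"immunization": True, "obesity": False}              # obesity is undesirable
country = {"immunization": 90.0, "obesity": 30.0}
print(round(meta_metric(country, ranges, desirable), 3))          # ≈ 0.575
```

A country that attains the worldwide best value on every metric of a component scores exactly 1 on that component's meta-metric.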
# Sweden

Sweden operates a nationalized healthcare system to which every citizen contributes in proportion to income. As a result, the OECD asserts that citizens enjoy roughly equal benefits, regardless of economic status [Tengstam 1975]. The system is heavily regulated and is run by the National Board of Health and Welfare, which is responsible for supervising medical care in both the public and private sectors. In addition, this Board is in charge of certifying physicians, nurses, and midwives, and it also supervises and reviews the decisions of the County Councils, where most of the responsibility for funding and maintaining healthcare falls [Tengstam 1975]. Anderson [1972] suggests that in many ways the Swedish system is superior to that of the United States because of Sweden's longstanding commitment to, and enforcement of, universal healthcare.

Sweden's average world ranking for healthcare trumps that of the U.S. in all areas but treatment. However, the treatment meta-metric is calculated using a weak tuberculosis metric.

# Nigeria

Nigeria operates a three-tiered health system composed of a national healthcare system financed by all citizens; government health insurance that is provided for government employees; and firms that contract with private healthcare providers. However, a significant number of Nigerians do not enjoy all the benefits of this system. Like those of many other African countries, the roots of the Nigerian healthcare system can be traced back to the British colonial era. During this period, the health system was equipped to provide care for only a small portion of the population; the system was never adequately adapted to handle the region's growing population [World Bank 1994]. An additional hindrance in the system is an immense disparity of wealth between upper- and lower-class citizens [World Bank 1994]. Examples of failures in the health system abound.
In one case, a 1985 outbreak of yellow fever devastated a small town (killing more than 1,000 people) despite the fact that a vaccine had been available since 1930 [Vogel 1993].

Compared to the U.S. and Sweden, Nigeria's meta-metrics place it at the bottom.

# Strengths and Limitations of Meta-Metrics

Our meta-metrics demonstrate the following advantages:

Flexibility. Additional metrics can be easily incorporated into the meta-metrics.

Relevance. Meta-metrics convey the average performance of a country's healthcare system relative to the rest of the world.

Accuracy. The WHO and our meta-metrics both rank the U.S., Sweden, and Nigeria in the same relative order.

These meta-metrics also demonstrate the following disadvantages:

Data are not contemporaneous. Data sets reported by the WHO can often be several years older than other data sets.

Demanding. Data are required from a large number of countries in order to determine the worldwide maximum and minimum values for metrics.

Simplicity. It may be wiser to weight the metrics in the calculation of meta-metrics instead of taking just their mean.

# A Model for a Healthcare System

# Assumptions

We assume that for a given nation:

Wealth is not distributed equally. This is especially true for the U.S. [Wolff 2004], which is the focus of most of our attention.

WHO data for that nation are recent and reliable. This assumption is not entirely valid, since some statistics from the WHO that we use date back to 2000. However, this should be less of a problem as data become more widely and frequently reported.

The healthcare system operates in a consistent way. This is not at all true, but for the sake of simplicity we must assume that the system is predictable.

Meta-metrics accurately reflect the performance of the health system. Our results for the U.S., Sweden, and Nigeria support this assumption for all but the treatment meta-metric.

Certain meta-metrics scale with income.
Measures taken by a healthcare system to prevent illness affect all people equally [Torrens 1978]. To account for economic factors, we assume that accessibility and treatment scale with wealth. That is, an individual with more money has an easier time finding care and paying for treatment. This is a gross oversimplification, but it allows the model to convey more information.

# Definition of the Model

Let $A$, $T$, and $P$ be the country's accessibility, treatment, and prevention meta-metrics. We treat them as probabilities of certain events occurring within the healthcare system:

$P$: the probability that an individual will be in good health;

$A$: the probability that an individual will have access to affordable healthcare, should they need it; and

$T$: the probability that a sick individual will be correctly diagnosed and properly treated.

We model a healthcare system as the stochastic process pictured in Figure 1, and we repeatedly apply this process to track the flow of healthy individuals through the system.

![](images/3e84aa0dad56428f467d53a106e7eec5a436c931b2153f1e115652aed988c9d2.jpg)
Figure 1. Model of the healthcare process, with four states and probabilities of transitions among them.

If at some time $n$ we have a population of $H_{n}$ healthy individuals, then we expect $H_{n}(1 - P)$ of those people to fall into poor health in the next time interval. Of those who fall ill, a proportion $AT$ will find access to treatment and become healthy. Hence, we predict the number of healthy individuals after $n + 1$ units of time to be

$$
H_{n + 1} = H_{n} - H_{n}(1 - P) + H_{n}(1 - P) A T.
$$

For an initial healthy population $H_0$, this simplifies to

$$
H_{n} = H_{0} (P + A T - A P T)^{n}.
\tag{1}
$$

# Retention of the Model

To quantify the efficiency of a healthcare system, we consider how many iterations $n$ of the healthcare process are required before $H_{n}$ falls below some threshold $H_{\mathrm{min}}$. Hence, we substitute $H_{n} = H_{\mathrm{min}}$ into (1) and solve for $n$ to find the retention $R$:

$$
R = \frac{\ln H_{\min} - \ln H_{0}}{\ln (P + A T - A P T)}. \tag{2}
$$

The retention $R$ measures how long the modeled system can operate, starting from a healthy population, before an overwhelming majority of the population is no longer healthy. A larger retention value indicates a more effective system. For all calculations of $R$, we take $H_0$ and $H_{\mathrm{min}}$ to be 100 and 1.

# Economic Weighting

One of the primary discriminatory factors of healthcare in the U.S. is economic status; we would like to take this into consideration. To do so, we consider three economic classes:

- Group 1: Those who control the lowest quartile of wealth.
- Group 2: Those in the middle quartiles for wealth.
- Group 3: Those in the upper quartile for wealth.

We adjust the parameters $A$ and $T$ based on the wealth of a group.

Since our meta-metrics describe the average performance of the system, our model—without the economic weightings presented in this section—describes the effect of the system on the "average person," a person of median wealth (hence in Group 2). Analogously, we treat the median person in the lower quartile as a representative of Group 1 and the median person in the upper quartile as representative of Group 3.

We adjust the probabilities $A$ and $T$ for Group 1 by a factor of $C_*$, the ratio of the median wealth of an individual in the lowest quartile to that of the average person.

Since wealth in the U.S. is so unevenly distributed, comparing the median individual in the upper quartile to the average person would be misleading.
Instead, we adjust the probabilities $A$ and $T$ for Group 3 by a factor of $C^*$, which now represents the wealth of the median individual in the upper quartile with respect to the richest person in Group 3. This gives us a weight based on how the wealth is distributed in the upper quartile.

Simply put, these factors give us a sense of the economic disparity between the groups; quality of accessibility and treatment scales with wealth, and $C^*$ and $C_*$ are appropriate scaling factors. We calculate their values in the Appendix.

Let $A_{i}$ denote the accessibility of the healthcare system for an individual in Group $i$, and let $T_{i}$ represent the probability of successful treatment for an individual in Group $i$. We weight each of these probabilities as follows:

$$
A_{i} = \left\{ \begin{array}{ll} A C_{*}, & \text{if } i = 1; \\ A, & \text{if } i = 2; \\ A + (1 - A) C^{*}, & \text{if } i = 3. \end{array} \right.
$$

We assume that treatment scales in the same way, so we have:

$$
T_{i} = \left\{ \begin{array}{ll} T C_{*}, & \text{if } i = 1; \\ T, & \text{if } i = 2; \\ T + (1 - T) C^{*}, & \text{if } i = 3. \end{array} \right.
$$

The rationale for these weightings is that $A$ and $T$ increase as wealth increases.

By considering healthcare with respect to the actual distribution of wealth, we add a great deal of richness to the model. Adjusted meta-metrics are given in Table 2.

Table 2. Adjusted probabilities for economic classes.
| Class | Upper | Middle | Lower |
|---|---|---|---|
| Accessibility $A$ | .87 | .61 | .11 |
| Treatment $T$ | .82 | .52 | .09 |
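As a sketch of this weighting scheme, the short Python below rebuilds Table 2 from the published U.S. values $A = .61$ and $T = .52$ (Table 1) and the Appendix factors $C_* \approx .17$ and $C^* \approx .63$. Small discrepancies from the printed table are expected, since the authors presumably worked with unrounded values.

```python
C_LOWER = 0.17   # C_*: median lower-quartile wealth relative to the average person
C_UPPER = 0.63   # C^*: median upper-quartile wealth relative to the richest in Group 3

def adjust(p, group):
    """Weight a probability (A or T) for economic Group 1 (lower), 2 (middle), 3 (upper)."""
    if group == 1:
        return p * C_LOWER
    if group == 2:
        return p
    return p + (1.0 - p) * C_UPPER

A, T = 0.61, 0.52  # U.S. accessibility and treatment meta-metrics (Table 1)
for name, p in [("Accessibility", A), ("Treatment", T)]:
    # Print in Table 2's column order: upper, middle, lower.
    print(name, [round(adjust(p, g), 2) for g in (3, 2, 1)])
```

Note that the Group 3 adjustment moves the probability a fixed fraction of the way toward 1, so it can never exceed 1, while the Group 1 adjustment simply shrinks the probability.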
# Analysis of U.S. Healthcare

Applying the data from Table 1 to (2) gives the retention values shown in Table 3. These values show that the model preserves the earlier rankings.

Table 3. Retention values.
| Country | Retention $R$ |
|---|---|
| Sweden | 29.6 |
| United States | 18.9 |
| Nigeria | 8.5 |
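Equation (2) lets us check Table 3 directly from the Table 1 meta-metrics; the sketch below does so, and also perturbs each U.S. meta-metric by $+0.05$ to anticipate the sensitivity analysis that follows. Because it uses the rounded published values, the results match Table 3 only approximately.

```python
import math

H0, HMIN = 100.0, 1.0

def retention(P, A, T):
    """Iterations of the healthcare process before H_n drops below H_min (eq. 2)."""
    q = P + A * T - A * P * T   # fraction of the population remaining healthy per step
    return (math.log(HMIN) - math.log(H0)) / math.log(q)

countries = {"Sweden": (0.79, 0.80, 0.38),          # (P, A, T) from Table 1
             "United States": (0.68, 0.61, 0.52),
             "Nigeria": (0.54, 0.23, 0.37)}
for name, (P, A, T) in countries.items():
    print(f"{name}: R = {retention(P, A, T):.1f}")

# Sensitivity: perturb each U.S. meta-metric by +0.05 and compare the gains in R.
P, A, T = countries["United States"]
base = retention(P, A, T)
gains = {"P": retention(P + .05, A, T) - base,
         "A": retention(P, A + .05, T) - base,
         "T": retention(P, A, T + .05) - base}
print(gains)  # the prevention perturbation yields the largest gain
```

The perturbation check reflects the structure of (2): the per-step survival probability $q = P + AT - APT$ grows by $1 - AT$ per unit of $P$, but only by $T(1-P)$ or $A(1-P)$ per unit of $A$ or $T$.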
Our interests lie in using this model to make predictive judgments about changes to the U.S. system. To identify the areas in which the retention of the U.S. system is most susceptible to change, we vary one meta-metric while holding the others constant. As seen in Figure 2, the slope is much larger for prevention; retention responds more to changes in prevention than in other components. Interestingly, although the U.S. and Sweden have different values for their prevention meta-metrics, they respond essentially identically to changes in prevention.

![](images/d20b2adc6d00a82d311e0da1383cabc9c121b9319fa8701049212e4d3744f5e6.jpg)
Figure 2. Variations in meta-metrics by nation. In each case, the remaining two meta-metrics are held constant at the values given in Table 1. Current values are depicted as dots. Sweden is represented by a dashed line, the U.S. by a bold line, and Nigeria by a thin line. Although their prevention meta-metrics differ, the U.S. and Sweden effectively share the same prevention curve.

![](images/749e21dfeb79b818ae4589602396a39a975877a1a651a4de008780b350d09fd1.jpg)

![](images/5fdebe68b79eca415eb7a6722d0890fd6eeb4b3ea71c0436e89c4364bb84dc70.jpg)

By considering the impact of economic status on accessibility and treatment, we can gain even more insight. Figure 3 shows how retention reacts to changes in meta-metrics by economic class: the economic levels in the U.S. react very differently to changes in meta-metrics. This result agrees with our hypothesis that a person's economic status plays a large role in determining the quality of healthcare that the individual receives.

![](images/15fa7c32cdf3e5049a325a0ebb431fad046c3c33a07cc1859b094d1bb9fd969c.jpg)
Figure 3. Variations in meta-metrics by U.S. economic class. In each case, the remaining two meta-metrics are held constant at the values given in Table 2; for $P$, we use the value given for the U.S. in Table 1. Adjusted meta-metric values are depicted as dots.
The middle class is represented by a dashed line, the lower class by a bold line, and the upper class by a thin line.

![](images/fc6afff8dd49ec557b31c58710707915b76bd76c7a7abcca30bc36b5dd67b42f.jpg)

![](images/9f4f666f679a9cf5432aea93961b6f4f392029c383497886ca3f05697a5ae2c9.jpg)

# Strengths

Our model exhibits the following positive characteristics:

Extendability. Our model is a comprehensive assessment of the interaction of the healthcare system with the population. The advantage of using a stochastic process to model this interaction is that we can always extend it to be more complex. For example, if we gained access to reliable data on readmission rates after failed treatments, we could add this pathway into our model and obtain an even more accurate simulation of the healthcare process.

Predictive power. Our model is capable of accurately predicting areas in which national healthcare is lacking relative to other countries, and it can be used to provide insight into the most effective way to change its standing.

Agreement with reality. As discussed later, the results from our model correspond to the current state of U.S. healthcare. Further testing could strengthen this claim.

Economic associations. A large problem with U.S. healthcare is that it varies greatly among individuals, especially by wealth. By incorporating the relationship between wealth and both availability and treatment into our model, we can more efficiently identify problem areas.

# Limitations

Our model also shows the following drawbacks:

Possible failure. It is possible for the model to fail if a country dominates all metrics used in calculating the prevention meta-metric; the model would predict infinite retention. If this occurs, then additional metrics should be considered in the calculation of the prevention meta-metric.

Oversimplification. Our probabilistic model is rather simple, although it produces surprisingly relevant results.

Unconfirmed.
Meta-metrics have only been verified to agree with past rankings for a selection of three countries. The accuracy of the model depends directly on the effectiveness of the meta-metrics. + +Limited. Our definition for healthcare includes mental health, although our data primarily correspond to physical problems. + +Demanding. The model depends on meta-metrics, which in turn require large amounts of worldwide data. + +# Major Suggestions for the U.S. + +Our model predicts that the quickest way to improve the world standing of the U.S. healthcare system is to enhance preventive measures. Lack of spending on the prevention component of the system partially explains the current dilemmas facing the U.S.—namely, the lack of response from increased spending on healthcare [O'Neill and O'Neill 2007] and the growing obesity problem [Wang and Beydoun 2007]. Additionally, it is likely that these inadequate preventive measures are causing more and more Americans to fall into poor health unnecessarily, thereby placing more strain on the system. + +We therefore suggest reallocation of funding to place more emphasis on promoting health and preventing illness. Figure 2 indicates that these changes could quickly increase the quality of U.S. healthcare to be more on a par with Sweden's system. + +Additionally, a common criticism of U.S. healthcare is the large inequities in affordability and quality of treatment between the upper, lower, and middle classes [Wolff 2004]. When simple economic factors are combined with our model, Figure 3 shows that the lower and middle classes experience little to no sensitivity to changes in the system's accessibility or treatment components. At the same time, however, the upper class gains significantly more retention from increases in both of these meta-metrics. Hence, the model suggests that money spent on improving the accessibility component of the system has had a minimal impact on a majority of the population. + +Thus, additional reform of U.S. 
healthcare is needed to make the system more accessible to the lower and middle classes. Sweden has had great success with its highly-regulated universal healthcare system. Therefore, we also suggest that the U.S. grant more control of healthcare to the government so it can enact and enforce stricter regulations on the preventive care provided by private practitioners. + +# Conclusion + +We have researched the motivations and goals of healthcare. Based on quality, relevance and availability, we selected a set of health outcomes that we grouped into metrics, and further organized into logical groups of metametrics. Applying these meta-metrics to compare the healthcare systems of the U.S., Nigeria, and Sweden confirmed their validity when considered alongside previous work. + +We then used those meta-metrics to construct a stochastic model to generalize healthcare and defined the concept of retention to compare different health systems. Furthermore, we incorporated economic factors into our model in order to distinguish between different income classes. By analyzing the influence of each metric on retention, we identified problems in the U.S. healthcare system. In light of these problems, the U.S. system should be restructured to improve promotion of health, and government control should be increased in order to provide more-accessible healthcare for the lower and middle classes. + +Future work should seek additional health outcome statistics to increase the accuracy of our metrics, especially for the treatment component of healthcare. Additionally, meta-metric values could be computed for additional countries to further investigate their potential for describing the quality of healthcare. + +# Appendix + +We compute the economic weights $C_*$ and $C^*$ by studying the distribution of wealth within the U.S. Economists often study the distribution of wealth in a country by constructing a Lorenz curve, which is the approach we take here. 
# The Lorenz Curve

The Lorenz curve was described in 1905 by Max O. Lorenz to display the distribution of wealth or assets in a society. A Lorenz curve is obtained by plotting on the $x$-axis the percentage of people and on the $y$-axis the corresponding percentage of wealth. Thus, a point $(x, y)$ on the Lorenz curve indicates the percentage $y$ of total wealth that the bottom $x\%$ of people hold. Figure A1 shows the approximate Lorenz curve for wealth in the U.S. in 2001. The thin line with slope 1 is the line of perfect equality, which corresponds to an equal distribution of wealth among individuals in a society.

![](images/2a097a0a564d0a8b47541978c19887289653380c0d455fb587cafa8f467c50da.jpg)
Figure A1. A Lorenz curve for wealth in the U.S. (bold) approximated using data from 2001, along with the line of perfect equality (thin).

A Lorenz curve has properties useful in approximating it:

- It begins at $(0,0)$ and ends at $(1,1)$,
- cannot rise above the line of perfect equality,
- is increasing, and
- is convex.

# The Lorenz Curve for the U.S.

We approximate the Lorenz curve for the U.S. using data from 2001, displayed in Table A1.

Table A1. Financial wealth distribution by household in the U.S. in 2001, according to Wolff [2004, 30].
|          | Top 1% | Next 19% | Bottom 80% |
|----------|--------|----------|------------|
| % wealth | 39.7%  | 51.5%    | 8.8%       |
We also know that $0\%$ of the population have $0\%$ of the wealth, and the collective population has all the wealth. This gives us the boundary conditions $(0, 0)$ and $(1, 1)$.

We approximate the Lorenz curve using a Bézier spline fit algorithm because of its ability to generate a smooth curve from relatively few data points. The Bézier fit also guarantees that the curve will be convex, as we would expect a Lorenz curve to be. The disadvantage is that the curve does not pass through all the data points.

# Computation of $C_*$ and $C^*$

We compute the weights $C_*$ and $C^*$ using the Lorenz curve, which we denote by $L(x)$. We define $C_*$ to be the ratio of the cumulative wealth of the median person in the lowest quartile to the cumulative wealth of the average person. Thus, $C_*$ is given by

$$
C_* = \frac{\int_0^{.125} L(x)\,dx}{\int_0^{.50} L(x)\,dx} \approx .17.
$$

Similarly, we define $C^*$ to be the ratio of the cumulative wealth of the median person of the upper quartile to total wealth. So $C^*$ is given by

$$
C^* = \frac{\int_0^{.875} L(x)\,dx}{\int_0^{1} L(x)\,dx} \approx .63.
$$

# The Gini Index of $L(x)$

The Gini index is a numerical measure of the distribution of wealth in a country, defined as

$$
G = 2 \int_0^1 \left[ x - L(x) \right] dx = 1 - 2 \int_0^1 L(x)\,dx,
$$

where $L(x)$ is a Lorenz curve. Thus, the Gini index is 1 minus twice the area below the Lorenz curve. Perfect equality in wealth corresponds to $G = 0$, perfect inequality to $G = 1$. Numerical integration of our function $L(x)$ gives $\int_0^1 L(x)\,dx \approx .21$ and hence $G_{\mathrm{USA},2004} \approx .579$.

# Limitations of our Approximation

- Although we use data from 2001, the distribution of wealth does not change dramatically from year to year.
- We use only five data points, including the boundary conditions $(0,0)$ and $(1,1)$.
+- The Bézier curve passes through the boundary points but not through the data points. + +The Gini index for financial wealth of households in the U.S. in 2001 was .888 [Wolff 2004, 30], while our approximation is .579. We used scant data; moreover, the bottom $40\%$ of households combined have negative financial wealth ("net worth minus net equity in owner-occupied housing" [Wolff 2004, 5]). Davies et al. [2008] have different data (Table A2) for household wealth, which they take more conventionally to include "non-financial assets [presumably including home equity], financial assets and liabilities" [2008, 2]. + +Table A2. +Wealth distribution for families in the U.S. in 2001, according to Davies et al. [2008, 4, Table 1]. + +
|          | Top 1% | Top 5% | Top 10% | Bottom 50% |
|----------|--------|--------|---------|------------|
| % wealth | 32.7%  | 57.7%  | 69.8%   | 2.8%       |
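Returning to the appendix computation: the integrals defining $C_*$, $C^*$, and $G$ are straightforward to check numerically once a Lorenz curve is in hand. The sketch below uses the trapezoid rule; the curve $L(x) = x^3$ is only a hypothetical convex stand-in, not the paper's Bézier fit.

```python
import numpy as np

def lorenz_stats(L, n=10_001):
    """Compute C_*, C^*, and the Gini index G for a Lorenz curve L
    on [0, 1] using the trapezoid rule on a uniform grid."""
    x = np.linspace(0.0, 1.0, n)
    y = L(x)

    def integral(upper):
        # trapezoid rule for the integral of L from 0 to `upper`
        m = x <= upper
        xs, ys = x[m], y[m]
        return float(np.sum((ys[1:] + ys[:-1]) * np.diff(xs)) / 2.0)

    c_lower = integral(0.125) / integral(0.50)   # C_*
    c_upper = integral(0.875) / integral(1.0)    # C^*
    gini = 1.0 - 2.0 * integral(1.0)             # G = 1 - 2 * area under L
    return c_lower, c_upper, gini

# Hypothetical convex Lorenz curve (NOT the Bézier fit from the paper).
c_lo, c_hi, g = lorenz_stats(lambda x: x**3)
```

For $L(x) = x^3$ we have $\int_0^1 L(x)\,dx = 1/4$, so $G = 1/2$; the same routine applied to a fitted curve would reproduce the appendix values.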
# Editor's Note: Calculation of the Gini Index from Available Data

The U.S. Census Bureau publishes wealth and income data by quintiles. The income data are published separately for families and for households [2005a; 2005b], while the wealth data are published for households only [2008a]. A household includes related family members plus any unrelated people who share the housing unit. The Bureau also publishes Gini indexes for income [2008b; 2007b; 2007a] calculated from the full Lorenz curves, together with other measures of inequality [n.d.].

The Gini index cannot be approximated from quintile data by using Simpson's rule for an integral, since Simpson's rule requires an even number of intervals. Using the trapezoid rule would underestimate the Gini coefficient because of the convexity of the Lorenz curve.

Gerber [2007] gives a simple method suited to quintile data. For U.S. family income in 2000, the method gives a Gini index of .422, while the index given by the Census Bureau (based on the full Lorenz curve) is .433.

Further details about the Lorenz curve and the Gini index are given in a series of UMAP Modules by Schey [1979].

# Editor's References

Gerber, Leon. 2007. A quintile rule for the Gini coefficient. Mathematics Magazine 80 (2) (April 2007): 133-135. Note: The URL for reference 3 of this paper should read http://www.census.gov/hhes/www/income/histinc/f02.html.
Schey, Harry M. 1979. The distribution of resources. UMAP Modules in Undergraduate Mathematics and Its Applications: Modules 60-62. Newton, MA: COMAP.
U.S. Census Bureau. 2005a. Historical income tables—Families. Table F-2. Share of aggregate income received by each fifth and top 5 perc [sic] of households (all races): 1967 to 2001. http://www.census.gov/hhes/www/income/histinc/f02.html.
______. 2005b. Historical income tables—Families. Table F-2.
Share of aggregate income received by each fifth and top 5 percent of families (all races): 1947 to 2001. http://www.census.gov/hhes/www/income/histinc/f02.html.
______. 2007a. Historical income inequality tables. http://www.census.gov/hhes/www/income/histinc/ineqtoc.html.
______. 2007b. Historical income tables—Households. Table H-4. Gini ratios for households, by race and Hispanic origin of householder: 1967 to 2006. http://www.census.gov/hhes/www/income/histinc/h04.html.
______. 2008a. Net worth and the assets of households: 2002. http://www.census.gov/prod/2008pubs/p70-115.pdf.
______. 2008b. Historical income tables—Families. Table F-4. Gini ratios for families, by race and Hispanic origin of householder: 1947 to 2006. http://www.census.gov/hhes/www/income/histinc/f04.html.
______. n.d. Selected measures of household income dispersion: 1967 to 2005. http://www.census.gov/hhes/www/income/histinc/p60no231 tablea3.pdf.

# References

Anderson, Odin. 1972. *Health Care: Can There Be Equity?* Chicago, IL: Wiley-Interscience.
Davies, James B., Susanna Sandström, Anthony Shorrocks, and Edward N. Wolff. 2008. The world distribution of household wealth. Discussion Paper No. 2008/03. United Nations University UNU-WIDER World Institute for Development Economics Research. http://www.wider.unu.edu/publications/working-papers/discussion-papers/2008/en_GB/dp2008-03/_files/78918010772127840/default/dp2008-03.pdf.
Earth Trends. n.d. Population, health and human well-being searchable database. http://earthtrends.wri.org/.
Feldstein, Martin. 2006. Balancing the goals of health care provision. National Bureau of Economic Research Working Paper Series.
Hitti, Miranda. 2008. Poll: U.S. split on socialized medicine. WebMD (February 2008).
Kelley, Edward, and Jeremy Hurst. 2006. Health care quality indicators project conceptual framework paper. OECD Health Technical Papers 23 (March 2006).
Kher, Unmesh. 2006. Outsourcing your heart. Time (May 2006).
O'Neill, June E., and Dave M.
O'Neill. 2007. Health status, health care and inequality: Canada vs. the U.S. National Bureau of Economic Research Working Paper Series (September 2007).
Organization for Economic Co-operation and Development (OECD). 2002. Measuring Up: Improving Health System Performance in OECD Countries. Paris, France: OECD Publications Service.
______. 2004. Selecting indicators for the quality of health promotion, prevention and primary care at the health systems level in OECD countries. OECD Health Technical Papers 16: 4 (December 2004), pp. 9-10, 13-41.
Peto, Richard, and Alan Lopez. 2000. The future worldwide health effects of current smoking patterns. http://www.ctsu.ox.ac.uk/pressreleases/2000-08-02/the-future-worldwide-health-effects-of-current-smoking-patterns.
Tengstam, Anders. 1975. Patterns of health care and education in Sweden. Centre for Educational Research and Innovation Papers, pp. 1-40.
Torrens, Paul R. 1978. The American Health Care System: Issues and Problems. St. Louis, MO: C.V. Mosby Company.
U.S. Congress. 1993. International Health Statistics: What the Numbers Mean for the United States—Background Paper. Washington, DC: U.S. Government Printing Office.
U.S. Senate. 2005. Saving dollars, saving lives: The importance of prevention in curing medicine. Hearing before the Special Committee on Aging (June 2005).
Vogel, Ronald J. 1993. Financing Health Care in Sub-Saharan Africa. Westport, CT: Greenwood Press.
Wang, Youfa, and May A. Beydoun. 2007. The obesity epidemic in the United States—gender, age, socioeconomic, racial/ethnic, and geographic characteristics: A systematic review and meta-regression analysis. Epidemiologic Reviews 29 (1) (January 2007): 6-28. http://epirev.oxfordjournals.org/cgi/reprint/mxm007v1.
Wolff, Edward N. 2004. Changes in household wealth in the 1980s and 1990s in the U.S. Working Paper No. 407. Levy Economics Institute of Bard College. http://www.levy.org/pubs/wp407.pdf.
World Bank. 1994. Better Health in Africa.
Washington, DC: World Bank Staff.
World Health Organization (WHO). 2000a. Annex Table 7: Fairness of financial contribution to health systems in all member states. The World Health Report.
______. 2000b. How well do health systems perform? World Health Report, pp. 21-45.
______. 2006. National health accounts. WHO Statistical Information System.
______. 2008. The WHO agenda. http://www.who.int/about/agenda/en/index.html.

![](images/e43a3dfd6422722031086b3f02da307bceb1ddd1b67f8e03ecef9e9047cdcc25.jpg)

# The Most Expensive is Not the Best

Hongxing Hao
Xiangrong Zeng
Boliang Sun

National University of Defense Technology
Changsha, China

Advisor: Ziyang Mao

# Abstract

Motivated to evaluate healthcare systems more accurately, we analyze existing evaluation methods. Most methods focus mainly on outcomes, and their metrics often ignore internal characteristics of the healthcare systems.

We devise two methods: an improved World Health Organization (WHO) method and a comprehensive evaluation method.

The improved WHO method uses the same metrics as the WHO method, which are determined by the outcomes of the healthcare system. Our improvement is to use a grey comprehensive evaluation and the principle of minimum loss of information to combine the metrics, rather than simply combining them linearly.

In our comprehensive evaluation method, we define five new metrics that concern both outcomes and characteristics of the healthcare system itself, including the effect of the government and the basic situation of a country. Then we use the equal-interval method to get a final score. Compared with other methods, this one does a better job in distinguishing countries and in sensitivity.

After comparing four countries that represent the four main modes of healthcare systems, we conclude that the most important reason why the highest cost can't make the U.S. the best is unfairness.
Afterward, we use a neural network algorithm to predict what will happen to the U.S. if some values of the metrics change. We conclude that the U.S. can get the greatest benefit by improving fairness.

We finally consider a policy change, a "medical insurance voucher," as a method to increase insurance coverage and reduce unfairness.

# Introduction

Many countries have recently introduced reforms in the health sector with the explicit aim of improving performance [Mathers et al. 2000; 2001]. There is extensive literature on health-sector reform, and recent debates have emerged on how best to measure performance so that the impact of reforms can be assessed [Goldstein 1996]. Measurement of performance requires an explicit framework defining the goals of a healthcare system and a suitable method to make a compelling evaluation.

So our goal is pretty clear:

- Devise metrics to evaluate the effectiveness of a healthcare system.
- Devise a method to combine the metrics.
- Compare several representative countries.
- Restructure the healthcare system of the U.S. and build predictive models to test the changes.

Our approach is:

- Analyze factors that can affect the performance of a healthcare system.
- Search the literature on existing evaluation methods and find their shortcomings.
- Develop a comprehensive evaluation method that asks only for existing data or data easy to measure and collect.
- Collect experimental data that can be used in our method.
- Compare current methods and determine their characteristics.
- Do a sensitivity analysis of variations of our models.
- Compare healthcare systems of several representative countries.
- Restructure the healthcare system of the U.S. and build a model based on neural networks to test changes.
- Do further discussion based on our work.
+ +# Four Representative Healthcare Systems + +The healthcare system, as an important part of the social security system, is essential to promote the stability of society, and it reflects social justice. Due to the different histories, cultures, and status of human rights protection, healthcare systems vary from country to country. + +There are four representative healthcare systems [Ding 2005]: + +- National insurance. The main countries using this system are the UK, Eastern Europe, and Russia. The government dominates, healthcare is free, with full medical treatment and complete coverage of the population. But it doesn't have high efficiency or make use of the market, and it is a heavy burden to the government. +- Commercial insurance. The U.S. is the main country using this system, which makes the market the guideline of the healthcare system. Cost is high, and a large number of people fail to pay. +- Social insurance. This system features mandatory coverage and fairness, as in Japan, Germany, and Canada. It has high cost and slow service. +- Savings insurance. Singapore is the representative country. The main disadvantage is a low service efficiency. Costs rise rapidly, and it cannot achieve full coverage. + +# Analysis of the WHO Estimation Method + +The WHO's methods focus on the outcomes of a healthcare system without considering any characteristics of the system itself. + +# Strengths + +The metrics that the WHO uses to evaluate a healthcare system aim to measure goal attainment, and they include most of the outcomes that a healthcare system should produce. + +# Weaknesses + +- The weights placed on each dimension are somewhat arbitrary. +- The approach heavily penalizes countries with epidemic disease unrelated to the healthcare system. +- This approach does not look at how the system is organized and managed. +- The WHO 2000 rankings do not look at access, utilization, quality, or cost-effectiveness. + +In addition, according to Almeida et al. 
[2001, 1693]:

- "The measure of health inequalities does not reflect concerns about equity."
- "Important methodological limitations and controversies are not acknowledged."
- "The multicomponent indices are problematic conceptually and methodologically; they are not useful to guide policy, in part because of the opacity of their component measures."
- "Primary health care is declared a failure without examining adequate evidence, apparently based on the authors' ideological position."
- "These methodological issues are not only matters of technical and scientific concern, but are profoundly political and likely to have major social consequences."

# Improved WHO Method

In the WHO methods, the weights in the construction of the composite index are used without considering uncertainty in the values of the different components.

We use a grey comprehensive evaluation model to improve the WHO method and make the evaluation more credible.

# Methodology

Suppose that $c_{ik}$, for $k = 1, \ldots, m$ and $i = 1, \ldots, n$, are the raw data of metric $k$ in country $i$, giving the $n \times m$ matrix $C = (c_{ik})$. We suppose that $c_k^*$ is the best value of metric $k$ among all countries. We take $C^* = (c_k^*) = (c_1^*, \ldots, c_m^*)$, a best possible situation, as a reference and compare the value of metric $k$ in country $i$ to this ideal via

$$
\xi_i(k) = \frac{\min_i |c_k^* - c_{ik}| + \rho \max_i |c_k^* - c_{ik}|}{|c_k^* - c_{ik}| + \rho \max_i |c_k^* - c_{ik}|},
$$

where $\rho \in (0,1)$ is a differentiation coefficient that we generally can take to be 0.5. Using $\xi_i(k)$, we get the evaluation matrix $E = (\xi_i(k))_{n \times m}$.

Suppose $W = (w_1, \ldots, w_m)$ is a weight-distribution vector for the $m$ metrics, with $w_k$ the weight of metric $k$ and $\sum w_k = 1$.
Based on the discussion above, we get the grey comprehensive evaluation model

$$
R = W \cdot E^T = (r_1, \ldots, r_n),
$$

where $E^T$ is the transpose of $E$ and $r_i = \sum_{k=1}^{m} w_k \xi_i(k)$ is the relating degree. The vector $R = (r_1, \ldots, r_n)$ contains the final scores of the countries' healthcare systems. The larger $r_i$, the better the system.

# How to Determine the Weights

We want to determine the weight vector in a credible way. We use the principle of minimum loss of information [Wang et al. 2000]. Because our metrics $u_j$ evaluate information from different aspects, combining all the metrics in a linear way would lose a lot of information, according to entropy theory in informatics.

We should maximize conservation of information. So we choose the most classical method: We use variance to represent information; the larger the variance, the more information.

In the final score $d = w^T u$, we should choose the weight $w$ that maximizes the variance of $d$,

$$
D(d) = w^T D(u) w,
$$

where $D(u)$ is the covariance matrix of $u$, subject to the constraint $w^T w = 1$.

We use the method of Lagrange multipliers. Suppose that

$$
\varphi(w, \lambda) = w^T D(u) w - \lambda (w^T w - 1).
$$

Then

$$
\frac{\partial \varphi}{\partial w} = 2 D(u) w - 2 \lambda w = 0,
$$

$$
\frac{\partial \varphi}{\partial \lambda} = w^T w - 1 = 0,
$$

which reduces to

$$
D(u) w = \lambda w, \qquad w^T w = 1.
$$

So $\lambda$ is an eigenvalue of $D(u)$ with eigenvector $w$. Since $D(d) = w^T D(u) w = \lambda w^T w = \lambda$, to maximize $D(d)$ we should take $\lambda$ to be the maximum eigenvalue of $D(u)$.
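Both pieces, the relating coefficients $\xi_i(k)$ and the eigenvector weights, fit in a short numerical sketch. It treats larger metric values as better (so the ideal $C^*$ is the columnwise maximum), uses the sample covariance of the metrics in place of $D(u)$, fixes the eigenvector's arbitrary sign by taking absolute values, and rescales so $\sum w_k = 1$; these are all simplifying assumptions, and the data are random stand-ins.

```python
import numpy as np

def grey_evaluation(C, rho=0.5):
    """Grey comprehensive evaluation of n countries on m metrics.

    C is an (n, m) array of raw metric values; larger is assumed
    better, so the ideal reference C* is the columnwise maximum.
    Returns the score vector R and the weight vector w."""
    d = np.abs(C.max(axis=0) - C)                   # |c_k^* - c_ik|
    d_min, d_max = d.min(axis=0), d.max(axis=0)
    xi = (d_min + rho * d_max) / (d + rho * d_max)  # evaluation matrix E

    # Weights: eigenvector of the sample covariance matrix of the
    # metrics belonging to its largest eigenvalue.
    cov = np.cov(C, rowvar=False, bias=True)
    vals, vecs = np.linalg.eigh(cov)                # eigh: symmetric input
    w = np.abs(vecs[:, np.argmax(vals)])            # fix arbitrary sign
    w = w / w.sum()                                 # rescale: sum w_k = 1

    return xi @ w, w                                # r_i = sum_k w_k xi_i(k)

rng = np.random.default_rng(0)
R, w = grey_evaluation(rng.uniform(size=(5, 4)))   # 5 countries, 4 metrics
```

Each $\xi_i(k)$ lies in $(0, 1]$, with $\xi_i(k) = 1$ exactly where country $i$ attains the ideal value of metric $k$, so each score $r_i$ also lies in $(0, 1]$.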
In the real calculation, we do not know $D(u)$, so we use the sample covariance matrix $\hat{D}(u) = (\hat{\sigma}_{lj})$ of the samples $(c_{1j}, \ldots, c_{nj})$ of the $u_j$ to represent it, where

$$
\hat{\sigma}_{lj} = \frac{1}{n} \sum_{k=1}^{n} (c_{kl} - \bar{c}_l)(c_{kj} - \bar{c}_j), \qquad \bar{c}_j = \frac{1}{n} \sum_{k=1}^{n} c_{kj}.
$$

The matrix $\hat{D}(u)$ is a nonnegative-definite symmetric real matrix, so all its eigenvalues are real and nonnegative. From the properties of the Rayleigh quotient, we get

$$
\lambda_0 = \max_{w \neq 0} \frac{w^T \hat{D}(u) w}{w^T w} = \max_{\|w\| = 1} w^T \hat{D}(u) w,
$$

where $\lambda_0$ is the maximum eigenvalue of $\hat{D}(u)$, and the corresponding eigenvector $w$ of $\hat{D}(u)$ is the weight vector that we seek.

# A Partial Discussion

The improved WHO method does not change the focus on outcomes of the healthcare system. Its improvement is in making the evaluation more credible. This kind of method is sensible in that it really can reflect the goals of the healthcare system, but it can't reflect the inside. For example, a country with an epidemic often gets a low score in the WHO's evaluation method, but maybe this is not a problem of the healthcare system. So a new method that reflects the inside is needed.

# Comprehensive Evaluation Method

We bring up a method to evaluate a healthcare system, based on [Ding 2005], that considers both the outcomes and properties of the systems themselves.

# Metrics to Evaluate Overall Effectiveness

To make an overall comparison between countries' healthcare systems objectively, fairly, and quantitatively, the metrics must be chosen well.
The World Bank has specified the goals of a healthcare system [Schieber and Maeda 1997, 2]:

- "Improving a population's health status and promoting social well-being"
- "Ensuring equity and access to care"
- "Ensuring microeconomic and macroeconomic efficiency in the use of resources"
- "Enhancing clinical effectiveness"
- "Improving the quality of care and consumer satisfaction"
- "Assuring the system's long-run financial sustainability"

Pursuant to this definition, we make five metrics for the overall healthcare system:

- Efficiency, the proportionality between inputs and outcomes. It can be divided into technical efficiency, economic efficiency, and allocative efficiency. For our purposes, we choose technical efficiency.
- Fairness, both in medical treatment and in contributing to the costs.
- Responsiveness, which "refers to the non-health improving dimensions of the interactions of the populace with the health system, and reflects respect of persons and client orientation in the delivery of health services, among other factors" [Tandon et al. 2000, 2-3].
- The effect of the government.
- The basic situation of a country. This means a composite index of sectors, including economy, education, scientific research, and population.

# The Model to Deal with the Index and Data

Accordingly, we make five new indexes, one for each metric above.

# Choose the Operation Model

We use the method of equal intervals to combine the indexes, which is also used in the Human Development Index by the United Nations to compare countries. We also solve the problem of how to determine the weights.

# The Equal Interval Method

# The Operating Process

- Divide the subindexes into positive indexes and negative indexes.
- Use different algorithms to standardize the two kinds of indexes.
- From the subindexes, compute the five main indexes' composite values.
- Calculate the final score of different countries based on the five metrics' values.

# Classification of the Indexes

- Classification. Positive index: the higher the value, the better the healthcare system; for example, availability of safe drinking water. Negative index: the higher the value, the worse the healthcare system; for example, the proportion of smokers.

- Standardization. The indexes have different units, so we should standardize before calculating the final score. After the classification, we can deal with the two kinds of indexes differently:

- Positive index: $F_{ilj} = \frac{R_{ilj} - R_{il\min}}{R_{il\max} - R_{il\min}} \times 100$,
- Negative index: $F_{ilj} = \frac{R_{il\max} - R_{ilj}}{R_{il\max} - R_{il\min}} \times 100$,

where

- $i$ indexes the five metrics,
- $l$ indexes the subindexes of metric $i$,
- $j$ indexes the countries,
- $R_{il\min}$ is the minimum value of subindex $l$ of metric $i$ in the statistical data,
- $R_{il\max}$ is the maximum value of subindex $l$ of metric $i$ in the statistical data, and
- $F_{ilj}$ is the value of subindex $l$ of metric $i$ after standardization.

Determine the weights. We can get the value of every metric using the function

$$
F_{ij} = \left( \sum_{l=1}^{n} \frac{(F_{ilj})^{\alpha}}{n} \right)^{1/\alpha},
$$

where $n$ is the number of subindexes in metric $i$ and $\alpha$ is a weight of metric $i$.

Get the final score of the evaluated country. Based on the discussion above, we get the function

$$
S_j = \left( \sum_{i=1}^{k} \frac{(F_{ij})^{\alpha}}{k} \right)^{1/\alpha},
$$

where $k$ is the number of metrics (in our case, $k = 5$).
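The standardization and power-mean aggregation steps can be sketched in a few lines; the exponent `alpha` and the sample subindex values below are illustrative stand-ins, not values from the paper.

```python
import numpy as np

def standardize(R, positive=True):
    """Min-max standardize one subindex across countries to 0-100.
    Set positive=False for higher-is-worse (negative) indexes."""
    R = np.asarray(R, dtype=float)
    lo, hi = R.min(), R.max()
    scaled = (R - lo) / (hi - lo) * 100.0
    return scaled if positive else 100.0 - scaled

def power_mean(values, alpha=1.0):
    """Generalized (power) mean: (mean(v ** alpha)) ** (1 / alpha)."""
    v = np.asarray(values, dtype=float)
    return float(np.mean(v ** alpha) ** (1.0 / alpha))

# Illustrative: one metric with two subindexes, three countries.
safe_water = standardize([60.0, 80.0, 100.0], positive=True)   # positive
smokers = standardize([30.0, 20.0, 10.0], positive=False)      # negative
F = [power_mean([sw, sm], alpha=2.0) for sw, sm in zip(safe_water, smokers)]
```

The final score of a country would then be a power mean over its five metric values, exactly as in the formula for $S_j$.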
# Comparisons between Methods

Before the comparison, each component measure was rescaled to a 0 to 100 scale:

- for healthy life expectancy, $H = \frac{\text{Health} - 20}{80 - 20} \times 100$;
- for health inequality, $\mathrm{HI} = (1 - \text{Health Inequality}) \times 100$;
- for responsiveness level, $R = \frac{\text{Responsiveness}}{10} \times 100$;
- for responsiveness inequality, $\mathrm{RI} = 100(1 - \text{Responsiveness Inequality})$;
- for fairness in financing, $\mathrm{FF} = \text{Fairness of Financing Contribution} \times 100$.

The overall composite was, therefore, a number from 0 to 100.

# Dipartite Degree Analysis

As we know, a good metric should distinguish. But the WHO's method can't; for example, it gives 36 countries the same value in its metric of responsiveness. We evaluate the degree of distinction via

$$
\mathrm{DD} = \sqrt{n_1^2 + \cdots + n_i^2},
$$

where $n_i$ is the number of countries that can't be distinguished by criterion $i$. The smaller DD, the better the degree of distinction.

# Monte Carlo Simulation

To test the dipartite degree (degree of distinction) of every method, we use Monte Carlo simulation to make a small change to every data value, since each value must contain some error. The process is as follows.

First, we use the beta distribution to determine the change in each value. Because the beta distribution is restricted to the interval $[0, 1]$, a linear function of a beta-distributed random variable can be used to scale the sampling interval appropriately.

The beta distribution has probability density function

$$
\mathrm{Beta}(\alpha, \beta)(x) = \begin{cases} \frac{\Gamma(\alpha + \beta)}{\Gamma(\alpha) \Gamma(\beta)} x^{\alpha - 1} (1 - x)^{\beta - 1}, & 0 < x < 1; \\ 0, & \text{else}. \end{cases}
$$

It has expected value $E[X] = \alpha / (\alpha + \beta)$.
Suppose that $x_{ij}$, with $1 \leq i \leq 191$ and $1 \leq j \leq 10$, is the unknown true mean of the random variable $X_{ij}$ representing the $j$th metric in country $i$. We let

$$
X_{ij} = (x_{ij} - 1) + 2B, \qquad B \sim \mathrm{Beta}(2, 2),
$$

which takes values in $[x_{ij} - 1, x_{ij} + 1]$ and has expected value

$$
E[(x_{ij} - 1) + 2B] = x_{ij} - 1 + 2 \left[ \frac{2}{2 + 2} \right] = x_{ij}.
$$

We use Monte Carlo simulation to create 1,000 numbers randomly in the interval $[x_{ij} - 1, x_{ij} + 1]$ and calculate a $95\%$ confidence interval for $x_{ij}$.

# Sensitivity Analysis

# About the Values of the Metrics

In this part, we change the values but keep the weights, to see how this change affects the evaluation result. Then we can identify the most important metric, the one that affects the final score most acutely.

Suppose that $G_p$ and $G_q$ are the final scores of countries $p$ and $q$. Let $U_{qr}$ be the value of metric $r$ in country $q$. Change it to make $G_p = G_q$; then we get the marginal value $U_{qr}^B$:

$$
U_{qr}^B = U_{qr} + \frac{G_p - G_q}{w_r}.
$$

We can do sensitivity analysis on the values of the metrics following the process below:

- If $U_{qr}^B$ is outside of the allowable interval, then however the value changes, it won't change the order of the two countries; so $r$ is a value-insensitive metric.
- When $U_{qr}$ is close to $U_{qr}^B$, changing the value will change the order of the two countries; so $r$ is a value-sensitive metric.

# About the Weights

In this part, we change the weights but keep the values of the metrics, to see how doing so affects the evaluation result. Then we can identify the most important weight, the one that affects the final score most acutely.

When a weight changes, it affects the others, since the weights sum to 1.
To make the analysis simple, when a weight changes, let only one other weight change at the same time, and keep the rest fixed.

Suppose that the values before the change are $\bar{w}_j$, $\bar{U}_{ij}$, $\bar{G}_j$ and after the change they are $w_j$, $U_{ij}$, $G_j$. Suppose that the changing weights are $w_r$ and $w_s$, so that

$$
w_r + w_s = \bar{w}_r + \bar{w}_s.
$$

The changing interval of $w_r$ and $w_s$ is $[0, \bar{w}_r + \bar{w}_s]$. When they change, the final score of one country may come to equal that of another. Let the two countries be $p$ and $q$. Then we get the marginal weights

$$
w_r^B = \frac{\bar{G}_p - \bar{G}_q}{(\bar{U}_{pr} - \bar{U}_{qr}) - (\bar{U}_{ps} - \bar{U}_{qs})},
$$

$$
w_s^B = (\bar{w}_r + \bar{w}_s) - w_r^B.
$$

When the two countries have the same score, we get $w_r$ and $w_s$ as

$$
w_r = \bar{w}_r - w_r^B, \qquad w_s = \bar{w}_s - w_s^B.
$$

We can do the sensitivity analysis on the weights following the process below. Because the changing interval of $w_r$ and $w_s$ is $[0, \bar{w}_r + \bar{w}_s]$, if $w_r$ and $w_s$ fall outside the interval, the change won't affect the final order of the two countries; the metrics $r$ and $s$ are weight-insensitive. If not, the change may affect the final order of the two countries.

- If $w_r > \bar{w}_r$, then when the weight of metric $r$ becomes bigger than $w_r$, the final order of the two countries will change; then $r$ is a weight-sensitive metric for the country with the lower score.
- If $w_r < \bar{w}_r$, then when the weight of metric $r$ becomes smaller than $w_r$, the final order of the two countries will change too; then $s$ is a weight-sensitive metric for the country with the lower score.
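Assuming the composite score is linear, $G = \sum_k w_k U_k$, the marginal value $U_{qr}^B$ from the sensitivity analysis above can be verified directly; the weights and metric values below are made-up numbers, not contest data.

```python
import numpy as np

def marginal_value(G_p, G_q, U_qr, w_r):
    """Value of metric r that would make country q's linear composite
    score G_q = sum_k w_k * U_qk equal country p's score G_p."""
    return U_qr + (G_p - G_q) / w_r

# Hypothetical example: two countries scored on three weighted metrics.
w = np.array([0.5, 0.3, 0.2])
U_p = np.array([80.0, 60.0, 70.0])
U_q = np.array([70.0, 65.0, 75.0])
G_p, G_q = float(w @ U_p), float(w @ U_q)

# Raising metric 0 of country q to the marginal value ties the scores.
U_q_tied = U_q.copy()
U_q_tied[0] = marginal_value(G_p, G_q, U_q[0], w[0])
```

If the marginal value falls outside the metric's allowable range, no feasible change in that metric alone can reorder the two countries, which is exactly the value-insensitivity criterion above.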
# Analysis of the American Healthcare System Based on Neural Networks

# The Design of the Back-Propagation Network

Because of the difficulty of data collection, we choose satisfaction as the target and seven indexes as the inputs to a back-propagation (BP) neural network:

- health expenditure per capita,
- number of doctors per thousand people,
- number of sickbeds per thousand people,
- anticipated lifespan,
- infant mortality,
- proportion of the healthcare cost in GDP, and
- extent of healthcare coverage.

So the network should have 7 neurons in the input layer and $15\,(= 2 \times 7 + 1)$ neurons in the middle layer.

We choose satisfaction to be the target of the network, so there is just one neuron in the output layer. According to the principles for designing a BP network, the transfer function for the middle layer is a sigmoid function. We created the neural network in Matlab.

# Application of the BP Network

To check the effect on satisfaction when an index changes, we raise each index by $20\%$ in turn and keep the others unchanged. Doing so, we get the satisfaction for each year, as shown in Figure 1.

![](images/8960c0f410a0b286d2bbddce3924d1aac3bd6c5b6db112fe07f388d4f59de6a0.jpg)
Figure 1. The satisfaction curve after adjustment.

Satisfaction has a rising trend when an index rises. The coverage of healthcare insurance improves the result to the greatest degree. So increasing the coverage of healthcare insurance is a good way to improve the performance of the U.S. healthcare system.

# Advice to the Healthcare System of the U.S.

According to the analysis above, the most important problem in the healthcare system of the U.S. is coverage. Though the government has established insurance for the elderly and for children, a lot of people still fail to buy insurance because it is expensive. Universal healthcare coverage will not only lead to fairness in healthcare but also encourage insurers to give better service.
Based on this, we bring up a plan of a "medical insurance voucher" to help the U.S. reach universal healthcare coverage rapidly. We suggest that the government run an insurance institution itself, while at the same time encouraging commercial healthcare insurance institutions. The government should issue the same "medical insurance voucher" to all residents, who can choose a healthcare insurance institution in which to participate.

To fund this program, we would tax smoking and alcohol consumption.

The differences between public and private insurance are in service and cost. The government should provide basic medical care, the lowest level of service. Commercial insurance should offer more service and better conditions, at a slightly higher cost. A resident who participates in commercial insurance should thus pay a little more in addition to using the medical insurance voucher. When healthcare coverage becomes universal, people will pay only a small part of their income to get healthcare. Advantages of this plan are:

- The plan designs a competitive relationship among insurance institutions, making them do their best to reduce cost and improve quality of healthcare, thus improving the effectiveness of the healthcare system.
- In particular, there is a competitive relationship between government (social) insurance and commercial insurance. In some countries where social insurance dominates, needs can't be satisfied and effectiveness is low. Besides, setting social insurance at a minimal level can not only make commercial insurance institutions improve themselves but also adjust the national view.
- Funding the medical insurance vouchers by taxes solves the problem of fairness. Fairness asks the healthcare system to provide medical care not by income but by need. The tax system has a goal of reallocating incomes, and it also can be used to solve the problem of fairness.
- This plan protects residents' right of choice. It combines competition and human rights, striking a balance between two important concerns.

# References

Ding, Chun. 2005. An empirical comparison of four models of healthcare systems in the world. *World Economic Papers* 2005 (4) #36. http://scholar.ilib.cn/A-sjjjwh200504036.html.
Goldstein, H., and D.J. Spiegelhalter. 1996. League tables and their limitations: Statistical issues in comparisons of institutional performance. *Journal of the Royal Statistical Society Series A* 159 (3): 385-443. http://www.cmm.bristol.ac.uk/team/HG_Personal/limitations-of-league-tables.pdf.
Mathers, Colin D., Ritu Sadana, Joshua A. Salomon, Christopher J.L. Murray, and Alan D. Lopez. 2001. Healthy life expectancy in 191 countries, 1999. *Lancet* 357: 1685-1691. http://jasalomon.googlepages.com/Mathersetal_HealthyLifeExpectancy_Lancet2001.pdf.
———. 2000. Estimates of DALE for 191 countries: Methods and results. GPE Discussion Paper No. 16. Geneva, Switzerland: World Health Organization. https://www.who.int/entity/healthinfo/paper16.pdf.
Schieber, George, and Akiko Maeda. 1997. A curmudgeon's guide to financing health care in developing countries. In *Innovations in Health Care Financing*, edited by George J. Schieber, 1-40. World Bank Discussion Paper No. 365. Washington, DC: World Bank.
Tandon, Ajay, Christopher J.L. Murray, Jeremy A. Lauer, and David B. Evans. 2000. Measuring overall health system performance for 191 countries. GPE Discussion Paper No. 30. Geneva, Switzerland: World Health Organization. https://www.who.int/entity/healthinfo/paper30.pdf.
Zheng, Xiaowei, Gong Zhao Hui, and Wang Xue. 2000. Linear synthetical evaluation function and the determination of its weighting coefficients [in Chinese]. *System Engineering—Theory and Practice* 2000 (10). http://scholar.ilib.cn/A-xtgcllysj200010016.html. 
![](images/8b15bda4fd6404d0c4f394be65e546cfa8263a92a0e3e29cf2f9b4876149e456.jpg)
From left to right: Advisor Ziyang Mao and team members Hongxing Hao, Boliang Sun, and Xiangrong Zeng.

# Judges' Commentary: The Outstanding Healthcare Papers

Sarah Root
Dept. of Industrial Engineering
University of Arkansas
Fayetteville, AR

Rodney Sturdivant
Frank Wattenberg
Dept. of Mathematical Sciences
U.S. Military Academy
West Point, NY

# Introduction

The Interdisciplinary Contest in Modeling (ICM) is a vehicle for teams of students to develop, over a four-day weekend, a model that addresses a problem posed to them. The contest challenges not only students' creativity and modeling prowess, but also their ability to work together in a time-constrained environment. The modeling effort and analysis of many of these teams are truly impressive. In early April, eight judges gathered to read, compare, and contrast the submissions to the contest. Like many of the teams, the judges were an interdisciplinary group, with backgrounds in mathematics, statistics, healthcare administration, industrial engineering, and operations research.

# The Problem

Teams faced the problem of developing a model to compare many of the world's healthcare systems. This is a timely and relevant issue, particularly in the U.S. with the upcoming presidential election. Although the U.S. spends the most per capita on healthcare, several countries have vastly better health outcomes. In addition, many point to inequities in the U.S. system, resulting from a substantial percentage of the population that is either uninsured or underinsured, as evidence that the U.S. system could be improved.

The UMAP Journal 29 (2) (2008) 169-174. ©Copyright 2007 by COMAP, Inc. All rights reserved. Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice. Abstracting with credit is permitted, but copyrights for components of this work owned by others than COMAP must be honored. To copy otherwise, to republish, to post on servers, or to redistribute to lists requires prior permission from COMAP.

The problem required teams to look at issues faced by countries around the world and to develop a model to help compare the countries' healthcare systems across these issues. The problem tasks fell into four main categories:

- Metric Identification and Selection. The teams had to identify metrics for efficiency and effectiveness of healthcare systems and then specify where such data could be obtained. Teams needed to incorporate at least three metrics into their models and justify the importance of the metrics selected.
- Model Development. After identifying the metrics, teams had to develop a methodology or model to use these data to compare healthcare systems.
- Analysis and Comparison of Healthcare Systems. Teams were to exercise their model to perform at least two comparisons: one to compare the U.S. to a country considered to have a good healthcare system, and a second to compare the U.S. to a country with a poor healthcare system.
- Recommendations to Restructure a Healthcare System. After performing the comparisons, teams were to select a country and use their model to suggest changes to restructure its healthcare system. They were either to adapt their existing model or to construct new models to suggest to what degree various changes would improve the system, as measured by the metrics that they had selected.

Overall, the judges were impressed both by the strength of many of the submissions and by the variety of approaches used. 
# Judging Criteria

To structure the judges' thinking and to ensure consistency across judges, we developed a rubric to evaluate submissions. Its framework encompassed:

- Executive Summary: It was important that a team succinctly and clearly explain the highlights of their submission. The executive summary needed to include the metrics selected, modeling approach(es) used, results from comparisons, and recommendations to restructure a healthcare system. Most teams did this. What distinguished the Outstanding papers was the clarity with which the authors connected these topics and conveyed substantial information in the executive summary.
- Scientific Knowledge and Application: This task required that teams either have or develop some knowledge about healthcare systems and mechanisms by which effectiveness can be measured. The majority of teams did this relatively well, selecting relevant metrics to use in their analyses. The teams that excelled in this area went beyond traditionally available metrics. For example, some teams combined readily-available metrics into new ones that would more accurately assess a healthcare system's efficiency or fairness. Others arranged their metrics into groups that would shed light on the analysis. Another way in which teams distinguished themselves in this area was by exploring the tradeoffs between metrics and their limitations. For example, one team noted that although the lifespan of the population was an important metric, various interventions to improve the population's health would take a long time to impact the overall lifespan of the population.
- Modeling and Assumptions: The most effective papers stated their assumptions explicitly, drawing on the scientific foundation that they had developed, in order to build their models. Models ranged widely in complexity, with factor analysis the most popular approach to synthesizing metrics. 
It was important that the modeling process be well formulated and robust; unfortunately, some papers presented wonderful models but offered little explanation of how the model functioned or made little use of its results in the analysis. The ability to use the model to make conclusions and recommendations about healthcare systems distinguished the Outstanding papers, regardless of model choice.
- Analysis/Reflection: Successful papers discussed how their models addressed the issues and tasks of improving the healthcare system in a country. The later requirements of the project were often not addressed, or addressed only superficially. As an example, teams often used models to produce scores to compare countries and to conclude that the healthcare system of one was better than another. However, they did not delve into why one country scored higher or address whether the result was meaningful. In some cases, the final task of restructuring a healthcare system was given very little attention. The best papers used the results of their model to support their recommendations for changes in a healthcare system.
- Communication: The ability to communicate effectively really distinguished the best papers from the others. In some cases, the mathematical model was presented with little or no explanation; so, while the work appeared promising, judges could not follow the exposition or determine how the model was used to address the issues. The judges noted several very specific things that made papers stand out, including presenting the work clearly and concisely and effectively connecting the science to the modeling process. Some papers described the healthcare system and issues well but then lost that thread as they began the modeling process. Additionally, some papers were disjointed, possibly because different team members wrote the various sections without ensuring continuity throughout the document. 
# Discussion of the Outstanding Papers

The Outstanding papers demonstrated true understanding of the difficulties and complexities of healthcare systems, included well-formulated models, and used this work to make thoughtful and interesting suggestions for improving healthcare. While the time constraint of a single weekend meant these papers were not perfect, each team produced work with distinguishing features.

- The Beijing University of Posts and Telecommunications submission (pp. 113-134) is notable for the impressive array of modeling techniques utilized in attacking the problems. There were other papers with a similar level of modeling, but this group not only describes the modeling process clearly but connects the models coherently to the problem at hand. To improve healthcare in the U.S., they propose, among other things, increasing "the ratio of general government expenditure on health to private expenditure" while "decreasing total expenditure on health as a percentage of GDP."
- The paper from the National University of Defense Technology (pp. 155–168) includes perhaps the most comprehensive review of healthcare systems, metrics, and issues among all submissions. The paper is also notable for the sensitivity analysis of its models. Further, this team ties their scientific knowledge in throughout the paper, resulting in exceptional comparisons and evaluations of healthcare systems based on their models. They recommend a "medical insurance voucher" to "increase the insurance coverage and reduce the unfairness" in the U.S. healthcare system.
- The Harvey Mudd College submission (pp. 135-154) was among the most clearly and concisely written papers. The team uses "meta-metrics" to map scores on various healthcare metrics into three areas. These then feed into a stochastic model to analyze various changes to a healthcare system. 
The result is a very strong set of well-supported recommendations for healthcare change in the U.S., such as an "emphasis on the prevention of illness," as well as a "shift towards a more centralized healthcare system in order to make care more accessible to lower- and middle-class individuals."

Two other papers, not designated Outstanding and not published in this issue, stood out for the judges.

- The first of these considers healthcare through the eyes and life of "Simon," "an entity who currently does not exist" but who "is equally likely to be any person in the world." This paper is not only incredibly clever and creative but also demonstrates an outstanding understanding of the economic side of the healthcare debate in particular. The abstract concludes, "Simon says the United States needs healthcare reform now. As we have been told since childhood, it is always good to do what Simon says." Hear, hear.
- The second paper considers the healthcare system through "the lives of John and Jane Doe" and builds a "Virtual Life Model" from first principles. Again, a very creative and interesting paper! This team notes improvement in the U.S. system but says that it could improve further through changes to the "prevention, treatment, and access" components of healthcare.

# Conclusion

The judges extend their congratulations to all who participated in the contest. The submissions represented not only a variety of approaches that teams used to model the problem, but also a variety of approaches to analyzing the results obtained by the model. Reading your submissions was an enjoyable activity. As judges, we will be excited to see both the types of problems that you approach and the creativity that you use as interdisciplinary modelers after you complete your studies.

# Recommendations for Future Participants

- When ideas, assumptions, modeling concepts, and other aspects of the problem are clearly explained, it is easier to separate work that is outstanding from work that is just good. 
Aim to communicate your ideas clearly and concisely.
- Address all aspects of the problem. Omitting questions that are asked in the problem statement will not result in a competitive submission.
- Simple explanations are usually better than complicated ones. Clarity and brevity are preferred to explanations that are long, rambling, and sometimes confusing.
- It is important to cite precise sources for work or words that are not your own. Material taken from other sources must be thoroughly documented and placed in quotation marks.
- The selection of an appropriate modeling approach is critical, but using the model to analyze the problem and present recommendations is often more important than the model itself. Teams should spend time not only developing the model but also using it to obtain recommendations, analyze different scenarios, and perform sensitivity analysis.
- The recommendations that you make to decision-makers should stem from your model. They should not simply be the result of Internet research or other sources that are independent of your model.
- Team members should work to integrate their final submission. The judges should not be able to detect clear breaks in communication or find contradictory information in portions of the paper written by different team members.

# About the Authors

![](images/78ef7aaaa6b8be308363edb263f47230f2d2f5f8a9157c9990375a03c4720687.jpg)

Sarah Root is an Assistant Professor of Industrial Engineering at the University of Arkansas, where she teaches courses in operations research and service systems engineering. Her research interests are primarily in modeling and solving large-scale optimization problems, particularly those arising in logistics and healthcare systems.

Rod Sturdivant is an associate professor at the United States Military Academy in West Point, NY. He earned a Ph.D. 
in biostatistics at the University of Massachusetts—Amherst and is currently program director for the probability and statistics course at West Point. He is also founder and director of the Center for Data Analysis and Statistics within the Department of Mathematical Sciences. His research interests are largely in applied statistics with an emphasis on hierarchical logistic regression models.

![](images/fc2f6e013ff9c0086ca808fd7aadce1aa409729cc4a8e7c9b7f9f0ec261da48f.jpg)

Frank Wattenberg is a professor in the Department of Mathematical Sciences at the United States Military Academy (USMA), West Point. He is particularly interested in modeling and simulation and in the use of technology for simulation and for education across the undergraduate curriculum. He is currently leading a team at the USMA that is developing Modeling in a Real and Complex World to be published as part of the MAA Online Book Project. He is also working with colleagues at USMA and elsewhere to develop rich immersive environments for modeling and simulation. This project will produce environments with both virtual and hands-on components that students will revisit from middle school through college and from many different subject areas and levels. The architecture will support collaborative modeling and simulation based in part on the ideas of multiplayer games.

# Reviews

Tung, K.K. 2007. Topics in Mathematical Modeling. Princeton, NJ: Princeton University Press; 336 pp, $45. ISBN 978-0-691-11642-6.

The world today is awash in textbooks on modeling. These range in level from texts for students with very little mathematical background to texts for graduate students and professionals. Many of these books use modeling as a foil to promote a particular agenda: dynamical systems, or nonlinear differential equations, or perhaps finite mathematics. These books are less interested in teaching and more interested in planting a flag. 
The book under review is a refreshing departure from the sorts of polemics just described. Tung's preface shows that he is a dyed-in-the-wool teacher of considerable talent whose only mission is to show the student how to take raw empirical data and turn it into a mathematical paradigm that can be analyzed. His prerequisites are solid but minimal: calculus and a smattering of ordinary differential equations (ODEs). He is wise to provide an appendix with a quick treatment of ODEs for those whose background is deficient. Tung also describes in the preface a clear path for those who wish to avoid the differential equations altogether.

Tung covers some of the usual modeling topics but also many others that are surprising and refreshing. Among the former are

- Fibonacci numbers,
- compound interest,
- radiocarbon dating,
- Kepler's laws,
- nonlinear population models, and
- predator-prey problems.

Among the latter are

- global warming,
- marriage and divorce,
- analysis of the El Niño effect,
- the age of the Earth,
- the Broughton and Tacoma Narrows bridges,
- climate models,
- HIV modeling, and
- mapping the World Wide Web.

Tung uses a variety of techniques to analyze these different problems. Among these are differential equations, dynamical systems, linearization, phase-plane analysis, and many others. One important feature of the book is that an entire chapter is devoted to each problem and its related ideas. In a calculus class, the student typically sees examples and problems that can be solved in a few lines. Here the student sees the substantive development of mathematical ideas over the course of a prolonged discussion. The book does not contain any proofs per se, but it has discussions that have the gravitas of proofs.

The writing in this book is delightful and elegant—almost literary in its beauty and precision. The presentation is thoughtful and readable. The organization is exemplary. 
As an instance, each chapter begins with a few words telling the reader exactly what mathematics will be needed for the discussion. Every chapter has a useful introduction. There are many interesting references of a philosophical or cultural nature.

The book contains plenty of entertaining graphics, photographs of mathematicians, and other illustrative figures. The sections and their titles are chosen to give the reader a keen sense of the flow of ideas. The layout of the book is open and friendly. This is certainly an inviting text for students.

One of the truly critical components of a successful textbook is the exercise sets. This text contains exercises that are quite thought-provoking. Each is a word problem that could be used for class or group discussion, and many of these problems could be developed into research projects or term papers. One might wonder whether it would have been propitious to include some elementary exercises as well. If I were to teach from this text—and I would certainly enjoy doing so—I would probably find myself hunting around in other texts for routine and drill exercises to give the students. (One may well puzzle over what these elementary exercises might consist of. But something must be provided to help bring students up to speed.) That would be too bad, for it is the author's job to provide that sort of material. But I must stress that the exercises that are provided are the product of much research and thoughtful editing. They are quite valuable and instructive.

Another small criticism—or at least a comment—is that the book contains few if any displayed examples. One usually expects a textbook to have the format

- introductory patter, then
- enunciation of idea, followed by
- illustrative example

for each key topic. Tung's book deviates from that paradigm because in fact each chapter is a topic. Each chapter is an example. 
It actually does not make a great deal of sense—for what Tung is trying to achieve—to have displayed examples. Such items would be too trite.

But the point of the discussion in the last two paragraphs is worth noting: For many if not most students, this course, and this book, will be a first exposure to serious mathematical discourse. Here, for the first time, the student will see protracted mathematical reasoning directed toward a sophisticated and well-defined goal. This is not the place for cute little problems with three-line solutions. This is instead the venue for rather recondite reasoning. Elementary exercises and elementary examples do not really have a place here. The student in a course like this will need to exert some effort in order to get something of value out of it. But the effort will be well rewarded.

Another important point is that this text illustrates, unlike any text in a previous or more elementary course, the symbiosis of mathematics with other parts of science and technology. Mathematics is not a cottage industry that caters primarily to its own whims. Rather, mathematics is the key to understanding much of the world around us. Surely Tung's book illustrates this important point clearly and decisively. That is an incisive message for any student to see and understand, and it takes a good textbook to get the message across.

Steven G. Krantz, Professor of Mathematics, Washington University in St. Louis, One Brookings Drive, St. Louis, MO 63130; sk@math.wustl.edu.

Hunt, Earl. 2007. The Mathematics of Behavior. New York: Cambridge University Press; x+346 pp, $80, $34.99 (P). ISBN 978-0-521-85012-4, 978-0-521-61522-8.

For too many mathematics professors, their only idea of industry is the government research laboratories. That is, they have no idea of the sort of jobs that most people with mathematics degrees working in industry experience. 
But I am going to give one piece of advice for the student facing industry, and then I will extrapolate from it back to academia.

Suppose that a student with a recent degree in mathematics receives two job offers that seem utterly equivalent in pay, conditions, security, and so on. But there is one significant difference. In Corporation A, the worker will be working with many mathematical scientists in an environment where much of the work is inherently mathematical (and scientific). In Corporation B, the worker will be the house mathematician: the go-to person for mathematical questions.

Now, though the second job might sound like a nice opportunity, the worker is almost certainly better off taking the job with Corporation A. In Corporation A, you have mathematical workers who make work for one another. With any luck, they will support one another and collaborate on papers and generate more mathematical work. But in Corporation B, the worker is very likely to be lonely—the corporate pariah. The mathematical advice that the worker is solicited to provide is likely to be met with incomprehension and suspicion.

How does this apply to academia? Suppose that a student has a degree in mathematics but wants to pursue a Ph.D. in cultural anthropology. My advice is to pursue a degree in physical anthropology instead. Physical anthropology is an area that is much more quantitative and where the mathematically-trained student has a much greater chance of fitting in. After receiving the Ph.D., there is little to keep the physical anthropologist from venturing into cultural anthropology at least occasionally. There is an unmistakable trend towards greater use of mathematics in each of the social sciences. I have even heard of departments in the social sciences recruiting mathematics majors on the grounds that it is easier to convert them to the discipline at hand than trying to convert social science students to mathematical methods. 
(This sort of thing seems to occur frequently in different areas and is one reason to major in mathematics.)

The book reviewed here is a survey of experimental psychology, a field that has a long tradition of mathematical modeling and in particular of first-rate statistical research. Psychology, as we are told on Day 1 in Psychology 100, is the study of behavior. Earl Hunt has practiced mathematical psychology for 50 years. The coverage is fairly comprehensive; however, the book does not touch upon any area of clinical psychology, even when the controversies there may have mathematical content (largely through the use of statistics). I will not enumerate the topics covered in this book. Many of them will be familiar to anyone with a knowledge of applied mathematics. Some of the material will be familiar only to experimental psychologists—for example, the chapter on the physics of perception. That subject does not interest me much, but I found it valuable because of Hunt's historical approach.

The book begins with Eratosthenes' estimation of the Earth's circumference, because Prof. Hunt first discusses the philosophy of mathematical modeling. His second chapter explores the foundations of probability. I think that he is a little too theoretical here, given the subject of the book; no one is going to learn probability here. He says (p. ix) that the book should be accessible to anyone with a "basic understanding of calculus, and most of the book will not even require that." I would require some calculus for mathematical maturity. I would also expect the reader to have some knowledge of probability and of statistics.

Prof. Hunt, as an experimental psychologist, is quite good on statistical issues. But the reader should have some knowledge of statistics coming in. To
put it succinctly: A freshman with one semester of calculus can learn a lot from this book and should acquire the gist of many topics; a senior with many mathematics courses can learn a lot more and can get a remarkably clear idea of experimental psychology; but knowledge of statistics would be most useful to the reader. There is serious attention to Bayes' Theorem, to covariance, to regression, and to factor analysis (a technique used by experimental psychologists probably more than by any other group). This book will give the mature student the information necessary to make an informed decision about pursuing experimental psychology in a graduate program.

One topic in experimental psychology is among the most controversial and dangerous topics in all of academe. I am referring to intelligence testing, which has been the subject of a remarkable amount of political posturing and uninformed commentary, in some cases by mathematicians. Prof. Hunt's discussion of that area is far more detailed and sophisticated than what readers see in the popular press (and by "popular press" I include venues such as Scientific American). However, it cannot be considered a comprehensive survey; it is instead a good introduction to a contentious subject at an appropriate level. The wars over intelligence testing are far more intense than those in mathematics and the hard sciences. That controversy has had far more public exposure than most.

A similar war in clinical psychology, over "repressed memory," has involved serious contributions by experimental psychology. Prof. Hunt shows the experimental approach to memory. He barely mentions the controversy and equivocates to some extent; other experimental psychologists have a great deal to say on this topic, and there is no hint here of the intensity of the conflict. 
In fact, a survey of the battles over repressed memory is sufficient to show that conflicts in the social sciences tend to be more intense than those in math and the hard sciences and that social scientists do combat at an entirely different level. A corollary to this is that Ph.D. programs in these areas can be even more hazardous than those in the "hard" sciences. Even keeping this in mind, mathematics students should consider the social sciences for graduate study. (This is certainly true of economics. Economics in the U.S. and the U.K. is so mathematical that I think that mathematics majors have an advantage.)

The book reviewed here is a good survey of much of experimental psychology. It is also of interest to anyone pursuing the mathematics of sociology and to a lesser extent political science (for the treatment of Arrow's theorem).

James M. Cargal, Mathematics Department, Troy University—Montgomery Campus, 231 Montgomery St., Montgomery, AL 36104; jmcargal@sprintmail.com.

Nahin, Paul J. 2008. Digital Dice: Computational Solutions to Practical Probability Problems. Princeton, NJ: Princeton University Press; xi + 262 pp, $27.95. ISBN 978-0-691-12698-2.

The evolution of computer languages is a fascinating but complex topic. But it is probably safe to say that prior to about 1985 most languages were quite useful for numerical computation (although there are obvious exceptions such as COBOL, the dominant business language). However, in the late 1980s, C and then C++ became dominant in computer science departments. The problem with C was twofold:

- it is a cryptic language; and
- it is designed to give the user access to the guts of the machine—a capability largely irrelevant to numerical computation.

C (C++, actually) was replaced as the dominant language by Java. Java was designed for Web development; again, a purpose largely irrelevant to computation. 
The result of all of this is that it is easy to encounter students who have passed courses in programming but who somehow do not know anything about simple control structures. Many of them cannot program a spreadsheet. (Spreadsheets can be remarkably efficient for a variety of tasks, especially in discrete mathematics but also in areas such as differential equations.) It appears that mathematicians are responding to this problem by doing their numerical programming in mathematical environments such as Mathematica, Maple, and Matlab. Also, I suspect that mathematics departments are teaching their majors programming themselves rather than sending them to the CS departments.

Nahin's book reviewed here could be useful for a course in mathematical programming. Paul J. Nahin is professor emeritus of electrical engineering at the University of New Hampshire. In recent years, he has published a number of books. His An Imaginary Tale: The Story of $i$ [1998] is, I think, the best introduction to complex arithmetic and analysis for the undergraduate in mathematics and the sciences. The sequel, Dr. Euler's Fabulous Formula [2006], is something of a tour de force.

In 2000 Nahin published a book of probability problems, *Dueling Idiots and Other Probability Puzzlers* [2000], in which there is sometimes the suggestion of analyzing the problem by simulation. The book reviewed here is explicitly dedicated to that technique. What makes the book useful is that there is little formal probability. Density functions and probability distributions do not come up; so the teacher using this book as a source can concentrate on programming and logic.

The book is divided into four parts: introduction, problems, solutions, and appendices. The solutions usually contain analysis as well as numerical results. Programming is in Matlab, which closely resembles structured Basic and functions effectively as readable pseudocode. The introduction
The introduction + +is an informative essay, and the nine appendices are themselves likely to be of interest to the reader. + +Another current educational issue is that students in the mathematical sciences often do not seem to understand the power of computation. This is because the curriculum tends to lag behind technology—by decades. + +For example, statistical methods, to a remarkable extent, reflect the technology that was available to Ronald Fisher in the 1920s. Many statistical tests can be replaced by remarkably quick computational methods; moreover, these methods often are nonparametric and as such (I would say) esthetically superior to classical methods. The very first problem in Digital Dice is essentially a statistical test: Five dishwashers work in a restaurant. In a one-week period, five dishes are broken and four of those are broken by one individual. The individual claims that this is merely bad luck. The problem is to calculate the probability that one individual would break four out of the five dishes under the assumption (null hypothesis) that each dishwasher is equally likely to break a dish. Here, though, I believe that author Nahin is in error. He calculates the probability that the one particular worker would break four or more dishes under the null hypothesis. The correct approach, I am sure, is to calculate the probability that some one employee would break four or more of the five dishes under the null hypothesis. In any case, that very question is a good one for the students to contemplate. + +Much of the impetus of Digital Dice is the analysis of counterintuitive problems, with the idea that these problems might better motivate the students. There is a lot of truth to that; nothing is more motivating than results that are unexpected and surprising. These problems often motivate both approaches to the solution: analysis to understand why things work the way they do, and simulation to verify both the way things work and the analysis. 
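The dishwasher question is easy to put to the test by simulation, in the spirit of the book. A minimal Monte Carlo sketch (in Python rather than Nahin's Matlab; the trial count and seed are my own choices) estimates both readings of the probability under the null hypothesis:

```python
import random

def broken_dish_probs(trials=200_000, workers=5, dishes=5):
    """Under the null hypothesis (each dish equally likely to be broken
    by any of the workers), estimate P(a particular worker breaks >= 4
    of the dishes) and P(some worker breaks >= 4 of the dishes)."""
    particular = some = 0
    for _ in range(trials):
        counts = [0] * workers
        for _ in range(dishes):
            counts[random.randrange(workers)] += 1
        particular += counts[0] >= 4   # one named worker in advance
        some += max(counts) >= 4       # any one of the five workers
    return particular / trials, some / trials

random.seed(2008)
p_particular, p_some = broken_dish_probs()
# Exact values: P(particular >= 4) = C(5,4)(1/5)^4(4/5) + (1/5)^5 = 21/3125 ~ 0.0067;
# since no two workers can each break 4 of the 5 dishes, the events are
# disjoint and P(some >= 4) = 5 * 21/3125 = 105/3125 ~ 0.0336.
```

The factor-of-five gap between the two estimates is exactly the point at issue between the two interpretations of the problem.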
In my review of books on investment [Cargal 2006, 87-89], I discussed a problem from Morton Davis and I recapitulated his analysis, which leads to a highly counterintuitive result about a reasonable-seeming investment strategy. When I first encountered the problem in 1989, I was working in industry. To simulate the problem, I would wait until after work and run it on as many as 10 personal computers simultaneously (this was when a 25-MHz Intel 386 processor was considered fast). I didn't doubt the analysis of the problem—it is at the precalculus level—but I had to see it with my own eyes.

The fact is, though, that counterintuitive problems are not necessary. Students at this level usually are not acquainted with the gambler's ruin problem, for example, and this book offers a great set of exercises. Another writer, Julian Havil, has produced a remarkably similar set of books. His *Nonplussed!* [2007] is devoted to counterintuitive problems; and its sequel—which I believe is of even greater interest to mathematics majors—*Impossible?: Surprising Solutions to Counterintuitive Conundrums* [2008] has just appeared, with a prepublication blurb by Nahin. (Havil's *Gamma* [2003] would roughly correspond to Nahin's *Dr. Euler's Fabulous Formula*.) However, the problems in Digital Dice are selected as computation exercises and are not necessarily as counterintuitive as those in the prior book *Dueling Idiots*. The most counterintuitive problem in Digital Dice is Problem 14, Parrondo's paradox, probably the most challenging probability paradox I have seen; it has received a fair amount of attention lately. As usual, Nahin both discusses computational simulation and provides an analysis. However, Havil provides probably a better analysis in *Nonplussed!* [2007].

All of the books by Nahin and Havil are worth having, including others not listed here.
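Parrondo's paradox itself takes only a few lines to simulate. A sketch using the standard textbook version of the games (epsilon = 0.005, capital tested modulo 3; not necessarily the exact rules of Nahin's Problem 14):

```python
import random

def parrondo(choose_game, steps=100_000, eps=0.005, seed=0):
    """Play Parrondo's games with unit stakes, starting from capital 0.
    Game A: win with probability 1/2 - eps.
    Game B: win with probability 1/10 - eps if the capital is a multiple
    of 3, and 3/4 - eps otherwise.  Returns the final capital."""
    rng = random.Random(seed)
    capital = 0
    for _ in range(steps):
        if choose_game(rng) == 'A':
            p = 1/2 - eps
        elif capital % 3 == 0:
            p = 1/10 - eps
        else:
            p = 3/4 - eps
        capital += 1 if rng.random() < p else -1
    return capital

# Game A alone drifts downward; choosing A or B at random drifts upward.
a_alone = [parrondo(lambda r: 'A', seed=s) for s in range(5)]
mixed = [parrondo(lambda r: r.choice('AB'), seed=s) for s in range(5)]
```

Averaged over the five runs, the A-only capital comes out negative and the randomly mixed capital positive: two losing games combine into a winning one, which is the paradox.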
I particularly recommend Digital Dice for the task of teaching undergraduates in mathematics the fundamentals of computation and simulation.

# References

Cargal, J.M. 2006. Review of books on investment. The UMAP Journal 27 (1): 81-90.
Havil, Julian. 2003. Gamma. Princeton, NJ: Princeton University Press.
_______. 2007. Nonplussed!: Mathematical Proof of Implausible Ideas. Princeton, NJ: Princeton University Press.
_______. 2008. Impossible?: Surprising Solutions to Counterintuitive Conundrums. Princeton, NJ: Princeton University Press.
Nahin, Paul J. 1998. An Imaginary Tale: The Story of i. Princeton, NJ: Princeton University Press.
_______. 2000. Dueling Idiots and Other Probability Puzzlers. Princeton, NJ: Princeton University Press.
_______. 2006. Dr. Euler's Fabulous Formula. Princeton, NJ: Princeton University Press.

James M. Cargal, Mathematics Department, Troy University—Montgomery Campus, 231 Montgomery St., Montgomery, AL 36104; jmcargal@sprintmail.com.

Shiflet, Angela B., and George B. Shiflet. Introduction to Computational Science: Modeling and Simulation for the Sciences. Princeton, NJ: Princeton University Press, 2006; xxiv + 554 pp, $69.50. ISBN 0-691-12565-1.

Applied mathematics is almost synonymous with mathematical modeling—which is why there is an annual issue of this journal devoted to a modeling competition. It might also explain why the editor constantly

exhorts me to find books on modeling. Very roughly, we can divide the requisite knowledge for modeling into three areas:

- mathematics,
- applications, and
- computation and programming.

Each area presents challenges to the student, and instructors probably tend to underestimate these challenges—which is why the first course in modeling can be an unpleasant experience for everyone involved. In pedagogical terms, this book is the best thing I have seen in a long time; books this good come along about once every five years.
+ +A great deal of work has gone into this book, and it does many things extremely well. Technically, the book does not require calculus; it introduces the concepts needed, such as rate of change, and in fact does a nice job of motivating the derivative. However, for all practical purposes, calculus 1 should be required. The book is also a fine introduction to differential equations; but again, the student who has had the course has an advantage. + +The same can be said about half a dozen other courses, but the course in differential equations is particularly useful, partly for a strange and perhaps controversial reason. Logically, there is a lot to be said for starting with first-order differential equations and Euler's method (and direction fields). However, the logical sequence in which students should acquire material and a pedagogical sequence—one that actually works in the classroom—are often at odds. This is one of the main challenges of teaching: how students learn and how they should learn are two different things. Differential equations is a great plus before using this text because the student is in a position to appreciate Euler's method (not to mention Runge-Kutta, which is also covered in the text). + +Nonetheless a course in differential equations is not necessary at all. Let us assume that the students are sophomores with a semester of calculus. For these students, part of the challenge of modeling is that they must learn the salient features of the modeling context at hand, which may be mechanics, probability, biology, or climatology, etc. As a rule, students feel that they do not get enough information from the instructor about that background. Of course, instructors disagree; but I sympathize with the students. In this regard, the book shines. It has excellent tutorials throughout that are clear, informative, and well-presented; a high school student could learn a lot from the book. 
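The payoff of seeing Euler's method beside Runge-Kutta can be made concrete in a few lines. A sketch (my example, not one of the book's modules) comparing the two on y' = y, y(0) = 1, whose exact solution at t = 1 is e:

```python
from math import e

def euler(f, y, t, t_end, n):
    """Fixed-step Euler's method for y' = f(t, y)."""
    h = (t_end - t) / n
    for _ in range(n):
        y += h * f(t, y)
        t += h
    return y

def rk4(f, y, t, t_end, n):
    """Classical fourth-order Runge-Kutta for y' = f(t, y)."""
    h = (t_end - t) / n
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h / 2 * k1)
        k3 = f(t + h / 2, y + h / 2 * k2)
        k4 = f(t + h, y + h * k3)
        y += h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return y

f = lambda t, y: y
err_euler = abs(euler(f, 1.0, 0.0, 1.0, 10) - e)   # about 0.12
err_rk4 = abs(rk4(f, 1.0, 0.0, 1.0, 10) - e)       # about 2e-6
```

With the same ten steps, Euler's error is about 0.12 while RK4's is about 2e-6; that contrast is what a student with differential equations behind them is positioned to appreciate.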
Moreover, the book covers such a variety of subjects that any instructor will find pet topics. Since one co-author is a biologist, there is a range of biology projects; but there is also a surprising range of physics and engineering topics, as well as very good treatments of purely numerical issues.

Introduction to Computational Science is divided into 13 sections, each of which is divided into from 2 to 10 modules. Each module has an introduction and may have downloads, lessons, exercises, projects, references, and other items. The references are fairly thorough; nothing in this book is half-hearted. The book uses a variety of computational tools rather than concentrating on any single tool or language.

I think this text is a masterpiece. I know of nothing comparable. I give it five stars.

James M. Cargal, Mathematics Department, Troy University—Montgomery Campus, 231 Montgomery St., Montgomery, AL 36104; jmcargal@sprintmail.com. \ No newline at end of file diff --git a/MCM/1995-2008/2008MCM/2008MCM.md b/MCM/1995-2008/2008MCM/2008MCM.md new file mode 100644 index 0000000000000000000000000000000000000000..bee4c5a98976421c8cec655354f5b50558a1732c --- /dev/null +++ b/MCM/1995-2008/2008MCM/2008MCM.md @@ -0,0 +1,4494 @@ +# The U

# M

# Publisher

COMAP, Inc.

# Executive Publisher

Solomon A. Garfunkel

# ILAP Editor

Chris Arney

Division Chief, Mathematical Sciences

Program Manager, Cooperative Systems

Army Research Office, P.O. Box 12211

Research Triangle Park, NC 27709-2211

david.arney1@arl.army.mil

# On Jargon Editor

Yves Nievergelt

Dept. of Mathematics, Eastern Washington Univ.

Cheney, WA 99004

ynievergelt@ewu.edu

# Reviews Editor

James M. Cargal

Mathematics Dept.

Troy University—Montgomery Campus
231 Montgomery St.

Montgomery, AL 36104

jmcargal@sprintmail.com

# Chief Operating Officer

Laurie W. Aragón

# Production Manager

George W.
Ward

# Production Editor

Joyce Barnes

# Distribution

John Tomicek

# Graphic Designer

Daiva Chauhan

# AP Journal

Vol. 29, No. 3

# Editor

Paul J. Campbell

Beloit College

700 College St.

Beloit, WI 53511-5595

campbell@beloit.edu

# Associate Editors

Don Adolphson

Chris Arney

Aaron Archer

Ron Barnes

Arthur Benjamin

Robert Bosch

James M. Cargal

Murray K. Clayton

Lisette De Pillis

James P. Fink

Solomon A. Garfunkel

William B. Gearhart

William C. Giauque

Richard Haberman

Jon Jacobsen

Walter Meyer

Yves Nievergelt

Michael O'Leary

Catherine A. Roberts

John S. Robertson

Philip D. Straffin

J.T. Sutcliffe

Brigham Young Univ.

Army Research Office

AT&T Shannon Res. Lab.

U. of Houston-Downtown

Harvey Mudd College

Oberlin College

Troy U.-Montgomery

U. of Wisc.—Madison

Harvey Mudd College

Gettysburg College

COMAP, Inc.

Calif. State U., Fullerton

Brigham Young Univ.

Southern Methodist U.

Harvey Mudd College

Adelphi University

Eastern Washington U.

Towson University

College of the Holy Cross

Georgia Military College

Beloit College

St. Mark's School, Dallas

# Institutional Web Membership (Web Only)

Institutional Web Memberships do not provide print materials. Web memberships allow members to search our online catalog, download COMAP print materials, and reproduce them for classroom use.

(Domestic) #2830 $449 (Outside U.S.) #2830 $449

# Institutional Membership (Print Only)

Institutional Memberships receive print copies of The UMAP Journal quarterly, our annual CD collection UMAP Modules, Tools for Teaching, and our organizational newsletter Consortium.

(Domestic) #2840 $289 (Outside U.S.)
#2841 $319 + +# Institutional Plus Membership (Print Plus Web) + +Institutional Plus Memberships receive print copies of the quarterly issues of The UMAP Journal, our annual collection UMAP Modules, Tools for Teaching, our organizational newsletter Consortium, and on-line membership that allows members to search our online catalog, download COMAP print materials, and reproduce for classroom use. + +(Domestic) #2870 $569 (Outside U.S.) #2871 $599 + +For individual membership options visit www.comap.com for more information + +To order, send a check or money order to COMAP, or call toll-free 1-800-77-COMAP (1-800-772-6627). + +The UMAP Journal is published quarterly by the Consortium for Mathematics and Its Applications (COMAP), Inc., Suite 3B, 175 Middlesex Tpke., Bedford, MA, 01730, in cooperation with the American Mathematical Association of Two-Year Colleges (AMATYC), the Mathematical Association of America (MAA), the National Council of Teachers of Mathematics (NCTM), the American Statistical Association (ASA), the Society for Industrial and Applied Mathematics (SIAM), and The Institute for Operations Research and the Management Sciences (INFORMS). The Journal acquaints readers with a wide variety of professional applications of the mathematical sciences and provides a forum for the discussion of new directions in mathematical education (ISSN 0197-3622). + +Periodical rate postage paid at Boston, MA and at additional mailing offices. + +Send address changes to: info@comap.com + +COMAP, Inc., Suite 3B, 175 Middlesex Tpke., Bedford, MA, 01730 + +© Copyright 2007 by COMAP, Inc. All rights reserved. + +# Vol. 29, No. 3 2008 + +# Table of Contents + +# Publisher's Editorial + +Change Solomon A. 
Garfunkel 185

About This Issue 186

# Special Section on the MCM

Results of the 2008 Mathematical Contest in Modeling Frank Giordano 187
Abstracts of the Outstanding Papers and the Fusaro Papers 223
The Impending Effects of North Polar Ice Cap Melt
Benjamin Coate, Nelson Gross, and Megan Longo 237
A Convenient Truth: Forecasting Sea Level Rise Jason Chen, Brian Choi, and Joonhahn Cho 249
Fighting the Waves: The Effect of North Polar Ice Cap Melt on Florida Amy M. Evans and Tracy L. Stepien 267
Erosion in Florida: A Shore Thing
Matt Thies, Bob Liu, and Zachary W. Ulissi 285
Judge's Commentary: The Polar Melt Problem Papers
John L. Scharf 301
A Difficulty Metric and Puzzle Generator for Sudoku Christopher Chang, Zhou Fan, and Yi Sun 305
Taking the Mystery Out of Sudoku Difficulty: An Oracular Model Sarah Fletcher, Frederick Johnson, and David R. Morrison 327
Difficulty-Driven Sudoku Puzzle Generation
Martin Hunt, Christopher Pong, and George Tucker 343
Ease and Toil: Analyzing Sudoku Seth B. Chadwick, Rachel M. Krieg, and Christopher E. Granade 363
Cracking the Sudoku: A Deterministic Approach David Martin, Erica Cross, and Matt Alexander 381
Judges' Commentary: The Fusaro Award for the Sudoku Problem
Marie Vanisko and Peter Anspach 395

# Publisher's Editorial

# Change

Solomon A. Garfunkel

Executive Director

COMAP, Inc.

175 Middlesex Turnpike, Suite 3B

Bedford, MA 01730-1459

s.garfunkel@mail.comap.com

This is the season of change—for good and bad. As I write this editorial, the election is a little less than a month away. The financial markets are imploding, and the country appears more than ready to head in a new direction, even if it is unsure where that direction will take us. By the time you read this, many things will be clear. We will have a new administration, perhaps a very new administration. And we will likely be living individually and collectively on less—perhaps a lot less.

No matter.
Some things still need to be done. I won't speak in this forum about health care or infrastructure or other changes in foreign and domestic policy—but I will speak of mathematics education. As small as our issues may seem at times of national and international stress, education—especially technical education—can always provide a way out and up. We cry out for mathematical and quantitative literacy, not because we are lobbyists or a special interest group trying to raise teacher salaries. We cry out for literacy because knowledge is the only way to prevent the abuses whose consequences we now endure.

How many times in these last few months have we heard about people who didn't understand the terms of their mortgages; of managers and bankers who didn't understand their degree of risk; of policy makers who didn't understand how the dominoes could fall? Yes, derivatives are confusing. And yes, derivatives of derivatives are more confusing. But isn't this just a perfect example of why we talk about teaching mathematical modeling as a life skill? Mathematics education is not a zero-sum game. We don't want our students to learn more mathematics than other countries' students. That is just a foolish argument used to raise money: the fear that another country will outperform us or that another state will take our high-tech jobs.

The problem is much, much bigger. There simply are not enough mathematically-trained people in the world to run the world. The proof of that statement is all around us. And it is as much in our interest that the world's people become more quantitatively literate as it is that the citizens of our city, our state, and our country do. In theory, there is now less money to fund changes in mathematics education. But we must fund them. We must see the issues and problems as global issues and problems and work together to solve them.

The good news is that the energy and commitment to do the job are here.
At the recent conference on the Future of Mathematics Education, co-sponsored by Math is More, I met with mathematics and mathematics education researchers, with college and high school faculty, with state and local administrators, with policy-makers, and with employers. We no longer talked about why; we talked about how. The need and desire for real change was palpable. And the energy was both exciting and challenging. People kept asking, "What can I do?"—as a classroom teacher, as a supervisor of mathematics, as a staff developer, as a curriculum developer, as a policy maker. + +So while the times and problems are difficult, the will for positive change is here. Now is the time for all of us to gather together to make that change a reality. + +# About This Issue + +Paul J. Campbell + +Editor + +This issue runs longer than a regular 92-page issue, to more than 200 pages. However, not all of the articles appear in the paper version. Some appear only on the Tools for Teaching 2008 CD-ROM (and at http://www.comap.com for COMAP members), which will reach members and subscribers later and will also contain the entire 2008 year of Journal issues. + +All articles listed in the table of contents are regarded as published in the Journal. The abstract of each appears in the paper version. Paging of the issue runs continuously, including in sequence articles that do not appear in the paper version. So if, say, p. 250 in the paper version is followed by p. 303, your copy is not necessarily defective! The articles on the intervening pages are on the CD-ROM. + +We hope that you find this arrangement agreeable. It means that we do not have to procrusteanize the content to fit a fixed number of paper pages. We might otherwise be forced to select only two or three Outstanding MCM papers to publish. Instead, we continue to bring you the full content. 
+ +# Modeling Forum + +# Results of the 2008 Mathematical Contest in Modeling + +Frank Giordano, MCM Director + +Naval Postgraduate School + +1 University Circle + +Monterey, CA 93943-5000 + +frgiorda@nps.navy.mil + +# Introduction + +A total of 1,159 teams of undergraduates, from 338 institutions and 566 departments in 14 countries, spent the first weekend in February working on applied mathematics problems in the 24th Mathematical Contest in Modeling. + +The 2008 Mathematical Contest in Modeling (MCM) began at 8:00 P.M. EST on Thursday, February 14 and ended at 8:00 P.M. EST on Monday, February 18. During that time, teams of up to three undergraduates were to research and submit an optimal solution for one of two open-ended modeling problems. Students registered, obtained contest materials, downloaded the problems at the appropriate time, and entered completion data through COMAP's MCM Website. After a weekend of hard work, solution papers were sent to COMAP on Monday. The top papers appear in this issue of The UMAP Journal. + +Results and winning papers from the first 23 contests were published in special issues of Mathematical Modeling (1985-1987) and The UMAP Journal (1985-2007). The 1994 volume of Tools for Teaching, commemorating the tenth anniversary of the contest, contains the 20 problems used in the first 10 years of the contest and a winning paper for each year. That volume and the special MCM issues of the Journal for the last few years are available from COMAP. The 1994 volume is also available on COMAP's special Modeling Resource CD-ROM. Also available is The MCM at 21 CD-ROM, which contains the 20 problems from the second 10 years of the contest, a winning paper from each year, and advice from advisors of Outstanding teams. These CD-ROMs can be ordered from COMAP at http://www.comap.com/product/cdrom/index.html. 
+ +This year's Problem A asked teams to consider the effects on land from the melting of the North Polar ice cap due to the predicted increase in global temperatures. Specifically, teams were asked to model the effects on the coast of Florida due to the melting every 10 years for the next 50 years, with particular attention to large metropolitan areas. Additionally, they were asked to propose appropriate responses to deal with the melting. + +Problem B asked teams to develop an algorithm to construct Sudoku puzzles of varying difficulty. The problem required teams to develop metrics to define a difficulty level. Further, the team's algorithm and metrics were to be extensible to a varying number of difficulty levels, and they should illustrate their algorithm with at least four difficulty levels. The team's solution had to analyze the complexity of their algorithm. + +The 9 Outstanding solution papers are published in this issue of The UMAP Journal, along with relevant commentaries. + +In addition to the MCM, COMAP also sponsors the Interdisciplinary Contest in Modeling (ICM) and the High School Mathematical Contest in Modeling (HiMCM). The ICM runs concurrently with MCM and offers a modeling problem involving concepts in operations research, information science, and interdisciplinary issues in security and safety. The 2009 problem will have an environmental science theme. Results of this year's ICM are on the COMAP Website at http://www.comap.com/undergraduate/contests; results and Outstanding papers appeared in Vol. 29 (2008), No. 2. The HiMCM offers high school students a modeling opportunity similar to the MCM. Further details about the HiMCM are at http://www.comap.com/highschool/contests. + +# Problem A: Take a Bath + +Consider the effects on land from the melting of the North Polar ice cap due to the predicted increase in global temperatures. 
Specifically, model the effects on the coast of Florida every 10 years for the next 50 years due to the melting, with particular attention given to large metropolitan areas. Propose appropriate responses to deal with this. A careful discussion of the data used is an important part of the answer. + +# Problem B: Creating Sudoku Puzzles + +Develop an algorithm to construct Sudoku puzzles of varying difficulty. Develop metrics to define a difficulty level. The algorithm and metrics should be extensible to a varying number of difficulty levels. You should illustrate the algorithm with at least 4 difficulty levels. Your algorithm should guarantee a unique solution. Analyze the complexity of your algorithm. Your objective should be to minimize the complexity of the algorithm and meet the above requirements. + +# The Results + +The solution papers were coded at COMAP headquarters so that names and affiliations of the authors would be unknown to the judges. Each paper was then read preliminarily by two "triage" judges at either Appalachian State University (Polar Melt Problem) or at the National Security Agency (Sudoku Problem). At the triage stage, the summary and overall organization are the basis for judging a paper. If the judges' scores diverged for a paper, the judges conferred; if they still did not agree, a third judge evaluated the paper. + +Additional Regional Judging sites were created at the U.S. Military Academy and at the Naval Postgraduate School to support the growing number of contest submissions. + +Final judging took place at the Naval Postgraduate School, Monterey, CA. The judges classified the papers as follows: + +
| | Outstanding | Meritorious | Honorable Mention | Successful Participation | Total |
|---|---:|---:|---:|---:|---:|
| Polar Melt Problem | 4 | 64 | 182 | 315 | 565 |
| Sudoku Problem | 5 | 95 | 296 | 198 | 594 |
| Total | 9 | 159 | 478 | 513 | 1,159 |
The 9 papers that the judges designated as Outstanding appear in this special issue of The UMAP Journal, together with commentaries. We list those teams and the Meritorious teams (and advisors) below; the list of all participating schools, advisors, and results is in the Appendix.

# Outstanding Teams

Institution and Advisor

Team Members

# Polar Melt Papers

"The Impending Effects of North Polar Ice Cap Melt"

College of Idaho

Caldwell, ID

Michael P. Hitchman

Benjamin Coate

Nelson Gross

Megan Longo

"A Convenient Truth: Forecasting Sea Level Rise"

Duke University

Durham, NC

Scott McKinley

Jason Chen

Brian Choi

Joonhahn Cho

"Fighting the Waves: The Effect of North Polar Ice Cap Melt on Florida"

University at Buffalo

Buffalo, NY

John Ringland

Amy M. Evans

Tracy L. Stepien

"Erosion in Florida: A Shore Thing"

University of Delaware

Newark, DE

Louis Frank Rossi

Matt Thies

Bob Liu

Zachary W. Ulissi

# Sudoku Papers

"A Difficulty Metric and Puzzle Generator for Sudoku"

Harvard University

Cambridge, MA

Clifford H. Taubes

Christopher Chang

Zhou Fan

Yi Sun

"Taking the Mystery out of Sudoku Difficulty: An Oracular Model"

Harvey Mudd College

Claremont, CA

Jon Jacobsen

Sarah Fletcher

Frederick Johnson

David R. Morrison

"Difficulty-Driven Sudoku Puzzle Generation"

Harvey Mudd College

Claremont, CA

Zach Dodds

Martin Hunt

Christopher Pong

George Tucker

"Ease and Toil: Analyzing Sudoku"

University of Alaska Fairbanks

Fairbanks, AK

Orion S. Lawlor

Seth B. Chadwick

Rachel M. Krieg

Christopher E. Granade

"Cracking the Sudoku: A Deterministic Approach"

Youngstown State University

Youngstown, OH

George T. Yates

David Martin

Erica Cross

Matt Alexander

# Meritorious Teams

Polar Melt Problem (65 teams)

Ann Arbor Huron High School, Mathematics, Ann Arbor, MI (Peter A.
Collins)

Beihang University, Beijing, China (HongYing Liu)

Beijing Normal University, Beijing, China (Li Cui)

Beijing Normal University, Beijing, China (Laifu Liu)

Beijing University of Posts & Telecommunications, Electronic Engineering, Beijing, China (Zuguo He)

Beijing University of Posts & Telecommunications, Applied Mathematics, Beijing, China (Hongxiang Sun)

Central South University, Mechanical Design and Manufacturing Automation, Changsha, Hunan, China (Xinge Liu)

Central University of Finance and Economics, Applied Mathematics, Beijing, China (Donghong Li)

China University of Mining and Technology, Beijing, China (Lei Zhang) (two teams)

China University of Petroleum (Beijing), Beijing, China (Ling Zhao)

China University of Petroleum (East China), Qingdao, Shandong, China (Ziting Wang)

Chongqing University, Applied Chemistry, Chongqing, China (Zhiliang Li)

College of Charleston, Charleston, SC (Amy Langville)

Concordia College-New York, Bronxville, NY (Karen Bucher)

Dalian University of Technology, Software, Dalian, Liaoning, China (Zhe Li)

Donghua University, Shanghai, China (Liangjian Hu)

Duke University, Durham, NC (Mark Huber)

East China University of Science and Technology, Physics, Shanghai, China (Lu Hong)

Gannon University, Mathematics, Erie, PA (Jennifer A. Gorman)

Hangzhou Dianzi University, Information and Mathematics Science, Hangzhou, Zhejiang, China (Wei Li)

Harbin Institute of Technology Shi'an School, Mathematics, Harbin, Heilongjiang, China (Yunfei Zhang)

Hiram College, Hiram, OH (Brad S.
Gubser)

McGill University, Mathematics and Statistics, Montreal, Quebec, Canada (Nilima Nigam)

Nankai University, Management Science and Engineering, Tianjin, China (Wenhua Hou)

National University of Defense Technology, Mathematics and Systems Science, Changsha, Hunan, China (Xiaojun Duan)

National University of Defense Technology, Mathematics and Systems Science, Changsha, Hunan, China (Yi Wu)

National University of Ireland, Galway, Galway, Ireland (Niall Madden)

National University of Ireland, Galway, Mathematical Physics, Galway, Ireland (Petri T. Piroinen)

Ningbo Institute of Technology of Zhejiang University, Ningbo, China (Lihui Tu)

Northwestern Polytechnical University, Applied Physics, Xi'an, Shaanxi, China (Lei Youming)

Northwestern Polytechnical University, Applied Chemistry, Xi'an, Shaanxi, China (Sun Zhongkui)

Northwestern Polytechnical University, Natural and Applied Science, Xi'an, Shaanxi, China (Zhao Junfeng)

Oregon State University, Corvallis, OR (Nathan L. Gibson)

Pacific University, Physics, Forest Grove, OR (Juliet Brosing)

Peking University, Beijing, China (Sharon Lynne Murrel)

Providence College, Providence, RI (Jeffrey T. Hoag)

Rensselaer Polytechnic Institute, Troy, NY (Peter R. Kramer)

University of Electronic Science and Technology of China, Applied Mathematics, Chengdu, Sichuan, China (Li Mingqi)

Shanghai Foreign Language School, Computer Science, Shanghai, China (Yue Sun)

Shanghai University of Finance & Economics, Applied Mathematics, Shanghai, China (Zhenyu Zhang)

Sichuan University, Electrical Engineering and Information, Chengdu, Sichuan, China (Yingyi Tan)

Slippery Rock University, Slippery Rock, PA (Richard J.
Marchand)

South China Agricultural University, Guangzhou, Guangdong, China (ShaoMei Fang)

South China University of Technology, Guangzhou, Guangdong, China (Qin YongAn)

Sun Yat-Sen (Zhongshan) University, Guangzhou, Guangdong, China (GuoCan Feng)

Tsinghua University, Beijing, China (Jun Ye)

Tsinghua University, Beijing, China (Zhiming Hu)

Union College, Schenectady, NY (Jue Wang)

U.S. Military Academy, West Point, NY (Edward Swim)

University College Cork, Cork, Ireland (Benjamin W. McKay)

University College Cork, Cork, Ireland (Liya A. Zhornitskaya)

University of Guangxi, Mathematics & Information Science, Nanning, Guangxi, China (Ruxue Wu)

University of Guangxi, Mathematics & Information Science, Nanning, Guangxi, China (Zhongxing Wang)

University of Science and Technology Beijing, Beijing, China (Hu Zhixing)

University of Technology Jamaica, Chemical Engineering, Kingston, Jamaica, West Indies (Nilza G. Justiz-Smith)

Worcester Polytechnic Institute, Worcester, MA (Suzanne L.
Weekes)

Wuhan University, Wuhan, Hubei, China (Yuanming Hu)

Xi'an Jiaotong University, Xi'an, Shaanxi, China (Jing Gao)

Xi'an Jiaotong University, Center for Mathematics Teaching and Experiment, Xi'an, Shaanxi, China (Xiaoe Ruan)

Xuzhou Institute of Technology, Xuzhou, Jiangsu, China (Li Subei)

York University, Mathematics and Statistics, Toronto, ON, Canada (Hongmei Zhu)

Yunnan University, Computer Science, Kunming, China (Shunfang Wang)

Zhejiang University, Hangzhou, Zhejiang, China (Zhiyi Tan)

Zhuhai College of Jinan University, Computer Science, Zhuhai, Guangdong, China (Zhang YunBiu)

Sudoku Problem (96 teams)

Beihang University, Beijing, China (Sun Hai Yan)

Beijing Institute of Technology, Beijing, China (Guifeng Yan)

Beijing Institute of Technology, Beijing, China (Houbao Xu)

Beijing Normal University, Beijing, China (Laifu Liu)

Beijing University of Posts & Telecommunications, Electronics Information Engineering, Beijing, China (Jianhua Yuan)

Bethel University, Arden Hills, MN (Nathan M. Gossett)

Cal Poly San Luis Obispo, San Luis Obispo, CA (Lawrence Sze)

Carroll College, Chemistry, Helena, MT (John C. Salzsieder)

Cheshire Academy, Cheshire, CT (Susan M. Eident)

Clarkson University, Computer Science, Potsdam, NY (Katie Fowler)

College of Wooster, Wooster, OH (John R. Ramsay)

Dalian Maritime University, Dalian, Liaoning, China (Naxin Chen)

Dalian University of Technology, Software School, Dalian, Liaoning, China (Zhe Li) (two teams)

Daqing Petroleum Institute, Daqing, Heilongjiang, China (Kong Lingbin)

Daqing Petroleum Institute, Daqing, Heilongjiang, China (Yang Yunfeng)

Davidson College, Davidson, NC (Richard D.
Neidinger) (two teams)

East China Normal University, Shanghai, China (Yongming Liu)

East China University of Science and Technology, Shanghai, China (Su Chunjie)

Hangzhou Dianzi University, Information and Mathematics Science, Hangzhou, Zhejiang, China (Zheyong Qiu)

Harbin Institute of Technology, School of Astronautics, Management Science, Harbin, Heilongjiang, China (Bing Wen)

Harbin Institute of Technology, School of Science, Mathematics, Harbin, Heilongjiang, China (Yong Wang)

Harvey Mudd College, Computer Science, Claremont, CA (Zach Dodds)

Humboldt State University, Environmental Resources Engineering, Arcata, CA (Brad Finney)

James Madison University, Harrisonburg, VA (David B. Walton)

Jilin University, Changchun, Jilin, China (Huang Qingdao)

Jilin University, Changchun, Jilin, China (Xianrui Lu)

Korea Advanced Institute of Science & Technology, Daejeon, Korea (Yong-Jung Kim)

Luther College, Computer Science, Decorah, IA (Steven A. Hubbard)

Nanjing Normal University, Computer Science, Nanjing, Jiangsu, China (Wang Qiong)

Nanjing University, Nanjing, Jiangsu, China (Ze-Chun Hu)

Nanjing University of Posts & Telecommunications, Nanjing, Jiangsu, China (Jin Xu)

Nanjing University of Posts & Telecommunications, Nanjing, Jiangsu, China (Jun Ye)

National University of Defense Technology, Mathematics and Systems Science, Changsha, Hunan, China (Dan Wang)

National University of Defense Technology, Mathematics and Systems Science, Changsha, Hunan, China (Meihua Xie)

National University of Defense Technology, Mathematics and Systems Science, Changsha, Hunan, China (Yong Luo)

Naval Aeronautical Engineering Academy (Qingdao), Machinery, Qingdao, Shandong, China (Cao Hua Lin)

North Carolina School of Science and Mathematics, Durham, NC (Daniel J.
Teague)

Northwestern Polytechnical University, Xi'an, Shaanxi, China (Xiao Huayong)

Northwestern Polytechnical University, Xi'an, Shaanxi, China (Yong Xu)

Northwestern Polytechnical University, Xi'an, Shaanxi, China (Zhou Min)

Oxford University, Oxford, United Kingdom (Jeffrey H. Giansiracusa) (two teams)

Päivölä College of Mathematics, Tarttila, Finland (Janne Puustelli)

Peking University, Beijing, China (Xin Yi)

Peking University, Beijing, China (Xufeng Liu)

Peking University, Beijing, China (Yulong Liu)

Peking University, Financial Mathematics, Beijing, China (Shanjun Lin)

PLA University of Science and Technology, Meteorology, Nanjing, Jiangsu, China (Shen Jinren)

Princeton University, Operations Research and Financial Engineering, Princeton, NJ (Warren B. Powell)

Princeton University, Princeton, NJ (Robert Calderbank)

Renmin University of China, Finance, Beijing, China (Gao Jinwu)

Rensselaer Polytechnic Institute, Troy, NY (Donald Drew)

Shandong University, Software, Jinan, Shandong, China (Xiangxu Meng)

Shandong University, Mathematics & System Sciences, Jinan, Shandong, China (Bao Dong Liu)

Shandong University, Mathematics & System Sciences, Jinan, Shandong, China (Xiao Xia Rong)

Shandong University at Weihai, Weihai, Shandong, China (Yang Bing and Song Hui Min)

Shandong University at Weihai, Weihai, Shandong, China (Cao Zhulou and Xiao Hua)

Shanghai Foreign Language School, Shanghai, China (Liang Tao)

Shanghai Foreign Language School, Shanghai, China (Feng Xu)

Shanghai Sino European School of Technology, Shanghai, China (Wei Huang)

Shanghai University of Finance and Economics, Shanghai, China (Wenqiang Hao)

Shijiazhuang Railway Institute, Engineering Mechanics, Shijiazhuang, Hebei, China (Baocai Zhang)

Sichuan University, Chengdu, China (Qiong Chen)

Slippery Rock University, Physics, Slippery Rock, PA (Athula R. Herat)

South China Normal University, Science of Information and
Computation, Guangzhou, Guangdong, China (Tan Yang)

South China University of Technology, Guangzhou, Guangdong, China (Liang ManFa)

South China University of Technology, Guangzhou, Guangdong, China (Liang ManFa)

South China University of Technology, Guangzhou, Guangdong, China (Qin YongAn)

Southwest University, Chongqing, China (Lei Deng)

Southwest University, Chongqing, China (Xianning Liu)

Southwestern University of Finance and Economics, Economics and Mathematics, Chengdu, Sichuan, China (Dai Dai)

Sun Yat-Sen (Zhongshan) University, Guangzhou, Guangdong, China (XiaoLong Jiang)

Tsinghua University, Beijing, China (Jun Ye)

University of California-Davis, Davis, CA (Eva M. Strawbridge)

University of Colorado-Boulder, Boulder, CO (Anne M. Dougherty)

University of Colorado-Boulder, Boulder, CO (Luis Melara)

University of Delaware, Newark, DE (Louis Frank Rossi)

University of Iowa, Iowa City, IA (Ian Besse)

University of New South Wales, Sydney, NSW, Australia (James W. Franklin)

University of Puget Sound, Tacoma, WA (Michael Z. Spivey)

University of Science and Technology Beijing, Computer Science and Technology, Beijing, China (Zhaoshun Wang)

University of Washington, Applied and Computational Mathematical Sciences, Seattle, WA (Anne Greenbaum)

University of Western Ontario, London, ON, Canada (Allan B. MacIsaac)

University of Wisconsin-La Crosse, La Crosse, WI (Barbara Bennie)

University of Wisconsin-River Falls, River Falls, WI (Kathy A.
Tomlinson) + +Wuhan University, Wuhan, Hubei, China (Liuyi Zhong) + +Wuhan University, Wuhan, Hubei, China (Yuanming Hu) + +Xi'an Communication Institute, Xi'an, Shaanxi, China (Xinshe Qi) + +Xidian University, Xi'an, Shaanxi, China (Guoping Yang) + +Xidian University, Xi'an, Shaanxi, China (Jimin Ye) + +Xidian University, Industrial and Applied Mathematics, Xi'an, Shaanxi, China (Qiang Zhu) + +Zhejiang University, Hangzhou, Zhejiang, China (Yong Wu) + +Zhejiang University City College, Information and Computing Science, Hangzhou, Zhejiang, China (Gui Wang) + +Zhejiang University of Finance and Economics, Hangzhou, Zhejiang, China (Ji Luo) + +# Awards and Contributions + +Each participating MCM advisor and team member received a certificate signed by the Contest Director and the appropriate Head Judge. + +INFORMS, the Institute for Operations Research and the Management Sciences, recognized the teams from the College of Idaho (Polar Melt Problem) and University of Alaska Fairbanks (Sudoku Problem) as INFORMS Outstanding teams and provided the following recognition: + +- a letter of congratulations from the current president of INFORMS to each team member and to the faculty advisor; +- a check in the amount of $300 to each team member; +- a bronze plaque for display at the team's institution, commemorating their achievement; +- individual certificates for team members and faculty advisor as a personal commemoration of this achievement; +- a one-year student membership in INFORMS for each team member, which includes their choice of a professional journal plus the OR/MS Today periodical and the INFORMS society newsletter. + +The Society for Industrial and Applied Mathematics (SIAM) designated one Outstanding team from each problem as a SIAM Winner. The teams were from the University at Buffalo (Polar Melt Problem) and Harvard University (Sudoku Problem). 
Each of the team members was awarded a $300 cash prize, and the teams received partial expenses to present their results in a special Minisymposium at the SIAM Annual Meeting in San Diego, CA in July. Their schools were given a framed hand-lettered certificate in gold leaf.

The Mathematical Association of America (MAA) designated one Outstanding North American team from each problem as an MAA Winner. The teams were from Duke University (Polar Melt Problem) and the Harvey Mudd College team (Hunt, Pong, and Tucker; advisor Dodds) (Sudoku Problem). With partial travel support from the MAA, the Duke University team presented their solution at a special session of the MAA Mathfest in Madison, WI in August. Each team member was presented a certificate by Richard S. Neal of the MAA Committee on Undergraduate Student Activities and Chapters.

# Ben Fusaro Award

One Meritorious or Outstanding paper was selected for each problem for the Ben Fusaro Award, named for the Founding Director of the MCM and awarded for the fifth time this year. It recognizes an especially creative approach; details concerning the award, its judging, and Ben Fusaro are in Vol. 25 (3) (2004): 195-196. The Ben Fusaro Award winners were the University at Buffalo (Polar Melt Problem) and the University of Puget Sound (Sudoku Problem).

# Judging

Director

Frank R. Giordano, Naval Postgraduate School, Monterey, CA

Associate Director

William P. Fox, Dept. of Defense Analysis, Naval Postgraduate School, Monterey, CA

# Polar Melt Problem

Head Judge

Marvin S. Keener, Executive Vice-President, Oklahoma State University, Stillwater, OK

Associate Judges

William C. Bauldry, Chair, Dept. of Mathematical Sciences, Appalachian State University, Boone, NC (Head Triage Judge)

Patrick J. Driscoll, Dept. of Systems Engineering, U.S. Military Academy, West Point, NY

Ben Fusaro, Dept.
of Mathematics, Florida State University, Tallahassee, FL (SIAM Judge)

Jerry Griggs, Mathematics Dept., University of South Carolina, Columbia, SC (Problem Author)

Mario Juncosa, RAND Corporation, Santa Monica, CA (retired)

Michael Moody, Olin College of Engineering, Needham, MA (MAA Judge)

David H. Olwell, Naval Postgraduate School, Monterey, CA (INFORMS Judge)

John L. Scharf, Mathematics Dept., Carroll College, Helena, MT (Ben Fusaro Award Judge)

# Sudoku Problem

Head Judge

Maynard Thompson, Mathematics Dept., Indiana University, Bloomington, IN

Associate Judges

Peter Anspach, National Security Agency, Ft. Meade, MD (Head Triage Judge)

Kelly Black, Mathematics Dept., Union College, Schenectady, NY

Karen D. Bolinger, Mathematics Dept., Clarion University of Pennsylvania, Clarion, PA

Jim Case (SIAM Judge)

Veena Mendiratta, Lucent Technologies, Naperville, IL (Problem Author)

Peter Olsen, Johns Hopkins Applied Physics Laboratory, Baltimore, MD

Kathleen M. Shannon, Dept. of Mathematics and Computer Science, Salisbury University, Salisbury, MD (MAA Judge)

Dan Solow, Mathematics Dept., Case Western Reserve University, Cleveland, OH (INFORMS Judge)

Michael Tortorella, Dept. of Industrial and Systems Engineering, Rutgers University, Piscataway, NJ

Marie Vanisko, Dept. of Mathematics, Carroll College, Helena, MT (Ben Fusaro Award Judge)

Richard Douglas West, Francis Marion University, Florence, SC

Dan Zwillinger, Raytheon Company, Sudbury, MA

# Regional Judging Session at U.S. Military Academy

Head Judge

Patrick J. Driscoll, Dept. of Systems Engineering, United States Military Academy (USMA), West Point, NY

Associate Judges

Tim Elkins, Dept. of Systems Engineering, USMA

Michael Jaye, Dept. of Mathematical Sciences, USMA

Tom Meyer, Dept. of Mathematical Sciences, USMA

Steve Henderson, Dept.
of Systems Engineering, USMA

# Regional Judging Session at Naval Postgraduate School

Head Judge

William P. Fox, Dept. of Defense Analysis, Naval Postgraduate School (NPS), Monterey, CA

Associate Judges

William Fox, NPS

Frank Giordano, NPS

# Triage Session for Polar Melt Problem

Head Triage Judge

William C. Bauldry, Chair, Dept. of Mathematical Sciences, Appalachian State University, Boone, NC

Associate Judges

Jeff Hirst, Rick Klima, and René Salinas

—all from Dept. of Mathematical Sciences, Appalachian State University, Boone, NC

# Triage Session for Sudoku Problem

Head Triage Judge

Peter Anspach, National Security Agency (NSA), Ft. Meade, MD

Associate Judges

Other judges from inside and outside NSA, who wish not to be named.

# Sources of the Problems

The Polar Melt Problem was contributed by Jerry Griggs (Mathematics Dept., University of South Carolina, Columbia, SC), and the Sudoku Problem by Veena Mendiratta (Lucent Technologies, Naperville, IL).

# Acknowledgments

Major funding for the MCM is provided by the National Security Agency (NSA) and by COMAP. We thank Dr. Gene Berg of NSA for his coordinating efforts. Additional support is provided by the Institute for Operations Research and the Management Sciences (INFORMS), the Society for Industrial and Applied Mathematics (SIAM), and the Mathematical Association of America (MAA). We are indebted to these organizations for providing judges and prizes.

We also thank the MCM judges and MCM Board members for their valuable and unflagging efforts, as well as

- Two Sigma Investments. (This group of experienced, analytical, and technical financial professionals based in New York builds and operates sophisticated quantitative trading strategies for domestic and international markets. The firm is successfully managing several billion dollars using highly automated trading technologies.
For more information about Two Sigma, please visit http://www.twosigma.com.)

# Cautions

To the reader of research journals:

Usually a published paper has been presented to an audience, shown to colleagues, rewritten, checked by referees, revised, and edited by a journal editor. Each paper here is the result of undergraduates working on a problem over a weekend. Editing (and usually substantial cutting) has taken place; minor errors have been corrected, wording altered for clarity or economy, and style adjusted to that of The UMAP Journal. The student authors have proofed the results. Please peruse their efforts in that context.

To the potential MCM Advisor:

It might be overpowering to encounter such output from a weekend of work by a small team of undergraduates, but these solution papers are highly atypical. A team that prepares and participates will have an enriching learning experience, independent of what any other team does.

COMAP's Mathematical Contest in Modeling and Interdisciplinary Contest in Modeling are the only international modeling contests in which students work in teams. Centering its educational philosophy on mathematical modeling, COMAP uses mathematical tools to explore real-world problems. It serves the educational community as well as the world of work by preparing students to become better-informed and better-prepared citizens.

# Appendix: Successful Participants

KEY:

$\mathrm{P} =$ Successful Participation
$\mathrm{H} =$ Honorable Mention
$\mathrm{M} =$ Meritorious
$\mathrm{O} =$ Outstanding (published in this special issue)

INSTITUTION, DEPT., CITY, ADVISOR, PROBLEM (A = Polar Melt, B = Sudoku), RESULT
ALASKA
U. Alaska FairbanksCSFairbanksOrion S. LawlorBH
U. Alaska FairbanksCSFairbanksOrion S. LawlorBO
ARIZONA
Northern Arizona U.Math & StatsFlagstaffTerence R. BlowsAH
CALIFORNIA
Cal Poly San Luis ObispoMathSan Luis ObispoLawrence SzeBM
Cal Poly San Luis ObispoMathSan Luis ObispoLawrence SzeBH
California State Poly. U.PhysicsPomonaKurt VandervoortBP
California State Poly. U.Math & StatsPomonaJoe LatulippeBP
Calif. State U. at Monterey BayMath & StatsSeasideHongde HuAH
Calif. State U. at Monterey BayMath & StatsSeasideHongde HuAH
Calif. State U. NorthridgeMathNorthridgeGholam-Ali ZakeriBP
Cal-Poly PomonaMathPomonaHubertus F. von BremenAH
Cal-Poly PomonaPhysicsPomonaNina AbramzonBP
Harvey Mudd C.MathClaremontJon JacobsenAH
Harvey Mudd C.MathClaremontJon JacobsenBO
Harvey Mudd C.CSClaremontZach DoddsBM
Harvey Mudd C.CSClaremontZach DoddsBO
Humboldt State U.Env'l Res. Eng.ArcataBrad FinneyAH
Humboldt State U.Env'l Res. Eng.ArcataBrad FinneyBM
Irvine Valley C.MathIrvineJack ApplemanAP
Pomona C.MathClaremontAmi E. RadunskayaAH
Saddleback C.MathMission ViejoKarla WestphalAP
U. of California DavisMathDavisEva M. StrawbridgeAP
U. of California DavisMathDavisEva M. StrawbridgeBM
U. of California MercedNatural Sci.MercedArnold D. KimBH
U. of San DiegoMathSan DiegoCameron C. ParkerAP
U. of San DiegoMathSan DiegoCameron C. ParkerBH
COLORADO
U. of Colorado - BoulderAppl. Math.BoulderAnne M. DoughertyAH
U. of Colorado - BoulderAppl. Math.BoulderBengt FornbergAH
U. of Colorado - BoulderAppl. Math.BoulderAnne DoughertyBM
U. of Colorado - BoulderAppl. Math.BoulderBengt FornbergBH
U. of Colorado - BoulderAppl. Math.BoulderLuis MelaraBM
U. of Colorado DenverMathDenverGary A. OlsonAP
CONNECTICUT
Cheshire Acad.MathCheshireSusan M. EidentBM
Connecticut C.MathNew LondonSanjeeva BalasuriyaAP
Sacred Heart U.MathFairfieldPeter LothBP
Southern Connecticut State U.MathNew HavenRoss B. GingrichAH
Southern Connecticut State U.MathNew HavenRoss B. GingrichBH
DELAWARE
U. of DelawareMath Sci.NewarkLouis Frank RossiAO
U. of DelawareMath Sci.NewarkJohn A. PeleskoBP
U. of DelawareMath Sci.NewarkLouis RossiBM
FLORIDA
Bethune-Cookman U.MathDaytona BeachDeborah JonesAP
Jacksonville U.MathJacksonvilleRobert A. HollisterAH
GEORGIA
Georgia Southern U.Math Sci.StatesboroGoran LesajaAP
Georgia Southern U.Math Sci.StatesboroGoran LesajaBH
U. of West GeorgiaMathCarrolltonScott GordonAH
IDAHO
C. of IdahoMath/Phys. Sci.CaldwellMichael P. HitchmanAO
ILLINOIS
Greenville C.MathGreenvilleGeorge R. PetersAP
INDIANA
Goshen C.MathGoshenPatricia A. OakleyBH
Rose-Hulman Inst. of Tech.ChemistryTerre HauteMichael MuellerBH
Rose-Hulman Inst. of Tech.ChemistryTerre HauteMichael MuellerBH
Rose-Hulman Inst. of Tech.MathTerre HauteWilliam S. GalinaitisAH
Rose-Hulman Inst. of Tech.MathTerre HauteWilliam S. GalinaitisBP
Saint Mary's C.MathNotre DameNatalie K. DomelleAH
Saint Mary's C.MathNotre DameNatalie K. DomelleBH
IOWA
Coe C.Math Sci.Cedar RapidsCalvin R. Van NiewaalBH
Grand View C.Math & CSDes MoinesSergio LochAH
Grand View C.Math & CSDes MoinesSergio LochAH
Grinnell C.Math & StatsGrinnellKaren L. ShumanAH
Luther C.CSDecorahSteven A. HubbardBH
Luther C.CSDecorahSteven A. HubbardBM
Luther C.MathDecorahReginald D. LaursenBH
Luther C.MathDecorahReginald D. LaursenBH
Simpson C.Comp. Sci.IndianolaPaul CravenAH
Simpson C.Comp. Sci.IndianolaPaul CravenAP
Simpson C.MathIndianolaWilliam SchellhornAH
Simpson C.MathIndianolaDebra CzarneskiAP
Simpson C.MathIndianolaRick SpellerbergAP
Simpson C.PhysicsIndianolaDavid OlsgaardAP
Simpson C.MathIndianolaMurphy WaggonerBP
Simpson C.MathIndianolaMurphy WaggonerBH
U. of IowaMathIowa CityBenjamin J. GalluzzoAH
U. of IowaMathIowa CityKevin MurphyAH
U. of IowaMathIowa CityIan BesseBM
U. of IowaMathIowa CityScott SmallBH
U. of IowaMathIowa CityBenjamin GalluzzoBH
KANSAS
Kansas State U.MathManhattanDavid R. AucklyBH
Kansas State U.MathManhattanDavid R. AucklyBH
KENTUCKY
Asbury C.Math & CSWilmoreDavid L. CoullietteAH
Asbury C.Math & CSWilmoreDavid L. CoullietteBH
Morehead State U.Math & CSMoreheadMichael DobranskiBP
Northern Kentucky U.MathHighland HeightsLisa Joan HoldenAH
Northern Kentucky U.MathHighland HeightsLisa HoldenBP
Northern Kentucky U.Phys. & Geo.Highland HeightsSharmanthie FernandoAP
LOUISIANA
Centenary C.Math & CSShreveportMark H. GoadrichBP
Centenary C.Math & CSShreveportMark H. GoadrichBH
MAINE
Colby C.MathWatervilleJan HollyAP
MARYLAND
Hood C.MathFrederickBetty MayfieldAP
Loyola C.Math Sci.BaltimoreJiyuan TaoAH
Loyola C.Math Sci.BaltimoreJiyuan TaoBH
Mount St. Mary's U.MathEmmitsburgFred PortierBP
Salisbury U.Math & CSSalisburyTroy V. BanksBP
Villa Julie C.MathStevensonEileen C. McGrawAH
Washington C.Math & CSChestertownEugene P. HamiltonAP
MASSACHUSETTS
Bard C./Simon's RockMathGreat BarringtonAllen B. AltmanAP
Bard C./Simon's RockMathGreat BarringtonAllen AltmanBP
Bard C./Simon's RockPhysicsGreat BarringtonMichael BergmanAP
Harvard U.MathCambridgeClifford H. TaubesBO
Harvard U.MathCambridgeClifford H. TaubesBH
U. of Mass. LowellMath Sci.LowellJames Graham-EagleBP
Worcester Poly. Inst.Math Sci.WorcesterSuzanne L. WeekesAM
Worcester Poly. Inst.Math Sci.WorcesterSuzanne L. WeekesAH
MICHIGAN
Ann Arbor Huron HSMathAnn ArborPeter A. CollinsAM
Lawrence Tech. U.Math & CSSouthfieldRuth G. FavroAH
Lawrence Tech. U.Math & CSSouthfieldGuang-Chong ZhuAP
Lawrence Tech. U.Math & CSSouthfieldGuang-Chong ZhuAP
Lawrence Tech. U.Math & CSSouthfieldRuth FavroBH
Siena Heights U.MathAdrianJeff C. KallenbachAP
Siena Heights U.MathAdrianTim H. HusbandAP
Siena Heights U.MathAdrianTim H. HusbandBP
MINNESOTA
Bethel U.Math & CSArden HillsNathan M. GossettBM
Carleton C.MathNorthfieldLaura M. ChiharaAH
Northwestern C.Sci. & Math.St. PaulJonathan A. ZderadAP
MISSOURI
Drury U.Math & CSSpringfieldKeith James CoatesAH
Drury U.Math & CSSpringfieldKeith James CoatesAP
Drury U.PhysicsSpringfieldBruce W. CallenAP
Drury U.PhysicsSpringfieldBruce W. CallenAH
Saint Louis U.Math & CSSt. LouisDavid A. JacksonBH
Saint Louis U.Eng., Aviation & Tech.St. LouisManoj S. PatankarAH
Truman State U.Math & CSKirksvilleSteve Jay SmithBH
U. of Central MissouriMath & CSWarrensburgNicholas R. BaethAP
U. of Central MissouriMath & CSWarrensburgNicholas R. BaethBP
MONTANA
Carroll C.ChemistryHelenaJohn C. SalzsiederBM
Carroll C.ChemistryHelenaJohn C. SalzsiederAP
Carroll C.Math., Eng., & CSHelenaHolly S. ZulloBH
Carroll C.Math., Eng., & CSHelenaMark ParkerAH
NEBRASKA
Nebraska Wesleyan U.Math & CSLincolnMelissa Claire ErdmannAP
Wayne State C.MathWayneTim HardyAP
NEW JERSEY
Princeton U.MathPrincetonRobert CalderbankBM
Princeton U.OR & Fin. Eng.PrincetonRobert J. VanderbeiBH
Princeton U.OR & Fin. Eng.PrincetonRobert J. VanderbeiBH
Princeton U.OR & Fin. Eng.PrincetonWarren B. PowellBP
Princeton U.OR & Fin. Eng.PrincetonWarren B. PowellBM
Richard Stockton C.MathPomonaBrandy L. RapatskiAH
Rowan U.MathGlassboroPaul J. LaumakisBP
Rowan U.MathGlassboroChristopher Jay LackeBH
NEW MEXICO
NM Inst. Mining & Tech.MathSocorroJohn D. StarrettBP
New Mexico State U.Math Sci.Las CrucesCaroline P. SweezyAP
NEW YORK
Clarkson U.Comp. Sci.PotsdamKatie FowlerBH
Clarkson U.Comp. Sci.PotsdamKatie FowlerBM
Clarkson U.MathPotsdamJoseph D. SkufcaAH
Clarkson U.MathPotsdamJoseph D. SkufcaBP
Colgate U.MathHamiltonDan SchultBH
Concordia C.Bio. Chem. Math.BronxvilleKaren BucherAM
Concordia C.MathBronxvilleJohn F. LoaseAH
Concordia C.MathBronxvilleJohn F. LoaseBH
Cornell U.MathIthacaAlexander VladimirskyBH
Cornell U.OR & Ind'l Eng.IthacaEric FriedmanBH
Ithaca C.MathIthacaJohn C. MaceliBH
Ithaca C.PhysicsIthacaBruce G. ThompsonBH
Nazareth C.MathRochesterDaniel BirmajerAP
Rensselaer Poly. Inst.Math Sci.TroyPeter R. KramerAM
Rensselaer Poly. Inst.Math Sci.TroyPeter R. KramerBH
Rensselaer Poly. Inst.Math Sci.TroyDonald DrewBM
Rensselaer Poly. Inst.Math Sci.TroyDonald DrewBH
Union C.MathSchenectadyJue WangAM
U.S. Military Acad.Math Sci.West PointEdward SwimAM
U.S. Military Acad.Math Sci.West PointRobert BurksBH
U. at BuffaloMathBuffaloJohn RinglandAO
U. at BuffaloMathBuffaloJohn RinglandBH
Westchester Comm. Coll.MathValhallaMarvin LittmanBP
NORTH CAROLINA
Davidson C.MathDavidsonDonna K. MolinekBH
Davidson C.MathDavidsonDonna K. MolinekBH
Davidson C.MathDavidsonRichard D. NeidingerBM
Davidson C.MathDavidsonRichard D. NeidingerBM
Duke U.MathDurhamScott McKinleyAO
Duke U.MathDurhamMark HuberAM
Duke U.MathDurhamDavid KrainesBH
Duke U.MathDurhamDan LeeBP
Duke U.MathDurhamLenny NgBH
Duke U.MathDurhamBill PardonBH
Meredith C.Math & CSRaleighCammey Cole ManningAH
NC Schl of Sci. & Math.MathDurhamDaniel J. TeagueBM
NC Schl of Sci. & Math.MathDurhamDaniel J. TeagueBH
U. of North CarolinaMathChapel HillSarah A. WilliamsAH
U. of North CarolinaMathChapel HillBrian PikeAH
Wake Forest U.MathWinston SalemMiaohua JiangAH
Western Carolina U.Math & CSCullowheeJeff LawsonAH
Western Carolina U.Math & CSCullowheeErin K. McNelisBH
OHIO
C. of WoosterMath & CSWoosterJohn R. RamsayBM
Hiram C.MathHiramBrad S. GubserAM
Kenyon C.MathGambierDana C. PaquinAH
Malone C.Math & CSCantonDavid W. HahnAH
Malone C.Math & CSCantonDavid W. HahnBH
Miami U.Math & StatsOxfordDoug E. WardAP
Miami U.Math & StatsOxfordDoug E. WardBH
U. of DaytonMathDaytonYoussef N. RaffoulBH
Xavier U.Math & CSCincinnatiBernd E. RossaAH
Xavier U.Math & CSCincinnatiBernd E. RossaBH
Youngstown State U.Math & StatsYoungstownGeorge T. YatesAH
Youngstown State U.Math & StatsYoungstownAngela SpalsburyAH
Youngstown State U.Math & StatsYoungstownAngela SpalsburyAH
Youngstown State U.Math & StatsYoungstownGary J. KernsAH
Youngstown State U.Math & StatsYoungstownPaddy W. TaylorAP
Youngstown State U.Math & StatsYoungstownPaddy W. TaylorAH
Youngstown State U.Math & StatsYoungstownGeorge YatesBO
Youngstown State U.Math & StatsYoungstownGary KernsBH
OKLAHOMA
Oklahoma State U.MathStillwaterLisa A. MantiniBH
SE Okla. State U.MathDurantKarl H. FrinkleAP
OREGON
Lewis & Clark Coll.Math Sci.PortlandLiz StanhopeAP
Linfield C.Comp. Sci.McMinnvilleDaniel K. FordBH
Linfield C.MathMcMinnvilleJennifer NordstromAH
Linfield C.MathMcMinnvilleJennifer NordstromBH
Oregon State U.MathCorvallisNathan L. GibsonAM
Oregon State U.MathCorvallisNathan L. GibsonAP
Oregon State U.MathCorvallisVrushali A. BokilBH
Pacific U.MathForest GroveMichael BoardmanBH
Pacific U.MathForest GroveJohn AugustAH
Pacific U.PhysicsForest GroveJuliet BrosingAM
Pacific U.PhysicsForest GroveSteve HallAP
PENNSYLVANIA
Bloomsburg U.Math, CS, & StatsBloomsburgKevin FerlandAH
Bucknell U.MathLewisburgPeter McNamaraBH
Gannon U.MathErieJennifer A. GormanAM
Gettysburg C.MathGettysburgBenjamin B. KennedyBH
Gettysburg C.MathGettysburgBenjamin B. KennedyBP
Juniata C.MathHuntingdonJohn F. BukowskiAH
Shippensburg U.MathShippensburgPaul T. TaylorAH
Slippery Rock U.MathSlippery RockRichard J. MarchandAM
Slippery Rock U.MathSlippery RockRichard J. MarchandBH
Slippery Rock U.PhysicsSlippery RockAthula R. HeratBM
U. of PittsburghMathPittsburghJonathan RubinBH
Westminster C.Math & CSNew WilmingtonBarbara T. FairesAH
Westminster C.Math & CSNew WilmingtonBarbara T. FairesAH
Westminster C.Math & CSNew WilmingtonWarren D. HickmanBH
Westminster C.Math & CSNew WilmingtonCarolyn K. CuffBP
RHODE ISLAND
Providence C.MathProvidenceJeffrey T. HoagAM
SOUTH CAROLINA
C. of CharlestonMathCharlestonAmy LangvilleAM
C. of CharlestonMathCharlestonAmy LangvilleBH
Columbia C.Math & Comp.ColumbiaNieves A. McNultyBH
Francis Marion U.MathFlorenceDavid W. SzurleyBH
Midlands Technical Coll.MathColumbiaJohn R. LongAH
Midlands Technical Coll.MathColumbiaJohn R. LongBP
Wofford C.Comp. Sci.SpartanburgAngela B. ShifletBH
SOUTH DAKOTA
SD Schl of Mines & Tech.Math & CSRapid CityKyle RileyBP
TENNESSEE
Belmont U.Math & CSNashvilleAndrew J. MillerAH
Tennessee Tech U.MathCookevilleAndrew J. HetzelBH
U. of TennesseeMathKnoxvilleSuzanne LenhartAP
TEXAS
Angelo State U.MathSan AngeloKarl J. HavlakBH
Angelo State U.MathSan AngeloKarl J. HavlakBP
Texas A&M-CommerceMathCommerceLaurene V. FausettAH
Trinity U.MathSan AntonioPeter OlofssonBP
Trinity U.MathSan AntonioDiane SaphireBH
VIRGINIA
James Madison U.Math & StatsHarrisonburgLing XuAP
James Madison U.Math & StatsHarrisonburgDavid B. WaltonBM
Longwood U.Math & CSFarmvilleM. Leigh LunsfordAP
Longwood U.Math & CSFarmvilleM. Leigh LunsfordBH
Maggie Walker Gov. SchlMathRichmondJohn BarnesBH
Mills E. Godwin HSSci. Math Tech.RichmondAnn W. SebrellBH
Mills E. Godwin HSSci. Math Tech.RichmondAnn W. SebrellBP
Roanoke C.Math CS Phys.SalemDavid G. TaylorAP
U. of RichmondMath & CSRichmondKathy W. HokeBH
U. of VirginiaMathCharlottesvilleIrina MitreaBH
U. of VirginiaMathCharlottesvilleTai MelcherBH
Virginia TechMathBlacksburgHenning S. MortveitBH
Virginia WesternMathRoanokeSteve HammerAP
WASHINGTON
Central Washington U.MathEllensburgJames BisgardAP
Heritage U.MathToppenishRichard W. SwearingenBH
Pacific Lutheran U.MathTacomaRachid BenkhaltiAH
Pacific Lutheran U.MathTacomaRachid BenkhaltiBH
Seattle Pacific U.Electr. Eng.SeattleMelani PlettBH
Seattle Pacific U.MathSeattleWai LauBH
Seattle Pacific U.MathSeattleWai LauBH
U. of Puget SoundMathTacomaMichael Z. SpiveyAH
U. of Puget SoundMathTacomaMichael Z. SpiveyBM
U. of WashingtonAppl./Comp'l Math.SeattleAnne GreenbaumBM
U. of WashingtonAppl./Comp'l Math.SeattleAnne GreenbaumAH
U. of WashingtonMathSeattleJames MorrowBH
U. of WashingtonMathSeattleJames Allen MorrowAH
Washington State U.MathPullmanMark F. SchumakerBH
Western Washington U.MathBellinghamTjalling YpmaAH
Western Washington U.MathBellinghamTjalling YpmaAH
WISCONSIN
Beloit C.Math & CSBeloitPaul J. CampbellBH
U. of Wisc.-La CrosseMathLa CrosseBarbara BennieBM
U. of Wisc.-Eau ClaireMathEau ClaireSimei TongBH
U. of Wisc.-River FallsMathRiver FallsKathy A. TomlinsonBM
AUSTRALIA
U. of New South WalesMath & StatsSydneyJames W. FranklinAH
U. of New South WalesMath & StatsSydneyJames W. FranklinBM
U. of S. QueenslandMath & Comp.ToowoombaSergey A. SuslovBH
CANADA
McGill U.Math & StatsMontrealNilima NigamAM
McGill U.Math & StatsMontrealNilima NigamAP
U. Toronto at ScarboroughCS & Math.TorontoPaul S. SelickAH
U. of Western OntarioAppl. Math.LondonAllan B. MacIsaacBM
York U.Math & StatsTorontoHongmei ZhuAM
CHINA
Anhui
Anhui U.Appl. MathHefeiRanchao WuBH
Anhui U.Appl. MathHefeiQuanbing ZhangBH
Anhui U.Electr. Eng.HefeiQuancal GanBH
Anhui U.Electr. Eng.HefeiQuancal GanBH
Hefei U. of Tech.Appl. MathHefeiYongwu ZhouBP
Hefei U. of Tech.Comp'1 MathHefeiYoudu HuangAH
Hefei U. of Tech.MathHefeiXueqiao DuAH
Hefei U. of Tech.MathHefeiHuaming SuBP
Hefei U. of Tech.MathHefeiHuaming SuAP
Hefei U. of Tech.MathHefeiXueqiao DuBP
U. of Sci. & Tech. of ChinaCSHefeiLixin DuanAH
U. of Sci. & Tech. of ChinaElectr. Eng./InfoSci.HefeiXing GongBP
U. of Sci. & Tech. of ChinaGifted YoungHefeiWeining ShenAP
U. of Sci. & Tech. of ChinaModern PhysicsHefeiKai PanAP
U. of Sci. & Tech. of ChinaInfoSci. & Tech.HeFeiDong LiBP
U. of Sci. & Tech. of ChinaPhysicsHefeiZhongmu DengAP
Beijing
Acad. of Armored Force Eng.Funda. CoursesBeijingChen JianhuaBP
Acad. of Armored Force Eng.Funda. CoursesBeijingChen JianhuaBH
Acad. of Armored Force Eng.Mech. Eng.BeijingHan DeAP
Beihang U.Advanced Eng.BeijingWu San XingBH
Beihang U.AstronauticsBeijingSanxing WuBP
Beihang U.AstronauticsBeijingJian MaBP
Beihang U.Sci.BeijingLinping PengAP
Beihang U.Sci.BeijingSun Hai YanBM
Beihang U.Sci.BeijingSun Hai YanBH
Beihang U.Sci.BeijingHongYing LiuAM
Beijing Electr. Sci. & Tech. Inst.Basic EducationBeijingCui MengAP
Beijing Electr. Sci. & Tech. Inst.Basic EducationBeijingCui MengAP
Beijing Forestry U.InfoBeijingJie MaBH
Beijing Forestry U.InfoBeijingXiaochun WangAP
Beijing Forestry U.MathBeijingMengning GaoBP
Beijing Forestry U.MathBeijingLi HongjunAP
Beijing Forestry U.MathBeijingXiaochun WangBH
Beijing Forestry U.Mech. Eng.BeijingZhao DongBH
Beijing Forestry U.Sci.BeijingXiaochun WangAP
Beijing Forestry U.Sci.BeijingMengning GaoAP
Beijing High Schl FourMathBeijingJinli MiaoAP
Beijing High Schl FourMathBeijingJinli MiaoBH
Beijing Inst. of Tech.InfoTech.BeijingHongzhou WangAP
Beijing Inst. of Tech.MathBeijingHoubao XuBP
Beijing Inst. of Tech.MathBeijingHua-Fei SunAH
Beijing Inst. of Tech.MathBeijingBing-Zhao LiAP
Beijing Inst. of Tech.MathBeijingHongzhou WangAP
Beijing Inst. of Tech.MathBeijingChunguang XiongAP
Beijing Inst. of Tech.MathBeijingXuewen LiAH
Beijing Inst. of Tech.MathBeijingXiuling MaAP
Beijing Inst. of Tech.MathBeijingXiaoxia YanBH
Beijing Inst. of Tech.MathBeijingGuifeng YanBM
Beijing Inst. of Tech.MathBeijingHongzhou WangBH
Beijing Inst. of Tech.MathBeijingHoubao XuBM
Beijing Jiaotong U.Appl. MathBeijingJing ZhangAP
Beijing Jiaotong U.MathBeijingWeijia WangAP
Beijing Jiaotong U.MathBeijingHong ZhangAP
Beijing Jiaotong U.MathBeijingFaen WuAP
Beijing Jiaotong U.MathBeijingPengjian ShangAP
Beijing Jiaotong U.MathBeijingXiaoxia WangAP
Beijing Jiaotong U.MathBeijingZhonghao JiangAP
Beijing Jiaotong U.MathBeijingBingli FanAP
Beijing Jiaotong U.MathBeijingBingtuan WangBP
Beijing Jiaotong U.MathBeijingWeijia WangBP
Beijing Jiaotong U.MathBeijingKeqian DongBP
Beijing Jiaotong U.MathBeijingBingli FanBP
Beijing Jiaotong U.MathBeijingShangli ZhangAP
Beijing Jiaotong U.MathBeijingJun WangBH
Beijing Jiaotong U.MathBeijingMinghui LiuAP
Beijing Jiaotong U.MathBeijingXiaoming HuangAH
Beijing Jiaotong U.MathBeijingMinghui LiuBH
Beijing Jiaotong U.StatisticsBeijingWeidong LiBH
Beijing Lang. & Culture U.CSBeijingGuilong LiuBH
Beijing Normal U.GeographyBeijingYongjiu DaiAP
Beijing Normal U.MathBeijingYingzhe WangAH
Beijing Normal U.MathBeijingHe QingAH
Beijing Normal U.MathBeijingLi CuiAM
Beijing Normal U.MathBeijingLaifu LiuBM
Beijing Normal U.MathBeijingLiu YumingAP
Beijing Normal U.MathBeijingLaifu LiuAM
Beijing Normal U.MathBeijingZhengru ZhangBH
Beijing Normal U.MathBeijingHaiyang HuangAP
Beijing Normal U.MathBeijingHaiyang HuangAP
Beijing Normal U.ResourcesBeijingJianjun WuAP
Beijing Normal U.StatsBeijingChun YangAP
Beijing Normal U.Stats & Financial Math.BeijingXingwei TongAP
Beijing Normal U.Stats & Financial Math.BeijingCui HengjianAP
Beijing Normal U.Stats & Financial Math.BeijingJacob KingBH
Beijing Normal U.Stats & Financial Math.BeijingShumei ZhangBH
Beijing Normal U.Sys. Sci.BeijingZengru DiAP
Beijing U. of Aero. & Astro.Aero. Sci. & Eng.BeijingLinping PengAH
Beijing U. of Aero. & Astro.Instr. Sci. & Opto-electr. Eng.BeijingLinping PengAH
Beijing U. of Chem. Tech.Electr. Sci.BeijingXiaoding ShiBP
Beijing U. of Chem. Tech.Electr. Sci.BeijingGuangfeng JiangAP
Beijing U. of Chem. Tech.MathBeijingJinyang HuangAP
Beijing U. of Chem. Tech.MathBeijingXinhua JiangAH
Beijing U. of Chem. Tech.Math & InfoSci.BeijingHui LiuBP
Beijing U. of Posts & Tele.Appl. MathBeijingZuguo HeAH
Beijing U. of Posts & Tele.Appl. MathBeijingHongxiang SunAM
Beijing U. of Posts & Tele.Appl. MathBeijingHongxiang SunAP
Beijing U. of Posts & Tele.AutomationBeijingJianhua YuanBH
Beijing U. of Posts & Tele.AutomationBeijingJianhua YuanBP
Beijing U. of Posts & Tele.Comm. Eng.BeijingXiaoxia WangAP
Beijing U. of Posts & Tele.Comm. Eng.BeijingXiaoxia WangAP
Beijing U. of Posts & Tele.Comm. Eng.BeijingZuguo HeBH
Beijing U. of Posts & Tele.CS & Tech.BeijingHongxiang SunBH
Beijing U. of Posts & Tele.Econ. & MgmtBeijingTianping ShuaiAP
Beijing U. of Posts & Tele.Electr. Eng.BeijingQing ZhouAP
Beijing U. of Posts & Tele.Electr. Eng.BeijingZuguo HeAM
Beijing U. of Posts & Tele.Electr. Eng.BeijingZuguo HeBP
Beijing U. of Posts & Tele.Electr. & Information Eng.BeijingXinchao ZhaoBH
Beijing U. of Posts & Tele.Electr. & Information Eng.BeijingJianhua YuanBM
Beijing U. of Tech.Appl. Sci.BeijingXue YiAP
Beijing Wuzi U.InfoBeijingAdvisor GroupAP
Beijing Wuzi U.InfoBeijingAdvisor GroupAP
Beijing Wuzi U.MathBeijingAdvisor GroupBP
Beijing Wuzi U.MathBeijingAdvisor GroupBH
Central U. of Finance & Econ.Appl. MathBeijingXiugu WangAP
Central U. of Finance & Econ.Appl. MathBeijingXiugu WangAH
Central U. of Finance & Econ.Appl. MathBeijingZhaoxu SunAP
Central U. of Finance & Econ.Appl. MathBeijingDonghong LiAM
Central U. of Finance & Econ.Appl. MathBeijingXiaoming FanBH
China Agricultural U.Sci.BeijingZou HuiAP
China Agricultural U.Sci.BeijingLi GuoHuiBP
China Agricultural U.Sci.BeijingShi YuanChangBP
China Agricultural U.Sci.BeijingYang JianPingBH
China U. of GeoSci.InfoTech.BeijingCuixiang WangAP
China U. of GeoSci.InfoTech.BeijingShuai ZhangAP
China U. of GeoSci.InfoTech.BeijingCuixiang WangBH
China U. of GeoSci.InfoTech.BeijingShuai ZhangBP
China U. of GeoSci.MathBeijingLinlin ZhaoAP
China U. of GeoSci.MathBeijingHuangBH
China U. of Mining & Tech.Math Sci.BeijingLei ZhangAM
China U. of Mining & Tech.Math Sci.BeijingLei ZhangAM
China U. of Mining & Tech.Sci.BeijingPing JingAP
China U. of Mining & Tech.Sci.BeijingPing JingAP
China U. of PetroleumMath & PhysicsBeijingLing ZhaoAM
China U. of PetroleumMath & PhysicsBeijingXiaoguang LuAH
China U. of PetroleumMath & PhysicsBeijingPei WangBH
China Youth U. for Polit. Sci.Econ.BeijingYanxia ZhengBP
North China Electr. Power U.AutomationBeijingXiangjie LiuAH
North China Electr. Power U.AutomationBeijingGuotian YangBH
North China Electr. Power U.Electr. Eng.BeijingYini XieBP
North China Electr. Power U.Electr. Eng.BeijingYongqiang ZhuBH
North China Electr. Power U.Math & PhysicsBeijingQirong QiuAP
North China Electr. Power U.Math & PhysicsBeijingQirong QiuBH
North China U. of Tech.Math & InfoSci.BeijingQuan ZhengBH
Peking U.Ctr for Econ. Res.BeijingQiang GongBH
Peking U.CSBeijingLida ZhuBP
Peking U.Econ.BeijingDong ZhiyongBH
Peking U.Financial MathBeijingShanjun LinAH
Peking U.Financial MathBeijingShanjun LinBM
Peking U.Journalism & Comm.BeijingHua SunBH
Peking U.Life Sci.BeijingChengcai AnAP
Peking U.Machine IntelligenceBeijingJuan HuangBP
Peking U.Machine IntelligenceBeijingJuan HuangBP
Peking U.Math Sci.BeijingXufeng LiuAP
Peking U.Math Sci.BeijingYulong LiuBH
Peking U.Math Sci.BeijingYulong LiuBM
Peking U.Math Sci.BeijingMinghua DengBP
Peking U.Math Sci.BeijingSharon Lynne MirnelAM
Peking U.Math Sci.BeijingXin YiBM
Peking U.Math Sci.BeijingXin YiBP
Peking U.Math Sci.BeijingXufeng LiuBM
Peking U.Math Sci.BeijingMinghua DengAP
Peking U.MechanicsBeijingZhi LiAP
Peking U.PhysicsBeijingXiaodong HuAP
Peking U.PhysicsBeijingXiaodong HuBH
Peking U.PhysicsBeijingYongqiang SunBP
Peking U.Quantum ElectronicsBeijingZhigang ZhangBH
Peking U.Sci. & Eng. Comp.BeijingPeng HeBH
Renmin U. of ChinaFinanceBeijingGao JinwuBM
Renmin U. of ChinaInfoBeijingYonghong LongAP
Renmin U. of ChinaInfoBeijingYong LinBH
Renmin U. of ChinaInfoBeijingYong LinBH
Renmin U. of ChinaInfoBeijingLitao HanBP
Renmin U. of ChinaMathBeijingJinwu GaoAH
Tsinghua U.MathBeijingJun YeAM
Tsinghua U.MathBeijingZhiming HuAM
Tsinghua U.MathBeijingZhiming HuAP
Tsinghua U.MathBeijingJun YeBM
U. of Sci. & Tech. BeijingAppl. MathBeijingWang HuiAP
U. of Sci. & Tech. BeijingAppl. MathBeijingHu ZhixingAM
U. of Sci. & Tech. BeijingAppl. MathBeijingZhu JingAP
U. of Sci. & Tech. BeijingAppl. MathBeijingHu ZhixingBH
U. of Sci. & Tech. BeijingCS & Tech.BeijingZhaoshun WangBM
U. of Sci. & Tech. BeijingMathBeijingZhu JingBH
U. of Sci. & Tech. BeijingMathBeijingWang HuiAH
Chongqing
Chongqing Normal U.Math & CSChongqingXuewen LiuBP
Chongqing Normal U.Math & CSChongqingYan WeiBP
Chongqing U.Appl. ChemistryChongqingZhiliang LiAM
Chongqing U.Math & Phys., Info. & CSChongqingLi FuAH
Chongqing U.Software Eng.ChongqingXiaohong ZhangAP
Chongqing U.Stats & Act'l Sci.ChongqingTengzhong RongAP
Chongqing U.Stats & Act'l Sci.ChongqingZhengmin DuanAH
Chongqing U.Stats & Act'l Sci.ChongqingZhengmin DuanBH
Southwest U.Appl. MathChongqingYangrong LiBH
Southwest U.Appl. MathChongqingXianning LiuBM
Southwest U.MathChongqingLei DengBM
Southwest U.MathChongqingLei DengBH
Fujian
Fujian Agri. & Forestry U.Comp. & InfoTech.FuzhouLurong WuAH
Fujian Agri. & Forestry U.Comp. & InfoTech.FuzhouLurong WuBH
Fujian Normal U.CSFuzhouChen QinghuaBH
Fujian Normal U.Education Tech.FuzhouLin MuhuiBP
Fujian Normal U.MathFuzhouZhiqiang YuanAP
Fujian Normal U.MathFuzhouZhiqiang YuanBH
Quanzhou Normal U.MathQuanzhouXiyang YangAH
Guangdong
Jinan U.Electr.GuangzhouShiqi YeBH
Jinan U.MathGuangzhouShizhuang LuoAP
Jinan U.MathGuangzhouDaiqiang HuBH
Shenzhen Poly.Electr. & InfoEng.ShenzhenJianLong ZhongBH
Shenzhen Poly.Ind'l Training CtrShenzhenDong Ping WeiAP
Shenzhen Poly.Ind'l Training CtrShenzhenHong Mei TianAH
Shenzhen Poly.Ind'l Training CtrShenzhenZhiYong LiuAP
Shenzhen Poly.Ind'l Training CtrShenzhenJue WangBP
South China Agricultural U.MathGuangzhouShaoMei FangAM
South China Agricultural U.MathGuangzhouShaoMei FangBH
South China Agricultural U.MathGuangzhouShengXiang ZhangBP
South China Agricultural U.MathGuangzhouShengXiang ZhangBP
South China Normal U.Info & ComputationGuangzhouTan YangBM
South China Normal U.MathGuangzhouHenggeng WangAH
South China Normal U.MathGuangzhouShaohui ZhangBP
South China Normal U.MathGuangzhouHunan LiBH
South China U. of Tech.Appl. MathGuangzhouQin YongAnAM
South China U. of Tech.Appl. MathGuangzhouHuang PingAP
South China U. of Tech.Appl. MathGuangzhouQin YongAnBM
South China U. of Tech.Appl. MathGuangzhouLiang ManFaBM
South China U. of Tech.Appl. MathGuangzhouLiang ManFaBM
Sun Yat-Sen (Zhongshan) U.Comp. Sci.GuangzhouZePeng ChenBH
Sun Yat-Sen (Zhongshan) U.MathGuangzhouGuoCan FengAM
Sun Yat-Sen (Zhongshan) U.MathGuangzhouGuoCan FengBH
Sun Yat-Sen (Zhongshan) U.MathGuangzhouZhengLu JiangBH
Sun Yat-Sen (Zhongshan) U.MathGuangzhouXiaoLong JiangBM
Zhuhai C. of Jinan U.CSZhuhaiZhang YunBiuAM
Zhuhai C. of Jinan U.CSZhuhaiZhang YunBiuAP
Zhuhai C. of Jinan U.Packaging Eng.ZhuhaiZhiwei WangAP
Guangxi
GuangXi Teachers Educ. U.Math & CSNanningMai XiongfaAP
GuangXi Teachers Educ. U.Math & CSNanningWei ChengdongAP
GuangXi Teachers Educ. U.Math & CSNanningSu HuadongBP
GuangXi Teachers Educ. U.Math & CSNanningChen JianweiBP
Guilin U. of Electr. Tech.Math & Comp'l Sci.GuilinYongxiang MoAP
Guilin U. of Electr. Tech.Math & Comp'l Sci.GuilinNing ZhuAP
Guilin U. of Electr. Tech.Math & Comp'l Sci.GuilinNing ZhuBP
U. of GuangxiMath & InfoSci.NanningRuxue WuAM
U. of GuangxiMath & InfoSci.NanningRuxue WuAP
U. of GuangxiMath & InfoSci.NanningZhongxing WangAM
U. of GuangxiMath & InfoSci.NanningChunhong LiAP
U. of GuangxiMath & InfoSci.NanningYuejin LvBP
Hebei
Hebei Poly. U.Light IndustryTangshanLihui ZhouAP
Hebei Poly. U.Light IndustryTangshanYan YanBH
Hebei Poly. U.Light IndustryTangshanShaohong YANBP
Hebei Poly. U.Sci.TangshanYamian PengBH
Hebei Poly. U.Sci.TangshanLihong LIBP
Hebei U.Math & CSBaodingQiang HuaAP
Hebei U.Math & CSBaodingQiang HuaBH
North China Electr. Power U.Funda. CoursesBaodingJinWei ShiAP
North China Electr. Power U.Funda. CoursesBaodingGendai GuAP
North China Electr. Power U.Math & PhysicsBaodingPo ZhangBH
North China Electr. Power U.Math & PhysicsBaodingJinggang LiuAP
North China Electr. Power U.Math & PhysicsBaodingHuifeng ShiAP
North China Electr. Power U.Math & PhysicsBaodingYagang ZhangAP
North China Electr. Power U.Math & PhysicsBaodingJinggang LiuBH
Shijiazhuang Railway Inst.Eng. MechanicsShijiazhuangBaocai ZhangBM
Shijiazhuang Railway Inst.Eng. MechanicsShijiazhuangBaocai ZhangBH
Heilongjiang
Daqing Petroleum Inst.MathDaqingYang YunfengAP
Daqing Petroleum Inst.MathDaqingYang YunfengBM
Daqing Petroleum Inst.MathDaqingKong LingbinBM
Harbin Eng. U.Appl. MathHarbinGao ZhenbinAP
Harbin Eng. U.Appl. MathHarbinGao ZhenbinAP
Harbin Eng. U.Info & CSHarbinZhang XiaoweiAH
Harbin Eng. U.Info & CSHarbinZhang XiaoweiAP
Harbin Eng. U.MathHarbinLuo YueshengBH
Harbin Inst. of Tech.Astro: Mgmt Sci.HarbinBing WenBM
Harbin Inst. of Tech.Astro: Mgmt Sci.HarbinBing WenAH
Harbin Inst. of Tech.Astro: MathHarbinDongmei ZhangBH
Harbin Inst. of Tech.Astro: MathHarbinJiqyun ShaoAP
Harbin Inst. of Tech.Astro: MathHarbinJiqyun ShaoBH
Harbin Inst. of Tech.CSHarbinZheng KuangAP
Harbin Inst. of Tech.CS & Tech.HarbinLili ZhangAH
Harbin Inst. of Tech.EE & Aut.: Math.HarbinGuanghong JiaoAP
Harbin Inst. of Tech.Mgmt Sci.HarbinJianguo BaoAP
Harbin Inst. of Tech.Mgmt Sci.HarbinJianguo BaoBH
Harbin Inst. of Tech.Mgmt: MathHarbinBoping TianBH
Harbin Inst. of Tech.Mgmt Sci. & Eng.HarbinHong GeAP
Harbin Inst. of Tech.Mgmt Sci. & Eng.HarbinWei ShangAP
Harbin Inst. of Tech.Mgmt Sci. & Eng.HarbinWei ShangBH
Harbin Inst. of Tech.MathHarbinXianyu MengBP
Harbin Inst. of Tech.MathHarbinXianyu MengBP
Harbin Inst. of Tech.MathHarbinYong WangBH
Harbin Inst. of Tech.MathHarbinYong WangBM
Harbin Inst. of Tech.MathHarbinChiping ZhangBP
Harbin Inst. of Tech.MathHarbinGuofeng FanAP
Harbin Inst. of Tech.MathHarbinShouting ShangBH
Harbin Inst. of Tech.MathHarbinGuofeng FanBP
Harbin Inst. of Tech.MathHarbinDaohua LiAH
Harbin Inst. of Tech.MathHarbinDaohua LiBP
Harbin Inst. of Tech.MathHarbinBaodong ZhengAP
Harbin Inst. of Tech.MathHarbinBoying WuAP
Harbin Inst. of Tech.MathHarbinBo HanBP
Harbin Inst. of Tech.MathHarbinBo HanBP
Harbin Inst. of Tech.Network ProjectHarbinXiaoping JiAP
Harbin Inst. of Tech.Sci.HarbinBoying WuBH
Harbin Inst. of Tech.Software Eng.HarbinYan LiuAP
Harbin Inst. of Tech.Software Eng.HarbinYan LiuAP
Harbin Inst. of Tech., Shiyan SchoolMathHarbinXiaofeng ShiBH
Harbin Inst. of Tech., Shiyan SchoolMathHarbinKean LiuAH
Harbin Inst. of Tech., Shiyan SchoolMathHarbinKean LiuBP
Harbin Inst. of Tech., Shiyan SchoolMathHarbinYunfei ZhangAH
Harbin Inst. of Tech., Shiyan SchoolMathHarbinYunfei ZhangAM
Harbin U. of Sci. & Tech.MathHarbinDongmei LiAP
Harbin U. of Sci. & Tech.MathHarbinFengqiu LiuAP
Harbin U. of Sci. & Tech.MathHarbinDongyan ChenBH
Harbin U. of Sci. & Tech.MathHarbinShuzhong WangBH
Harbin U. of Sci. & Tech.MathHarbinGuangyue TianBH
Heilongjiang Inst. of Sci. & Tech.Math & Mech.HarbinHongyan ZhangAH
Heilongjiang Inst. of Sci. & Tech.Math & Mech.HarbinHui ChenAP
Heilongjiang Inst. of Sci. & Tech.Math & Mech.HarbinYanhua YuanBP
Heilongjiang Inst. of Tech.MathHarbinDalu NieBP
Heilongjiang Inst. of Tech.MathHarbinDalu NieBP
Henan
Zhengzhou Inst. of Electr. Tech.The SecondZhengzhouXiaoyong ZhangAP
Zhengzhou Inst. of Electr. Tech.The SecondZhengzhouXiaoyong ZhangBH
Zhengzhou Inst. of Electr. Tech.The ThirdZhengzhouLixin JiaBH
Zhengzhou Inst. of Electr. Tech.The ThirdZhengzhouLixin JiaBH
Zhengzhou Inst. of Survey/Map.Cart./Geo. InfoEng.ZhengzhouShi BinAP
Zhengzhou Inst. of Survey/Map.Cart./Geo. InfoEng.ZhengzhouShi BinBP
Zhengzhou Inst. of Survey/Map.Geod./Navig. Eng.ZhengzhouLi GuohuiAP
Zhengzhou Inst. of Survey/Map.Geod./Navig. Eng.ZhengzhouLi GuohuiBP
Zhengzhou Inst. of Sci.Appl. MathZhengzhouJianfeng GuoBP
Zhengzhou Inst. of Sci.Appl. MathZhengzhouZhibo LuBH
Zhengzhou Inst. of Sci.Appl. PhysicsZhengzhouYuan TianAH
Hubei
Huazhong Normal U.Math & StatsWuhanBo LiAP
Huazhong U. of Sci. & Tech.Electr. & InfoEng.WuhanYan DongBP
Huazhong U. of Sci. & Tech.Ind'l & Mfg Sys. Eng.WuhanHaobo QiuAP
Huazhong U. of Sci. & Tech.Ind'l & Mfg Sys. Eng.WuhanLiang GaoBP
Huazhong U. of Sci. & Tech.MathWuhanZhengyang MeiBP
Huazhong U. of Sci. & Tech.MathWuhanZhengyang MeiBH
Three Gorges U.MathYichang CityQin ChenAP
Wuhan U.Appl. MathWuhanYuanming HuBP
Wuhan U.Appl. MathWuhanYuanming HuBM
Wuhan U.Civil Eng.: MathWuhanYuanming HuAH
Wuhan U.CSWuhanHu YuanmingBP
Wuhan U.Electr. InfoWuhanYuanming HuBP
Wuhan U.InfoSecurityWuhanXinqi HuAH
Wuhan U.MathWuhanYuanming HuAM
Wuhan U.Math & Appl. MathWuhanChengxiu GaoBP
Wuhan U.Math & StatsWuhanHu YuanmingAH
Wuhan U.Math & StatsWuhanShihua ChenAH
Wuhan U.Math & StatsWuhanLiuyi ZhongBM
Wuhan U.Math & StatsWuhanXinqi HuBP
Wuhan U.Math & StatsWuhanGao ChengxiuBP
Wuhan U.Math & StatsWuhanYuanming HuBP
Wuhan U.Software Eng.WuhanLiuyi ZhongAP
Wuhan U. of Sci. & Tech.Sci.WuhanAdvisor TeamBP
Wuhan U. of Sci. & Tech.Sci.WuhanAdvisor TeamBP
Wuhan U. of Tech.MathWuhanHuang XiaoweiAP
Wuhan U. of Tech.MathWuhanZhu HuiyingAP
Wuhan U. of Tech.MathWuhanChu YangjieAP
Wuhan U. of Tech.MathWuhanLiu YangBH
Wuhan U. of Tech.PhysicsWuhanHe LangBP
Wuhan U. of Tech.PhysicsWuhanChen JianyeAP
Wuhan U. of Tech.PhysicsWuhanChen JianyeBP
Wuhan U. of Tech.PhysicsWuhanHe LangBP
Wuhan U. of Tech.StatsWuhanMao ShuhuaBP
Wuhan U. of Tech.StatsWuhanChen JiaqingBP
Wuhan U. of Tech.StatsWuhanLi YuguangAP
Hunan
Central South U.AutomationChangshaHe WeiAP
Central South U.Biomedical Eng.ChangshaHou MuzhouAH
Central South U.Civil Eng.ChangshaShihua ZhuAP
Central South U.CSChangshaXuanyun QinBH
Central South U.CSChangshaXuanyun QinBH
Central South U.Eng. MgmtChangshaZhoushun ZhengAP
Central South U.Infomation & CSChangshaZhoushun ZhengBH
Central South U.Info-phys. & Geo. Eng.ChangshaZhang YanAP
Central South U.Info-phys. & Geo. Eng.ChangshaZhang YanAP
Central South U.Material PhysicsChangshaXuanyun QinBH
Central South U.Math & Appl. Math.ChangshaCheng LiuAH
Central South U.MathChangshaHe WeiAP
Central South U.Mech. Des. & Mfg Aut.ChangshaXin Ge LiuAM
Central South U.Mech. Des. & Mfg Aut.ChangshaXin Ge LiuAP
Central South U.Traffic & Info. Eng.ChangshaCheng LiuAP
Changsha U. of Sci. & Tech.Math & CSChangshaZhang TongAP
Changsha U. of Sci. & Tech.Math & CSChangshaLiang DaiBP
Changsha U. of Sci. & Tech.Math & CSChangshaQuan XieBP
Changsha U. of Sci. & Tech.Math & CSChangshaLiu TanBP
Hunan Inst. of Humanities, Sci. & Tech.MathLoudiDiChen YangAH
Hunan U.Math & EconometricsChangshaHuahui YanAP
Hunan U.Math & EconometricsChangshaYizhao ChenBH
Hunan U.Math & EconometricsChangshaYuanbei DengBH
Hunan U.Math & EconometricsChangshaChangrong LiuBP
National U. of Defense Tech.Math & Sys. Sci.ChangshaMengda WuAH
National U. of Defense Tech.Math & Sys. Sci.ChangshaLizhi ChengAH
National U. of Defense Tech.Math & Sys. Sci.ChangshaMeihua XieAH
National U. of Defense Tech.Math & Sys. Sci.ChangshaYi WuAM
National U. of Defense Tech.Math & Sys. Sci.ChangshaXiaojun DuanAH
National U. of Defense Tech.Math & Sys. Sci.ChangshaYong LuoAH
National U. of Defense Tech.Math & Sys. Sci.ChangshaXiaojun DuanAM
National U. of Defense Tech.Math & Sys. Sci.ChangshaLizhi ChengBP
National U. of Defense Tech.Math & Sys. Sci.ChangshaMeihua XieBM
National U. of Defense Tech.Math & Sys. Sci.ChangshaDan WangBH
National U. of Defense Tech.Math & Sys. Sci.ChangshaDan WangBM
National U. of Defense Tech.Math & Sys. Sci.ChangshaYong LuoBM
Inner Mongolia
Inner Mongolia U.MathHuhhotZhuang MaAP
Inner Mongolia U.MathHuhhotMei WangAP
Inner Mongolia U.MathHuhhotZhuang MaBP
Jiangsu
Huaiyin Inst. of Tech.Comp. Sci.HuaianZhuang YumingAP
Huaiyin Inst. of Tech.Comp. Sci.HuaianZhuang YumingBP
Nanjing Normal U.CSNanjingWang QiongBM
Nanjing Normal U.CSNanjingWang QiongBH
Nanjing Normal U.Financial MathNanjingWang Xiao QianAH
Nanjing Normal U.Financial MathNanjingWang Xiao QianBH
Nanjing Normal U.MathNanjingZhu Qun ShengAP
Nanjing Normal U.MathNanjingZhu Qun ShengBP
Nanjing U.Chem. & Chem. Eng.NanjingXujie ShenBP
Nanjing U.Electr. Sci. & Eng.NanjingHaodong WuAH
Nanjing U.Electr. Eng.NanjingJianchun ChengBH
Nanjing U.Env'tNanjingXin QianAH
Nanjing U.Intensive InstructionNanjingWeihua HuangBH
Nanjing U.Intensive InstructionNanjingWeiyi SuAP
Nanjing U.MathNanjingZe-Chun HuBM
Nanjing U.MathNanjingGuo Fei ZhouAP
Nanjing U.MathNanjingWeihua HuangBH
Nanjing U.MathNanjingMing KongBP
Nanjing U.MathNanjingMing KongBH
Nanjing U.MathNanjingZechun HuBH
Nanjing U. of Finance & Econ.Finance EconomyNanjingChen MeixiaBP
Nanjing U. of Posts & Tele.Math & PhysicsNanjingAiju ShiBP
Nanjing U. of Posts & Tele.Math & PhysicsNanjingKong GaohuaBH
Nanjing U. of Posts & Tele.Math & PhysicsNanjingKong GaohuaBH
Nanjing U. of Posts & Tele.Math & PhysicsNanjingLiWei XuBH
Nanjing U. of Posts & Tele.Math & PhysicsNanjingQiu ZhonghuaAP
Nanjing U. of Posts & Tele.Math & PhysicsNanjingJun YeBM
Nanjing U. of Posts & Tele.Math & PhysicsNanjingZhong Hua QiuBP
Nanjing U. of Posts & Tele.Math & PhysicsNanjingJin XuBM
Nanjing U. of Sci. & Tech.Appl. MathNanjingPeibiao ZhaoBP
Nanjing U. of Sci. & Tech.Appl. MathNanjingChungen XuBH
Nanjing U. of Sci. & Tech.MathNanjingZhipeng QiuBH
Nanjing U. of Sci. & Tech.StatsNanjingLiwei LiuAP
Nantong U.Arch. & Civil Eng.NantongHongmei LiuAP
Nantong U.Electr. Eng.NantongGuoping LuAP
Nantong U.Sci.NantongXiaojian ZhouBP
PLA U. of Sci. & Tech.Comm. Eng.NanjingYao KuiAP
PLA U. of Sci. & Tech.Meteor: Appl. Math & Phys.NanjingShen JinrenBM
PLA U. of Sci. & Tech.Sci.: Appl. Math & Phys.NanjingLiu ShoushengBH
PLA U. of Sci. & Tech.Sci.: Appl. Math & PhysicsNanjingTeng JiajunAH
Southeast U. at JiulonghuMathNanjingJun HuangAH
Southeast U. at JiulonghuMathNanjingEnshui ChenAP
Southeast U. at JiulonghuMathNanjingJianhua ZhouBP
Southeast U. at JiulonghuMathNanjingXiang YinBH
Southeast U.MathNanjingFeng WangAP
Southeast U.MathNanjingXingang JiaBH
Southeast U.MathNanjingXingang JiaBH
Southeast U.MathNanjingDan HeAP
Southeast U.MathNanjingLiyan WangAP
Southeast U.MathNanjingDan HeAH
Southeast U.MathNanjingLiyan WangBH
Xi'an Jiaotong-Liverpool U.E-FinanceSuzhouAnnie ZhuAP
Xi'an Jiaotong-Liverpool U.Financial MathSuzhouMing YingAP
Xi'an Jiaotong-Liverpool U.Info & Comp.SuzhouLiying LiuAH
Xi'an Jiaotong-Liverpool U.Telecomm.SuzhouJingming GuoAP
Xuhai C./China U. Mining & Tech.MathXuzhouPeng HongjunAP
Xuhai C./China U. Mining & Tech.MathXuzhouPeng HongjunAH
Xuhai C./China U. Mining & Tech.PhysicsXuzhouZhang WeiAP
Xuzhou Inst. of Tech.MathXuzhouLi SubeiAM
Yangzhou U.Guangling C.YangzhouTao ChengBP
Yangzhou U.InfoEng.YangzhouWeijun LinAP
Yangzhou U.MathYangzhouFan CaiBH
Jiangxi
Gannan Normal U.Comp.GanzhouYan Shen HaiAH
Gannan Normal U.Comp.GanzhouZhanJi ZhouAH
Gannan Normal U.MathGanzhouXie Xian HuaBP
Gannan Normal U.MathGanzhouXu Jing FeiBP
Jiangxi U. of Finance & Econ.InfoTech.NanchangChangsheng HuaBP
Nanchang Hangkong U.Appl. MathNanchangGensheng QiuAH
Nanchang U.MathNanchangQingyu LuoAP
Nanchang U.MathNanchangTao ChenAH
Nanchang U.MathNanchangLiao ChuanrongAP
Nanchang U.MathNanchangYang ZhaoAP
Nanchang U.MathNanchangChen YujuAH
Nanchang U.MathNanchangXianjiu HuangBH
Jilin
Beihua U.MathJilinLi TingbinAH
Beihua U.MathJilin CityWei YuncaiAP
Beihua U.MathJilin CityWei YuncaiAP
Beihua U.MathJilin CityChen ZhaojunAP
Beihua U.MathJilin CityZhao HongweiAH
Beihua U.MathJilin CityYang YuetingAP
Beihua U.MathJilin CityZhang WeiAP
Jilin Arch. & Civil Eng. Inst.Basic Sci.ChangchunJinLin & Lin DingAP
Jilin U.MathChangchunHuang QingdaoBM
Jilin U.MathChangchunCao YangBH
Jilin U.MathChangchunYao XiulingBH
Jilin U.MathChangchunLiu MingjiAP
Jilin U.MathChangchunXianrui LvBM
Jilin U.MathChangchunXianrui LvBP
Jilin U.MathChangchunShishun ZhaoBH
Liaoning
Anshan Normal U.MathAnshanLi Pi YuAP
Anshan Normal U.MathAnshanZhang ChunBP
Anshan Normal U.MathAnshanLiu Hui MinAP
Anshan Normal U.MathAnshanLiu Hui MinBP
Dalian Fisheries U.Sci.DalianZhang LifengAP
Dalian Jiaotong U.Sci.DalianGuocan WangAP
Dalian Jiaotong U.Sci.DalianGuocan WangBP
Dalian Jiaotong U.Sci.DalianDa-yong ZhouAP
Dalian Jiaotong U.Sci.DalianDa-yong ZhouBP
Dalian Maritime U.Appl. MathDalianY. ZhangBP
Dalian Maritime U.Appl. MathDalianY. ZhangBH
Dalian Maritime U.Appl. MathDalianXinnian WangBH
Dalian Maritime U.Appl. MathDalianXinnian WangAP
Dalian Maritime U.Appl. MathDalianDong YuAP
Dalian Maritime U.MathDalianShuqin YangBP
Dalian Maritime U.MathDalianGuoyan ChenBH
Dalian Maritime U.MathDalianNaxin ChenBM
Dalian Maritime U.MathDalianSheng BiBH
Dalian Maritime U.MathDalianYun Jie ZhangAH
Dalian Nationalities U.CSDalianXiaoniu LiBH
Dalian Nationalities U.CSDalianXiaoniu LiBP
Dalian Nationalities U.CS & Eng.DalianXiangdong LiuAP
Dalian Nationalities U.CS & Eng.DalianLiming WangAP
Dalian Nationalities U.CS & Eng.DalianDejun YanAH
Dalian Nationalities U.CS & Eng.DalianLiming WangBH
Dalian Nationalities U.Dean's OfficeDalianHengbo ZhangBP
Dalian Nationalities U.Dean's OfficeDalianFu JieBH
Dalian Nationalities U.Dean's OfficeDalianRendong GeBP
Dalian Nationalities U.Dean's OfficeDalianRendong GeBP
Dalian Nationalities U.Dean's OfficeDalianYumei MaBH
Dalian Nationalities U.Dean's OfficeDalianYumei MaBP
Dalian Nationalities U.Innovation Ed.DalianRixia BaiAP
Dalian Nationalities U.Innovation Ed.DalianXinwen ChenBH
Dalian Nationalities U.Innovation Ed.DalianTian YunBH
Dalian Nationalities U.Innovation Ed.DalianTian YunBP
Dalian Naval Acad.MathDalianFeng JieAH
Dalian Naval Acad.MathDalianFeng JieBH
Dalian Neosoft Inst. of InfoInfoTech & Business MgmtDalianSheng GuanBH
Dalian Neosoft Inst. of InfoInfoTech & Business MgmtDalianQian WangBP
Dalian U.Info. & Eng.DalianHe SunAP
Dalian U.MathDalianTan XinxinAP
Dalian U.MathDalianGang JiataiAP
Dalian U.MathDalianGang JiataiAP
Dalian U.MathDalianLiu GuangzhiAP
Dalian U.MathDalianZhang ChengAH
Dalian U. of Tech.Appl. MathDalianQiuhui PanAP
Dalian U. of Tech.Appl. MathDalianLiang ZhangAP
Dalian U. of Tech.Appl. MathDalianLiang ZhangAP
Dalian U. of Tech.Appl. MathDalianQiuhui PanBP
Dalian U. of Tech.Appl. MathDalianMingfeng HeBH
Dalian U. of Tech.Appl. MathDalianZhenyu WuBH
Dalian U. of Tech.Appl. MathDalianMingfeng HeBP
Dalian U. of Tech.Appl. MathDalianLiang ZhangBH
Dalian U. of Tech.Appl. MathDalianMingfeng HeBP
Dalian U. of Tech.City Inst.DalianXubin GaoAH
Dalian U. of Tech.City Inst.DalianXubin GaoAH
Dalian U. of Tech.City Inst.DalianHongzeng WangAP
Dalian U. of Tech.City Inst.DalianLina WanBH
Dalian U. of Tech.Innovation ExperimentDalianWanwu XiBH
Dalian U. of Tech.Innovation ExperimentDalianLin FengBP
Dalian U. of Tech.Innovation ExperimentDalianQiuhui PanAP
Dalian U. of Tech.Software SchlDalianZhe LiAH
Dalian U. of Tech.Software SchlDalianZhe LiAM
Dalian U. of Tech.Software SchlDalianZhe LiBM
Dalian U. of Tech.Software SchlDalianZhe LiBM
Northeastern U.AutocontrolShenyangYunzhou ZhangBH
Northeastern U.AutocontrolShenyangFeng PanBH
Northeastern U.Comp.ShenyangHuilin LiuBH
Northeastern U.InfoSci. & Eng.ShenyangChengdong WuAH
Northeastern U.InfoSci. & Eng.ShenyangShuying ZhaoAH
Northeastern U.Modern Design & AnalysisShenyangXuehong HeAH
Northeastern U.Sci.ShenyangPing SunAP
Northeastern U.Sys. SimulationShenyangJianJiang CuiAP
Shenyang Inst. of Aero. Eng.Electr.ShenyangWeifang LiuAP
Shenyang Inst. of Aero. Eng.Electr.ShenyangNa YinAP
Shenyang Inst. of Aero. Eng.Electr.ShenyangLin LiBH
Shenyang Inst. of Aero. Eng.Info & CSShenyangShiyun WangAH
Shenyang Inst. of Aero. Eng.Info & CSShenyangLi WangAP
Shenyang Inst. of Aero. Eng.Info & CSShenyangYong JiangAP
Shenyang Inst. of Aero. Eng.North Schl of Sci. & Tech.ShenyangLi LinAP
Shenyang Inst. of Aero. Eng.North Schl of Sci. & Tech.ShenyangWang XiaoyuanAP
Shenyang Inst. of Aero. Eng.North Schl of Sci. & Tech.ShenyangWang XiaoyuanAP
Shenyang Inst. of Aero. Eng.North Schl of Sci. & Tech.ShenyangLiu WeifangBP
Shenyang Inst. of Aero. Eng.Sci.ShenyangFeng ShanAP
Shenyang Inst. of Aero. Eng.Sci.ShenyangLimei ZhuAH
Shenyang Normal U.Math & Sys. Sci.ShenyangXiaoyi LiAP
Shenyang Normal U.Math & Sys. Sci.ShenyangYuzhong LiuAP
Shenyang Normal U.Math & Sys. Sci.ShenyangXianji MengBP
Shenyang Pharmaceutical U.Basic CoursesShenyangRongwu XiangAP
Shenyang Pharmaceutical U.Basic CoursesShenyangRongwu XiangBP
Shenyang U. of Tech.Basic Sci.ShenyangChen YanAP
Shenyang U. of Tech.Basic Sci.ShenyangChen YanAP
Shenyang U. of Tech.MathShenyangYan ChenAH
Shenyang U. of Tech.MathShenyangWang BoAP
Shenyang U. of Tech.MathShenyangYan ChenBP
Shenyang U. of Tech.MathShenyangDu Hong BoBH
Shaanxi
Inst. of Modern PhysicsComputational PhysicsXi'anJihong DouAH
Inst. of Modern PhysicsComputational PhysicsXi'anJihong DouBH
Inst. of VisualizationInfoTech.Xi'anLiantang WangBH
North U. of ChinaMathTaiyuanLei YingJieBH
North U. of ChinaMathTaiyuanYang MingBH
North U. of ChinaMathTaiyuanBi YongAP
North U. of ChinaSci.TaiyuanXue YakuiAP
Northwest A&F U.Sci.Xi'anZheng Zheng RenAP
Northwest A&F U.Sci.YanglingWang JingminAH
Northwest U.Ctr Nonlin. StudiesXi'anBo ZhangBH
Northwest U.Ctr Nonlin. StudiesXi'anMing GouAP
Northwest U.MathXi'anRuichan HeAP
Northwest U.MathXi'anBo ZhangBH
Northwest U.PhysicsXi'anYongFeng XuBP
Northwestern Poly. U.Appl. ChemistryXi'anSun ZhongkuiAM
Northwestern Poly. U.Appl. ChemistryXi'anTang YaningBH
Northwestern Poly. U.Appl. MathXi'anYufeng NieBH
Northwestern Poly. U.Appl. MathXi'anZheng HongchanBP
Northwestern Poly. U.Appl. PhysicsXi'anLu QuanyiBH
Northwestern Poly. U.Appl. PhysicsXi'anLei YoumingAM
Northwestern Poly. U.Natural & Appl. Sci.Xi'anXiao HuayongBM
Northwestern Poly. U.Natural & Appl. Sci.Xi'anZhou MinBM
Northwestern Poly. U.Natural & Appl. Sci.Xi'anYong XuBM
Northwestern Poly. U.Natural & Appl. Sci.Xi'anZhao JunfengAM
Xi'an Jiaotong U.Math Teaching & Exp'tXi'anXiaoe RuanAM
Xi'an Jiaotong U.Sci. Comp. & Appl. SftwrXi'anJian SuAH
Xi'an Comm. Inst.CSXi'anHong WangBH
Xi'an Comm. Inst.Electr. Eng.Xi'anJianhang ZhangBP
Xi'an Comm. Inst.MathXi'anXinshe QiBM
Xi'an Comm. Inst.PhysicsXi'anLi HaoBH
Xi'an Comm. Inst.PhysicsXi'anDongsheng YangAH
Xi'an Jiaotong U.Appl. MathXi'anJing GaoAM
Xi'an Jiaotong U.Sci. Comp. & Appl. SftwrXi'anWei WangBH
Xidian U.Appl. MathXi'anHailin FengAH
Xidian U.Comp'l MathXi'anHoujian TangBH
Xidian U.Ind'l & Appl. Math.Xi'anQiang ZHUBM
Xidian U.Sci.Xi'anGuoping YangBM
Xidian U.Sci.Xi'anJimin YeBM
Taiyuan Inst. of Tech.Electr. Ass'n Sci. & Tech.TaiyuanFan XiaorenBP
Taiyuan Inst. of Tech.Electr. Eng.TaiyuanXiao Ren FanBP
Taiyuan U. of Tech.MathTaiyuanYi-Qiang WeiBP
Shandong
China U. of PetroleumMath & Comp'l Sci.QingdaoZiting WangAP
China U. of PetroleumMath & Comp'l Sci.QingdaoZiting WangAM
Liaocheng U.Math Sci.LiaochengXianyang ZengAP
Linyi Normal U.MathLinyiZhaozhong ZhangAP
Linyi Normal U.MathLinyiZhaozhong ZhangAP
Linyi Normal U.StatsLinyiLifeng GaoBH
Naval Aero. Eng. Acad.MachineryQingdaoCao Hua LinBM
QiLu Software C. (SDU)CS & Tech.JinanJun Feng LuanAP
Qufu Normal U.Math Sci.QufuYuzhen BaiBP
Shandong U.CS & Tech.JinanHeji ZhaoAP
Shandong U.Econ.JinanWei ChenBH
Shandong U.Math FinanceJinanYufeng ShiAP
Shandong U.Math & Sys. Sci.JinanBao Dong LiuBH
Shandong U.Math & Sys. Sci.JinanBao Dong LiuBM
Shandong U.Math & Sys. Sci.JinanShu Xiang HuangBP
Shandong U.Math & Sys. Sci.JinanShu Xiang HuangBH
Shandong U.Math & Sys. Sci.JinanXiao Xia RongBM
Shandong U.Math & Sys. Sci.JinanHuang Shu XiangBH
Shandong U.PhysicsJinanXiucai ZhengBP
Shandong U.SoftwareJinanZhang SiHuaBP
Shandong U.SoftwareJinanXiangxu MengBM
Shandong U. at WeihaiAppl. MathWeihaiYangBing & SongHuiMinBM
Shandong U. at WeihaiAppl. MathWeihaiCao Zhulou & Xiao HuaBM
Shandong U. at WeihaiAppl. MathWeihaiZhulou CaoAP
Shandong U. at WeihaiInfoSci. & Eng.WeihaiHuaxiang ZhaoBP
Shandong U. at WeihaiInfoSci. & Eng.WeihaiHua XiaoAH
Shandong U. at WeihaiInfoSci. & Eng.WeihaiZengchao MuBP
Shandong U. of Sci. & Tech.Fund. CoursesQingdaoFangfang MaAH
Shandong U. of Sci. & Tech.InfoSci. & Eng.QingdaoXinzeng WangBH
Shandong U. of Sci. & Tech.InfoSci. & Eng.QingdaoPang Shan ChenBH
U. of JinanMathJinanZhenyu XuAP
U. of JinanMathJinanBaojian QiuAP
U. of JinanMathJinanHonghua WuBH
Shanghai
Donghua U.Appl. MathShanghaiYunsheng LuAP
Donghua U.Info.ShanghaiXie ShijieAP
Donghua U.InfoSci. & Tech.ShanghaiXianhui ZengAP
Donghua U.InfoSci. & Tech.ShanghaiHongrui ShiAP
Donghua U.Sci.ShanghaiLiangjian HuAM
East China Normal U.Finance & StatsShanghaiLinyi QianAH
East China Normal U.Finance & StatsShanghaiYiming ChengAH
East China Normal U.InfoSci. & Tech.ShanghaiMing LiAP
East China Normal U.MathShanghaiYongming LiuBM
East China Normal U.MathShanghaiChanghong LuBP
East China U. of Sci. & Tech.MathShanghaiLiu ZhaohuiBH
East China U. of Sci. & Tech.MathShanghaiSu ChunjieBM
East China U. of Sci. & Tech.PhysicsShanghaiQin YanAH
East China U. of Sci. & Tech.PhysicsShanghaiLu YuanhongAM
Fudan U.Econ.ShanghaiYan ZhangBH
Fudan U.Int'l FinanceShanghaiPan DengBH
Fudan U.Math Sci.ShanghaiYuan CaoBH
Fudan U.Math Sci.ShanghaiZhijie CaiBP
Fudan U.PhysicsShanghaiJiping HuangAP
Nanyang Model HSMathShanghaiTuqing CaoAP
Nanyang Model HSMathShanghaiTuqing CaoBP
Shanghai Finance U.MathShanghaiYumei LiangAP
Shanghai Finance U.MathShanghaiRongqiang CheAP
Shanghai Finance U.MathShanghaiXiaobin LiBP
Shanghai Finance U.MathShanghaiKeyan WangAP
Shanghai Foreign Lang. SchlCSShanghaiYue SunAP
Shanghai Foreign Lang. SchlCSShanghaiYue SunAM
Shanghai Foreign Lang. SchlMathShanghaiLiang TaoAP
Shanghai Foreign Lang. SchlMathShanghaiGan ChenAP
Shanghai Foreign Lang. SchlMathShanghaiGan ChenAH
Shanghai Foreign Lang. SchlMathShanghaiYu SunAH
Shanghai Foreign Lang. SchlMathShanghaiLiang TaoBM
Shanghai Foreign Lang. SchlMathShanghaiYu SunBH
Shanghai Foreign Lang. SchlMathShanghaiJian TianBP
Shanghai Foreign Lang. SchlMathShanghaiLiqun PanBH
Shanghai Foreign Lang. SchlMathShanghaiLiqun PanBH
Shanghai Foreign Lang. SchlMathShanghaiFeng XuBP
Shanghai Foreign Lang. SchlMathShanghaiFeng XuBM
Shanghai Foreign Lang. SchlMathShanghaiWeiping WangBP
Shanghai Foreign Lang. SchlMathShanghaiWeiping WangBH
Shanghai High SchlMathShanghaiXinyi YangAH
Shanghai Jiading No. 1 Sr. HSMathShanghaiXilin Xie & Yunping FangBH
Shanghai Jiading No. 1 Sr. HSMathShanghaiXilin Xie & Yunping FangBH
Shanghai Jiaotong U.MathShanghaiBaorui SongAH
Shanghai Jiaotong U.MathShanghaiJianguo HuangAP
Shanghai Jiaotong U.MathShanghaiBaorui SongBP
Shanghai Jiaotong U.MathShanghaiJianguo HuangBP
Shanghai Normal U.Math & Sci. C.ShanghaiJizhou ZhangAP
Shanghai Normal U.Math & Sci. C.ShanghaiRongguan LiuAP
Shanghai Normal U.Math & Sci. C.ShanghaiXiaobo ZhangBH
Shanghai Sino Euro Schl of Tech.MathShanghaiWei HuangAP
Shanghai Sino Euro Schl of Tech.MathShanghaiWei HuangBM
Shanghai Sino Euro Schl of Tech.MathShanghaiBingwu HeBP
Shanghai Sino Euro Schl of Tech.MathShanghaiFuping TanBP
Shanghai U. of Finance & Econ.Appl. MathShanghaiWenqiang HaoBM
Shanghai U. of Finance & Econ.Appl. MathShanghaiLing QiuAH
Shanghai U. of Finance & Econ.Appl. MathShanghaiXing ZhangBH
Shanghai U. of Finance & Econ.Appl. MathShanghaiZhenyu ZhangAM
Shanghai U. of Finance & Econ.Appl. MathShanghaiZhenyu ZhangAP
Shanghai U. of Finance & Econ.Econ.ShanghaiSiheng CaoAP
Shanghai U. of Finance & Econ.Econ.ShanghaiYan SunAH
Shanghai U. of Finance & Econ.FinanceShanghaiHao ChaBP
Shanghai U. of Finance & Econ.StatsShanghaiChunjie WuAP
Shanghai U. of Finance & Econ.StatsShanghaiJialun DuBP
Shanghai U.MathShanghaiYongjian YangBP
Shanghai U.MathShanghaiYongjian YangBP
Shanghai U.MathShanghaiDonghua WuAP
Shanghai U.MathShanghaiDonghua WuBH
Shanghai U.MathShanghaiYuandi WangAP
Shanghai U.MathShanghaiYuandi WangAP
Shanghai Youth Ctr Sci. & Tech. Ed.Appl. MathShanghaiGan ChenAH
Shanghai Youth Ctr Sci. & Tech. Ed.Sci. TrainingShanghaiGan ChenAP
Shanghai Youth Ctr Sci. & Tech. Ed.Sci. TrainingShanghaiGan ChenBP
Sydney Inst. of Lang. & CommerceMathShanghaiYouhua HeAP
Sydney Inst. of Lang. & CommerceMathShanghaiYouhua HeAP
Tongji U.Civil Eng.ShanghaiJialiang XiangAP
Tongji U.MathShanghaiJin LiangAH
Tongji U.MathShanghaiHualong ZhangAP
Tongji U.SoftwareShanghaiChangshui HuangBP
Xuhui Branch/Shanghai Jiaotong U.MathShanghaiLiuqing XiaoAH
Xuhui Branch/Shanghai Jiaotong U.MathShanghaiXiaojun LiuBP
Yucai Senior High SchlMathShanghaiZhenwei YangAH
Yucai Senior High SchlMathShanghaiXiaodong ZhouAP
Yucai Senior High SchlMathShanghaiZhenwei YangBH
Yucai Senior High SchlMathShanghaiXiaodong ZhouBH
Sichuan

| Institution | Department | City | Advisor | Prob. | Desig. |
|---|---|---|---|---|---|
| Chengdu U. of Tech. | Info Mgmt | Chengdu | Huang Guang Xin | A | H |
| Chengdu U. of Tech. | Info Mgmt | Chengdu | Yuan Yong | B | P |
| Sichuan U. | Electr. Eng. & Info. | Chengdu | Yingyi Tan | A | M |
| Sichuan U. | Electr. Eng. & Info. | Chengdu | Yingyi Tan | B | P |
| Sichuan U. | Math | Chengdu | unknown | A | P |
| Sichuan U. | Math | Chengdu | Yonghong Zhao | A | P |
| Sichuan U. | Math | Chengdu | Huilei Han | B | H |
| Sichuan U. | Math | Chengdu | Hai Niu | B | H |
| Sichuan U. | Math | Chengdu | Qiong Chen | B | M |
| Sichuan U. | Math | Chengdu | Huilei Han | A | P |
| Southwest Jiaotong U. | Math | Chengdu | Wang Lu | A | P |
| Southwest Jiaotong U. | Math | Chengdu | Wang Lu | B | H |
| Southwest Jiaotong U. | Math | Chengdu | Yueliang Xu | B | P |
| Southwest Jiaotong U. | Math | Chengdu | Yueliang Xu | B | H |
| Southwest U. of Sci. & Tech. | Sci. | Mianyang | Ke Long Zheng | A | P |
| Southwestern U. of Finance & Econ. | Econ. Math | Chengdu | Dai Dai | B | M |
| Southwestern U. of Finance & Econ. | Econ. Math | Chengdu | Dai Dai | B | H |
| Southwestern U. of Finance & Econ. | Econ. Math | Chengdu | Sun Yunlong | B | H |
| Southwestern U. of Finance & Econ. | Econ. Math | Chengdu | Chuan Ding | B | H |
| U. of Elec. Sci. & Tech. of China | Appl. Math | Chengdu | Li Mingqi | A | M |
| U. of Elec. Sci. & Tech. of China | Appl. Math | Chengdu | He Guoliang | A | P |
| U. of Elec. Sci. & Tech. of China | Appl. Math | Chengdu | Li Mingqi | A | P |
| U. of Electr. Sci. & Tech. of China | Chengdu C.: CS | Chengdu | Qiu Wei | B | P |
| U. of Elec. Sci. & Tech. of China | Info & CS | Chengdu | Qin Siyi | A | P |
Tianjin

| Institution | Department | City | Advisor | Prob. | Desig. |
|---|---|---|---|---|---|
| Civil Aviation U. of China | Air Traffic Mgmt | Tianjin | Zhaoning Zhang | B | H |
| Civil Aviation U. of China | CS | Tianjin | Xia Feng | B | H |
| Civil Aviation U. of China | CS & Tech. | Tianjin | Yuxiang Zhang | A | H |
| Civil Aviation U. of China | CS & Tech. | Tianjin | Chunli Li | B | P |
| Civil Aviation U. of China | Sci. C. | Tianjin | Zhang Chunxiao | B | H |
| Civil Aviation U. of China | Sci. C. | Tianjin | Di Shang Chen | B | P |
| Nankai U. | Automation | Tianjin | Chen Wanyi | B | H |
| Nankai U. | Econ. | Tianjin | Qi Bin | A | H |
| Nankai U. | Finance | Tianjin | Fang Wang | B | P |
| Nankai U. | Informatics & Prob. | Tianjin | Jishou Ruan | A | P |
| Nankai U. | Informatics & Prob. | Tianjin | Jishou Ruan | A | P |
| Nankai U. | Info & CS & Tech. | Tianjin | Zhonghua Wu | A | H |
| Nankai U. | Insurance | Tianjin | Bin Qi | A | P |
| Nankai U. | Mgmt Sci. & Eng. | Tianjin | Wenhua Hou | A | M |
| Nankai U. | Physics | Tianjin | Li Ying Zhang | A | H |
| Nankai U. | Physics | Tianjin | Liying Zhang | B | H |
| Nankai U. | Software | Tianjin | Wei Zhang | B | P |
| Nankai U. | Stats | Tianjin | Min-qian Liu | A | H |
| Tianjin Poly. U. | Sci. | Tianjin | unknown | A | P |
| Tianjin Poly. U. | Sci. | Tianjin | unknown | B | H |
Yunnan

| Institution | Department | City | Advisor | Prob. | Desig. |
|---|---|---|---|---|---|
| Chuxiong Normal U. | Math | Chuxiong | Jiade Tang | B | H |
| Yunnan U. | Comm. Eng. | Kunming | Haiyan Li | B | P |
| Yunnan U. | CS | Kunming | Shunfang Wang | A | P |
| Yunnan U. | CS | Kunming | Shunfang Wang | A | M |
| Yunnan U. | Electr. Eng. | Kunming | Haiyan Li | A | P |
| Yunnan U. | Electr. Eng. | Kunming | Haiyan Li | B | P |
| Yunnan U. | Info Sci. & Tech. | Kunming | Hong Wei | B | H |
| Yunnan U. | Stats | Kunming | Bo Zhang | B | P |
| Yunnan U. | Stats | Kunming | Jie Meng | A | H |
| Yunnan U. | Stats | Kunming | Jie Meng | A | H |
Zhejiang

| Institution | Department | City | Advisor | Prob. | Desig. |
|---|---|---|---|---|---|
| Hangzhou Dianzi U. | Appl. Physics | Hangzhou | Jianlan Chen | A | H |
| Hangzhou Dianzi U. | Appl. Physics | Hangzhou | Zhifeng Zhang | B | H |
| Hangzhou Dianzi U. | Info & Math Sci. | Hangzhou | Wei Li | A | M |
| Hangzhou Dianzi U. | Info & Math Sci. | Hangzhou | Zheyong Qiu | B | M |
| Ningbo Inst. of Zhejiang U. | Fund. Courses | Ningbo | Jufeng Wang | A | H |
| Ningbo Inst. of Zhejiang U. | Fund. Courses | Ningbo | Zhening Li | A | P |
| Ningbo Inst. of Zhejiang U. | Fund. Courses | Ningbo | Lihui Tu | A | H |
| Ningbo Inst. of Zhejiang U. | Fund. Courses | Ningbo | Lihui Tu | A | M |
| Ningbo Inst. of Zhejiang U. | Fund. Courses | Ningbo | Jufeng Wang | B | P |
| Shaoxing U. | Math | Shaoxing | He Jinghui | A | P |
| Shaoxing U. | Math | Shaoxing | Lu Jue | A | P |
| Zhejiang Gongshang U. | Appl. Math | Hangzhou | Zhao Heng | A | P |
| Zhejiang Gongshang U. | Appl. Math | Hangzhou | Zhao Heng | B | H |
| Zhejiang Gongshang U. | Info & Comp. Sci. | Hangzhou | Hua Jiukun | A | P |
| Zhejiang Gongshang U. | Info & Comp. Sci. | Hangzhou | Hua Jiukun | B | H |
| Zhejiang Gongshang U. | Math | Hangzhou | Ding Zhengzhong | A | P |
| Zhejiang Gongshang U. | Math | Hangzhou | Ding Zhengzhong | B | H |
| Zhejiang Normal U. | CS | Jinhua | Qiusheng Qiu | A | P |
| Zhejiang Normal U. | CS | Jinhua | Zuxiang Sheng | B | P |
| Zhejiang Normal U. | CS & Tech. | Jinhua | Ying Zhang | A | H |
| Zhejiang Normal U. | Math | Jinhua | Guolong He | B | P |
| Zhejiang Normal U. | Math | Jinhua | Yuehua Bu | A | P |
| Zhejiang Normal U. | Math | Jinhua | Wenqing Bao | B | H |
| Zhejiang Normal U. | Math | Jinhua | Yuanheng Wang | B | H |
| Zhejiang Normal U. | Math | Jinhua | Dong Chen | B | H |
| Zhejiang Sci-Tech U. | Math | Hangzhou | Shi Guosheng | A | P |
| Zhejiang Sci-Tech U. | Math | Hangzhou | Jiang Yiwei | B | P |
| Zhejiang Sci-Tech U. | Math | Hangzhou | Luo Hua | B | P |
| Zhejiang Sci-Tech U. | Psychology | Hangzhou | Hu Jueliang | B | H |
| Zhejiang U. | Math | Hangzhou | Zhiyi Tan | A | M |
| Zhejiang U. | Math | Hangzhou | Qifan Yang | A | H |
| Zhejiang U. | Sci. | Hangzhou | Shengyi Cai | A | P |
| Zhejiang U. | Sci. | Hangzhou | Shengyi Cai | B | P |
| Zhejiang U. | Sci. | Hangzhou | Yong Wu | B | M |
| Zhejiang U. City C. | Info & CS | Hangzhou | Xusheng Kang | A | H |
| Zhejiang U. City C. | Info & CS | Hangzhou | Gui Wang | A | P |
| Zhejiang U. City C. | Info & CS | Hangzhou | Xusheng Kang | B | P |
| Zhejiang U. City C. | Info & CS | Hangzhou | Gui Wang | B | M |
| Zhejiang U. of Finance & Econ. | Math & Stats | Hangzhou | Ji Luo | B | M |
| Zhejiang U. of Finance & Econ. | Math & Stats | Hangzhou | Ji Luo | B | H |
| Zhejiang U. of Finance & Econ. | Math & Stats | Hangzhou | Fulai Wang | B | H |
| Zhejiang U. of Sci. & Tech. | Math | Hangzhou | Yongzhen Zhu | B | H |
| Zhejiang U. of Tech. | Jianxing C. | Hangzhou | Shiming Wang | A | P |
| Zhejiang U. of Tech. | Jianxing C. | Hangzhou | Shiming Wang | A | P |
| Zhejiang U. of Tech. | Jianxing C. | Hangzhou | Wenxin Zhuo | A | P |
| Zhejiang U. of Tech. | Jianxing C. | Hangzhou | Wenxin Zhuo | B | H |
FINLAND

| Institution | Department | City | Advisor | Prob. | Desig. |
|---|---|---|---|---|---|
| Helsingin matematikkalukio | Math | Helsinki | Esa I. Lappi | B | H |
| Helsinki U. of Tech. | Math/Sys. Anal. | Helsinki | Kenrick Bingham | A | H |
| Helsinki Upper Sec. Schl | Math | Helsinki | Ville Tilvis | A | H |
| Helsinki Upper Sec. Schl | Math | Helsinki | Ville Tilvis | A | H |
| Päivölä C. of Math | Math | Tarttila | Janne Puustelli | B | M |
| Päivölä C. of Math | Math | Tarttila | Janne Puustelli | B | H |
| Päivölä C. of Math | Math | Tarttila | Merikki Lappi | B | P |
| U. of Helsinki | Math & Stats | Helsinki | Petri Ola | A | P |

GERMANY

| Institution | Department | City | Advisor | Prob. | Desig. |
|---|---|---|---|---|---|
| Jacobs U. | Eng. & Sci. | Bremen | Marcel Oliver | B | H |

HONG KONG

| Institution | Department | City | Advisor | Prob. | Desig. |
|---|---|---|---|---|---|
| Chinese U. of Hong Kong | CS & Eng. | Shatin | Fung Yu Young | B | P |
| Chinese U. of Hong Kong | Sys. Eng. | Shatin | Nan Chen | A | H |
| Hong Kong Baptist U. | Math | Kowloon | Wai Chee Shiu | A | P |
| Hong Kong Baptist U. | Math | Kowloon | Chong Sze Tong | B | P |
| Hong Kong U. of Sci. & Tech. | Math | Hong Kong | Min Yan | B | H |

INDONESIA

| Institution | Department | City | Advisor | Prob. | Desig. |
|---|---|---|---|---|---|
| Institut Teknologi Bandung | Math | Bandung | Agus Yodl Gunawan | A | H |
| Institut Teknologi Bandung | Math | Bandung | Rieske Hadianti | A | H |

IRELAND

| Institution | Department | City | Advisor | Prob. | Desig. |
|---|---|---|---|---|---|
| U. C. Cork | Appl. Math | Cork | Liya A. Zhornitskaya | A | M |
| U. C. Cork | Math | Cork | Benjamin W. McKay | A | M |
| U. C. Cork | Stats | Cork | Supratik Roy | A | H |
| National U. of Ireland | Math | Galway | Niall Madden | A | M |
| National U. of Ireland | Math | Galway | Niall Madden | A | H |
| National U. of Ireland | Math'l Physics | Galway | Petri T. Piiroinen | A | M |
| National U. of Ireland | Math'l Physics | Galway | Petri T. Piiroinen | B | H |

JAMAICA

| Institution | Department | City | Advisor | Prob. | Desig. |
|---|---|---|---|---|---|
| U. of Tech. | Chem. Eng. | Kingston | Nilza G. Justiz-Smith | A | M |
| U. of Tech. | Chem. Eng. | Kingston | Nilza G. Justiz-Smith | B | H |

KOREA

| Institution | Department | City | Advisor | Prob. | Desig. |
|---|---|---|---|---|---|
| Korea Adv. Inst. of Sci. & Tech. | Math Sci. | Daejeon | Yong-Jung Kim | B | M |
| Korea Adv. Inst. of Sci. & Tech. | Math Sci. | Daejeon | Yong-Jung Kim | B | H |

MEXICO

| Institution | Department | City | Advisor | Prob. | Desig. |
|---|---|---|---|---|---|
| U. Autónoma de Yucatán | Math | Mérida | Eric J. Avila-Vales | B | H |

SINGAPORE

| Institution | Department | City | Advisor | Prob. | Desig. |
|---|---|---|---|---|---|
| National U. of Singapore | Math | Singapore | Gongyun Zhao | A | H |
| National U. of Singapore | Math | Singapore | Karthik Natarajan | B | H |

SOUTH AFRICA

| Institution | Department | City | Advisor | Prob. | Desig. |
|---|---|---|---|---|---|
| Stellenbosch U. | Math Sci. | Stellenbosch | Jacob A.C. Weideman | A | H |
| Stellenbosch U. | Math Sci. | Stellenbosch | Jacob A.C. Weideman | A | P |

UNITED KINGDOM

| Institution | Department | City | Advisor | Prob. | Desig. |
|---|---|---|---|---|---|
| Oxford U. | Math | Oxford | Jeffrey H. Giansiracusa | B | M |
| Oxford U. | Math | Oxford | Jeffrey H. Giansiracusa | B | M |

# The Impending Effects of North Polar Ice Cap Melt

Benjamin Coate

Nelson Gross

Megan Longo

College of Idaho

Caldwell, ID

Advisor: Michael P. Hitchman

# Abstract

Because of rising global temperatures, the study of North Polar ice melt has become increasingly important.

- How will the rise in global temperatures affect the melting polar ice caps and the level of the world's oceans?
- Given the resulting increase in sea level, what problems should metropolitan areas in a region such as Florida expect in the next 50 years?

We develop a model to answer these questions.

Sea level will not be affected by melting of the floating sea ice that makes up most of the North Polar ice cap, but it will be significantly affected by the melting of freshwater land ice found primarily on Greenland, Canada, and Alaska. Our model begins with the current depletion rate of this freshwater land ice and takes into account

- the exponential increase in melting rate due to rising global temperatures,
- the relative land/ocean ratios of the Northern and Southern Hemispheres,
- the percentage of freshwater land ice melt that stays in the Northern Hemisphere due to ocean currents, and
- thermal expansion of the ocean due to increased temperatures in its top layer.

We construct best- and worst-case scenarios. We find that in the next 50 years, the relative sea level will rise $12\mathrm{cm}$ to $36\mathrm{cm}$.

To illustrate the consequences of such a rise, we consider four Florida coastal cities: Key West, Miami, Daytona Beach, and Tampa. The problems that will arise in many areas are

- the loss of shoreline property,
- a rise of the water table,
- instability of structures,
- overflowing sewers,
- increased flooding in times of tropical storms, and
- drainage problems.

Key West and Miami are the most susceptible to all of these effects.
While Daytona Beach and Tampa are relatively safe from catastrophic events, they will still experience several of these problems to a lesser degree.

The effects of the impending rise in sea level are potentially devastating; however, there are steps and precautions to take to prevent and minimize destruction. We suggest several ways for Florida to combat the effects of rising sea levels: public awareness, new construction codes, and preparedness for natural disasters.

The text of this paper appears on pp. 237-247.

# A Convenient Truth: Forecasting Sea Level Rise

Jason Chen

Brian Choi

Joonhahn Cho

Duke University

Durham, NC

Advisor: Scott McKinley

# Abstract

Greenhouse-gas emissions have produced global warming, including melting of the Greenland Ice Sheet (GIS), resulting in sea-level rise, a trend that could devastate coastal regions. A model is needed to quantify these effects for policy assessments.

We present a model that predicts sea-level trends over a 50-year period, based on mass balance and thermal expansion acting on a simplified ice-sheet geometry. Mass balance is represented using the heat equation with Neumann conditions and sublimation-rate equations. Thermal expansion is estimated by an empirically derived equation relating volume expansion to temperature increase. Thus, the only exogenous variables are time and temperature.

We apply the model to varying scenarios of greenhouse-gas-concentration forcings. We solve the equations numerically to yield sea-level-increase projections. We then project the effects on Florida, as modeled from USGS geospatial elevation data and metropolitan population data.

The results of our model agree well with past measurements, strongly supporting its validity. The strong linear trend shown by our scenarios indicates both insensitivity to errors in inputs and robustness with respect to the temperature function.

Based on our model, we provide a cost-benefit analysis showing that small investments in protective technology could spare coastal regions from flooding. Finally, the predictions indicate that reductions in greenhouse-gas emissions are necessary to prevent long-term sea-level-rise disasters.

The text of this paper appears on pp. 249-265.

# Fighting the Waves: The Effect of North Polar Ice Cap Melt on Florida

Amy M. Evans

Tracy L. Stepien

University at Buffalo, The State University of New York

Buffalo, NY

Advisor: John Ringland

# Abstract

A consequence of global warming that directly impacts U.S. citizens is the threat of rising sea levels due to melting of the North Polar ice cap. One of the many states in danger of losing coastal land is Florida. Its low elevations and numerous sandy beaches will lead to higher erosion rates as sea levels increase. The direct effect on sea level of only the North Polar ice cap melting would be minimal, yet the indirect effects of causing other bodies of ice to melt would be crucial. We model individually the contributions of various ice masses to rises in sea level, using ordinary differential equations to predict the rate at which changes would occur.

For small ice caps and glaciers, we propose a model based on global mean temperature. Relaxation time and melt sensitivity to temperature change are included in the model. Our model of the Greenland and Antarctic ice sheets incorporates ice-mass area, volume, accumulation, and loss rates. Thermal expansion of water also influences sea level, so we include it too. Summing all the contributions, sea levels could rise 11-27 cm in the next half-century.

A rise in sea level of one unit is equivalent to a horizontal loss of coastline of 100 units. We investigate how much coastal land would be lost by analyzing relief and topographic maps. By 2058, in the worst-case scenario, there is the potential to lose almost $27\mathrm{m}$ of land.
Florida would lose most of its smaller islands and sandy beaches. Moreover, the ports of most major cities, with the exception of Miami, would sustain some damage.

Predictions from the Intergovernmental Panel on Climate Change (IPCC) and from the U.S. Environmental Protection Agency (EPA), and simulations from the Global Land One-km Base Elevation (GLOBE) digital elevation model (DEM), match our results and validate our models.

While the EPA and the Florida state government have begun to implement plans of action, further measures need to be put into place, because there will be a visible sea-level rise of $3 - 13\mathrm{cm}$ in only 10 years (2018).

The text of this paper appears on pp. 267-284.

# Erosion in Florida: A Shore Thing

Matt Thies

Bob Liu

Zachary W. Ulissi

University of Delaware

Newark, DE

Advisor: Louis Frank Rossi

# Abstract

Rising sea levels and beach erosion are increasingly important problems for coastal Florida. We model this dynamic behavior in four discrete stages: global temperature, global sea level, equilibrium beach profiles, and applications to Miami and Daytona Beach. We use the Intergovernmental Panel on Climate Change (IPCC) temperature models to establish predictions through 2050. We then adapt models of Arctic melting to identify a model for global sea level. This model predicts a likely increase of $15\mathrm{cm}$ within 50 years.

We then model the erosion of the Daytona and Miami beaches to identify beach recession over the next 50 years. The model predicts likely recessions of $66\mathrm{m}$ in Daytona and $72\mathrm{m}$ in Miami by 2050, roughly equal to a full city block in each case. Regions of Miami are also deemed susceptible to flooding from these changes. Without significant attention to future solutions as outlined, large-scale erosion will occur. These results are strongly dependent on the behavior of the climate over this period, as we verify by testing several models.

The text of this paper appears on pp. 285-300.

# A Difficulty Metric and Puzzle Generator for Sudoku

Christopher Chang

Zhou Fan

Yi Sun

Harvard University

Cambridge, MA

Advisor: Clifford H. Taubes

# Abstract

We present a novel solution to creating and rating the difficulty of Sudoku puzzles. We frame Sudoku as a search problem and use the expected search time to determine the difficulty of various strategies. Our method is relatively independent of external views on the relative difficulties of strategies.

Validating our metric with a sample of 800 puzzles rated externally into eight gradations of difficulty, we found a Goodman-Kruskal $\gamma$ coefficient of 0.82, indicating significant correlation [Goodman and Kruskal 1954]. An independent evaluation of 1,000 typical puzzles produced a difficulty distribution similar to the distribution of solve times empirically created by millions of users at http://www.websudoku.com.

Based upon this difficulty metric, we created two separate puzzle generators. One generates mostly easy to medium puzzles; when run with four difficulty levels, it creates puzzles (or boards) of those levels in 0.25, 3.1, 4.7, and $30\mathrm{min}$. The other puzzle generator modifies difficult boards to create boards of similar difficulty; when tested on a board of difficulty 8,122, it created 20 boards with average difficulty 7,111 in $3\mathrm{min}$.

The text of this paper appears on pp. 305-326.

# Taking the Mystery Out of Sudoku Difficulty: An Oracular Model

Sarah Fletcher

Frederick Johnson

David R. Morrison

Harvey Mudd College

Claremont, CA

Advisor: Jon Jacobsen

# Abstract

In the last few years, the 9-by-9 puzzle grid known as Sudoku has gone from being a popular Japanese puzzle to a global craze. As its popularity has grown, so has the demand for harder puzzles whose difficulty level has been rated accurately.

We devise a new metric for gauging the difficulty of a Sudoku puzzle. We use an oracle to model the growing variety of techniques prevalent in the Sudoku community. This approach allows our metric to reflect the difficulty of the puzzle itself rather than the difficulty with respect to some particular set of techniques or some perception of the hierarchy of the techniques. Our metric assigns a value in the range $[0, 1]$ to a puzzle.

We also develop an algorithm that generates puzzles with unique solutions across the full range of difficulty. While it does not produce puzzles of a specified difficulty on demand, it produces the various difficulty levels frequently enough that, as long as the desired score range is not too narrow, it is reasonable simply to generate puzzles until one of the desired difficulty is obtained. Our algorithm has exponential running time, necessitated by the fact that it solves the puzzle it is generating to check for uniqueness. However, we apply an algorithm known as Dancing Links to achieve a reasonable runtime in all practical cases.

The text of this paper appears on pp. 327-341.

# Difficulty-Driven Sudoku Puzzle Generation

Martin Hunt

Christopher Pong

George Tucker

Harvey Mudd College

Claremont, CA

Advisor: Zach Dodds

# Abstract

Many existing Sudoku puzzle generators create puzzles randomly by starting with either a blank grid or a filled-in grid. To generate a puzzle of a desired difficulty level, puzzles are made, graded, and discarded until one meets the required difficulty level, as evaluated by a predetermined difficulty metric. The efficiency of this process relies on randomness to span all difficulty levels.

We describe generation and evaluation methods that accurately model human Sudoku-playing.
Instead of a completely random puzzle generator, we propose a new algorithm, Difficulty-Driven Generation, that guides the generation process by adding cells to an empty grid while maintaining the desired difficulty.

We encapsulate the most difficult technique required to solve the puzzle and the number of available moves at any given time into a rounds metric. A round is a single stage in the puzzle-solving process, consisting of a single high-level move or a maximal series of low-level moves. Our metric counts the number of rounds of each type.

Implementing our generator algorithm requires using an existing metric, which assigns a puzzle a difficulty corresponding to the most difficult technique required to solve it. We propose using our rounds metric as a method to further simplify our generator.

The text of this paper appears on pp. 343-362.

# Ease and Toil: Analyzing Sudoku

Seth B. Chadwick

Rachel M. Krieg

Christopher E. Granade

University of Alaska Fairbanks

Fairbanks, AK

Advisor: Orion S. Lawlor

# Abstract

Sudoku is a logic puzzle in which the numbers 1 through 9 are arranged in a $9 \times 9$ matrix, subject to the constraint that there are no repeated numbers in any row, column, or designated $3 \times 3$ square.

In addition to being entertaining, Sudoku promises insight into computer science and mathematical modeling. Since Sudoku-solving is an NP-complete problem, algorithms to generate and solve puzzles may offer new approaches to a whole class of computational problems. Moreover, Sudoku construction is essentially an optimization problem.

We propose an algorithm to construct unique Sudoku puzzles with four levels of difficulty. We attempt to minimize the complexity of the algorithm while still maintaining separate difficulty levels and guaranteeing unique solutions.

To accomplish our objectives, we develop metrics to analyze the difficulty of a puzzle.
By applying our metrics to published control puzzles with specified difficulty levels, we develop classification functions. We use the functions to ensure that our algorithm generates puzzles with difficulty levels analogous to those published. We also seek to measure and reduce the computational complexity of the generation and metric-measurement algorithms.

Finally, we analyze and reduce the complexity involved in generating puzzles while maintaining the ability to choose the difficulty level of the puzzles generated. To do so, we implement a profiler and perform statistical hypothesis-testing to streamline the algorithm.

The text of this paper appears on pp. 363-379.

# A Crisis to Rival Global Warming: Sudoku Puzzle Generation

Jake Linenthal

Alex Twist

Andy Zimmer

University of Puget Sound

Tacoma, WA

Advisor: Michael Z. Spivey

# Abstract

We model solution techniques and their application by an average Sudoku player. A simulation based on our model determines a likely solution path for the player. We define a metric that is linear in the length of this path and proportional to a measure of the average difficulty of the techniques used. We use this metric to define seven difficulty levels for Sudoku puzzles.

We confirm the accuracy and consistency of our metric by considering rated puzzles from USA Today and Sudoku.org.uk. Our metric is superior to a metric defined by the count of initial hints, as well as to a metric that measures the constraints placed on the puzzle by the initial hints.

We develop an algorithm that produces puzzles with unique solutions and varying numbers of initial hints. Our puzzle generator starts with a random solved Sudoku board, removes a number of hints, and employs a fast solver to ensure a unique solution. We improve the efficiency of puzzle generation by reducing the expected number of calls to the solver.
On average, our generation algorithm performs more than twice as fast as the baseline generation algorithm.

We apply our metric to generated puzzles until one matches the desired difficulty level. Since certain initial board configurations result in puzzles that are more difficult on average than a random configuration, we modify our generation algorithm to restrict the initial configuration of the board, thereby reducing the amount of time required to generate a puzzle of a certain difficulty.

[EDITOR'S NOTE: This Meritorious paper won the Ben Fusaro Award for the Sudoku Problem. The full text of the paper does not appear in this issue of the Journal.]

# Cracking the Sudoku: A Deterministic Approach

David Martin

Erica Cross

Matt Alexander

Youngstown State University

Youngstown, OH

Advisor: George T. Yates

# Summary

We formulate a Sudoku-puzzle-solving algorithm that implements a hierarchy of four simple logical rules commonly used by humans. The difficulty of a puzzle is determined by recording the sophistication and relative frequency of the methods required to solve it. Four difficulty levels are established for a puzzle, each pertaining to a range of numerical values returned by the solving function.

Like humans, the program begins solving each puzzle with the lowest level of logic necessary. When all lower methods have been exhausted, the next echelon of logic is implemented. After each step, the program returns to the lowest level of logic. The procedure loops until either the puzzle is completely solved or the techniques of the program are insufficient to make further progress.

The construction of a Sudoku puzzle begins with the generation of a solution by means of a random-number-based function. Working backwards from the solution, numbers are removed one by one, at random, until one of several conditions, such as a minimum difficulty rating and a minimum number of empty squares, has been met.
Following each change in the grid, the difficulty is evaluated. If the program cannot solve the current puzzle, then either there is not a unique solution, or the solution is beyond the grasp of the methods of the solver. In either case, the last solvable puzzle is restored and the process continues.

Uniqueness is guaranteed because the algorithm never guesses. If there is not sufficient information to draw further conclusions—for example, if an arbitrary choice must be made (which must invariably occur for a puzzle with multiple solutions)—the solver simply stops. For obvious reasons, puzzles lacking a unique solution are undesirable. Since the logical techniques of the program enable it to solve most commercial puzzles (for example, most "evil" puzzles from Greenspan and Lee [2008]), we assume that demand for puzzles requiring logic beyond the current grasp of the solver is low. Therefore, there is no need to distinguish between puzzles requiring very advanced logic and those lacking unique solutions.

The text of this paper appears on pp. 381-394.

Pp. 237-248 can be found on the Tools for Teaching 2008 CD-ROM.

# The Impending Effects of North Polar Ice Cap Melt

Benjamin Coate

Nelson Gross

Megan Longo

College of Idaho

Caldwell, ID

Advisor: Michael P. Hitchman

# Abstract

Because of rising global temperatures, the study of North Polar ice melt has become increasingly important.

- How will the rise in global temperatures affect the melting polar ice caps and the level of the world's oceans?
- Given the resulting increase in sea level, what problems should metropolitan areas in a region such as Florida expect in the next 50 years?

We develop a model to answer these questions.

Sea level will not be affected by melting of the floating sea ice that makes up most of the North Polar ice cap, but it will be significantly affected by the melting of freshwater land ice found primarily on Greenland, Canada, and Alaska.
Our model begins with the current depletion rate of this freshwater land ice and takes into account

- the exponential increase in melting rate due to rising global temperatures,
- the relative land/ocean ratios of the Northern and Southern Hemispheres,
- the percentage of freshwater land ice melt that stays in the Northern Hemisphere due to ocean currents, and
- thermal expansion of the ocean due to increased temperatures in its top layer.

We construct best- and worst-case scenarios. We find that in the next 50 years, the relative sea level will rise $12\mathrm{cm}$ to $36\mathrm{cm}$.

To illustrate the consequences of such a rise, we consider four Florida coastal cities: Key West, Miami, Daytona Beach, and Tampa. The problems that will arise in many areas are

- the loss of shoreline property,
- a rise of the water table,
- instability of structures,
- overflowing sewers,
- increased flooding in times of tropical storms, and
- drainage problems.

Key West and Miami are the most susceptible to all of these effects. While Daytona Beach and Tampa are relatively safe from catastrophic events, they will still experience several of these problems to a lesser degree.

The effects of the impending rise in sea level are potentially devastating; however, there are steps and precautions to take to prevent and minimize destruction. We suggest several ways for Florida to combat the effects of rising sea levels: public awareness, new construction codes, and preparedness for natural disasters.

# Introduction

We consider for the next 50 years the effects on the Florida coast of melting of the North Polar ice cap, with particular attention to the cities noted. This question can be broken down into two more-detailed questions:

- What is the melting rate, and what are its effects on sea level?
- How will the rising water affect the Florida cities, and what can they do to counteract and prepare?

Our models use the geophysical data in Table 1 and the elevations of cities in Table 2.

Table 1. Geophysical data.

| Entity | Value | Unit |
|---|---|---|
| Total volume of ice caps | $2.422 \times 10^7$ | $\mathrm{km}^3$ |
| Surface area of world's oceans | $3.611 \times 10^8$ | $\mathrm{km}^2$ |
| Surface area of ice on Greenland | $1.756 \times 10^6$ | $\mathrm{km}^2$ |
| Volume of ice on Greenland | $2.624 \times 10^6$ | $\mathrm{km}^3$ |

Table 2. Elevations of Florida cities.

| City | Average elevation (m) | Maximum elevation (m) |
|---|---|---|
| Key West | 2.44 | 5.49 |
| Miami | 2.13 | 12.19 |
| Daytona Beach | 2.74 | 10.36 |

# Preliminary Discussion of Polar Ice

There are two types of polar ice:

- frozen sea ice, as in the North Polar ice cap; and
- freshwater land ice, primarily in Greenland, Canada, and Alaska.

# Frozen Seawater

Melting of frozen seawater has little effect because it is already floating. According to the Archimedean principle of buoyancy, an object immersed in a fluid is buoyed up by a force equal to the weight of the fluid displaced by the object. About $10\%$ of sea ice is above water, since the densities of seawater and solid ice are $1026 \, \mathrm{kg/m^3}$ and $919 \, \mathrm{kg/m^3}$. So, if this ice were to melt, $10\%$ of the original volume would be added as water to the ocean. There would be little effect on relative sea level if the entire North Polar ice cap were to melt.

# The Ice Caps

Although the melting of the ice caps will not cause a significant rise in sea level, several problems will indeed arise if they disappear.

- Initially, there will be a small decrease in the average temperature of the oceans in the Northern Hemisphere.
- The ice caps reflect a great deal of sunlight, which in turn helps to reduce temperature in that region. When that ice is gone, additional energy will be absorbed, and over time we will see a significant increase in global temperatures, both in the oceans and in the air.

# Freshwater Ice on Land

When freshwater ice on land melts and runs into the ocean, that water is added permanently to the ocean. The total volume of the ice on Greenland alone is $2.624 \times 10^{6} \mathrm{~km}^{3}$. If all of this ice were to melt and add to the ocean (not taking into account possible shifting/depressing of the ocean floor or added surface area of the ocean), the average global sea level would rise $6.7\mathrm{m}$ just from the ice on Greenland.

Our question now becomes:

How will the melting of freshwater land ice affect the relative level of the world's oceans over the next 50 years?
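The $10\%$ figure for floating sea ice follows directly from Archimedes' principle: the submerged fraction of a floating body equals the ratio of its density to that of the fluid. A quick check, using only the two densities quoted above:

```python
# Fraction of floating sea ice above the waterline, by Archimedes' principle:
# a floating body displaces its own weight of fluid, so the submerged
# fraction of its volume equals rho_ice / rho_seawater.

RHO_SEAWATER = 1026.0  # kg/m^3, density of seawater (from the text)
RHO_ICE = 919.0        # kg/m^3, density of solid ice (from the text)

submerged_fraction = RHO_ICE / RHO_SEAWATER
above_fraction = 1.0 - submerged_fraction

print(f"Fraction of ice volume above water: {above_fraction:.1%}")  # about 10%
```

The computed value is about 10.4%, consistent with the rounded figure used in the text.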
+ +# Model 1: Constant Temperature + +# Predicted Increase in Sea Level + +To model the effects of ice-cap melt on Florida, we develop a model that provides a quick estimate of expected flooding. We assume: + +- No increase in the rate of current ice-melt. +- Uniform distribution of the water from the ice melt throughout the world's oceans. +- No significant change in global temperatures and weather conditions. + +We use the notation: + +$\% \text{Melt} =$ percentage of land ice melting per decade + +$V_{I} =$ current volume of land ice in Northern Hemisphere + +$C_{\mathrm{I}\rightarrow \mathrm{W}} =$ conversion factor volume of ice to volume of water $= 0.919$ + +$\mathrm{SA}_{\mathrm{WO}} = \text { surface area of the world 's oceans } = 3.611 \times 10^{8} \mathrm{~km}^{2}$ + +For a given decade, our equation becomes + +$$ +\text{Increase in ocean sea level} = \frac{\% \mathrm{Melt}\times V_{I}\times C_{\mathrm{I}\rightarrow\mathrm{W}}}{\mathrm{SA}_{\mathrm{WO}}}. +$$ + +Data from satellite images show a decrease in the Greenland ice sheet of $239\mathrm{km}^3$ per year [Cockrell School of Engineering 2006]. Extrapolating linearly, after 50 years we get an increase in sea level of $3.3\mathrm{cm}$ . + +We must also take into account the contributions of smaller land ice masses in Alaska and Canada, whose melting is contributing to the ocean sea level rises of $0.025\mathrm{cm}$ and $0.007\mathrm{cm}$ per year [Abdalati 2005]. Extrapolating linearly over 50 years, the total from the two is $1.6\mathrm{cm}$ , giving a total increase in sea level of $4.9\mathrm{cm} \approx 5\mathrm{cm} \approx 2$ in. by 2058. + +# Effects on Major Metropolitan Areas of Florida + +Even after 50 years there will not be any significant effect on the coastal regions of Florida, since all of these coastal cites are at least $2\mathrm{m}$ above sea + +level on average. There will, however, be correspondingly higher flooding during storms and hurricanes. 
Unfortunately, these results are based on simple assumptions that do not account for several factors that play a role in the rising sea level. We move on to a second model, which gets us closer to a realistic value.

# Model 2: Variable-Temperature Model

Our next model takes into account the effect of a variable temperature on the melting of the polar ice caps. Our basic model assumes constant overall temperature in the polar regions, which will not be the case.

# Predicted Increase in Temperature

The average global temperature rose about $1^{\circ}\mathrm{C}$ in the 20th century, but over the last 25 years the rate of warming has been substantially higher [National Oceanic and Atmospheric Administration (NOAA) 2008]. In addition, much of the added heat and carbon dioxide will be absorbed by the ocean, which will increase its temperature.

Consequently, scientists project an increase in the world's temperature of 0.7 to $2.9^{\circ}\mathrm{C}$ over the next 50 years [Ekwurzel 2007]. An increase in overall temperature will cause freshwater land ice to melt faster, which in turn will cause the ocean to rise higher than predicted by the basic model.

We examine how an increase of 0.7 to $2.9^{\circ}\mathrm{C}$ over the next 50 years will affect sea level.

# Model Results

We consider best- and worst-case scenarios. Again, we linearize; for example, for the best-case scenario of $0.7^{\circ}\mathrm{C}$ over 50 years, we assume an increase of $0.14^{\circ}\mathrm{C}$ per decade.

# Best-Case Scenario: Increase of $0.7^{\circ}\mathrm{C}$ Over 50 Years

The ice caps will absorb more heat and melt more rapidly. We calculate sea-level rise at 10-year intervals.
The extra heat $Q_x$ absorbed can be quantified as

$$
Q_x = m s T,
$$

where

$x$ is the duration (yr),

$m$ is the mass of the ice cap (g),

$s$ is the specific heat of ice $(2.092 \mathrm{~J/(g \cdot {}^{\circ}C)})$, and

$T$ is the change in overall global temperature $(^{\circ}\mathrm{C})$.

We find

$$
Q_{50} = 4.85 \times 10^{18} \mathrm{~kJ}.
$$

To determine how much extra ice will melt in the freshwater land-ice regions due to an overall increase of $0.7^{\circ}\mathrm{C}$, we divide the amount of heat absorbed by the ice by the specific latent heat of fusion for water, $334\mathrm{~kJ/kg}$ at $0^{\circ}\mathrm{C}$, getting a mass of ice melted of $1.45 \times 10^{16}\mathrm{~kg}$.

Since water has a mass of $1{,}000\mathrm{~kg}$ per cubic meter, the total volume of water added to the ocean is $1.45 \times 10^{13}\mathrm{~m}^3$. Dividing by the surface area of the ocean gives a corresponding sea-level rise of $4.0\mathrm{~cm}$.

This rise is in addition to the $4.9\mathrm{~cm}$ calculated in the constant-temperature Model 1. Thus, in our best-case scenario, in 50 years the ocean will rise about $9\mathrm{~cm}$.

# Worst-Case Scenario: Increase of $2.9^{\circ}\mathrm{C}$ Over 50 Years

Using the same equations, we find in our worst-case scenario that in 50 years the ocean will rise about $21~\mathrm{cm}$.

# Model 3: Ocean Volume under Warming

The previous two models determined the total volume of water to be added to the world's oceans as a result of the melting of freshwater land ice. However, they do not take into account the relative surface areas of the oceans of the Northern Hemisphere and the Southern Hemisphere. The difference in the ratios of land area to ocean area in the two hemispheres is quite striking and gives a way of improving our model of water distribution.
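As a numerical check, the Model 2 best-case chain above (heat absorbed, ice melted, sea-level rise) can be reproduced directly from the quoted values:

```python
# Model 2 best case: heat absorbed -> ice melted -> sea-level rise.
Q50_KJ = 4.85e18          # kJ of extra heat absorbed over 50 years
L_FUSION = 334.0          # kJ/kg, latent heat of fusion at 0 C
RHO_WATER = 1000.0        # kg/m^3
SA_OCEAN_M2 = 3.611e14    # m^2, world-ocean surface area

melt_kg = Q50_KJ / L_FUSION                      # mass of ice melted
rise_cm = melt_kg / RHO_WATER / SA_OCEAN_M2 * 100.0

print(f"{melt_kg:.2e}")   # ~1.45e+16 kg
print(round(rise_cm, 1))  # ~4.0 cm, on top of Model 1's 4.9 cm
```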
# Northern Hemisphere Ocean Surface Area

Approximately $44\%$ of the world's ocean surface area is located in the Northern Hemisphere and $56\%$ in the Southern Hemisphere [Pidwirny 2008]. The surface area of the ocean in the Northern Hemisphere is therefore $1.58 \times 10^{8} \mathrm{~km}^{2}$.

# Percentage of Ice Melt Staying in the Northern Hemisphere

Similar melting of freshwater land ice is occurring in southern regions. So, we have water pouring down from the North Pole and water rushing up from the South Pole. There is very little information regarding flow rates and distributions of water throughout the world's oceans. Since most of the ice melt is added to the top layer of the ocean, that water will be subject to the major ocean currents, under which water in the Northern Hemisphere mainly stays in the north. For the sake of argument, we assume conservatively that just half of the melted freshwater land ice from the north stays in the Northern Hemisphere.

# Expanding Volume Due to Increasing Ocean Temperatures

Several factors contribute to warming the ocean:

- The rising air temperature will warm the ocean.
- As the polar ice caps melt, they will reflect less and less sunlight, so the ocean will absorb a great deal of that heat.
- Progressively higher levels of carbon dioxide will be forced into the ocean.

Below $215\mathrm{~m}$, the pressure and lack of sunlight counteract increases in temperature. The water in the top $215\mathrm{~m}$ of the ocean, however, will warm and expand in volume. Water at that temperature $(15^{\circ}\mathrm{C})$ has a coefficient of thermal expansion of $2.00\times 10^{-4}\mathrm{~K}^{-1}$.
We estimate the water-level rise for the best- and worst-case scenarios via

$$
V_{\mathrm{change}} = V_{\mathrm{start}} B T_{\mathrm{change}},
$$

where

- $V_{\mathrm{start}} =$ initial volume,
- $V_{\mathrm{change}} =$ change in volume,
- $B =$ the thermal expansion factor $(2.00 \times 10^{-4} \mathrm{~K}^{-1})$, and
- $T_{\mathrm{change}} =$ the change in temperature.

Dividing out the surface area (roughly equal for both volumes), we find a change in depth of $2\mathrm{~cm}$ in the best-case scenario and $12.5\mathrm{~cm}$ in the worst case, after 50 years.

# Putting It All Together

Figure 1 shows the results from Model 3. After 50 years, the sea level surrounding Florida will rise between 12 and $36\mathrm{~cm}$.

# Effect on Florida

While the ocean-level rise surrounding each of the four cities will be comparable, there will be differential effects due to topography.

![](images/8b93025e21d252e5cc8b3f436f8ff808a4b77c3b6d3b3022224ff79baf0e39d5.jpg)
Figure 1. Results from Model 3.

# Key West

Key West is the lowest in elevation of our four chosen coastal cities, with an average elevation of $2.44\mathrm{~m}$. After 50 years, the sea level will rise between $12\mathrm{~cm}$ (4.7 in.) and $36\mathrm{~cm}$ (14.2 in.).

This city is by far the most susceptible to flooding. When the sea level rises, there will be a proportional rise in the water table of the city. So, not only will the city begin to flood at higher elevations than it does currently, but it will also be harder to drain water after storms. In addition, there will be problems with overflowing sewers.

Based on our projections in Model 3, $75\%$ of Key West will be at serious risk for flooding in about 50 years, including the airport. Key West needs to consider how to prevent water from entering the airport area or even start thinking about building a new airport at a higher elevation.
[This is of particular importance considering the flooding of Key West in the summer of 2008.]

# Miami

Miami will experience problems similar to those of Key West. Under the range of scenarios, there will be a small loss of beachfront land and some minor flooding along the Miami River. Again, there will be possible problems with overflowing sewers and drainage due to the raised water table. However, one of the biggest problems might arise during a significant storm such as a hurricane. With the added height of the ocean and the low elevation of downtown Miami, the city could experience long-lasting floods of up to $36\mathrm{~cm}$ in areas where flooding is currently minimal.

In 50 years, many buildings could be far too close to the ocean for comfort, and their structural integrity might be compromised.

# Daytona Beach

Daytona Beach will experience some loss of shoreline property and be slightly more susceptible to flooding in low-lying areas. In addition, flood risks will be more severe in times of tropical storms and hurricanes. However, since there is a sharp increase in elevation as one goes inland, flooding will be minimal and city drainage will remain relatively normal.

# Tampa

Tampa will experience very little change from its current situation, since its lowest-lying regions are above $8\mathrm{~m}$. However, Tampa needs to be prepared for additional flooding and possible drainage problems.

# General Recommendations for Coastal Florida

- Limit coastal erosion. The more erosion, the more beachfront property will be lost.
- Monitor the water table. As the sea level rises, so will the water table, which affects foundations of buildings and sewers. It would be advisable to restrict building construction within a set distance of the coast.
- Prepare for flooding. Higher sea level will produce greater flooding in storms. Cities should prepare evacuation and emergency plans.
- Use government information resources.
When it comes to predicting whether or not one's particular town is in danger, there are excellent online sources for viewing potential flood levels. We highly recommend the resources of the Federal Emergency Management Agency at www.fema.gov.
- Inform the public now. Information is the key to preparation, and preparation in turn is the best way to combat the effects of the rising sea level over the years to come.

# References

Abdalati, Waleed. 2005. Canada's shrinking ice caps. Arctic Science Journeys radio script. Sea Grant Alaska. http://seagrant.uaf.edu/news/05ASJ/03.25.05canada-ice.html. Accessed 17 February 2008.

Chang, Raymond. 2003. General Chemistry: The Essential Concepts. 3rd ed. New York: McGraw-Hill.
Chen, J.L., C.R. Wilson, and B.D. Tapley. 2006. Satellite gravity measurements confirm accelerated melting of Greenland Ice Sheet. 7 July 2006. http://www.sciencemag.org/cgi/content/abstract/1129007v1.
Cockrell School of Engineering, University of Texas at Austin. 2006. Greenland's ice loss accelerating rapidly, gravity-measuring satellites reveal. 10 August 2006. http://www.engr.utexas.edu/news/articles/200608101082/index.cfm. Accessed 16 February 2008.
Ekwurzel, B. 2007. Findings of the IPCC Fourth Assessment Report: Climate Change Science. http://www.ucsusa.org/global_warming/science_and_impacts/science/findings-of-the-ipcc-fourth-2.html. Accessed 16 February 2008.
Florida Department of Environmental Protection. n.d. Erosion control line. http://data.labins.org/2003/SurveyData/WaterBoundary/ecl/ecl_search.cfm. Accessed 16 February 2008.
Galapagos ocean currents. 2007. GalapagosOnline. http://www.galapagosonline.com/Galapagos_Natural_History/Oceanography/Currents.html. Accessed 17 February 2008.
Haxby, William. 2000. Water world. http://www.pbs.org/wgbh/nova/warnings/waterworld/. Accessed 16 February 2008.
Morano, Marc. 2007. Latest scientific studies refute fears of Greenland melt. U.S.
Senate Committee on Environment and Public Works. http://epw.senate.gov/public/index.cfm?FuseAction=Minority.Blogs&ContentRecord_id=175b568a-802a-23ad-4c69-9bdd978fb3cd. Accessed 17 February 2008.
National Oceanic and Atmospheric Administration (NOAA). 2008. NOAA helps prepare East Coast communities for tsunami, storm-driven flood threats. http://www.noanews.noaa.gov/stories2007/20071203_eastcoasttsunami.html. Accessed 16 February 2008.
National Snow and Ice Data Center. 2008. All about sea ice. http://nsidc.org/seaice/intro.html. Accessed 16 February 2008.
NSTATE, LLC. 2007. The geography of Florida. http://www.netstate.com/states/geography/f1/geography.htm. Accessed 16 February 2008.
Pidwirny, Michael. 2008. Ocean. In The Encyclopedia of Earth, edited by Cutler J. Cleveland. Washington, DC: Environmental Information Coalition, National Council for Science and the Environment. http://www.eoearth.org/article/Ocean.

![](images/d00e4d4778245e1425940774ddaa36436129dfbc7e678eef230144db607add08.jpg)

Team members Megan Longo, Nelson T. Gross, Benjamin Coate, and advisor Dr. Mike Hitchman.

# A Convenient Truth: Forecasting Sea Level Rise

Jason Chen
Brian Choi
Joonhahn Cho

Duke University Durham, NC

Advisor: Scott McKinley

# Abstract

Greenhouse-gas emissions have produced global warming, including melting in the Greenland Ice Sheet (GIS), resulting in sea-level rise, a trend that could devastate coastal regions. A model is needed to quantify these effects for policy assessments.

We present a model that predicts sea-level trends over a 50-year period, based on mass balance and thermal expansion acting on a simplified ice-sheet geometry. Mass balance is represented using the heat equation with Neumann conditions and sublimation-rate equations. Thermal expansion is estimated by an empirically derived equation relating volume expansion to temperature increase. Thus, the only exogenous variables are time and temperature.
We apply the model to varying scenarios of greenhouse-gas-concentration forcings. We solve the equations numerically to yield sea-level-increase projections. We then project the effects on Florida, as modeled from USGS geospatial elevation data and metropolitan population data.

The results of our model agree well with past measurements, strongly supporting its validity. The strong linear trend shown by our scenarios indicates both insensitivity to errors in inputs and robustness with respect to the temperature function.

Based on our model, we provide a cost-benefit analysis showing that small investments in protective technology could spare coastal regions from flooding. Finally, the predictions indicate that reductions in greenhouse-gas emissions are necessary to prevent long-term sea-level-rise disasters.

# Introduction

There is strong evidence of global warming; temperatures have increased by about $0.5^{\circ}\mathrm{C}$ over the last 15 years, and global temperature is at its highest level in the past millennium [Hansen et al. 2000]. One of the feared consequences of global warming is sea-level rise. Satellite observations indicate a rise of $0.32 \pm 0.02$ cm annually during 1993-1998 [Cabanaes et al. 2001]. Titus et al. [1991] estimate that a 1-meter rise in sea level could cause \$270-475 billion in damages in the U.S. alone.

Complex factors underlie sea-level rise. Thermal expansion of water due to temperature changes was long implicated as the major component, but it alone cannot account for observed increases [Wigley and Raper 1987]. Mass balance of large ice sheets, in particular the Greenland Ice Sheet, is now believed to play a major role. The mass balance is controlled by accumulation (influx of ice to the sheet, primarily from snowfall) and ablation (loss of ice from the sheet, a result of sublimation and melting) [Huybrechts 1999].

Contrary to popular belief, floating ice does not play a significant role.
By Archimedes' Principle, the volume increase $\Delta V$ of a body of water with density $\rho_{\mathrm{ocean}}$ due to melting of floating ice of mass $W$ (assumed to be freshwater, with liquid density $\rho_{\mathrm{water}}$) is

$$
\Delta V = W \left(\frac{1}{\rho_{\mathrm{water}}} - \frac{1}{\rho_{\mathrm{ocean}}}\right).
$$

The density of seawater is approximately $\rho_{\mathrm{ocean}} = 1024.8~\mathrm{kg/m^3}$ [Fofonoff and Millard 1983]; the mass of the Arctic sea ice is $2\times 10^{13}~\mathrm{kg}$ [Rothrock and Jang 2005]. Thus, the volume change if all Arctic sea ice melted would be

$$
\Delta V = 2 \times 10^{13}~\mathrm{kg} \left(\frac{1}{1000~\mathrm{kg/m^3}} - \frac{1}{1024.8~\mathrm{kg/m^3}}\right) \approx 4.84 \times 10^{8}~\mathrm{m}^3.
$$

Approximating that $360~\mathrm{Gt}$ of water causes a rise of $0.1~\mathrm{cm}$ in sea level [Warrick et al. 1996], we find that this volume change accounts for a rise of

$$
4.84 \times 10^{8}~\mathrm{m}^3 \times \frac{1000~\mathrm{kg}}{\mathrm{m}^3} \times \frac{1~\mathrm{Gt}}{9.072 \times 10^{11}~\mathrm{kg}} \times \frac{0.1~\mathrm{cm}}{360~\mathrm{Gt}} \approx 0.00015~\mathrm{cm}.
$$

This small change is inconsequential.

We also neglect the contribution of the Antarctic Ice Sheet because its overall effect is minimal and difficult to quantify. Between 1978 and 1987, Arctic ice decreased by $3.5\%$, but Antarctic ice showed no statistically significant changes [Gloersen and Campbell 1991]. Cavalieri et al. [1997] projected minimal melting in the Antarctic over the next 50 years. Hence, our model considers only the Greenland Ice Sheet.

Models for mass balance and for thermal expansion are complex and often disagree (see, for example, Wigley and Raper [1987] and Church et al. [1990]). We develop a model for sea-level rise as a function solely of temperature and time.
The model can be extended to several different temperature forcings, allowing us to assess the effect of carbon emissions on sea-level rise.

# Model Overview

We create a framework that incorporates the contributions of ice-sheet melting and thermal expansion. The model:

- accurately fits past sea-level-rise data,
- provides enough generality to predict sea-level rise over a 50-year span, and
- computes sea-level increases for Florida as a function of only global temperature and time.

Ultimately, the model predicts consequences to human populations. In particular, we analyze the impact in Florida, with its generally low elevation and proximity to the Atlantic Ocean. We also assess possible strategies to minimize damage.

# Assumptions

- Sea-level rise is primarily due to the balance of accumulation/ablation of the Greenland Ice Sheet and to thermal expansion of the ocean. We ignore the contributions of calving and direct human intervention, which are difficult to model accurately and have minimal effect [Warrick et al. 1996].
- The air is the only heat source for melting the ice. Greenland's land is permafrost, and because of the large amounts of ice on its surface, we assume that it is at a constant temperature. This provides a key boundary condition and allows us to use conduction as the mode of heat transfer.
- The temperature within the ice changes linearly at the steady state. This assumption allows us to solve the heat equation for Neumann conditions. By subtracting the steady-state term from the heat equation, we can solve for the homogeneous boundary conditions.
- Sublimation and melting processes do not interfere with each other. Sublimation primarily occurs at below-freezing temperatures, a condition during which melting does not normally occur. Thus, the two processes are temporally isolated. This assumption drastically simplifies computation, since we can consider sublimation and melting separately.
- The surface of the ice sheet is homogeneous with regard to temperature, pressure, and chemical composition. This assumption is necessary because there are no high-resolution spatial temperature data for Greenland. Additionally, simulating such variation would require finite-element methods and mesh generation for a complex topology.

# Defining the Problem

Let $M$ denote the mass balance of the Greenland Ice Sheet. Given a temperature-forcing function, we estimate the sea-level rise (SLR) that results. This rise is a sum of $M$ and thermal-expansion effects, corrected for local trends.

# Methods

# Mathematically Modeling Sea-Level Rise

Sea-level rise results mostly from the mass balance of the Greenland Ice Sheet and from thermal expansion due to warming. The logic of the simulation process is detailed in Figure 1.

# Temperature Data

We create our own temperature data, using input forcings that we can control. We use the EdGCM global climate model (GCM) [Shopsin et al. 2007], based on the NASA GISS model for climate change. Its rapid simulation (10 h for a 50-year simulation) allows us to analyze several scenarios.

Three surface-air-temperature scenarios incorporate the low, medium, and high projections of carbon emissions in the IS92 series from the IPCC Third Assessment Report (TAR) [Edmonds et al. 2000]. The carbon forcings are shown in Figure 2. All other forcings are kept at their defaults according to the NASA GISS model.

One downside to the EdGCM is that it can output only global temperature changes; regional changes are calculated but are difficult to access and have low spatial accuracy. However, according to Chylek and Lohmann [2005], the relationship between Greenland temperatures and global temperatures is well approximated by

$$
\Delta T_{\mathrm{Greenland}} = 2.2 \, \Delta T_{\mathrm{global}}.
$$

# The Ice Sheet

We model the ice sheet as a rectangular box.
We assume that each point on the upper surface is at constant temperature $T_{a}$, because our climate model does not have accurate spatial resolution for Greenland. The lower surface, the permafrost layer, has constant temperature $T_{l}$.

![](images/2daac3ba7a3affad3c51b1094131ed7f3f9ea535eaf33270b0218e8c16a52f7e.jpg)
Figure 1. Simulation flow diagram.

![](images/343c9af6b53ecaff949e220051bd9adcb9677e4712acf71bdbded7bf2ac463c6.jpg)

![](images/8cb3e02926a603408bda6ecd38b576ff6ac316b66c380b3d3d3487a4e6dc9f43.jpg)

![](images/2bdb950ef480afa15834bb65bae5cc7d02f4bc9c1d18fb4f970a31217a8b109f.jpg)
Figure 2. Carbon dioxide forcings for the EdGCM models.

To compute heat flux, and thus melting and sublimation through the ice sheet, we model it as an infinite number of differential volumes (Figure 3).

![](images/4b4c18b0bf8d55ebf2f33c993ee922c2b3c6db8e88207db12dc4b9b5584f9fef.jpg)
Figure 3. Differential volumes of the ice sheet.

The height $h$ of the box is calculated using data provided by Williams and Ferrigno [1999]:

$$
h = \frac{\mathrm{Volume_{ice}}}{\mathrm{Surface_{ice}}} = \frac{2.6 \times 10^{6}~\mathrm{km^3}}{1.736 \times 10^{6}~\mathrm{km^2}} = 1.5~\mathrm{km}.
$$

The primary mode of sea-level rise in our model is through mass balance: accumulation minus ablation.

# Mass Balance: Accumulation

Huybrechts et al. [1991] show that the temperature of Greenland is not high enough to melt significant amounts of snow. Furthermore, Knight [2006] shows that the rate of accumulation of ice is well approximated by a linear relationship of $0.025~\mathrm{m/month}$ of ice. In terms of mass balance, we have

$$
M_{\mathrm{ac}} = 0.025 L D,
$$

where $L$ and $D$ are the length and width of the rectangular ice sheet.

# Mass Balance: Ablation

We model the two parts of ablation, sublimation and melting.
# Sublimation

The sublimation rate (mass flux) is given by

$$
S_{0} = e_{\mathrm{sat}}(T) \left(\frac{M_{w}}{2 \pi R T}\right)^{1/2},
$$

where $M_{w}$ is the molecular weight of water and $T$ is the temperature in kelvins. This expression can be derived from the ideal gas law and the Maxwell-Boltzmann distribution [Andreas 2007]. Substituting Buck's [1981] expression for $e_{\mathrm{sat}}$, we obtain

$$
S_{0} = 6.1121 \exp\left[\frac{\left(18.678 - \frac{T}{234.5}\right) T}{257.14 + T}\right] \left(\frac{M_{w}}{2 \pi R (T + 273.15)}\right)^{1/2},
$$

where we now measure $T$ in $^{\circ}\mathrm{C}$. Buck's equation is applicable over a large range of temperatures and pressures, including the environment of Greenland. To convert mass flux into the rate of change of thickness of the ice, we divide the mass-flux expression by the density of ice, getting the rate of height change

$$
S_{h} = \frac{6.1121 d}{\rho_{\mathrm{ice}}} \exp\left[\frac{\left(18.678 - \frac{T}{234.5}\right) T}{257.14 + T}\right] \left(\frac{M_{w}}{2 \pi R (T + 273.15)}\right)^{1/2},
$$

where $d$ is the deposition factor, given by $d = (1 - \text{deposition rate}) = 0.01$ [Buck 1981].

The thickness of the ice sheet after one timestep (= one month) of the computational model is

$$
S(t) = h - S_{h} t,
$$

where $h$ is the current thickness of the ice sheet and $t$ is one timestep. Substituting the expression above for $S_{h}$ yields

$$
S(t) = h - \frac{6.1121 \times 10^{-2}\, t}{\rho_{\mathrm{ice}}} \exp\left[\frac{\left(18.678 - \frac{T}{234.5}\right) T}{257.14 + T}\right] \left(\frac{M_{w}}{2 \pi R (T + 273.15)}\right)^{1/2}.
$$

# Melting

To model melting, we apply the heat equation

$$
U_{t}(x, t) = k U_{xx}(x, t),
$$

using $k = 0.0104$ as the thermal diffusivity of the ice [Polking et al. 2006]. For the Neumann conditions, we assume a steady state $U_{s}$ that has the same boundary conditions as $U$ and is independent of time. The residual temperature $V$ has homogeneous boundary conditions and initial conditions found from $U - U_{s}$. Thus, we can rewrite the heat equation as

$$
U(x, t) = V(x, t) + U_{s}(x, t).
$$

The steady-state solution is

$$
U_{s} = T_{l} + \frac{T_{a} - T_{l}}{S(t)} x,
$$

subject to the constraints $0 < x < S(t)$ and $0 < t < 1$ month. Directly from the heat equation we also have

$V_{t}(x,t) = kV_{xx}(x,t) + f$, where $f$ is a forcing term; and

$V(0,t) = V\big(S(t),t\big) = 0$, for the homogeneous boundary conditions.

Since no external heat source is present and the temperature distribution depends only on heat conduction, we take the forcing term $f = 0$. To calculate the change in mass balance on a monthly basis, we solve analytically using separation of variables:

$$
V(x, t) = \frac{a_{0}}{2} + \sum_{n=1}^{\infty} a_{n} \exp\left[\frac{-n^{2} \pi^{2} t}{s^{2}}\right] \cos\left(\frac{n \pi x}{s}\right),
$$

where

$$
a_{0} = \frac{2}{s} \int_{0}^{s} \left(T_{l} + \frac{T_{a} - T_{l}}{s} x\right) dx = 2 T_{l} + T_{a} - T_{l} = T_{l} + T_{a}
$$

and

$$
\begin{array}{l} a_{n} = \frac{2}{s} \int_{0}^{s} \left(T_{l} + \frac{T_{a} - T_{l}}{s} x\right) \cos\left(\frac{n \pi x}{s}\right) dx \\ = \frac{2 (T_{a} - T_{l})}{s^{2}} \left(\frac{s}{n \pi}\right)^{2} \left(\cos(n \pi) - 1\right) \\ = \frac{2 (T_{a} - T_{l})}{(n \pi)^{2}} \left((-1)^{n} - 1\right).
\end{array}
$$

Therefore,

$$
V(x, t) = \frac{T_{l} + T_{a}}{2} + \sum_{n=1}^{\infty} \frac{2 (T_{a} - T_{l})}{(n \pi)^{2}} \big((-1)^{n} - 1\big) \exp\left[\frac{-n^{2} \pi^{2} t}{s^{2}}\right] \cos\left(\frac{n \pi x}{s}\right).
$$

Having found $V(x, t)$ and $U_{s}(x, t)$, we obtain an expression for $U(x, t)$ from

$$
U(x, t) = V(x, t) + U_{s}(x, t).
$$

Since $U$ is an increasing function of $x$, there is a depth $k$ such that $U(x, t) > 0$ for $x > k$ at fixed $t$; the ice melts for $k < x < h$. To determine ablation, we solve $U(k, t) = 0$ for $k$, using the first 100 terms of the Fourier series expansion and the Matlab function fzero. We use the new value of $k$ to update $h$ as the new thickness of the ice sheet for the next timestep.

With these two components, we can finalize an expression for ablation and apply it to a computational model. The sum of the infinitesimal changes in ice-sheet thickness for each differential volume gives the total change in thickness. To find these changes, we first note that

Mass-balance loss due to sublimation $= (h - S) L D$,

Mass-balance loss due to melting $= (S - k) L D$,

where the product $LD$ is the surface area of the ice sheet. In these equations, "mass balance" refers to net volume change. Thus, ablation is given by

$$
M_{\mathrm{ab}} = (h - S) L D + (S - k) L D = (h - k) L D.
$$

# Mass Balance and Sea-Level Rise

Combining accumulation and ablation into an expression for mass balance, we have

$$
M = M_{\mathrm{ac}} - M_{\mathrm{ab}} = 0.025 L D - (h - k) L D.
$$

Relating this to sea-level rise, we use the approximation $360~\mathrm{Gt}$ of water $= 0.1~\mathrm{cm}$ of sea-level rise. Thus,

$$
\mathrm{SLR}_{\mathrm{mb}} = M \rho_{\mathrm{ice}} \frac{0.1~\mathrm{cm}}{360~\mathrm{Gt}},
$$

which quantifies the sea-level rise due to mass balance.
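The mass-balance bookkeeping above can be sketched numerically for a single monthly timestep. In this sketch the physical constants follow the text, but the footprint $L \times D$, the surface temperature, and the melt depth $k$ are illustrative assumptions (in the full model, $k$ comes from solving $U(k,t) = 0$ with fzero), and the dimensional bookkeeping of $S_h$ follows the paper's usage of Buck's formula:

```python
import math

# One illustrative monthly timestep of the mass-balance bookkeeping above.
M_W = 0.018      # kg/mol, molecular weight of water
R = 8.314        # J/(mol K), gas constant
RHO_ICE = 919.0  # kg/m^3
D_FAC = 0.01     # deposition factor d = 1 - deposition rate

def sublimation_rate(T_c):
    """The text's S_h: Buck's e_sat (base value 6.1121) times the kinetic
    factor, scaled by d and the ice density. T_c is in degrees Celsius."""
    e_sat = 6.1121 * math.exp((18.678 - T_c / 234.5) * T_c / (257.14 + T_c))
    kinetic = math.sqrt(M_W / (2.0 * math.pi * R * (T_c + 273.15)))
    return D_FAC * e_sat * kinetic / RHO_ICE

MONTH_S = 30 * 24 * 3600.0                  # seconds in one timestep
h = 1500.0                                  # m, mean ice-sheet thickness
S = h - sublimation_rate(-10.0) * MONTH_S   # thickness after sublimation
k = S - 0.05                                # m, melt depth (illustrative)

L, D = 1.020e6, 1.702e6   # m, illustrative footprint (~1.736e6 km^2)
M_ac = 0.025 * L * D      # m^3 of ice accumulated this month
M_ab = (h - k) * L * D    # m^3 ablated: sublimation (h - S) plus melting (S - k)
M = M_ac - M_ab           # net mass balance (negative = net ice loss)

slr_cm = -M * RHO_ICE / 1e12 * 0.1 / 360.0  # 360 Gt of water ~ 0.1 cm of rise
print(round(slr_cm, 3))   # ~0.05 cm for these illustrative inputs
```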
# Thermal Expansion

According to Wigley and Raper [1987], for the current century thermal expansion of the oceans due to the increase in global temperature will contribute at least as much to the rise in sea level as the melting of polar ice [Huybrechts et al. 1991; Titus and Narayanan 1995]. So we incorporate thermal expansion into our model.

Temperature plays the primary role in thermal expansion, but the diffusion of radiated heat, mixing of the ocean, and various other complexities of ocean dynamics must be accounted for in a fully accurate description. We adapt the model of Wigley and Raper [1987]. Based on standard greenhouse-gas emission projections and a simple upwelling-diffusion model, the dependency of the model can be narrowed to a single variable, temperature, using an empirical estimate:

$$
\Delta z = 6.89 \, \Delta T \, k^{0.221},
$$

where

$\Delta z$ is the change in sea level due to thermal expansion (cm),

$\Delta T$ is the change in global temperature $(^{\circ}\mathrm{C})$, and

$k$ is the diffusivity.

# Localization

A final correction must be added to the simulation. The rise in sea level will vary rather significantly by region. The local factors often cited include land subsidence, compaction, and delayed response to warming [Titus and Narayanan 1995]. We thus assume that previous patterns of local sea-level variation will continue, yielding the relationship

$$
\mathrm{local}(t) = \mathrm{normalized}(t) + \mathrm{trend} \cdot (t - 2008),
$$

where

- $\mathrm{local}(t)$ is the expected sea-level rise at year $t$ (cm),
- $\mathrm{normalized}(t)$ is the estimate of the expected rise in global sea level relative to the historical rate at year $t$, and
- trend is the current rate of sea-level change at the locale of interest.

The normalization prevents double-counting the contribution from global warming.
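In code, the expansion fit and the localization correction are one-liners; here is a hedged sketch (the diffusivity $k = 1$, the historical rate, and the local trend are placeholder values for illustration, not numbers from the paper):

```python
# Thermal-expansion fit and localization correction from the text.
def thermal_expansion_cm(dT, k):
    """Wigley-Raper empirical fit: dz = 6.89 * dT * k**0.221 (cm)."""
    return 6.89 * dT * k ** 0.221

def local_rise_cm(t, global_cm, historical_rate, trend):
    """local(t) = normalized(t) + trend*(t - 2008), with
    normalized(t) = global(t) - historical_rate*(t - 2008)."""
    years = t - 2008
    return (global_cm - historical_rate * years) + trend * years

print(round(thermal_expansion_cm(1.5, 1.0), 1))       # ~10.3 cm
print(round(local_rise_cm(2058, 20.0, 0.18, 0.25), 1))  # ~23.5 cm
```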
In our model, the rates of sea-level change are averaged over data for Florida from Titus and Narayanan [1995] to give the trend. This is reasonable because the differences among the rates within Florida are fairly small. The value of $\mathrm{normalized}(t)$ at each year is obtained from

$$
\mathrm{global}(t) - \mathrm{historical\ rate} \cdot (t - 2008),
$$

where $\mathrm{global}(t)$ is the expected sea-level rise at year $t$ from our model and the historical rate is chosen uniformly over the range taken from Titus and Narayanan [1995].

# Simulating Costs of Sea-Level Rise to Florida

To model the submersion of regions of Florida due to sea-level rise, we created a raster matrix of elevations for various locations, using USGS data (GTOPO30) [1996]. The 30-arc-second resolution corresponds to about $1~\mathrm{km}$; however, to yield a more practical matrix, we lowered the resolution to 1 minute of arc (approx. $2~\mathrm{km}$).

The vertical resolution of the data is much greater than $1~\mathrm{m}$. To model low coastal regions, the matrix-generation code identified potentially sensitive areas and submitted these to the National Elevation Dataset (NED) [Seitz 2007] for refinement. (NED's large size and download restrictions limit its use to sensitive areas.) The vertical resolution of NED is very high [USGS 2006]. We use these adjustments to finalize the data.

We measure the effect of sea-level rise on populations by incorporating city geospatial coordinates and population into the simulation. We obtained geospatial coordinates from the GEOnames Query Database [National Geospatial Intelligence Agency 2008] and population data from the U.S. Census Bureau [2000].

We used the sea-level rise calculated from our model as input for the submersion simulation, which subtracts the sea-level increase from the elevation. If rising sea level submerges pixels in a metropolitan area, the population is considered "displaced."
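The submersion bookkeeping reduces to one comparison per raster cell; a minimal sketch with a made-up elevation grid and city list (the real model uses the GTOPO30/NED raster and Census populations):

```python
# Toy version of the submersion simulation: subtract the projected rise
# from each elevation cell and sum the population of cities whose cell
# falls to or below sea level. Grid and city data are illustrative.
elev_m = [
    [0.5, 1.2, 3.0],
    [0.8, 2.5, 9.0],
]
cities = [            # (row, col, name, population) -- hypothetical data
    (0, 0, "Coastville", 25000),
    (1, 2, "Highland", 40000),
]

def displaced_population(elev, cities, rise_m):
    """Population of cities whose raster cell is submerged by rise_m."""
    return sum(pop for r, c, _name, pop in cities if elev[r][c] - rise_m <= 0)

print(displaced_population(elev_m, cities, 0.6))   # 25000: only Coastville
```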
A key limitation of the model is that the population is considered to be concentrated in the principal cities of the metropolitan areas, so displaced-population counts are only approximate. This simplification allows quick display of which cities are threatened without the complexity of hard-to-find high-resolution population-distribution data.

We checked the model for realism under several different scenarios. As shown in Figure 4, our expectations are confirmed:

- $0\mathrm{m}$: No cities are submerged and no populations or land areas are affected.
- $10 \mathrm{~m}$: This is slightly higher than the rise if all of the Greenland Ice Sheet melted (approx. $7 \mathrm{~m}$). Many cities are submerged, especially in southern Florida.
- $100 \mathrm{~m}$: Most of Florida is submerged.

![](images/609ab01474199863719f1297cc45975bb7ac12723725efa3d311c94ee2c358ce.jpg)
Figure 4. Effects of 0-, 10-, and 100-meter sea-level rise.

![](images/5e6e100eef59e57e49d25d48e4689822a509d5a35ace708bfe9e16ef0e64bf31.jpg)

![](images/855c6d7a5cf818a9fe79775d6905bd75ad73fc7d8ccb4df808d3c367a47f347c.jpg)

# Results

# Output Sea-Level-Rise Data

We ran the program with a Matlab script for the IS92e (high), IS92a (intermediate), and IS92c (low) carbon-emissions models. The program produces a smooth trend in sea-level increase for each of the three forcings, as shown in Figure 5: Higher temperature corresponds to higher sea-level rise, as expected. The sea-level output data are then used to calculate submersion consequences.

![](images/6edb85d30d3aee06d436fe0a40dfe3472723f03160e56defd24d03b21b620b8a.jpg)
Figure 5. Sea-level rise as a function of time for the three temperature models.

# Submersion Simulation Results

Output consists of the submerged land area and displaced-population statistics. The program quantified the effects noted in Table 1. For the low and medium scenarios, no metropolitan areas are submerged until after 30 years. 
In all scenarios, Miami Beach and Key Largo are submerged after 40 years.

# Discussion and Conclusion

The estimated sea-level rises (Figure 5) for the three scenarios seem reasonable. The 50-year projection is in general agreement with models proposed by the IPCC, NRC, and EPA (less than $10\mathrm{cm}$ different from each) [Titus et al. 1991]. Additionally, the somewhat-periodic, somewhat-linear trend is similar to past data on mean sea-level rise collected in various locations [Titus et al. 1991]. Thus, the projections of our model are credible.

The high-emission scenario results in a 40-50 cm rise in sea level by 2058, with results from the intermediate scenario 6-10 cm lower and the

Table 1.
Effects under different scenarios (using current population values).

Time (yrs)HighMediumLow
Displaced (×10³)Submerged (km² ×10³)Displaced (×10³)Submerged (km² ×10³)Displaced (×10³)Submerged (km² ×10³)
1006.506.406.2
20127.506.906.8
301009.2127.707.1
401009.71009.01008.0
5013510.01009.51009.2


low-emission scenario trailing the intermediate one by $5 - 8\mathrm{cm}$. The model thus works as expected for a wide range of input data: Higher temperatures lead to greater sea-level rise.

Overall, the damage due to sea-level change seems unremarkable. Even in the worst-case scenario, in 50 years only 135,000 people are displaced and 10,000 square kilometers are submerged, mostly in South Florida.

However, these projections are only the beginning of what could be a long-term trend. As shown by the control results, a sea-level increase of $10\mathrm{m}$ would be devastating. Further, not all possible damages are assessed in our simulation. For example, sea-level increases have also been directly implicated in shoreline retreat, erosion, and saltwater intrusion. Economic damages are not assessed. Bulkheads, levees, seawalls, and other structures are often built to counteract the effect of rising sea levels, but their economic impacts are outside the scope of the model.

Our model has several key limitations. The core assumption of the model is the simplification of physical features and dynamics in Greenland. The model assumes an environment where thickness, temperature, and other physical properties are averaged out and evenly distributed. The "sublimate, melt, and snow" dynamics are simulated with a monthly timestep. Such assumptions are too simplistic to capture fully the ongoing dynamics in the ice sheets. But we do not have the data and computing power to perform a full-scale 3-D grid-based simulation using energy-mass balance models, as in Huybrechts [1999].

As for the finer details of the model, the submodels for thermal expansion, localization, and accumulation also take an averaging approach, with empirical estimates adapted from Wigley and Raper [1987]. Consequently, our model may not hold over a long period of time, when these submodels might break down. 
The assumptions of the EdGCM model are fairly minimal, and the projected temperature time series for each scenario are consistent with typical carbon projections [Edmonds et al. 2000]. Although the IS92 emissions scenarios are rigorous, they are nonetheless the main weakness of the model: Because all of the other parameters depend on the temperature model, our results are particularly sensitive to factors that directly affect the EdGCM output.

Despite these deficiencies, our model is a powerful tool for climate modeling. Its relative simplicity, while it can be viewed as a weakness, is actually a key strength. Thanks to its simplifications, the model boasts rapid runtime. Furthermore, the model is a function of time and temperature only; the fundamentals of our model imply that all sea-level increase is due to temperature change. But even with less complexity, our model is comprehensive enough to provide accurate predictions.

# Recommendations

In the short term, preventive action could keep many of the model's predictions from becoming reality. Key Largo and Miami Beach, which act as a buffer zone preventing salinization of interior land and freshwater, are particularly vulnerable. If these regions flood, seawater intrusion may occur, resulting in widespread ecological, agricultural, and ultimately economic damage. Titus and Narayanan [1995] recommend building sand walls.

In the long term, carbon emissions must be reduced to prevent disasters associated with sea-level rise.

# References

Andreas, E.L. 2007. New estimates for the sublimation rate of ice on the Moon. Icarus 186: 24-30.
Buck, A.L. 1981. New equations for computing vapor pressure and enhancement factor. Journal of Applied Meteorology 20: 1527-1532.
Cabanes, C., A. Cazenave, and C.L. Provost. 2001. Sea level rise during past 40 years determined from satellite and in situ observations. Science 294: 840-842.
Cavalieri, D.J., P. Gloersen, C.L. Parkinson, J.C. 
Comiso, and H.J. Zwally. 1997. Observed hemispheric asymmetry in global sea ice changes. Science 278: 1104-1106.
Church, J.A., J.S. Godfrey, D.R. Jackett, and T.T. McDougall. 1990. A model of sea level rise caused by ocean thermal expansion. Journal of Climate 4: 438-456.
Chylek, P., and U. Lohmann. 2005. Ratio of the Greenland to global temperature change: Comparison of observations and climate modeling results. Geophysical Research Letters 32: 1-4.

Edmonds, J., R. Richels, and M. Wise. 2000. Ratio of the Greenland to global temperature change: Comparison of observations and climate modeling results: A review. In *The Carbon Cycle*, edited by T.M.L. Wigley and D.S. Schimel, 171-189. Cambridge, UK: Cambridge University Press.
Fofonoff, P., and R.C. Millard, Jr. 1983. Algorithms for computation of fundamental properties of seawater. UNESCO Technical Papers in Marine Sciences 44: 1-53.
Gloersen, P., and W.J. Campbell. 1991. Recent variations in arctic and antarctic sea-ice covers. Nature 352: 33-36.
Hansen, J., M. Sato, R. Ruedy, A. Lacis, and V. Oinas. 2000. Global warming in the twenty-first century: An alternative scenario. Proceedings of the National Academy of Sciences 97: 9875-9880.
Huybrechts, P. 1999. The dynamic response of the Greenland and Antarctic ice sheets to multiple-century climatic warming. Journal of Climate 12: 2169-2188.
_____, A. Letreguilly, and N. Reeh. 1991. The Greenland ice sheet and greenhouse warming. Palaeogeography, Palaeoclimatology, Palaeoecology 89: 399-412.
Knight, P. 2006. Glacier Science and Environmental Change. Oxford, UK: Wiley-Blackwell.
National Geospatial-Intelligence Agency. 2008. GEOnames Query Database. http://earth-info.nga.mil/gns/html/index.html.
Polking, J., A. Boggess, and D. Arnold. 2006. Differential Equations with Boundary Value Problems. 2nd ed. Upper Saddle River, NJ: Pearson Education.
Rothrock, D.A., and J. Zhang. 2005. 
Arctic Ocean sea ice volume: What explains its recent depletion? Journal of Geophysical Research 110: 1-10; http://psc.apl.washington.edu/pscweb2002/pubs/rothrockJRG05.pdf.
Seitz, M. 2007. Lat/lon to elevation. http://www.latlontoelevation.com/.
Shopsin, M., M. Shopsin, and K. Mankoff. 2007. *Education Global Climate Modeling*. New York: Columbia University Press.
Titus, J.G., and V.K. Narayanan. 1995. The Probability of Sea Level Rise. Washington, DC: U.S. Environmental Protection Agency.
Titus, J.G., R.A. Park, S.P. Leatherman, J.R. Weggel, M.S. Greene, P.W. Mausel, S. Brown, G. Gaunt, M. Trehan, and G. Yohe. 1991. Greenhouse effect and sea level rise: The cost of holding back the sea. Coastal Management 19: 171-204; http://www.ocr.ahr.state.nc.us/ref/16/15086.pdf.

U.S. Census Bureau. 2000. Census 2000 datasets. http://www2.census.gov/census_2000/datasets/.
U.S. Geological Survey (USGS). 1996. GTOPO30 Tile W100N40. Earth Resources Observation and Science. http://edc.usgs.gov/products/elevation/gtopo30/w100n40.html.
_____. 2006. Accuracy of the National Elevation Dataset. http://ned.usgs.gov/Ned/accuracy.asp.
Warrick, R., C.L. Provost, M. Meier, J. Oerlemans, and P. Woodworth. 1996. Climate Change 1995: The Science of Climate Change. Cambridge, UK: Cambridge University Press.
Wigley, T.M.L., and S.C.B. Raper. 1987. Thermal expansion of sea water associated with global warming. Nature 330: 127-131.
Williams, Richard S., Jr., and Jane G. Ferrigno. 1999. Estimated present-day area and volume of glaciers and maximum sea level rise potential. U.S. Geological Survey Professional Paper 1386-A. In *Satellite Image Atlas of Glaciers of the World*, Chapter A, edited by Richard S. Williams, Jr., and Jane G. Ferrigno. Washington, DC: U.S. Government Printing Office. http://www.smith.edu/libraries/research/class/idp108USGS_99.pdf. An updated version was in press in 2007 and is to be available from http://pubs.usgs.gov/fs/2005/3056/fs2005-3056.pdf. 
+ +![](images/63341ecada74dceec8740e3a4d7e37157518f49e8e28ea27cbdfc367bbde05a7.jpg) +Brian Choi, Joonhahn Cho, and Jason Chen. + +Pp. 267-300 can be found on the Tools for Teaching 2008 CD-ROM. + +# Fighting the Waves: The Effect of North Polar Ice Cap Melt on Florida + +Amy M. Evans +Tracy L. Stepien + +University at Buffalo, The State University of New York Buffalo, NY + +Advisor: John Ringland + +# Abstract + +A consequence of global warming that directly impacts U.S. citizens is the threat of rising sea levels due to melting of the North Polar ice cap. One of the many states in danger of losing coastal land is Florida. Its low elevations and numerous sandy beaches will lead to higher erosion rates as sea levels increase. The direct effect on sea level of only the North Polar ice cap melting would be minimal, yet the indirect effects of causing other bodies of ice to melt would be crucial. We model individually the contributions of various ice masses to rises in sea level, using ordinary differential equations to predict the rate at which changes would occur. + +For small ice caps and glaciers, we propose a model based on global mean temperature. Relaxation time and melt sensitivity to temperature change are included in the model. Our model of the Greenland and Antarctica ice sheets incorporates ice mass area, volume, accumulation, and loss rates. Thermal expansion of water also influences sea level, so we include this too. Summing all the contributions, sea levels could rise 11-27 cm in the next half-century. + +A rise in sea level of one unit is equivalent to a horizontal loss of coastline of 100 units. We investigate how much coastal land would be lost, by analyzing relief and topographic maps. By 2058, in the worst-case scenario, there is the potential to lose almost $27\mathrm{m}$ of land. Florida would lose most of its smaller islands and sandy beaches. Moreover, the ports of most major cities, with the exception of Miami, would sustain some damage. 
Predictions from the Intergovernmental Panel on Climate Change (IPCC) and from the U.S. Environmental Protection Agency (EPA), together with simulations from the Global Land One-km Base Elevation (GLOBE) digital elevation model (DEM), match our results and validate our models.

While the EPA and the Florida state government have begun to implement plans of action, further measures need to be put into place, because there will be a visible sea-level rise of $3 - 13\mathrm{cm}$ in only 10 years (2018).

# Introduction

Measurements and observations of Earth's ice features (e.g., glaciers, ice sheets, and ice packs) indicate changes in the climate [Kluger 2006; NASA Goddard Institute for Space Studies 2003; Natural Resources Defense Council 2005] and consequent rises in ocean levels resulting from their melting.

Over the past 30 years, the amount of ice covering the North Pole has been reduced by $15\% - 20\%$. Additionally, the snow season, in which ice is restored to the pack, has grown shorter. By 2080, it is expected that there will be no sea ice during the summer [Dow and Downing 2007].

Besides the Arctic ice pack, glaciers around the world are also shrinking. Warmer air and ocean waters cause the melting, and most glaciers have retreated at unparalleled rates over the past 60 years [Dow and Downing 2007].

Two other signs of a changing climate have direct impacts on people: increased weather-related disasters and a rising sea level. In 2005, the United States experienced 170 floods and 122 windstorms, compared with 8 floods and 20 windstorms in 1960. The statistics are similar for other countries, with 110,000 deaths due to weather-related catastrophes worldwide in 2005.

The results of sea-level rise are already visible. Small, low-lying islands in the Southern Pacific Ocean have either disappeared (e.g., two of the Kiribati islands in 1999) or had to be abandoned by residents (e.g., the Carteret Islands in Papua New Guinea). 
Over the 20th century, the average sea-level rise was roughly $15\mathrm{cm}$. If this trend continues, many more islands, as well as parts of the coastlines of some countries, will be lost [Dow and Downing 2007].

# Assumptions

All the documentation that we encountered stated the same basic claim: The melting of the North Polar ice cap will on its own affect the global ocean level by only a negligible amount. This claim is simply a matter of Archimedes' Principle: The volume of water that would be introduced to the world's oceans and seas is already displaced by the North Polar ice pack, since it consists of frozen seawater floating in the Arctic Ocean.

However, the disappearing Arctic ice pack will speed up global warming, which encourages the melting of other land ice masses on Earth (e.g., Greenland and Antarctica). Thus, ocean levels will rise more as the North Polar ice cap shrinks, due to indirect effects. In fact, North Polar ice cap melt is used as an "early warning system" for climate change around the world [Arctic Climate Impact Assessment 2004].

# Worldwide Consequences of the Warming Arctic

As greenhouse gases increase in the atmosphere, snow and ice in the Arctic form later in the fall and melt earlier in the spring. As a result, there is less snow to reflect the sun's rays and more dark area (land and water) to absorb energy from the sun. That in turn makes the snow and ice form still later and melt still earlier: A feedback cycle emerges. Along the same lines, as snow and ice recede on the Arctic tundra, vegetation will grow on the exposed land, which will further increase energy absorption. Even though new trees would take in some of the $\mathrm{CO}_{2}$ in the atmosphere, it would not be enough to compensate for the human-produced $\mathrm{CO}_{2}$ causing the warming. Also, humans produce soot that is deposited in the Arctic by wind currents; the soot darkens the snow and further adds to the absorption of energy from the sun. 
All of these changes will vary the world climate and lead to an increased global temperature [Arctic Climate Impact Assessment 2004].

When ice forms on the Arctic ice pack, most of the salt is pushed into the water directly below the mass. Therefore, the salinity of the water increases where sea ice is being formed, an important step in thermohaline circulation, the system driven by differences in heat and salt concentration that is related to ocean currents and the jet stream. Heating and melting in the Arctic will greatly affect the ocean currents of the world by slowing thermohaline circulation: The rate of deep-water formation will decrease and lead to less warm water being brought north to be cooled. As a result, there will be regional cooling in the northern seas and oceans and an overall thermal expansion in the rest of the world, leading to a rise in sea level [Arctic Climate Impact Assessment 2004; Bentley et al. 2007].

Another direct impact of warming in the Arctic is the melting of permafrost, the permanently frozen soil of the polar region. The melting of permafrost could lead to the release of large amounts of carbon dioxide, methane, and other greenhouse gases into the atmosphere [NASA Goddard Institute for Space Studies 2003]. Although warming would have to be fairly significant for this to occur, the consequences could be great, since another cycle of warming would take hold [Arctic Climate Impact Assessment 2004].

As the Arctic warms and global temperatures continue to rise, land ice will melt at an increasing rate. The associated sea-level rise will cause major loss of coastal land around the globe [Dow and Downing 2007].

The Arctic ecosystem itself will be completely disrupted by the warming environment. Food and habitat destruction will have a devastating effect on the mammals, fish, and birds that thrive in this cold environment. 
What's more, ecosystems farther south will be affected, because a large number of Arctic animals move there during the summer months in search of food and for breeding purposes [Arctic Climate Impact Assessment 2004; Bentley et al. 2007].

Finally, the warming Arctic will change the lives of humans around the globe. The most directly affected will be the native peoples living in the North Polar region, who depend on the ice pack and northern glaciers as a home and hunting ground. These people will be forced to move farther south and find new means of survival. Many fishing industries that depend on the Arctic as a source of income will see a reduction in catches. There will also be easier access to oil and minerals that lie under the ocean floor—a happy thought for some and horrific for others [Arctic Climate Impact Assessment 2004; Bentley et al. 2007].

We focus on the effects of small ice caps, glaciers, and the Greenland and Antarctica ice sheets melting over the next 50 years, since these will have a direct effect on sea level. Furthermore, we predict the effects of a sea-level rise on Florida. Finally, we propose a response plan.

# Modeling Small Ice Caps and Glaciers

Though ice caps and glaciers are small compared to the Greenland and Antarctica ice sheets, they are located in warmer climates and react more quickly to climate change, so they will cause more-immediate changes in sea level [Oerlemans 1989].

# Global Mean Temperature

Global mean temperature is a measure of worldwide temperature change and is based on various sets of data. Overall trends in the temperature change can be detected, and periods of global warming and global cooling can be inferred.

Trends can clearly be seen in annual temperature-anomaly data (relative changes in temperature) [NASA Goddard Institute for Space Studies 2008]. Figure 1 shows global annual mean temperature for January through December of each year. 
In the late 1800s, the temperature anomalies were negative yet increasing. For 1930-1970, the temperature anomalies hovered around 0; but by the 1980s they remained positive and have been increasing since.

# Assumptions and Formation

We model the contribution of small ice caps and glaciers to sea-level rise by using a relation between the global mean temperature and the mass change of the small ice caps and glaciers [Oerlemans 1989; Wigley and Raper 1993]. We begin with the equation

![](images/adfafc94abe06d4afdc7bfafeba36c577b12dfa2603bfed93f68566332c1f480.jpg)
Figure 1. Anomalies in global mean temperature, 1880-2007, in units of $0.01^{\circ}\mathrm{C}$. Data from NASA Goddard Institute for Space Studies [2008].

$$
\frac{dz}{dt} = \frac{-z + \left(Z_{0} - z\right) \beta \overline{\Delta T}}{\tau}, \tag{1}
$$

where

$z$ is the sea-level change (initially zero) (m),

$\tau$ is the relaxation time (years),

$\beta$ is a constant representing glacier melt sensitivity to temperature change $(^{\circ}\mathrm{C}^{-1})$,

$Z_{0}$ is the initial ice mass in sea-level equivalent (m), and

$\overline{\Delta T}$ is the global mean temperature change $(^{\circ}\mathrm{C})$.

We set $Z_0 = 0.45 \, \text{m}$, based on data from Oerlemans [1989]. We use data from Wigley and Raper [1993] for the values of $\tau$ and $\beta$, and we set these parameters for various estimates of sea-level rise as follows:

Low: $(\tau, \beta) = (30, 0.10)$

Medium: $(\tau, \beta) = (20, 0.25)$

High: $(\tau, \beta) = (10, 0.45)$.

The last parameter to estimate is $\overline{\Delta T}$. This could be done by finding a best-fit curve to the temperature anomaly data of Figure 1. For the years 
However, we use a temperature perturbation as an estimate for the change in annual global mean temperature, since this was implemented into models used in Oerlemans [1989]. The equation for the temperature perturbation is + +$$ +T ^ {\prime} = \eta (t - 1 8 5 0) ^ {3} - 0. 3 0, \tag {2} +$$ + +where + +$\eta$ is the constant $27 \times 10^{-8} \mathrm{~K} \cdot \mathrm{yr}^{-3}$ , + +$t$ is the year, + +1850 is used as a reference year in which the Earth was in a state unperturbed by global warming, and + +0.30 is a vertical shift $(^{\circ}\mathrm{C})$ + +The comparison of (2) to the data in Figure 1 is given in Figure 2. + +![](images/83f00a1c021e12dc4039af806f8392a20cd7ac8c9726e1a61fc418ee7ff5bcb5.jpg) +Figure 2. Comparison of $T'$ and actual data. + +While the curve is not extremely accurate to each data point, the broad shape of the trend reflects the actual change in the global mean temperature. Fitting a polynomial of high degree could match the data better, but extrapolation past 2007 could be highly inaccurate. The moderate increase in global mean temperature represented by $T'$ is realistic for our purposes of keeping the model simple. + +Now we set $\overline{\Delta T}$ equal to $T^{\prime}$ by plugging (2) into (1), giving + +$$ +\frac {d z}{d t} = \frac {- z + (Z _ {0} - z) \beta (\eta (t - 1 8 5 0) ^ {3} - 0 . 3 0)}{\tau}. +$$ + +# Results of the Model + +# Low Sea-Level Rise + +With $\tau = 30$ (the relaxation time in years) and $\beta = 0.10$ (the glacier melt sensitivity to temperature change in ${}^{\circ}\mathrm{C}^{-1}$ ), a low sea-level rise is estimated. Using these parameters, there is a decrease in sea level between 1850 and 1910, then a steady increase to a change of about $0.10\mathrm{m}$ by 2100 (Figure 3a). This curve is concave up. Focusing on the years 2008-2058, the change in sea level ranges between $0.015\mathrm{m}$ and $0.055\mathrm{m}$ (Figure 3b). This curve is also concave up. 
# Medium Sea-Level Rise

With $\tau = 20$ and $\beta = 0.25$, a medium sea-level rise is estimated. There is a decrease in sea level between 1850 and about 1900, and then a steady increase to a change of about $0.20\mathrm{m}$ by 2100 (Figure 3c). The curve is concave up, with a slight possible change to concave down around 2075. For 2008-2058, the change in sea level ranges between $0.045\mathrm{m}$ and $0.13\mathrm{m}$ (Figure 3d). This curve is almost linear.

# High Sea-Level Rise

With $\tau = 10$ and $\beta = 0.45$, a high sea-level rise is estimated. There is a decrease in sea level between 1850 and 1890, and then a steady increase to a change of about $0.275\mathrm{m}$ by 2100 (Figure 3e). This curve is concave up with a shift to concave down around 2025. Focusing on the years 2008-2058, the change in sea level ranges between $0.10\mathrm{m}$ and $0.21\mathrm{m}$ (Figure 3f). This curve is concave down.

# Modeling Ice Sheets

We focus on modeling the contribution of the ice sheets in Greenland and Antarctica. There are only simple models to simulate changes in volume over time, since "existing ice-sheet models cannot simulate the widespread rapid glacier thinning that is occurring, and ocean models cannot simulate the changes in the ocean that are probably causing some of the dynamic ice thinning" [Bentley et al. 2007].

![](images/240d8609c5265473665f57ff31391191abaa4024e35b5ac91fb7a247dbb30672.jpg)
Figure 3a. $\tau = 30, \beta = 0.10$ (1880-2100).

![](images/b83456ca325225fd353a35621f88d0e38f880d92cbe475ffbcc3edd9f3cad24a.jpg)
Figure 3b. $\tau = 30, \beta = 0.10$ (2008-2058).

![](images/7b2c831d2d59eb946f962e0415d3a286cf42d73d99f43fe2d54b22a4bbf02dfa.jpg)
Figure 3c. $\tau = 20, \beta = 0.25$ (1880-2100).

![](images/886c3a2a76b4df72e41a86a41739114b34b5d03cbd6170f7a2c2df055cd07be7.jpg)
Figure 3d. $\tau = 20, \beta = 0.25$ (2008-2058).

![](images/2e9cf466f676c2238f5186737c5a4098df089b288fd30af64d6f256133177c9e.jpg)
Figure 3e. $\tau = 10, \beta = 0.45$ (1880-2100).

![](images/28482fc217eec727b60d32e3337d96c45f3e0fd5eb147e6697e5557662a51fca.jpg)
Figure 3f. $\tau = 10, \beta = 0.45$ (2008-2058).

Figure 3. Change in sea level for small ice caps and glaciers, in m.

# Assumptions and Formation

To create a simple model of sea-level rise, we make assumptions about average volumes and ice-loss rates. These averages were taken from a number of sources that used laser measurements as well as past trends to draw conclusions as accurately as possible [NASA 2008; Steffen 2008; Thomas et al. 2006]. Table 1 lists the parameters and their values for our equation to compute the contribution to sea-level rise.

Table 1. Parameters and their values.

SymbolMeaningValueUnits
$A_o$Total water area of the Earth361,132,000km²
$A_g$Area of Greenland1,736,095km²
$A_a$Area of Antarctica11,965,700km²
$V_g$Greenland ice sheet volume2,343,728km³
$V_a$Antarctica ice sheet volume26,384,368km³
$\delta_g$Greenland accumulation26cm/yr
$\delta_a$Antarctica accumulation16cm/yr
$\lambda_g$Greenland loss rate (absolute value)238km³/yr
$\lambda_a$Antarctica loss rate (absolute value)149km³/yr
$\rho$Fresh water density1000kg/m³
$\mu$Glacier ice density900kg/m³


The equations for volume changes and the corresponding sea-level rise are based on a simple model [Parkinson 1997]. We make a few modifications to this model to show a gradual change over a time period of 50 years (starting in 2008). The basic principle is to convert the ice-loss rates of Greenland $(\lambda_{g})$ and Antarctica $(\lambda_{a})$ into water volumes using

$$
\frac{dV_{g}}{dt} = \frac{\mu}{\rho}\,\lambda_{g}, \qquad \frac{dV_{a}}{dt} = \frac{\mu}{\rho}\,\lambda_{a}.
$$

The total volume change from the contributions of Greenland and Antarctica is a simple matter of addition:

$$
\frac{dV}{dt} = \frac{dV_{g}}{dt} + \frac{dV_{a}}{dt}.
$$

To calculate the total rise in sea level, there is one more aspect to consider—thermal expansion. As water warms, it expands in volume. We calculate the total sea-level rise by adding to the rise due to thermal expansion $(\gamma)$ (approximately $1.775\mathrm{mm}$ per year [Panel on Policy Implications of Greenhouse Warming 1992]) the rise due to losses in Greenland and Antarctica:

$$
\delta = \gamma + \frac{dV / dt}{A_{o}} \times 1000.
$$

Sea-level rises produced by complete melting would be $7\mathrm{m}$ (Greenland) [Arctic Climate Impact Assessment 2004] and more than $70\mathrm{m}$ (Antarctica) [Kluger 2006].

# Results of the Model

The contributions of the largest ice sheets plus thermal expansion do not raise the sea level as much as might be thought: $5.7\mathrm{cm}$ after 50 years.

# Limitations of the Models

We chose efficiency and simplicity over complex models that apply only to small sections of the world, since the latter rely heavily on specific ocean temperature, salinity, and depth.

# Model for Small Ice Caps and Glaciers

Parameter values have uncertainty, because it is difficult to measure the exact area, volume, and sea-level equivalent of the small ice caps and glaciers. 
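Returning to the ice-sheet equations, the bookkeeping can be checked with quick arithmetic from the Table 1 values. This is an order-of-magnitude sketch; the unit handling (km³/yr of water-equivalent loss spread over the ocean area in km², converted to mm/yr) reflects our reading of the model, so the printed rates are illustrative rather than the paper's own figures.

```python
# Back-of-envelope sketch of the ice-sheet volume conversion above,
# using the Table 1 values. Unit handling is our assumption.

A_O = 361_132_000                   # total water area of the Earth, km^2
LAMBDA_G, LAMBDA_A = 238.0, 149.0   # ice-loss rates, km^3/yr (Table 1)
RHO, MU = 1000.0, 900.0             # water and glacier-ice densities, kg/m^3
GAMMA_MM_PER_YR = 1.775             # thermal-expansion contribution, mm/yr

# Ice loss converted to water-equivalent volume per year:
dV = (MU / RHO) * (LAMBDA_G + LAMBDA_A)   # km^3/yr of water

# Spread over the ocean surface (km^3/km^2 = km), then 1 km = 1e6 mm:
ice_sheet_mm_per_yr = dV / A_O * 1e6

total_mm_per_yr = ice_sheet_mm_per_yr + GAMMA_MM_PER_YR
print(f"ice sheets alone: {ice_sheet_mm_per_yr:.2f} mm/yr")
print(f"with thermal expansion: {total_mm_per_yr:.2f} mm/yr")
```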
The same relaxation time and sensitivity values are used for all glaciers; incorporating many individual values would be difficult, because there is no specific information regarding how response time is related to ice volume [Oerlemans 1989].

We take the only cause of the melting of small ice caps and glaciers to be changes in global mean temperature. However, the causes of previous melting have not yet been specifically determined [Oerlemans 1989], so predicting the causes of future melting is limited in scope. Many other factors, such as accumulation and ablation rates, could play a role.

# Model for Ice Sheets

The most prevalent uncertainties for the ice sheets are in the loss rates, plus the thermal expansion of water. Loss rates were calculated by averaging over a number of decades.

Liquid densities depend on temperature, which does not factor into this model. The density of fresh water is approximately $1,000\mathrm{kg} / \mathrm{m}^3$ at $4^{\circ}\mathrm{C}$ [SiMetric 2008], which is the value we use, since the water generated by ice sheets will be near freezing. Similarly, glacier ice density is generally between $830\mathrm{kg} / \mathrm{m}^3$ and $917\mathrm{kg} / \mathrm{m}^3$ [Parkinson 1997], with an average of about $900\mathrm{kg} / \mathrm{m}^3$ [Menzies 2002]—the value we use.

The thermal-expansion factor contributes the greatest amount of uncertainty to this particular model.

# Validation of the Models

Adding the total sea-level rise for the small ice caps, glaciers, and ice sheets gives an overall total rise by 2058 of between $11\mathrm{cm}$ and $27\mathrm{cm}$. Using 2008 as the reference year, beginning in 2018 there is a linear relationship between time and total sea-level rise, as shown in Figure 4.

![](images/26570dcf4692102645144df075d205a8c0323b287d82965fa8166c9615cd8690.jpg)
Figure 4. Change in sea level, 2008-2058, for low, medium, and high scenarios. 
These results are in the range of sea-level-rise predictions from many sources:

- $50~\mathrm{cm}$ in the next century [Arctic Climate Impact Assessment 2004].
- $10 - 30 \mathrm{~cm}$ by 2050 [Dow and Downing 2007].
- 1 m by 2100 [Natural Resources Defense Council 2005].
- Probabilities of increases by 2050 relative to 1985 are: $10 \, \text{cm}$, $83\%$; $20 \, \text{cm}$, $70\%$; and $30 \, \text{cm}$, $55\%$ [Oerlemans 1989]. (Note: The change between 1992 and 2007 was approximately $0.50 \, \text{cm}$ [Nerem et al. 2008].)
- 8-29 cm by 2030 and 21-71 cm by 2070 [Panel on Policy Implications of Greenhouse Warming 1992].
- 18-59 cm in the next century [U.S. Environmental Protection Agency 2008].
- $20 - 30 \mathrm{~cm}$ increase by 2050 [Wigley and Raper 1993].

# Modeling the Coast of Florida

There is a direct relationship between a vertical rise in the ocean and a horizontal loss of coastline. Specifically, a one-unit rise in sea level corresponds to a horizontal loss of 100 units of land [Panel on Policy Implications of Greenhouse Warming 1992].

Hence, for a worst-case rise of $27\mathrm{cm}$ by 2058, we estimate the loss of $27\mathrm{m}$ of coastline. This does not appear to be as disastrous as one might think. We examine the extent of flooding.

# Effects on Florida

In 2000, Florida had approximately 16 million people [Office of Economic and Demographic Research 2008]. Maps of population density, geographic relief, and topography show that about $30\%$ of the counties in Florida are at high risk of losing coastline; these counties also have large populations.

Many of Florida's major cities are located in these counties. We examine how much damage a retreat of $27\mathrm{m}$ of coastline would do to the cities of Cape Coral, Jacksonville, Pensacola, Miami, St. Petersburg, and Tampa. 
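The 100:1 vertical-to-horizontal rule cited above amounts to a one-line conversion; a minimal sketch follows (the helper name is ours).

```python
# The 100:1 rule: each unit of vertical sea-level rise costs 100
# horizontal units of coastline. Helper name is ours, for illustration.

def coastline_loss_m(sea_rise_cm):
    """Horizontal coastline loss (m) for a vertical rise given in cm."""
    rise_m = sea_rise_cm / 100.0   # cm -> m
    return 100.0 * rise_m          # 100 horizontal units per vertical unit

print(coastline_loss_m(27))   # worst-case 27-cm rise -> 27 m of coastline
```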
# Effects on Major Cities

In the next 50 years, most of the major cities are safe from destruction, but the outlying islands and the outskirts of the cities are in danger. We measured the distance from the coastline near major cities inland to predict the extent of land that would be covered by water [Google Maps 2008].

- Cape Coral: Sanibel and Pine Islands would be mostly flooded, though the city center of Cape Coral would be spared.
- Jacksonville: Jacksonville would lose all of its coastal beaches. The city itself would also be in danger, depending on how much the St. Johns River rises; the outskirts of the city would be affected by flooding from the river.
- Pensacola: The harbor and the edges of the city would be covered by water, and a large portion of Gulf Breeze, Santa Rosa Island, and the Pensacola Naval Air Station would be submerged.
- Miami: Miami would be spared, at least for the next 50 years. Key Biscayne and Miami Beach would not be as lucky, though, and most of the Florida Keys would disappear under the ocean. However, predictions further into the future indicate that Miami will most likely be the first major city of Florida to become completely submerged.
- St. Petersburg and Tampa: Edges of St. Petersburg would be under water. The boundaries of Tampa would also be lost to the surrounding Old Tampa Bay and Hillsborough Bay. All coastal beaches, such as Treasure Island, would be mostly submerged. This area will have the largest displacement of urban population by the year 2058.

# Validation of Loss of Coastal Land

The prediction of Florida coastline loss is validated by simulations from the Global Land One-km Base Elevation (GLOBE) digital elevation model (DEM) and is illustrated in Figure 5. The majority of Florida's population would be safe for the next 50 years, but loss of land is only one of the problems that would occur due to global warming.
![](images/82619c2aa9b0d985d142b70247cd015ed9a4f63f8fa90677b42aee7ee6d41beb.jpg)
Figure 5. The effect of a $27\mathrm{-cm}$ sea-level rise in Florida: The coastline of Florida that would be covered with water is shown in red [Center for Remote Sensing of Ice Sheets 2008].

# Other Impacts on Florida

The impacts of global warming (enhanced by the melting of the polar ice cap) on Florida could be tremendous [Natural Resources Defense Council 2001]. They include:

- overall changing climate,
- dying coral reefs,
- "saltwater intrusion into inland freshwater aquifers" (thus impacting groundwater),
- "an upswing in forest fires,"
- "warmer air and sea-surface temperatures,"
- "retreating and eroding shorelines,"
- health threats, and
- increased hurricane intensity.

A few of the effects are described in further detail below.

# Endangered Species and Biodiversity

The World Wildlife Fund has identified the Florida Keys and Everglades, located in southern Florida where the risk of lost coastline is greatest, as one of the Earth's "200 Most Valuable Ecoregions" [World Wildlife Fund 2008]. Wildlife, including the Florida panther, roseate spoonbill, and green sea turtle, is greatly threatened by habitat loss. Plants and animals will most likely have a difficult time adapting to new climatic conditions and stresses, and the change in biodiversity in Florida will ultimately result in problems for biodiversity in surrounding areas [Dow and Downing 2007].

# Tourism

Tourism, one of Florida's biggest industries, is in extreme danger if Florida loses most of its coastline.

# Health Threats

As of 2000, the annual number of Disability Adjusted Life Years per million people from malnutrition, diarrhea, flooding, and malaria caused by climate-related conditions was under 10 in the United States [Dow and Downing 2007]. However, with higher global temperatures, Florida is at risk for various diseases and pests.
Lyme disease is spreading in the United States, and flooding of the Florida coastlines could increase the risk of cholera, typhoid, dysentery, malaria, and yellow fever [Dow and Downing 2007].

# Food Production

Most of Florida's orange and grapefruit production occurs in the southern part of the state, and many orchards are located along the coast, so orchards will slowly lose land. Increased salt concentration in groundwater will also threaten citrus crops [Natural Resources Defense Council 2001].

# Possible Responses

Responses to the various threats that global warming poses to the state of Florida have been proposed [Florida Environment 2000; Natural Resources Defense Council 2001; U.S. Environmental Protection Agency 2008]. The U.S. Environmental Protection Agency (EPA) and the Florida state government have begun implementing some of these suggestions (marked with an asterisk in the lists below) [U.S. Environmental Protection Agency 2002].

# Responses to Changing Landscape

- Limit or stop land development along coastlines.
- * Work to protect coastlines and sand dunes that could weaken and erode.
- Enact a program to prevent people living on the coast from removing vegetation and trees, and to encourage their planting.
- Set up a fund (either state or national) to aid people in the case of an emergency evacuation due to land loss and to aid those whose businesses will be obliterated.

# Responses to Changing Climate

- Improve drainage systems, to decrease flooding and to avert stagnant water (a breeding ground for mosquitoes).
- Make flotation devices a mandatory feature of all homes and businesses in flooding areas.
- * Encourage the public to keep emergency preparation kits, and provide suggestion lists in supply stores.
- Build more hurricane shelters, and increase standards for new buildings to withstand hurricanes.
- Put permanent fire breaks around large areas at risk of burning.
# Responses to Health Threats

- Store malaria pills in preparation to combat an increased mosquito population.
- Make emergency management drills a monthly or bi-monthly event, rotating among major cities.
- Improve interoperability between fire, EMS, and police services.

# Responses to Global Warming

- Provide incentives for people to lead a "green" lifestyle, e.g., free street/garage parking for hybrids, tax cuts for purchasing Energy Star products, etc.
- Use heat-reflective paint on the tops of buildings to reduce air conditioning use.
- Encourage renewable energy sources.
- * Work with corporations and companies to reduce their output of greenhouse gases.
- * Work to protect indigenous wildlife and plants as well as the unique landscape, such as the Everglades.

Action should be taken now with the worst-case scenario in mind. The most important aspect, however, is to keep people informed: make it clear that land will be lost no matter what, but people can slow down the process by becoming part of the solution.

# Conclusion

Over the next 50 years, Florida will experience changes to its geography. Melting of ice sheets and glaciers and thermal expansion of the oceans will lead to a gradual rise in sea level.

The loss of land over time illustrates the seriousness of the problems of global warming. Living generations may be faced with the consequences of lost coastal land. If steps are not taken to reduce the increase in sea level, southern Florida will slowly disappear.

# References

Arctic Climate Impact Assessment. 2004. Impacts of a Warming Arctic: Arctic Climate Impact Assessment. New York: Cambridge University Press. http://amap.no/acia/.
Bentley, Charles R., Robert H. Thomas, and Isabella Velicogna. 2007. Global outlook for ice and snow: Ice sheets. United Nations Environment Programme (2007): 99-114.
Center for Remote Sensing of Ice Sheets. 2008. Sea level rise maps and GIS data.
https://www.cresis.ku.edu/research/data/sea_level_rise/.
Dow, Kirstin, and Thomas E. Downing. 2007. The Atlas of Climate Change: Mapping the World's Greatest Challenge. London: Earthscan.
Florida Environment. 2000. Public plans for sea level rise. http://www.floridaenvironment.com/programs/fe00501.htm.
Google Maps. 2008. http://maps.google.com/.
Kluger, Jeffrey. 2006. Global warming heats up. http://www.time.com/time/magazine/article/0,9171,1176980-1,00.html.
Menzies, John. 2002. Modern and Past Glacial Environments. Boston, MA: Butterworth-Heinemann.
National Aeronautics and Space Administration (NASA). 2008. Educational brief: Ice sheets. http://edmall.gsfc.nasa.gov/99invest.Site/science-briefs/ice/ed-ice.html.
NASA Goddard Institute for Space Studies. 2003. Recent warming of Arctic may affect worldwide climate. http://www.nasa.gov/centers/goddard/news/topstory/2003/1023esuice.html.
______. 2008. GISS surface temperature analysis. http://data.giss.nasa.gov/gistemp/.
Natural Resources Defense Council. 2001. Global warming threatens Florida. http://www.nrdc.org/globalwarming/nflorida.asp.
______. 2005. Global warming puts the Arctic on thin ice. http://www.nrdc.org/globalWarming/qthinice.asp.
Nerem, R. Steven, Gary T. Mitchum, and Don P. Chambers. 2008. Sea level change. http://sealevel.colorado.edu/.
Oerlemans, Johannes. 1989. A projection of future sea level. Climatic Change 15: 151-174.
Office of Economic and Demographic Research: The Florida Legislature. 2008. Florida population. http://edr.state.fl.us/population.htm.
Panel on Policy Implications of Greenhouse Warming. 1992. Policy Implications of Greenhouse Warming: Mitigation, Adaptation, and the Science Base. Washington, DC: National Academy Press.
Parkinson, Claire L. 1997. Ice sheets and sea level rise. The PUMAS Collection. http://pumas.jpl.nasa.gov/.
SiMetric. 2008.
Density of water $(\mathrm{g} / \mathrm{cm}^3)$ at temperatures from $0^{\circ}\mathrm{C}$ (liquid state) to $30.9^{\circ}\mathrm{C}$ in 0.1°C increments. http://www.simetric.co.uk/si_water.htm.
Steffen, Konrad. 2008. Cryospheric contributions to sea-level rise and variability. http://globalwarming.house.gov/tools/assets/files/0069.pdf.
Thomas, R., E. Frederick, W. Krabill, S. Manizade, and C. Martin. 2006. Progressive increase in ice loss from Greenland. Geophysical Research Letters 33: L1053.
U.S. Environmental Protection Agency. 2002. Saving Florida's vanishing shores. http://www.epa.gov/climatechange/effects/coastal/saving_FL.pdf.
______. 2008. Coastal zones and sea level rise. http://www.epa.gov/climatechange/effects/coastal/.

Wigley, T.M.L., and S.C.B. Raper. 1993. Future changes in global mean temperature and sea level. In Climate and Sea Level Change: Observations, Projections and Implications, edited by R.A. Warrick, E.M. Barrow, and T.M.L. Wigley, 111-133. New York: Cambridge University Press.

World Wildlife Fund. 2008. South Florida. http://www.worldwildlife.org/wildplaces/sfla/.

![](images/de1d63f30278459fa82f9ef0438ef13b76b9d6ca388e684dc08c42113bd5b75b.jpg)

Amy Evans, John Ringland (advisor), and Tracy Stepien.

# Erosion in Florida: A Shore Thing

Matt Thies
Bob Liu
Zachary W. Ulissi

University of Delaware
Newark, DE

Advisor: Louis Frank Rossi

# Abstract

Rising sea levels and beach erosion are increasingly important problems for coastal Florida. We model this dynamic behavior in four discrete stages: global temperature, global sea level, equilibrium beach profiles, and applications to Miami and Daytona Beach. We use the Intergovernmental Panel on Climate Change (IPCC) temperature models to establish predictions through 2050. We then adapt models of Arctic melting to identify a model for global sea level. This model predicts a likely increase of $15\mathrm{cm}$ within 50 years.
We then model the erosion of the Daytona and Miami beaches to identify beach recession over the next 50 years. The model predicts likely recessions of $66\mathrm{m}$ in Daytona and $72\mathrm{m}$ in Miami by 2050, roughly equal to a full city block in both cases. Regions of Miami are also deemed susceptible to flooding from these changes. Without significant attention to the solutions outlined here, large-scale erosion will occur. These results are strongly dependent on the behavior of the climate over this time period, as we verify by testing several models.

# Introduction

The northern ice cap plays an important role in global climate and oceanic conditions, including interactions with the global oceanic currents, regulation of the atmospheric temperature, and protection from solar radiation [Working Group II 2007]. There are significant recent trends in polar melting, global temperature, and global sea level.

By relating an increasing sea level to beach erosion, we can strategically develop our coast for the future so that homes and businesses can remain untouched by disaster.

# Approach

- Analyze existing arctic and climate models to determine the most reasonable predictions for future changes.
- Identify the best available models for global change.
- Relate the future trends and physical melting processes to time and predicted temperatures.
- Examine and apply the Bruun model for beach erosion.
- Establish realistic physical models and parameters of Daytona Beach and Miami.
- Model the long-term erosion of the shores at those beaches.
- Propose cost-effective solutions to minimize the impact of erosion.

# Arctic Melting

# Justified Assumptions

- The northern ice cap includes the North Polar ice cap (over seawater) and the Greenland ice sheet (over land).
- The IPCC temperature models are accurate and stable within the time period of interest.
- The melting of the North Polar ice cap does not contribute directly to global water levels.
- Tectonic considerations within the IPCC model are relevant to the coast of Florida.
- Changes in oceanic salinity cause negligible changes in sea levels.
- Changes in ocean temperature will lead directly to increases in sea level within the time period of interest.

# Polar Ice Cap

The North Polar ice cap is essentially a source of fresh water. Because of its composition and unsupported status, $90\%$ of it is suspended beneath the surface of the Arctic Ocean [Stendel et al. 2007]. Since the density of ice is only $10\%$ lower than that of water $(0.92\mathrm{g} / \mathrm{cm}^3$ vs. $1.0\mathrm{g} / \mathrm{cm}^3)$, any melting of the North Polar ice cap contributes negligibly to global water levels.

The primary effect of the North Polar ice cap is to regulate global and oceanic temperatures, through solar deflection and melting. As the ice cap melts further, this capability is diminished, and temperatures change. Current models for the ice cap, atmosphere, and global temperatures are complex; we capture the time-dependent effects through existing temperature predictions.

# Greenland Ice Sheet

Since the Greenland ice sheet is supported on a land mass, its contribution to global climate and sea level is considerably different from that of the polar ice cap (which is floating ice). Melting ice from the Greenland ice sheet contributes directly to the total volume of water in the oceans. This contribution to global sea levels is not captured directly by existing temperature models and hence must be related back to historical data.

# Temperature Effects

The density of water is temperature dependent. As the temperature of the oceans increases, the corresponding decrease in water density will lead to an overall increase in volume.
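The thermal-expansion effect can be sketched to first order. The expansion coefficient and the depth of the warming layer below are illustrative assumptions for the sketch, not figures from this paper:

```python
ALPHA = 2.0e-4       # 1/K, assumed volumetric expansion coefficient of seawater
MIXED_DEPTH = 500.0  # m, assumed depth of the ocean column that warms

def thermal_rise_m(delta_T_K):
    """First-order sea-level rise from uniform warming of a surface layer.

    A column of height H expanding by a factor (1 + alpha * dT)
    raises its surface by roughly alpha * H * dT.
    """
    return ALPHA * MIXED_DEPTH * delta_T_K

# With these assumed values, 1 K of warming gives about 0.1 m of rise.
```

The point of the sketch is the scaling: the rise is linear in both the warming and the depth of water that actually warms, which is why thermal expansion dominates century-scale projections.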
# Salinity Changes

Since both the Greenland ice sheet and the North Polar ice cap are pure freshwater sources, any melting will result in slight reductions in the salinity of the global oceans. The two effects of this interaction are a slight change in density due to the reduced salt content and a possible decrease in the rate at which the North Polar ice cap melts (due to osmotic forces based on the salt concentrations, an effect commonly observed in chemistry).

However, according to the IPCC [Working Group II 2007], these changes are relatively small compared to the thermal effects of the warming process. Thus, these effects are included in our model through the sea-level predictions of the IPCC and applied only as a direct relationship to global temperatures.

# Tectonic Effects

In addition to global trends from the rising sea level, shifts within the tectonic plates of the Earth have been argued to cause an upward movement of some of the ocean bottoms, and thus to contribute to local deviations in sea-level change [Nerem et al. 2006]. Such effects are outside our scope here.

![](images/166dcc4ddcffd89259b9582111f97d8c85fc28272b09fff1ea66cc28b50780e4.jpg)
Figure 1. A global temperature model endorsed by the IPCC [Working Group II 2007].

# Global Temperature Model

Many large-scale computer simulations and models have been proposed to predict the effects of arctic melting. These results have been compiled and studied in the IPCC fourth assessment report [Working Group II 2007], and its predictions for global temperature are used within this report. Criticism of IPCC modeling is common because of its simplified assumptions, but we have not seen a better alternative.

We use the temperature models shown in Figure 1, which shows historical data and several scenarios for the future. We make graphical fits and show corresponding information. We conclude that simulations that use constant conditions prevailing in 2000 are unrealistic.
Therefore, we consider only the low-growth and high-growth model cases. We assume a cubic growth model for temperature change:

$$
\Delta T (t) = a t ^ {3} + b t ^ {2} + c t + d.
$$

# Modeling Sea-Level Changes

# Justified Assumptions

- The IPCC temperature and sea-level estimates are accurate.
- Sea-level change is global and equal everywhere.
- Sea-level changes can be broken into factors directly related to temperature and factors whose rate is dependent on temperature.

# Sea-Level Model

While the IPCC [Working Group II 2007] predicts temperature changes for the next century, its only predictions for sea-level changes are possible ranges at the end of the century. To develop time-dependent models for the sea-level rise, we correlate these changes to the temperature model.

The IPCC simulations include ranges for the effects of various parameters on the global sea-level change [Working Group II 2007]. These effects can be broken roughly into $55\%$ effects related directly to temperature change (thermal expansion) and $45\%$ other volume effects, such as the melting of the Greenland ice sheet (see Table 1).

Table 1. Results from the third IPCC report for 2100 [Working Group II].
| Source | Sea Rise (mm) | Mean rise (mm) |
|--------|---------------|----------------|
| Thermal expansion | 110 to 430 | 270 |
| Glaciers | 10 to 230 | 130 |
| Greenland ice | −20 to 90 | 35 |
| Antarctic ice | −170 to −20 | −95 |
| Terrestrial storage | −83 to 30 | −26.5 |
| Ongoing contributions from ice sheets | 0 to 55 | 27.5 |
| Thawing of permafrost | 0 to 5 | 2.5 |
| Total global-average sea-level rise | 110 to 770 | 440 |
For the $55\%$ of changes related directly to temperature, we take the corresponding sea-level rise to be proportional to temperature:

$$
S _ {1} = \gamma \Delta T (t),
$$

$$
\gamma = \frac {\Delta S (2100)}{\Delta T (2100)}.
$$

Since the Greenland ice sheet retains essentially no meltwater (whatever melts runs off the ice sheet), the primary limitation on ice melting is assumed to be the rate of heat transfer from the air above the ice sheet.

To model this, we use a generic heat-exchanger rate equation, with an arbitrary thermal coefficient $U_{a}$. To determine the rate, we use the average summer temperature of Greenland, $6^{\circ}\mathrm{C}$ [Vinther et al. 2006]. We integrate the resulting equation and obtain scaling coefficients:

$$
\frac {d S_{2}}{d t} \propto q = U_{a} (T_{1} - T_{2}),
$$

$$
\Delta S_{2} = \alpha \int_{2000}^{t_f} U_{a} \left(T_{1}(t) - T_{2}\right) dt
= \alpha \int_{2000}^{t_f} U_{a} \left(T + (a t^{3} + b t^{2} + c t + d) - 0\right) dt,
$$

$$
\beta = \alpha U_{a} = \frac {\Delta S (2100)}{\int_{2000}^{t_f} \left(T + (a t^{3} + b t^{2} + c t + d)\right) dt}.
$$

We determine the scaling coefficient $\beta$ for each simulation and calculate the overall sea-level rise as

$$
\Delta S (t) = 0.55\, S_{1}(t) + 0.45\, S_{2}(t).
$$

The resulting predicted sea-level rises are shown in Figure 2. The lower and upper bounds on the predictions are obtained by calculating the rises for the lower range of the low-growth model and the upper range of the high-growth model. The predicted rises for the mean cases of both scenarios through 2050 are quite similar, and using either is sufficient. However, such engineering modeling questions often need to err on the side of caution, so we consider the upper extreme in later models.
Historical data are included for comparison and agree reasonably with the predicted trends. + +The predicted sea-level increases are shown in Table 2. + +Table 2. Model predictions for future sea level rises. + +
| Year | Mean (cm) | Upper bound (cm) | Lower bound (cm) |
|------|-----------|------------------|------------------|
| 2010 | 4.1 | 4.4 | 2.6 |
| 2020 | 6.8 | 7.7 | 4.4 |
| 2030 | 9.6 | 11.5 | 6.2 |
| 2040 | 12.5 | 15.6 | 8.0 |
| 2050 | 15.3 | 20.2 | 9.9 |
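The structure of the two-part model can be sketched in code. The cubic coefficients, $\gamma$, and $\beta$ are calibration values fit to the IPCC curves and are left as parameters here; this is an illustration of the model's structure under those assumptions, not the authors' implementation:

```python
def delta_T(t, a, b, c, d):
    """Cubic fit to an IPCC temperature scenario; t is years since 2000."""
    return a * t**3 + b * t**2 + c * t + d

def delta_S(t_f, a, b, c, d, gamma, beta, T_summer=6.0, n=1000):
    """Sea-level rise by year t_f: 55% thermal (proportional to delta_T)
    plus 45% melt (integral of a heat-exchanger rate from 2000 to t_f)."""
    t0 = 2000
    s1 = gamma * delta_T(t_f - t0, a, b, c, d)
    # Trapezoidal integration of the melt rate T_summer + delta_T(t).
    h = (t_f - t0) / n
    ts = [t0 + i * h for i in range(n + 1)]
    rate = [T_summer + delta_T(t - t0, a, b, c, d) for t in ts]
    s2 = beta * h * (sum(rate) - 0.5 * (rate[0] + rate[-1]))
    return 0.55 * s1 + 0.45 * s2
```

With the coefficients calibrated so that each part reproduces its share of the 2100 range endpoint, evaluating `delta_S` at 2010, 2020, ..., 2050 generates curves of the kind tabulated above.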
![](images/fdde06cb4e8c842b14b12c6320fbfe8a47b615c8c313fadc979d95ea7b7a7ff6.jpg)
Figure 2. The model for global sea-level changes through 2100.

# Beach Erosion Models

# Justified Assumptions

- Beach erosion is continuous when observed over long time periods.
- Beach profiles do not change.
- The only direct cause of erosion is sea-level change.

# Overview

Beach erosion is complex: the behavior of a beach depends on a huge number of local beach and weather parameters and is linked to the physical bathymetry of the surrounding sea bed.

# Seasonal and Weather Effects

Seasonal temperature changes can cause differing rates of erosion, and winter weather has been observed to cause formation of offshore bars, affecting the relative rates of erosion. Storms and hurricanes generally show no lasting long-term effect on the state of a beach [Walton 2007].

Thus, for the purposes of this model, these effects are unimportant. Predicting weather activity is impossible on a short time scale, and attempting to simulate any such effects over a long (50-year) period would be unreasonable.

![](images/6ad0f46f551db0a7f066c5befad7859854ca1ea2214d5b9820e8e9d3aa9423f5.jpg)
Figure 3. The Bruun model for equilibrium beach profiles [Bruun 1983].

# Bruun Model

Instead of modeling transient effects on beach erosion, we use the well-known Bruun model of beach profiles [Herbich and Bretschneider 1992]. At the core of the model is the observation that many beaches fit the general profile

$$
h (x) = A x ^ {2 / 3},
$$

where $h$ is the depth of the water, $x$ is the distance from the shoreline, and $A$ is a static parameter related to the average particle size of the beach material. We illustrate the model in Figure 3.

Using this model, Bruun found that the rise $R$ in sea level and the recession $\Delta S$ of a beach front are linearly related through a constant $K$:

$$
\Delta S = K R.
\tag {1}
$$

The constant $K$ can be calculated from the long-range profile of the coast [Herbich and Bretschneider 1992] via

$$
K = \frac {l}{h},
$$

where $l$ is the distance from the shoreline and $h$ is the depth at $l$. We fit the parameter $K$ and use this linear relation to predict future erosion.

# Justification of Erosion Model Choice

There has been widespread criticism of the assumptions made by Bruun in his constant-profile model. However, it is the only beach-erosion model to have received significant experimental testing. A thorough review of the current state of the Bruun model and its extensions was performed by Slott [2003], with modifications proposed by Dean in Miller and Dean [2004].

# Effects on Florida

# Justified Assumptions

- Beach profiles are consistent for all locations on a given beach (city location).
- The profile parameters are time-independent.

# Geographical Overview

Florida sits on a shelf projecting between the Atlantic Ocean and the Gulf of Mexico. The topography is characterized by extremely low elevation. There are significant urban areas situated along most of the coastline, with significant centers at Tampa Bay on the west coast and at Miami and Daytona on the east coast. In addition, barrier islands are present on much of Florida's east coast, with large implications for modeling.

# Primary Effects

We consider two primary effects within our model and examine the flooding implications of a rise in sea level.

We conclude that beach erosion will be the primary effect of a rising sea level. We present these results for several scenarios.

# Daytona Beach

# Physical Profile

We show a topographical and bathymetric map in Figure 4 [NOAA 2007]. The elevation is at least several meters for all major inhabited areas, so we neglect the likelihood of direct flooding from the predicted rise.

![](images/332aabf764768640fe4c8ac2db5ebc0895241b88396af68967c1720ed547c9af.jpg)
Figure 4.
Topography and bathymetry of Daytona Beach, with five sampled points (in red) lying along a line from (25 98) to (70 0). + +# Beach Profile + +To determine the constant $K$ in (1) for Daytona Beach, we collect sample points (shown in red in Figure 4). We use these results with the corresponding elevation and position of the shoreline to determine the ratio as follows: + +$$ +K _ {i} = \frac {\sqrt {(\Delta x) ^ {2} + (\Delta y) ^ {2}}}{\Delta h}. +$$ + +We show the results of this calculation for all five points in Table 3 and arrive at a mean value $K = 452$ . + +We observe the effectiveness of the Bruun approximation when we fit an averaged profile for Daytona Beach (Figure 5). + +# Future Erosion of Daytona Beach + +We use the sea levels in Table 2 to calculate values for the beach recession at the necessary intervals. Daytona Beach contains a series of barrier islands, and we assume that the small separation between them and the mainland will prevent any significant erosion on the Daytona mainland. + +Table 3. Determination of the scaling coefficient $K$ for Daytona Beach. + +
| Point | Distance (km) | Elevation difference (m) | K |
|-------|---------------|--------------------------|---|
| 1 | 9.652 | 20.29 | 475.7 |
| 2 | 9.392 | 20.51 | 457.8 |
| 3 | 9.662 | 21.15 | 456.6 |
| 4 | 9.642 | 22.18 | 434.44 |
| 5 | 9.222 | 21.13 | 436.31 |
| Mean | | | 452 ± 17 |
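The arithmetic behind Table 3 is easy to reproduce; the sample pairs below are the table's distances and elevation differences converted to meters, and the function names are ours. Multiplying the mean $K$ by a sea-level rise gives a recession of the same order as the figures quoted in the abstract:

```python
def bruun_K(samples):
    """Mean slope ratio K = l/h over offshore sample points (l, h in m)."""
    return sum(l / h for l, h in samples) / len(samples)

# (distance from shoreline in m, elevation difference in m), from Table 3
daytona = [(9652, 20.29), (9392, 20.51), (9662, 21.15),
           (9642, 22.18), (9222, 21.13)]
K = bruun_K(daytona)  # about 452

def recession_m(K, rise_m):
    """Linear Bruun relation: horizontal recession scales with the rise by K."""
    return K * rise_m

# A 15.3-cm mean-scenario rise by 2050 gives roughly 69 m of recession,
# comparable to the ~66 m Daytona figure quoted in the abstract.
```

Because the relation is linear, the upper-bound 20.2-cm rise simply scales the recession up proportionally, which is where the worst-case estimates of around 90 m come from.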
![](images/24156c78478d8b9a28d8f2e446d920b9310b0e7c14dbb81e8ab4547accaa977f.jpg)
Bruun Profile Applied to Daytona Beach
Figure 5. Appropriateness of the Bruun model.

![](images/05cdbefdd02aaa7f8ef3851946dff48d2ef8feb0865f783a93b11a2e499cee51.jpg)
Figure 6. Effect of two climate scenarios on the erosion of Daytona Beach (overlay: Google Earth [2008]). Shaded regions indicate increments of 10 years from 2000.

To gauge the impact of this erosion, we overlay the results for the likely- and worst-case scenarios for each decade onto a Google Earth [2008] map of Daytona Beach (Figure 6). Nearly a full block width of the city will be destroyed by 2050 if no precautions are put into place.

# Miami Beach

# Physical Profile

Again we work with topographical and bathymetric representations [NOAA 2008]. The low elevation of the boundaries of Miami yields problems for the city as the sea level rises. The effects of the likely $17~\mathrm{cm}$ rise in sea level are visualized in Figure 7.

The regions of concern are already surrounded by high walls, which should be reinforced.

# Beach Profile

We determine the constant $K$ for Miami in a similar manner to that for Daytona Beach; but rather than using multiple samples, we obtain an average beach profile by averaging directly. This results in $K = 520.83$, a higher value than for Daytona Beach, due to the more gradual slope of the coastal area just off the shore of Miami.

![](images/8434cc0f4a4c3810f0208374a496f8ca8414e8a40cbec0774a9cbffefef9cf9f.jpg)
Figure 7. Regions of Miami susceptible to a $17~\mathrm{cm}$ rise in sea level. Dark (blue) is existing water, light (green) is safe land, and dark (red) regions inside the light are susceptible land.

# Future Erosion of Miami Beach

We show the results in Figure 8. As with Daytona Beach, without intervention nearly a full city block width will be lost to the ocean.
# Common Solution for Daytona and Miami

Our beach-erosion model is grounded in the observation that most beaches return to an equilibrium profile based on the average particle sizes reflected in the coefficient $A$. To take advantage of the predictions of our model, we propose a solution for Miami and Daytona based on raising the average height of the profile at the bottom of the slope, to allow for a more stable beach front. This is visualized in Figure 9.

There are several key benefits to this design. The use of a retainer along the bottom allows the natural tendency of the waves to carry sand and sedimentation to fill in the beach naturally, without the need for costly and continuous additions of sand and filler. The ideal design for these retainers would be anchored concrete shapes, built to withstand the continuous force of the waves over long periods.

![](images/358fbdd36723c315c2aaa4850ea84c8f766336942fb71def57cdf273e3aaf698.jpg)
Figure 8. Effect of two climate scenarios on the erosion of Miami Beach. Shaded regions indicate increments of 10 years from 2000.

# Conclusion

Several important conclusions can be made about future problems for the coastal cities of Florida. The sea level is definitely rising, and our model linking this activity to changes in the northern ice caps suggests an acceleration of this trend. Our model predicts a likely beach recession of $60\mathrm{m}$ by 2050, with up to $90\mathrm{m}$ possible. This recession would severely damage the first block nearest the ocean in each city unless there is intervention. Due to its lower elevation, Miami is significantly more at risk than cities farther north, such as Daytona, and should be correspondingly more concerned.

![](images/d1df01bea33deb057b322849968a9ba18b73168b12f90b0b2b1c1d4a406a9536.jpg)
Figure 9. Proposed solution for Daytona and Miami.

# References

Bruun, Per. 1983. Review of conditions for uses of the Bruun rule of erosion.
Coastal Engineering 7 (1) (February 1983): 77-89.
Google Earth. 2008. http://earth.google.com/.
Herbich, John B., and Charles L. Bretschneider. 1992. Handbook of Coastal and Ocean Engineering. Houston: Gulf Publishing Co.
Intergovernmental Panel on Climate Change, Working Group II. 2007. Climate Change 2007: Impacts, Adaptation and Vulnerability. Working Group II Contribution to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change. http://www.gtp89.dial.pipex.com/chpt.htm.
McCarthy, James J., Osvaldo F. Canziani, Neil A. Leary, David J. Dokken, and Kasey S. White (eds.). 2001. Climate Change 2001: Impacts, Adaptation, and Vulnerability. Contribution of Working Group II to the Third Assessment Report of the Intergovernmental Panel on Climate Change. New York: Cambridge University Press. http://www.citeulike.org/user/slow-fi/article/297638.
Miller, J.K., and R.G. Dean. 2004. A simple new shoreline change model. Coastal Engineering 51: 531-556.
Nerem, Robert Steven, Eric Leuliette, and Anny Cazenave. 2006. Present-day sea-level change: A review. Comptes rendus Géoscience 338: 1077-1083.
National Oceanic and Atmospheric Administration (NOAA) Satellite and Information Service, National Geophysical Data Center. 2007. Daytona Beach, FL 1/3 arc-second tsunami inundation DEM. http://www.ngdc.noaa.gov/dem/showdem.jsp?dem=Daytona%20Beach&state=FL&cell=1/3%20arc-second.
______. 2008. Topographical maps of Florida.
Slott, Jordan. 2003. Shoreline response to sea-level rise: Examining the Bruun rule. Technical report. Nicholas School of the Environment and Earth Sciences, Department of Earth and Ocean Sciences, Duke University, Durham, NC.

![](images/87ee697566e304884f0b36a875a62bd3b0ac25f52ce7000e35199dbe7b53f88a.jpg)

Coach Lou Rossi (seated) with Mathematical Modeling teams' members (from left) senior Matthew Thies, senior Zachary Ulissi, junior Bob Liu, and freshman Kyle Thomas (kneeling).
Stendel, Martin, Vladimir E. Romanovsky, Jens H. Christensen, and Tatiana Sazonova. 2007. Using dynamical downscaling to close the gap between global change scenarios and local permafrost dynamics. Global and Planetary Change 56: 203-214.

Vinther, B.M., K.K. Andersen, P.D. Jones, K.R. Briffa, and J. Cappelen. 2006. Extending Greenland temperature records into the late eighteenth century. Journal of Geophysical Research 111, D11105, doi:10.1029/2005JD006810. http://www.agu.org/pubs/crossref/2006/2005JD006810.shtml.

Walton, Todd L., Jr. 2007. Projected sea level rise in Florida. Ocean Engineering 34: 1832-1840.

# Judge's Commentary: The Polar Melt Problem Papers

John L. Scharf

Dept. of Mathematics, Engineering, and Computer Science

Carroll College

Helena, MT 59625

jscharf@carroll.edu

# Introduction

The 2008 Polar Melt Problem presented teams with the challenge of modeling the effects on the coast of Florida over the next 50 years of the melting of the North Polar ice cap due to the predicted increases in global temperatures. Teams were to pay particular attention to large metropolitan areas and to propose appropriate responses to the effects predicted by their models. Teams were also encouraged to present a careful discussion of the data used.

From the judges' perspective, this problem was especially interesting but at the same time somewhat challenging to judge, because of the wide variety of points of focus that teams could choose: the physics of the model and the physical impacts of rising sea levels on coastal areas; indirect effects such as increases in the frequency and severity of hurricanes; and environmental, societal, and/or economic impacts. Regardless of the focus selected by a team, in the final analysis it was good modeling that allowed the judges to discern the outstanding papers.

# Judging

Judging of the entries occurs in three stages.
The first stage is Triage, where a judge spends approximately 10 min on each paper. In Triage, a complete and concise Executive Summary is critically important, because it is what the triage judges primarily use to pass first judgment on an entry. In reviewing the Executive Summary, judges look for indications that the paper directly responds to the problem statement, that it uses good modeling practice, and that the mathematics is sound. Because of the limited time that the triage judges spend on each paper, it is very likely that some potentially good papers are cut from advancing in the competition because of poor Executive Summaries. The importance of a good Executive Summary cannot be overstated.

For those papers that make it past Triage, the remaining two stages of judging are the Preliminary Rounds and the Final Judging. In the Preliminary Rounds, the judges read the body of the paper more carefully. The overriding question on the mind of most judges is whether the paper addresses the problem and answers all of the specific questions. Papers that rate highly are those that directly respond to the problem statement and specific questions, clearly and concisely show the modeling process, and give concrete results with some analysis of their validity and reliability.

In the Final Judging, the judges give very careful consideration to the methods and results presented. The features that judges look for in an Outstanding Paper are:

- a summary of results and their ramifications;
- a complete and comprehensive description of the model, including assumptions and the refinements that were made during development;
- a mathematical critique of the model, including sensitivity analysis and a description of its strengths and weaknesses; and
- recommendations for possible further work to improve the model.

The judges select as Outstanding the papers that best include and present each of these features.

# The Papers: The Good

Specifically for the "Take a Bath" problem, the judges identified a number of positive characteristics in the submitted papers. While many teams used regressions on historical sea-level data to predict future sea levels, the papers that were viewed more favorably were those that modeled the melting of the ice and its effects. Some even included thermal expansion of the water due to rising temperatures, and many recognized that melting of the floating portions of the North Polar ice cap would have much less impact than the melting of the ice supported by land in Greenland. While there was a wide range in the sea-level increases predicted by the models, many teams bounded their results using estimates of the total rise in sea levels worldwide if all the ice on Greenland were to melt. This estimate is widely available in the literature, and it enabled many of the teams to make judgments about what increases in sea levels might be reasonable (or unreasonable) to expect over the next 50 years.

The judges also favored papers that adequately addressed the impacts on Florida, especially in the metropolitan areas. Some of these papers predicted large increases in sea levels and showed how the major cities would be impacted, whether it was only on structures near the coasts or in widespread flooding of the urban area. Others predicted small increases in sea levels, in which case the impacts were often limited to increased beach erosion and/or salt water intrusion into fresh water in the ground and on the surface. Good papers also proposed appropriate responses to the effects, whether they were great or small. Other important considerations that some teams investigated were the potential impact of larger and more frequent hurricanes and the impact of rising sea levels on the natural environment in Florida, particularly on the Everglades.

# The Papers: The Bad

In some of the submitted papers, the judges also identified negative characteristics that should generally be avoided in good mathematical modeling and reporting. These items can detract from an otherwise good paper, and they may even result in the removal of a potentially good paper from further contention:

- Some teams used regression and curve-fitting to develop a model from existing data, and then used the model to extrapolate over the next 50 years. The functions chosen for regression often had no rational basis for fitting the data. As one judge pointed out, "sixth degree polynomials rarely occur in nature." Extrapolation beyond the domain of the regression data must always be undertaken with extreme caution, especially when there is no physical or other rational justification for the regression function in the context of the problem.
- While many of the teams did a good literature search to support their work, others used sources that were questionable. Before they are considered for use in a project, sources of information and data should always be critically judged as to their veracity, validity, and reliability.
- Some teams presented results to a degree of precision that is not appropriate. For example, one paper reported the predicted rise in sea level to a precision of eight significant digits. Modelers must always be cognizant of what degree of precision is appropriate for a given situation.
- Finally, some teams were not careful with units. Units should always be included and should be checked for correctness.

How a team addresses details like those listed here can make a big difference in how a judge rates a paper. Paying proper attention to such details in a team's report can help ensure that an otherwise worthy paper advances in the competition.

# Conclusion

By and large, the judges were pleased with the overall quality of the papers submitted for the Polar Melt Problem in the 2008 MCM.
Selecting the final Outstanding papers was especially difficult this year because so many of the papers were of high quality and competitive. As always, the judges are excited when they see papers that bring new ideas to a problem and go beyond looking up and applying models that are available in the literature. This year the judges had much to be excited about.

# About the Author

John L. Scharf is the Robert-Nix Professor of Engineering and Mathematics at Carroll College in Helena, MT. He earned a Ph.D. in structural engineering from the University of Notre Dame, an M.S. degree in structural engineering from Columbia University, and a B.A. in mathematics from Carroll College. He has been on the Carroll College faculty since 1976 and served as Chair of the Department of Mathematics, Engineering, and Computer Science from 1999 to 2005. He also served as Interim Vice President for Academic Affairs during the 2005-06 academic year. He has served as an MCM judge in every year but one since 1996.

Pp. 305-362 can be found on the Tools for Teaching 2008 CD-ROM.

# A Difficulty Metric and Puzzle Generator for Sudoku

Christopher Chang

Zhou Fan

Yi Sun

Harvard University

Cambridge, MA

Advisor: Clifford H. Taubes

# Abstract

We present a novel approach to creating Sudoku puzzles and rating their difficulty. We frame Sudoku as a search problem and use the expected search time to determine the difficulty of various strategies. Our method is relatively independent of external views on the relative difficulties of strategies.

Validating our metric with a sample of 800 puzzles rated externally into eight gradations of difficulty, we found a Goodman-Kruskal $\gamma$ coefficient of 0.82, indicating significant correlation [Goodman and Kruskal 1954].
An independent evaluation of 1,000 typical puzzles produced a difficulty distribution similar to the distribution of solve times empirically created by millions of users at http://www.websudoku.com.

Based upon this difficulty metric, we created two separate puzzle generators. One generates mostly easy-to-medium puzzles; when run with four difficulty levels, it creates puzzles (or boards) of those levels in an expected 0.25, 3.1, 4.7, and 30 min, respectively. The other puzzle generator modifies difficult boards to create boards of similar difficulty; when tested on a board of difficulty 8,122, it created 20 boards with average difficulty 7,111 in 3 min.

# Introduction

In Sudoku, a player is presented with a $9 \times 9$ grid divided into nine $3 \times 3$ regions. Some of the 81 cells of the grid are initially filled with digits between 1 and 9 such that there is a unique way to complete the rest of the grid while satisfying the following rules:

1. Each cell contains a digit between 1 and 9.
2. Each row, column, and $3 \times 3$ region contains exactly one copy of the digits $\{1, 2, \ldots, 9\}$ .

A Sudoku puzzle consists of such a grid together with an initial collection of digits that guarantees a unique final configuration. Call this final configuration a solution to the puzzle. The goal of Sudoku is to find this unique solution from the initial board.

Figure 1 shows a Sudoku puzzle and its solution.

![](images/6af6822e6117471c630c29e41bb284ef660e35a8d7277bd383d83a6f2b13c79f.jpg)
Figure 1. Sudoku puzzle and solution from the London Times (16 February 2008) [Sudoku n.d.].

![](images/a6f686c606442234763338084fb2319401298cbca3af624938217509ae57bf22.jpg)

In the puzzle of Figure 1, we cannot have 8, 3, or 7 appear anywhere else in the bottom row, since each number can appear in the bottommost row only once. Similarly, 8 cannot appear in any of the empty squares in the lower left-hand region.

# Notation

We first introduce some notation.
Number the rows and columns from 1 to 9, beginning at the top and left, respectively, and number each $3 \times 3$ region of the board as in Figure 2.

We refer to a cell by an ordered pair $(i,j)$ , where $i$ is its row and $j$ its column; the term group will collectively denote a row, column, or region.

Given a Sudoku board $B$ , define the Sudoku Solution Graph (SSG) $B'$ to be the structure that associates with each cell in $B$ the set of digits currently thought to be candidates for the cell. For example, in Figure 1, cell (9, 9) cannot take the values $\{1, 3, 4, 7, 8, 9\}$ because it shares a group with cells containing these values. Therefore, this cell has candidate set $\{2, 5, 6\}$ in the corresponding SSG.

![](images/9618f707c8570f6c5f370298f3029bd0d5f4b2ad96e2a2f7a4a075f864a4ddb2.jpg)
Figure 2. Numbering of $3 \times 3$ regions of a Sudoku board.

To solve a Sudoku board, a player applies strategies, i.e., patterns of logical deduction (see the Appendix). We assume the SSG has been evaluated for every cell on the board before any strategies are applied.

# Problem Background

Most efforts on Sudoku have been directed at solving puzzles or analyzing the computational complexity of solving Sudoku [Lewis 2007; Eppstein 2005; Lynce and Ouaknine 2006]. Sudoku can be solved extremely quickly via reduction to an exact cover problem and an application of Knuth's Algorithm X [2000]. However, solving the $n^2 \times n^2$ generalization of Sudoku is known to be NP-complete [Yato 2003].

We investigate:

1. Given a puzzle, how does one define and determine its difficulty?
2. Given a difficulty, how does one generate a puzzle of this difficulty?

While generating a valid Sudoku puzzle is not too complex, the non-local and unclear process of deduction makes determining or specifying a difficulty much more complicated.
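To make the SSG concrete, here is a minimal Python sketch of the candidate-set computation just defined. (The authors' hsolve was written in Java; this sketch, including the 0-for-empty board encoding and the function names, is our illustrative assumption, not their code.)

```python
# Illustrative sketch (not the authors' code): compute the SSG,
# i.e., the set of candidate digits for each empty cell.
# Assumed encoding: board is a 9x9 list of lists, 0 = empty cell.

def peers(i, j):
    """All cells sharing a row, column, or 3x3 region with (i, j)."""
    cells = set()
    for k in range(9):
        cells.add((i, k))  # same row
        cells.add((k, j))  # same column
    ri, rj = 3 * (i // 3), 3 * (j // 3)
    for a in range(ri, ri + 3):  # same 3x3 region
        for b in range(rj, rj + 3):
            cells.add((a, b))
    cells.discard((i, j))
    return cells

def ssg(board):
    """Map each empty cell (i, j) to its set of candidate digits."""
    return {
        (i, j): set(range(1, 10)) - {board[a][b] for (a, b) in peers(i, j)}
        for i in range(9)
        for j in range(9)
        if board[i][j] == 0
    }
```

For instance, an empty cell whose peers already contain $\{1, 3, 4, 7, 8, 9\}$ receives the candidate set $\{2, 5, 6\}$, as in the example for cell (9, 9) above.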

Traditional approaches involve rating a puzzle by the strategies necessary to find the solution; other approaches have been proposed by Caine and Cohen [2006] and Emery [2007]. A genetic-algorithms approach found some correlation with human-rated difficulties [Mantere and Koljonen 2006], and Simonis [2005] presents similar findings with a constraint-based rating. However, in both cases, the correlation is not clear.

Puzzle generation seems to be more difficult. Most existing generators use complete search algorithms to add numbers systematically to cells in a grid until a unique solution is found. To generate a puzzle of a given difficulty, this process is repeated until the desired difficulty is achieved. This is the approach found in Mantere and Koljonen [2006], while Simonis [2005] posits both this and a similar method based on removal of cells from a completed board. Felgenhauer and Jarvis [2005] calculate the number of valid Sudoku puzzles.

We present a new approach. We create hsolve, a program that simulates how a human solver approaches a puzzle, and present a new difficulty metric based upon hsolve's simulation of human solving behavior. We propose two methods based on hsolve to generate puzzles of varying difficulties.

# Problem Setup

# Difficulty Metric

We create an algorithm that takes a puzzle and returns a real number that represents its abstract "difficulty" according to some metric. We base our definition of difficulty on the following general assumptions:

1. The amount of time for a human to solve a puzzle increases monotonically with difficulty.
2. Every solver tries various strategies. To avoid the dependence of our results on a novice's ignorance of strategies, and to extend the range of measurable puzzles, we take our hypothetical solver to be an expert.

Hence, we define the difficulty of a Sudoku puzzle to be the average amount of time that a hypothetical Sudoku expert would spend solving it.

# Puzzle Generation

Our main goal in puzzle generation is to produce a valid puzzle of a given desired difficulty level that has a unique solution. We take a sample of 1,000 Sudoku puzzles and assume that they are representative of the difficulty distribution of all puzzles. We also endeavor to minimize the complexity of the generation algorithm, measured as the expected execution time to find a puzzle of the desired difficulty level.

# A Difficulty Metric

# Assumptions and Metric Development

To measure the time for an expert Sudoku solver to solve a puzzle, there are two possibilities:

1. Model the process of solving the puzzle.
2. Find some heuristic for board configurations that predicts the solve time.

There are known heuristics for the difficulty of a puzzle: for example, puzzles with a small number of initial givens are somewhat harder than most. However, according to Hayes [2006], the overall correlation is weak.

Therefore, we must model the process of solving. We postulate the following assumptions for the solver:

1. Strategies can be ranked in order of difficulty, and the solver always applies them from least to most difficult. This assumption is consistent with the literature. We use a widely accepted ranking of strategies, described in the Appendix.
2. During the search for a strategy application, each ordering of possible strategy applications occurs with equal probability. There are two components of a human search for a possible location to apply a strategy: complete search and intuitive pattern recognition. While human pattern recognition is extremely powerful (see, for example, Cox et al. [1997]), it is extremely difficult to determine its precise consequences, especially due to possible differences between solvers. Therefore, we do not consider any intuitive component to pattern recognition and restrict our model to a complete search for strategy applications.
Such a search will proceed among possible applications in the random ordering that we postulate.

We define a possible application of a strategy to be a configuration on the board that is checked by a human to determine whether the given strategy can be applied; the list of exactly which configurations are checked varies by strategy and is given in the Appendix. We model our solver as following the algorithm HumanSolve defined as follows:

Algorithm HumanSolve repeats the following steps until there are no remaining empty squares:

1. Choose the least difficult tier of strategies that has not yet been searched for in the current board configuration.
2. Search through the possible applications of the strategies in this tier for a valid application.
3. Apply the first valid application found.

We take the difficulty of a single run of HumanSolve to be the total number of possible applications that the solver must check; we assume that each check takes the same amount of time. Multiple runs of this method on the same puzzle may have different difficulties, due to different valid applications being recognized first.

For a board $B$ , its difficulty metric $m(B)$ is the average total number of possible applications checked by the solver while using the HumanSolve algorithm.

# hsolve and Metric Calculation

To calculate $m(B)$ , we use hsolve, a program in Java 1.6 that simulates HumanSolve and calculates the resulting difficulty:

1. Set the initial difficulty $d = 0$ .
2. Repeat the following actions in order until $B$ is solved or the solver cannot progress:

(a) Choose the tier of easiest strategies $S$ that has not yet been searched for in the current board configuration.
(b) Find the number $p$ of possible applications of $S$ .
(c) Find the set $V$ of all valid applications of $S$ and compute the size $v$ of $V$ .
(d) Compute $E(p, v)$ , the expected number of possible applications that will be examined before a valid application is found.
(e) Increment $d$ by $E(p, v) \times t$ , where $t$ is the standard check time. Pick a random application in $V$ and apply it to the board.

3. Return the value of $d$ and the final solved board.

While hsolve is mostly a direct implementation of HumanSolve, it does not actually perform a random search through possible applications; instead, it uses the expected search time $E(p,v)$ to simulate this search. The following lemma gives an extremely convenient closed-form expression for $E(p,v)$ that we use in hsolve.

Lemma. Assuming that all search paths through $p$ possible applications are equally likely, the expected number $E(p, v)$ of checks required before finding one of $v$ valid applications is given by

$$
E (p, v) = \frac {p + 1}{v + 1}.
$$

Proof: For our purposes, to specify a search path it is enough to specify the $v$ indices of the valid applications out of $p$ choices, so there are $\binom{p}{v}$ possible search paths. Let $I$ be the random variable equal to the smallest index of a valid application. Then, we have

$$
\begin{array}{l} E (p, v) = \sum_ {i = 1} ^ {p - v + 1} i P (I = i) = \sum_ {i = 1} ^ {p - v + 1} \sum_ {j = i} ^ {p - v + 1} P (I = j) = \sum_ {i = 1} ^ {p - v + 1} P (I \geq i) \\ = \frac {1}{\binom {p} {v}} \sum_ {i = 1} ^ {p - v + 1} \binom {p + 1 - i} {v} = \frac {1}{\binom {p} {v}} \sum_ {j = 0} ^ {p - v} \binom {v + j} {v} = \frac {\binom {p + 1} {v + 1}}{\binom {p} {v}} = \frac {p + 1}{v + 1}, \\ \end{array}
$$

where we've used the "hockey-stick identity" [AoPS Inc. 2007].

Given a puzzle $B$ , we calculate $m(B)$ by running hsolve several times and taking the average of the returned difficulties. Doing 20 runs per puzzle gives a ratio of standard deviation to mean of $\frac{\sigma}{\mu} \approx \frac{1}{10}$ , so we use 20 runs per puzzle.

# Analysis

Our evaluation of hsolve consists of three major components:

1.
Checking that hsolve's conception of difficulty is correlated with existing conceptions of difficulty.
2. Comparing the distribution of difficulties generated by hsolve to established distributions for solve time.
3. Finding the runtime of the algorithm.

# Validation Against Existing Difficulty Ratings

For each of the difficulty ratings in {supereasy, veryeasy, easy, medium, hard, harder, veryhard, superhard}, we downloaded a set of 100 puzzles from Hanssen [n.d.]. No other large datasets with varying difficulty ratings were available.

We ran hsolve on each puzzle 20 times and recorded the average difficulty for each board. We then classified the 800 boards by ranking them on measured difficulty and dividing them into 8 groups of 100 puzzles. Table 1 shows the results.

Table 1. Results: $\chi^2 = 6350$ (df = 49), $\gamma = 0.82$ .
| Difficulty | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
|-----------|---|---|---|---|---|---|---|---|
| supereasy | 81 | 19 | 0 | 0 | 0 | 0 | 0 | 0 |
| veryeasy | 19 | 68 | 12 | 1 | 0 | 0 | 0 | 0 |
| easy | 0 | 8 | 38 | 33 | 18 | 2 | 1 | 0 |
| medium | 0 | 2 | 26 | 29 | 22 | 17 | 4 | 0 |
| hard | 0 | 2 | 10 | 19 | 20 | 30 | 11 | 8 |
| harder | 0 | 0 | 5 | 7 | 22 | 26 | 36 | 4 |
| veryhard | 0 | 1 | 9 | 7 | 16 | 13 | 27 | 27 |
| superhard | 0 | 0 | 0 | 4 | 2 | 12 | 21 | 61 |

A $\chi^2$ -test for independence gives $\chi^2 = 6350$ ( $p < 0.0001$ ). Thus, there is a statistically significant deviation from independence.

Furthermore, the Goodman-Kruskal coefficient $\gamma = 0.82$ is relatively close to 1, indicating a fairly strong correlation between our measure of difficulty and the existing metric. This provides support for the validity of our metric; more precise analysis seems unnecessary because we are only checking that our values are close to those of others.

# Validation of Difficulty Distribution

When run 20 times on each of 1,000 typical puzzles from Lenz [n.d.], hsolve generates the distribution of measured difficulty shown in Figure 3. The distribution is sharply peaked near 500 and has a long tail towards higher difficulty.

![](images/b2d5a96c41c4f6acc4f3b433a74e0e330775a923029c3795328f2197e12d7439.jpg)
Figure 3. Histogram of measured difficulty for 1,000 typical puzzles.

We compare this difficulty distribution with the distribution of times required for visitors to http://www.websudoku.com to solve the puzzles available there [Web Sudoku n.d.]. This distribution, generated by the solution times of millions of users, is shown in Figure 4.

![](images/ceafa109c67c0bd7ef9c6a6ab47e0934acdea9c892431292a63ad17cc9711091.jpg)
Figure 4. A distribution plot of the time to solve Easy-level puzzles on www.websudoku.com; the mean is 5 min 22 sec.

The two graphs share a peak near 0 and are skewed to the right.

# Runtime

Running 20 iterations of hsolve per puzzle, rating 100 puzzles requires 13 min, or about 8 sec per puzzle, on a 2 GHz Centrino Duo processor
+ +# Generator + +Our choice of using a solver-based metric for difficulty has the following implications for puzzle generation: + +- It is impossible to make a very accurate prediction of the difficulty of the puzzle in the process of generating it, before all of the numbers on the puzzle have been determined. This is because adding or repositioning a number on the board can have a profound impact on which strategies are needed to solve the puzzle. + +Thus, given a difficulty, we create a puzzle-generating procedure that generates a puzzle of approximately the desired difficulty and then runs hsolve on the generated puzzle to determine if the actual difficulty is the same as the desired difficulty. This is the approach that we take in both the generator and pseudo-generator described below. + +- There is an inevitable trade-off between the ability to generate consistently difficult puzzles and the ability to generate truly random puzzles. A generator that creates puzzles with as randomized a process as possible is unlikely to create very difficult puzzles, since complex strategies would not be employed very often. + +Hence, for a procedure that consistently generates hard puzzles, we must either reduce the randomness in the puzzle-generating process or limit the types of puzzles that can result. + +- The speed at which puzzles can be generated depends upon the speed of hsolve. + +We describe two algorithms for generating puzzles: a standard generator and a pseudo-generator. + +# Standard Generator + +Our standard puzzle generator follows this algorithm: + +1. Begin with an empty board and randomly choose one number to fill into one cell. +2. Apply hsolve to make all logical deductions possible. (That is, after every step of generating a puzzle, keep track of the Sudoku Solution Graph for all cells of the board.) +3. 
Repeat the following steps until either a contradiction is reached or the board is completed:

- Randomly fill an unoccupied cell on the board with a candidate from that cell's SSG.
- Apply hsolve to make all logical deductions (which will fill in naked and hidden singles and adjust the SSG accordingly).
- If a contradiction occurs on the board, abort the procedure and start the process again from an empty board.

If no contradiction is reached, then eventually the board must be completely filled, since a new cell is filled in manually at each iteration.

The final puzzle is the board with all of the numbers that were filled in manually at each iteration of the algorithm (i.e., the board without the numbers filled in by hsolve).

# Guaranteeing a Unique Solution with Standard Generator

For this algorithm to work, a small modification must be made to our backtracking strategy. If the backtracking strategy makes a guess that successfully completes the puzzle, we treat this guess as if it does not complete the puzzle but instead comes to a dead end. Thus, the backtracking strategy modifies the board only if it makes a guess on some square that results in a contradiction, in which case it fills in that square with the other possibility. With this modification, we easily see that if our algorithm successfully generates a puzzle, then the puzzle must have a unique solution, because all of the cells of the puzzle that are not filled in are those that were determined at some point in the construction process by hsolve. With this updated backtracking strategy, hsolve makes a move only if the move follows logically and deterministically from the current state of the board; so if hsolve reaches a solution, it must be the unique one.

# Pseudo-Generator

Our pseudo-generator takes a completed Sudoku board and a set of cells, called the reserved cells, to leave empty at the beginning of a puzzle.
The idea is to guarantee the use of a high-level strategy, such as Swordfish or Backtracking, by ensuring that a generated puzzle cannot be completed without such a strategy. Call the starting puzzle the seed board and its solution the completed seed board. To use the pseudo-generator, we must first prepare a list of reserved cells, found as follows:

1. Take a seed board that hsolve cannot solve using only strategies up to tier $k$ but can solve with strategies up to tier $k + 1$ (see the Appendix for the different tiers of strategies we use).
2. Use hsolve to make all possible deductions (i.e., adjusting the SSG) using only strategies up to tier $k$ .
3. Create a list of cells that are still empty.

We then pass to the pseudo-generator the completed seed board and this list of reserved cells. The pseudo-generator iterates the algorithm below, starting with an empty board, until all the cells except the reserved cells are filled in:

1. Randomly fill an unoccupied, unreserved cell on the board with the number in the corresponding cell of the completed seed board.
2. Apply hsolve to make logical deductions and to complete the board as much as possible.

# Differences From Standard Generator

The main differences between the pseudo-generator and the standard generator are:

1. When filling in an empty cell, the pseudo-generator uses the number in the corresponding cell of the completed seed board, instead of choosing a number at random from the cell's SSG.
2. When selecting which empty cell to fill in, the pseudo-generator never selects one of the reserved cells.
3. hsolve is equipped with strategies only up to tier $k$ .
4. The pseudo-generator terminates not when the board is completely filled in but rather when all of the unreserved cells are filled in.

The pseudo-generator is only partially random.
It provides enough clues so that the unreserved cells of the board can be solved with strategies up to tier $k$ , and the choice of which of these cells to reveal as clues is made randomly. However, the solution of the generated puzzle is independent of these random choices and must be identical to the completed seed board. For the same reason as in the standard generator, the solution must be unique.

The pseudo-generator never provides clues for reserved cells; hence, when hsolve solves a puzzle, it uses strategies of tiers 0 through $k$ to fill in the unreserved cells and then is forced to use a strategy in tier $k + 1$ to solve the remaining portion of the board.

# Pseudo-Generator Puzzle Variability

The benefit of the pseudo-generator over the standard generator is that it generates puzzles in which a strategy of tier $k + 1$ must be used, thus guaranteeing a high level of difficulty (if $k$ is high). The drawback is that the pseudo-generator cannot be said to generate a puzzle at random, since it starts with a puzzle already generated in the past and constructs a new puzzle (using some random choices) out of its solution.

We implement the pseudo-generator by first randomly permuting the rows, columns, and numbers of the given completed puzzle, so as to create the illusion that it is a different puzzle. Ideally, we should have a large database of difficult puzzles to choose from (together with the highest-tier strategy needed to solve each puzzle and its list of reserved cells that cannot be filled with strategies of lower tiers).

# Difficulty Concerns

"Difficulty level" is not well-defined: In a system of three difficulty levels, how difficult is a medium puzzle, as compared to a hard or easy puzzle? In the previous correlation analysis, in which we divided 800 puzzles into eight difficulty levels, we forced each difficulty level to contain 100 puzzles.
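One simple way to operationalize equal-count difficulty levels like those above is to place the level boundaries at quantiles of a sample of measured difficulties. A minimal sketch (the helper names and the four-level example below are our illustrative assumptions, not the paper's code):

```python
# Illustrative sketch (not the authors' code): derive equal-count
# difficulty levels from a sample of measured difficulties.

def level_bounds(sample, levels):
    """Quantile cut points dividing the sorted sample into equal-count levels."""
    s = sorted(sample)
    n = len(s)
    return [s[(k * n) // levels] for k in range(1, levels)]

def level_of(difficulty, bounds):
    """1-based level of a difficulty value: count the cut points it reaches."""
    return 1 + sum(difficulty >= b for b in bounds)
```

With four levels, for example, a sample of measured difficulties yields three cut points, and a generator can be iterated until the measured difficulty of a candidate puzzle falls into the requested level.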

# Generating Puzzles with a Specific Difficulty

Figure 5 shows the measured difficulty of 1,000 puzzles generated by the standard generator. We can divide the puzzles into intervals of difficulty, with equal numbers of puzzles in each interval. To create a puzzle of a given difficulty level using the standard generator, we iterate the generator until a puzzle is generated whose difficulty value falls within the appropriate interval.

![](images/34244882f6b9ee093ea50472adfb06e914659c6f5a197bb5cb6be347b84928e5.jpg)
Figure 5. A histogram of the measured difficulty of 1,000 puzzles generated by the standard generator.

# Standard Generator Runtime

It took 3 min to generate 100 valid boards (and 30 invalid boards) and 12 min to determine the difficulties of the 100 valid boards. Thus, 100 boards take a total of 15 min to run, or an average of about 9 sec per valid board.

From the difficulty distribution in Figure 5, we can obtain an expected runtime estimate for each level of difficulty. For four levels, the number of boards that one needs to construct to obtain a board of level 1 is a geometric random variable with parameter $p = \frac{598}{1000}$ , so the expected runtime to obtain a board of level 1 is $0.15 \times \frac{1000}{598} \approx 0.25$ min. Similarly, the expected runtimes to obtain boards of level 2, level 3, and level 4 are 3.1, 4.7, and 30 min, respectively.

# Using Pseudo-Generator to Generate Difficult Puzzles

To generate large numbers of difficult boards, it would be best to employ the pseudo-generator. We fed the pseudo-generator a puzzle ("Riddle of Sho") that can be solved only by using the tier-5 backtracking strategy [Sudoku Solver n.d.]. The difficulty of the puzzle was determined to be 8,122, while the average difficulty of 20 derived puzzles generated using this puzzle was 7,111.
Since all puzzles derived from a puzzle fed into the pseudo-generator must share application of the most difficult strategy, the difficulties of the derived puzzles are approximately the same as that of the original puzzle.

With a database of difficult puzzles, a method of employing the pseudo-generator is to find the midpoint of the difficulty bounds of the desired level, choose at random a puzzle whose difficulty is close to this midpoint, and generate a derived puzzle. If the difficulty of the derived puzzle fails to be within our bounds, we continue choosing an existing puzzle at random and creating a derived puzzle until the bound condition is met. The average generation time for a puzzle is 9 sec, the same as for the standard generator. For difficult boards, there is a huge difference between the two strategies in the expected number of boards that one needs to construct, and the pseudo-generator is much more efficient.

# Conclusion

# Strengths

Our human solver hsolve models how a human Sudoku expert would solve a Sudoku puzzle by posing Sudoku as a search problem. We judge the relative cost of each strategy by the number of verifications of possible strategy applications necessary to find it, and thereby avoid assigning explicit numerical difficulty values to specific strategies. Instead, we allow the difficulty of a strategy to emerge from the difficulty of finding it, giving a more formal treatment of what seems to be an intuitive notion. This derivation of the difficulty provides a more objective metric than that used in most existing difficulty ratings.

The resulting metric has a Goodman-Kruskal $\gamma$-coefficient of 0.82 with an existing set of hand-rated puzzles, and it generates a difficulty distribution that corresponds to one empirically generated by millions of users. Thus, we have some confidence that this new metric gives an accurate and reasonably fast method of rating Sudoku puzzle difficulties.
We produced two puzzle generators, one able to generate original puzzles that are mostly relatively easy to solve, and one able to modify pre-existing hard puzzles to create ones of similar difficulty. Given a database of difficult puzzles, our pseudo-generator is able to reliably generate many more puzzles of these difficulties.

# Weaknesses

It was difficult to test the difficulty metric conclusively because of the dearth of available human-rated Sudoku puzzles. Hence, we could not conclusively establish what we believe to be a significant advantage of our difficulty metric over most existing ones.

While our puzzle generator generated puzzles of all difficulties according to our metric, it experienced difficulty creating very hard puzzles, as they occurred quite infrequently. Although we attempted to address this flaw by creating the pseudo-generator, it cannot create puzzles with entirely different final configurations.

Because of the additional computations required to calculate the search space for human behavior, both the difficulty metric and the puzzle generator have relatively slow runtimes compared to other raters and generators.

# Appendix: Sudoku Strategies

Most (but not all) Sudoku puzzles can be solved using a series of logical deductions [What is Sudoku? n.d.]. These deductions have been organized into a number of common patterns, which we have organized by difficulty. The strategies have been classed into tiers between 0 and 5 based upon the general consensus of many sources on their level of complexity (for example, see Johnson [n.d.] and Sudoku Strategy [n.d.]).

In this work, we have used what seem to be the most commonly occurring and accessible strategies, together with some simple backtracking. There are, of course, many more advanced strategies; but since our existing strategies suffice to solve almost all puzzles that we consider, we choose to ignore the more advanced ones.

# 0.
Tier 0 Strategies

- Naked Single: A Naked Single exists in cell $(i,j)$ if cell $(i,j)$ on the board has no entry, but the corresponding entry $(i,j)$ on the Sudoku Solution Graph has one and only one possible value. For example, in Figure A1, we see that cell $(2,9)$ is empty. Furthermore,

![](images/2627571fb6dc8ea4811abff8186f03af9c764740d769fc2d634677b9a7e0a736.jpg)
Figure A1. Example for Naked Single strategy.

the corresponding Sudoku Solution Graph entry in (2, 9) can only contain the number 9, since the numbers 1 through 8 are already assigned to cells in row 2. Therefore, since cell (2, 9) in the corresponding Sudoku Solution Graph has only one (naked) value, we can assign that value to cell (2, 9) on the Sudoku board.

Application Enumeration: Since a Naked Single could occur in any empty cell, this is just the number of empty cells, since checking whether any empty cell is a Naked Single requires constant time.

- Hidden Single: A Hidden Single occurs in a given cell $(i,j)$ when:

(a) $(i,j)$ has no entry on the Sudoku board
(b) $(i,j)$ contains the value $k$ (among other values) on the Sudoku Solution Graph
(c) No other cell in the same group as $(i,j)$ has $k$ as a value in its Sudoku Solution Graph

Once we find a Hidden Single in $(i,j)$ with value $k$, we assign $k$ to $(i,j)$ on the Sudoku board. The logic behind Hidden Singles is that in any group, all numbers 1 through 9 must appear exactly once. If we know that cell $(i,j)$ is the only cell that could contain the value $k$ in a given row, then we know that it must hold value $k$ on the actual Sudoku board. We can consider the example in Figure A2.

We look at cell (1, 1). First, (1, 1) does not have an entry, and we can see that its corresponding entry in the Sudoku Solution Graph contains $\{1, 2, 7, 8, 9\}$. However, we see that the other cells in region 1 that don't have values assigned, i.e.
cells (1, 2), (1, 3), (2, 1) and (3, 1), do not have the value 1 in their corresponding Sudoku Solution Graph cells; that is, none of the other four empty cells in region 1 besides (1, 1) can hold the value 1, and so we can assign 1 to the cell (1, 1).

![](images/2641711039b46b61c775e25fa884b7972d5857436224ec05eac58763074efcb7.jpg)
Figure A2. Example for Hidden Single strategy.

Application Enumeration: Since a Hidden Single could occur in any empty cell, this is just the number of empty cells, since checking whether any empty cell is a Hidden Single requires constant time (inspecting other cells in the same group).

# 1. Tier 1 Strategies

- Naked Double: A Naked Double occurs when two cells on the board in the same group $g$ do not have values assigned, and both their corresponding cells in the Sudoku Solution Graph have only the same two values $k_{1}$ and $k_{2}$ assigned to them. A Naked Double in $(i_{1}, j_{1})$ and $(i_{2}, j_{2})$ does not immediately give us the values contained in either $(i_{1}, j_{1})$ or $(i_{2}, j_{2})$, but it does allow us to eliminate $k_{1}$ and $k_{2}$ from the Sudoku Solution Graph of all cells in $g$ besides $(i_{1}, j_{1})$ and $(i_{2}, j_{2})$.

Application Enumeration: For each row, column, and region, we sum up $\binom{n}{2}$, where $n$ is the number of empty cells in each group, since a Naked Double requires two empty cells in the same group.
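The application-enumeration counts used here (the number of empty cells for singles, and sums of $\binom{n}{2}$ over the 27 groups for doubles) can be sketched as follows. This is an illustrative reconstruction under our own assumptions, not the authors' code; the board is represented as a 9-by-9 array with 0 marking an empty cell.

```python
from math import comb

def groups():
    """Yield the cell coordinates of every row, column, and 3x3 region."""
    for i in range(9):
        yield [(i, j) for j in range(9)]                     # row i
        yield [(j, i) for j in range(9)]                     # column i
    for r in range(3):
        for c in range(3):
            yield [(3 * r + i, 3 * c + j)
                   for i in range(3) for j in range(3)]      # region (r, c)

def enumeration_cost(board, subset_size):
    """Count potential applications of a naked/hidden subset strategy.

    For singles this is the number of empty cells; for doubles (or
    triples, quads) it is the sum of C(n, subset_size) over all 27
    groups, where n is the group's number of empty cells.
    """
    if subset_size == 1:
        return sum(board[i][j] == 0 for i in range(9) for j in range(9))
    return sum(comb(sum(board[i][j] == 0 for (i, j) in g), subset_size)
               for g in groups())

empty_board = [[0] * 9 for _ in range(9)]
singles = enumeration_cost(empty_board, 1)   # 81 empty cells
doubles = enumeration_cost(empty_board, 2)   # 27 groups * C(9, 2)
```

The same routine with `subset_size` 3 or 4 gives the enumeration counts quoted later for triples and quads.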
- Hidden Double: A Hidden Double occurs in two cells $(i_1, j_1)$ and $(i_2, j_2)$ in the same group $g$ when:

(a) $(i_1,j_1)$ and $(i_2,j_2)$ have no values assigned on the board
(b) $(i_1, j_1)$ and $(i_2, j_2)$ share two entries $k_1$ and $k_2$ (and possibly contain more) in the Sudoku Solution Graph
(c) $k_{1}$ and $k_{2}$ do not appear in any other cell in group $g$ on the Sudoku Solution Graph

A Hidden Double does not allow us to immediately assign values to $(i_1, j_1)$ or $(i_2, j_2)$, but it does allow us to eliminate all entries other than $k_1$ and $k_2$ in the Sudoku Solution Graph for cells $(i_1, j_1)$ and $(i_2, j_2)$.

Application Enumeration: For each row, column, and region, we sum up $\binom{n}{2}$, where $n$ is the number of empty cells in each group, since a Hidden Double requires two empty cells in the same group.

- Locked Candidates: A Locked Candidate occurs if we have cells (for simplicity, suppose we have only two: $(i_1, j_1)$ and $(i_2, j_2)$) such that:

(a) $(i_1,j_1)$ and $(i_2,j_2)$ have no entries on the board
(b) $(i_1,j_1)$ and $(i_2,j_2)$ share two groups, $g_{1}$ and $g_{2}$ (i.e., both cells are in the same row and region, or the same column and region)
(c) $(i_1, j_1)$ and $(i_2, j_2)$ share some value $k$ in the Sudoku Solution Graph
(d) $\exists g_{3}$, a group of the same type as $g_{1}$, $g_{1} \neq g_{3}$, such that $k$ occurs in cells of $g_{2} \cap g_{3}$
(e) $k$ does not occur elsewhere in $g_{3}$ besides $g_{3} \cap g_{2}$
(f) $k$ does not occur in $g_{2}$ aside from $(g_{2} \cap g_{1}) \cup (g_{2} \cap g_{3})$

Then, since $k$ must occur at least once in $g_{3}$, we know that $k$ must occur in $g_{2} \cap g_{3}$. However, since $k$ can occur only once in $g_{2}$, $k$ cannot occur in $g_{2} \cap g_{1}$, so we can eliminate $k$ from the Sudoku Solution Graph cells corresponding to $(i_{1}, j_{1})$ and $(i_{2}, j_{2})$. A Locked Candidate can also occur with three cells.
Application Enumeration: For every row $i$, we examine each three-cell subset $rs_{ij}$ formed as the intersection with some region $j$; there are twenty-seven such subsets. Out of those twenty-seven, we denote the number of subsets that have two or three empty cells by $r_l$. We define $c_l$ for columns analogously, so this value is just the sum $r_l + c_l$.

# 2. Tier 2 Strategies

- Naked Triple: A Naked Triple occurs when three cells on the board, $(i_1,j_1), (i_2,j_2)$ and $(i_3,j_3)$, in the same group $g$ do not have values assigned, and all three of their corresponding cells in the Sudoku Solution Graph share only the same three possible values, $k_{1}, k_{2}$ and $k_{3}$. However, each cell of a Naked Triple does not have to contain all three values; e.g., $(i_1,j_1)$ can have values $k_{1}, k_{2}$ and $k_{3}$, $(i_2,j_2)$ the values $k_{2}$ and $k_{3}$, and $(i_3,j_3)$ the values $k_{1}$ and $k_{3}$ on the Sudoku Solution Graph. We can remove $k_{1}, k_{2}$ and $k_{3}$ from all cells in the Sudoku Solution Graph that are also in group $g$, except for $(i_1,j_1), (i_2,j_2)$ and $(i_3,j_3)$; the logic is similar to that of the Naked Double strategy.

Application Enumeration: For each row, column, and region, we sum up $\binom{n}{3}$, where $n$ is the number of empty cells in each group, since a Naked Triple requires three empty cells in the same group.
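A naked-subset elimination of this kind can be sketched as follows. This is an illustrative reconstruction under our own assumptions, not the authors' implementation; `candidates` stands in for the Sudoku Solution Graph, mapping each cell to its set of possible values.

```python
from itertools import combinations

def apply_naked_subset(candidates, group, size):
    """Apply the Naked Double (size=2) or Naked Triple (size=3) rule
    to one group (a list of cell coordinates).

    If `size` unfilled cells in the group jointly contain only `size`
    candidate values, remove those values from every other cell in the
    group.  Returns True if any candidate was eliminated.
    """
    changed = False
    unfilled = [c for c in group if len(candidates[c]) > 1]
    for cells in combinations(unfilled, size):
        union = set().union(*(candidates[c] for c in cells))
        if len(union) == size:                       # naked subset found
            for other in unfilled:
                if other not in cells and candidates[other] & union:
                    candidates[other] -= union       # eliminate its values
                    changed = True
    return changed

# A row where (0, 0) and (0, 1) form a Naked Double on {1, 2}:
row = [(0, j) for j in range(9)]
candidates = {(0, 0): {1, 2}, (0, 1): {1, 2}, (0, 2): {1, 2, 3}}
candidates.update({(0, j): {j + 1} for j in range(3, 9)})
apply_naked_subset(candidates, row, 2)   # (0, 2) is reduced to {3}
```

The same loop with `size=3` handles Naked Triples, including the case where each of the three cells holds only a subset of the three values.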
- Hidden Triple: A Hidden Triple is similar to a Naked Triple in the way that a Hidden Double is similar to a Naked Double, and occurs in cells $(i_{1}, j_{1}), (i_{2}, j_{2})$ and $(i_{3}, j_{3})$ sharing the same group $g$ when:

(a) $(i_{1}, j_{1}),(i_{2}, j_{2})$ and $(i_{3}, j_{3})$ contain no values on the Sudoku board

(b) Values $k_{1}, k_{2}$ and $k_{3}$ appear among $(i_{1}, j_{1}), (i_{2}, j_{2})$ and $(i_{3}, j_{3})$ in their Sudoku Solution Graph (SSG)
(c) $k_{1}, k_{2}$ and $k_{3}$ do not appear in any other cells of $g$ in the SSG

Then, we can eliminate all values besides $k_{1}, k_{2}$ and $k_{3}$ in the SSG of cells $(i_{1}, j_{1}), (i_{2}, j_{2})$ and $(i_{3}, j_{3})$. The reasoning is the same as for the Hidden Double strategy.

Application Enumeration: For each row, column, and region, we sum up $\binom{n}{3}$, where $n$ is the number of empty cells in each group, since a Hidden Triple requires three empty cells in the same group.

- X-Wing: Given a value $k$, an X-Wing occurs if:

(a) $\exists$ two rows, $r_1$ and $r_2$, such that the value $k$ appears in the SSG for exactly two cells each of $r_1$ and $r_2$
(b) $\exists$ distinct columns $c_{1}$ and $c_{2}$ such that, in the Sudoku Solution Graph, $k$ appears in rows $r_{1}$ and $r_{2}$ only in the set $(r_{1} \cap c_{1}) \cup (r_{1} \cap c_{2}) \cup (r_{2} \cap c_{1}) \cup (r_{2} \cap c_{2})$

Then, we can eliminate the value $k$ as a possible value for all cells in $c_{1}$ and $c_{2}$ that are not also in $r_{1}$ and $r_{2}$, since $k$ must occupy one of the two possible cells in each of the rows $r_{1}$ and $r_{2}$. Similarly, the X-Wing strategy can also be applied if we have a value $k$ that is constrained in columns $c_{1}$ and $c_{2}$ to exactly the same two rows.

Application Enumeration: For each value $k$, 1 through 9, we count the number of rows that contain $k$ exactly twice in the SSG of their empty cells, $r_k$.
Since we need two such rows to form an X-Wing for any one number, we take $\binom{r_k}{2}$. We also count the number of columns that contain $k$ exactly twice in the SSG of their cells, $c_k$, and similarly take $\binom{c_k}{2}$. We sum over all values $k$, so this value is $\sum_{k} \binom{r_k}{2} + \binom{c_k}{2}$.

# 3. Tier 3 Strategies

- Naked Quad: A Naked Quad is similar to a Naked Triple; it occurs when four unfilled cells in the same group $g$ contain only elements of a set $K$ of at most four possible values in their SSG. In this case, we can remove all values in $K$ from all other cells in group $g$, since the values in $K$ must belong only to the four unfilled cells.

Application Enumeration: For each row, column, and region, we sum up $\binom{n}{4}$, where $n$ is the number of empty cells in each group, since a Naked Quad requires four empty cells in the same group.

- Hidden Quad: A Hidden Quad is analogous to a Hidden Triple. It occurs when we have four cells $(i_{1}, j_{1}),(i_{2}, j_{2}),(i_{3}, j_{3})$ and $(i_{4}, j_{4})$ in the same group $g$ such that:

(a) $(i_1,j_1),(i_2,j_2),(i_3,j_3)$ and $(i_4,j_4)$ share (among other elements) the elements of a set $K$ of at most four possible values in their SSG
(b) No values of $K$ appear in the SSG of any other cell in $g$.

Then we can eliminate all values that cells $(i_1, j_1), (i_2, j_2), (i_3, j_3)$ and $(i_4, j_4)$ take on other than values in $K$ from their corresponding cells in the Sudoku Solution Graph. The reasoning is analogous to the Hidden Triple strategy.

Application Enumeration: For each row, column, and region, we sum up $\binom{n}{4}$, where $n$ is the number of empty cells in each group, since a Hidden Quad requires four empty cells in the same group.

- Swordfish: The Swordfish Strategy is the three-row analogue of the X-Wing Strategy.
Suppose we have three rows, $r_1, r_2$ and $r_3$, such that the value $k$ has not been assigned to any cell in $r_1, r_2$ or $r_3$. If the cells of $r_1, r_2$ and $r_3$ that have $k$ as a possibility in their corresponding SSG are all in the same three columns $c_1, c_2$ and $c_3$, then no other cells in $c_1, c_2$ and $c_3$ can take on the value $k$, so we may eliminate the value $k$ from the corresponding cells in the SSG. (This strategy can also be applied if we have columns that restrict the occurrence of $k$ to three rows.)

Application Enumeration: For each value $k$, 1 through 9, we count the number of rows that contain $k$ exactly two or three times in the SSG of their empty cells, $r_k$. Since we need three such rows to form a Swordfish for any one number, we take $\binom{r_k}{3}$. We also count the number of columns that contain $k$ two or three times in the SSG of their cells, $c_k$, and similarly take $\binom{c_k}{3}$. We sum over all values $k$, so this value is $\sum_{k} \binom{r_k}{3} + \binom{c_k}{3}$.

# 4. Tier 4 Strategies

- Jellyfish: The Jellyfish Strategy is analogous to the Swordfish and X-Wing strategies. We apply similar reasoning to four rows $r_1, r_2, r_3$ and $r_4$ in which some value $k$ is restricted to the same four columns $c_1, c_2, c_3$ and $c_4$. If the appearance of $k$ in cells of $r_1, r_2, r_3$ and $r_4$ in the Sudoku Solution Graph is restricted to four specific columns, then we can eliminate $k$ from any cells in $c_1, c_2, c_3$ and $c_4$ that are not in one of $r_1, r_2, r_3$ or $r_4$. Like the Swordfish strategy, the Jellyfish strategy may also be applied to columns instead of rows.

Application Enumeration: For each value $k$, 1 through 9, we count the number of rows that contain $k$ exactly two, three or four times in the SSG of their empty cells, $r_k$. Since we need four such rows to form a Jellyfish for any one number $k$, we take $\binom{r_k}{4}$.
We also count the number of columns that contain $k$ two, three or four times in the SSG of their cells, $c_k$, and similarly take $\binom{c_k}{4}$. We sum over all values $k$, so this value is $\sum_{k} \binom{r_k}{4} + \binom{c_k}{4}$.

# 5. Tier 5 Strategies

- Backtracking: Backtracking in the sense that we use it is a limited version of complete search. When cell $(i,j)$ has no assigned value but exactly two possible values $k_{1}$ and $k_{2}$ in its SSG, the solver assigns a test value (say $k_{1}$) to cell $(i,j)$ and continues solving the puzzle using only Tier 0 strategies.

There are three possible results. If the solver arrives at a contradiction, it deduces that $k_{2}$ is in cell $(i,j)$. If the solver completes the puzzle using the test value, this is the unique solution and the puzzle is solved. Otherwise, if the solver cannot proceed further but has not solved the puzzle completely, backtracking has failed and the solver must start a different strategy.

Application Enumeration: Since we apply Backtracking only to cells with exactly two values in their SSG, this is just the number of empty cells that have exactly two values in their SSG.

# References

AoPS Inc. 2007. Combinatorics and sequences. http://www.artofproblemsolving.com/Forum/viewtopic.php?t=88383.
Caine, Allan, and Robin Cohen. 2006. MITS: A Mixed-Initiative Intelligent Tutoring System for Sudoku. Advances in Artificial Intelligence 550-561.
Cox, Kenneth C., Stephen G. Eick, Graham J. Wills, and Ronald J. Brachman. 1997. Brief application description; visual data mining: Recognizing telephone calling fraud. Data Mining and Knowledge Discovery 225-331.
Emery, Michael Ray. 2007. Solving Sudoku puzzles with the COUGAAR agent architecture. Thesis. http://www.cs.montana.edu/techreports/2007/MichaelEmery.pdf.
Eppstein, David. 2005. Nonrepetitive paths and cycles in graphs with application to Sudoku. http://www.citebase.org/abstract?id=oai:arXiv.org:cs/0507053.
Felgenhauer, Bertram, and Frazer Jarvis. 2005. Enumerating possible Sudoku grids. http://www.afjarvis.staff.shef.ac.uk/sudoku/sudoku.pdf.
Goodman, Leo A., and William H. Kruskal. 1954. Measures of association for cross classifications. Journal of the American Statistical Association 49 (December 1954): 732-764.
GraphPad Software. n.d. QuickCalcs: Online calculators for scientists. http://www.graphpad.com/quickcalcs/PValue1.cfm.
Hanssen, Vegard. n.d. Sudoku puzzles. http://www.menneske.no/sudoku/eng/.
Hayes, Brian. 2006. Unwed numbers: The mathematics of Sudoku, a puzzle that boasts "No math required!" American Scientist Online. http://www.americanscientist.org/template/AssetDetail/assetid/48550?print=yes.
Johnson, Angus. n.d. Solving Sudoku. http://www.angusj.com/sudoku/hints.php.
Knuth, Donald Ervin. 2000. Dancing links. In *Millennial Perspectives in Computer Science: Proceedings of the 1999 Oxford-Microsoft Symposium in Honour of Professor Sir Antony Hoare*, edited by Jim Davies, Bill Roscoe, and Jim Woodcock, 187-214. Basingstoke, U.K.: Palgrave Macmillan. http://www-cs-faculty.stanford.edu/~uno/preprints.html.
Lenz, Moritz. n.d. Sudoku Garden. http://sudokugarden.de/en.
Lewis, Rhyd. 2007. Metaheuristics can solve sudoku puzzles. Journal of Heuristics 13 (4): 387-401.
Lynce, Inês, and Joël Ouaknine. 2006. Sudoku as a SAT problem. http://sat.inesc-id.pt/~ines/publications/aimath06.pdf.
Mantere, Timo, and Janne Koljonen. 2006. Solving and rating Sudoku puzzles with genetic algorithms. In Proceedings of the 12th Finnish Artificial Intelligence Conference STeP. http://www.stes.fi/scai2006/proceedings/step2006-86-solving-and-rating-sudoku-puzzles.pdf.
Simonis, Helmut. 2005. Sudoku as a constraint problem. In Modelling and Reformulating Constraint Satisfaction, edited by Brahim Hnich, Patrick Prosser, and Barbara Smith, 13-27. http://homes.ieu.edu.tr/~bhnich/mod-proc.pdf#page=21.
Sudoku. n.d. Times Online.
http://entertainment.timesonline.co.uk/tol/arts_and_enteartainment/games_and_puzzles/sudu/.
Sudoku Solver. n.d. http://www.scanraid.com/sensor.htm.
Sudoku Strategy. n.d. Sudoku Dragon. http://www.sudokudragon.com/sudokustrategy.htm.
Web Sudoku. n.d. http://www.websudoku.com/.
What is Sudoku? n.d. http://www.sudokuaddict.com.
Yato, Takayuki. 2003. Complexity and completeness of finding another solution and its application to puzzles. Thesis, January 2003. http://www-imai.is.s.u-tokyo.ac.jp/~yato/data2/MasterThesis.pdf.

![](images/8ebeab46050fd85b60eb9fed8c3334d441d84d816ee9e62fc53b40a5dc659815.jpg)
Zhou Fan, Christopher Chang, and Yi Sun.

# Taking the Mystery Out of Sudoku Difficulty: An Oracular Model

Sarah Fletcher

Frederick Johnson

David R. Morrison

Harvey Mudd College

Claremont, CA

Advisor: Jon Jacobsen

# Summary

In the last few years, the 9-by-9 puzzle grid known as Sudoku has gone from being a popular Japanese puzzle to a global craze. As its popularity has grown, so has the demand for harder puzzles whose difficulty level has been rated accurately.

We devise a new metric for gauging the difficulty of a Sudoku puzzle. We use an oracle to model the growing variety of techniques prevalent in the Sudoku community. This approach allows our metric to reflect the difficulty of the puzzle itself rather than the difficulty with respect to some particular set of techniques or some perception of the hierarchy of the techniques. Our metric assigns a value in the range $[0, 1]$ to a puzzle.

We also develop an algorithm that generates puzzles with unique solutions across the full range of difficulty. While it does not produce puzzles of a specified difficulty on demand, it produces the various difficulty levels frequently enough that, as long as the desired score range is not too narrow, it is reasonable simply to generate puzzles until one of the desired difficulty is obtained.
Our algorithm has exponential running time, necessitated by the fact that it solves the puzzle it is generating to check for uniqueness. However, we apply an algorithm known as Dancing Links to produce a reasonable runtime in all practical cases.

# Introduction

The exact origins of the Sudoku puzzle are unclear, but the first modern "Sudoku" puzzle appeared under the name "Number Place" in a 1979 puzzle magazine put out by Dell Magazines. Nikoli Puzzles introduced the puzzle to Japan in 1984, giving it the name "Sūji wa dokushin ni kagiru," which was eventually shortened to the current "Sudoku." In 1986, Nikoli added two new constraints to the creation of the puzzle: There should be no more than 30 clues (or givens), and these clues must be arranged symmetrically. With a new name and a more aesthetically pleasing board, the game immediately took off in Japan. In late 2004, Sudoku was introduced to the London Times; and by the summer of 2005, it had infiltrated many major American newspapers and become the latest puzzle craze [Wikipedia 2008b].

Sudopedia is a Website that collects and organizes electronic information on